CSE 2010: Algorithms and Data Structures
Algorithms

Source: my.fit.edu/.../cse2010_notation/02-algorihtms_complexity.pdf


Page 1: CSE 2010: Algorithms and Data Structures

CSE 2010: Algorithms and Data Structures

Algorithms

Page 2: CSE 2010: Algorithms and Data Structures

Worst case vs. Average case

Average case is often as bad as the worst case.

When analyzing algorithms, we will mostly focus on the worst case.
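As an illustration (a small sketch added here, not from the original slides): for linear search, if the target is equally likely to be at any of the n positions, the average number of loop iterations is about n/2, which grows at the same linear rate as the worst case of n iterations. The JavaScript below measures this empirically; the function name countIterations is illustrative only.

    // Minimal sketch: compare the average and worst-case number of loop
    // iterations of linear search on an array of length n.
    var countIterations = function(array, targetValue) {
        var iterations = 0;
        for (var guess = 0; guess < array.length; guess++) {
            iterations++;
            if (array[guess] === targetValue) {
                return iterations;
            }
        }
        return iterations;   // target not found: n iterations
    };

    var n = 1000;
    var array = [];
    for (var i = 0; i < n; i++) { array.push(i); }

    var total = 0;
    for (var t = 0; t < n; t++) {              // target equally likely at each position
        total += countIterations(array, t);
    }
    console.log("average iterations:", total / n);                      // about n/2
    console.log("worst-case iterations:", countIterations(array, -1));  // n (not found)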

Page 3: CSE 2010: Algorithms and Data Structures

Linear search example

(Excerpt from the textbook, Chapter 2 "Getting Started", p. 22.)

[...] a pointer to the array is passed, rather than the entire array, and changes to individual array elements are visible to the calling procedure.

• A return statement immediately transfers control back to the point of call in the calling procedure. Most return statements also take a value to pass back to the caller. Our pseudocode differs from many programming languages in that we allow multiple values to be returned in a single return statement.

• The boolean operators "and" and "or" are short circuiting. That is, when we evaluate the expression "x and y" we first evaluate x. If x evaluates to FALSE, then the entire expression cannot evaluate to TRUE, and so we do not evaluate y. If, on the other hand, x evaluates to TRUE, we must evaluate y to determine the value of the entire expression. Similarly, in the expression "x or y" we evaluate the expression y only if x evaluates to FALSE. Short-circuiting operators allow us to write boolean expressions such as "x ≠ NIL and x.f = y" without worrying about what happens when we try to evaluate x.f when x is NIL. (A short example in code follows this list.)

• The keyword error indicates that an error occurred because conditions were wrong for the procedure to have been called. The calling procedure is responsible for handling the error, and so we do not specify what action to take.
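A minimal JavaScript illustration of the short-circuiting behaviour described in the list above; the names x, f, and y are just stand-ins for the book's example:

    // Short-circuit evaluation: the right-hand operand is only evaluated when needed.
    var x = null;    // x plays the role of NIL
    var y = 42;

    // Without short-circuiting, x.f would throw a TypeError when x is null.
    if (x !== null && x.f === y) {
        console.log("x.f matches y");
    } else {
        console.log("x is NIL or x.f does not match y");   // this branch runs
    }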

Exercises

2.1-1  Using Figure 2.2 as a model, illustrate the operation of INSERTION-SORT on the array A = ⟨31, 41, 59, 26, 41, 58⟩.

2.1-2  Rewrite the INSERTION-SORT procedure to sort into nonincreasing instead of nondecreasing order.

2.1-3  Consider the searching problem:
Input: A sequence of n numbers A = ⟨a₁, a₂, ..., aₙ⟩ and a value v.
Output: An index i such that v = A[i] or the special value NIL if v does not appear in A.
Write pseudocode for linear search, which scans through the sequence, looking for v. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.

2.1-4  Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B. The sum of the two integers should be stored in binary form in [...]

Page 4: CSE 2010: Algorithms and Data Structures

Linear/sequential search

Page 5: CSE 2010: Algorithms and Data Structures

Linear search

var doLinearSearch = function(array, targetValue) {
    for (var guess = 0; guess < array.length; guess++) {
        if (array[guess] === targetValue) {
            return guess;   // found it!
        }
    }
    return -1;              // didn't find it
};

Worst case: loop runs n times where n is the length of the input array.

Best case: loop runs 1 time.
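For example, assuming the two-argument doLinearSearch(array, targetValue) shown above:

    var numbers = [7, 3, 9, 1, 5];

    console.log(doLinearSearch(numbers, 7));  // 0: best case, found on the 1st iteration
    console.log(doLinearSearch(numbers, 5));  // 4: found on the last (n-th) iteration
    console.log(doLinearSearch(numbers, 8));  // -1: worst case, all n elements checked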

https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-big-theta-notation

Page 6: CSE 2010: Algorithms and Data Structures

Linear search

(The doLinearSearch code from the previous slide is shown again.)

• Each iteration of the loop performs a fixed number of instructions.
• Each of these instructions runs in a fixed amount of time, i.e., constant time.
• Extra instruction: return -1 also runs in constant time.

https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-big-theta-notation

Page 7: CSE 2010: Algorithms and Data Structures

Linear search

(The doLinearSearch code from the previous slide is shown again.)

• Running time of a program as a function of the size of its input.

https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-big-theta-notation

T(n) = c₁n + c₂    (1)

Algorithm Running-Time Analysis
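One way to see where equation (1) comes from (an illustrative sketch added here, not part of the original slides): instrument the loop with a counter that charges a fixed cost per iteration plus a fixed cost for the work outside the loop. The function name linearSearchOps is illustrative only.

    // Count "basic operations" of linear search: a fixed cost per loop
    // iteration (the c1 part) plus a fixed cost outside the loop (the c2
    // part), so the total is T(n) = c1*n + c2.
    var linearSearchOps = function(array, targetValue) {
        var ops = 0;
        for (var guess = 0; guess < array.length; guess++) {
            ops += 1;               // c1: fixed work per iteration
            if (array[guess] === targetValue) {
                return ops + 1;     // c2: fixed work outside the loop
            }
        }
        return ops + 1;             // c2: fixed work outside the loop
    };

    var sizes = [10, 100, 1000];
    for (var s = 0; s < sizes.length; s++) {
        var n = sizes[s];
        var array = [];
        for (var i = 0; i < n; i++) { array.push(i); }
        // Worst case: the target is absent, so the loop runs n times.
        console.log(n, linearSearchOps(array, -1));   // 11, 101, 1001: grows linearly in n
    }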

Page 8: CSE 2010: Algorithms and Data Structures

Search in a matrix

for i = 0 to N
    for j = 0 to N
        print matrix[i, j]
print "Completed."

• Running time of a program as a function of the size of its input.

https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-big-theta-notation

T(n) = c₁n + c₂    (1)
T(n) = c₁n² + c₂    (2)

Algorithm Running-Time Analysis
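A small counting sketch for the N-by-N traversal above (added here for illustration, not from the slides), showing why T(n) = c₁n² + c₂ in equation (2): the loop body runs once for each of the n² index pairs, plus a constant amount of work at the end. The function name matrixPrintOps is illustrative only.

    // Count operations of the doubly nested loop: one unit per (i, j) pair,
    // plus one final unit, so T(n) = c1*n^2 + c2 with c1 = c2 = 1 here.
    var matrixPrintOps = function(n) {
        var ops = 0;
        for (var i = 0; i < n; i++) {
            for (var j = 0; j < n; j++) {
                ops += 1;          // stands in for: print matrix[i][j]
            }
        }
        ops += 1;                  // stands in for: print "Completed."
        return ops;
    };

    console.log(matrixPrintOps(10));    // 101
    console.log(matrixPrintOps(100));   // 10001 (about 100x more work for 10x the input)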

Page 9: CSE 2010: Algorithms and Data Structures

(Content of the previous slide repeated: the matrix-traversal pseudocode, equations (1)-(2), and the source link.)

Asymptotic notations

Page 10: CSE 2010: Algorithms and Data Structures

Rate of growth: Example functions that often appear in algorithm analysis

Seven Important Functions (slide from Goodrich & Tamassia, © 2004)

Seven functions that often appear in algorithm analysis:
  • Constant ≈ 1
  • Logarithmic ≈ log n
  • Linear ≈ n
  • N-Log-N ≈ n log n
  • Quadratic ≈ n²
  • Cubic ≈ n³
  • Exponential ≈ 2ⁿ

[Plot: time T(n) versus input size n for T(n) = 1, T(n) = log₂ n, T(n) = n, and T(n) = n².]
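To make these growth rates concrete, here is a small sketch (added for illustration, not from the slides) that prints each of the seven functions for a few input sizes; base-2 logarithms are computed with Math.log since the slide's plot uses log₂ n.

    // Print the seven functions for a few input sizes to compare their growth.
    var log2 = function(n) { return Math.log(n) / Math.LN2; };

    var sizes = [2, 8, 32, 128];
    for (var s = 0; s < sizes.length; s++) {
        var n = sizes[s];
        console.log({
            n: n,
            constant: 1,
            logarithmic: log2(n),
            linear: n,
            nLogN: n * log2(n),
            quadratic: n * n,
            cubic: n * n * n,
            exponential: Math.pow(2, n)
        });
    }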

Page 11: CSE 2010: Algorithms and Data Structures

Asymptotic notations

Algorithm Running-Time Analysis:

T(n) = c₁n + c₂    (1)
T(n) = c₁n² + c₂    (2)
1 < log n < n < n log n < n² < n³ < ... < 2ⁿ < 3ⁿ < ... < nⁿ    (3)

Page 12: CSE 2010: Algorithms and Data Structures

(Repeats the previous slide: equations (1)-(3).)

Page 13: CSE 2010: Algorithms and Data Structures

(Equations (1)-(3) repeated.)

upper bound

Page 14: CSE 2010: Algorithms and Data Structures

(Equations (1)-(3) repeated.)

upper bound, lower bound, tight bound

Page 15: CSE 2010: Algorithms and Data Structures

(Equations (1)-(3) repeated.)

We can then say that:

T(n) = O(n²)    (4)
T(n) = O(n³)    (5)
T(n) = O(2ⁿ)    (6)
T(n) = O(nⁿ)    (7)

https://youtu.be/ddsP7NecEBk

upper bound, lower bound, tight bound
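As a worked check of (4), added here for illustration and assuming the positive constants c₁, c₂ from equation (2): for all n ≥ 1 we have T(n) = c₁n² + c₂ ≤ c₁n² + c₂n² = (c₁ + c₂)n², so the Big-O definition is satisfied with c = c₁ + c₂ and n₀ = 1, giving T(n) = O(n²). Equations (5)-(7) then follow because n² ≤ n³ ≤ 2ⁿ ≤ nⁿ for all sufficiently large n.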

Page 16: CSE 2010: Algorithms and Data Structures

(Equations (1)-(7) repeated from the previous slides.)

T(n) = Ω(n²)    (13)
T(n) = Ω(n)    (14)
T(n) = Ω(log n)    (15)
T(n) = Ω(1)    (16)

https://youtu.be/ddsP7NecEBk

upper bound, lower bound, tight bound

Page 17: CSE 2010: Algorithms and Data Structures

(Equations (1)-(7) and (13)-(16) repeated from the previous slides, now shown together: T(n) = O(n²) and T(n) = Ω(n²).)

Page 18: CSE 2010: Algorithms and Data Structures

(Equations (1)-(16) repeated from the previous slides.)

T(n) = Θ(n²)    (17)

Algorithm Running-Time Analysis
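A worked sketch of how (17) follows (added here for illustration, using the definitions given in the excerpts on the following slides): the upper-bound argument gives T(n) = c₁n² + c₂ ≤ (c₁ + c₂)n² for all n ≥ 1, so T(n) = O(n²); and T(n) = c₁n² + c₂ ≥ c₁n² for all n ≥ 1, so T(n) = Ω(n²). A function that is both O(n²) and Ω(n²) is Θ(n²) (this is Theorem 3.1 in the excerpt below).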

Page 19: CSE 2010: Algorithms and Data Structures

Asymptotic Notation

(Excerpt from the textbook, Section 3.1 "Asymptotic notation", p. 45.)

[Figure 3.1: three panels, (a) f(n) = Θ(g(n)), (b) f(n) = O(g(n)), (c) f(n) = Ω(g(n)), each plotting f(n) against cg(n) (or c₁g(n) and c₂g(n)) for n ≥ n₀.]

Figure 3.1  Graphic examples of the Θ, O, and Ω notations. In each part, the value of n₀ shown is the minimum possible value; any greater value would also work. (a) Θ-notation bounds a function to within constant factors. We write f(n) = Θ(g(n)) if there exist positive constants n₀, c₁, and c₂ such that at and to the right of n₀, the value of f(n) always lies between c₁g(n) and c₂g(n) inclusive. (b) O-notation gives an upper bound for a function to within a constant factor. We write f(n) = O(g(n)) if there are positive constants n₀ and c such that at and to the right of n₀, the value of f(n) always lies on or below cg(n). (c) Ω-notation gives a lower bound for a function to within a constant factor. We write f(n) = Ω(g(n)) if there are positive constants n₀ and c such that at and to the right of n₀, the value of f(n) always lies on or above cg(n).

A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c₁ and c₂ such that it can be "sandwiched" between c₁g(n) and c₂g(n), for sufficiently large n. Because Θ(g(n)) is a set, we could write "f(n) ∈ Θ(g(n))" to indicate that f(n) is a member of Θ(g(n)). Instead, we will usually write "f(n) = Θ(g(n))" to express the same notion. You might be confused because we abuse equality in this way, but we shall see later in this section that doing so has its advantages.

Figure 3.1(a) gives an intuitive picture of functions f(n) and g(n), where f(n) = Θ(g(n)). For all values of n at and to the right of n₀, the value of f(n) lies at or above c₁g(n) and at or below c₂g(n). In other words, for all n ≥ n₀, the function f(n) is equal to g(n) to within a constant factor. We say that g(n) is an asymptotically tight bound for f(n).

The definition of Θ(g(n)) requires that every member f(n) ∈ Θ(g(n)) be asymptotically nonnegative, that is, that f(n) be nonnegative whenever n is sufficiently large. (An asymptotically positive function is one that is positive for all sufficiently large n.) Consequently, the function g(n) itself must be asymptotically nonnegative, or else the set Θ(g(n)) is empty. We shall therefore assume that every function used within Θ-notation is asymptotically nonnegative. This assumption holds for the other asymptotic notations defined in this chapter as well.
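As a concrete instance of this definition (a worked example added here for illustration, not part of the excerpt): take f(n) = (1/2)n² − 3n and g(n) = n². We need positive constants c₁, c₂, n₀ with c₁n² ≤ (1/2)n² − 3n ≤ c₂n² for all n ≥ n₀. Dividing through by n² gives c₁ ≤ 1/2 − 3/n ≤ c₂, which holds, for example, with c₂ = 1/2, c₁ = 1/14, and n₀ = 7, since 1/2 − 3/n ≥ 1/14 exactly when n ≥ 7. Hence (1/2)n² − 3n = Θ(n²).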

Page 20: CSE 2010: Algorithms and Data Structures

(Figure 3.1 and the Θ-notation discussion are repeated from the previous slide.)

(Textbook excerpt continued, Section 3.1, p. 47.)

[...] expression does not indicate what variable is tending to infinity.² We shall often use the notation Θ(1) to mean either a constant or a constant function with respect to some variable.

O-notation

The Θ-notation asymptotically bounds a function from above and below. When we have only an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n" or sometimes just "oh of g of n") the set of functions

O(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n₀ }.

We use O-notation to give an upper bound on a function, to within a constant factor. Figure 3.1(b) shows the intuition behind O-notation. For all values n at and to the right of n₀, the value of the function f(n) is on or below cg(n).

We write f(n) = O(g(n)) to indicate that a function f(n) is a member of the set O(g(n)). Note that f(n) = Θ(g(n)) implies f(n) = O(g(n)), since Θ-notation is a stronger notion than O-notation. Written set-theoretically, we have Θ(g(n)) ⊆ O(g(n)). Thus, our proof that any quadratic function an² + bn + c, where a > 0, is in Θ(n²) also shows that any such quadratic function is in O(n²). What may be more surprising is that when a > 0, any linear function an + b is in O(n²), which is easily verified by taking c = a + |b| and n₀ = max(1, −b/a).

If you have seen O-notation before, you might find it strange that we should write, for example, n = O(n²). In the literature, we sometimes find O-notation informally describing asymptotically tight bounds, that is, what we have defined using Θ-notation. In this book, however, when we write f(n) = O(g(n)), we are merely claiming that some constant multiple of g(n) is an asymptotic upper bound on f(n), with no claim about how tight an upper bound it is. Distinguishing asymptotic upper bounds from asymptotically tight bounds is standard in the algorithms literature.

Using O-notation, we can often describe the running time of an algorithm merely by inspecting the algorithm's overall structure. For example, the doubly nested loop structure of the insertion sort algorithm from Chapter 2 immediately yields an O(n²) upper bound on the worst-case running time: the cost of each iteration of the inner loop is bounded from above by O(1) (constant), the indices i [...]

² The real problem is that our ordinary notation for functions does not distinguish functions from values. In λ-calculus, the parameters to a function are clearly specified: the function n² could be written as λn.n², or even λr.r². Adopting a more rigorous notation, however, would complicate algebraic manipulations, and so we choose to tolerate the abuse.
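The O(g(n)) definition above can also be explored numerically (a minimal JavaScript sketch added here; checking finitely many values of n illustrates, but does not prove, a bound). For instance, the passage's claim that any linear function an + b with a > 0 is O(n²) can be spot-checked with a = 3, b = 10, c = a + |b| = 13, and n₀ = 1. The function name boundedAboveOnRange is illustrative only.

    // Check whether 0 <= f(n) <= c*g(n) for every integer n in [n0, nMax].
    var boundedAboveOnRange = function(f, g, c, n0, nMax) {
        for (var n = n0; n <= nMax; n++) {
            if (!(0 <= f(n) && f(n) <= c * g(n))) {
                return false;
            }
        }
        return true;
    };

    var f = function(n) { return 3 * n + 10; };   // a linear function an + b
    var g = function(n) { return n * n; };        // g(n) = n^2

    // With c = 13 and n0 = 1, the inequality holds (checked here up to 10000).
    console.log(boundedAboveOnRange(f, g, 13, 1, 10000));   // true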

Page 21: CSE 2010: Algorithms and Data Structures

(Duplicate of the previous slide.)

Page 22: CSE 2010: Algorithms and Data Structures

(Figure 3.1 and the Θ-notation discussion are repeated from Page 19.)

(Textbook excerpt continued, Chapter 3 "Growth of Functions", p. 48.)

[...] and j are both at most n, and the inner loop is executed at most once for each of the n² pairs of values for i and j.

Since O-notation describes an upper bound, when we use it to bound the worst-case running time of an algorithm, we have a bound on the running time of the algorithm on every input, which is the blanket statement we discussed earlier. Thus, the O(n²) bound on the worst-case running time of insertion sort also applies to its running time on every input. The Θ(n²) bound on the worst-case running time of insertion sort, however, does not imply a Θ(n²) bound on the running time of insertion sort on every input. For example, we saw in Chapter 2 that when the input is already sorted, insertion sort runs in Θ(n) time.

Technically, it is an abuse to say that the running time of insertion sort is O(n²), since for a given n, the actual running time varies, depending on the particular input of size n. When we say "the running time is O(n²)," we mean that there is a function f(n) that is O(n²) such that for any value of n, no matter what particular input of size n is chosen, the running time on that input is bounded from above by the value f(n). Equivalently, we mean that the worst-case running time is O(n²).

Ω-notation

Just as O-notation provides an asymptotic upper bound on a function, Ω-notation provides an asymptotic lower bound. For a given function g(n), we denote by Ω(g(n)) (pronounced "big-omega of g of n" or sometimes just "omega of g of n") the set of functions

Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n₀ }.

Figure 3.1(c) shows the intuition behind Ω-notation. For all values n at or to the right of n₀, the value of f(n) is on or above cg(n).

From the definitions of the asymptotic notations we have seen thus far, it is easy to prove the following important theorem (see Exercise 3.1-5).

Theorem 3.1. For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

As an example of the application of this theorem, our proof that an² + bn + c = Θ(n²) for any constants a, b, and c, where a > 0, immediately implies that an² + bn + c = Ω(n²) and an² + bn + c = O(n²). In practice, rather than using Theorem 3.1 to obtain asymptotic upper and lower bounds from asymptotically tight bounds, as we did for this example, we usually use it to prove asymptotically tight bounds from asymptotic upper and lower bounds.

Page 23: CSE 2010: Algorithms and Data Structures

(Figure 3.1 and the Θ-notation discussion are repeated from Page 19.)

(Textbook excerpt, Chapter 3 "Growth of Functions", p. 44.)

[...] in a variety of ways. For example, we might extend the notation to the domain of real numbers or, alternatively, restrict it to a subset of the natural numbers. We should make sure, however, to understand the precise meaning of the notation so that when we abuse, we do not misuse it. This section defines the basic asymptotic notations and also introduces some common abuses.

Asymptotic notation, functions, and running times

We will use asymptotic notation primarily to describe the running times of algorithms, as when we wrote that insertion sort's worst-case running time is Θ(n²). Asymptotic notation actually applies to functions, however. Recall that we characterized insertion sort's worst-case running time as an² + bn + c, for some constants a, b, and c. By writing that insertion sort's running time is Θ(n²), we abstracted away some details of this function. Because asymptotic notation applies to functions, what we were writing as Θ(n²) was the function an² + bn + c, which in that case happened to characterize the worst-case running time of insertion sort.

In this book, the functions to which we apply asymptotic notation will usually characterize the running times of algorithms. But asymptotic notation can apply to functions that characterize some other aspect of algorithms (the amount of space they use, for example), or even to functions that have nothing whatsoever to do with algorithms.

Even when we use asymptotic notation to apply to the running time of an algorithm, we need to understand which running time we mean. Sometimes we are interested in the worst-case running time. Often, however, we wish to characterize the running time no matter what the input. In other words, we often wish to make a blanket statement that covers all inputs, not just the worst case. We shall see asymptotic notations that are well suited to characterizing running times no matter what the input.

Θ-notation

In Chapter 2, we found that the worst-case running time of insertion sort is T(n) = Θ(n²). Let us define what this notation means. For a given function g(n), we denote by Θ(g(n)) the set of functions

Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂, and n₀ such that 0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀ }.¹

¹ Within set notation, a colon means "such that."

Page 24: CSE 2010: Algorithms and Data Structures

(Duplicate of Page 22.)

Page 25: CSE 2010: Algorithms and Data Structures

for i = 0 to M
    for j = 0 to N
        if a == b
            count = count + 1
print "Completed."

• Running time of a program as a function of the size of its input.

https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-big-theta-notation

Asymptotic notations

O(M · N)
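A counting sketch for the loop above (added for illustration; a and b stand for whatever values the slide compares, and the function name nestedCountOps is illustrative only): the body runs once for each of the M·N index pairs, plus one final print, so the total work grows as O(M · N).

    // Count the comparisons made by the doubly nested loop: one per (i, j)
    // pair, i.e., M * N comparisons, plus one final print.
    var nestedCountOps = function(M, N) {
        var ops = 0;
        for (var i = 0; i < M; i++) {
            for (var j = 0; j < N; j++) {
                ops += 1;          // stands in for: if a == b then count = count + 1
            }
        }
        ops += 1;                  // stands in for: print "Completed."
        return ops;
    };

    console.log(nestedCountOps(10, 20));    // 201
    console.log(nestedCountOps(100, 200));  // 20001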

Page 26: CSE 2010: Algorithms and Data Structures

Confusing worst case with upper bound

• Upper bound refers to a growth rate.

• Worst case refers to the worst input from among the choices for possible inputs of a given size.

http://web.mit.edu/16.070/www/lecture/lecture_5_2.pdf