EECS 598: Background in Theory of Computation
Igor L. Markov and John P. Hayes
http://vlsicad.eecs.umich.edu/Quantum/EECS598




Page 1: EECS 598:  Background in Theory of Computation

EECS 598: Background in Theory of Computation

Igor L. Markov and John P. Hayes
http://vlsicad.eecs.umich.edu/Quantum/EECS598

Page 2: EECS 598:  Background in Theory of Computation

Outline

On computational models …
More on terminology and notation
Representing states in a computer
Tensor products (again)
Postulates of Q.M. specified for Q.C.
Handling the probabilistic nature of quantum computation
Entanglement

Page 3: EECS 598:  Background in Theory of Computation

Parameters of algorithms…

Execution time
– Wall-clock time
– Asymptotic time

Memory (size)
Ease of programming
Worst, best, average, amortized, typical
Multiple objectives -> often no single winner
– Cost trade-offs

Page 4: EECS 598:  Background in Theory of Computation

Principle of Invariance

Challenges for analysis of [time] complexity
– Different hardware executes a different number of instructions per second
– Need a uniform measure of time complexity

Solution: measure constant-time steps
– Will be off by at most a constant factor

Principle of invariance
– Two implementations of an algorithm (when executed on two actual computers) will not differ in time complexity by more than a constant factor
– This is not a theorem! (unlike, e.g., mathematical induction)
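As a rough illustration of the "off by at most a constant factor" claim, consider two step-counted implementations of the same algorithm. The step charges below are hypothetical bookkeeping, not from the slides; the point is only that the counts differ by a fixed multiplicative constant.

```python
# Hypothetical sketch: two implementations of "sum n numbers", each
# counting its own "constant-time steps". The counts differ only by a
# constant factor, as the principle of invariance suggests.

def sum_v1(xs):
    steps = 0
    total = 0
    for x in xs:
        total += x
        steps += 1          # charge one step per element
    return total, steps

def sum_v2(xs):
    steps = 0
    total = 0
    i = 0
    while i < len(xs):      # this version does more bookkeeping per element
        total += xs[i]
        i += 1
        steps += 3          # charge: compare, index, increment
    return total, steps

data = list(range(1000))
t1, s1 = sum_v1(data)
t2, s2 = sum_v2(data)
assert t1 == t2             # same answer
print(s2 / s1)              # → 3.0, a constant, however long the input
```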

Page 5: EECS 598:  Background in Theory of Computation

Comparing algorithms

POI: a similar situation with memory
POI gives hope to compare algorithms, but does not provide a good mechanism

What we want:
– Equivalence relations for algorithms (“as fast”)
– Order relations (“as fast or faster”)

Possible ideas:
– Count elementary operations
– Count elementary units of memory
– Are all operations counted alike?

Page 6: EECS 598:  Background in Theory of Computation

Elementary Operations

What are elementary operations?
– Operations whose execution time is bounded by a constant that does not depend on input values
– Actual “seconds per operation” may be disregarded
– But the very selection of elementary operations is still hardware-dependent!

What about +, -, *, /, <, >, =, ==, etc. on integers (or doubles)?
– Yes, if integers have a bounded number of bits, e.g., 32 (or 64)
– No, if the number of bits is not bounded

What about function calls?
What about memory accesses, e.g., a[i]?
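The bounded-bits caveat is easy to see in a language with arbitrary-precision integers. Python is one such language (an illustrative choice, not from the slides): its `+` and `*` are not elementary operations by the criterion above, because operand size, and hence cost, can grow without bound.

```python
# Python integers are not limited to 32 or 64 bits, so their arithmetic
# is NOT an elementary operation in the sense above: the work grows with
# the number of bits in the operands.

a = 2 ** 31
b = 2 ** 10_000
print(a.bit_length())   # → 32      (fits in a machine word)
print(b.bit_length())   # → 10001   (does not)

# Squaring roughly doubles the bit count, so each successive
# multiplication below is strictly more expensive than the last:
x = 3
for _ in range(5):
    x = x * x           # after 5 squarings, x = 3**32
print(x.bit_length())   # → 51
```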

Page 7: EECS 598:  Background in Theory of Computation

Computational Models

A computational model is determined by
– data representation and storage mechanisms
– available elementary operations

Most popular examples
– Sequential and combinational Boolean circuits – EECS 270, 478
– [Non-]deterministic finite automata (DFAs/NFAs) – EECS 376(476), 478
– Push-down automata – EECS 376(476)
– Turing machines – EECS 376(476)
– C programs, C++ programs – EECS 280, 281(380), 477

Computational models originate in technologies, physics, biology, etc.
– Parallel and distributed computing
– Optical and DNA computing
– Analog computing
– Quantum computing
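One of the models listed above fits in a few lines: a deterministic finite automaton is just a transition table plus a current state. The machine below is an illustrative example (accepting binary strings with an even number of 1s), not one from the course.

```python
# A minimal DFA simulator: the model is fully determined by its data
# representation (states, transition table) and one elementary operation
# (a table lookup per input symbol).

def run_dfa(transitions, start, accepting, word):
    """transitions: dict mapping (state, symbol) -> next state."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

# Example machine: accept binary strings with an even number of 1s.
even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}
print(run_dfa(even_ones, "even", {"even"}, "1101"))  # → False (three 1s)
print(run_dfa(even_ones, "even", {"even"}, "1100"))  # → True  (two 1s)
```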

Page 8: EECS 598:  Background in Theory of Computation

Measures of Algorithm Efficiency Based on Allowed Inputs

Each set of input data gives a new measure of efficiency!
– Best cases
– Worst cases
– Representative / typical inputs (“benchmarks”): application-specific and domain-specific

Averaged measures (give “expected efficiency”)
– Consider different inputs in independent experiments
– Average over a particular distribution of inputs: some inputs may happen more frequently than others; application-specific
– Formal “average case”: averaged over the uniform distribution of all allowed inputs
– Empirical evaluation by sampling: generate random samples of the target distribution; much faster than enumeration (theorem: results are very close)
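Empirical evaluation by sampling can be sketched concretely. For linear search with a uniformly random target, the formal average case is (n+1)/2 comparisons (a standard result, not from the slides); a modest random sample already lands close to it, with no enumeration of all n inputs.

```python
import random

# Worst-case vs. sampled average-case cost of linear search.

def search_cost(xs, target):
    """Number of comparisons linear search makes before finding target."""
    for i, x in enumerate(xs):
        if x == target:
            return i + 1
    return len(xs)

n = 1000
xs = list(range(n))
worst = search_cost(xs, n - 1)   # target in the last position → n steps

random.seed(0)                   # fixed seed for reproducibility
samples = [search_cost(xs, random.randrange(n)) for _ in range(20_000)]
avg = sum(samples) / len(samples)
print(worst)                     # → 1000
print(round(avg))                # close to (n + 1) / 2 = 500.5
```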

Page 9: EECS 598:  Background in Theory of Computation

Asymptotic Complexity

Recall that resource consumption is typically described by positive monotonic functions

Asymptotic complexity measures (“on the order of”)
– Main idea: given f(n) and g(n), does the ratio f(n)/g(n) grow with n or stay bounded? Only pay attention to the limit behavior as n → ∞
– If f(n)/g(n) ≤ const as n → ∞, then f(n) is at least as good as g(n)
– If f(n) is at least as good as g(n) and g(n) is at least as good as f(n), then the two functions are equivalent
– Coarser than counts of elementary “things” (larger equivalence classes): e.g., “200 N steps” and “3 N + log(N) steps” are now equivalent
– Much greater hardware independence
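The ratio test on the slide can be checked numerically. Below, the slide's own example pair, 200N and 3N + log(N), has a ratio that settles at the constant 200/3, while a genuinely faster-growing function does not.

```python
import math

# "200 N steps" and "3 N + log(N) steps" are asymptotically equivalent:
# their ratio approaches a constant as n grows.

f = lambda n: 200 * n
g = lambda n: 3 * n + math.log(n)

for n in (10, 10**3, 10**6, 10**9):
    print(n, f(n) / g(n))       # ratios approach 200/3 ≈ 66.67

# Contrast: n^2 against n. The ratio keeps growing, so n^2 is NOT
# "at least as good as" n in the sense above.
h = lambda n: n * n
print(h(10**6) / 10**6)         # → 1000000.0
```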

Page 10: EECS 598:  Background in Theory of Computation

Complexity of Problems

In a given computational model, for a given problem, consider the best possible algorithms
– In fact, a whole equivalence class of algorithms in terms of asymptotic worst-case complexity

Call that the complexity of the problem
Can often consider the same problem in multiple computational models
– Will they have the same complexity?

Page 11: EECS 598:  Background in Theory of Computation

Problem reductions

Reduction of problem X to problem Y
– Every instance of problem X is translated into an instance of problem Y
– Can apply any algorithm to solve Y
– Every solution to a translated instance is translated back

Complexity of problem X is no more than the sum of
– Complexity of translating instances
– Complexity of Y
– Complexity of translating solutions back
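A toy example of this pattern (my choice, not from the slides): problem X, "does this list contain a duplicate?", reduces to problem Y, sorting. The instance translation is trivial, Y is solved by any sorting algorithm, and X's answer is read back from Y's output; the cost bound is O(n) + cost(Y) + O(n).

```python
# Reduce X ("is there a duplicate?") to Y (sorting).

def solve_Y(xs):
    """Problem Y: any sorting algorithm will do."""
    return sorted(xs)

def solve_X_via_Y(xs):
    ys = solve_Y(xs)                 # translate the instance, solve Y
    # translate the solution back: duplicates in a sorted list are adjacent
    return any(a == b for a, b in zip(ys, ys[1:]))

print(solve_X_via_Y([3, 1, 4, 1, 5]))  # → True
print(solve_X_via_Y([3, 1, 4, 5]))     # → False
```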

Page 12: EECS 598:  Background in Theory of Computation

Simulating Computational Models

Write a C program that simulates a Turing machine
Write a C interpreter as a program for a Turing machine
The two computational models are equivalent
– Need to be careful about bit-sizes of basic types

Modern Church-Turing thesis
– Whatever you can do in terms of computation can be done on a Turing machine “as efficiently”
– “As efficiently” allows poly-time reductions (next slide)

How would you prove/disprove this?
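The first direction of the equivalence, simulating a Turing machine in a conventional programming language, is short enough to sketch (here in Python rather than C; the example machine, which flips every bit of its input, is illustrative).

```python
# A minimal Turing machine simulator: a sparse tape, a head position,
# a current state, and a transition table.

def run_tm(rules, word, state="q0", halt="halt", blank="_"):
    """rules: (state, symbol) -> (new_state, symbol_to_write, head_move)."""
    tape = dict(enumerate(word))         # everything else reads as blank
    pos = 0
    while state != halt:
        symbol = tape.get(pos, blank)
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

flip = {
    ("q0", "0"): ("q0", "1", +1),        # flip 0 -> 1, move right
    ("q0", "1"): ("q0", "0", +1),        # flip 1 -> 0, move right
    ("q0", "_"): ("halt", "_", 0),       # ran past the input: halt
}
print(run_tm(flip, "0110"))  # → 1001
```

The other direction, interpreting C on a Turing machine, is where the bit-size caveat bites: the simulation stays polynomial only if the basic types are bounded.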

Page 13: EECS 598:  Background in Theory of Computation

NP-complete problems

P is the class of problems whose solutions can be found in polynomial time
NP is the class of problems whose solutions can be verified in polynomial time

Polynomial-time reductions
– Translations take poly-time

NP-complete problems are those to which every other problem in NP reduces in poly-time
NP = P? is an open question

Orphan problems: those in NP \ P that are not NP-complete, assuming NP ≠ P
– Number factoring
– Graph automorphism
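"Verified in polynomial time" has a concrete shape: a short certificate plus a fast checker. A standard illustration (my choice of problem, not the slides') is graph 3-coloring: finding a coloring is NP-complete, but checking a claimed one takes time linear in the number of edges.

```python
# A poly-time verifier for graph 3-coloring: the certificate is the
# coloring itself, and checking it is a single pass over the edges.

def verify_coloring(edges, coloring, k=3):
    """Accept iff coloring uses colors 0..k-1 and no edge is monochromatic."""
    return (all(c in range(k) for c in coloring.values())
            and all(coloring[u] != coloring[v] for u, v in edges))

triangle = [(0, 1), (1, 2), (0, 2)]
print(verify_coloring(triangle, {0: 0, 1: 1, 2: 2}))  # → True
print(verify_coloring(triangle, {0: 0, 1: 1, 2: 1}))  # → False
```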

Page 14: EECS 598:  Background in Theory of Computation

So, how could you disprove the modern Church-Turing thesis?

Find a problem that has different worst-case asymptotic complexity
– in terms of Turing machines
– in terms of, say, quantum computers

Need to define a computational model first!

Back to Quantum Mechanics
– Measurements are not deterministic
– So, need to include randomized algorithms
– BPP (answers can be correct with probability ½ + ε)

More complexity classes:
– PSPACE
– Decidable problems
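Why is "correct with probability ½ + ε" good enough for BPP? Because repetition with a majority vote drives the error down exponentially. The noisy oracle below is a stand-in I introduce for illustration, not anything from the slides.

```python
import random

# Probability amplification: a procedure that is right with probability
# 1/2 + eps, repeated with a majority vote, is right almost always.

def noisy_answer(truth, eps=0.1):
    """Stand-in for a BPP algorithm: correct with probability 1/2 + eps."""
    return truth if random.random() < 0.5 + eps else (not truth)

def amplified(truth, trials=501):
    """Majority vote over many independent runs."""
    votes = sum(noisy_answer(truth) for _ in range(trials))
    return votes > trials / 2

random.seed(1)                       # fixed seed for reproducibility
runs = [amplified(True) for _ in range(200)]
print(sum(runs) / len(runs))         # very close to 1.0
```

By a Chernoff-style bound, the majority's error probability shrinks exponentially in the number of trials, which is why the ε in the definition can be any fixed constant.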

Page 15: EECS 598:  Background in Theory of Computation

Back to Quantum Mechanics

Details of the computational model should follow from the postulates of quantum mechanics

Postulate 1: State space (complex, Hilbert)
– Will need only finite-dimensional vector spaces

Postulate 2: Evolutions = unitaries
– Think of matrices

Postulate 3: Quantum measurement
– Two forms: Hermitian observables and orthonormal decompositions

Postulate 4: Composite systems
– Tensor products
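For a single qubit, the first three postulates fit in a few lines of plain arithmetic: a state is a complex amplitude vector, an evolution is a unitary matrix, and measurement probabilities in the computational basis are squared magnitudes of amplitudes. The example gate is the Hadamard.

```python
import math

def apply(U, state):
    """Postulate 2: evolve a state vector by a (unitary) matrix."""
    return [sum(U[i][j] * state[j] for j in range(len(state)))
            for i in range(len(U))]

def probs(state):
    """Postulate 3: Born-rule probabilities in the computational basis."""
    return [abs(a) ** 2 for a in state]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]          # Hadamard gate, a 2x2 unitary

zero = [1 + 0j, 0 + 0j]        # |0>
plus = apply(H, zero)          # (|0> + |1>) / sqrt(2)
print(probs(plus))             # ≈ [0.5, 0.5]: measurement is not deterministic

back = apply(H, plus)          # H is its own inverse: H H |0> = |0>
print(probs(back))             # ≈ [1.0, 0.0]
```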

Page 16: EECS 598:  Background in Theory of Computation

Computing with functions

Functions form a Hilbert space
– It is infinite-dimensional, much harder to study
– The inner product is defined via integrals
– Examples of Hermitian operators: differential operators
– Analogues of matrices: “kernels”
– Not nearly as nice as finite-dimensional matrices

How do we get finite-dimensional spaces?
– Restrict allowed states to a finite-dimensional subspace
– Special case: restrict functions to finite sets
– All notions map to common finite-dimensional notions

Page 17: EECS 598:  Background in Theory of Computation

Tensor products

A state of a quantum system is f(x)
A state of a composite system is f(x, y)
We would like some construction that gives us the space of f(x, y) from the spaces of g(x) and h(y)

Tensor product
– Take bases: g1(x), g2(x), … and h1(y), h2(y), …
– Pairwise products gi(x) hj(y) form a basis of the two-variable functions

Tensor product of operators
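In the finite-dimensional case, the pairwise-product construction above is the Kronecker product of coordinate vectors, which can be sketched directly:

```python
# Tensor product of two finite-dimensional state vectors: every
# coefficient of the first paired with every coefficient of the second,
# mirroring the pairwise products g_i(x) h_j(y) of basis functions.

def kron(u, v):
    """Kronecker (tensor) product of two coordinate vectors."""
    return [a * b for a in u for b in v]

g1 = [1, 0]          # first basis state of a 2-dim system
h2 = [0, 1]          # second basis state of another 2-dim system
print(kron(g1, h2))  # → [0, 1, 0, 0]: the composite basis state g1 ⊗ h2

# Dimensions multiply: dim(V ⊗ W) = dim(V) * dim(W)
print(len(kron([1, 2, 3], [4, 5])))  # → 6
```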