Turing Machines January 2003 Part 2:


Turing Machines

January 2003

Part 2:


TM Recap

We have seen how an abstract TM can be built to implement any computable algorithm.

A TM has the components M = (Q, T, g, q0, F).

So we have a machine (or computer) that can do some work. Unfortunately, the abstract TM is so far removed from modern computers that comparisons are difficult. I.e. if we have an algorithm and can show that it works on a TM, how do we then implement it on a conventional computer?

A TM can be related to conventional computers via a simple Random-Access Machine (RAM).


Random-Access Machines - RAM

A RAM has a (finite or infinite) number of memory words, numbered 1, 2, . . ., containing integer values v1, v2, . . .

It also has a CPU containing a finite number of registers R1, R2, . . ., Rn and a program counter PC.

The program is stored in memory. An op-code specifies a standard operation (LOAD, STORE, ADD, JUMP, . . . etc.) with operands for addresses or data as required.

[Diagram: a CPU holding registers R1 . . . Rn and the program counter PC, connected to a memory holding words v1, v2, . . .]
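The fetch-execute behaviour of such a RAM can be sketched in a few lines of Python. This is illustrative only: the exact op-code set, the use of a single register R1, and the absolute JUMP semantics are assumptions made here, not taken from the slides.

```python
# Minimal RAM sketch. The op-code set, the single register R1, and the
# absolute JUMP semantics are assumptions made for illustration.
def run_ram(program, memory, max_steps=1000):
    """program: list of (op, operand) pairs; memory: dict address -> value."""
    regs = {"R1": 0}
    pc = 0                              # program counter
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, arg = program[pc]
        if op == "LOAD":                # R1 <- memory[arg]
            regs["R1"] = memory.get(arg, 0)
        elif op == "STORE":             # memory[arg] <- R1
            memory[arg] = regs["R1"]
        elif op == "ADD":               # R1 <- R1 + memory[arg]
            regs["R1"] += memory.get(arg, 0)
        elif op == "JUMP":              # PC <- arg
            pc = arg
            continue
        elif op == "HALT":
            break
        pc += 1
    return memory

# Example: memory[3] := memory[1] + memory[2]
mem = run_ram([("LOAD", 1), ("ADD", 2), ("STORE", 3), ("HALT", None)],
              {1: 20, 2: 22})
print(mem[3])  # 42
```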


RAM equivalent Turing Machine

An equivalent Turing Machine to the RAM has n + 3 tapes (where n is the number of registers):

• one for each register,

• one for the PC,

• one for a “data address register”,

• and one for the memory.

[Diagram: the n + 3 tapes: one per CPU register R1 . . . Rn, one for the PC, one for the data address register, and one for the memory.]


RAM equivalent Turing Machine

The memory tape consists of blocks of the form

“$ i : vi ”,

where $ and : are separators.

E.g.

• address 1 contains v1
• address 2 contains v2
• etc.

so the tape reads: $ 1 : v1 $ 2 : v2 $ . . .
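This block encoding is easy to mimic in Python. The sketch below builds a tape string of "$ i : vi" blocks and looks up an address by scanning for its separator, much as the TM's search routine would (the exact string layout here is an illustrative assumption).

```python
# Sketch of the "$ i : v_i" memory-tape encoding from the slide.
def encode_memory(values):
    """values: list of integers v1, v2, ... -> one tape string."""
    return "".join(f"${i}:{v}" for i, v in enumerate(values, start=1))

def lookup(tape, address):
    """Scan the tape for the block "$ address :" and return its value,
    as the TM's search routine would."""
    for block in tape.split("$")[1:]:
        i, v = block.split(":")
        if int(i) == address:
            return int(v)
    raise KeyError(address)

tape = encode_memory([7, 42])
print(tape)             # $1:7$2:42
print(lookup(tape, 2))  # 42
```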


RAM equivalent Turing Machine

The TM is provided with standard routines for searching and for all the internal operations of the RAM (this is long but straightforward in principle).

General process:

• If the PC tape holds number i, the memory tape is searched for “$ i :”. The op-code at that point specifies the routine to execute.

• A subsequent address (if needed) can also be copied from the memory tape into the “data address register” tape to control the search for the data.

• The PC tape is incremented and the computation continues.

The above RAM equivalent Turing machine shows how a Turing Machine can (in principle) be made to carry out the same computations as a typical stored-program digital computer.
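One such fetch-execute cycle can be sketched in Python. Here a dict stands in for the memory tape's "$ i : vi" blocks, and the (op, arg) instruction format is an invented illustration, not the slides' actual encoding.

```python
# Sketch of one fetch-execute cycle of the TM simulating a RAM.
# The dict stands in for the "$ i : v_i" memory-tape blocks, and the
# (op, arg) instruction format is an assumption made for illustration.
def tm_step(pc_tape, memory_tape, reg_tape):
    op, arg = memory_tape[pc_tape]      # 1. search memory tape for "$ pc :"
    addr_tape = arg                     # 2. copy the operand address to the
                                        #    data-address-register tape
    if op == "LOAD":
        reg_tape = memory_tape[addr_tape]    # fetch the data word
    elif op == "STORE":
        memory_tape[addr_tape] = reg_tape
    return pc_tape + 1, reg_tape        # 3. increment the PC tape

mem = {0: ("LOAD", 10), 1: ("STORE", 11), 10: 42}
pc, r1 = 0, 0
pc, r1 = tm_step(pc, mem, r1)   # fetch word 42 from address 10
pc, r1 = tm_step(pc, mem, r1)   # store it at address 11
print(mem[11])  # 42
```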


Church-Turing Thesis Part 3

No-one has been able either to extend the power of a simple TM or to find a computational process that could reasonably be called an effective procedure which cannot be carried out by a Turing Machine.

This supports the Church-Turing thesis.

The “Turing Complexity Thesis”

This states that:

anything that can be computed at all can be computed by a TM with at most a polynomial slowdown.

I.e. if

Tfastest(A) = fastest time to compute A
TTM(A) = time to compute A on a TM

then TTM(A) ≤ p(Tfastest(A)) for some polynomial p.

It follows that:

anything that a TM cannot compute in polynomial time cannot be computed at all in polynomial time.

(Not to be confused with the Church-Turing thesis, which concerns computability rather than speed.)


The Complexity Class P

Definition:

The “length of computation” of a TM is the number of moves it makes before it halts. For a TM M, let

TimeM(n) = max{m : there exists an input x of length n such that the length of computation of M on input x is m},

that is, the maximum number of moves (worst case) over all inputs of length n symbols.

If M halts for every input then TimeM(n) is finite.

If TimeM(n) is O(n^k) for some k ≥ 0 then M is a “polynomial-time algorithm”.

Definition:

The “complexity class” P is defined as: the set of problems for which a polynomial-time TM can (in principle) be constructed.
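The worst-case definition of TimeM(n) can be made concrete by brute force over all inputs of a given length. In this sketch the "machine" is just a Python function that reports how many moves it would make on an input, an assumption standing in for a real TM.

```python
from itertools import product

# Brute-force computation of Time_M(n) = worst-case move count over all
# inputs of length n. "machine_moves" is a stand-in for a real TM: a
# function returning the number of moves made on input x (an assumption).
def time_m(machine_moves, alphabet, n):
    return max(machine_moves("".join(x)) for x in product(alphabet, repeat=n))

# Toy machine: one scan of the input, plus one extra scan per '1' seen.
# Time_M(n) = n + n*n, which is O(n^2): a polynomial-time algorithm.
moves = lambda x: len(x) + x.count("1") * len(x)

print(time_m(moves, "01", 3))  # worst case is "111": 3 + 3*3 = 12
```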


Non-deterministic Turing Machine

A “non-deterministic Turing Machine” (NDTM)

M = (Q, T, g, q0, F)

has a choice of moves

i.e. g(q, a) may be multiple-valued.

As previously stated (when considering Turing Machine extensions) this adds nothing (fundamentally) to computability.

The output of a NDTM TM1, for any input, can be exactly reproduced by a deterministic machine TM2, which methodically tries all permutations of TM1’s choices.

The number of choices tends to grow exponentially with the length of the input, so TM2 takes exponential time. TM1 is assumed to “know” which choice to make each time (by intuition?), and may run in polynomial time.
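The deterministic simulation can be sketched as a search over all choice sequences. The "machine" here is a toy: a function that accepts or rejects a given sequence of binary choices (an assumption; a real NDTM would be simulated configuration by configuration).

```python
from itertools import product

# Deterministic simulation of a nondeterministic machine by methodically
# trying every sequence of choices. "run" is a toy stand-in for the NDTM:
# it returns True if that choice sequence leads to acceptance.
def deterministic_accepts(run, max_choices):
    for k in range(max_choices + 1):
        # 2^k sequences of length k: this is the exponential blow-up.
        for choices in product([0, 1], repeat=k):
            if run(choices):
                return True
    return False

# Toy NDTM: accepts iff some choice sequence spells 1, 0, 1.
run = lambda cs: cs == (1, 0, 1)
print(deterministic_accepts(run, 3))  # True
```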


The Complexity Class NP

Definition:

The complexity class NP is defined as: the set of problems for which a polynomial-time NDTM can (in principle) be constructed.

Clearly P ⊆ NP.

The outstanding problem in theoretical computer science is whether P = NP.

It is generally believed that P ≠ NP, i.e. there are problems which are solvable in principle (by a polynomial-time NDTM, or an exponential-time deterministic TM) but for which no polynomial-time deterministic TM can exist, and hence no polynomial-time algorithm on a real machine. These problems are “intractable”.


The Complexity Class NP-complete

Among the problems in NP there is a class of “hardest” ones, called “NP-complete” (believed to lie in NP \ P).

If a polynomial-time algorithm is found for any NP-complete problem it will follow that P = NP. This is considered unlikely.

Quantum computing:

• Some research in the abstract idea of a “Quantum computer” has suggested that NP-complete problems might be solvable in polynomial time on such a “Quantum computer”.

• However, the physical realisation of such a computer has not been established!


[Diagram: Venn diagram of NP, P, NP-complete: NP contains both P and the NP-complete problems as disjoint regions.]

Hundreds of NP-complete problems are known.

The Travelling Salesman Problem (TSP) is NP-complete.

Another simple one to state is the following:

“Given a finite set A consisting of n integers, and a number m, is there a subset of A which totals m?”
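This problem (subset sum) has an obvious brute-force decision procedure: try all 2^n subsets. That is exponential, and no polynomial-time algorithm is known, which is exactly what NP-completeness predicts (assuming P ≠ NP).

```python
from itertools import combinations

# Brute-force decision procedure for the stated problem (subset sum).
# Trying all 2^n subsets is exponential in n.
def subset_sum(A, m):
    return any(sum(c) == m
               for r in range(len(A) + 1)
               for c in combinations(A, r))

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```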


Parallel Computation

It is often supposed that parallel computing is the solution to these problems of intractability.

Alas not!

Imagine a powerful parallel computer with a large array of processors, each of which is a serial computer with a fixed finite amount of memory (such as a Transputer). Choices in an NP-problem could be assigned to different processors and computed in parallel.

In practice, however, an exponentially-growing number of processors is just as impossible as an exponentially-growing period of time.


Parallel Computation

For example, suppose a TSP takes time TS(n) = k·(n-1)!/2 (for some constant k) for n cities on a serial computer, and suppose TS(n1) = 10^5 seconds for n1 = 12, which is not too unrealistic.

For the same computation time on a parallel machine in which 10^6 processors can be used efficiently, we can have a larger number n2 of cities, with TP = 10^5 = TS(n2)/10^6, i.e. TS(n2) = 10^11,

so the serial machine would take 10^11 seconds (about 3000 years) on this task.

But

TS(n2) = 10^11 = TS(n1)·(n2-1)!/(n1-1)! = 10^5·(n2-1)!/11!

from which n2 ≈ 17 cities.

Even with a million processors used to the full, the possible number of cities is still small (17).
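The arithmetic above can be checked numerically. This is a sketch: the constant k is calibrated from the stated TS(12) = 10^5 seconds, and we find the largest n2 whose serial time stays within the 10^11-second budget.

```python
import math

# Numeric check of the slide's arithmetic: T_S(n) = k*(n-1)!/2, with
# k calibrated so that T_S(12) = 1e5 seconds. A million processors buy
# a serial-time budget of T_S(n2) = 1e11 seconds.
def ts(n, k):
    return k * math.factorial(n - 1) / 2

k = 1e5 / (math.factorial(11) / 2)   # from T_S(12) = 1e5

n2 = 12
while ts(n2 + 1, k) <= 1e11:         # largest n with T_S(n) <= 1e11
    n2 += 1
print(n2)  # 17
```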


Parallel Computation

Moreover, there is a fundamental limit to the utilisation of processors.

Chip fabrication (and ultimately atomic physics) implies that there is a minimum volume V which a processor must occupy. As time passes, more and more processors can enter the computation, but information can only travel at the speed of light c, so after time TP the information is within a sphere of radius c·TP, with volume proportional to c^3·TP^3, in which there are at most N = c^3·TP^3/V processors.

For a problem which takes time TS on a serial computer, on the parallel computer TP ≥ TS/N = TS·V/(c^3·TP^3), hence TP^4 ≥ (V/c^3)·TS. It follows that TP and TS are polynomially related, so an algorithm which takes polynomial (or exponential) time on a serial machine also takes polynomial (or exponential) time on a parallel machine.

The definitions of P, NP, and NP-complete are unaltered.


Summary

We can relate abstract Turing Machines to modern computers through the RAM

From a RAM we can generate an equivalent Turing machine where the number of tapes is n + 3 (n = number of CPU registers)

From the “Turing Complexity Thesis” we find that anything that a TM cannot compute in polynomial time cannot be computed at all in polynomial time.

The complexity class P: the set of problems for which a polynomial-time TM can (in principle) be constructed.

The complexity class NP: the set of problems for which a polynomial-time NDTM can (in principle) be constructed.

NP-complete problems are believed not to be solvable in polynomial time.

Parallelism is not the answer!
