
provided that |i| ≤ w - k. In other words, if we shift the operator E(k,x) to the left or right of x0, its value falls off linearly as long as no part of it extends further than w away from x0. For k = w, this allows no shifting at all; for k = w/2, it allows shifting by up to w/2.
The preceding remarks imply that, for the triangular portion

of the (k,x) plane defined by

w/2 ≤ k ≤ w,   |x - x0| ≤ w - k,

the values of E(k,x) should be constant (= h) along the line x = x0, and should fall off linearly (= h - |x - x0| h/k) as one moves in the ±x direction from that line.
When the edge is noisy, this property of E(k,x) will no longer

hold exactly. However, if w is sufficiently large so that averaging over an interval of length w/2 or greater smooths out the noise, then E(k,x) should still behave approximately as described above. Thus we should be able to detect step edges in noisy input by matching the ideal E(k,x) pattern with the actual pattern. Since we do not know the expected position or width of the edge, this must be done for every possible x0 and w.
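As a concrete illustration, the ideal pattern just described can be written down directly. The sketch below is minimal and the function name and the convention of returning None outside the triangular region are illustrative choices, not from [1]-[3]: for a step edge of height h at x0, E equals h on the line x = x0 and falls off linearly within the region w/2 ≤ k ≤ w, |x - x0| ≤ w - k.

def ideal_E(k: int, x: int, x0: int, h: float, w: float):
    # Ideal, noise-free E(k, x) pattern for a step edge of height h at x0.
    # Outside the triangular region described in the text, return None.
    if not (w / 2 <= k <= w and abs(x - x0) <= w - k):
        return None
    return h - abs(x - x0) * h / k   # linear falloff from the peak value h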

III. MATCHING STEP-EDGE PATTERNS

One simple way to detect the desired patterns of E(k,x) values would be to use some type of template matching operation, such as the normalized cross correlation. It was found, however, that this gave unsatisfactory results (for examples see [3]).

Better results were obtained when some simple features of the E(k,x) pattern were analyzed. For example, at a good step edge x0, E(k,x0) should be large and approximately constant for all w/2 ≤ k ≤ w; thus the mean μ(w,x0) of these E values should be large, and their variance σ(w,x0) small. Moreover, for |x - x0| < w/2, there should be a linear falloff of E(w/2,x), with positive slope on one side and negative on the other; and the scatters s_L and s_R of the E values about their two regression lines should be small.

These remarks suggest that one could use a match measure directly proportional to μ(w,x0) and inversely proportional to σ(w,x0), s_L, and s_R. It was found, in fact, that the results were little improved, if any, by using the σ and s measures; μ(w,x0) itself was a good measure of the presence of a step edge.
The results obtained using the μ measure on a single scan line

across a picture of some chromosomes are shown in Fig. 2. (Results when σ and s were incorporated are also shown for comparison.) Correct results were also obtained with the μ measure in a case involving close-by step edges (Fig. 3). Further examples, both with and without the σ and s measures, are given in [3].
The μ measure is nothing more than an average of E's taken over

a 2:1 size range. It appears that using μ's gives better results than using the individual E's, as we did in [1],[2]. In particular, the fact that we are averaging small and large E's together helps us correctly locate the edges of large objects that are close together; this is a case where the use of individual E's could lead to absurd results, as seen in [2].
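A minimal sketch of the μ measure follows. It assumes that E(k, x0) is the absolute difference of the averages of the k samples on either side of x0; the precise operator is defined in [1],[2], not in this excerpt, so this E is only a stand-in. μ(w, x0) is then the average of E(k, x0) over the 2:1 range w/2 ≤ k ≤ w.

def E(f, k, x0):
    # Assumed stand-in for the operator of [1],[2]: difference of the means
    # of the k samples to the left and to the right of x0 (indices in range).
    left = f[x0 - k:x0]
    right = f[x0:x0 + k]
    return abs(sum(right) / k - sum(left) / k)

def mu(f, w, x0):
    # mu(w, x0): average of E(k, x0) over the 2:1 size range w/2 <= k <= w.
    ks = range(w // 2, w + 1)
    return sum(E(f, k, x0) for k in ks) / len(ks)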

REFERENCES

[1] A. Rosenfeld and M. Thurston, "Edge and curve detection for visual scene analysis," IEEE Trans. Comput., vol. C-20, pp. 562-569, May 1971.

[2] A. Rosenfeld, M. Thurston, and Y.-H. Lee, "Edge and curve detection: Further experiments," IEEE Trans. Comput. (Special Issue on Two-Dimensional Digital Signal Processing), vol. C-21, pp. 677-715, July 1972.

[3] L. S. Davis and A. Rosenfeld, "Detection of step edges in noisy one-dimensional data," Univ. Maryland Comput. Sci. Cent., Tech. Rep. 303, May 1974.

On the Efficiency of Universal Machines

ANDY N. C. KANG

Abstract-The complexity of universal programs is compared with the complexity of the programs being simulated. The results suggest that the efficiency of universal machines depends heavily on the programs being simulated. All the results are machine-independent and are derived from the recursion theorem.

Index Terms-Computational complexity, the recursion theorem,universal machines.

I. INTRODUCTION

Universal machines are usually constructed based on a simulation scheme. The number of steps taken by this kind of universal machine is approximately equal to that taken by the machine being simulated. One interesting question is this: Is the simulation necessary for a universal machine? We present a universal machine φ_u with the property that for every partial recursive function f, there is a program φ_k for f such that the computation of φ_k(x) proceeds much more slowly than that of φ_u(⟨k,x⟩). This is because the universal machine is not performing the straightforward simulation. On the other hand, a recursive function g exists such that for every universal machine φ_u and every partial recursive function f, a program φ_j for f exists with the property that φ_u is as slow as φ_j modulo g, i.e., g(⟨j,x⟩, Φ_u(⟨j,x⟩)) > Φ_j(x). We also show that for any given universal machine, there exists another one which performs the simulation faster for some programs. This suggests that there is no best universal machine.

II. MAIN RESULTS

Our notation is based on Rogers [3]. φ_0, φ_1, φ_2, ... denotes a Gödel numbering of all the partial recursive functions. The list φ_0, φ_1, φ_2, ... should be thought of as a list of programs or devices for computing the partial recursive functions. According to the context, φ_i may represent the ith partial function, or it may represent the particular program (the ith program, which computes φ_i). The complexity of programs is defined as follows: with each φ_i is associated a partial recursive function Φ_i called the step-counting function for φ_i. Intuitively, Φ_i(x) is the number of steps needed to compute φ_i(x). Formally, Φ_0, Φ_1, Φ_2, ... is an effectively enumerable sequence of partial recursive functions which satisfy the following two axioms [1]:

1) For all i and x, φ_i(x) converges if and only if Φ_i(x) converges.
2) {(i,x,y) | Φ_i(x) = y} is a recursive set.

Definition 1: φ_u is a universal function if φ_u is partial recursive and for all i and all x, φ_i(x) = φ_u(⟨i,x⟩). For the model of Turing machines, a universal function is called a universal machine.
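Definition 1 presupposes a recursive pairing ⟨i,x⟩, which the paper leaves unspecified. The sketch below uses the standard Cantor pairing; run(i, x), a step-by-step interpreter for the ith program on input x, is a hypothetical helper that is not supplied here.

def pair(i: int, x: int) -> int:
    # Cantor pairing: a bijection from N x N onto N.
    return (i + x) * (i + x + 1) // 2 + x

def unpair(z: int):
    # Inverse of pair; the loops guard against floating-point error.
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    while (w + 1) * (w + 2) // 2 <= z:
        w += 1
    while w * (w + 1) // 2 > z:
        w -= 1
    x = z - w * (w + 1) // 2
    return w - x, x

def phi_u(z: int) -> int:
    # Straightforward universal function of Definition 1: phi_u(<i,x>) = phi_i(x).
    i, x = unpair(z)
    return run(i, x)   # hypothetical step-by-step simulator for the i-th program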

A universal machine is usually constructed so that, upon receiving the index i of a program and the input x, it simulates the computation of φ_i(x) [3]. The number of steps required to do the simulation is roughly about the same as that required to do the computation. One question is this: Is the straightforward simulation necessary for a universal machine? Theorem 1 will present a particular universal machine φ_u such that for each partial recursive function f there exists a program φ_k such that φ_k computes f and the computation of φ_u(⟨k,x⟩) is much faster than the computation of φ_k(x). φ_k is a deliberately slow program for f. With input ⟨k,x⟩, φ_u (which is independent of f) discovers that φ_k is slow and it will simulate a faster program φ_i for f.

Manuscript received August 24, 1973; revised April 15, 1975. The author is with the Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, Va. 24061.


Thus the number of steps taken to compute φ_u(⟨k,x⟩) is about the same as that taken to simulate φ_i(x). Therefore, the computation of φ_u(⟨k,x⟩) is faster than the computation of φ_k(x).

Theorem 1: For every recursive function g, there is a universal function φ_u such that for every partial recursive function f there exists a program φ_k which computes f and has the property that g(Φ_u(⟨k,x⟩)) < Φ_k(x) for all x.

Proof: From the s-m-n theorem [3], there exists a monotone increasing recursive function σ such that for every j and every i,

φ_{σ(⟨j,i⟩)}(x) = φ_j(⟨i,x⟩)   for all x.   (1)

The monotonicity of σ makes {(j,k) | ∃ i [σ(⟨j,i⟩) = k]} recursive. σ is used to construct a partial recursive function φ_{u(j)}. For some fixed j (which will be derived later), φ_{u(j)} is the universal function in the theorem. With input ⟨k,x⟩, φ_{u(j)} will either simulate φ_k(x), or, if k = σ(⟨j,i⟩) for some i, it will simulate φ_i(x). (We will deliberately make φ_k a much slower program than φ_i.)

φ_{u(j)}(⟨k,x⟩) = φ_i(x)   if ∃ i [σ(⟨j,i⟩) = k]
                = φ_k(x)   otherwise.

Now we are going to use the given recursive function g and σ in (1) to construct a program φ_{j0}. Let j0 be the index of the following program (implicitly using the recursion theorem [3]).

φ_{j0}(⟨i,x⟩): "With input ⟨i,x⟩, determine its own index j0, then run φ_i(x). If and when φ_i(x) converges, compute Φ_{u(j0)}(⟨σ(⟨j0,i⟩),x⟩). (φ_i(x) converging implies Φ_{u(j0)}(⟨σ(⟨j0,i⟩),x⟩) will converge.) Next see if Φ_{σ(⟨j0,i⟩)}(x) > g(Φ_{u(j0)}(⟨σ(⟨j0,i⟩),x⟩)). If so, output φ_i(x); otherwise output 1 + φ_{σ(⟨j0,i⟩)}(x)."

By (1),

φ_{σ(⟨j0,i⟩)}(x) = φ_{j0}(⟨i,x⟩) = φ_i(x)   (2)

from which we have Φ_{σ(⟨j0,i⟩)}(x) > g(Φ_{u(j0)}(⟨σ(⟨j0,i⟩),x⟩)) (if the test failed, the program would output 1 + φ_{σ(⟨j0,i⟩)}(x), contradicting (2)). Thus for any partial recursive function f, if φ_i computes f, then k = σ(⟨j0,i⟩) is the program we are looking for, provided φ_{u(j0)} is a universal function.
We claim φ_{u(j0)} is universal. Let k be any integer.
Case 1: There does not exist i such that k = σ(⟨j0,i⟩). Then

φ_{u(j0)}(⟨k,x⟩) = φ_k(x) (by the definition of φ_{u(j0)}).
Case 2: There exists i such that k = σ(⟨j0,i⟩); thus

φ_{u(j0)}(⟨k,x⟩) = φ_i(x)   (by the definition of φ_{u(j0)})
                 = φ_{σ(⟨j0,i⟩)}(x)   (by (2))

                 = φ_k(x).   Q.E.D.

Could we have a stronger result than Theorem 1, i.e., is there a universal φ_u such that for every recursive function g, there exists a program φ_k with the property that g(Φ_u(⟨k,x⟩)) < Φ_k(x)? This is false, because it is easy to show that every universal machine is as slow as the machine being simulated modulo g, if we allow g to depend on the universal machine. Formally we have: for every universal function φ_u, there is a recursive function g such that for all i,

g(x, Φ_u(⟨i,x⟩)) ≥ Φ_i(x)   for all but finitely many x.   (3)

This result follows if we take g(x,z) = max { h(i,x,z) | i ≤ x }, where h is defined as

h(i,x,z) = Φ_i(x)   if Φ_u(⟨i,x⟩) = z
         = 0        otherwise.
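It is worth spelling out why h (and hence g) is recursive: by axiom 2 the predicate Φ_u(⟨i,x⟩) = z is decidable, and whenever it holds, φ_u(⟨i,x⟩) = φ_i(x) converges, so Φ_i(x) is defined by axiom 1. A hedged sketch follows; phi_u_steps_equal (a decider provided by axiom 2) and step_count (an evaluator for Φ_i) are hypothetical helpers, and pair is the pairing sketched earlier.

def h(i: int, x: int, z: int) -> int:
    if phi_u_steps_equal(pair(i, x), z):   # decides "Phi_u(<i,x>) = z" (axiom 2)
        # Phi_u(<i,x>) = z implies phi_u(<i,x>) = phi_i(x) converges,
        # so Phi_i(x) is defined (axiom 1) and the call below terminates.
        return step_count(i, x)            # hypothetical evaluator for Phi_i(x)
    return 0

def g(x: int, z: int) -> int:
    # g(x, z) = max over i <= x of h(i, x, z); hence g(x, Phi_u(<i,x>)) >= Phi_i(x)
    # for every i and every x >= i, i.e., for all but finitely many x.
    return max(h(i, x, z) for i in range(x + 1))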

Theorem 1 presents a somewhat clever universal machine which can compute the results of some slow programs much more quickly than the execution of the programs themselves. It would be desirable if one could design a universal machine that can compute some results of all programs much more quickly than the execution of the programs themselves. The next result shows that this is not possible.

Theorem 2: There is a recursive function g such that, given a universal function φ_u and a partial recursive function f, a program φ_k for f exists with the property that

g(⟨k,x⟩, Φ_u(⟨k,x⟩)) > Φ_k(x)   for all x in the domain of f.

Proof: Let σ be a one-one recursive function such that σ(⟨u,j⟩) > max {u,j} and σ(⟨u,j⟩) is the index k of the following program (implicitly using the recursion theorem):

φ_k(x): "With input x, determine the index k, then run φ_j(x). If and when φ_j(x) converges, see if Φ_k(x) > Φ_j(x) and Φ_u(⟨k,x⟩) > Φ_j(x). If so, give output φ_j(x); otherwise, let the program diverge."

Suppose φ_u is universal and φ_j(x) converges. We claim that φ_k(x) converges, from which it also follows that φ_k(x) = φ_j(x) and

min { Φ_k(x), Φ_u(⟨k,x⟩) } > Φ_j(x).   (4)

Assume to the contrary that φ_k(x) diverges; then either 1) or 2) below will hold.

1) Φ_k(x) ≤ Φ_j(x): This is impossible since Φ_k(x) diverges and Φ_j(x) converges by the hypotheses.
2) Φ_u(⟨k,x⟩) ≤ Φ_j(x): This implies that φ_k(x) converges since

φ_k(x) = φ_u(⟨k,x⟩). This is a contradiction.
Now we define g. Note that {k | (∃u)(∃j)[σ(⟨u,j⟩) = k]} is

recursive and, for each k, the u and j which satisfy σ(⟨u,j⟩) = k are unique.

g(⟨k,x⟩, y) = 1 + Φ_k(x)   if ∃u ∃j [σ(⟨u,j⟩) = k] & Φ_u(⟨k,x⟩) = y & y > Φ_j(x)
            = 1            otherwise.

We now prove the theorem.
a) g is recursive: Suppose not; then for some u, j, k, x, and y,

σ(⟨u,j⟩) = k, Φ_u(⟨k,x⟩) = y, y > Φ_j(x), and φ_k(x) diverges. By the definition of σ, φ_k(x) = φ_{σ(⟨u,j⟩)}(x) diverging implies either 3) or 4) below.

3) Φ_k(x) ≤ Φ_j(x): This is impossible since φ_k(x) diverges and Φ_j(x) < y.
4) Φ_u(⟨k,x⟩) ≤ Φ_j(x): This is impossible since Φ_u(⟨k,x⟩) = y

and y > Φ_j(x). Contradiction.
b) Let φ_j be any program for f and φ_u be the given universal

function. We claim k = σ(⟨u,j⟩) satisfies the theorem. Since φ_u is universal and φ_j(x) converges, we have by (4) that φ_k(x) = φ_j(x) and

g(⟨k,x⟩, Φ_u(⟨k,x⟩)) = g(⟨σ(⟨u,j⟩),x⟩, Φ_u(⟨σ(⟨u,j⟩),x⟩))
                     = 1 + Φ_{σ(⟨u,j⟩)}(x)

(by (4), Φ_u(⟨σ(⟨u,j⟩),x⟩) > Φ_j(x), and by the definition of g)

                     > Φ_k(x).   Q.E.D.

Next we want to know whether or not there exists a best universal machine, in the sense that it is faster than all the other universal machines modulo some fixed function. If a universal machine simulates a suitable program for a constant function, we can construct another universal machine which will discover that this program computes a constant function. Then the universal machine will give the constant output immediately, without simulating. For the following result, we assume that the parallel computation property [2] is satisfied: there exists a recursive function ν such that for all i and j


φ_{ν(⟨i,j⟩)}(x) = φ_i(x)   if Φ_i(x) ≤ Φ_j(x)
                = φ_j(x)   otherwise

Φ_{ν(⟨i,j⟩)}(x) = min { Φ_i(x), Φ_j(x) }.   (5)
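The parallel computation property can be made concrete if programs are modelled as step-by-step computations. In the sketch below (a modelling choice, not the paper's), a program is a Python generator that yields once per step and finally returns its value; combine runs two such computations in lock-step and returns the value of whichever halts first, so its step count mirrors min{Φ_i(x), Φ_j(x)} up to a constant factor.

def combine(comp_i, comp_j):
    # Advance the two computations alternately, one step at a time.
    computations = [comp_i, comp_j]
    while True:
        for comp in computations:
            try:
                next(comp)                 # one more step of this computation
            except StopIteration as halt:
                return halt.value          # the first computation to halt wins

def slow_square(n):
    # Example computation: "runs" for n steps, then returns n squared.
    for _ in range(n):
        yield
    return n * n

print(combine(slow_square(3), slow_square(1000)))   # prints 9 after a few steps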

Theorem 3: For every recursive t and every universal function φ_u, one can effectively find a universal function φ_u' and a program φ_{k0} for the constant zero function such that

Φ_u'(⟨i,x⟩) ≤ Φ_u(⟨i,x⟩)   for all i and all x.

Furthermore, when i = k0,

t(Φ_u'(⟨k0,x⟩)) < Φ_u(⟨k0,x⟩)   for all but finitely many x.

Proof: Let t be given. By (3) there exists a recursive function g such that for every i,

g(x, Φ_u(⟨i,x⟩)) ≥ Φ_i(x)   for all but finitely many x.   (6)

Let v be a recursive function such that

φ_{v(k)}(⟨i,x⟩) = 0        if i = k
                = φ_i(x)   otherwise.
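In the modelling used earlier (the hypothetical interpreter run and the Cantor unpair), v(k) is simply a dispatcher that answers 0 immediately for index k and simulates otherwise; a hedged sketch:

def make_phi_v(k: int):
    # Sketch of phi_{v(k)}: constant answer 0 for index k, simulation otherwise.
    def phi_vk(z: int) -> int:
        i, x = unpair(z)
        return 0 if i == k else run(i, x)   # run is the hypothetical simulator
    return phi_vk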

Define

h(x) = max { Φ_{v(i)}(⟨i,x⟩) | i ≤ x }.

Notice that h is recursive and for every i,

h(x) ≥ Φ_{v(i)}(⟨i,x⟩)   for all but finitely many x.   (7)

We are now going to prove the theorem. Let g be monotone increasing in its second variable and satisfy (6), and let h be monotone increasing and satisfy (7). We may assume without loss of generality that t is monotone increasing. Let k0 be the index of the following program (implicitly using the recursion theorem):

φ_{k0}(x) = 0              if Φ_{k0}(x) > g(x, t(h(x)))
          = 1 + φ_{k0}(x)   otherwise.

As in [1, Theorem 4], it follows that, for all x,

φ_{k0}(x) = 0   and   Φ_{k0}(x) > g(x, t(h(x))).   (8)

Thus, for all but finitely many x,

g(x, Φ_u(⟨k0,x⟩)) ≥ Φ_{k0}(x)   (by (6))
                  > g(x, t(h(x)))   (by (8)).

By the monotone increasing property of g we have

Φ_u(⟨k0,x⟩) > t(h(x))
            ≥ t(Φ_{v(k0)}(⟨k0,x⟩))   (by (7)).

We need to show that φ_{v(k0)} is universal.
1) If i ≠ k0, then

φ_{v(k0)}(⟨i,x⟩) = φ_i(x).

2) If i = k0, then

φ_{v(k0)}(⟨k0,x⟩) = 0,

and in this case φ_{k0}(x) = 0 (by (8)).
The universal function that we are looking for is then

φ_u' = φ_{ν(⟨u, v(k0)⟩)},

where ν is defined in (5). Q.E.D.

REFERENCES

[1] M. Blum, "A machine-independent theory of the complexity of recursive functions," J. Ass. Comput. Mach., vol. 14, pp. 322-336, 1967.

[2] L. H. Landweber and E. L. Robertson, "Recursive properties of abstract complexity classes," J. Ass. Comput. Mach., vol. 19, pp. 296-308, 1972.

[3] H. Rogers, Jr., Theory of Recursive Functions and Effective Computability. New York: McGraw-Hill, 1967.

A Probabilistic Approach of Designing More Reliable Logic Gates with Asymmetric Input Faults

SUNG C. HU

Abstract-Logic gates subject to asymmetric input faults may be made more reliable by employing redundant inputs. A mathematical expression for determining the optimum number of redundant inputs based on the input reliabilities of the gate is developed. The development follows the theory of combinatorial probability.

Index Terms-Asymmetric input faults, input redundancy, logic gate, probability, reliability.

I. INTRODUCTION

The complexity of digital systems is increasing greatly. Because of this complexity, there is an increase in the probability of fault. At the same time, however, applications have increased the demand for higher reliability. These two contrasting developments have created a need for the development of fault-tolerant circuits and systems. However, most fault-tolerant design techniques require a large number of extra circuits. Excessive numbers of components not only increase the cost but also increase the volume and weight, which are important considerations in many applications. A better way to satisfy any given reliability is to employ the probabilistic design approach. References [1]-[4] describe some of the work done in this area.

In this correspondence, we discuss the problem of improving the reliability of a single logic gate from the probabilistic point of view. The reliability improvement of a gate network using a similar probabilistic approach is not discussed and will be a topic for further research.

Although the exact physics of failure is not yet known, the aging effect in modern integrated circuits (IC's) is considered to be negligible. If all IC's are tested beforehand, the gate failures may be considered to be random and time-independent. We also assume that all failures are independent of each other.

II. RELIABILITY IMPROVEMENT OF LOGIC GATES WITH ASYMMETRIC INPUT FAULTS

An input fault of a logic gate, whether produced by a previous gate or by the input circuitry of the gate, may be classified as either critical or subcritical [5]. A critical input fault is an input fault that causes the output to be in a state that is not a valid application of the basic gate function to the remaining inputs. A subcritical input fault is an input fault that allows the output to be the valid logic function of the remaining inputs. If the probabilities of a subcritical

Manuscript received August 23, 1974; revised January 27, 1975. The author is with the Department of Electrical Engineering, Cleveland State University, Cleveland, Ohio 44115.
