
Testing methodology: implications for the circuit designer




Circuit manufacturers must ensure that products perform to specification, but need to balance quality against the cost of testing. B R Wilkins reviews current methods of circuit design that are aimed at making testing simpler and cheaper

The complexity achievable within a custom chip or on a PCB loaded with standard combinational or sequential elements, even without the use of VLSI components such as microprocessors, requires the use of automatic methods for the generation of test patterns if the task is to be completed within an acceptable time and at an acceptable cost. This paper reviews the current status of some aspects of the test process as applied to such circuits, and of the principles of structured design methodologies intended to reduce the difficulties of test pattern generation (TPG). The paper starts by reviewing the fault models on which most automatic TPG (ATPG) methods are based, and goes on to discuss some of the available ATPG methods themselves. The problems involved in TPG for sequential circuits are briefly discussed to show the motivation behind structured design for testability using the scan-in scan-out (SISO) principle. The main implications of SISO are described, as are some of the applications of these principles to the construction of testable PCBs.

microsystems, IC test, PCB test, design for testability, fault models, test patterns

The essential function of manufacturing industry is to make products and sell them at a profit. In the last analysis, it is the customer who is the key ingredient in the manufacturing process; if the customer is not satisfied he will take his custom elsewhere. The customer's requirements are quite simply stated: he wants a product that performs in accordance with its specification, and he wants to have the product at a competitive price. These two considerations -- quality and cost -- are fundamental to the way in which the problem of testing has to be approached.

Department of Electronics and Computer Science, Southampton University, Southampton SO9 5NH, UK. Paper received: 8 June 1988. Revised: 27 September 1988

0141-9331/88/100573-12 $03.00 © 1988

The dilemma confronting the manufacturer is that although he needs to test the completed product in order to ensure correct functioning and so protect his reputation, he is also aware that testing costs money (and more thorough testing costs more money) and that this cost is a manufacturing overhead that could threaten his competitive position. It is only on the basis of an overall economic model that it is possible to justify modifying the design approach (at the cost of additional hardware) in order to simplify the testing task: design for testability (DFT) is becoming an increasingly significant concept in VLSI design.

Test pattern generation (TPG) is the process of deriving a sequence of test patterns, each of which consists of a defined set of input stimuli together with the corresponding fault-free outputs. The obvious approach to TPG is to exercise the circuit so as to check that it performs its intended function. A program developed in this way is known as a functional (or behavioural) test program, but, despite the appealing simplicity of this concept, its application to a practical circuit is not as easy as it appears.

The first task with which a system designer is faced is that of preparing an accurate and comprehensive specification of the required function. This task has long been recognized as not only one of the most important in the design process, but also as one of the most difficult. For the test engineer, faced very often with nothing more than a circuit diagram, the task of identifying the function is even more difficult. Even a purely combinational circuit, for which a truth table might be considered as a complete function description allowing for an exhaustive test, cannot in general be handled that way because

Butterworth & Co. (Publishers) Ltd

Vol 12 No 10 December 1988 573


• the truth table will not be supplied and will not be easily derived

• the truth table will in many cases be impractical to apply, since although a circuit with 20 inputs could be tested on this basis in a second or less, one with 40 inputs would require several days
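The scaling claim can be checked with a few lines of arithmetic. The tester rate assumed below (one pattern per microsecond) is purely illustrative, not taken from the paper:

```python
# Exhaustive-test time for an n-input combinational circuit, assuming
# (hypothetically) a tester that applies one pattern per microsecond.

def exhaustive_test_seconds(n_inputs: int, patterns_per_second: float = 1e6) -> float:
    """Seconds needed to apply all 2**n input patterns."""
    return 2 ** n_inputs / patterns_per_second

print(f"20 inputs: {exhaustive_test_seconds(20):.2f} s")          # about a second
print(f"40 inputs: {exhaustive_test_seconds(40) / 86400:.1f} days")  # order of days
```

Each extra input doubles the test time, which is why exhaustive testing collapses so quickly as circuits grow.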

A sequential circuit is even less amenable to this exhaustive test approach, which would require a check on all possible transitions from all possible states.

Standard VLSI components (such as microprocessors) have to be subjected to some form of behavioural test procedure, because a gate-level circuit description of the device is generally not available to the user. Even so, an exhaustive form of testing (such as checking the execution of each instruction with all possible operands) is again not possible because of the virtually infinite size of the test set, while defining a comprehensive but non-exhaustive test set is a problem for which a general solution has yet to be found.

Instead of selecting tests to verify that the circuit is working, an alternative approach is to form tests that check whether the circuit is faulty. Programs developed on this basis are known as structural test programs: they are derived ultimately from a consideration of structural imperfections (known as defects). These are the actual physical failures that result in functional failure of the circuit, and the aim of a structurally based test program is to ensure that any circuit containing a defect will fail at least one of the tests.

In order to construct a structurally based test program, it is first necessary to distinguish between a defect, which is the physical imperfection (such as a broken bond wire), and a fault, which is the electrical effect produced by a defect. The relationship between defects and faults, which is not necessarily one-to-one, is described by a fault model. The distinction between these two entities is clear (although not all writers are consistent in their use of the terminology), and it is equally clear that electrical tests can detect only faults rather than defects. It is not, therefore, surprising that considerable attention has been, and to some extent continues to be, directed towards the problem of defining fault models that are simple enough to allow economical TPG, but comprehensive enough to cover the defects that arise in practice.

DEFECTS AND FAULT MODELS

Single stuck-at fault model

Structural testing was introduced in the late 1950s, when TTL was the emerging dominant technology and levels of integration were low. It is remarkable that one of the earliest fault models to be proposed 1 not only became quickly established as a universal basis for TPG, but has very largely remained so to this day. The single stuck-at fault model does not attempt to provide a direct translation between defects and faults, but simply postulates that any failing circuit can be considered to have the following properties.

• The defect will directly affect only one node in the circuit.

• The faulty node will remain at a fixed logic level (0 or 1) irrespective of the values of the signals that normally control the node.

The model is illustrated in Figure 1. If C is stuck at 0 (s-a-0) then it maintains that value irrespective of the values applied to A and B. Gates 2 and 3 behave normally, with their outputs depending on C, D and E. Notice that, with C s-a-0, gate 2 will have its output permanently at 1, but this does not contradict the single fault assumption because it is a consequential effect, not a direct one.
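The mechanism of single stuck-at fault injection can be sketched in a few lines of code. The gate types below (NAND throughout) are an assumption, since the figure itself does not survive; the point is how a node is forced to a fixed level while the surrounding gates behave normally:

```python
# Single stuck-at fault injection on a Figure 1-style fragment.
# Gate functions are assumed (NAND throughout) for illustration only.

def nand(*xs: int) -> int:
    return 0 if all(xs) else 1

def fragment(a, b, d, e, c_stuck_at=None):
    """Evaluate the three-gate fragment; optionally force node C."""
    c = nand(a, b)                      # gate 1 drives node C
    if c_stuck_at is not None:
        c = c_stuck_at                  # fault: C held at a fixed level
    return nand(c, d), nand(c, e)       # gates 2 and 3

# With C s-a-0, gate 2's output is 1 for every input combination:
# a consequential effect, not a violation of the single-fault assumption.
outputs = {fragment(a, b, d, e, c_stuck_at=0)[0]
           for a in (0, 1) for b in (0, 1) for d in (0, 1) for e in (0, 1)}
print(outputs)  # {1}
```

A pattern detects the fault when the faulty and fault-free evaluations differ, e.g. A=B=0, D=E=1 gives (0, 0) fault-free but (1, 1) with C s-a-0.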

The single stuck-at model was intended to apply to TTL circuits mounted on PCBs, and the most common defects that were expected with that technology were assembly defects, chiefly related to soldering. A dry joint between a track and a chip pin would cause the input of a TTL gate to be left open circuit: this causes that gate to behave as if its input is s-a-1. Similarly, solder splashes between tracks or pins can short-circuit nodes to the supply or ground rails. Faulty components (whether originally defective or damaged in use) would also very often exhibit stuck input or output behaviour, since excessive operational stress is likely to destroy input- or output-stage diodes or transistors, and internal manufacturing defects will also, in most cases, prevent the output from switching2.

As the level of integration has increased, the problems of testing have spread to encompass chips as well as boards. For TTL chips, the same plausibility arguments apply, since not only manufacturing defects within chips, but also operational stress, can produce similar effects as seen at the chip pins: high temperatures and high electric fields can result in melted bond wires3, cracked bond-wire junctions4, or electromigration5,6 within the chip, giving an electrical fault effect that can often be accurately represented by the stuck-at fault model.

These plausibility arguments explain why the single stuck-at model gained initial acceptance; but the basic assumptions are nevertheless very questionable. The single fault hypothesis clearly does not cover all practical cases: multiple faults can and will occur, either because an underlying physical defect affects more than one node, or because a defect at one node causes consequential damage at another node, or because there are two or more separate defects. In practice, however, the possibility of multiple faults is very rarely considered in TPG systems. One reason for this is that it seems intuitively unlikely that a circuit with two faults would pass a test sequence that would detect either fault individually. In the absence of this effect, known as fault masking, a test sequence constructed on a single fault basis will successfully reject circuits containing multiple faults, although for a repairable unit such as a PCB it may require multiple passes through the test/repair cycle before all the defects are corrected. Fault masking has in the past attracted a certain amount of research effort7,8, but there can be few test engineers today who believe that fault masking is a significant effect in practice; studies of multiple fault detection capabilities of

Figure 1. Circuit fragment to illustrate the single stuck-at fault model

574 Microprocessors and Microsystems


test sets derived on the basis of single fault models have all demonstrated very high fault coverage9,10.

There is a second, more important, reason why multiple faults are not considered. In a circuit with N nodes there are 2N single stuck-at faults, which is manageable even for a circuit containing tens of thousands of gates, but there are 3^N - 1 multiple fault cases. For a circuit with even a few tens of gates, the number of multiple fault cases is to all intents and purposes infinite. However important an effect might be, the testing activity must in the end be governed by what can be achieved within the limits of available time and allowable cost: in the final analysis, testing has to accept empirical limitations.
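The two counts come from each node being fault-free, s-a-0 or s-a-1 (subtracting the single all-fault-free case), and the gulf between them is easy to demonstrate:

```python
# Single vs. multiple stuck-at fault counts for a circuit with N nodes:
# singly, each node contributes two faults; jointly, each node is in one
# of three states, giving 3**N - 1 faulty combinations.

def single_faults(n: int) -> int:
    return 2 * n

def multiple_faults(n: int) -> int:
    return 3 ** n - 1

print(single_faults(50))     # 100 -- easily enumerable
print(multiple_faults(50))   # ~7.2e23 -- effectively infinite
```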


Figure 3. Bridging defects, which produce non-stuck-at faults, or can induce sequential behaviour in a combinational circuit

Fanout faults

In forming a list of faults for consideration during TPG, it is necessary to modify the circuit concept of a node to take account of the existence of fanout. The problem is illustrated in Figure 2, in which the signal E fans out to feed both gates 2 and 3. In circuit terms, the whole network from the output of gate 1 to the inputs of gates 2 and 3 will be a single node, but in a PCB the node will be implemented as a track, with connections (soldered joints) at E, x and y. It is clear that these soldered joints are all potential defect sites, and it is also clear that although a defect at E will transmit its effect to x and y, the converse is not necessarily true. Thus an open circuit at x between the track and the input pin of gate 2 will appear as an s-a-1 fault to gate 2, but the signals at E and y will continue to act normally. In as much as open circuits and other defects can appear in the interconnection metallization, the same effects can occur within a chip as on a PCB.

Fanout is invariably present in any non-trivial circuit, and the assumption that the whole fanout network can be treated as a single node is unjustifiable; if applied to a test pattern generation system, it will probably lead to inadequate fault coverage. The practice is, therefore, to regard the trunk (E in Figure 2) and each branch (x and y in Figure 2) as separate fault sites, each of which could be stuck at 0 or 1.
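Treating the trunk and each branch as separate fault sites can be expressed directly; the site names below follow Figure 2:

```python
# Enumerating stuck-at fault sites for a fanout network: the trunk (E in
# Figure 2) and each branch (x, y) are separate sites, each of which can
# be stuck at 0 or 1.

def fault_sites(trunk: str, branches: list[str]) -> list[tuple[str, int]]:
    """Return (site, stuck_value) pairs for one fanout network."""
    return [(site, v) for site in [trunk, *branches] for v in (0, 1)]

faults = fault_sites("E", ["x", "y"])
print(faults)   # six faults: E, x and y, each s-a-0 and s-a-1
```

Collapsing the network to a single node would yield only two faults and, as the text notes, miss branch-only defects such as the open circuit at x.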

Bridging faults

A defect that is commonly met in practice gives rise to the bridging fault, in which two nodes that should be separate are connected together11. In PCBs this effect can be brought about by a solder splash across two tracks; within an integrated circuit defects in metallization can produce the same effect. Figure 3 illustrates a circuit with two bridging defects F1 and F2. To develop a fault model to represent these defects we have to consider the particular technology in which the circuit is implemented. For TTL, the output stages of gates 1 and 2, feeding the defect F1, might be as shown in Figure 4. The significant point about a bridging fault that makes the stuck-at model inappropriate

Figure 2. Fanout node (E-x-y) showing how trunk and branches form separate fault sites

Figure 4. Effect of a bridging defect at the outputs of two TTL gates

is that neither of the affected nodes is, in fact, stuck. Furthermore, if the inputs to gates 1 and 2 in Figure 3 are such that the fault-free outputs are the same, then there will be no malfunction.

When the inputs to the two gates are such that the fault-free outputs are different, then the conditions shown in Figure 4 prevail, the upper transistor of one gate and the lower transistor of the other gate being both on, and the other two output stage transistors off. It is clear from Figure 4 that with a TTL circuit the output that should be high will be pulled low under these fault conditions. Similar considerations applied to other technologies will show that NMOS circuits will behave like TTL, with the fault-free high being pulled low, while in ECL the fault-free low will be pulled high. CMOS circuits present further difficulties, in that the fault effect produced by a short circuit between two gate outputs will depend on the logic functions of the particular gates12.
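The technology dependence described above amounts to a wired-AND in TTL and NMOS and a wired-OR in ECL. The sketch below models only that logical resolution, ignoring intermediate voltage levels, and deliberately leaves the CMOS case unmodelled:

```python
# Resolving a bridge between two gate outputs as wired logic:
# TTL and NMOS pull the conflicting node low (wired-AND); ECL pulls it
# high (wired-OR). A simplified illustrative model only.

def bridged_value(out1: int, out2: int, technology: str) -> int:
    if technology in ("TTL", "NMOS"):
        return out1 & out2      # the fault-free high is pulled low
    if technology == "ECL":
        return out1 | out2      # the fault-free low is pulled high
    raise ValueError("CMOS needs gate-level analysis of the shorted gates")

print(bridged_value(1, 0, "TTL"))   # 0
print(bridged_value(1, 0, "ECL"))   # 1
```

When the two fault-free outputs agree, both models return the fault-free value, matching the observation that no malfunction occurs.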

The second bridging fault illustrated in Figure 3 presents rather more difficulties. F2 represents a bridge between the output of a combinational circuit and one of its inputs. The problem here is that by completing a feedback loop, the combinational circuit can be thereby converted into an asynchronous sequential one; the derivation of a test for this condition is not at all easy: the TPG system would have to compute the behaviour of the modified circuit and then derive a sequence of patterns to complete a test. For a TPG system intended to operate with a combinational circuit these exercises would be beyond its capabilities.

The extent to which bridging faults can be taken into account is governed by similar considerations to those applying to multiple faults: in a circuit with 1000 nodes, there are half a million node pairs. It is clearly impractical to take all these into account, but at the same time it is unnecessary: an accidental short circuit between two



tracks is unlikely to occur unless the tracks are somewhere physically adjacent. It is, however, not easy to identify, in a complex PCB or integrated circuit, which pairs of tracks are adjacent: analysis of the layout data held in a CAD system would be a complex and time-consuming task. With a PCB, bridging faults between adjacent chip pins may be considered, but generally the tendency is to disregard bridging faults altogether, and to rely on the hope that a test program giving a very high coverage of single stuck-at faults will cover most bridging faults as well; this hope has been shown in empirical studies to be largely borne out even with CMOS circuits 12.

MOS faults

The single stuck-at fault model has for a couple of decades served with apparent success, particularly when applied to PCBs populated with TTL chips. With the growth in use of LSI and VLSI components, which are predominantly fabricated in MOS technology, increasing concern has been expressed that practical defects in MOS circuits have effects that are not well represented by the stuck-at fault model 13-17.

One major problem with using the stuck-at fault model is that it requires the use of a gate-level equivalent circuit to represent the performance of the circuit. Even with the older TTL MSI chips this introduces uncertainty in that the manufacturer only offers the gate-level circuit diagram as a 'functional equivalent'. To take a simple example, Figure 5 shows two different circuits, each of which implements an exclusive-OR function. If a test program based on the possible faults in Figure 5a is used to test an exclusive-OR circuit that is actually built as in Figure 5b it is clear that if the program covers faults in the real circuit, it will do so only fortuitously. The discrepancies between different possible implementations, and the differences in fault cover between test programs derived from the different implementations, will increase as the complexity of the circuit function increases. With MSI circuits such as the 74181 (ALU) or 74180 (parity generator), which could be implemented in many different ways, there is the choice of

• using the data sheet equivalent for want of anything better

• basing TPG on pin faults (pins have the merit that they do at least exist!)

• adopting a functional test approach.

Figure 5. Two different ways of implementing the XOR function

A brief discussion of these options with reference to the 74180 can be found in reference 18 (pp 100-101)

When dealing with MOS circuits, discrepancies between the physical structure of the circuit (in which defects occur) and the logic representation (in which faults are postulated) become even more marked 19. In particular, MOS is especially well suited to implementing relatively complex AND-OR functions in single units. In such a unit, simple open or short circuits can have the effect of making a radical change in the logic function. Figure 6a, for example, shows an NMOS circuit implementing the function

Z = AB + CD

This could be represented by the logic circuit shown in Figure 6b. However, if the physical circuit has a defect consisting of a short circuit between points X and Y, the modified function would become

Z' = (A + C)(B + D)
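The fault-free and shorted functions can be compared exhaustively to find exactly which input patterns expose the X-Y short; a test set that applies neither pattern will pass the defective circuit:

```python
# Comparing the fault-free NMOS function Z = AB + CD with the function
# Z' = (A + C)(B + D) produced by the X-Y short of Figure 6a.

def z_good(a, b, c, d):
    return (a & b) | (c & d)

def z_shorted(a, b, c, d):
    return (a | c) & (b | d)

detecting = [(a, b, c, d)
             for a in (0, 1) for b in (0, 1)
             for c in (0, 1) for d in (0, 1)
             if z_good(a, b, c, d) != z_shorted(a, b, c, d)]
print(detecting)   # e.g. A=1, B=0, C=0, D=1 gives Z = 0 but Z' = 1
```

Only two of the sixteen patterns distinguish the circuits, which shows how easily a fault list built on the wrong structural model can miss the defect.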

There is no way in which the single stuck-at fault analysis applied to Figure 6b could produce this result, and it can be demonstrated that a test program can be found to cover all single stuck-at faults in the gate-level equivalent circuit without detecting these real, and plausible, defects20. Whether this is a significant problem in practice is more debatable. The point here is that when embedded

Figure 6. Illustrating a possible defect in an NMOS circuit: a, a composite circuit element, showing a possible short-circuit X-Y; b, gate-level equivalent of a



within a larger circuit an element would be subjected not just to the minimal test sequence that is derived for the element in isolation (four patterns in the case of Figure 6a), but to many thousands of patterns, so that there is a high probability that the unrepresented faults would be covered fortuitously; certainly it seems implausible that such a radical change of function as represented by the equations above should have no discernible effect on the overall circuit during an extended test sequence. A number of studies have attempted to resolve this dispute, but, as with bridging faults, the need for inclusion of these faults into the fault list is not by any means universally accepted.

The other non-stuck-at fault effect associated with MOS circuits has been discussed particularly with reference to CMOS circuits21-23, but can equally well affect NMOS. The critical feature here is that an open, or short, circuit can cause one of the transistors to be permanently switched on (stuck-on fault) or permanently switched off (stuck-open fault). The latter presents an especially difficult problem because it can result in the output of a particular gate floating under some input conditions. Charge storage then ensures that the output will retain the logic value that it had immediately before it floated. Hence, a purely combinational circuit can, under this type of fault condition, exhibit sequential behaviour, with the output dependent on the sequence of test patterns. As with the MOS faults discussed earlier, examples have been shown of small circuits for which a conventionally derived test sequence will fail to detect certain stuck-on faults; but some empirical studies have suggested that this may not be a problem in practice12,24,25.
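The charge-retention behaviour can be mimicked with a small stateful model. The particular gate (a CMOS NOR with the pull-down transistor on input A stuck open) and the initial stored value are assumptions chosen for illustration:

```python
# A CMOS NOR gate with the pull-down transistor on input A stuck open.
# When only that transistor should conduct, the output floats, and
# stored charge retains the previous logic value: the combinational
# gate has become sequential, so a two-pattern test is needed.

class NorStuckOpenA:
    def __init__(self):
        self.out = 0            # assumed initial stored charge

    def apply(self, a: int, b: int) -> int:
        if a == 0 and b == 0:
            self.out = 1        # pull-up network conducts
        elif b == 1:
            self.out = 0        # B's pull-down transistor still works
        else:                   # a=1, b=0: only the open transistor
            pass                # should conduct, so the output floats
        return self.out

gate = NorStuckOpenA()
gate.apply(0, 0)                # first pattern: drive the output high
print(gate.apply(1, 0))         # second pattern: reads 1 instead of 0
```

A single pattern (1, 0) applied from an unknown state may or may not fail; only the ordered pair (0, 0) then (1, 0) guarantees detection, which is what makes stuck-open faults so awkward for pattern generators built for combinational logic.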

If the possibility of non-stuck MOS faults is to be taken into account during TPG, it is clear that the approach has to be modified. Some early suggestions13 involved developing an elaborate circuit model -- a single NOR gate was represented by 11 gates and a latch -- which was then used for single stuck-at fault testing. For a circuit of any size it seems impossible that a transformation of this complexity could be used in practice, and in any case although the modified circuit accounts for the sequential behaviour induced by the stuck-on fault, it also contains many stuck-at faults that bear no relation to real defects.

A different transformation has been proposed, taking the basic transistor-level description and converting it to a logic diagram that is claimed to allow all switch-level defects to be correctly modelled as stuck-at faults17. An alternative approach to the problem is to abandon the use of an equivalent logic diagram to model an MOS circuit, and instead to move to a switch-level representation. It is claimed that this can lead to TPG procedures of no greater complexity than the ones based on stuck-at faults at the gate level, while accurately covering the real defects in MOS circuits27. Other studies, however, have suggested that coverage is still inadequate unless the circuits are designed specifically to allow testability of stuck-on and

stuck-open faults28-30. An alternative, completely different, approach has been proposed in which stuck-at testing is supplemented by measurements of leakage current to cover non-stuck faults31. Finally, it has been suggested that by confining the circuit design to certain particular forms, the elements can be made stuck-fault testable32.

Despite all the reservations expressed about its adequacy, the single stuck-at fault model remains the dominant basis for TPG and it seems likely to retain that position until convincing evidence is produced that it is failing to deliver adequate test programs when applied to real-size circuits.

TESTING COMBINATIONAL CIRCUITS

The basic strategy used to generate structurally based test patterns, whether manual or computer-based methods are used, is represented in the flowchart shown in Figure 7. The fault list could be chosen to reflect the anticipated fault spectrum of the particular circuit, although in many cases the list will simply be taken as the set of single stuck-at faults. However the list is compiled initially, it will probably be compacted by using the concepts of fault equivalence33.

Although each test will be written to cover one particular fault on the fault list, it will almost always actually cover one or more other faults. It is therefore worthwhile, especially in the early stages of TPG, to assess the fault cover of each test as it is generated so as to avoid unnecessary duplication. After deleting all covered faults from the fault list, it is then possible to see whether or not the target fault cover has been reached.
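The loop just described can be sketched as follows. The functions generate_test and covered_faults are hypothetical stand-ins for a real test generator and fault simulator; only the control flow reflects the flowchart:

```python
# Skeleton of the ATPG loop: write a test for the next listed fault,
# assess its fault cover, delete every fault it happens to cover, and
# stop when the list is empty or the cover target is met.

def atpg(fault_list, generate_test, covered_faults, target_cover=1.0):
    if not fault_list:
        return [], 1.0
    total = len(fault_list)
    remaining = set(fault_list)
    tests = []
    while remaining and (total - len(remaining)) / total < target_cover:
        fault = next(iter(remaining))
        test = generate_test(fault)
        tests.append(test)
        # fault simulation: drop all faults this test covers, to avoid
        # writing unnecessary duplicate tests later
        remaining -= covered_faults(test, remaining)
    return tests, 1 - len(remaining) / total

# Toy model: ten numbered faults; a test for one fault happens to cover
# every remaining fault of the same parity.
faults = list(range(10))
tests, cover = atpg(faults,
                    generate_test=lambda f: f,
                    covered_faults=lambda t, rem: {f for f in rem
                                                   if f % 2 == t % 2})
print(len(tests), cover)   # two tests reach full cover in this toy model
```

The incidental coverage in the toy model is the effect the text describes: each test, written for one fault, usually covers several others, so assessing cover as tests are generated keeps the program short.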

The central process of TPG is based on the idea of the sensitive path, which is illustrated in Figure 8. This represents a combinational element whose output Z is a function of four inputs A, B, C and D. It is always possible in such a case to regard B, C and D as enable inputs and to find a set of values for which Z is a function of A only. In this case we have established a 'sensitive path' through the element from A to Z. The generation of a test for node N stuck at 1 then consists of two parts:

• activating the fault-sensitive condition (i.e. making the fault-free value at N equal to 0)

Figure 7. Flow diagram illustrating the process of automatic test pattern generation

Figure 8. Sensitive path generated by regarding some inputs of a logic block as enables



• establishing a sensitive path from N to one of the primary outputs where it can be observed.

The process can be illustrated by the circuit of Figure 9, which will serve to highlight some of the problems faced by an automatic TPG system. Suppose that the requirement is to write a test for the output of gate 5 (G5) stuck at 1 (s-a-1). The fault-free value at this node must therefore be made 0; this requires 1s on all inputs of G5. This therefore requires IC = ID = 1 and, to obtain 1 at the output of G1, a 0 on either IA or IB (or both). This is the first of many points at which a choice has to be made; at this stage in the process the choice can only be arbitrary, but it may be necessary to modify the choice later in the process.

Suppose, therefore, that the choice made is IA = 0 (IB will remain undefined for the moment). The fault-sensitive condition has now been established, so the next requirement is to sensitize a path to the output. A further set of choices is now presented: possible paths from G5 to G8 are G5-G3-G4-G8, G5-G3-G7-G8, G5-G6-G7-G8 and G5-G6-G4-G8; the fault effect could be transmitted along any one or more of these paths. Whatever choice is made, the other paths will have to be checked (and perhaps blocked) because there is always the possibility with reconvergent fanout that two sensitive paths could cancel each other out. Taking the first option, a sensitive path through G3 requires both the other two inputs (IC and G2) to be 1; IC is a primary input already set to 1, while G2 is also set to 1 because IA = 0.

When we extend the sensitive path through G4, however, we meet a problem: needing the other inputs to be 1, we find that one of them (IA) has already been set to 0. Hence the sensitive path G5-G3-G4-G8 is not possible unless earlier decisions are changed. Tracking back we find that there was an arbitrary decision involving IA: rather than making IA = 0 we could have used IB = 0. If this decision is modified, so that IA is made 1, the sensitive path through G4 can be completed if an appropriate value can be established on G6. Analysis of the conditions on G6 shows that the fault on G5 is in fact transmitted through G6 so that G4 receives two fault-sensitive lines. This is an example of reconvergence; in this case the two fault effects do not cancel, but are transmitted through G4. The final stage of the sensitive path is through G8: with IB = 0, the output of G7 is held at 1, and the conditions are satisfied. A test has therefore been generated: IA IB IC ID = 1011, fault-free output = 1, becoming 0 if G5 is s-a-1.
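For circuits as small as this, the same result can also be reached by brute force: simulate every input pattern against the fault-free and faulty circuits and keep the patterns on which they differ. The two-gate netlist below is a made-up illustration, not a reconstruction of Figure 9:

```python
# Brute-force stuck-at test generation for a tiny hypothetical circuit:
# N1 = NAND(A, B), Z = OR(N1, C). A pattern is a test for a fault if the
# faulty and fault-free primary outputs differ.

from itertools import product

def simulate(inputs, stuck=None):
    a, b, c = inputs
    nodes = {"A": a, "B": b, "C": c}
    nodes["N1"] = 1 - (nodes["A"] & nodes["B"])       # NAND gate
    if stuck and stuck[0] == "N1":
        nodes["N1"] = stuck[1]                        # inject the fault
    nodes["Z"] = nodes["N1"] | nodes["C"]             # OR gate
    if stuck and stuck[0] == "Z":
        nodes["Z"] = stuck[1]
    return nodes["Z"]

tests = [p for p in product((0, 1), repeat=3)
         if simulate(p) != simulate(p, stuck=("N1", 1))]
print(tests)   # the patterns detecting N1 s-a-1
```

Only A = B = 1 with C = 0 works: C = 1 masks the fault at the OR gate, the same propagation problem that path sensitization handles symbolically. Exhaustive search of course scales no better than exhaustive testing, which is why practical systems use sensitization instead.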

Automatic test pattern generation

Figure 9. Circuit illustrating some of the problems of test pattern generation

The example above serves to indicate why ATPG is such a laborious business, demanding substantial computer resources and even then requiring very long runtimes34. Purely combinational circuits, consisting solely of combinational elements without global feedback (which can produce sequential behaviour), can still contain features that defeat some ATPG systems. One such feature is dual-path sensitization, as exemplified by the reconvergence at G4 in the example above: unless the ATPG system can recognize this condition it will classify such faults as untestable.

Many ATPG systems have been developed in an attempt to make the test generation process both automatic and economic; all represent mechanizations of the path sensitization technique described above. They differ essentially in their ability to cope with 'awkward' circuit configurations and (particularly) in execution time. One of the earliest is known as the D-algorithm35, which has been very widely used throughout the industry, and has provided the foundation on which most other algorithms have been based.

The D-algorithm formalizes the sensitive path technique by evaluating circuit behaviour based on a five-value logic representation of signal states. In addition to 1, 0 and X, values of D and D̄ are included, where D indicates a signal whose value is 1 in the fault-free circuit and 0 in the presence of the particular fault being considered (and D̄ indicates the converse). The D-algorithm operates on a netlist defining the circuit topology, and depends on a library of descriptions of the circuit elements (which can be just the basic gates, but which could also contain more complex elements); these descriptions really amount to truth tables encompassing both fault-free behaviour and fault-generation, fault-propagation and fault-blocking behaviour. The test generation process starts (having selected for TPG a particular fault at the output of a particular element) by establishing the fault-sensitive condition: this amounts to choosing input values for the element that are consistent with the D or D̄ (as required) at the output of that element. Thus, referring to Figure 9, if a test is required for G5 s-a-1, we put G5 = D̄, and find that the only input pattern for G5 consistent with this output is IC = 1, ID = 1, G1 = 1. To establish G1 = 1 offers a choice; arbitrarily IA = 0, IB = 1 can be selected. Having now established primary input values that are consistent with the required D̄ at G5, this is then propagated through the circuit to a primary output (a process known as D-drive), by specifying values on intermediate gate inputs. A further arbitrary choice is then entailed because of the fanout from G5: suppose the path chosen is through G3, which requires G2 = 1, IC = 1. This process continues, propagating the so-called 'D-frontier' through the circuit until it reaches a primary output.
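The five-valued calculus can be sketched by representing each signal as a (good, faulty) pair, so that D = (1, 0), D̄ = (0, 1) and X leaves both components unknown. The gate evaluation below is an illustrative sketch of the algebra only, not of the D-algorithm's cube-based machinery.

```python
# Five-valued D-calculus evaluation for a NAND gate: each signal is a
# (good, faulty) pair, with None standing for an unknown (X) component.

def and_bits(bits):
    """Three-valued AND over one component (good or faulty) of the signals."""
    if 0 in bits:
        return 0
    return None if None in bits else 1

def nand(*inputs):
    """NAND over five-valued signals given as (good, faulty) pairs."""
    good = and_bits([g for g, f in inputs])
    faulty = and_bits([f for g, f in inputs])
    inv = lambda b: None if b is None else 1 - b
    return (inv(good), inv(faulty))

ZERO, ONE, X, D, DBAR = (0, 0), (1, 1), (None, None), (1, 0), (0, 1)

# A D on one input propagates through a NAND as D-bar when the other inputs
# are held at the non-controlling value 1...
assert nand(D, ONE, ONE) == DBAR
# ...and is blocked (output forced to 1) if any other input is 0.
assert nand(D, ZERO, ONE) == ONE
# Two fault-sensitive inputs reconverging on one gate need not cancel:
assert nand(D, D) == DBAR
```

The last assertion mirrors the reconvergence at G4 in Figure 9, where two fault-sensitive lines meet at one gate and the fault effect is still transmitted.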

The next stage of the process is to check consistency. This requires that the values assigned during the propagation phase are traced back through the circuit to establish consistent primary input values. This process is called justification. If these two processes are completed without encountering any inconsistencies then the test is complete. If, however, there are inconsistencies in either the D-drive or the justification process, then the algorithm has to backtrack to one of the arbitrary decision points, change that decision and then repeat the process. In the circuit of Figure 9, for example, propagating the fault through G4 will be found to be impossible because of the previously assigned value IA = 0. This path choice will then have to be changed. Fanout points always offer a number of choices:

578 Microprocessors and Microsystems


propagation can be through any single path or through any two or more paths simultaneously. If none of the possibilities at a particular fanout point yields a test, then it is necessary to backtrack further to an earlier choice point. Ultimately the D-algorithm will try every possible choice before classifying a particular fault as undetectable: if a test exists, the D-algorithm will find it.

The D-algorithm represented an important breakthrough in ATPG when it appeared: by providing a mathematical formalism into which the various propagation and justification procedures could be fitted, together with routines for recognizing and using multiple-path propagation and dealing correctly with reconvergence, fully automatic TPG for combinational circuits was made possible. Since then the D-algorithm has been widely used, and the main development effort has been devoted to finding modified algorithms that can achieve the same results at lower cost in terms of computer time and resources.

During the 15 or so years following the publication of the D-algorithm, a number of minor variations were developed, including systems such as Lasar36, which is still commercially available. This uses a technique of establishing critical paths working back from the output and assigning values to gate inputs so as to maximize the number of fault-sensitive nodes. Lasar is faster than the D-algorithm, but does not have the same guarantee of fault coverage; in particular, faults that can be covered only by use of dual-path sensitization may be categorized as untestable.

An important factor in determining the amount of backtracking necessary is the amount of fanout in the circuit, because of the variety of possibilities available at the D-drive phase. The adoption of nine-valued logic37 helps the situation by effectively simulating single and multiple path propagation simultaneously. The four extra values (in addition to the five used in the basic D-algorithm) are G1 and G0, implying 1 or 0 respectively in the good machine with an unspecified value in the faulty machine, and F1 and F0, implying 1 or 0 respectively in the faulty machine with an unspecified value in the good machine. The worst-case number of backtracks during D-drive using this signal representation is N (where N is the number of fanout branches), compared with 2^N - 1 with the D-algorithm. The next major advance came with the development of Podem38, which offered significant improvements in execution time. These are achieved not by any change in principle (Podem is based on establishing sensitive paths from fault sites to primary outputs just as the D-algorithm does), but by changes in the mechanization. This is not, perhaps, too surprising; it confirms that a major part of the time in ATPG is taken up with backtracking, changing decisions and the associated housekeeping. The main difference between Podem and the D-algorithm is that, rather than propagating the D-frontier all the way to the primary outputs before starting on justification, Podem justifies as it goes. Thus in Podem, as soon as a primary input value is assigned, the effects of that assignment are evaluated, and any conflicts are resolved without going further. Other time-saving procedures include heuristics for guiding the 'arbitrary' choices so as, first, to maximize the likelihood of finding a test with the minimum of backtracking and, secondly, to identify an undetectable fault with the minimum of wasted effort.
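The flavour of Podem's implicit enumeration over primary inputs can be sketched as follows. The circuit, the fault site and the helper names are all hypothetical; the point is the structure: assign one primary input at a time, simulate the good and faulty circuits in three-valued logic after each assignment, and backtrack as soon as the partial assignment already forces equal outputs.

```python
# Podem-style implicit enumeration on a tiny hypothetical circuit:
# out = NAND(NAND(a, b), c), with the internal node g1 stuck at 1.

def nand3(*xs):
    """Three-valued NAND: None stands for X (an unassigned value)."""
    if 0 in xs:
        return 1
    return None if None in xs else 0

def simulate(pi, fault_node=None):
    g1 = nand3(pi['a'], pi['b'])
    if fault_node == 'g1':
        g1 = 1                                 # inject the stuck-at-1 fault
    return nand3(g1, pi['c'])

def podem(pi, inputs):
    good, bad = simulate(pi), simulate(pi, 'g1')
    if good is not None and bad is not None:
        return dict(pi) if good != bad else None   # test found / dead branch
    unassigned = [i for i in inputs if pi[i] is None]
    if not unassigned:
        return None
    for v in (0, 1):                               # branch on the next PI
        pi[unassigned[0]] = v
        found = podem(pi, inputs)
        if found:
            return found
        pi[unassigned[0]] = None                   # backtrack
    return None

test = podem({'a': None, 'b': None, 'c': None}, ['a', 'b', 'c'])
# A detecting vector must drive g1 to 0 (a = b = 1) and sensitize it (c = 1).
assert test == {'a': 1, 'b': 1, 'c': 1}
```

Real Podem adds the heuristics mentioned above (backtrace to choose which input to assign and with which value); this sketch simply tries inputs in a fixed order.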

Further improvements in processing time have been reported with the Fan algorithm39,40, which was developed specifically with a view to dealing with the problems presented by fanout, and with a more recent system called Future41, which makes use of an algorithm based on Fan and also uses a nine-valued logic signal representation during the ATPG phase. An analysis of the CPU time while Future was running has shown that, for most circuits, 80% or more of the CPU time was taken up by fault simulation41. The high cost of fault simulation becomes proportionally higher as the number of gates in the unit under test gets larger42, so that there are continuing efforts to avoid the use of fault simulation43,44 or to produce more economical fault simulation procedures45, some of which have been incorporated into new (or updated) ATPG systems such as Lamp246,47.

A completely different stratagem for reducing the total processing time capitalizes on the observation that, in a complex circuit with many outputs and many internal nodes, virtually any test vector applied to the circuit will cover a significant number of faults. This observation leads to the conclusion that the application of a sequence of non-repeating random patterns to a combinational network will achieve a total fault coverage that asymptotically approaches 100%, and, more importantly, that such a sequence is likely to provide large incremental fault cover in the early stages. Attempts have been made to avoid the TPG process entirely by using just random patterns, but it is always likely to be costly to achieve very high fault coverage48,49, and for some circuits a sufficiently high fault coverage could be guaranteed only by an exhaustive test. Nevertheless, a few random patterns followed by fault simulation can achieve a worthwhile reduction in the fault list before directed TPG starts; this is particularly true if the 'random' generation process is cunningly directed. The Raps algorithm50 has been used in this way in conjunction with Podem to generate tests for networks of up to 40 000 gates51.
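The random-pattern strategy amounts to fault grading: apply random vectors and strike from the fault list every fault whose faulty output differs from the good output. A sketch on a hypothetical 4-input block (the circuit and node names are invented for illustration):

```python
# Random-pattern fault grading for single stuck-at faults on the internal
# nodes of a small hypothetical circuit.
import random

def net(a, b, c, d, stuck=None):
    """Tiny example circuit; `stuck` optionally forces a named node."""
    v = {}
    v['n1'] = 1 - (a & b)          # NAND
    v['n2'] = 1 - (c | d)          # NOR
    if stuck and stuck[0] in v:
        v[stuck[0]] = stuck[1]
    return v['n1'] ^ v['n2']       # XOR driving the primary output

faults = [(node, val) for node in ('n1', 'n2') for val in (0, 1)]
random.seed(1)
covered = set()
for _ in range(16):                # a short burst of random patterns
    vec = [random.randint(0, 1) for _ in range(4)]
    good = net(*vec)
    covered |= {f for f in faults if net(*vec, stuck=f) != good}

print(f'{len(covered)}/{len(faults)} faults covered')
```

Because the XOR always sensitizes both internal nodes here, every vector strikes off some faults, which is exactly the 'large incremental cover in the early stages' noted above; the remaining faults would then go to a directed TPG step.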

Despite all the improvements in efficiency in ATPG that have been achieved, runtimes for VLSI circuits remain high and, with the increasing density of new circuits, seem likely to continue to increase and to represent a major component of manufacturing cost52. New approaches to ATPG are still being explored; both the procedures by which tests are derived53 and the fault models on which they are based54,55 are being scrutinized, but without any clear indication as yet that a major breakthrough is imminent. It seems possible that the use of knowledge-based (expert) systems could bring about a substantial advance, by using built-in 'understanding' of the environment of electronic circuits either to guide the TPG process or to direct the overall strategy. Attempts have been, and continue to be, made to implement such systems56, and one system based on these principles (Hitest) has been made commercially available57,58. The problems have not all been solved, however; Hitest is really an interactive aid to TPG rather than a completely automatic system. It is, nevertheless, a first step along an interesting and promising path.

DESIGN AND TESTING OF SEQUENTIAL CIRCUITS

Initialization

Figure 10 shows a general model of a synchronous sequential circuit, emphasizing the distinction between


Figure 10. General model of a synchronous sequential circuit

the flip-flops and the associated combinational logic. The flip-flops define the current state of the system, while the next state of the system, which will be entered after the next clock pulse, is determined by the signals applied to the data inputs of the flip-flops. Both the next-state variables and the primary outputs are generated as combinational logic functions of both the present state variables and the primary inputs. However the testing task is approached, it is clear that any particular test is likely to require a sequence of patterns to be applied in order to establish an appropriate state before applying the required primary inputs; a further sequence of patterns may then be necessary to propagate the test results to a primary output.

An essential component of any test is a knowledge of the correct (fault-free) output. Since the output from a sequential circuit depends on the state as well as the current inputs, it follows that the tester must be able to put the circuit into a defined state before testing can commence. This process is known as initialization: the main problem with the initialization requirement is that we cannot know what state the circuit will enter on power up (and different samples of the same circuit may well power up in different states). An initialization sequence must, therefore, when applied to a fault-free circuit, take it to the chosen initialized state irrespective of the start state. The development of an initialization sequence for a circuit of any size is at best difficult and can be impossible: a cyclic sequencer, for example, cannot be initialized (in accordance with the definition above) unless special provision is made when the circuit is designed18. The most straightforward way of dealing with this problem is to provide a single master reset line, which either forces all flip-flop data inputs to known values (to allow a synchronous reset after a clock pulse) or activates the asynchronous inputs to the flip-flops to give an asynchronous reset. In neither case is it necessary for the initialized state to be all zero: any well defined state will do.
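The two reset styles can be sketched behaviourally with a hypothetical D flip-flop model: the power-up state is unpredictable, so a test can begin only once a master reset has forced a defined state.

```python
# Behavioural sketch of synchronous versus asynchronous master reset on a
# hypothetical D flip-flop; the power-up state is modelled as random.
import random

class DFlipFlop:
    def __init__(self):
        self.q = random.randint(0, 1)   # power-up state is unpredictable

    def clock(self, d, sync_reset=False):
        """Synchronous reset: reset forces the data input for the next clock."""
        self.q = 0 if sync_reset else d

    def async_reset(self):
        """Asynchronous reset: acts on the flip-flop directly, without a clock."""
        self.q = 0

ffs = [DFlipFlop() for _ in range(8)]
for ff in ffs:
    ff.clock(d=ff.q, sync_reset=True)   # one clock with master reset asserted
assert all(ff.q == 0 for ff in ffs)     # defined state, whatever the start state
```

As the text notes, the reset state need not be all zeros; any well-defined state serves, provided every sample of the circuit reaches the same one.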

Testing requirements

Having established a means of initializing a synchronous sequential circuit, it is then necessary to generate test patterns. Many of the ATPG systems mentioned earlier

have been applied to sequential circuits, but they do not perform well, especially if, as is often the case, the sequential elements are represented in the ATPG system by gate-level equivalents59. Adequate fault cover is, in general, achieved only after supplementation with manually generated test patterns.

The basic problems of testability can be illustrated by reference to the general model of a sequential circuit shown in Figure 10. From this model, it is not difficult to identify the features of the structure that are likely to present difficulties for an ATPG system.

• To test the combinational logic, we need to apply specific test vectors; but among its inputs are the state variables, over which we have no direct control.

• Having applied the test vectors to the combinational logic, some of the outputs are not directly observable.

• The state variables themselves (i.e. the outputs of the flip-flops) are neither directly observable nor directly controllable.

It is when attempting to overcome these restrictions on access to the circuit that the limitations of ATPG systems become apparent: the penalties consist of both long run times and low fault cover. The solution lies in the adoption of a structured design methodology, design for testability (DFT), whose objective is to avoid these testing problems by guaranteeing complete controllability and observability of the state of the circuit. Some form of DFT is increasingly being seen as essential for the next generation of custom circuits60,61.

The SISO principle

All the formal methods of design for testability that have so far been proposed depend on the same two basic principles:

• the structure of the system is so arranged that the stored-state devices and the combinational logic can be isolated from each other for testing purposes

• observation and control of the state variables are both achieved by using serial access so as to minimize the number of input and output pins that have to be dedicated to the testing function

It is because of the serial access feature that these are known as scan-in, scan-out (SISO) methods, and the serial data path is known as a scan path62,63.

The modification necessary to make the circuit of Figure 10 into a SISO system is shown in Figure 11, and consists essentially of inserting a multiplexer in front of the data input of each flip-flop, all the multiplexers being controlled by a single mode control M. If M = 0, the upper inputs are selected, while if M = 1 the lower inputs are selected. The output of each flip-flop, as well as being connected to the combinational logic, is also connected to the next flip-flop through its multiplexer. A complete chain of flip-flops is thus formed, and the two ends of the chain are brought out as I/O pins; these are generally known as the scan data in (SDI) and scan data out (SDO) pins.

The system has two modes of operation, selected by the mode control M.

• When M = 0, the system is in operational mode, with the combinational logic driving the flip-flops.


Figure 11. Modified sequential circuit with scan-in, scan-out capability

• When M = 1, the system is in scan mode, with the combinational logic disconnected from the flip-flop inputs. The flip-flops are reconfigured into a single shift register with input (SDI) and output (SDO) directly available. Thus the state of the circuit is directly controllable, by shifting in any required set of values through SDI; and directly observable by shifting the contents of the flip-flops out through SDO.

The test procedure with the SISO structure can be summarized as follows.

• Test the flip-flops as a shift register, by setting M = 1, and using SDI and SDO for access. This test will be standard for all circuits, since flip-flops are always configured as a shift register for test purposes, the only variation being in the length.

• Test the combinational logic. Tests will have been derived assuming controllability of all inputs (state variables as well as primary inputs) and observability of all outputs (flip-flop inputs as well as primary outputs). For each test there is a three-stage process. (1) Set M = 1 and shift in the required set of state variables. (2) Set M = 0. Apply the required primary inputs. Observe the resulting primary outputs. Clock the flip-flops once, so that the remaining outputs from the combinational logic are latched into the flip-flops. (3) Set M = 1. Shift the flip-flop contents out and observe them at SDO.

In practice, steps (1) and (3) can be combined, in that the state variables required for test N + 1 can be shifted in at the same time that the results from test N are shifted out.
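The three-stage procedure can be sketched behaviourally on a hypothetical 4-bit scan chain, with a trivial stand-in (an inverter on every bit) for the combinational logic:

```python
# Behavioural sketch of the SISO test procedure on a hypothetical 4-bit chain:
# with M = 1 the flip-flops form a shift register between SDI and SDO; with
# M = 0 one clock captures the combinational outputs into the flip-flops.

class ScanChain:
    def __init__(self, length):
        self.ff = [0] * length                # flip-flop contents; ff[-1] at SDO

    def shift(self, sdi_bits):
        """Scan mode (M = 1): one clock per bit; returns the bits seen at SDO."""
        sdo = []
        for b in sdi_bits:
            sdo.append(self.ff[-1])           # SDO shows the last flip-flop
            self.ff = [b] + self.ff[:-1]      # serial shift towards SDO
        return sdo

    def capture(self, comb_outputs):
        """Operational mode (M = 0): one clock latches the combinational outputs."""
        self.ff = list(comb_outputs)

chain = ScanChain(4)
pattern = [1, 0, 1, 1]
chain.shift(pattern)                      # (1) shift in the state variables
chain.capture([1 - b for b in chain.ff])  # (2) one functional clock; here the
                                          #     'combinational logic' inverts each bit
result = chain.shift([0] * 4)             # (3) shift out the response; these
                                          #     clocks double as step (1) of the
                                          #     next test
assert result == [1 - b for b in pattern]
```

The final line illustrates the point made above: the clocks that shift out the results of test N are the same clocks that shift in the state variables for test N + 1.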

Most implementations of SISO depend on the use of a specially designed flip-flop that effectively combines into a single unit a storage element and a selection mechanism (equivalent to a multiplexer).

DFT in the context of board testing

All forms of DFT incur hardware costs compared with the 'free style' design methods. They can be justified only by

savings in the testing process (including the cost of ATPG and also the cost of supplying and maintaining automatic test equipment). These costs are steadily rising, so that DFT in chips is becoming increasingly general. When it comes to testing PCBs, similar problems of cost and circuit complexity are encountered, together with the added requirements of diagnosis and repair of faulty boards64. These problems are causing increasing difficulty to test engineering, particularly as it attempts to accommodate surface mounting and other high-density packaging methods65. Just as with unstructured sequential circuits, the economic solution is increasingly being seen as DFT, where the testability can be built in either at board level or at chip level. At board level, the approach to testability improvement depends on whether the board is to be tested on a functional tester, with access confined to the board I/Os, or on an in-circuit tester having access to all chip I/Os66,67.

In-circuit testing is said to be acceptable for surface-mounted boards, provided the physical design of the board is matched to the testing method68-70, although fixturing will always be difficult71, and doubts about the effects of backdriving remain72,73.

For testability improvement in a board to be tested on a functional tester, there are three main approaches to DFT.

• For a board composed of scan-designed chips, provision can be made for all the test control pins to be made available, and for the individual scan chains to be joined into a single scan chain74.

• Additional chips can be added to the board design in order to enhance controllability and observability: chip sets for this purpose have been developed, for example, by Logical Solutions Inc.75 to service a 'testability bus' carrying test stimuli and responses. Alternatively, standard registers can be replaced by special chips such as the Serial Shadow Register developed by AMD (described in the manufacturer's data sheet and also in some books18,76).

• The physical structure of the board can be modified to take account of tester access problems77.

A further recent development aimed at easing board testing problems is the growing advocacy amongst chip users of a design methodology that requires chip makers to incorporate specific testability features into their chips. One of the earliest versions of this methodology was the electronic chip in place test (ECIPT)78, but more recently the approach has been developed under the name 'boundary scan'. The basic principle is illustrated in Figure 12, which shows a simple form of 'boundary cell' inserted between a chip output and its output pin. This cell contains a flip-flop that

Figure 12. Simple form of boundary scan cell

Vol "12 No 70 December 7 9 8 8 587

Page 10: Testing methodology: implications for the circuit designer

Figure 13. Output stages of a boundary scan chip, showing the scan chain

Figure 14. Principle of the boundary scan board, showing how interconnection (including soldered joints) can be tested without probing

can hold either stimulus or response data, and it can be loaded and inspected serially, forming a scan chain as indicated in Figure 13. The idea is that a similar cell should be included behind each chip I/O pin, and that the scan paths of all the chips on a board should be linked together as shown in Figure 14. By this means, 'in-circuit' tests of individual chips can be achieved without probing, by loading input data into the input boundary cells and inspecting the results serially from the output boundary cells. Furthermore, by loading data into the output cells of one chip and transferring it to the input cells of another chip, the interconnection wiring and solder joints can also be tested (which an in-circuit tester is unable to do).
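The interconnect test described above can be sketched as follows; the two-chip arrangement, cell counts and fault model are hypothetical, chosen only to show the principle of driving from one chip's output cells and capturing at another's input cells.

```python
# Sketch of a boundary-scan interconnect test on a hypothetical two-chip
# board: a pattern loaded into chip 1's output boundary cells crosses the
# board wiring and is captured in chip 2's input boundary cells; a stuck or
# open net shows up as a mismatched bit when the capture is shifted out.

def interconnect_test(pattern, net_fault=None):
    out_cells = list(pattern)                 # chip 1 output boundary cells
    # Board interconnect: one wire per cell; an optional fault (index, value)
    # forces one net to a fixed level, modelling a short or open.
    wires = [net_fault[1] if net_fault and i == net_fault[0] else b
             for i, b in enumerate(out_cells)]
    return list(wires)                        # chip 2 input boundary cells

pattern = [1, 0, 1, 0]
assert interconnect_test(pattern) == pattern                    # wiring good
assert interconnect_test(pattern, net_fault=(2, 0)) != pattern  # net 2 stuck at 0
```

In practice a set of patterns (such as a walking-ones sequence) is used so that every net is driven to both levels and shorted nets can be distinguished; the key point is that no bed-of-nails probing is required.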

Considerable interest in this scheme is being expressed by many companies internationally, and an agreed standard is close to completion. If widely implemented, it could have a profound effect on the practice of board testing in the decades to come.

CONCLUSIONS

The problems of testing are becoming more severe, and more costly, as the complexity of circuits at all levels continues to increase. To keep testing costs within reasonable limits, it is becoming ever more important to tailor design methodologies to the needs of the testing process.

The accurate modelling of the behaviour of faulty circuits is a matter of continuing concern, especially for CMOS circuits, although many working systems continue to use the long-established single stuck-at fault model as the basis for test pattern generation. The main exception to this is when testing programmable logic arrays, where the predominant failure mechanism requires a different treatment.

Sequential circuits present even more severe problems if unstructured design methods are employed. The use of scan design can overcome most of the problems, but at the expense of significant penalties in terms of hardware overhead, and some sacrifice of performance and reliability. The widespread interest being shown in boundary scan is evidence that these penalties are being seen as an acceptable price to pay for the benefits obtained.

REFERENCES

1 Eldred, R C 'Test routines based on symbolic logic statements' J. ACM Vol 6 (1959) pp 33-36

2 Beh, C C, Arya, K H, Radke, C E and Torku, K E 'Do stuck fault models reflect manufacturing defects?' Proc. Int. Test Conf. (1982) pp 35-42

3 Parker, K P Integrating design and test IEEE Computer Society Press, New York, NY, USA (1987)

4 Bushanam, G S, Harwood, V R, King, P N and Story, R D 'Measuring thermal rises due to digital device overdriving' Proc. Int. Test Conf. (1984) pp 400-423

5 Peattie, C G, Adams, S D, Carrell, S L, George, T D and Valek, M H 'Elements of semiconductor-device reliability' Proc. IEEE Vol 62 (1974) pp 149-168

6 Schoen, J M 'A model of electromigration failure under pulsed conditions' J. Appl. Phys. Vol 51 (1980) pp 508-512

7 Dias, F J O 'Fault masking in combinational logic circuits' IEEE Trans. Comput. Vol C-24 (1975) pp 476-482

8 Bennetts, R G Introduction to digital board testing Arnold, London, UK (1982)

9 Hughes, J L A and McClusky, E J 'An analysis of the multiple fault detection capabilities of single stuck-at fault test sets' Proc. Int. Test Conf. (1984) pp 52-58

10 Agarwal, V K and Fung, A S F 'Multiple fault testing of large circuits by single fault test sets' IEEE Trans. Comput. Vol C-30 (1981) pp 855-865

11 Mei, K C Y 'Bridging and stuck-at faults' IEEE Trans. Comput. Vol C-23 (1974) pp 720-727

12 Timoc, C, Buehler, M, Grimswold, T, Pina, C, Stolt, F and Hess, L 'Logical models of physical failures' Proc. Int. Test Conf. (1983) pp 546-553

13 Wadsack, R L 'Fault modelling and logic simulation of CMOS and NMOS integrated circuits' Bell Syst. Tech. J. Vol 57 (1978) pp 1449-1474

14 Galiay, J, Crouzet, Y and Vegniault, M 'Physical versus logical fault models in MOS LSI circuits: Impact on their testability' IEEE Trans. Comput. Vol C-29 (1980) pp 527-531

15 Banerjee, P and Abraham, J 'Generating tests for physical failures in MOS logic circuits' Proc. Int. Test Conf. (1983) pp 554-559

16 Banerjee, P and Abraham, J A 'Characterisation and testing of physical failures in MOS logic circuits' IEEE Des. Test Vol 1 No 3 (August 1984) pp 76-86

17 Abraham, J A 'Fault modelling' in Williams, T W (ed.) VLSI testing (Advances in CAD for VLSI, Vol 5) North-Holland, Amsterdam, The Netherlands (1986)

18 Wilkins, B R Testing digital circuits Van Nostrand Reinhold, Wokingham, UK (1986)

19 Burgess, N, Damper, R I, Totton, K A and Shaw, S J 'Physical faults in MOS circuits and their coverage by different fault models' IEE Proc. E Vol 135 (1988) pp 1-9

20 Burgess, N, Damper, R I, Shaw, S J and Wilkins, D R J 'Faults and fault-effects in NMOS circuits - Impact on design and testability' IEE Proc. G Vol 132 (1985) pp 82-89

21 Mandl, K D 'CMOS VLSI challenges to test' Proc. Int. Test Conf. (1984) pp 642-648

22 Chen, H H, Matthews, R G and Newkirk, J A 'An algorithm to generate tests for MOS circuits at the switch level' Proc. Int. Test Conf. (1985) pp 304-312

23 Chandramouli, R and Sucar, H 'Defect analysis and fault modelling in MOS technology' Proc. Int. Test Conf. (1985) pp 313-321

24 Turner, M E, Leer, D G, Prilik, R J and McLean, D J 'Testing CMOS VLSI: Tools, concepts and experimental results' Proc. Int. Test Conf. (1985) pp 322-328

25 Woodhall, B W, Newman, B D and Sammuli, A G 'Empirical results on undetected CMOS stuck-open failures' Proc. Int. Test Conf. (1987) pp 166-170

26 Roth, J P, Oklobdzija, V G and Beetem, J F 'Test generation for FET switching circuits' Proc. Int. Test Conf. (1984) pp 59-62

27 Damper, R I and Burgess, N 'MOS test pattern generation using path algebras' IEEE Trans. Comput. Vol C-36 (1987)

28 Reddy, S M, Reddy, M K and Kuhl, J G 'On testable design for CMOS logic circuits' Proc. Int. Test Conf. (1983) pp 435-445

29 Moritz, P S and Thorsen, L M 'CMOS circuit testability' IEEE J. Solid State Circuits Vol 21 (1986) pp 306-309

30 Liu, D L and McCluskey, E J 'Designing CMOS circuits for switch-level testability' IEEE Des. Test Vol 4 No 4 (1987) pp 42-49

31 Malaiya, Y K and Su, S Y H 'A new fault model and testing technique for CMOS devices' Proc. Int. Test Conf. (1982) pp 25-34

32 Reddy, M K and Reddy, S M 'Detecting FET stuck-open faults in CMOS latches and flip-flops' IEEE Des. Test Vol 3 No 5 (October 1986) pp 17-26

33 McCluskey, E J and Clegg, F W 'Fault equivalence in combinational logic networks' IEEE Trans. Comput. Vol C-20 (1971) pp 1286-1293

34 Radke, C E 'Experiences in VLSI testing (reporting on an ITC invited address by E. Eichelberger)' IEEE Des. Test Vol 3 No 1 (1986) p 83

35 Roth, J P 'Diagnosis of automata failures - A calculus and a method' IBM J. R&D Vol 10 (1966) pp 278-291

36 Thomas, J J 'Automated diagnostic test programs for digital networks' Computer Design Vol 10 No 8 (1971) pp 63-67

37 Muth, P 'A nine-valued circuit model for test generation procedure for LSI functional testing' IEEE Trans. Comput. Vol C-25 (1976) pp 630-636

38 Goel, P 'An implicit enumeration algorithm to generate tests for combinational logic circuits' IEEE Trans. Comput. Vol C-30 (1981) pp 215-222

39 Fujiwara, H and Shimono, T 'On the acceleration of test generation algorithms' IEEE Trans. Comput. Vol C-32 (1983) pp 1137-1144

40 Motohara, A and Fujiwara, H 'Design for testability for complete test coverage' IEEE Des. Test Vol 1 No 4 (November 1984) pp 25-32

41 Funatsu, S and Kawai, M 'An automatic test- generation system for large digital circuits' IEEE Des. Test Vol 2 No 5 (October 1985) pp 54-60

42 Goel, P 'Test generation costs analysis and projections' Proc. IEEE Des. Auto. Conf. (1980) pp 77-84

43 Abramovici, M, Menon, P R and Miller, O T 'Critical path tracing: An alternative to fault simulation' IEEE Des. Test Vol 1 No 1 (February 1984) pp 83-93

44 Johansson, M 'The GENESYS algorithm for ATPG without fault simulation' Proc. Int. Test Conf. (1983) pp 333-337

45 Abramovici, M, Kulikowski, J J, Menon, P R and Miller, D T 'SMART and FAST: Test generation for VLSI scan-design circuits' IEEE Des. Test Vol 3 No 4 (August 1986) pp 43-54

46 Abramovici, M, Kulikowski, J J, Menon, P R and Miller, D T 'Test generation in LAMP2: System overview' Proc. Int. Test Conf. (1985) pp 45-48

47 Abramovici, M, Kulikowski, J J, Menon, P R and Miller, D T 'Test generation in LAMP2: Concepts and algorithms' Proc. Int. Test Conf. (1985) pp 49-56

48 Eichelberger, E B and Lindbloom, E 'Random pattern coverage enhancement and diagnosis for LSSD logic self-test' IBM J. R&D Vol 27 (1983) pp 265-272

49 Savir, J, Ditlow, G S and Bardell, P H 'Random pattern testability' IEEE Trans. Comput. Vol C-33 (1984) pp 79-90

50 Goel, P 'RAPS (Random Path Sensitization) test pattern generator' IBM Tech. Discl. Bull. Vol 21 (1978) pp 2787-2791

51 Goel, P and Rosales, B 'PODEM-X: An automatic test generation system for VLSI logic structures' Proc. IEEE Des. Auto. Conf. (1981) pp 260-268

52 Bottorff, P 'Test generation and fault simulation' in Williams, T W (ed.) VLSI testing (Advances in CAD for VLSI, Vol 5) North-Holland, Amsterdam, The Netherlands (1986)

53 Hung, A C and Wang, F C 'A method for test generation directly from testability analysis' Proc. Int. Test Conf. (1985) pp 62-78

54 Levendel, Y H and Premachandran, R M 'A test generation algorithm for computer hardware description languages' IEEE Trans. Comput. Vol C-31 (1982) pp 577-588

55 Miczo, A 'Fault modelling for functional primitives' Proc. Int. Test Conf. (1982) pp 43-49

56 Singh, N An artificial intelligence approach to test generation Kluwer, Boston, MA, USA (1987)

57 Robinson, G D 'HITEST - Intelligent test generation' Proc. Int. Test Conf. (1983) pp 311-323

58 Bending, M I 'HITEST: A knowledge-based test generation system' IEEE Des. Test Vol 1 No 2 (May 1984) pp 83-93

59 Miczo, A 'The sequential ATPG: A theoretical limit' Proc. Int. Test Conf. (1983) pp 143-147

60 McCluskey, E J Logic design principles Prentice-Hall, Englewood Cliffs, NJ, USA (1986)

61 Russell, G and Sayers, I L 'Design for testability - A review of advanced methods' Microprocessors Microsyst. Vol 10 No 10 (December 1986) pp 531-539

62 Maling, K and Allen, E L 'A computer organisation and programming system for automated maintenance' IEEE Trans. Electron. Comput. Vol EC-12 (1963) pp 887-895

63 Carter, W C, Montgomery, H C, Preiss, R J and Reinheimer, H J 'Design of serviceability features for the IBM System/360' IBM J. R&D Vol 8 (1964) pp 115-126

64 Feugate, R J and McIntyre, S M Introduction to VLSI testing Prentice-Hall, Englewood Cliffs, NJ, USA (1988)

65 Mangin, C-H and McClelland, S Surface mount technology IFS Publications, Bedford, UK (1987)

66 Bateson, J In-circuit testing Van Nostrand Reinhold, Wokingham, UK (1985)

67 Cortner, J M Digital test engineering Wiley, NewYork, NY, USA (1987)

68 Raymond, D W 'In-circuit testability factors: Shoot with a rifle' Proc. Int. Test Conf. (1984) pp 572-580

69 Bullock, M 'Designing SMT boards for in-circuit testability' Proc. Int. Test Conf. (1987) pp 606-613

70 Kakami, T 'Testing of surface mount technology boards' Proc. Int. Test Conf. (1987) pp 614-620

71 Barnes, R M 'Fixturing for surface mounting devices' Proc. Int. Test Conf. (1983) pp 72-74

72 Sobotka, L 'The effect of backdriving digital circuits during in-circuit testing' Proc. Int. Test Conf. (1982) pp 269-277

73 Hill, G J, Roberts, B C and Strudwick, C P 'Safe operating limits for backdriving in ATE' Proc. Int. Test Conf. (1987) pp 789-804

74 Eichelberger, E B and Williams, T W 'A logic design structure for LSI testability' Proc. IEEE Des. Auto. Conf. (1977) pp 462-468

75 Turino, J 'A totally universal reset initialisation and nodal observation circuit' Proc. Int. Test Conf. (1984) pp 878-883

76 Bennetts, R G Design of testable logic circuits Addison-Wesley, Wokingham, UK (1984)

77 Wilkins, B R 'Hierarchical interconnection technology - An emerging standard for testable circuit structures' Proc. 20th ECSG Symp. (1987) pp 262-271

78 Goel, P and McMahon, M T 'Electronic chip-in-place test' Proc. Int. Test Conf. (1982) pp 83-90

79 Maunder, C 'Paving the way for testability standards' IEEE Des. Test Vol 3 No 4 (1986) p 65

80 Maunder, C and Beenker, F 'Boundary scan: A framework for structured design for test' Proc. Int. Test Conf. (1987) pp 714-723

81 Van der Lagemaat, D and Bleeker, H 'Testing a board with boundary scan' Proc. Int. Test Conf. (1987) pp 724-729

82 Wagner, P T 'Interconnect testing with boundary scan' Proc. Int. Test Conf. (1987) pp 52-57

83 Beenker, F P M 'Systematic and structured methods for digital board testing' Proc. Int. Test Conf. (1985) pp 380-385

Brian Wilkins graduated in electrical engineering from University College London, UK. After serving for several years in the Royal Navy, he joined the Anatomy Department in University College London as a research assistant working on various aspects of artificial intelligence. This work concentrated particularly on pattern recognition and was continued after he joined Southampton University in 1965. More recently he has turned his attention to problems associated with failure mechanisms, test methodologies and testability design of electronic circuits. He is a member of the IEEE Computer Society and a fellow of the UK Institution of Electrical Engineers.

584 Microprocessors and Microsystems