
4-5 Nanoelectronics Architectures

Ferdinand Peper, LEE Jia, ADACHI Susumu, ISOKAWA Teijiro, TAKADA Yousuke, MATSUI Nobuyuki, and MASHIKO Shinro

The ongoing miniaturization of electronics will eventually lead to logic devices and wires with feature sizes on the order of nanometers. These elements need to be organized in an architecture that suits the strict requirements ruling the nanoworld. Key issues to be addressed are (1) how to manufacture nanocircuits, given that current techniques like optical lithography will be impracticable at nanometer scales, (2) how to reduce the substantial heat dissipation associated with the high integration densities, and (3) how to deal with the errors that are certain to occur in the manufacturing and operation of nanoelectronics. In this paper we sketch our research efforts in designing architectures that meet these requirements.

Keywords  Nanoelectronics, Architecture, Cellular automaton, Heat dissipation, Fault-tolerance, Reconfigurable

1 Introduction

1.1 Background

The advances over the past decades in densities of Very Large Scale Integration (VLSI) chips have resulted in electronic systems with ever-increasing functionality and speed, bringing powerful information processing and communications applications into reach. It is expected that silicon-based CMOS technology can be extended up to the year 2015; to carry improvements beyond that year, however, technological breakthroughs will be needed. Though this is a major motivation for research into devices on nanometer scales, it is equally important to find out how such devices can be combined into larger systems. Systems based on nanodevices will require very different architectures (designs), due to the strict requirements of nanometer scales. Three major factors influencing the design of nanocircuits are ease of manufacturing, minimization of heat dissipation, and tolerance to errors.

1.2 Ease of Manufacturing

It will be difficult to manufacture circuits of nanodevices in the same way as VLSI chips, i.e., by optical lithography, because light has too large a wavelength to resolve details on nanometer scales. As a result, an alternative technique called self-assembly of nanocircuits is increasingly attracting the attention of researchers. The idea of this technique is to assemble circuits using the ability of molecules to interact with each other in such a way that certain desired patterns are formed, according to a process called self-organization. Though self-assembly is a promising technology in development, it comes with strings attached: the patterns that can be built tend to have very regular structures, like tiles in a bathroom. Consequently, nanocircuits constructed by self-assembly will likely need to have very regular structures.

1.3 Minimizing Heat Dissipation

The huge integration densities made possible by nanotechnology will have far-reaching consequences for the heat that can be dissipated by nanodevices. Even today, heat dissipation is a growing problem in VLSI chips. For nanoelectronics the situation is much worse: it is anticipated that the heat dissipated per unit of surface will surpass that of the sun if techniques similar to those in current VLSI are used. Most notably, the use of clock signals to synchronize the operations in circuits is a major source of heat dissipation. Each time clock signals are distributed over a circuit, energy needs to be pumped into wires, and much of this energy tends to get lost in the form of heat. One technique to cope with this problem is to remove the clock. Circuits not requiring synchronization by clock signals are called asynchronous. Roughly speaking, in such circuits devices are activated only when there is work for them to do, unlike in clocked circuits, in which most devices are constantly busy processing clock signals, even in the absence of useful work. Provided they are well-designed, asynchronous electronics can reduce power consumption and heat dissipation substantially.

Another technique to reduce power consumption and heat dissipation is reversibility of operation, as first claimed by Landauer [8]. The idea behind reversibility of a device is that the energy spent in the device's operation can be recaptured by undoing that operation (i.e., doing the operation in reverse) afterwards. Though this principle has been known since the early 1960s, it is hardly applied in practical circuits nowadays, perhaps because the gains it promises for VLSI do not justify the overhead in hardware it requires. For nanoelectronics the story may be different, however, since interactions on nanometer scales (for example, interactions between molecules or other particles) are typically reversible, and such interactions are of fundamental importance to the operation of nanodevices. Consequently, interest in reversibility has revived together with the nanotechnology boom.

1.4 Tolerance to Errors

Both the manufacturing and the operation of nanocircuits will suffer from the unreliability of interactions on nanometer scales. In manufacturing this manifests itself in defects: some devices will permanently fail to work and some of the interconnections between devices will not be correctly made. To cope with defects, an architecture needs to be defect-tolerant. In other words, it needs to have redundant (spare) devices and interconnections available that take over the tasks of defective ones. An architecture also needs to be reconfigurable, such that defective devices and interconnections can be made inaccessible in favor of redundant ones.

Errors occurring during the operation of nanocircuits will, unlike manufacturing defects, typically be transient. Transient errors occur only now and then, even in perfectly manufactured devices, due to factors such as thermal noise, signal noise, quantum mechanical effects, and radiation. The capability of architectures to cope with transient errors is called fault-tolerance. It can be provided by redundancy in a circuit, such that even when errors occur, they can be detected and corrected using the remaining (correct) information in the circuit. Error correcting codes are a well-known technique in this context, and they have been used extensively to achieve reliable performance in communications and in present-day memory circuits.

1.5 Nanoelectronics Architectures

The realization of nanoelectronics thus requires architectures with a regular structure that employ techniques like asynchronous timing or reversibility to minimize heat dissipation, while also offering tolerance to defects and transient errors. The architectures we have proposed for this purpose are based on so-called cellular automata: regular arrays of identical cells, each of which can conduct a very simple operation. This paper shows how cellular automata can be used as architectures for nanoelectronics.

Journal of the National Institute of Information and Communications Technology Vol.51 Nos.3/4 2004

2 Cellular Automata

Cellular automata were first proposed by von Neumann [15] with the aim of describing the biological process of self-reproduction using a simple mathematical formalism. The appeal of cellular automata is that they can model complex behavior, while only simple local interactions between individual neighboring cells take place. In this paper we use a cellular automaton that is specially designed with the implementation of nanoelectronic circuits in mind. This cellular automaton is a 2-dimensional array of identical cells, each of which has four neighboring cells, at its north, its east, its south, and its west. Each side of a cell has a memory of a few bits attached to it, as in Fig. 1.

The four memories at the sides of a cell are said to be associated with the cell. The value stored in a memory is called the state of the memory. A cell may change the states of the four memories associated with it according to an operation called a transition. The transitions a cell may undergo are expressed by a table of transition rules, each of which describes how a cell may change the states of the memories associated with it.

Figure 2 shows a typical transition rule: it consists of a left-hand side, which is the part left of the arrow, and a right-hand side. If the left-hand side matches the combination of states of the memories associated with a cell, the transition rule is said to apply to the cell. The states of the memories may then be replaced by the states in the right-hand side of the transition rule. Certain desired behavior of a cellular automaton can be obtained by setting its memories in proper states and defining transition rules that lead to state changes corresponding to this behavior. For example, one may design a cellular automaton that behaves in the same way as a certain digital electronic circuit.
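The lookup-and-replace character of a transition rule can be sketched in a few lines of Python. This is a minimal illustration of the formalism only; the state values used here are hypothetical placeholders, not the rules actually used in our cellular automaton.

```python
# Minimal sketch of the transition-rule formalism: a rule maps the states
# (n, e, s, w) of a cell's four memories to new states (n', e', s', w').
# The state values below are hypothetical placeholders.
RULES = {
    # (north, east, south, west) -> (north', east', south', west')
    (0, 0, "signal", 0): ("signal", 0, 0, 0),  # move a signal from the
}                                              # south side to the north side

def transition(memories):
    """Apply a matching rule to one cell's memory states; if no rule's
    left-hand side matches, the cell remains unchanged."""
    return RULES.get(memories, memories)

print(transition((0, 0, "signal", 0)))  # -> ('signal', 0, 0, 0)
print(transition((0, 0, 0, 0)))         # -> (0, 0, 0, 0), no rule applies
```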

Fig.1  Cellular automaton consisting of cells (the big squares with fat borders, one cell being shaded), each having access to four memories (the small squares). Shared by two cells, a memory stores a few bits of information.

Fig.2  Transition rule describing a transition the memories associated with a cell can undergo. If the memory states of a cell match the states n, e, s, and w in the left-hand side of the rule, the rule may be applied to the cell, as a result of which the states of the cell's memories are changed into the states n', e', s', and w', respectively, depicted in the right-hand side.

Cellular automata in which all cells undergo transitions simultaneously, in synchrony with a central clock signal, are called synchronous. They are the most widely studied type. When transitions of the cells occur at random times, independent of each other, we obtain asynchronous cellular automata, which is the type employed in this paper. In an asynchronous cellular automaton, a cell may undergo a transition when its memory states match the left-hand side of a transition rule; this transition is randomly timed. If there is no transition rule whose left-hand side matches a cell's combination of memory states, the cell remains unchanged. Though transitions of cells are timed randomly and independently of each other, they are subject to the condition that two neighboring cells never undergo transitions simultaneously. This ensures that two neighboring cells will not attempt to set the memory common to them to different states at the same time.

Figure 3(a) shows an example of a transition rule that moves the contents of a memory of two bits, one bit being 0 and the other 1, by one cell to the north. In a memory in this example, the two bits are represented by a white block and a black block, denoting a 0-bit and a 1-bit, respectively, and the bits are separated by a thin line. When this transition rule is applied to the configuration on the left of Fig. 3(b), a 01-bit pattern in a memory is moved one cell to the north (Fig. 3(b), middle configuration). Applying the transition rule once more moves the bit pattern one more cell to the north (right side of Fig. 3(b)). Since this 01-bit pattern steadily moves to the north over time, it can be used to transfer information from the south to the north in the cellular automaton. For this reason, this pattern is called a signal. Together, the cells over which the signal can move form a so-called path. Rotated or reflected versions of a rule may also be used for transitions. This allows the above transition rule to be used for transmitting signals in directions towards the south, east, or west as well.
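The north-moving signal of Fig. 3 and the random, asynchronous timing of transitions can be simulated along the following lines. This is our own simplified one-column model, not the full two-dimensional rule set of the paper.

```python
import random

# Simplified one-column model of the asynchronous cellular automaton:
# memories[i] and memories[i + 1] are the south and north side memories of
# cell i. A signal is the 2-bit pattern (0, 1); an empty memory is (0, 0).
SIGNAL, EMPTY = (0, 1), (0, 0)

def step(memories):
    """Let one randomly chosen cell attempt a transition: the rule of
    Fig. 3(a) moves a signal on the cell's south side to its north side."""
    i = random.randrange(len(memories) - 1)
    if memories[i] == SIGNAL and memories[i + 1] == EMPTY:
        memories[i], memories[i + 1] = EMPTY, SIGNAL

memories = [SIGNAL] + [EMPTY] * 4  # a signal enters at the south end
while memories[-1] != SIGNAL:      # transitions are randomly timed, yet the
    step(memories)                 # signal still reaches the north end
print(memories)                    # -> [(0, 0), (0, 0), (0, 0), (0, 0), (0, 1)]
```

Because only one signal is present, the condition that neighboring cells never fire simultaneously is trivially satisfied here; in larger circuits it is delay-insensitivity, discussed next, that makes such random timing harmless.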

3 Operations

The cellular automaton described up to this point is extremely simple: it can only transmit signals along straight paths. To do useful work with the cellular automaton, we need a way to operate on signals. Operating on signals is more difficult than just transmitting them, because it may involve multiple signals. For example, we may have an operation that requires one signal as input and produces two output signals as a result. Such an operation is called a fork, because a fork (the variety used for eating) has one bar at one end, reflecting the input side, and multiple bars at the other end, reflecting the output side. The opposite of a fork is an operation called a join. A join receives two signals as input and produces one signal as output. If a join receives only one signal as input, it just waits until the second signal arrives before taking any action. How long should a join operator wait for a second signal? This question addresses an important point for the operations we use in our cellular automaton. Our answer is: we allow a join to wait for a second signal as long as necessary. Stronger still, we allow any operator to wait for any input signal at least as long as is necessary for that signal to arrive. In other words, an arbitrary delay experienced by a signal is supposed not to alter the logical outcome of the operations conducted on the signal. A circuit in which delays of signals do not alter the logical outcome of circuit operations is called delay-insensitive (DI). DI circuits are an important class of asynchronous circuits. To operate correctly, such circuits do not require a central clock sending out clock signals. Rather, DI circuits are driven by data signals: each circuit operator is inactive until it receives an appropriate set of input signals, after which it processes the signals, outputs appropriate signals, and becomes inactive again. Operators in DI circuits are similar to the operators in normal digital circuits, like AND-gates and NOT-gates, except that DI operators need no clock signals to ensure their inputs and outputs are properly timed. DI circuits are thus promising candidates for reducing heat dissipation in physical realizations.

Fig.3  (a) Transition rule for signal propagation, and (b) its application twice to a configuration of cells, each time giving rise to a signal successively moving one cell towards the north. Here a memory contains two bits, each of which is indicated by a block that is shaded for the value 1 and white for the value 0. The transition rule operates on all bits in the four memories associated with a cell.

To implement a fork operator on our cellular automaton, we employ a cell of which one of the memories contains two 1-bits and the other three memories contain all 0-bits.

The transition rule in Fig. 4 is able to operate on such a fork cell after the cell receives an input signal, as in Fig. 5, and this results in two output signals. Unfortunately, a fork alone is insufficient to create arbitrary circuits: other operators are necessary, like the join operator described above. It turns out that arbitrary valid DI circuits can be constructed using only a small set of operators. Such a set is called universal, and it can be as small as three operators, as shown in [11] (see also [10]). For reasons of space we will not go into details, other than mentioning that this set can be implemented on our cellular automaton using only four more transition rules [16] in addition to the two rules in Fig. 3(a) and Fig. 4. To see how actual circuits can be made on our cellular automaton, like a 1-bit memory or a DI AND-gate, we refer to [16]. Implementation of other, higher-level circuits and program structures, like counters, for-loops, etc., is discussed in [17].
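The input/output behavior of the fork and join operators described above can be sketched as simple event-driven primitives. This is our own illustrative model of the operators' behavior, not their cellular-automaton encoding.

```python
# Illustrative model of two delay-insensitive primitives: an operator stays
# inactive until its required input signals have arrived, whatever their
# delays, so arbitrary signal delays cannot change the logical outcome.

class Fork:
    """One input signal in, two output signals out."""
    def receive(self, signal):
        return [signal, signal]            # duplicate onto two output paths

class Join:
    """Two input signals in, one output signal out; one signal just waits."""
    def __init__(self):
        self.pending = None
    def receive(self, signal):
        if self.pending is None:
            self.pending = signal          # first arrival: wait, no output
            return []
        first, self.pending = self.pending, None
        return [(first, signal)]           # second arrival: emit one output

join = Join()
print(join.receive("a"))    # -> [] (waits as long as necessary)
print(join.receive("b"))    # -> [('a', 'b')]
print(Fork().receive("x"))  # -> ['x', 'x']
```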

Fig.4  Transition rule defining the operation of a fork. The fork is represented by a memory of which both bits are 1.

Fig.5  Sequence of operations in which a fork receives one input signal on its input path from the west, after which it produces one signal to each of its output paths, i.e., to the north and the south. Each time a transition rule is used, its label appears above the corresponding arrow, whereby S denotes the transition rule for signal propagation and F that for the fork.

4 Reconfigurability

Reconfigurability of a nanoelectronics architecture is its ability to reorganize its devices and the interconnections between them according to some functional specification given by a user. A cellular automaton used as an architecture may, for example, conduct a pattern recognition task at one time, a communications task at another time, and at still another time some number-crunching task like deciphering an encrypted message. Given that a cellular automaton has a very homogeneous structure with all identical simple cells, how can it be made to do such a wide variety of tasks? In the previous section we have seen that the functionality of each cell can be determined by setting the memories associated with it to appropriate states. Taken together, a configuration of cells with memories in certain states thus forms a DI circuit with a certain functionality. While this explains how a cellular automaton can have different functionalities depending on the states of its memories, it leaves open the question as to how the memories can be set to the appropriate states that give rise to a certain desired functionality, and later be reset to different states reflecting a different functionality. This problem boils down to moving a certain pattern of information to a certain location in the cellular automaton. In this case, the pattern of information is a configuration of cells that represents a DI circuit layout. How can this problem be addressed?

Fortunately, the field of cellular automata has a rich history, and one of its major achievements is the implementation of self-reproduction, which is the ability to copy and move certain patterns of information stored in the cells' memories to other cells' memories. It thus closely resembles reconfigurability. As we pointed out in Section 2, self-reproduction was the defining problem that started the field of cellular automata. The method used by von Neumann to implement self-reproduction on cellular automata [15], though, is extremely complicated, to the point that even its mere simulation on modern-day computers is difficult in terms of memory resources and computation time. Successors of this research, however, like [2][3][19], have achieved much simpler self-reproducing cellular automata. To investigate the potential of our cellular automaton for self-reproduction, we have implemented self-reproduction on a simplified version of our cellular automaton [20][21]. Note that our cellular automaton differs from the cellular automata used in the past for self-reproduction [2][3][15][19], in the sense that it works asynchronously. This makes it much more difficult to control the timing by which cells undergo transitions, and consequently, it complicates implementations of self-reproduction.

Our implementation [20][21] uses the same basic idea as von Neumann's. That is, it is based on a so-called Turing machine, which is a very simple model of a computer. A Turing machine consists of two parts: a control mechanism and a tape unit. The control mechanism does all the "logical work" and can be described in terms of DI circuits, so it is easy to implement on our cellular automaton, using the same six transition rules as are necessary for implementing DI circuits. The second part of a Turing machine, i.e., the tape unit, is a kind of memory that stores the machine's program, data, and temporary results. The tape unit is more complicated than the control mechanism: it requires a head that moves about the tape and that reads and writes information from and to tape cells. To do self-reproduction by a Turing machine, one additional mechanism is needed. Called a construction arm, this mechanism can be extended towards possibly far-away cells to copy information to their memories, and it can be withdrawn afterwards. Unfortunately, the implementation of a construction arm on our cellular automaton requires additional transition rules, making the cellular automaton more complicated. The same holds true for the tape unit. Fortunately, the tape unit is very similar to the construction arm: it turns out that they can both be made using the same transition rules, giving a total of 39 transition rules [20][21]. Though this is much more than the six rules required for merely implementing DI circuits, we believe that the number of rules can be reduced; this is left as future research.

5 Tolerance to Errors

Reconfigurability not only endows a cellular automaton with a flexible functionality, but also makes it more tolerant to defects, because it gives the freedom to use only non-defective cells when configuring DI circuits on the cellular automaton. Well-known results on defect tolerance have been obtained with the Teramac [4], which is a parallel computer based on field programmable gate arrays (FPGAs). The Teramac is able to achieve high-performance computing even if a significant number of its components are defective. However, the Teramac requires a master computer to set up a table of routes around defects and to configure the hardware accordingly, so it is not autonomous. Especially with defects that occur after manufacturing, i.e., during operation, it would need a master computer to be connected permanently to it to keep the routing table error-free. A master computer based on nanoelectronics, however, would suffer from defects too, so this solution is not self-contained. We thus prefer a method in which scanning for defects and hardware reconfiguration are done by the nanoelectronics hardware itself rather than by a master computer. We have conducted some preliminary research for this purpose [7], by designing a cellular automaton in which cells follow a certain sequence of states over time. As defective cells typically fail to change states, they can be identified in this way. Defective cells are then "isolated" by marking their non-defective neighbors with states that are exclusively used for this purpose. This allows DI circuits to be configured so as to avoid marked cells.

When errors are transient, other strategies than the above are needed. In [16] we have constructed a fault-tolerant version of the cellular automaton of Sections 2 and 3 by encoding the memories associated with the cells with error correcting codes. The central idea of error correcting codes is to include redundant information in the encoding, and to use this information to reconstruct the original contents of a memory in case some of the memory bits become erroneous.

Figure 6 shows one such encoding: the original four values of a 2-bit memory in the bottom row, i.e., the values 00, 01, 10, and 11, are encoded by the four values in the top row, which are expressed by 14 bits each. So, to make a memory fault-tolerant, we need 14 bits instead of the original two bits. The expense of seven times more bits in a memory gives us in return the ability to correct up to four bit errors. For example, flipping four arbitrary bits in one of the memories in the top row of Fig. 6 from 0 to 1 or the other way around gives a combination of bit values that more closely resembles the original than any of the other three memories in the top row (see Fig. 7). So, even with four bit errors, we can deduce what the original contents of a memory should have been.

As is pointed out in [16], most 5-bit errors and some 6-bit errors can also be corrected in these 14-bit memories. Our scheme is flexible in the sense that it can be made tolerant to more (or fewer) bit errors by expending more (or, respectively, fewer) bits in each memory. The idea of our error correcting scheme is to block transitions on a cell as long as at least one of the memories associated with it contains at least one error. Errors in each memory are corrected at random times; when all memories associated with a cell become error-free, the precondition is created for a transition to take place on the cell. In other words, operations of cells take place only once all the errors in their memories are corrected. We refer to [16] for more details of this scheme, and to [6] for a similar scheme based on a different type of error correcting code.
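The nearest-codeword decoding of Fig. 7 can be sketched as follows. Since the actual 14-bit patterns of Fig. 6 are shown only graphically, the codewords below are hypothetical stand-ins with the same key property: any two of them differ in at least 9 bits, which is what allows up to 4 bit errors to be corrected.

```python
# Hypothetical 14-bit codewords for the four 2-bit memory values (the actual
# patterns of Fig. 6 are depicted only graphically); any two codewords differ
# in at least 9 bits, so up to 4 bit errors can be corrected.
CODE = {
    "00": "00000000000000",
    "01": "11111111100000",
    "10": "11110000011111",
    "11": "00001111111111",
}

def hamming(a, b):
    """Number of bit positions in which two 14-bit words differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(word):
    """Correct a (possibly erroneous) 14-bit memory by picking the codeword
    that differs from it in the fewest bits, as in Fig. 7."""
    return min(CODE, key=lambda m: hamming(CODE[m], word))

# Flip 4 arbitrary bits of the codeword for "10" ...
corrupted = list(CODE["10"])
for i in (0, 3, 7, 12):
    corrupted[i] = "1" if corrupted[i] == "0" else "0"
# ... the nearest-codeword rule still recovers the original value.
assert decode("".join(corrupted)) == "10"
```

With a minimum distance of 9, a word with 4 flipped bits is at distance 4 from its original codeword but at distance at least 5 from every other codeword, so the nearest codeword is always unambiguous.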

Fig.6  Graphical notation of an error correcting code. The bottom row contains the original 2-bit memories; the top row contains the corresponding memories that are encoded by 14 bits each. A dark block denotes a bit with value 1, a white block a bit with value 0.

6 Discussion

The construction of nanoelectronics circuits requires serious consideration of the architectures that are to be used. To facilitate manufacturability, architectures will need to be regular. Additionally, the reduction of heat dissipation is important, as well as tolerance to errors during manufacturing and operation. Our results achieved thus far suggest that cellular automata form an attractive basis to satisfy these demands. With respect to heat dissipation we have investigated cellular automaton models with an asynchronous mode of operation. The randomness by which transitions on cells take place eliminates the need for distributing a central clock signal over the cells. Though this has the potential to reduce heat dissipation substantially, it also complicates the operation of a cellular automaton. We have developed a method to cope with this: to simplify operation, we configure the cells such that they implement delay-insensitive circuits. As compared to past methods [14], this is a remarkably effective way to coordinate cells' activities on local scales, while allowing them to be independent of each other on global scales. Our method can be applied to a wide range of asynchronous cellular automaton types, for example as in [1][9].

We have also investigated an alternative way to reduce heat dissipation, namely reversibility of circuit operation, but we do not report extensively on it in this paper, other than mentioning that we have designed asynchronous cellular automata [12] and delay-insensitive circuits [13] that are reversible. Though these designs are not very efficient in terms of computational power, this research has deepened our understanding of the nature of reversible operation without the use of a central clock, and it has the potential to deliver some unexpected results. Moreover, since reversible operation is a precondition for quantum computing, our research may also have some impact on that field.

Fig.7  Occurrence and correction of four bit errors in a 14-bit memory. The four correct values of a 14-bit memory in the top row differ from the erroneous memory in 5, 10, 4, and 9 bits, respectively. So, the corrected value of the erroneous memory is most likely the value that differs from it in only 4 bits.

The feasibility of our cellular automaton approach will be decided by how efficiently individual cells can be manufactured and put together into regular structures. One promising way to achieve efficient manufacturing is the so-called Nanocell approach [5]. In this approach, the internals of a cell consist of nanodevices connected in a random network of nanowires. To achieve a certain desired functionality, the cell is subjected to appropriate signals at its inputs and outputs that burn certain pathways in the network, according to a process that resembles neural network training. Once manufactured and trained, cells may be put together by using DNA tiling techniques like those advanced by Seeman et al. [18]. Though many new techniques important to physical implementations may emerge in the coming decade, it seems clear at this point that a key issue will be making cells as simple as possible, i.e., making them operate with a minimal set of transition rules. For this purpose, simpler delay-insensitive circuits and simpler self-reproduction techniques need to be designed, and this is an ongoing effort in our research.

What can we eventually expect from nanoelectronics? The high integration densities that become possible have the potential for substantial performance gains as compared to current VLSI. Most of these gains will come from the huge number of available devices, though increased switching speeds of devices may also contribute. Applications that can effectively utilize such huge numbers of devices will be the main beneficiaries of nanoelectronics. We are thinking of applications that can be subdivided into many independent subproblems that can be processed in parallel. Typical in this context are Artificial Intelligence (AI) problems, search problems, optimization problems, and many pattern recognition problems. Portability of applications will be greatly facilitated by nanoelectronics' limited heat dissipation, since the latter implies low power consumption. One can thus envision devices that do not need recharging other than occasionally by solar cells. Such devices can be carried continuously by their users, keeping them wirelessly connected to the world. Due to their powerful abilities, these devices will be indispensable in augmenting the intellectual abilities of their users, for example with respect to retrieving faces, recognizing speech, etc. Other applications that may be made possible by nanoelectronics are networks on sub-millimeter scales that connect minuscule sensors, actuators, and communication hubs that interface with the outside world. Such networks may find use in the control of minuscule environments.

Ferdinand Peper et al. 137

References
1 S. Adachi, F. Peper, and J. Lee, "Computation by asynchronously updating cellular automata", Journal of Statistical Physics, Vol. 114, Nos. 1/2, pp. 261-289, 2004.
2 E.R. Banks, "Universality in cellular automata", Proc. IEEE 11th Annual Symposium on Switching and Automata Theory, pp. 194-215, 1970.
3 E.F. Codd, Cellular Automata, Academic Press, New York, 1968.
4 J.R. Heath, P.J. Kuekes, G.S. Snider, and R.S. Williams, "A defect-tolerant computer architecture: opportunities for nanotechnology", Science, Vol. 280, pp. 1716-1721, 1998.
5 C.P. Husband, S.M. Husband, J.S. Daniels, and J.M. Tour, "Logic and memory with Nanocell circuits", IEEE Transactions on Electron Devices, Vol. 50, No. 9, pp. 1865-1875, 2003.
6 T. Isokawa, F. Abo, F. Peper, S. Adachi, J. Lee, N. Matsui, and S. Mashiko, "Fault-tolerant nanocomputers based on asynchronous cellular automata", International Journal of Modern Physics C, in press, 2004.
7 T. Isokawa, F. Abo, F. Peper, N. Kamiura, and N. Matsui, "Defect-tolerant computing based on an asynchronous cellular automaton", Proc. SICE Annual Conference, Fukui, Japan, pp. 1746-1749, 2003.
8 R. Landauer, "Irreversibility and heat generation in the computing process", IBM Journal of Research and Development, Vol. 5, pp. 183-191, 1961.
9 J. Lee, S. Adachi, F. Peper, and K. Morita, "Embedding universal delay-insensitive circuits in asynchronous cellular spaces", Fundamenta Informaticae, Vol. 58, Nos. 3/4, pp. 295-320, 2003.
10 J. Lee, F. Peper, S. Adachi, and K. Morita, "Universal delay-insensitive circuits with bi-directional and buffering lines", IEEE Transactions on Computers, Vol. 53, No. 8, pp. 1034-1046, 2004.

138 Journal of the National Institute of Information and Communications Technology Vol.51 Nos.3/4 2004

11 J. Lee, F. Peper, S. Adachi, and S. Mashiko, "Universal delay-insensitive systems with buffering lines", IEEE Transactions on Circuits and Systems I, in press, 2004.
12 J. Lee, F. Peper, S. Adachi, K. Morita, and S. Mashiko, "Reversible computation in asynchronous cellular automata", Unconventional Models of Computation, Lecture Notes in Computer Science, Vol. 2509, pp. 220-229, 2002.
13 J. Lee, F. Peper, S. Adachi, and S. Mashiko, "On reversible computation in asynchronous systems", in: Quantum Information and Complexity: Proceedings of the 2003 Meijo Winter School and Conference, T. Hida (Ed.), World Scientific, in press, 2004.
14 K. Nakamura, "Asynchronous cellular automata and their computational ability", Systems, Computers, Controls, Vol. 5, pp. 58-66, 1974.
15 J. von Neumann, Theory of Self-Reproducing Automata, University of Illinois Press, 1966.
16 F. Peper, J. Lee, F. Abo, T. Isokawa, S. Adachi, N. Matsui, and S. Mashiko, "Fault-tolerance in nanocomputers: a cellular array approach", IEEE Transactions on Nanotechnology, Vol. 3, No. 1, pp. 187-201, 2004.
17 F. Peper, J. Lee, S. Adachi, and S. Mashiko, "Laying out circuits on asynchronous cellular arrays: a step towards feasible nanocomputers?", Nanotechnology, Vol. 14, pp. 469-485, 2003.
18 N.C. Seeman, "Nanotechnology and the double helix", Scientific American, Vol. 290, No. 6, pp. 64-75, June 2004.
19 T. Serizawa, "Three-state Neumann neighbor cellular automata capable of constructing self-reproducing machines", Systems and Computers in Japan, Vol. 18, pp. 33-40, 1986.
20 Y. Takada, T. Isokawa, F. Peper, and N. Matsui, "Universal construction and self-reproduction on self-timed cellular automata", International Journal of Modern Physics C, accepted, 2005.
21 Y. Takada, T. Isokawa, F. Peper, and N. Matsui, "Universal construction on self-timed cellular automata", Cellular Automata for Research and Industry, Lecture Notes in Computer Science, Vol. 3305, pp. 21-30, 2004.


Ferdinand Peper, Ph.D.

Senior Researcher, Nanotechnology Group, Kansai Advanced Research Center, Basic and Advanced Research Department

Nanotechnology, Computer Science

LEE Jia, Ph.D.

Expert Researcher, Nanotechnology Group, Kansai Advanced Research Center, Basic and Advanced Research Department

Computer Science, Nanotechnology

ADACHI Susumu, Ph.D.

Expert Researcher, Nanotechnology Group, Kansai Advanced Research Center, Basic and Advanced Research Department

Computer Science, Nanotechnology

ISOKAWA Teijiro, Dr.Eng.

Research Associate, Division of Computer Engineering, Graduate School of Engineering, University of Hyogo

Computer Science

TAKADA Yousuke

Student, Division of Computer Engineering, Graduate School of Engineering, University of Hyogo

Computer Science

MASHIKO Shinro, Dr.Eng.

Director of Kansai Advanced Research Center, Basic and Advanced Research Department

Laser Spectroscopy, Nanotechnology

MATSUI Nobuyuki, Dr.Eng.

Professor, Division of Computer Engineering, Graduate School of Engineering, University of Hyogo

Computer Science, Physics
