

Good Ideas, Through the Looking Glass

Niklaus Wirth

Given that thorough self-critique is the hallmark of any subject claiming to be a science, computing science cannot help but benefit from a retrospective analysis and evaluation.

Computing’s history has been driven by many good and original ideas, but a few turned out to be less brilliant than they first appeared. In many cases, changes in the technological environment reduced their importance. Often, commercial factors also influenced a good idea’s importance. Some ideas simply turned out to be less effective and glorious when reviewed in retrospect or after proper analysis. Others were reincarnations of ideas invented earlier and then forgotten, perhaps because they were ahead of their time, perhaps because they had not exemplified current fashions and trends. And some ideas were reinvented, although they had already been found wanting in their first incarnation.

This led me to the idea of collecting good ideas that looked less than brilliant in retrospect. A recent talk by Charles Thacker about obsolete ideas—those that deteriorate by aging—motivated this work. I also rediscovered an article by Don Knuth titled “The Dangers of Computer-Science Theory.” Thacker delivered his talk in far away China, Knuth from behind the Iron Curtain in Romania in 1970, both places safe from the damaging “Western critique.” Knuth’s document in particular, with its tongue-in-cheek attitude, encouraged me to write these stories.

HARDWARE TECHNOLOGY
Speed has always been computer engineers’ prevalent concern. Refining existing techniques has been one alley for pursuing this goal, looking for alternative solutions the other. Although presented as most promising, the following searches ultimately proved unsuccessful.

Magnetic bubble memory
Back when magnetic core memories dominated, the idea of magnetic bubble memory appeared. As usual, its advocates attached great hopes to this concept, planning for it to replace all kinds of mechanically rotating devices, the primary sources of troubles and unreliability. Although magnetic bubbles would still rotate in a magnetic field within a ferrite material, there would be no mechanically moving part. Like disks, they were a serial device, but rapid progress in disk technology made both the bubbles’ capacity and speed inferior, so developers quietly buried the idea after a few years’ research.

Cryogenics
Cryogenic devices offered a new technology that kept high hopes alive for decades, particularly in the supercomputer domain. These devices promised ultrahigh switching speeds, but the effort to operate large computing equipment at temperatures close to absolute zero proved prohibitive. The appearance of personal computers let cryogenic dreams either freeze or evaporate.

Tunnel diodes
Some developers proposed using tunnel diodes—so named because of their reliance on a quantum effect of electrons passing over an energy barrier without having the necessary energy—in place of transistors as switching and memory elements.

The tunnel diode has a peculiar characteristic with a negative segment. This lets it assume two stable states. A germanium device, the tunnel diode has no silicon-based counterpart. This made it work over only a relatively narrow temperature range. Silicon transistors became faster and cheaper at a rate that let researchers forget the tunnel diode.

COMPUTER ARCHITECTURE
This topic provides a rewarding area for finding good ideas. A fundamental issue has been representing numbers, particularly integers.

Representing numbers
Here, the key question has been the choice of the numbers’ base. Virtually all early computers featured base 10—a representation by decimal digits, just as everybody learned in school.

However, a binary representation with binary digits is clearly more economical. An integer n requires log10(n) decimal digits, but only log2(n) binary digits (bits). Because a decimal digit requires four bits, decimal representation requires about 20 percent more storage than binary, which shows the binary form’s clear advantage. Yet, developers retained the decimal representation for a long time, and it persists today in library module form. They did so because they continued to believe that all computations must be accurate.
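To spell out the arithmetic behind the 20 percent figure (my own gloss, not part of the original text): log2(n) = log10(n)/log10(2) ≈ 3.32 × log10(n) bits, whereas binary-coded decimal needs 4 × log10(n) bits; the ratio 4/3.32 ≈ 1.2, hence roughly 20 percent more storage for the decimal form.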

However, errors occur through rounding, after division for example. The effects of rounding can differ depending on the number representation, and a binary computer can yield different results than a decimal computer. Because financial transactions—where accuracy matters most—traditionally were computed by hand with decimal arithmetic, developers felt that computers should produce the same results in all cases—and thus commit the same errors.

Although the binary form will generally yield more accurate results, the decimal form remained the preferred option in financial applications, as a decimal result can easily be hand-checked if required.

Although perhaps understandable, this was clearly a conservative idea. Consider that until the advent of the IBM System 360 in 1964, which featured both binary and decimal arithmetic, manufacturers of large computers offered two lines of products: binary computers for their scientific customers and decimal computers for their commercial customers—a costly practice.

Early computers represented integers by their magnitude and a separate sign bit. In machines that relied on sequential addition, digit by digit, the sign had to be read first, so the system placed it at the low end. When bit-parallel processing became possible, the sign moved to the high end, again in analogy to the commonly used paper notation. However, using a sign-magnitude representation was a bad idea because addition requires different circuits for positive and negative numbers.

Representing negative integers by their complement evidently provided a far superior solution, because the same circuit could now handle both addition and subtraction. Some designers chose 1’s complement, where −n was obtained from n by simply inverting all bits. Some chose 2’s complement, where −n is obtained by inverting all bits and then adding 1. The former has the drawback of featuring two forms for zero (0…0 and 1…1). This is nasty, particularly if available comparison instructions are inadequate.

For example, the CDC 6000 computers had an instruction that tested for zero, recognizing both forms correctly, but also an instruction that tested the sign bit only, classifying 1…1 as a negative number, making comparisons unnecessarily complicated. This case of inadequate design reveals 1’s complement as a bad idea. Today, all computers use 2’s complement arithmetic. The different forms are shown in Table 1.

Table 1. Using 2’s complement arithmetic to avoid ambiguous results.

Decimal   2’s complement   1’s complement
 2        010              010
 1        001              001
 0        000              000 or 111
-1        111              110
-2        110              101
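As a small illustration of my own (not from the article), the following C fragment computes both negations on 8-bit patterns and exposes the second zero of 1’s complement:

#include <stdio.h>
#include <stdint.h>

/* Negation in 1's and 2's complement on 8-bit patterns, showing the two
   zeros of 1's complement that complicated comparisons on machines like
   the CDC 6000. */
static uint8_t ones_complement_neg(uint8_t x) { return (uint8_t)~x; }
static uint8_t twos_complement_neg(uint8_t x) { return (uint8_t)(~x + 1); }

int main(void) {
    uint8_t zero = 0;
    printf("1's complement of +0: 0x%02X\n", ones_complement_neg(zero)); /* 0xFF: a second zero     */
    printf("2's complement of  0: 0x%02X\n", twos_complement_neg(zero)); /* 0x00: zero stays unique */
    printf("2's complement of  5: 0x%02X\n", twos_complement_neg(5));    /* 0xFB, i.e. -5           */
    return 0;
}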

Fixed or floating-point forms can represent numbers with fractional parts. Today, hardware usually features floating-point arithmetic, that is, a representation of a number x by two integers, an exponent e and a mantissa m, such that x = B^e × m.

For some time, developers argued about which exponent base B they should choose. The Burroughs B5000 introduced B = 8, and the IBM 360 used B = 16, both in 1964, in contrast to the conventional B = 2. The intention was to save space through a smaller exponent range and to accelerate normalization, because shifts occur in larger steps of 3- or 4-bit positions only.

This, however, turned out to be a bad idea, as it aggravated the effects of rounding. As a consequence, it was possible to find values x and y for the IBM 360 such that, for some small, positive ε, (x+ε) × (y+ε) < x × y. Multiplication had lost its monotonicity. Such a multiplication is unreliable and potentially dangerous.

Data addressing
The earliest computers’ instructions consisted simply of an operation code and an absolute address or literal value as the parameter. This made self-modification by the program unavoidable. For example, if numbers stored in consecutive memory cells had to be added in a loop, the program had to modify the address of the add instruction in each step by adding 1 to it.
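The following toy stored-program machine is purely my own sketch of the idea (no real instruction set is implied): instructions and data share one memory array, and the loop sums five cells by adding 1 to the address field of its own ADD instruction.

/* Toy stored-program machine: mem[i] holds an operation code, mem[i+1] its
   address/operand field.  The INCA instruction rewrites the address field of
   the ADD instruction at mem[0..1] - the self-modification described above. */
#include <stdio.h>

enum { ADD = 1, INCA = 2, JUMPLT = 3, HALT = 4 };

int main(void) {
    int mem[32] = {
        ADD,    20,   /* 0: acc := acc + mem[20]                          */
        INCA,   1,    /* 2: mem[1] := mem[1] + 1 (modify ADD's address)   */
        JUMPLT, 25,   /* 4: if mem[1] < 25 then goto 0                    */
        HALT,   0,    /* 6: stop                                          */
    };
    mem[20] = 3; mem[21] = 1; mem[22] = 4; mem[23] = 1; mem[24] = 5;  /* the data */

    int pc = 0, acc = 0;
    for (;;) {
        int op = mem[pc], arg = mem[pc + 1];
        if      (op == ADD)    { acc += mem[arg];                 pc += 2; }
        else if (op == INCA)   { mem[1] += arg;                   pc += 2; }
        else if (op == JUMPLT) { pc = (mem[1] < arg) ? 0 : pc + 2; }
        else break;                                               /* HALT */
    }
    printf("%d\n", acc);   /* 14 */
    return 0;
}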

Although developers heralded the possibility of program modification at runtime as one of the great consequences of John von Neumann’s profound idea of storing program and data in the same memory, it quickly turned out to enable a dangerous technique and to constitute an unlimited source of pitfalls. Program code must remain untouched to avoid having the search for errors become a nightmare. Developers quickly recognized that program self-modification was an extremely bad idea.

They avoided these pitfalls by introducing another addressing mode that treated an address as a piece of variable data rather than as part of a program instruction, which would be better left untouched. The solution involved indirect addressing and modifying only the directly addressed address, a data word.

Although this feature removed the danger of program self-modification and remained common on most computers until the mid 1970s, it should be considered a questionable idea in retrospect. After all, it required two memory accesses for each data access, which caused considerable computation slowdown.

The “clever” idea of multilevel indirection made this situation worse. The data accessed would indicate with a bit whether the referenced word was the desired data or another—possibly again indirect—address. Such machines could be brought to a standstill by specifying a loop of indirect addresses.

The solution lay in introducing index registers. The value stored in an index register would be added to the address constant in the instruction. This required adding a few index registers and an adder to the arithmetic unit’s accumulator. The IBM 360 merged them all into a single register bank, as is now customary.

The CDC 6000 computers used a peculiar arrangement: Instructions directly referred to registers only, of which there were three banks: 60-bit data (X) registers, 18-bit address (A) registers, and 18-bit index (B) registers. Memory access was implicitly evoked by every reference to an A-register, whose value was modified by adding a B-register’s value. The odd thing was that references to A0-A5 implied fetching the addressed memory location into the corresponding X0-X5 register, whereas a reference to A6 or A7 implied storing X6 or X7.

Although this arrangement did not cause any great problems, we can retrospectively classify it as a mediocre idea because a register number determines the operation performed and thus the data transfer’s direction. Apart from this, the CDC 6000 featured several excellent ideas, primarily its simplicity. Although Seymour Cray designed it in 1962, well before the term was coined, the CDC 6000 can truly be called the first RISC machine.

The Burroughs B5000 machine introduced a more sophisticated addressing scheme—its descriptor scheme, primarily used to denote arrays. A so-called data descriptor was essentially an indirect address, but it also contained index bounds to be checked at access time.

Although automatic index checking was an excellent and almost visionary facility, the descriptor scheme proved a questionable idea because matrices (multidimensional arrays) required a descriptor of an array of descriptors, one for each row or column of the matrix. Every n-dimensional matrix access required an n-times indirection. The scheme evidently not only slowed access due to its indirection, it also required additional rows of descriptors. Nevertheless, Java’s designers adopted this idea in 1995, as did C#’s designers in 2000.
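A brief C sketch of the difference (an illustration of the idea, not of the B5000 itself): a matrix stored as a vector of row pointers, the software analogue of a descriptor of descriptors, versus a single contiguous block indexed arithmetically.

/* A 2-dimensional array as row descriptors (one extra indirection per
   access) versus one contiguous block with a computed index. */
#include <stdlib.h>

int main(void) {
    enum { ROWS = 3, COLS = 4 };

    /* Row-descriptor form: an array of pointers, one per row. */
    int **m = malloc(ROWS * sizeof *m);
    for (int i = 0; i < ROWS; i++)
        m[i] = calloc(COLS, sizeof **m);
    m[1][2] = 7;               /* two memory indirections: m, then m[1] */

    /* Contiguous form: one block, the index computed arithmetically. */
    int *flat = calloc(ROWS * COLS, sizeof *flat);
    flat[1 * COLS + 2] = 7;    /* one indirection, but no bounds information */

    for (int i = 0; i < ROWS; i++) free(m[i]);
    free(m); free(flat);
    return 0;
}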

Expression stacks
The Algol 60 language had a profound influence not only on the development of further programming languages, but also, to a more limited extent, on computer architecture. This should not be surprising given that language, compiler, and computer form an inextricable complex.

The evaluation of expressions could be of arbitrary complexity in Algol, with subexpressions being parenthesized and operators having their individual binding strengths. Results of subexpressions were stored temporarily. F.L. Bauer and E.W. Dijkstra independently proposed a scheme for evaluating arbitrary expressions. They noticed that when evaluating from left to right, obeying priority rules and parentheses, the last item stored is always the first to be needed. It therefore could conveniently be placed in a push-down list, or stack.

Implementing this simple strategy using a register bank was straightforward, with the addition of an implicit up/down counter holding the top register’s index. Such a stack reduced memory accesses and avoided explicitly identifying individual registers in the instructions. In short, stack computers seemed to be an excellent idea, and the English Electric KDF-9 and the Burroughs B5000 computers both implemented the scheme, although it obviously added to their hardware complexity.
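To make the Bauer/Dijkstra observation concrete, here is a minimal sketch of my own (in C rather than in hardware) that evaluates a left-to-right postfix form of (3 + 4) * 5 with a small pushdown stack.

/* Evaluating postfix "3 4 + 5 *" with a pushdown stack: the last item
   stored is always the first one needed. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    const char *postfix[] = { "3", "4", "+", "5", "*" };
    int stack[16], top = 0;                       /* top acts as the up/down counter */

    for (size_t i = 0; i < sizeof postfix / sizeof postfix[0]; i++) {
        const char *t = postfix[i];
        if (strcmp(t, "+") == 0 || strcmp(t, "*") == 0) {
            int b = stack[--top];                 /* pop the two most recent results */
            int a = stack[--top];
            stack[top++] = (*t == '+') ? a + b : a * b;
        } else {
            stack[top++] = atoi(t);               /* operand: push */
        }
    }
    printf("%d\n", stack[0]);                     /* 35 */
    return 0;
}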

Given that registers were expensive resources, the question of how deep the stack should be arose. After all, the B5000 used two registers only, along with an automatic pushdown into memory if more than two intermediate results required storage. This seemed reasonable. As Knuth had pointed out in an analysis of many Fortran programs, the overwhelming majority of expressions required only one or two registers.

Still, the idea of an expression stack proved questionable, particularly after the advent in the mid 1960s of architectures with register banks. These architectures sacrificed simplicity of compilation for any gain in execution speed. The stack organization restricted the use of a scarce resource to a fixed strategy. But sophisticated compilation algorithms use registers more economically, given the flexibility of specifying individual registers in each instruction.

Storing return addresses in the code
The subroutine jump instruction, invented by D. Wheeler, deposits the program counter value to be restored when the subroutine terminates. The challenge involves choosing the place to deposit the value. In several computers, particularly minicomputers but also the CDC 6000 mainframe, a jump instruction to location d would deposit the return address at d, then continue execution at location d+1:

mem[d] := PC+1; PC := d+1

This was a bad idea for at least two reasons. First, it prevented calling a subroutine recursively. Algol introduced recursion and caused much controversy because these procedure calls could no longer be handled in this simple way, given that a recursive call would overwrite the previous call’s return address. Hence, the return address had to be fetched from the fixed place dictated by the hardware and redeposited in a place unique to the recursive procedure’s particular incarnation. This overhead was unacceptable to many computer designers and users, forcing them to declare recursion undesirable, useless, and forbidden. They refused to acknowledge that the difficulty arose because of their inadequate call instruction.

Second, this solution proved a bad idea because it prevented multiprocessing. Given that program code and data were not kept separate, each concurrent process had to use its own copy of the code.

Some later hardware designs, notably the RISC architectures of the 1990s, accommodated recursive procedure calls by introducing specific registers dedicated to stack addressing and depositing return addresses relative to their value. Depositing the addresses in one of the general-purpose registers, presuming the availability of a register bank, is probably the best idea, because it leaves freedom of choice to the compiler designer while keeping the basic subroutine instruction as efficient as possible.

Virtual addressing
Like compiler designers, operating system designers had their own favorite ideas they wanted implemented. Such wishes appeared with the advent of multiprocessing and time-sharing, concepts that gave birth to operating systems in general.

The guiding idea was to use the processor optimally, switching it to another program as soon as the one under execution would be blocked by, for example, an input or output operation. The various programs were thereby executed in interleaved pieces, quasiconcurrently. As a consequence, requests for memory allocation and release occurred in an unpredictable, arbitrary sequence. Yet, individual programs were compiled under the premise of a linear address space, a contiguous memory block. Worse, physical memory would typically not be large enough to accommodate enough processes to make multiprocessing beneficial.

Developers found a clever solution to this dilemma—indirect addressing, hidden this time from the programmer. Memory would be subdivided into blocks or pages of fixed length, a power of 2. The system would use a page table to map a virtual address into a physical address. As a consequence, individual pages could be placed anywhere in store and, although spread out, would appear as a contiguous area. Even better, pages not finding their slot in memory could be placed in the backyard, on large disks. A bit in the respective page table entry would indicate whether the data was currently on disk or in main memory.
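A minimal sketch of the mapping, assuming a single-level page table and a 4-Kbyte page size (the numbers and names are illustrative, not those of any particular machine):

/* Translating a virtual address with a single-level page table: the low
   12 bits are the offset within the page, the upper bits select an entry. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12
#define PAGE_SIZE (1u << PAGE_BITS)
#define NUM_PAGES 16

typedef struct {
    int      present;   /* the bit telling whether the page is in memory or on disk */
    uint32_t frame;     /* physical frame number, valid only if present */
} PageTableEntry;

static PageTableEntry page_table[NUM_PAGES];

static int translate(uint32_t virtual_addr, uint32_t *physical_addr) {
    uint32_t page   = virtual_addr >> PAGE_BITS;
    uint32_t offset = virtual_addr & (PAGE_SIZE - 1);
    if (page >= NUM_PAGES || !page_table[page].present)
        return 0;                                    /* page fault: fetch from disk */
    *physical_addr = (page_table[page].frame << PAGE_BITS) | offset;
    return 1;
}

int main(void) {
    page_table[3].present = 1;
    page_table[3].frame   = 7;                       /* virtual page 3 lives in frame 7 */
    uint32_t phys;
    if (translate((3u << PAGE_BITS) | 0x2A, &phys))
        printf("physical address: 0x%X\n", phys);    /* 0x702A */
    return 0;
}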

This clever and complex scheme, useful in its time, posed some problems. It practically required all modern hardware to feature page tables and address mapping and to hide the cost of indirect addressing—not to mention disposing and restoring pages on disk at unpredictable moments—from the unsuspecting user.

Even today, most processors use page mapping, and most operating systems work in the multiuser mode. But this has become a questionable idea because semiconductor memories have become so large that mapping and outplacing are no longer beneficial. Yet, the overhead of the complex mechanism of indirect addressing remains with us.

Ironically, virtual addressing is still used for a purpose for which it was never intended: trapping references to nonexistent objects through NIL pointers. NIL is represented by 0, yet the page at address 0 is never allocated. This dirty trick misuses the heavy virtual addressing scheme and should have been solved straightforwardly.

Complex instruction sets
Early computers featured small sets of simple instructions because they operated with a minimal amount of expensive circuitry. With hardware becoming cheaper, the temptation rose to incorporate more complicated instructions such as conditional jumps with three targets, instructions that incremented, compared, and conditionally branched all in one, or complex move and translate operations.

With the advent of high-level languages, developers sought to accommodate certain language constructs with correspondingly tailored instructions such as Algol’s for statement or instructions for recursive procedure calls. Feature-tailored instructions were a smart idea because they contributed to code density, an important factor when memory was a scarce resource that consisted of 64 Kbytes or less.

This trend set in as early as 1963. The Burroughs B5000 machine not only accommodated many of Algol’s complicated features, it combined a scientific computer with a character-string machine and included two computers with different instruction sets. Such an extravagance had become possible with the technique of microprograms stored in fast, read-only memories. This feature also made the idea of a computer family feasible: The IBM Series 360 consisted of a set of computers, all with the same instruction set and architecture, at least from the programmer’s perspective. However, internally the individual machines differed vastly. The low-end machines were microprogrammed, the genuine hardware executing a short microprogram interpreting the instruction code. The high-end machines, however, implemented all instructions directly. This technology continued with single-chip microprocessors like Intel’s 8086, Motorola’s 68000, and National’s 32000.

The NS processor offers a fine example of a complex instruction set computer (CISC). Congealing frequent instruction patterns into a single instruction improved code density significantly and reduced memory accesses, increasing execution speed.

The NS processor accommodated, for example, the new concept of module and separate compilation with an appropriate call instruction. Code segments were linked when loaded, the compiler having provided tables with linking information. Minimizing the number of linking operations, which replace references to the link tables by absolute addresses, is certainly a good idea. The scheme, which simplifies the linker’s task, leads to the storage organization for every module, as Figure 1 shows.

[Figure 1. Module storage organization: registers MOD, PC, and SB; the module descriptor with its link table; the program code segment; and the static data segment.]

A dedicated register MOD points to module M’s descriptor, which contains the procedure P currently being executed. Register PC is the regular program counter. Register SB contains the address of M’s data segment, including M’s static, global variables. All these registers change their values whenever the system calls an external procedure. To speed up this process, the processor offers the call external procedure (CXP) in addition to the regular branch subroutine (BSR). A pair of corresponding returns, RXP and RTS, is also available.

Assume now that a procedure P in a module M is to be activated. The CXP instruction’s parameter d specifies the entry in the current link table. From this the system obtains the address of M’s descriptor and also the offset of P within M’s code segment. From the descriptor, the system then obtains M’s data segment and loads it into SB—all with a single, short instruction. However, what was gained in linking simplicity and code density must be paid for somewhere, namely by an increased number of indirect references and, incidentally, by additional hardware—the MOD and SB registers.
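The following C sketch is a loose, hypothetical model of the data the CXP instruction walks through: a link-table entry selects a module descriptor, which yields the module’s code and data segments, and the offset then locates procedure P. The structure and names are mine, chosen only to mirror the description above.

/* A hypothetical model of the CXP lookup path; nothing here is the actual
   NS32000 layout. */
#include <stdio.h>

typedef void (*Proc)(void);

typedef struct {
    Proc *code;                 /* the module's code segment, here a table of procedures */
    int  *data;                 /* the module's static data segment (what SB points to)  */
} ModuleDescriptor;

typedef struct {
    ModuleDescriptor *module;   /* which module the external reference names       */
    int               offset;   /* offset of procedure P within that module's code */
} LinkEntry;

static ModuleDescriptor *MOD;   /* the dedicated registers of Figure 1 */
static int              *SB;

static void call_external(LinkEntry *link_table, int d) {
    LinkEntry *e = &link_table[d];
    MOD = e->module;                 /* load the new module descriptor */
    SB  = e->module->data;           /* load the new static data base  */
    e->module->code[e->offset]();    /* branch to procedure P          */
}

static int  m_data[1] = { 42 };
static void P(void) { printf("P sees SB[0] = %d\n", SB[0]); }
static Proc m_code[] = { P };
static ModuleDescriptor M = { m_code, m_data };

int main(void) {
    LinkEntry links[] = { { &M, 0 } };   /* the current link table, entry d = 0 */
    call_external(links, 0);
    return 0;
}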

A second, similar example involves an instruction to check array bounds. It compared an array index against the array’s lower and upper bounds and caused a trap if the index did not lie within the bounds, thereby combining two comparison and two branch instructions in one.

Several years after our Oberon compiler had been built and released, new, faster versions of the processor appeared. They went with the trend of implementing frequent, simple instructions directly with hardware and letting an internal microcode interpret the complex instructions. As a result, those language-oriented instructions became rather slow compared to the simple operations. So I decided to program a new version of the compiler that refrained from using the sophisticated instructions.

This produced astonishing results. The new code executed considerably faster, making it apparent that the computer architect and the compiler designers had optimized in the wrong place.

Indeed, in the early 1980s advanced microprocessors began to compete with old mainframes, featuring complex and irregular instruction sets. These sets had become so complex that most programmers could use only a small fraction of them. Also, compilers selected only from a subset of the available instructions, a clear sign that hardware architects had gone overboard. Around 1990, the reaction became manifest in the form of reduced instruction set computers (RISC)—notably the ARM, MIPS, and Sparc architectures. They featured a small set of simple instructions, all executing in a single clock cycle, a single addressing mode, and a fairly large, single bank of registers—in short, a highly regular structure. These architectures debunked CISCs as a bad idea.

PROGRAMMING LANGUAGE FEATURES
Programming languages offer a fertile ground for controversial ideas. Some of these were not only questionable, but also known to be bad from the outset. The features proposed for Algol in 1960 and for some of its successors [1] provide an excellent example.

Most people consider a programming language merely as code with the sole purpose of constructing software for computers to run. However, a language is a computation model, and programs are formal texts amenable to mathematical reasoning. The model must be defined so that its semantics are delineated without reference to an underlying mechanism, be it physical or abstract.

This makes explaining a complex set of features and facilities in large volumes of manuals appear as a patently bad idea. Actually, a language is characterized not so much by what it lets us program as by what it keeps us from expressing. As E.W. Dijkstra observed, the programmer’s most difficult, daily task is to not mess things up. The first and most noble duty of a language is thus to help in this eternal struggle.

Notation and syntax
It has become fashionable to regard notation as a secondary issue depending purely on personal taste. This could partly be true; yet the choice of notation should not be considered arbitrary. It has consequences and reveals the language’s character.

Choosing the equal sign to denote assignment is one notoriously bad example that goes back to Fortran in 1957 and has been copied blindly by armies of language designers since. This bad idea overthrows a century-old tradition to let = denote a comparison for equality, a predicate that is either true or false. But Fortran made this symbol mean assignment, the enforcing of equality. In this case, the operands are on unequal footing: The left operand, a variable, is to be made equal to the right operand, an expression. Thus, x = y does not mean the same thing as y = x. Algol corrected this mistake with a simple solution: Let assignment be denoted by :=.

This might appear as nitpicking to programmers accustomed to the equal sign meaning assignment. But mixing up assignment and comparison truly is a bad idea because it requires using another symbol for what the equal sign traditionally expresses. Comparison for equality became denoted by the two characters == (first in C). This is an ugly consequence that gave rise to similar bad ideas using ++, --, &&, and so on.

Some of these operators exert side effects in C, C++, Java, and C#, a notorious source of programming mistakes. It might be acceptable to, for example, let ++ denote incrementation by 1, if it would not also denote the incremented value, thereby allowing expressions with side effects. The trouble lies in the elimination of the fundamental distinction between statement and expression. The former is an instruction to be executed, the latter a value to be computed and evaluated.

The ugliness of a construct usually appears in combination with other language features. In C, a programmer might write x+++++y, a riddle rather than an expression, and a challenge for even a sophisticated parser. Is its value equal to ++x+++y+1? Or is the following correct?

x+++++y+1 == ++x+++y
x+++y++ == x+++++y+1

We might as well postulate a new algebra. I find absolutely surprising the equanimity with which the programmer community worldwide has accepted this notational monster. In 1962, the postulating of operators being right-associative in the APL language made a similar break with established convention. Now x+y+z suddenly stood for x+(y+z), and x-y-z for x-y+z.

Algol’s conditional statement provides a case of unfortunate syntax rather than merely poor symbol choice, offered in two forms, with S0 and S1 being statements:

if b then S0
if b then S0 else S1

This definition has given rise to an inherent ambiguity and become known as the “dangling else” problem. For example, the statement

if b0 then if b1 then S0 else S1

can be interpreted in two ways, namely

if b0 then (if b1 then S0 else S1)
if b0 then (if b1 then S0) else S1


possibly leading to quite different results. The next example appears even graver:

if b0 then for i := 1 step 1 until 100 do if b1 then S0 else S1

because it can be parsed in two ways, yielding quite different computations:

if b0 then [for i := 1 step 1 until 100 do if b1 then S0 else S1]
if b0 then [for i := 1 step 1 until 100 do if b1 then S0] else S1

The remedy, however, is quite simple: Use an explicit end symbol in every construct that is recursive and begins with an explicit start symbol such as if, while, for, or case:

if b then S0 end
if b then S0 else S1 end

The GOTO statement
In the bad ideas hall of shame, the goto statement has been the villain of many critics. It serves as the direct counterpart in languages to the jump statement in instruction sets and can be used to construct conditional as well as repeated statements. But it also lets programmers construct any maze or mess of program flow, defies any regular structure, and makes structured reasoning about such programs difficult if not impossible.

Our primary tools for comprehending and controlling complex objects are structure and abstraction. We break an overly complex object into parts. The specification of the whole is based on the specifications of the parts. The goto statement became the prototype of a bad programming language idea because it may break the boundaries of the parts and invalidate their specifications.

As a corollary, a language must allow, encourage, or even enforce formulation of programs as properly nested structures, in which properties of the whole can be derived from properties of the parts. Consider, for example, the specification of a repetition R of a statement S. It follows that S appears as a part of R. We show two possible forms:

R0: while b do S end
R1: repeat S until b

The key behind proper nesting is that known properties of S can be used to derive properties of R. For example, given that a condition (assertion) P is left valid (invariant) under execution of S, we conclude that P is also left invariant when execution of S is repeated. Hoare’s rules express this formally as

{P & b} S {P}  implies  {P} R0 {P & ¬b}
{P} S {P}  implies  {P} R1 {P & b}

If, however, S contains a goto statement, no such assertion is possible about S, and therefore neither is any deduction about the effect of R. This is a great loss. Practice has indeed shown that large programs without goto are much easier to understand and it is much easier to give any guarantees about their properties.

Enough has been said and written about this nonfeature to convince almost everyone that it is a primary example of a bad idea. Pascal’s designer retained the goto statement as well as the if statement without a closing end symbol. Apparently, he lacked the courage to break with convention and made wrong concessions to traditionalists. But that was in 1968. By now, almost everybody understands the problem except for the designers of the latest commercial programming languages, such as C#.

Switches
If a feature is a bad idea, then features built atop it are even worse. This rule can be demonstrated by the switch, which is essentially an array of labels. Assuming, for example, labels L1, … L5, a switch declaration in Algol could look as follows:

switch S := L1, L2, if x < 5 then L3 else L4, L5

Now the apparently simple statement goto S[i] is equivalent to

if i = 1 then goto L1 else
if i = 2 then goto L2 else
if i = 3 then
  if x < 5 then goto L3 else goto L4 else
if i = 4 then goto L5

If the goto encourages a programming mess, the switch makes it impossible to avoid.

C.A.R. Hoare proposed a most suitable replacement of the switch in 1965: the case statement. This construct displays a proper structure with component statements to be selected according to the value i:

case i of
  1: S1 | 2: S2 | … | n: Sn
end

However, modern programming language designers chose to ignore this elegant solution in favor of a bastard formulation between the Algol switch and a structured case statement:

switch (i) {
  case 1: S1; break;
  case 2: S2; break;
  … ;
  case n: Sn; break; }

Either the break symbol denotes a separation between consecutive statements Si, or it acts as a goto to the end of the switch construct. In the first case, it is superfluous; in the second, it is a goto in disguise. A bad concept in a bad notation, this example stems from C.

Algol’s complicated for statement
Algol’s designers recognized that certain frequent cases of repetition would better be expressed by a more concise form than in combination with goto statements. They introduced the for statement, which is particularly convenient in use with arrays, as in

for i := 1 step 1 until n do a[i] := 0

If we forgive the rather unfortunate choice of the words step and until, this seems a wonderful idea. Unfortunately, it was infected with the bad idea of imaginative generality. The sequence of values to be assumed by the control variable i can be specified as a list:

for i := 2, 3, 5, 7, 11 do a[i] := 0

Further, these elements could be general expressions:

for i := x, x+1, y-5, x*(y+z) do a[i] := 0

Also, different forms of list elements were to be allowed:

for i := x-3, x step 1 until y, y+7, z while z < 20 do a[i] := 0

Naturally, clever minds quickly concocted pathological cases, demonstrating the concept’s absurdity:

for i := 1 step 1 until i+1 do a[i] := 0
for i := 1 step i until i do i := -i

The generality of Algol’s for statement should have been a warning signal to all future designers to keep the primary purpose of a construct in mind and be wary of exaggerated generality and complexity, which can easily become counterproductive.

Algol’s name parameter
Algol introduced procedures and parameters in a much greater generality than known in older languages such as Fortran. In particular, parameters were seen as in traditional mathematics of functions, where the actual parameter textually replaces the formal parameter [2]. For example, given the declaration

real procedure square(x); real x; square := x * x

the call square(a) is to be literally interpreted as a*a, and square(sin(a)*cos(b)) as sin(a)*cos(b) * sin(a)*cos(b). This requires the evaluation of sine and cosine twice, which in all likelihood was not the programmer’s intention. To prevent this frequent, misleading case, Algol’s developers postulated a second kind of parameter: the value parameter. It meant that a local variable, x’, is to be allocated and then initialized with the value of the actual parameter, x. With

real procedure square(x); value x; real x; square := x * x

the above call would be interpreted as

x’ := sin(a) * cos(b); square := x’ * x’

avoiding the wasteful double evaluation of the actual parameter. The name parameter is indeed a most flexible device, as the following examples demonstrate.

Given the declaration

real procedure sum(k, x, n); integer k, n; real x;
begin real s; s := 0;
  for k := 1 step 1 until n do s := x + s;
  sum := s
end

Now the sum a1 + a2 + … + a100 is written simply as sum(i, a[i], 100), the inner product of vectors a and b as sum(i, a[i]*b[i], 100), and the harmonic function as sum(i, 1/i, n).

But generality, as elegant and sophisticated as it may appear, has its price. Reflection reveals that every name parameter must be implemented as an anonymous, parameterless procedure. To avoid paying for the hidden overhead, a cleverly designed compiler might optimize certain cases. But mostly the result is inefficiency that could easily have been avoided.
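As a sketch of what that implies (my own illustration in C, not Algol), each name parameter becomes a thunk, a parameterless procedure re-evaluated at every use, which is how Jensen’s device computes the inner product above:

/* The name parameter x compiled as a thunk: every use of x() re-evaluates
   the actual parameter a[k]*b[k] at the current value of k. */
#include <stdio.h>

static int    k;                       /* the name parameter k, a global here for brevity */
static double a[101], b[101];

static double thunk_inner_product(void) { return a[k] * b[k]; }   /* thunk for x */

static double sum(int *kp, double (*x)(void), int n) {
    double s = 0.0;
    for (*kp = 1; *kp <= n; (*kp)++)   /* assigning to k changes what x() yields */
        s = x() + s;
    return s;
}

int main(void) {
    for (int i = 1; i <= 100; i++) { a[i] = 1.0; b[i] = 2.0; }
    printf("%f\n", sum(&k, thunk_inner_product, 100));   /* 200.000000 */
    return 0;
}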

We could simply drop the name parameter from the language. However, this measure would be too drastic and therefore unacceptable. It would, for example, preclude assignments to parameters to pass results back to the caller. This suggestion, however, led to the replacement of the name parameter by the reference parameter in later languages such as Algol W, Pascal, and Ada.

For today, the message is this: Be skeptical about overly sophisticated features and facilities. At the very least, their cost to the user must be known before a language is released, published, and propagated. This cost must be commensurate with the advantages gained by including the feature.

Loopholes
I consider the loophole one of the worst features ever, even though I infected Pascal, Modula, and even Oberon with this deadly virus. The loophole lets the programmer breach the compiler’s type checking and say “Don’t interfere, as I am smarter than the rules.” Loopholes take many forms. Most common are explicit type transfer functions, such as

x := LOOPHOLE[i, REAL]      (Mesa)
x := REAL(i)                (Modula)
x := SYSTEM.VAL(REAL, i)    (Oberon)

But they can also be disguised as absolute address specifications or by variant records, as in Pascal. In the preceding examples, the internal representation of integer i is to be interpreted as a floating-point number x. This can only be done with knowledge about number representation, which should not be necessary when dealing with the abstraction level the language provides.
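For comparison, the same loophole is routinely written in C (a sketch of the idea only); the result is meaningful solely because we happen to know the machine uses the IEEE 754 single-precision representation:

/* Reinterpreting the bit pattern of an integer as a floating-point number. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    uint32_t i = 0x40490FDBu;       /* bit pattern of pi in IEEE 754 binary32 */
    float x;
    memcpy(&x, &i, sizeof x);       /* breach the type system deliberately */
    printf("%f\n", x);              /* approximately 3.141593 */
    return 0;
}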

Pascal and Modula [3] at least displayed loopholes honestly. In Oberon, they are present only through a small number of functions encapsulated in a pseudo-module called SYSTEM, which must be imported and is thus explicitly visible in the heading of any module that uses such low-level facilities.

This may sound like an excuse, but the loophole is nevertheless a bad idea. It was introduced to implement complete systems in a single programming language. For example, a storage manager must be able to look at storage as a flat array of bytes without data types. It must be able to allocate blocks and recycle them, independent of any type constraints. The device driver provides another example that requires a loophole. Earlier computers used special instructions to access devices. Later, programmers assigned devices specific memory addresses, memory-mapping them. Thus, the idea arose to let absolute addresses be specified for certain variables, as in Modula. But this facility can be abused in many clandestine ways.

Evidently, the normal user will never need to program either a storage manager or a device driver, and hence has no need for those loopholes. However—and this is what really makes the loophole a bad idea—programmers still have the loophole at their disposal. Experience showed that normal users will not shy away from using the loophole, but rather enthusiastically grab on to it as a wonderful feature that they use wherever possible. This is particularly so if manuals caution against its use.

The presence of a loophole facility usually points to a deficiency in the language proper, revealing that certain things could not be expressed that might be important. For example, the type ADDRESS in Modula had to be used to program data structures with different element types. A strict, static typing strategy, which demanded that every pointer be statically associated with a fixed type and could only reference such objects, made this impossible. Knowing that pointers were addresses, the loophole, in the form of an innocent-looking type-transfer function, would make it possible to let pointer variables point to objects of any type. The drawback was that no compiler could check the correctness of such assignments. The type checking system was overruled and might as well not have existed. Oberon [4] introduced the clean solution of type extension, called inheritance in object-oriented languages. Now it became possible to declare a pointer as referencing a given type, and it could point to any type that was an extension of the given type. This made it possible to construct inhomogeneous data structures and use them with the security of a reliable type checking system. An implementation must check at runtime if and only if it is not possible to check at compile time.

Programs expressed in 1960s languages were full of loopholes that made them utterly error-prone. But there was no alternative. The fact that developers can use a language like Oberon to program entire systems from scratch without using loopholes, except in the storage manager and device drivers, marks the most significant progress in language design over the past 40 years.

MISCELLANEOUS TECHNIQUES
These bad ideas stem from the wide area of software practice, or rather from the narrower area of the author’s experiences with it, dating back 40 years. Yet the lessons learned then remain valid today. Some reflect more recent practices and trends, mostly supported by the abundant availability of hardware power.

Syntax analysis
The 1960s saw the rise of syntax analysis. Using a formal syntax to define Algol provided the necessary underpinnings to turn language definition and compiler construction into a field of scientific merit. It established the concept of the syntax-driven compiler and gave rise to many activities for automatic syntax analysis on a mathematically rigorous basis. This work created the notions of top-down and bottom-up principles, the recursive descent technique, and measures for symbol lookahead and backtracking. Accompanying this were efforts to define language semantics more rigorously by piggybacking semantic rules onto corresponding syntax rules.

As happens with new fields of endeavor, research went beyond the needs of the hour. Developers built increasingly powerful parser generators, which managed to handle ever more general and complex grammars. Although an intellectual achievement, this development had a less positive consequence. It led language designers to believe that no matter what syntactic construct they postulated, automatic tools could surely detect ambiguities, and some powerful parser would certainly cope with it.

Yet no such tool would give any indication how that syntax could be improved. Designers had ignored both the issue of efficiency and that a language serves the human reader, not just the automatic parser. If a language poses difficulties to parsers, it surely also poses difficulties for the human reader. Many languages would be clearer and cleaner had their designers been forced to use a simple parsing method.

My own experience strongly supports this statement. After contributing to the development of parsers for precedence grammars in the 1960s, and having used them for the implementation of Euler and Algol W, I decided to switch to the simplest parsing method for Pascal: top-down, recursive descent. The experience was most encouraging, and I have stuck to it to this day with great satisfaction.
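A minimal sketch of the flavor of recursive descent (my own illustration, not the Pascal compiler’s code): one procedure per grammar rule, for Expr = Term { "+" Term } and Term = Digit { "*" Digit }.

/* Top-down, recursive-descent parsing and evaluation of a tiny grammar. */
#include <ctype.h>
#include <stdio.h>

static const char *p;                  /* current position in the input */

static int digit(void) { return isdigit((unsigned char)*p) ? *p++ - '0' : 0; }

static int term(void) {                /* Term = Digit { "*" Digit } */
    int v = digit();
    while (*p == '*') { p++; v *= digit(); }
    return v;
}

static int expr(void) {                /* Expr = Term { "+" Term } */
    int v = term();
    while (*p == '+') { p++; v += term(); }
    return v;
}

int main(void) {
    p = "2+3*4";
    printf("%d\n", expr());            /* 14: precedence follows the grammar */
    return 0;
}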

The drawback is that considerably more careful thought must go into the syntax’s design prior to publication and any implementation effort. This additional effort will be more than compensated for in the later use of both the language and the compiler.

Extensible languages
Computer scientists’ fantasies in the 1960s knew no bounds. Spurred by the success of automatic syntax analysis and parser generation, some proposed the idea of the flexible, or at least extensible, language, envisioning that a program would be preceded by syntactic rules that would then guide the general parser while parsing the subsequent program. Further, the syntax rules also could be interspersed anywhere throughout the text. For example, it would be possible to use a particularly fancy private form of for statement, even specifying different variants for the same concept in different sections of the same program.

The concept that languages serve to communicate between humans had been completely blended out, as apparently now it was possible to define a language on the fly. However, the difficulties encountered when trying to specify what these private constructions should mean soon dashed these high hopes. As a consequence, the intriguing idea of extensible languages quickly faded away.

Tree-structured symbol tables
Compilers construct symbol tables. The compiler builds up the tables while processing declarations and searches them while processing statements. In languages that allow nested scopes, every scope is represented by its own table.

Traditionally, these tables are binary trees to allow fast searching. Having followed this long-standing tradition, I dared to doubt the benefit of trees when implementing the Oberon compiler. When doubts occurred, I quickly became convinced that tree structures are not worthwhile for local scopes. In the majority of cases, procedures contain a dozen or even fewer local variables. Using a linked linear list is then both simpler and more effective.
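A sketch of the simpler alternative (mine, not the Oberon compiler’s actual declarations): a scope represented as a linked linear list, with lookup by a short linear scan.

/* A scope's symbol table as a linked linear list: declarations are
   prepended, lookup scans a handful of entries. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct Symbol {
    const char    *name;
    int            offset;        /* e.g. the variable's address offset */
    struct Symbol *next;
} Symbol;

static Symbol *declare(Symbol *scope, const char *name, int offset) {
    Symbol *s = malloc(sizeof *s);
    s->name = name; s->offset = offset; s->next = scope;
    return s;                     /* the new list head represents the scope */
}

static Symbol *lookup(Symbol *scope, const char *name) {
    for (Symbol *s = scope; s != NULL; s = s->next)
        if (strcmp(s->name, name) == 0) return s;
    return NULL;
}

int main(void) {
    Symbol *scope = NULL;
    scope = declare(scope, "i", 0);
    scope = declare(scope, "x", 4);
    printf("%d\n", lookup(scope, "i")->offset);   /* 0 */
    return 0;
}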

Programs written 30 or 40 years ago declared most variables globally. As there were many of them, the tree structure was justified. In the meantime, however, skepticism about the practice of using global variables has been on the rise. Modern programs do not feature many globals, and hence in this case a tree-structured table is hardly suitable.

Modern programming systems consist of many modules, each of which probably contains some globals, but not hundreds. The many globals of early programs have become distributed over numerous modules and are referenced not by a single identifier, but by a name combination M.x, which defines the initial search path.

Using sophisticated data structures for symbol tables was evidently a poor idea. Once, we had even considered balanced trees.

Using the wrong tools
Although using the wrong tools is an obviously bad idea, often we discover a tool’s inadequacy only after having invested substantial effort into building and understanding it. Investing this effort, unfortunately, gives the tool perceived value regardless of its functional worth. This happened to my team when we implemented the first Pascal compiler in 1969.

The tools available for writing programs were assembler code, Fortran, and an Algol compiler. The Algol compiler was so poorly implemented we dared not rely on it, and working with assembler code was considered dishonorable. There remained only Fortran.

Hence, our naïve plan was to use Fortran to construct a compiler for a substantial subset of Pascal that, when completed, would be translated into Pascal. Afterward, we would employ the classic bootstrapping technique to complete, refine, and improve the compiler.


This plan crashed in the face of reality. When we completed step one—after about one person-year’s labor—it turned out that translating Fortran code into Pascal was impossible. That program was so much determined by Fortran’s features, or rather its lack of any, that our only option was to write the compiler afresh. Because Fortran did not feature pointers and records, we had to squeeze symbol tables into the unnatural form of arrays. Nor did Fortran have recursive subroutines. Hence, we had to use the complicated table-driven bottom-up parsing technique with syntax represented by arrays and matrices. In short, employing the advantages of Pascal required completely rewriting and restructuring the compiler.

This incident revealed that the apparently easiest way is not always the right way. But difficulties also have their benefits: Because no Pascal compiler had yet become available, we had to write the entire program for compiling a substantial subset of Pascal without testing feedback. This proved an extremely healthy exercise and would be even more so today, in the era of quick trial and interactive error correction.

After we believed the compiler to be complete, we banished one member of our team to his home to translate the program into a syntax-sugared, low-level language that had an available compiler. He returned after two weeks of intensive labor, and a few days later the compiler written in Pascal compiled the first test programs.

The exercise of conscientious programming proved extremely valuable. Never do programs contain so few bugs as when no debugging tools are available.

Thereafter, we could use bootstrapping to obtain new versions that handled more of Pascal’s constructs and produced increasingly refined code. Immediately after the first bootstrap, we discarded the translation written in the auxiliary language. Its character had been much like that of the ominous low-level language C, published a year later.

After this experience, we had trouble understanding why the software engineering community did not recognize the benefits of adopting a high-level, type-safe language instead of C.

Wizards
Thanks to the great leaps forward in parsing technology originating in the 1960s, hardly anybody now constructs a parser by hand. Instead, we buy a parser generator and feed it the desired syntax.

This brings us to the topic of automatic tools, now called wizards. Given that they have been optimally designed and laid out by experts, it isn’t necessary for their users to understand these black boxes. By automating simple routine tasks, these tools relieve users from bothering about them. Wizards supposedly help users—and this is the key—without their asking for help, like a faithful, devoted servant.

Although it would be unwise to launch a crusade against wonderful wizards, my experience with them has been largely unfortunate. I found it impossible to avoid confronting them in text editors. Worst are those wizards that constantly interfere with my writing, automatically indenting and numbering lines when not desired, capitalizing certain letters and words at specific places, combining sequences of characters into some special symbol, automatically converting a sequence of minus signs into a solid line, and so on. It would help if wizards at least could be deactivated easily, but typically they are as obstinate and immortal as devils.

So much for clever software for dummies. A bad idea.

PROGRAMMING PARADIGMS
Several programming paradigms have come into and out of vogue over the past few decades, making contributions of varying importance to the field.

Functional programming
Functional languages had their origin in Lisp [5]. They have undergone a significant amount of development and change and have been used to implement both small and large software systems. I have always maintained a skeptical attitude toward such efforts.

What characterizes functional languages? It has always appeared that it was their form, that the entire program consists of function evaluations—nested, recursive, parametric, and so on. Hence, the term functional. However, the core idea is that functions inherently have no state. This implies that there are no variables and no assignments. Immutable function parameters—variables in the mathematical sense—take the place of variables. As a consequence, freshly computed values cannot be reassigned to the same variable, overwriting its old value. This explains why repetition must be expressed by recursion. A data structure can at best be extended, but no change can be made to its old part. This yields an extremely high degree of storage recycling—a garbage collector is the necessary ingredient. An implementation without automatic garbage collection is unthinkable.
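A tiny sketch of that consequence, expressed in C for continuity with the other examples here: repetition by recursion, with a parameter playing the role of the variable that would otherwise be updated by assignment.

/* Sum of 1..n in the functional style: no variable is reassigned, the
   running total lives in the parameter acc of each recursive activation. */
#include <stdio.h>

static int sum_rec(int n, int acc) {
    return n == 0 ? acc : sum_rec(n - 1, acc + n);
}

int main(void) {
    printf("%d\n", sum_rec(100, 0));   /* 5050 */
    return 0;
}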

To postulate a stateless model of computation atop a machine whose most eminent characteristic is state seems an odd idea at the least. The gap between model and machinery is wide, which makes bridging it costly. No hardware support feature can gloss over this: It remains a bad idea in practice.

The protagonists of functional languages have also recognized this over time. They have introduced state and variables in various tricky ways. The purely functional character has thereby been compromised and sacrificed. The old terminology has become deceiving.

Looking back at functional programming, it appears that its truly relevant contribution was certainly not its lack of state, but rather its enforcement of clearly nested structures and its use of strictly local objects. This discipline can also be practiced using conventional, imperative languages, which have subscribed to the notions of nested structures, functions, and recursion for some time.

Functional programming implies much more than avoiding goto statements, however. It also implies restriction to local variables, perhaps with the exception of a few global state variables. It probably also considers the nesting of procedures as undesirable. The B5000 computer apparently was right, after all, in restricting access to strictly local and global variables.

Many years ago, and with increasing frequency, several developers claimed that functional languages are the best vehicle for introducing parallelism—although it would be more to the point to say “to facilitate compilers to detect opportunities for parallelizing a program.” After all, determining which parts of an expression can be evaluated concurrently is relatively easy. More important is that parameters of a called function can be evaluated concurrently, provided that side effects are banned—which cannot occur in a truly functional language. As this may be true and perhaps of marginal benefit, object-orientation offers a more effective way to let a system make good use of parallelism, with each object representing its own behavior in the form of a private process.

Logic programming
Although logic programming has also received wide attention, only a single well-known language represents this paradigm: Prolog. Principally, Prolog replaces the specification of actions, such as assignment to variables, with the specification of predicates on states. If one or several of a predicate’s parameters are left unspecified, the system searches for all possible argument values satisfying the predicate. This implies the existence of a search engine looking for solutions to logic statements. This mechanism is complicated, often time-consuming, and sometimes inherently unable to proceed without intervention. The user must then support the system by providing hints, and must therefore understand what is going on and the process of logic inference, the very thing the paradigm’s advocates had promised could be ignored.

We must suspect that, because the community desperately desired ways to produce better, more reliable software, its developers were glad to hear of this possible panacea. But the promised advancements never materialized. I sadly recall the exaggerated hopes that fueled the Japanese Fifth Generation Computer project, with its Prolog inference machines. Organizations sank large amounts of resources into this unwise and now largely forgotten idea.

Object-oriented programming
In contrast to functional and logic programming, object-oriented programming rests on the same principles as conventional, procedural programming. OOP’s character is imperative. It describes a process as a sequence of state transformations. The novelty is the partitioning of a global state into individual objects and the association of the state transformers, called methods, with the object itself. The objects are seen as the actors that cause other objects to alter their state by sending messages to them. The description of an object template is called a class definition.

This paradigm closely reflects the structure of systems in the real world and is therefore well suited to model complex systems with complex behavior. Not surprisingly, OOP has its origins in the field of system simulation. Its success in the field of software system design speaks for itself, starting with the language Smalltalk [6] and continuing with Object-Pascal, C++, Eiffel, Oberon, Java, and C#.

The original Smalltalk implementation provided a convincing example of its suitability. The first language to feature windows, menus, buttons, and icons, it provided a perfect example of visible objects. The direct modeling of actors diminished the importance of proving program correctness analytically because the original specification is one of behavior, rather than a static input-output relationship.

Nevertheless, we may wonder where the core of the new paradigm would hide and how it would differ essentially from the traditional view of programming. After all, the old cornerstones of procedural programming reappear, albeit embedded in a new terminology: Objects are records, classes are types, methods are procedures, and sending a method is equivalent to calling a procedure. True, records now consist of data fields and, in addition, methods; and true, the feature called inheritance allows the construction of heterogeneous data structures, useful also without object orientation. Whether this change in terminology expresses an essential paradigm shift or amounts to no more than a sales trick remains an open question.
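The correspondence can be sketched directly (my illustration, not a definition of OOP): an object as a record whose state transformer is a procedure taking the record as an explicit parameter, so that “sending a message” is an ordinary call.

/* An "object" as a record; its method is a procedure stored in a field and
   invoked with the record itself as the first argument. */
#include <stdio.h>

typedef struct Counter {            /* the class: a record type */
    int value;                      /* the object's state       */
    void (*increment)(struct Counter *self);   /* the method, a procedure */
} Counter;

static void counter_increment(Counter *self) { self->value += 1; }

int main(void) {
    Counter c = { 0, counter_increment };    /* instantiate an object         */
    c.increment(&c);                          /* "send the message" increment */
    c.increment(&c);
    printf("%d\n", c.value);                  /* 2 */
    return 0;
}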


Much can be learned from analyzing not only bad ideas and past mistakes, but also good ones. Although the collection of topics here may appear accidental, and is certainly incomplete, I wrote it from the perspective that computing science would benefit from more frequent analysis and critique, particularly self-critique. After all, thorough self-critique is the hallmark of any subject claiming to be a science. ■

References
1. P. Naur, “Report on the Algorithmic Language ALGOL 60,” Comm. ACM, May 1960, pp. 299-314.
2. D.E. Knuth, “The Remaining Trouble Spots in ALGOL 60,” Comm. ACM, Oct. 1967, pp. 611-618.
3. N. Wirth, Programming in Modula-2, Springer-Verlag, 1982.
4. N. Wirth, “The Programming Language Oberon,” Software—Practice and Experience, Wiley, 1988, pp. 671-691.
5. J. McCarthy, “Recursive Functions of Symbolic Expressions and their Computation by Machine,” Comm. ACM, May 1962, pp. 184-195.
6. A. Goldberg and D. Robson, Smalltalk-80: The Language and Its Implementation, Addison-Wesley, 1983.

Niklaus Wirth is a professor of Informatics at ETH Zurich. His research interests include programming languages, integrated software environments, and hardware design using field-programmable gate arrays. Wirth received a PhD from the University of California at Berkeley. Contact him at [email protected].

