
Processor Architectures - UTFPR


Processor Architectures: RISC, CISC, Harvard, Von Neumann


Contents

Articles
• Von Neumann architecture
• Harvard architecture
• Complex instruction set computing
• Reduced instruction set computing

References
• Article Sources and Contributors
• Image Sources, Licenses and Contributors

Article Licenses
• License


Von Neumann architecture

The term Von Neumann architecture, also known as the Von Neumann model, derives from a computer architecture proposal by the mathematician and early computer scientist John von Neumann and others, dated June 30, 1945, entitled First Draft of a Report on the EDVAC. This describes a design architecture for an electronic digital computer with subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms.[1][2] The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the Von Neumann bottleneck and often limits the performance of the system.[3]

The design of a Von Neumann architecture is simpler than the more modern Harvard architecture, which is also a stored-program system but has one dedicated set of address and data buses for memory, and another set of address and data buses for fetching instructions.

A stored-program digital computer is one that keeps its programmed instructions, as well as its data, in read-write, random-access memory (RAM). Stored-program computers were an advancement over the program-controlled computers of the 1940s, such as the Colossus and the ENIAC, which were programmed by setting switches and inserting patch leads to route data and control signals between various functional units. In the vast majority of modern computers, the same memory is used for both data and program instructions.

History

The earliest computing machines had fixed programs. Some very simple computers still use this design, either for simplicity or training purposes. For example, a desk calculator (in principle) is a fixed program computer. It can do basic mathematics, but it cannot be used as a word processor or a gaming console. Changing the program of a fixed-program machine requires re-wiring, re-structuring, or re-designing the machine. The earliest computers were not so much "programmed" as they were "designed". "Reprogramming", when it was possible at all, was a laborious process, starting with flowcharts and paper notes, followed by detailed engineering designs, and then the often-arduous process of physically re-wiring and re-building the machine. It could take three weeks to set up a program on ENIAC and get it working.[4]

With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation.

A stored-program design also allows for self-modifying code. One early motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions, which had to be done manually in early designs. This became less important when index registers and indirect addressing became usual features of machine architecture. Another use was to embed frequently used data in the instruction stream using immediate addressing. Self-modifying code has largely fallen out of favor, since it is usually hard to understand and debug, as well as being inefficient under modern processor pipelining and caching schemes.

On a large scale, the ability to treat instructions as data is what makes assemblers, compilers and other automated programming tools possible. One can "write programs which write programs".[5] On a smaller scale, repetitive I/O-intensive operations such as the BITBLT image manipulation primitive or pixel and vertex shaders in modern 3D graphics were considered inefficient to run without custom hardware. These operations could be accelerated on general purpose processors with "on the fly compilation" ("just-in-time compilation") technology, e.g., code-generating programs, one form of self-modifying code that has remained popular.

There are drawbacks to the Von Neumann design. Aside from the Von Neumann bottleneck described below, program modifications can be quite harmful, either by accident or design. In some simple stored-program computer designs, a malfunctioning program can damage itself, other programs, or the operating system, possibly leading to a computer crash. Memory protection and other forms of access control can usually protect against both accidental and malicious program modification.

Development of the stored-program concept

The mathematician Alan Turing, who had been alerted to a problem of mathematical logic by the lectures of Max Newman at the University of Cambridge, wrote a paper in 1936 entitled On Computable Numbers, with an Application to the Entscheidungsproblem, which was published in the Proceedings of the London Mathematical Society.[6] In it he described a hypothetical machine which he called a "universal computing machine", and which is now known as the "universal Turing machine". The hypothetical machine had an infinite store (memory in today's terminology) that contained both instructions and data. The German engineer Konrad Zuse independently wrote about this concept in 1936.[7] John von Neumann became acquainted with Turing when he was a visiting professor at Cambridge in 1935 and also during Turing's PhD year at the Institute for Advanced Study, Princeton in 1936-37. Whether he knew of Turing's 1936 paper at that time is not clear.

Independently, J. Presper Eckert and John Mauchly, who were developing the ENIAC at the Moore School of Electrical Engineering, at the University of Pennsylvania, wrote about the stored-program concept in December 1943.[8][9] In planning a new machine, EDVAC, Eckert wrote in January 1944 that they would store data and programs in a new addressable memory device, a mercury metal delay line memory. This was the first time the construction of a practical stored-program machine was proposed. At that time, he and Mauchly were not aware of Turing's work.

Von Neumann was involved in the Manhattan Project at the Los Alamos National Laboratory, which required huge amounts of calculation. This drew him to the ENIAC project, in the Summer of 1944. There he joined into the ongoing discussions on the design of this stored-program computer, the EDVAC. As part of that group, he volunteered to write up a description of it and produced the First Draft of a Report on the EDVAC[1] which included ideas from Eckert and Mauchly. It was unfinished when his colleague Herman Goldstine circulated it with only von Neumann's name on it, to the consternation of Eckert and Mauchly.[10] The paper was read by dozens of von Neumann's colleagues in America and Europe, and influenced the next round of computer designs.

Von Neumann was, then, not alone in putting forward the idea of the stored-program architecture, and Jack Copeland considers that it is "historically inappropriate, to refer to electronic stored-program digital computers as 'von Neumann machines'".[11] His Los Alamos colleague Stan Frankel said of von Neumann's regard for Turing's ideas:

I know that in or about 1943 or '44 von Neumann was well aware of the fundamental importance of Turing's paper of 1936 ... Von Neumann introduced me to that paper and at his urging I studied it with care. Many people have acclaimed von Neumann as the "father of the computer" (in a modern sense of the term) but I am sure that he would never have made that mistake himself. He might well be called the midwife, perhaps, but he firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing, in so far as not anticipated by Babbage ... Both Turing and von Neumann, of course, also made substantial contributions to the "reduction to practice" of these concepts but I would not regard these as comparable in importance with the introduction and explication of the concept of a computer able to store in its memory its program of activities and of modifying that program in the course of these activities. [12]

At the time that the First Draft report was circulated, Turing was producing a report entitled Proposed Electronic Calculator which described in engineering and programming detail his idea of a machine that was called the Automatic Computing Engine (ACE).[13] He presented this to the Executive Committee of the British National Physical Laboratory on February 19, 1946. Although Turing knew from his wartime experience at Bletchley Park that what he proposed was feasible, the secrecy surrounding Colossus, which was subsequently maintained for several decades, prevented him from saying so. Various successful implementations of the ACE design were produced.


Both von Neumann's and Turing's papers described stored-program computers, but von Neumann's earlier paper achieved greater circulation and the computer architecture it outlined became known as the "von Neumann architecture". In the 1953 publication Faster than Thought: A Symposium on Digital Computing Machines (edited by B.V. Bowden), a section in the chapter on Computers in America reads as follows:[14]

THE MACHINE OF THE INSTITUTE FOR ADVANCED STUDIES, PRINCETON

In 1945, Professor J. von Neumann, who was then working at the Moore School of Engineering in Philadelphia, where the E.N.I.A.C. had been built, issued on behalf of a group of his co-workers a report on the logical design of digital computers. The report contained a fairly detailed proposal for the design of the machine which has since become known as the E.D.V.A.C. (electronic discrete variable automatic computer). This machine has only recently been completed in America, but the von Neumann report inspired the construction of the E.D.S.A.C. (electronic delay-storage automatic calculator) in Cambridge (see page 130).

In 1947, Burks, Goldstine and von Neumann published another report which outlined the design of another type of machine (a parallel machine this time) which should be exceedingly fast, capable perhaps of 20,000 operations per second. They pointed out that the outstanding problem in constructing such a machine was in the development of a suitable memory, all the contents of which were instantaneously accessible, and at first they suggested the use of a special tube, called the Selectron, which had been invented by the Princeton Laboratories of the R.C.A. These tubes were expensive and difficult to make, so von Neumann subsequently decided to build a machine based on the Williams memory. This machine, which was completed in June, 1952 in Princeton, has become popularly known as the Maniac. The design of this machine has inspired that of half a dozen or more machines which are now being built in America, all of which are known affectionately as "Johniacs".

In the same book, the first two paragraphs of a chapter on ACE read as follows:[15]

AUTOMATIC COMPUTATION AT THE NATIONAL PHYSICAL LABORATORY

One of the most modern digital computers which embodies developments and improvements in the technique of automatic electronic computing was recently demonstrated at the National Physical Laboratory, Teddington, where it has been designed and built by a small team of mathematicians and electronics research engineers on the staff of the Laboratory, assisted by a number of production engineers from the English Electric Company, Limited. The equipment so far erected at the Laboratory is only the pilot model of a much larger installation which will be known as the Automatic Computing Engine, but although comparatively small in bulk and containing only about 800 thermionic valves, as can be judged from Plates XII, XIII and XIV, it is an extremely rapid and versatile calculating machine.

The basic concepts and abstract principles of computation by a machine were formulated by Dr. A. M. Turing, F.R.S., in a paper read before the London Mathematical Society in 1936, but work on such machines in Britain was delayed by the war. In 1945, however, an examination of the problems was made at the National Physical Laboratory by Mr. J. R. Womersley, then superintendent of the Mathematics Division of the Laboratory. He was joined by Dr. Turing and a small staff of specialists, and, by 1947, the preliminary planning was sufficiently advanced to warrant the establishment of the special group already mentioned. In April, 1948, the latter became the Electronics Section of the Laboratory, under the charge of Mr. F. M. Colebrook.


Early von Neumann-architecture computers

The First Draft described a design that was used by many universities and corporations to construct their computers.[16] Among these various computers, only ILLIAC and ORDVAC had compatible instruction sets.

• Manchester Small-Scale Experimental Machine (SSEM), nicknamed "Baby" (University of Manchester, England), made its first successful run of a stored program on June 21, 1948
• EDSAC (University of Cambridge, England) was the first practical stored-program electronic computer (May 1949)
• Manchester Mark 1 (University of Manchester, England), developed from the SSEM (June 1949)
• CSIRAC (Council for Scientific and Industrial Research), Australia (November 1949)
• ORDVAC (U-Illinois) at Aberdeen Proving Ground, Maryland (completed November 1951)[17]
• IAS machine at Princeton University (January 1952)
• MANIAC I at Los Alamos Scientific Laboratory (March 1952)
• ILLIAC at the University of Illinois (September 1952)
• AVIDAC at Argonne National Laboratory (1953)
• ORACLE at Oak Ridge National Laboratory (June 1953)
• JOHNNIAC at RAND Corporation (January 1954)
• BESK in Stockholm (1953)
• BESM-1 in Moscow (1952)
• DASK in Denmark (1955)
• PERM in Munich (1956?)
• SILLIAC in Sydney (1956)
• WEIZAC in Rehovoth (1955)

Early stored-program computers

The date information in the following chronology is difficult to put into proper order. Some dates are for first running a test program, some dates are the first time the computer was demonstrated or completed, and some dates are for the first delivery or installation.

• The IBM SSEC had the ability to treat instructions as data, and was publicly demonstrated on January 27, 1948. This ability was claimed in a US patent.[18] However it was partially electromechanical, not fully electronic. In practice, instructions were read from paper tape due to its limited memory.[19]
• The Manchester SSEM (the Baby) was the first fully electronic computer to run a stored program. It ran a factoring program for 52 minutes on June 21, 1948, after running a simple division program and a program to show that two numbers were relatively prime.
• The ENIAC was modified to run as a primitive read-only stored-program computer (using the Function Tables for program ROM) and was demonstrated as such on September 16, 1948, running a program by Adele Goldstine for von Neumann.
• The BINAC ran some test programs in February, March, and April 1949, although it was not completed until September 1949.
• The Manchester Mark 1 developed from the SSEM project. An intermediate version of the Mark 1 was available to run programs in April 1949, but it was not completed until October 1949.
• The EDSAC ran its first program on May 6, 1949.
• The EDVAC was delivered in August 1949, but it had problems that kept it from being put into regular operation until 1951.
• The CSIR Mk I ran its first program in November 1949.
• The SEAC was demonstrated in April 1950.
• The Pilot ACE ran its first program on May 10, 1950 and was demonstrated in December 1950.
• The SWAC was completed in July 1950.
• The Whirlwind was completed in December 1950 and was in actual use in April 1951.
• The first ERA Atlas (later the commercial ERA 1101/UNIVAC 1101) was installed in December 1950.

Evolution

(Figure: single system bus evolution of the architecture)

Through the decades of the 1960s and 1970s computers generally became both smaller and faster, which led to some evolutions in their architecture. For example, memory-mapped I/O allows input and output devices to be treated the same as memory.[20] A single system bus could be used to provide a modular system with lower cost. This is sometimes called a "streamlining" of the architecture.[21] In subsequent decades, simple microcontrollers would sometimes omit features of the model to lower cost and size. Larger computers added features for higher performance.
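
For instance, with memory-mapped I/O a device register is read and written with the same load and store instructions used for ordinary memory, and the bus routes the access to the peripheral instead of RAM. The C sketch below illustrates the idea; the register addresses, names, and status bit are invented for illustration and do not correspond to any real device.

/* Minimal sketch of memory-mapped I/O for a hypothetical UART. */
#include <stdint.h>

#define UART_STATUS   ((volatile uint32_t *)0x4000A000u)  /* invented address */
#define UART_TXDATA   ((volatile uint32_t *)0x4000A004u)  /* invented address */
#define UART_TX_READY 0x1u                                /* invented ready bit */

/* Writing one byte to the device looks exactly like an ordinary memory store;
 * the address decoding in the bus sends it to the peripheral, not to RAM. */
void uart_putc(char c)
{
    while ((*UART_STATUS & UART_TX_READY) == 0) {
        /* spin until the transmitter can accept another byte */
    }
    *UART_TXDATA = (uint32_t)(uint8_t)c;
}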

Von Neumann bottleneck

The shared bus between the program memory and data memory leads to the Von Neumann bottleneck, the limited throughput (data transfer rate) between the CPU and memory compared to the amount of memory. Because program memory and data memory cannot be accessed at the same time, throughput is much smaller than the rate at which the CPU can work. This seriously limits the effective processing speed when the CPU is required to perform minimal processing on large amounts of data. The CPU is continuously forced to wait for needed data to be transferred to or from memory. Since CPU speed and memory size have increased much faster than the throughput between them, the bottleneck has become more of a problem, a problem whose severity increases with every newer generation of CPU.

The term "von Neumann bottleneck" was coined by John Backus in his 1977 ACM Turing Award lecture. Accordingto Backus:

Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it.[22][23]

The performance problem can be alleviated (to some extent) by several mechanisms. Providing a cache between the CPU and the main memory, providing separate caches or separate access paths for data and instructions (the so-called Modified Harvard architecture), using branch predictor algorithms and logic, and providing a limited CPU stack to reduce memory access are four of the ways performance is increased. The problem can also be sidestepped somewhat by using parallel computing, using for example the Non-Uniform Memory Access (NUMA) architecture; this approach is commonly employed by supercomputers. It is less clear whether the intellectual bottleneck that Backus criticized has changed much since 1977. Backus's proposed solution has not had a major influence. Modern functional programming and object-oriented programming are much less geared towards "pushing vast numbers of words back and forth" than earlier languages like Fortran were, but internally, that is still what computers spend much of their time doing, even highly parallel supercomputers.

In some cases, emerging memristor technology may be able to circumvent the von Neumann bottleneck.[24]
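
The practical weight of the bottleneck, and of the cache that mitigates it, can be illustrated with a small experiment. The C sketch below sums the same array twice: sequentially, where the cache and prefetcher absorb most of the memory traffic, and with a large stride, where nearly every access must cross the CPU-memory bus. The array size, the stride of 4096 elements, and the use of clock() are illustrative choices, not measurements from any particular machine.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (16u * 1024u * 1024u)   /* 16 Mi ints, far larger than typical caches */

/* Sum all n elements, visiting them in steps of `stride`. The amount of work
 * is identical for every stride; only the memory access pattern changes. */
static long sum_stride(const int *a, size_t n, size_t stride)
{
    long total = 0;
    for (size_t start = 0; start < stride; start++)
        for (size_t i = start; i < n; i += stride)
            total += a[i];
    return total;
}

int main(void)
{
    int *a = calloc(N, sizeof *a);
    if (!a) return 1;

    clock_t t0 = clock();
    long s1 = sum_stride(a, N, 1);      /* sequential: cache-friendly */
    clock_t t1 = clock();
    long s2 = sum_stride(a, N, 4096);   /* large stride: dominated by bus traffic */
    clock_t t2 = clock();

    printf("sequential %ld (%.2fs), strided %ld (%.2fs)\n",
           s1, (double)(t1 - t0) / CLOCKS_PER_SEC,
           s2, (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}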


Non-von Neumann processors

The National Semiconductor COP8 was introduced in 1986; it has a Modified Harvard architecture.[25][26]

Perhaps the most common kind of non-von Neumann structure used in modern computers is content-addressable memory (CAM).

References

Inline

[1] von Neumann 1945
[2] Ganesan 2009
[3] Markgraf, Joey D. (2007), The Von Neumann bottleneck (http://aws.linnbenton.edu/cs271c/markgrj/), retrieved August 24, 2011
[4] Copeland 2006, p. 104
[5] MFTL (My Favorite Toy Language) entry, Jargon File 4.4.7 (http://catb.org/~esr/jargon/html/M/MFTL.html), retrieved 2008-07-11
[6] Turing, A.M. (1936), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, 2 42: 230–65, 1937, doi:10.1112/plms/s2-42.1.230 (and Turing, A.M. (1938), "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction", Proceedings of the London Mathematical Society, 2 43 (6): 544–6, 1937, doi:10.1112/plms/s2-43.6.544)
[7] The Life and Work of Konrad Zuse, Part 10: Konrad Zuse and the Stored Program Computer (http://web.archive.org/web/20080601160645/http://www.epemag.com/zuse/part10.htm), archived from the original (http://www.epemag.com/zuse/part10.htm) on June 1, 2008, retrieved 2008-07-11
[8] Lukoff, Herman (1979), From Dits to Bits...: A Personal History of the Electronic Computer, Robotics Press, ISBN 978-0-89661-002-6
[9] ENIAC project administrator Grist Brainerd's December 1943 progress report for the first period of the ENIAC's development implicitly proposed the stored program concept (while simultaneously rejecting its implementation in the ENIAC) by stating that "in order to have the simplest project and not to complicate matters" the ENIAC would be constructed without any "automatic regulation".
[10] Copeland 2006, p. 113
[11] Copeland, Jack (2000), A Brief History of Computing: ENIAC and EDVAC (http://www.alanturing.net/turing_archive/pages/Reference Articles/BriefHistofComp.html#ACE), retrieved 27 January 2010
[12] Copeland, Jack (2000), A Brief History of Computing: ENIAC and EDVAC (http://www.alanturing.net/turing_archive/pages/Reference Articles/BriefHistofComp.html#ACE), retrieved 27 January 2010, which cites Randell, B. (1972), Meltzer, B.; Michie, D., eds., "On Alan Turing and the Origins of Digital Computers", Machine Intelligence 7 (Edinburgh: Edinburgh University Press): 10, ISBN 0902383264
[13] Copeland 2006, pp. 108–111
[14] Bowden 1953, pp. 176, 177
[15] Bowden 1953, p. 135
[16] "Electronic Computer Project" (http://www.ias.edu/people/vonneumann/ecp/), Institute for Advanced Study, retrieved May 26, 2011
[17] Illiac Design Techniques, report number UIUCDCS-R-1955-146, Digital Computer Laboratory, University of Illinois at Urbana-Champaign, 1955
[18] F.E. Hamilton, R.R. Seeber, R.A. Rowley, and E.S. Hughes (January 19, 1949). "Selective Sequence Electronic Calculator" (http://patft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netahtml/PTO/srchnum.htm&r=1&f=G&l=50&s1=2636672.PN.&OS=PN/2636672&RS=PN/2636672). US Patent 2,636,672. Retrieved April 28, 2011. Issued April 28, 1953.
[19] Herbert R.J. Grosch (1991), Computer: Bit Slices From a Life (http://www.columbia.edu/acis/history/computer.html), Third Millennium Books, ISBN 0-88733-085-1
[20] C. Gordon Bell; R. Cady; H. McFarland; J. O'Laughlin; R. Noonan; W. Wulf (1970), "A New Architecture for Mini-Computers: The DEC PDP-11" (http://research.microsoft.com/en-us/um/people/gbell/CGB Files/New Architecture PDP11 SJCC 1970 c.pdf), Spring Joint Computer Conference: pp. 657–675
[21] Linda Null; Julia Lobur (2010), The essentials of computer organization and architecture (http://books.google.com/books?id=f83XxoBC_8MC&pg=PA36) (3rd ed.), Jones & Bartlett Learning, pp. 36, 199–203, ISBN 9781449600068
[22] Backus, John W., "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs" (http://www.cs.cmu.edu/~crary/819-f09/Backus78.pdf). Retrieved 2012-01-20.
[23] Dijkstra, Edsger W., "E. W. Dijkstra Archive: A review of the 1977 Turing Award Lecture" (http://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD692.html). Retrieved 2008-07-11.
[24] Mouttet, Blaise L (2009), "Memristor Pattern Recognition Circuit Architecture for Robotics" (http://www.iiis.org/CDs2008/CD2009SCI/CITSA2009/PapersPdf/I086AI.pdf), Proceedings of the 2nd International Multi-Conference on Engineering and Technological Innovation II: 65–70
[25] "COP8 Basic Family User's Manual" (http://www.national.com/appinfo/mcu/files/Basic_user1.pdf). National Semiconductor. Retrieved 2012-01-20.
[26] "COP888 Feature Family User's Manual" (http://www.national.com/appinfo/mcu/files/Feature_user.pdf). National Semiconductor. Retrieved 2012-01-20.

General

• Bowden, B.V., ed. (1953), Faster Than Thought: A Symposium on Digital Computing Machines, London: Sir Isaac Pitman and Sons Ltd.
• Rojas, Raúl; Hashagen, Ulf, eds. (2000), The First Computers: History and Architectures, MIT Press, ISBN 0-262-18197-5
• Davis, Martin (2000), The universal computer: the road from Leibniz to Turing, New York: W. W. Norton & Company Inc., ISBN 0-393-04785-7
• Can Programming be Liberated from the von Neumann Style?, John Backus, 1977 ACM Turing Award Lecture. Communications of the ACM, August 1978, Volume 21, Number 8. Online PDF (http://www.stanford.edu/class/cs242/readings/backus.pdf)
• C. Gordon Bell and Allen Newell (1971), Computer Structures: Readings and Examples, McGraw-Hill Book Company, New York. Massive (668 pages).
• Copeland, Jack (2006), "Colossus and the Rise of the Modern Computer", in Copeland, B. Jack, Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford: Oxford University Press, ISBN 978-0-19-284055-4
• von Neumann, John (1945), First Draft of a Report on the EDVAC (http://qss.stanford.edu/~godfrey/vonNeumann/vnedvac.pdf), retrieved August 24, 2011
• Ganesan, Deepak (2009), The Von Neumann Model (http://none.cs.umass.edu/~dganesan/courses/fall09/handouts/Chapter4.pdf), retrieved October 22, 2011

External links

• Harvard vs von Neumann (http://www.pic24micro.com/harvard_vs_von_neumann.html)
• A tool that emulates the behavior of a von Neumann machine (http://home.gna.org/vov/)


Harvard architecture


The Harvard architecture is a computer architecture with physically separate storage and signal pathways for instructions and data. The term originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters. These early machines had data storage entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not boot itself.

Today, most processors implement such separate signal pathways for performance reasons but actually implement a Modified Harvard architecture, so they can support tasks such as loading a program from disk storage as data and then executing it.

Memory details

In a Harvard architecture, there is no need to make the two memories share characteristics. In particular, the word width, timing, implementation technology, and memory address structure can differ. In some systems, instructions can be stored in read-only memory while data memory generally requires read-write memory. In some systems, there is much more instruction memory than data memory so instruction addresses are wider than data addresses.

Contrast with von Neumann architectures

Under pure von Neumann architecture the CPU can be either reading an instruction or reading/writing data from/to the memory. Both cannot occur at the same time since the instructions and data use the same bus system. In a computer using the Harvard architecture, the CPU can both read an instruction and perform a data memory access at the same time, even without a cache. A Harvard architecture computer can thus be faster for a given circuit complexity because instruction fetches and data access do not contend for a single memory pathway.

Also, a Harvard architecture machine has distinct code and data address spaces: instruction address zero is not the same as data address zero. Instruction address zero might identify a twenty-four bit value, while data address zero might indicate an eight bit byte that isn't part of that twenty-four bit value.

Contrast with Modified Harvard architecture

A modified Harvard architecture machine is very much like a Harvard architecture machine, but it relaxes the strict separation between instruction and data while still letting the CPU concurrently access two (or more) memory buses. The most common modification includes separate instruction and data caches backed by a common address space. While the CPU executes from cache, it acts as a pure Harvard machine. When accessing backing memory, it acts like a von Neumann machine (where code can be moved around like data, a powerful technique). This modification is widespread in modern processors such as the ARM architecture and x86 processors. It is sometimes loosely called a Harvard architecture, overlooking the fact that it is actually "modified".


Another modification provides a pathway between the instruction memory (such as ROM or flash) and the CPU to allow words from the instruction memory to be treated as read-only data. This technique is used in some microcontrollers, including the Atmel AVR. This allows constant data, such as text strings or function tables, to be accessed without first having to be copied into data memory, preserving scarce (and power-hungry) data memory for read/write variables. Special machine language instructions are provided to read data from the instruction memory. (This is distinct from instructions which themselves embed constant data, although for individual constants the two mechanisms can substitute for each other.)
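
As a concrete sketch of this technique on the Atmel AVR (assuming the avr-gcc/avr-libc toolchain; the table name and contents below are invented for illustration), a constant table can be placed in flash with PROGMEM and read back with the dedicated program-memory load:

#include <avr/pgmspace.h>
#include <stdint.h>

/* PROGMEM keeps the table in flash (instruction memory), so it occupies
 * none of the scarce SRAM. */
static const uint8_t sine_table[8] PROGMEM = {
    0, 49, 90, 117, 127, 117, 90, 49
};

uint8_t read_sample(uint8_t i)
{
    /* pgm_read_byte uses the special "load from program memory" instruction
     * (LPM) rather than an ordinary SRAM load. */
    return pgm_read_byte(&sine_table[i & 7]);
}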

Speed

In recent years, the speed of the CPU has grown many times in comparison to the access speed of the main memory. Care needs to be taken to reduce the number of times main memory is accessed in order to maintain performance. If, for instance, every instruction run in the CPU requires an access to memory, the computer gains nothing for increased CPU speed, a problem referred to as being "memory bound".

It is possible to make extremely fast memory but this is only practical for small amounts of memory for cost, power and signal routing reasons. The solution is to provide a small amount of very fast memory known as a CPU cache which holds recently accessed data. As long as the data that the CPU needs is in the cache, the performance hit is much smaller than it is when the cache has to turn around and get the data from the main memory.

Internal vs. external design

Modern high performance CPU chip designs incorporate aspects of both Harvard and von Neumann architecture. In particular, the Modified Harvard architecture is very common. CPU cache memory is divided into an instruction cache and a data cache. Harvard architecture is used as the CPU accesses the cache. In the case of a cache miss, however, the data is retrieved from the main memory, which is not formally divided into separate instruction and data sections, although it may well have separate memory controllers used for concurrent access to RAM, ROM and (NOR) flash memory.

Thus, while a von Neumann architecture is visible in some contexts, such as when data and code come through the same memory controller, the hardware implementation gains the efficiencies of the Harvard architecture for cache accesses and at least some main memory accesses.

In addition, CPUs often have write buffers which let CPUs proceed after writes to non-cached regions. The von Neumann nature of memory is then visible when instructions are written as data by the CPU and software must ensure that the caches (data and instruction) and write buffer are synchronized before trying to execute those just-written instructions.
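
A minimal sketch of that synchronization step, assuming a GCC- or Clang-compatible compiler and a buffer that has already been mapped with execute permission (the function and variable names are invented for illustration):

#include <stddef.h>
#include <string.h>

void finish_jit_block(void *exec_buf, const void *code, size_t len)
{
    /* These stores travel through the data cache and write buffers. */
    memcpy(exec_buf, code, len);

    /* Compiler builtin that flushes the written range out of the data cache
     * and invalidates the matching instruction-cache lines, so the next
     * instruction fetch from exec_buf sees the new code. On strongly
     * coherent targets such as x86 this may compile to nothing. */
    __builtin___clear_cache((char *)exec_buf, (char *)exec_buf + len);

    /* Only after this point is it safe to cast exec_buf to a function
     * pointer and call the freshly written instructions. */
}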

Modern uses of the Harvard architecture

The principal advantage of the pure Harvard architecture, simultaneous access to more than one memory system, has been reduced by modified Harvard processors using modern CPU cache systems. Relatively pure Harvard architecture machines are used mostly in applications where tradeoffs, such as the cost and power savings from omitting caches, outweigh the programming penalties from having distinct code and data address spaces.

• Digital signal processors (DSPs) generally execute small, highly-optimized audio or video processing algorithms. They avoid caches because their behavior must be extremely reproducible. The difficulties of coping with multiple address spaces are of secondary concern to speed of execution. As a result, some DSPs have multiple data memories in distinct address spaces to facilitate SIMD and VLIW processing. Texas Instruments TMS320 C55x processors, as one example, have multiple parallel data buses (two write, three read) and one instruction bus.


• Microcontrollers are characterized by having small amounts of program (flash memory) and data (SRAM) memory, with no cache, and take advantage of the Harvard architecture to speed processing by concurrent instruction and data access. The separate storage means the program and data memories can have different bit widths, for example using 16-bit wide instructions and 8-bit wide data. They also mean that instruction prefetch can be performed in parallel with other activities. Examples include the AVR by Atmel Corp, the PIC by Microchip Technology, Inc. and the ARM Cortex-M3 processor (not all ARM chips have Harvard architecture).

Even in these cases, it is common to have special instructions to access program memory as data for read-only tables, or for reprogramming.

External links

• Harvard vs Von Neumann [1]

References

[1] http://www.pic24micro.com/harvard_vs_von_neumann.html

Complex instruction set computing

A complex instruction set computer (CISC, /ˈsɪsk/) is a computer where single instructions can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store) and/or are capable of multi-step operations or addressing modes within single instructions. The term was retroactively coined in contrast to reduced instruction set computer (RISC).[1]

Examples of CISC instruction set architectures are System/360 through z/Architecture, PDP-11, VAX, Motorola 68k, and x86.

Historical design context

Incitements and benefits

Before the RISC philosophy became prominent, many computer architects tried to bridge the so-called semantic gap, i.e. to design instruction sets that directly supported high-level programming constructs such as procedure calls, loop control, and complex addressing modes, allowing data structure and array accesses to be combined into single instructions. Instructions are also typically highly encoded in order to further enhance the code density. The compact nature of such instruction sets results in smaller program sizes and fewer (slow) main memory accesses, which at the time (early 1960s and onwards) resulted in a tremendous savings on the cost of computer memory and disc storage, as well as faster execution. It also meant good programming productivity even in assembly language, as high level languages such as Fortran or Algol were not always available or appropriate (microprocessors in this category are sometimes still programmed in assembly language for certain types of critical applications).

New instructions

In the 1970s, analysis of high level languages indicated some complex machine language implementations, and it was determined that new instructions could improve performance. Some instructions were added that were never intended to be used in assembly language but fit well with compiled high level languages. Compilers were updated to take advantage of these instructions. The benefits of semantically rich instructions with compact encodings can be seen in modern processors as well, particularly in the high performance segment where caches are a central component (as opposed to most embedded systems). This is because these fast, but complex and expensive, memories are inherently limited in size, making compact code beneficial. Of course, the fundamental reason they are needed is that main memories (i.e. dynamic RAM today) remain slow compared to a (high performance) CPU-core.

Design issues

While many designs achieved the aim of higher throughput at lower cost and also allowed high-level language constructs to be expressed by fewer instructions, it was observed that this was not always the case. For instance, low-end versions of complex architectures (i.e. using less hardware) could lead to situations where it was possible to improve performance by not using a complex instruction (such as a procedure call or enter instruction), but instead using a sequence of simpler instructions.

One reason for this was that architects (microcode writers) sometimes "over-designed" assembler language instructions, i.e. including features which were not possible to implement efficiently on the basic hardware available. This could, for instance, be "side effects" (above conventional flags), such as the setting of a register or memory location that was perhaps seldom used; if this was done via ordinary (non duplicated) internal buses, or even the external bus, it would demand extra cycles every time, and thus be quite inefficient.

Even in balanced high performance designs, highly encoded and (relatively) high-level instructions could be complicated to decode and execute efficiently within a limited transistor budget. Such architectures therefore required a great deal of work on the part of the processor designer in cases where a simpler, but (typically) slower, solution based on decode tables and/or microcode sequencing is not appropriate. At a time when transistors and other components were a limited resource, this also left fewer components and less opportunity for other types of performance optimizations.

The RISC idea

The circuitry that performs the actions defined by the microcode in many (but not all) CISC processors is, in itself, a processor which in many ways is reminiscent in structure of very early CPU designs. In the early 1970s, this gave rise to ideas to return to simpler processor designs in order to make it more feasible to cope without (then relatively large and expensive) ROM tables and/or PLA structures for sequencing and/or decoding. The first (retroactively) RISC-labeled processor (IBM 801, IBM's Watson Research Center, mid-1970s) was a tightly pipelined simple machine originally intended to be used as an internal microcode kernel, or engine, in CISC designs, but also became the processor that introduced the RISC idea to a somewhat larger public. Simplicity and regularity also in the visible instruction set would make it easier to implement overlapping processor stages (pipelining) at the machine code level (i.e. the level seen by compilers). However, pipelining at that level was already used in some high performance CISC "supercomputers" in order to reduce the instruction cycle time (despite the complications of implementing within the limited component count and wiring complexity feasible at the time). Internal microcode execution in CISC processors, on the other hand, could be more or less pipelined depending on the particular design, and therefore more or less akin to the basic structure of RISC processors.

Superscalar

In a more modern context, the complex variable length encoding used by some of the typical CISC architectures makes it complicated, but still feasible, to build a superscalar implementation of a CISC programming model directly; the in-order superscalar original Pentium and the out-of-order superscalar Cyrix 6x86 are well known examples of this. The frequent memory accesses for operands of a typical CISC machine may limit the instruction level parallelism that can be extracted from the code, although this is strongly mediated by the fast cache structures used in modern designs, as well as by other measures. Due to inherently compact and semantically rich instructions, the average amount of work performed per machine code unit (i.e. per byte or bit) is higher for a CISC than a RISC processor, which may give it a significant advantage in a modern cache based implementation. (Whether the downsides versus the upsides justify a complex design or not is food for a never-ending debate in certain circles.)


Transistors for logic, PLAs, and microcode are no longer scarce resources; only large high-speed cache memories are limited by the maximum number of transistors today. Although complex, the transistor count of CISC decoders does not grow exponentially like the total number of transistors per processor (the majority typically used for caches). Together with better tools and enhanced technologies, this has led to new implementations of highly encoded and variable length designs without load-store limitations (i.e. non-RISC). This governs re-implementations of older architectures such as the ubiquitous x86 (see below) as well as new designs for microcontrollers for embedded systems, and similar uses. The superscalar complexity in the case of modern x86 was solved with dynamically issued and buffered micro-operations, i.e. indirect and dynamic superscalar execution; the Pentium Pro and AMD K5 are early examples of this. It allows a fairly simple superscalar design to be located after the (fairly complex) decoders (and buffers), giving, so to speak, the best of both worlds in many respects.

CISC and RISC terms

The terms CISC and RISC have become less meaningful with the continued evolution of both CISC and RISC designs and implementations. The first highly (or tightly) pipelined x86 implementations, the 486 designs from Intel, AMD, Cyrix, and IBM, supported every instruction that their predecessors did, but achieved maximum efficiency only on a fairly simple x86 subset that was only a little more than a typical RISC instruction set (i.e. without typical RISC load-store limitations). The Intel P5 Pentium generation was a superscalar version of these principles. However, modern x86 processors also (typically) decode and split instructions into dynamic sequences of internally-buffered micro-operations, which not only helps execute a larger subset of instructions in a pipelined (overlapping) fashion, but also facilitates more advanced extraction of parallelism out of the code stream, for even higher performance.

Contrary to popular simplifications (present also in some academic texts), not all CISCs are microcoded or have "complex" instructions. As CISC became a catch-all term meaning anything that's not a load-store (RISC) architecture, it's not the number of instructions, nor the complexity of the implementation or of the instructions themselves, that defines CISC, but the fact that arithmetic instructions also perform memory accesses. Compared to a small 8-bit CISC processor, a RISC floating-point instruction is complex. CISC does not even need to have complex addressing modes; 32 or 64-bit RISC processors may well have more complex addressing modes than small 8-bit CISC processors.

A PDP-10, a PDP-8, an Intel 386, an Intel 4004, a Motorola 68000, a System z mainframe, a Burroughs B5000, a VAX, a Zilog Z80000, and a 6502 all vary wildly in the number, sizes, and formats of instructions, the number, types, and sizes of registers, and the available data types. Some have hardware support for operations like scanning for a substring, arbitrary-precision BCD arithmetic, or transcendental functions, while others have only 8-bit addition and subtraction. But they are all in the CISC category because they have "load-operate" instructions that load and/or store memory contents within the same instructions that perform the actual calculations. For instance, the PDP-8, having only 8 fixed-length instructions and no microcode at all, is a CISC because of how the instructions work; PowerPC, which has over 230 instructions (more than some VAXes) and complex internals like register renaming and a reorder buffer, is a RISC; while Minimal CISC [2] has 8 instructions, but is clearly a CISC because it combines memory access and computation in the same instructions.

Some of the problems and contradictions in this terminology will perhaps disappear as more systematic terms, such as (non) load/store, become more popular and eventually replace the imprecise and slightly counter-intuitive RISC/CISC terms.


Notes

[1] Patterson, D. A. and Ditzel, D. R. 1980. The case for the reduced instruction set computer. SIGARCH Comput. Archit. News 8, 6 (October 1980), 25-33. DOI= http://doi.acm.org/10.1145/641914.641917
[2] http://www.cs.uiowa.edu/~jones/arch/cisc/

References

This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.

• Tanenbaum, Andrew S. (2006) Structured Computer Organization, Fifth Edition, Pearson Education, Inc., Upper Saddle River, NJ.

External links

• RISC vs. CISC comparison (http://www.pic24micro.com/cisc_vs_risc.html)

Reduced instruction set computing

Reduced instruction set computing, or RISC (/ˈrɪsk/), is a CPU design strategy based on the insight that simplified (as opposed to complex) instructions can provide higher performance if this simplicity enables much faster execution of each instruction. A computer based on this strategy is a reduced instruction set computer (also RISC). There are many proposals for precise definitions,[1] but the term is slowly being replaced by the more descriptive load-store architecture. Well-known RISC families include DEC Alpha, AMD 29k, ARC, ARM, Atmel AVR, Blackfin, MIPS, PA-RISC, Power (including PowerPC), SuperH, and SPARC.

Some aspects attributed to the first RISC-labeled designs around 1975 include the observations that the memory-restricted compilers of the time were often unable to take advantage of features intended to facilitate manual assembly coding, and that complex addressing modes take many cycles to perform due to the required additional memory accesses. It was argued that such functions would be better performed by sequences of simpler instructions if this could yield implementations small enough to leave room for many registers,[2] reducing the number of slow memory accesses. In these simple designs, most instructions are of uniform length and similar structure, arithmetic operations are restricted to CPU registers and only separate load and store instructions access memory. These properties enable a better balancing of pipeline stages than before, making RISC pipelines significantly more efficient and allowing higher clock frequencies.
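
To make the load-store restriction concrete, the sketch below shows a single C statement and, in comments, how it might be lowered on a register-memory machine versus a load-store machine. The mnemonics are generic pseudo-assembly invented for illustration, not taken from any particular instruction set.

/* One source statement, two possible lowerings (shown only as comments). */
int add_to_total(int *total, const int *value)
{
    *total += *value;

    /* A register-memory (CISC-style) machine might encode the statement as a
     * single instruction with memory operands, for example:
     *     ADD  [total], [value]
     *
     * A load-store (RISC-style) machine only allows arithmetic between
     * registers, so the compiler emits an explicit sequence, for example:
     *     LOAD  r1, [total]
     *     LOAD  r2, [value]
     *     ADD   r1, r1, r2
     *     STORE [total], r1
     * Each instruction performs at most one data-memory access, which keeps
     * the pipeline stages uniform. */
    return *total;
}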

Non-RISC design philosophy

In the early days of the computer industry, programming was done in assembly language or machine code, which encouraged powerful and easy-to-use instructions. CPU designers therefore tried to make instructions that would do as much work as feasible. With the advent of higher level languages, computer architects also started to create dedicated instructions to directly implement certain central mechanisms of such languages. Another general goal was to provide every possible addressing mode for every instruction, known as orthogonality, to ease compiler implementation. Arithmetic operations could therefore often have results as well as operands directly in memory (in addition to register or immediate).

The attitude at the time was that hardware design was more mature than compiler design, so this was in itself also a reason to implement parts of the functionality in hardware or microcode rather than in a memory constrained compiler (or its generated code) alone. This design philosophy became retroactively termed complex instruction set computing (CISC) after the RISC philosophy came onto the scene.


CPUs also had relatively few registers, for several reasons:

• More registers also implies more time-consuming saving and restoring of register contents on the machine stack.
• A large number of registers requires a large number of instruction bits as register specifiers, meaning less dense code (see below).
• CPU registers are more expensive than external memory locations; large register sets were cumbersome with limited circuit boards or chip integration.

An important force encouraging complexity was very limited main memories (on the order of kilobytes). It was therefore advantageous for the density of information held in computer programs to be high, leading to features such as highly encoded, variable length instructions, doing data loading as well as calculation (as mentioned above). These issues were of higher priority than the ease of decoding such instructions.

An equally important reason was that main memories were quite slow (a common type was ferrite core memory); by using dense information packing, one could reduce the frequency with which the CPU had to access this slow resource. Modern computers face similar limiting factors: main memories are slow compared to the CPU and the fast cache memories employed to overcome this are limited in size. This may partly explain why highly encoded instruction sets have proven to be as useful as RISC designs in modern computers.

RISC design philosophy

In the mid-1970s, researchers (particularly John Cocke) at IBM (and similar projects elsewhere) demonstrated that the majority of combinations of these orthogonal addressing modes and instructions were not used by most programs generated by compilers available at the time. It proved difficult in many cases to write a compiler with more than limited ability to take advantage of the features provided by conventional CPUs.

It was also discovered that, on microcoded implementations of certain architectures, complex operations tended to be slower than a sequence of simpler operations doing the same thing. This was in part an effect of the fact that many designs were rushed, with little time to optimize or tune every instruction, but only those used most often. One infamous example was the VAX's INDEX instruction.[3]

As mentioned elsewhere, core memory had long since been slower than many CPU designs. The advent of semiconductor memory reduced this difference, but it was still apparent that more registers (and later caches) would allow higher CPU operating frequencies. Additional registers would require sizeable chip or board areas which, at the time (1975), could be made available if the complexity of the CPU logic was reduced.

Yet another impetus of both RISC and other designs came from practical measurements on real-world programs. Andrew Tanenbaum summed up many of these, demonstrating that processors often had oversized immediates. For instance, he showed that 98% of all the constants in a program would fit in 13 bits, yet many CPU designs dedicated 16 or 32 bits to store them. This suggests that, to reduce the number of memory accesses, a fixed length machine could store constants in unused bits of the instruction word itself, so that they would be immediately ready when the CPU needs them (much like immediate addressing in a conventional design). This required small opcodes in order to leave room for a reasonably sized constant in a 32-bit instruction word.

Since many real-world programs spend most of their time executing simple operations, some researchers decided to focus on making those operations as fast as possible. The clock rate of a CPU is limited by the time it takes to execute the slowest sub-operation of any instruction; decreasing that cycle-time often accelerates the execution of other instructions.[4] The focus on "reduced instructions" led to the resulting machine being called a "reduced instruction set computer" (RISC). The goal was to make instructions so simple that they could easily be pipelined, in order to achieve a single clock throughput at high frequencies.

Later, it was noted that one of the most significant characteristics of RISC processors was that external memory was only accessible by a load or store instruction. All other instructions were limited to internal registers. This simplified many aspects of processor design: allowing instructions to be fixed-length, simplifying pipelines, and isolating the logic for dealing with the delay in completing a memory access (cache miss, etc.) to only two instructions. This led to RISC designs being referred to as load/store architectures.[5]
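
Tanenbaum's immediate-width observation above corresponds to a check that an assembler or compiler for a fixed-length instruction set must make: does a constant fit in the instruction's immediate field, or must it be materialized another way? A minimal sketch follows; the 13-bit width is simply the figure quoted above, not the rule of any particular instruction set.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* True if `value` fits in a signed two's-complement field of `bits` bits. */
static bool fits_in_signed_bits(int64_t value, unsigned bits)
{
    int64_t min = -(INT64_C(1) << (bits - 1));      /* -4096 for 13 bits */
    int64_t max =  (INT64_C(1) << (bits - 1)) - 1;  /* +4095 for 13 bits */
    return value >= min && value <= max;
}

int main(void)
{
    printf("%d\n", fits_in_signed_bits(1000, 13));    /* 1: encodable inline */
    printf("%d\n", fits_in_signed_bits(100000, 13));  /* 0: needs a load or a
                                                         multi-instruction sequence */
    return 0;
}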

One more issue is that complex instructions are difficult to restart, e.g. following a page fault. In some cases, restarting from the beginning will work (although wasteful), but in many cases this would give incorrect results. Therefore the machine needs to have some hidden state to remember which parts went through and what needs to be done. With a load/store machine, the program counter supplies all the needed information.

Instruction set size and alternative terminology

A common misunderstanding of the phrase "reduced instruction set computer" is the mistaken idea that instructions are simply eliminated, resulting in a smaller set of instructions. In fact, over the years, RISC instruction sets have grown in size, and today many of them have a larger set of instructions than many CISC CPUs.[6][7] Some RISC processors such as the PowerPC have instruction sets as large as, say, the CISC IBM System/370; and conversely, the DEC PDP-8 (clearly a CISC CPU because many of its instructions involve multiple memory accesses) has only 8 basic instructions, plus a few extended instructions.

The term "reduced" in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced (at most a single data memory cycle) compared to the "complex instructions" of CISC CPUs that may require dozens of data memory cycles in order to execute a single instruction.[8] In particular, RISC processors typically have separate instructions for I/O and data processing; as a consequence, industry observers have started using the terms "register-register" or "load-store" to describe RISC processors.

Some CPUs have been retroactively dubbed RISC; a Byte magazine article once referred to the 6502 as "the original RISC processor" due to its simplistic and nearly orthogonal instruction set (most instructions work with most addressing modes) as well as its 256 zero-page "registers". The 6502 is no load/store design, however: arithmetic operations may read memory, and instructions like INC and ROL even modify memory. Furthermore, orthogonality is equally often associated with "CISC". However, the 6502 may be regarded as similar to RISC (and early machines) in the fact that it uses no microcode sequencing. As for the well known fact that it employed longer but fewer clock cycles compared to many contemporary microprocessors, this was due to a more asynchronous design with less subdivision of internal machine cycles. This is similar to early machines, but not to RISC.

Some CPUs have been specifically designed to have a very small set of instructions, but these designs are very different from classic RISC designs, so they have been given other names such as minimal instruction set computer (MISC), zero instruction set computer (ZISC), one instruction set computer (OISC), transport triggered architecture (TTA), etc.

Alternatives

RISC was developed as an alternative to what is now known as CISC. Over the years, other strategies have been implemented as alternatives to RISC and CISC. Some examples are VLIW, MISC, OISC, massive parallel processing, systolic array, reconfigurable computing, and dataflow architecture.

Typical characteristics of RISC

For any given level of general performance, a RISC chip will typically have far fewer transistors dedicated to the core logic, which originally allowed designers to increase the size of the register set and increase internal parallelism.

Other features which are typically found in RISC architectures are:

• Uniform instruction format, using a single word with the opcode in the same bit positions in every instruction, demanding less decoding;


• Identical general purpose registers, allowing any register to be used in any context, simplifying compiler design (although normally there are separate floating point registers);

• Simple addressing modes, with complex addressing performed via sequences of arithmetic and/or load-store operations;

• Few data types in hardware: some CISCs have byte-string instructions or support complex numbers, which are so far unlikely to be found on a RISC.
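As a concrete illustration of the first point, the short C sketch below decodes a fixed 32-bit instruction word by extracting fields that always sit at the same bit positions; the field layout (6-bit opcode, 5-bit register numbers, 16-bit immediate) is invented for this example rather than taken from any particular RISC ISA:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical fixed 32-bit format: opcode in bits 31..26, rd in 25..21,
       rs1 in 20..16, rs2 in 15..11, immediate in 15..0.  Because every
       instruction uses the same layout, decoding is a few shifts and masks. */
    typedef struct {
        unsigned opcode, rd, rs1, rs2, imm;
    } decoded;

    static decoded decode(uint32_t word)
    {
        decoded d;
        d.opcode = (word >> 26) & 0x3F;
        d.rd     = (word >> 21) & 0x1F;
        d.rs1    = (word >> 16) & 0x1F;
        d.rs2    = (word >> 11) & 0x1F;
        d.imm    =  word        & 0xFFFF;
        return d;
    }

    int main(void)
    {
        decoded d = decode(0x04A30007u);   /* an arbitrary example word */
        printf("opcode=%u rd=%u rs1=%u rs2=%u imm=%u\n",
               d.opcode, d.rd, d.rs1, d.rs2, d.imm);
        return 0;
    }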

Exceptions abound, of course, within both CISC and RISC.

RISC designs are also more likely to feature a Harvard memory model, where the instruction stream and the data stream are conceptually separated; this means that modifying the memory where code is held might not have any effect on the instructions executed by the processor (because the CPU has a separate instruction and data cache), at least until a special synchronization instruction is issued. On the upside, this allows both caches to be accessed simultaneously, which can often improve performance.

Many early RISC designs also shared the characteristic of having a branch delay slot. A branch delay slot is an instruction space immediately following a jump or branch. The instruction in this space is executed whether or not the branch is taken (in other words, the effect of the branch is delayed). This instruction keeps the ALU of the CPU busy for the extra time normally needed to perform a branch. Nowadays the branch delay slot is considered an unfortunate side effect of a particular strategy for implementing some RISC designs, and modern RISC designs generally do away with it (such as PowerPC and more recent versions of SPARC and MIPS).
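The effect of a branch delay slot can be shown with a toy instruction-stepping loop in C; the three-instruction "ISA" below is invented purely for illustration. The add placed in the slot right after the branch still executes, while the instruction at the skipped address does not:

    #include <stdio.h>

    /* Toy illustration of a branch delay slot (instruction set invented for
       this sketch).  A taken branch redirects the PC only after the
       instruction in the following slot has executed. */
    enum op { ADD_IMM, BRANCH, HALT };
    struct insn { enum op op; int arg; };

    int main(void)
    {
        struct insn prog[] = {
            { ADD_IMM, 1   },  /* 0: acc += 1                                   */
            { BRANCH,  4   },  /* 1: branch to 4 ...                            */
            { ADD_IMM, 10  },  /* 2: ... but this delay-slot instruction runs   */
            { ADD_IMM, 100 },  /* 3: skipped                                    */
            { HALT,    0   },  /* 4: stop                                       */
        };
        int acc = 0, pc = 0, pending_target = -1;

        for (;;) {
            struct insn i = prog[pc];
            int next = (pending_target >= 0) ? pending_target : pc + 1;
            pending_target = -1;
            if (i.op == HALT) break;
            if (i.op == ADD_IMM) acc += i.arg;
            if (i.op == BRANCH)  pending_target = i.arg;  /* applied after the slot */
            pc = next;
        }
        printf("acc = %d\n", acc);  /* prints 11: the delay-slot add ran, the add of 100 did not */
        return 0;
    }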

Early RISC

The first system that would today be known as RISC was the CDC 6600 supercomputer, designed in 1964, a decade before the term was invented. The CDC 6600 had a load-store architecture with only two addressing modes (register+register, and register+immediate constant) and 74 opcodes (whereas an Intel 8086 has 400). The 6600 had eleven pipelined functional units for arithmetic and logic, plus five load units and two store units; the memory had multiple banks so all load-store units could operate at the same time. The basic clock cycle/instruction issue rate was 10 times faster than the memory access time. Jim Thornton and Seymour Cray designed it as a number-crunching CPU supported by 10 simple computers called "peripheral processors" to handle I/O and other operating system functions.[9] Thus the joking comment later that the acronym RISC actually stood for "Really Invented by Seymour Cray".

Another early load-store machine was the Data General Nova minicomputer, designed in 1968 by Edson de Castro. It had an almost pure RISC instruction set, remarkably similar to that of today's ARM processors; however it has not been cited as having influenced the ARM designers, although Novas were in use at the University of Cambridge Computer Laboratory in the early 1980s.

The earliest attempt to make a chip-based RISC CPU was a project at IBM which started in 1975. Named after the building where the project ran, the work led to the IBM 801 CPU family, which was used widely inside IBM hardware. The 801 was eventually produced in a single-chip form as the ROMP in 1981, which stood for "Research OPD [Office Products Division] Micro Processor". As the name implies, this CPU was designed for "mini" tasks, and when IBM released the IBM RT-PC based on the design in 1986, the performance was not acceptable. Nevertheless the 801 inspired several research projects, including new ones at IBM that would eventually lead to their POWER system.

The most public RISC designs, however, were the results of university research programs run with funding from the DARPA VLSI Program. The VLSI Program, practically unknown today, led to a huge number of advances in chip design, fabrication, and even computer graphics.

The Berkeley RISC project started in 1980 under the direction of David Patterson and Carlo H. Sequin, based on gaining performance through the use of pipelining and an aggressive use of a technique known as register windowing.


In a normal CPU, one has a small number of registers, and a program can use any register at any time. In a CPU with register windows, there are a huge number of registers, e.g. 128, but programs can only use a small number of them, e.g. eight, at any one time. A program that limits itself to eight registers per procedure can make very fast procedure calls: the call simply moves the window "down" by eight, to the set of eight registers used by that procedure, and the return moves the window back. (On a normal CPU, most calls must save at least a few registers' values to the stack in order to use those registers as working space, and restore their values on return.) A deliberately simplified model of this mechanism is sketched at the end of this section.

The RISC project delivered the RISC-I processor in 1982. Consisting of only 44,420 transistors (compared with averages of about 100,000 in newer CISC designs of the era), RISC-I had only 32 instructions, and yet completely outperformed any other single-chip design. They followed this up with the 40,760-transistor, 39-instruction RISC-II in 1983, which ran over three times as fast as RISC-I.

At about the same time, John L. Hennessy started a similar project called MIPS at Stanford University in 1981. MIPS focused almost entirely on the pipeline, making sure it could be run as "full" as possible. Although pipelining was already in use in other designs, several features of the MIPS chip made its pipeline far faster. The most important, and perhaps annoying, of these features was the demand that all instructions be able to complete in one cycle. This demand allowed the pipeline to be run at much higher data rates (there was no need for induced delays) and is responsible for much of the processor's performance. However, it also had the negative side effect of eliminating many potentially useful instructions, like multiply or divide.

In the early years, the RISC efforts were well known, but largely confined to the university labs that had created them. The Berkeley effort became so well known that it eventually became the name for the entire concept. Many in the computer industry argued that the performance benefits were unlikely to translate into real-world settings because of the decreased memory efficiency of multiple instructions, and that this was why no one was using them. But starting in 1986, all of the RISC research projects started delivering products.
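The following C sketch models the window mechanism described above in a deliberately simplified way: 128 physical registers, non-overlapping windows of eight, and a call that slides the window instead of saving registers to a memory stack. Real register-window machines (SPARC, for example) overlap adjacent windows for argument passing and spill old windows to memory when they run out; none of that is modeled here, and all names are invented for the example:

    #include <assert.h>
    #include <stdint.h>

    /* Toy register windowing: the program only ever names r0..r7, and the
       "hardware" adds the current window base into a large physical file. */
    enum { PHYS_REGS = 128, WINDOW = 8 };

    static uint32_t regfile[PHYS_REGS];
    static int window_base = 0;               /* start of the current window */

    static uint32_t *reg(int r)
    {
        assert(r >= 0 && r < WINDOW);
        return &regfile[window_base + r];
    }

    /* A call slides the window instead of saving registers to memory. */
    static void call_enter(void)  { window_base += WINDOW; assert(window_base + WINDOW <= PHYS_REGS); }
    static void call_return(void) { window_base -= WINDOW; assert(window_base >= 0); }

    static uint32_t callee(void)
    {
        call_enter();
        *reg(0) = 7;              /* the callee's r0 is a different physical register */
        uint32_t result = *reg(0);
        call_return();
        return result;
    }

    int main(void)
    {
        *reg(0) = 42;             /* caller's r0 */
        uint32_t r = callee();
        assert(*reg(0) == 42);    /* untouched: no save/restore to memory was needed */
        return (int)r - 7;        /* exits 0 */
    }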

Later RISC

Berkeley's research was not directly commercialized, but the RISC-II design was used by Sun Microsystems to develop the SPARC, by Pyramid Technology to develop their line of mid-range multi-processor machines, and by almost every other company a few years later. It was Sun's use of a RISC chip in their new machines that demonstrated that RISC's benefits were real, and their machines quickly outpaced the competition and essentially took over the entire workstation market.

John Hennessy left Stanford (temporarily) to commercialize the MIPS design, starting the company known as MIPS Computer Systems. Their first design was a second-generation MIPS chip known as the R2000. MIPS designs went on to become one of the most used RISC chips when they were included in the PlayStation and Nintendo 64 game consoles. Today they are one of the most common embedded processors in use for high-end applications.

IBM learned from the RT-PC failure and went on to design the RS/6000 based on their new POWER architecture. They then moved their existing AS/400 systems to POWER chips, and found much to their surprise that even the very complex instruction set ran considerably faster. POWER would also find itself moving "down" in scale to produce the PowerPC design, which eliminated many of the "IBM only" instructions and created a single-chip implementation. Today the PowerPC is one of the most commonly used CPUs for automotive applications (some cars have more than 10 of them inside). It was also the CPU used in most Apple Macintosh machines from 1994 to 2006. (Starting in February 2006, Apple switched their main production line to Intel x86 processors.)

Almost all other vendors quickly joined. From the UK, similar research efforts resulted in the INMOS transputer, the Acorn Archimedes and the Advanced RISC Machine line, which is a huge success today. Most mobile phones and MP3 players use ARM processors. Companies with existing CISC designs also quickly joined the revolution. Intel released the i860 and i960 by the late 1980s, although they were not very successful. Motorola built a new design called the 88000 in homage to their famed CISC 68000, but it saw almost no use. The company eventually abandoned it and joined IBM to produce the PowerPC. AMD released their 29000, which would go on to become the most popular RISC design of the early 1990s.


Today the vast majority of all 32-bit CPUs in use, including most 32-bit microcontrollers, are RISC CPUs. RISC design techniques have become dominant for low-power 32-bit CPUs. Embedded systems are by far the largest market for processors: while a family may own one or two PCs, their car(s), cell phones, and other devices may contain a total of dozens of embedded processors. RISC had also completely taken over the market for larger workstations for much of the 90s (until taken back by inexpensive PC-based solutions). After the release of the Sun SPARCstation, the other vendors rushed to compete with RISC-based solutions of their own. As of June 2008, the #1 spot among supercomputers was held by IBM's Roadrunner system, which uses Power Architecture-based Cell processors[10] to provide most of its computing power; however, the #1 spot as of November 2010 was held by Tianhe-1A, which uses a combination of Intel Xeon processors, Nvidia Tesla GPGPUs, and custom processors, and most of the other machines in the top 10 spots use x86 CISC processors instead.[11]

RISC and x86

Despite many successes, RISC has made few inroads into the desktop PC and commodity server markets, where Intel's x86 platform remains the dominant processor architecture. There are three main reasons for this:

1. A very large base of proprietary PC applications are written for x86 or compiled into x86 machine code, whereas no RISC platform has a similar installed base; hence PC users were locked into the x86.

2. Although RISC was indeed able to scale up in performance quite quickly and cheaply, Intel took advantage of its large market by spending vast amounts of money on processor development. Intel could spend many times as much as any RISC manufacturer on improving low level design and manufacturing. The same could not be said about smaller firms like Cyrix and NexGen, but they realized that they could apply (tightly) pipelined design practices also to the x86 architecture, just as in the 486 and Pentium. The 6x86 and MII series did exactly this, but were more advanced: they implemented superscalar speculative execution via register renaming, directly at the x86-semantic level. Others, like the Nx586 and AMD K5, did the same, but indirectly, via dynamic microcode buffering and semi-independent superscalar scheduling and instruction dispatch at the micro-operation level (older or simpler "CISC" designs typically execute rigid micro-operation sequences directly); a simplified sketch of this cracking into micro-operations follows this list. The first available chip deploying such dynamic buffering and scheduling techniques was the NexGen Nx586, released in 1994; the AMD K5 was severely delayed and released in 1995.

3. Later, more powerful processors, such as the Intel P6, AMD K6, AMD K7, and Pentium 4, employed similar dynamic buffering and scheduling principles and implemented loosely coupled superscalar (and speculative) execution of micro-operation sequences generated from several parallel x86 decoding stages. Today, these ideas have been further refined (some x86 instruction pairs are instead merged into a more complex micro-operation, for example) and are still used by modern x86 processors such as the Intel Core 2 and AMD K8.
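As a rough picture of the micro-operation cracking mentioned in points 2 and 3, the C sketch below decomposes a CISC-style read-modify-write instruction ("add a register into a memory location") into three RISC-like micro-operations that a buffering scheduler could then dispatch independently. The uop structure, names, and fields are invented for illustration and do not reflect any vendor's actual internal format:

    #include <stdio.h>

    /* Invented micro-operation format, for illustration only. */
    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;
    typedef struct { uop_kind kind; int dst; int src; int addr; } uop;

    /* Crack "ADD [addr], reg" (read-modify-write on memory) into three
       register-register style micro-operations using a temporary register t0. */
    static int crack_add_mem_reg(int addr, int reg_src, uop out[3])
    {
        out[0] = (uop){ UOP_LOAD,   0, -1,      addr }; /* t0 <- mem[addr]  */
        out[1] = (uop){ UOP_ADD,    0, reg_src, -1   }; /* t0 <- t0 + reg   */
        out[2] = (uop){ UOP_STORE, -1, 0,       addr }; /* mem[addr] <- t0  */
        return 3;
    }

    int main(void)
    {
        uop q[3];
        int n = crack_add_mem_reg(0x1000, 3, q);
        for (int i = 0; i < n; i++)
            printf("uop %d: kind=%d dst=%d src=%d addr=%d\n",
                   i, q[i].kind, q[i].dst, q[i].src, q[i].addr);
        return 0;
    }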

While early RISC designs differed significantly from contemporary CISC designs, by 2000 the highest performing CPUs in the RISC line were almost indistinguishable from the highest performing CPUs in the CISC line.[12][13][14]

A number of vendors, including Qualcomm, are attempting to enter the PC market with ARM-based devices dubbed smartbooks, riding on the netbook trend and the rising acceptance of Linux distributions, a number of which already have ARM builds.[15] Other companies are choosing to use Windows CE.

Diminishing benefits for desktops and servers

Over time, improvements in chip fabrication techniques have improved performance exponentially, according to Moore's law, whereas architectural improvements have been comparatively small. Modern CISC implementations have adopted many of the performance features introduced by RISC, such as single-clock throughput of simple instructions. Compilers have also become more sophisticated, and are better able to exploit complex as well as simple instructions on CISC architectures, often carefully optimizing both instruction selection and instruction and data ordering in pipelines and caches. The RISC-CISC distinction has blurred significantly in practice.


Expanding benefits for mobile and embedded devices

The hardware translation from x86 instructions into internal RISC-like micro-operations, which costs relatively little in microprocessors for desktops and servers as Moore's law provides more transistors, becomes significant in area and energy for mobile and embedded devices. Hence, ARM processors dominate cell phones and tablets today just as x86 processors dominate PCs.

RISC success stories

RISC designs have led to a number of successful platforms and architectures, some of the larger ones being:

• ARM – The ARM architecture dominates the market for low power and low cost embedded systems (typically 100–1200 MHz in 2011). ARM Ltd., which licenses intellectual property rather than manufacturing chips, reported that 10 billion licensed chips had been shipped as of early 2008.[16] The various generations, variants and implementations of the ARM core are deployed in over 90% of mobile electronics devices, including almost all modern mobile phones, MP3 players and portable video players. Some high profile examples are:
  • Apple iPhone, iPod and iPad
  • Palm and PocketPC PDAs and smartphones
  • RIM BlackBerry email devices and smartphones
  • Microsoft Windows Mobile devices
  • Nintendo Game Boy Advance
  • Nintendo DS
  • Sony Network Walkman
  • Android smartphones and tablets
• MIPS's MIPS line, found in most SGI computers and the PlayStation, PlayStation 2, Nintendo 64 and PlayStation Portable game consoles, and residential gateways like the Linksys WRT54G series.
• IBM's and Freescale's (formerly Motorola SPS) Power Architecture, used in all of IBM's supercomputers, midrange servers and workstations, in Apple's PowerPC-based Macintosh computers (discontinued), in Nintendo's GameCube and Wii, Microsoft's Xbox 360 and Sony's PlayStation 3 game consoles, in EMC's DMX range of the Symmetrix SAN, and in many embedded applications like printers and cars.
• SPARC, by Oracle (formerly Sun Microsystems) and Fujitsu.
• Hewlett-Packard's PA-RISC, also known as HP-PA, discontinued December 31, 2008.
• Alpha, used in single-board computers, workstations, servers and supercomputers from Digital Equipment Corporation, Compaq and HP, discontinued as of 2007.
• XAP processor, used in many low-power wireless (Bluetooth, Wi-Fi) chips from CSR.
• Hitachi's SuperH, originally in wide use in the Sega Super 32X, Saturn and Dreamcast, now at the heart of many consumer electronics devices. The SuperH is the base platform for the Mitsubishi–Hitachi joint semiconductor group. The two groups merged in 2002, dropping Mitsubishi's own RISC architecture, the M32R.
• Atmel AVR, used in a variety of products ranging from Xbox handheld controllers to BMW cars.


Notes and references

[1] Stanford sophomore students defined RISC as "a type of microprocessor architecture that utilizes a small, highly-optimized set of instructions, rather than a more specialized set of instructions often found in other types of architectures" (http://www-cs-faculty.stanford.edu/~eroberts/courses/soco/projects/2000-01/risc/whatis/index.html).
[2] In place of complex logic or microcode; transistors were a scarce resource then.
[3] Patterson, D. A. and Ditzel, D. R. 1980. The case for the reduced instruction set computer. SIGARCH Comput. Archit. News 8, 6 (October 1980), 25-33. DOI: http://doi.acm.org/10.1145/641914.641917
[4] "Microprocessors From the Programmer's Perspective" (http://www.ddj.com/architect/184408418) by Andrew Schulman, 1990.
[5] Kevin Dowd. High Performance Computing. O'Reilly & Associates, Inc. 1993.
[6] "RISC vs. CISC: the Post-RISC Era" (http://arstechnica.com/cpu/4q99/risc-cisc/rvc-5.html#Branch) by Jon "Hannibal" Stokes (Ars Technica).
[7] "RISC versus CISC" (http://www.borrett.id.au/computing/art-1991-06-02.htm) by Lloyd Borrett, Australian Personal Computer, June 1991.
[8] "Guide to RISC Processors for Programmers and Engineers", Chapter 3: "RISC Principles" (http://www.springerlink.com/content/u5t457g61q637v66/) by Sivarama P. Dandamudi, 2005, ISBN 978-0-387-21017-9. "the main goal was not to reduce the number of instructions, but the complexity"
[9] Grishman, Ralph. Assembly Language Programming for the Control Data 6000 Series. Algorithmics Press. 1974. p. 12.
[10] TOP500 List - June 2008 (1-100) (http://www.top500.org/list/2008/06/100).
[11] TOP 10 Sites for November 2010 (http://www.top500.org/lists/2010/11).
[12] "Schaum's Outline of Computer Architecture" (http://books.google.com/books?id=24V00tD7HeAC&pg=PT105) by Nicholas P. Carter, 2002, p. 96, ISBN 007136207X.
[13] "CISC, RISC, and DSP Microprocessors" (http://www.ifp.uiuc.edu/~jones/RISCvCISCvDSP.pdf) by Douglas L. Jones, 2000.
[14] "A History of Apple's Operating Systems" (http://www.kernelthread.com/mac/oshistory/5.html) by Amit Singh. "the line between RISC and CISC has been growing fuzzier over the years."
[15] "Meet smartbooks" (http://www.hellosmartbook.com/index.php).
[16] "ARM Ships 10 Billionth Processor" (http://www.efytimes.com/efytimes/24375/news.htm). (28 January 2008). EFYTimes.

Further reading

Television
• Computer Chronicles (1986). "RISC" (http://www.archive.org/details/RISC1986).

External links
• RISC vs. CISC (http://www-cs-faculty.stanford.edu/~eroberts/courses/soco/projects/2000-01/risc/risccisc/)
• What is RISC (http://www-cs-faculty.stanford.edu/~eroberts/courses/soco/projects/2000-01/risc/whatis/)
• RISC vs. CISC from historical perspective (http://www.cpushack.net/CPU/cpuAppendA.html)




License

Creative Commons Attribution-Share Alike 3.0 Unported (http://creativecommons.org/licenses/by-sa/3.0/)