


Page 1: The Von Neumann Computer Model (source: meseec.ce.rit.edu/eecc551-fall2001/551-9-6-2001.pdf)

EECC551 - Shaaban, #1 Lec # 1 Fall 2001 9-6-2001

The Von Neumann Computer Model
• Partitioning of the computing engine into components:

– Central Processing Unit (CPU): Control Unit (instruction decode, sequencing of operations), Datapath (registers, arithmetic and logic unit, buses).

– Memory: Instruction and operand storage.
– Input/Output (I/O) sub-system: I/O bus, interfaces, devices.
– The stored program concept: Instructions from an instruction set are fetched from a common memory and executed one at a time.

[Diagram: Computer System = CPU (Control; Datapath: registers, ALU, buses) + Memory (instructions, data) + Input/Output (I/O devices)]

Page 2: Generic CPU Machine Instruction Execution Steps

• Instruction Fetch: Obtain instruction from program storage.
• Instruction Decode: Determine required actions and instruction size.
• Operand Fetch: Locate and obtain operand data.
• Execute: Compute result value or status.
• Result Store: Deposit results in storage for later use.
• Next Instruction: Determine successor or next instruction.
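The six steps above can be sketched as a minimal interpreter loop. This is a hypothetical accumulator machine invented for illustration (LOAD/ADD/STORE/HALT are not any real ISA):

```python
# Toy accumulator machine illustrating the six generic execution steps.
MEM = {0: ("LOAD", 100), 1: ("ADD", 101), 2: ("STORE", 102), 3: ("HALT", None),
       100: 7, 101: 35, 102: 0}

def run(mem):
    pc, acc = 0, 0
    while True:
        instr = mem[pc]             # 1. Instruction fetch: obtain instruction from program storage
        op, addr = instr            # 2. Instruction decode: determine required actions
        operand = mem.get(addr)     # 3. Operand fetch: locate and obtain operand data
        if op == "HALT":
            return acc
        if op == "LOAD":            # 4. Execute: compute result value
            acc = operand
        elif op == "ADD":
            acc = acc + operand
        elif op == "STORE":
            mem[addr] = acc         # 5. Result store: deposit result for later use
        pc += 1                     # 6. Next instruction: determine successor

print(run(dict(MEM)))  # 7 + 35 = 42
```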

Page 3: Hardware Components of Any Computer

[Diagram: Computer = Processor (active: Control Unit, Datapath) + Memory (passive: where programs and data live when running) + Devices (Input: keyboard, mouse, disk; Output: display, printer, disk)]

Five classic components of all computers:
1. Control Unit; 2. Datapath; 3. Memory; 4. Input; 5. Output
(The Control Unit and Datapath together form the Processor.)

Page 4: CPU Organization
• Datapath Design:

– Capabilities & performance characteristics of principal Functional Units (FUs):
• (e.g., Registers, ALU, Shifters, Logic Units, ...)
– Ways in which these components are interconnected (bus connections, multiplexors, etc.).
– How information flows between components.

• Control Unit Design:
– Logic and means by which such information flow is controlled.
– Control and coordination of FU operation to realize the targeted Instruction Set Architecture (can be implemented using either a finite state machine or a microprogram).
• Hardware description with a suitable language, possibly using Register Transfer Notation (RTN).

Page 5: Recent Trends in Computer Design
• The cost/performance ratio of computing systems has seen a steady decline due to advances in:

– Integrated circuit technology: decreasing feature size, λ
• Clock rate improves roughly in proportion to the improvement in λ.
• Number of transistors improves in proportion to λ² (or faster).

– Architectural improvements in CPU design.

• Microprocessor systems directly reflect IC improvement in terms of a yearly 35 to 55% improvement in performance.
• Assembly language has been mostly eliminated and replaced by alternatives such as C or C++.
• Standard operating systems (UNIX, NT) lowered the cost of introducing new architectures.

• Emergence of RISC architectures and RISC-core architectures.

• Adoption of quantitative approaches to computer design based onempirical performance observations.

Page 6: 1988 Computer Food Chain

[Diagram: PC, Workstation, Minicomputer, Mainframe, Mini-supercomputer, Supercomputer, Massively Parallel Processors]

Page 7: 1997 Computer Food Chain

[Diagram: PDA, PC, Workstation, Server, Minicomputer, Mini-supercomputer, Massively Parallel Processors, Mainframe, Supercomputer]

Page 8: Processor Performance Trends

[Plot: relative performance (log scale, 0.1 to 1000) vs. year (1965 to 2000) for supercomputers, mainframes, minicomputers, and microprocessors]

Mass-produced microprocessors: a cost-effective, high-performance replacement for custom-designed mainframe/minicomputer CPUs.

Page 9: Microprocessor Performance 1987-97

[Plot: Integer SPEC92 performance (0 to 1200) vs. year (1987 to 1997); data points include MIPS M/120, MIPS M2000, Sun-4/260, IBM RS/6000, HP 9000/750, DEC AXP/500, IBM POWER 100, DEC Alpha 4/266, DEC Alpha 5/300, DEC Alpha 5/500, DEC Alpha 21264/600]

Page 10: Microprocessor Frequency Trend

[Plot: clock frequency (MHz, 10 to 10,000, log scale) and gate delays per clock (1 to 100) vs. year (1987 to 2005) for Intel (386, 486, Pentium, Pentium Pro, Pentium II), IBM PowerPC (601, 603, 604, 604+, MPC750), and DEC Alpha (21064A, 21066, 21164, 21164A, 21264, 21264S)]
• Processor frequency scales by 2X per generation (frequency doubles each generation).
• The number of gate delays per clock is reduced by about 25% per generation.

Page 11: Microprocessor Transistor Count Growth Rate

[Plot: transistor count (log scale, 1,000 to 100,000,000) vs. year (1970 to 2000): i4004, i8080, i8086, i80286, i80386, i80486, Pentium]
Moore's Law: 2X transistors/chip every 1.5 years.
Example transistor counts: Alpha 21264: 15 million; Pentium Pro: 5.5 million; PowerPC 620: 6.9 million; Alpha 21164: 9.3 million; Sparc Ultra: 5.2 million.

Page 12: Increase of Capacity of VLSI Dynamic RAM Chips

[Plot: DRAM chip capacity (log scale) vs. year (1970 to 2000)]

Year   Size (Megabit)
1980   0.0625
1983   0.25
1986   1
1989   4
1992   16
1996   64
1999   256
2000   1024

Growth: about 1.55X per year, or doubling every 1.6 years.
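A quick endpoint check of the growth rate in the table (the slide's 1.55X/yr figure comes from a fit across all the data points, so an endpoint estimate need not match it exactly):

```python
import math

size_1980, size_2000 = 0.0625, 1024.0   # megabits, from the table
years = 2000 - 1980

# Compound annual growth factor and the implied doubling time.
growth_per_year = (size_2000 / size_1980) ** (1 / years)   # ~1.62X per year
doubling_time = years / math.log2(size_2000 / size_1980)   # ~1.43 years

print(f"{growth_per_year:.2f}X per year, doubling every {doubling_time:.2f} years")
```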

Page 13: DRAM Cost Over Time

Cost as of the second half of 1999: ~ $1 per MB.

Page 14: Recent Technology Trends (Summary)

        Capacity        Speed (latency)
Logic   2x in 3 years   2x in 3 years
DRAM    4x in 3 years   2x in 10 years
Disk    4x in 3 years   2x in 10 years

Page 15: Computer Technology Trends: Evolutionary but Rapid Change

• Processor:
– 2X in speed every 1.5 years; 1000X performance in the last decade.
• Memory:
– DRAM capacity: > 2X every 1.5 years; 1000X size in the last decade.
– Cost per bit: improves about 25% per year.
• Disk:
– Capacity: > 2X in size every 1.5 years.
– Cost per bit: improves about 60% per year.
– 200X size in the last decade.
– Only 10% performance improvement per year, due to mechanical limitations.
• Expected state-of-the-art PC by end of year 2001:
– Processor clock speed: > 2500 MegaHertz (2.5 GigaHertz)
– Memory capacity: > 1000 MegaByte (1 GigaByte)
– Disk capacity: > 100 GigaBytes (0.1 TeraBytes)

Page 16: Distribution of Cost in a System: An Example

[Chart: over time, some system components account for a decreasing fraction of total cost, while others account for an increasing fraction]

Page 17: A Simplified View of The Software/Hardware Hierarchical Layers

Page 18: A Hierarchy of Computer Design

Level  Name                           Modules                 Primitives                         Descriptive Media
1      Electronics                    Gates, FFs              Transistors, resistors, etc.       Circuit diagrams
2      Logic                          Registers, ALUs, ...    Gates, FFs, ...                    Logic diagrams
3      Organization                   Processors, memories    Registers, ALUs, ...               Register Transfer Notation (RTN)
4      Microprogramming               Assembly language       Microinstructions                  Microprograms
5      Assembly language programming  OS routines             Assembly language instructions     Assembly language programs
6      Procedural programming         Applications, drivers   OS routines, high-level languages  High-level language programs
7      Application                    Systems                 Procedural constructs              Problem-oriented programs

Levels 1-3: Low Level (Hardware). Level 4: Firmware. Levels 5-7: High Level (Software).

Page 19: Hierarchy of Computer Architecture

[Diagram: layers from Application, Operating System, and Compiler down through the Instruction Set Architecture (I/O system, instruction set processor; the software/hardware boundary) to Datapath & Control, Digital Design, Circuit Design, and Layout. The corresponding program representations range from high-level language programs and assembly language programs (software), through machine language programs, firmware, and microprograms, to Register Transfer Notation (RTN), logic diagrams, and circuit diagrams (hardware).]

Page 20: Computer Architecture Vs. Computer Organization
• The term computer architecture is sometimes erroneously restricted to computer instruction set design, with other aspects of computer design called implementation.

• More accurate definitions:
– Instruction set architecture (ISA): The actual programmer-visible instruction set; it serves as the boundary between the software and hardware.
– Implementation of a machine has two components:
• Organization: includes the high-level aspects of a computer's design, such as the memory system, the bus structure, and the internal CPU unit, which includes implementations of arithmetic, logic, branching, and data transfer operations.
• Hardware: refers to the specifics of the machine, such as detailed logic design and packaging technology.
• In general, Computer Architecture refers to all three aspects: instruction set architecture, organization, and hardware.

Page 21: Computer Architecture's Changing Definition

• 1950s to 1960s: Computer Architecture Course = Computer Arithmetic.
• 1970s to mid 1980s: Computer Architecture Course = Instruction Set Design, especially ISA appropriate for compilers.
• 1990s: Computer Architecture Course = Design of CPU, memory system, I/O system, Multiprocessors.

Page 22: The Task of A Computer Designer
• Determine which attributes are important to the design of the new machine.

• Design a machine to maximize performance while staying within cost and other constraints and metrics.
• It involves more than instruction set design:
– Instruction set architecture.
– CPU micro-architecture.
– Implementation.
• Implementation of a machine has two components:
– Organization.
– Hardware.

Page 23: Recent Architectural Improvements
• Increased optimization and utilization of cache systems.

• Memory-latency hiding techniques.

• Optimization of pipelined instruction execution.

• Dynamic hardware-based pipeline scheduling.

• Improved handling of pipeline hazards.

• Improved hardware branch prediction techniques.

• Exploiting Instruction-Level Parallelism (ILP) in terms of multiple-instruction issue and multiple hardware functional units.
• Inclusion of special instructions to handle multimedia applications.

• High-speed bus designs to improve data transfer rates.

Page 24: The Concept of Memory Hierarchy

[Diagram: CPU → Registers → Cache (< 4 MB) → Memory (< 4 GB, ~60 ns) → I/O devices, e.g. disk (> 2 GB, ~5 ms)]

Page 25: Typical Parameters of Memory Hierarchy Levels

Page 26: Current Computer Architecture Topics

[Diagram of current topics, from the ISA down:
• Instruction Set Architecture: addressing, protection, exception handling.
• Pipelining and Instruction-Level Parallelism (ILP): pipelining, hazard resolution, superscalar execution, reordering, branch prediction, speculation, VLIW, vector, DSP, ...
• Thread-Level Parallelism (TLP): multiprocessing, simultaneous CPU multi-threading; coherence, bandwidth, latency.
• Memory hierarchy: L1 cache, L2 cache, DRAM; interleaving, bus protocols, emerging technologies.
• Input/Output and storage: disks, WORM, tape; RAID.
• VLSI]

Page 27: Computer Performance Evaluation: Cycles Per Instruction (CPI)

• Most computers run synchronously, utilizing a CPU clock running at a constant clock rate, where:
Clock rate = 1 / clock cycle
• A computer machine instruction is comprised of a number of elementary or micro operations which vary in number and complexity depending on the instruction and the exact CPU organization and implementation.
– A micro operation is an elementary hardware operation that can be performed during one clock cycle.
– This corresponds to one micro-instruction in microprogrammed CPUs.
– Examples: register operations (shift, load, clear, increment), ALU operations (add, subtract, etc.).
• Thus a single machine instruction may take one or more cycles to complete, termed the Cycles Per Instruction (CPI).

Page 28: Computer Performance Measures: Program Execution Time
• For a specific program compiled to run on a specific machine "A", the following parameters are provided:
– The total instruction count of the program.
– The average number of cycles per instruction (average CPI).
– Clock cycle of machine "A".
• How can one measure the performance of this machine running this program?
– Intuitively, the machine is said to be faster, or to have better performance, running this program if the total execution time is shorter.
– Thus the inverse of the total measured program execution time is a possible performance measure or metric:
PerformanceA = 1 / Execution TimeA
• How does one compare the performance of different machines? What factors affect performance? How can performance be improved?

Page 29: Measuring Performance
• For a specific program or benchmark running on machine X:
PerformanceX = 1 / Execution TimeX
• To compare the performance of machines X and Y executing specific code, the speedup n is:
n = ExecutionY / ExecutionX = PerformanceX / PerformanceY

• System performance refers to the performance and elapsed time measured on an unloaded machine.
• CPU performance refers to user CPU time on an unloaded system.

• Example: For a given program:
Execution time on machine A: ExecutionA = 1 second
Execution time on machine B: ExecutionB = 10 seconds
PerformanceA / PerformanceB = Execution TimeB / Execution TimeA = 10 / 1 = 10
The performance of machine A is 10 times the performance of machine B when running this program; machine A is said to be 10 times faster than machine B when running this program.
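The comparison above can be scripted directly (execution times taken from the example):

```python
def speedup(time_old, time_new):
    """Ratio of execution times: how many times faster the faster machine is."""
    return time_old / time_new

exec_a, exec_b = 1.0, 10.0       # seconds, from the example
print(speedup(exec_b, exec_a))   # machine A vs. machine B -> 10.0
```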

Page 30: CPU Performance Equation

CPU time = CPU clock cycles for a program x Clock cycle time
or:
CPU time = CPU clock cycles for a program / Clock rate

CPI (clock cycles per instruction):
CPI = CPU clock cycles for a program / I
where I is the instruction count.
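These two formulas translate directly into code; the cycle and instruction counts below are illustrative, not from the slides:

```python
def cpu_time(clock_cycles, clock_rate_hz):
    """CPU time = clock cycles / clock rate (equivalently, cycles x cycle time)."""
    return clock_cycles / clock_rate_hz

def cpi(clock_cycles, instruction_count):
    """CPI = CPU clock cycles for a program / instruction count I."""
    return clock_cycles / instruction_count

cycles = 25_000_000              # hypothetical measurement
print(cpu_time(cycles, 200e6))   # 0.125 seconds at a 200 MHz clock
print(cpi(cycles, 10_000_000))   # CPI = 2.5
```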

Page 31: CPU Execution Time: The CPU Equation
• A program is comprised of a number of instructions, I.
– Measured in: instructions/program
• The average instruction takes a number of cycles per instruction (CPI) to be completed.
– Measured in: cycles/instruction
• The CPU has a fixed clock cycle time C = 1 / clock rate.
– Measured in: seconds/cycle
• CPU execution time is the product of the above three parameters:

CPU Time = I x CPI x C

CPU time = Seconds/Program = (Instructions/Program) x (Cycles/Instruction) x (Seconds/Cycle)

Page 32: CPU Execution Time
For a given program and machine:

CPI = Total program execution cycles / Instruction count
→ CPU clock cycles = Instruction count x CPI

CPU execution time
= CPU clock cycles x Clock cycle
= Instruction count x CPI x Clock cycle
= I x CPI x C

Page 33: CPU Execution Time: Example
• A program is running on a specific machine with the following parameters:
– Total instruction count: 10,000,000 instructions.
– Average CPI for the program: 2.5 cycles/instruction.
– CPU clock rate: 200 MHz.
• What is the execution time for this program?

CPU time = Instruction count x CPI x Clock cycle
= 10,000,000 x 2.5 x (1 / clock rate)
= 10,000,000 x 2.5 x 5x10^-9
= 0.125 seconds
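The worked example can be checked with a one-line implementation of CPU Time = I x CPI x C:

```python
def cpu_time(instruction_count, cpi, clock_rate_hz):
    """CPU time = I x CPI x C, with clock cycle C = 1 / clock rate."""
    return instruction_count * cpi * (1.0 / clock_rate_hz)

# Parameters from the example: 10M instructions, CPI = 2.5, 200 MHz clock.
print(cpu_time(10_000_000, 2.5, 200e6))   # 0.125 seconds
```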

Page 34: Aspects of CPU Execution Time
CPU Time = Instruction count x CPI x Clock cycle

– Instruction Count I depends on: program used, compiler, ISA.
– CPI depends on: program used, compiler, ISA, CPU organization.
– Clock Cycle C depends on: CPU organization, technology.

Page 35: Factors Affecting CPU Performance
CPU time = Seconds/Program = (Instructions/Program) x (Cycles/Instruction) x (Seconds/Cycle)

                                    Instruction Count I   CPI   Clock Cycle C
Program                                      X             X
Compiler                                     X             X
Instruction Set Architecture (ISA)           X             X
Organization                                               X          X
Technology                                                            X

Page 36: Performance Comparison: Example
• From the previous example: a program is running on a specific machine with the following parameters:
– Total instruction count: 10,000,000 instructions.
– Average CPI for the program: 2.5 cycles/instruction.
– CPU clock rate: 200 MHz.

• Using the same program with these changes:
– A new compiler is used: new instruction count = 9,500,000; new CPI = 3.0.
– Faster CPU implementation: new clock rate = 300 MHz.

• What is the speedup with the changes?

Speedup = Old Execution Time / New Execution Time
= (Iold x CPIold x Clock cycleold) / (Inew x CPInew x Clock cyclenew)
= (10,000,000 x 2.5 x 5x10^-9) / (9,500,000 x 3 x 3.33x10^-9)
= 0.125 / 0.095 = 1.32

or 32% faster after the changes.
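The speedup calculation above, scripted with the old and new parameters from the example:

```python
def exec_time(i, cpi, clock_rate_hz):
    """CPU time = I x CPI x C."""
    return i * cpi / clock_rate_hz

old = exec_time(10_000_000, 2.5, 200e6)   # 0.125 s
new = exec_time(9_500_000, 3.0, 300e6)    # 0.095 s
print(f"speedup = {old / new:.2f}")       # speedup = 1.32
```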

Page 37: Instruction Types And CPI
• Given a program with n types or classes of instructions with:
– Ci = count of instructions of type i
– CPIi = average cycles per instruction of type i

CPU clock cycles = Σ (i = 1 to n) CPIi x Ci

Page 38: Instruction Types And CPI: An Example
• An instruction set has three instruction classes:

Instruction class   CPI
A                   1
B                   2
C                   3

• Two code sequences have the following instruction counts:

                Instruction counts for instruction class
Code Sequence   A   B   C
1               2   1   2
2               4   1   1

• CPU cycles for sequence 1 = 2 x 1 + 1 x 2 + 2 x 3 = 10 cycles
CPI for sequence 1 = clock cycles / instruction count = 10 / 5 = 2
• CPU cycles for sequence 2 = 4 x 1 + 1 x 2 + 1 x 3 = 9 cycles
CPI for sequence 2 = 9 / 6 = 1.5
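The per-sequence calculation generalizes to any class mix; here it is with the class CPIs and counts from the example:

```python
# Instruction-class CPIs and per-sequence instruction counts from the example.
cpi_class = {"A": 1, "B": 2, "C": 3}
seq1 = {"A": 2, "B": 1, "C": 2}
seq2 = {"A": 4, "B": 1, "C": 1}

def cpi(counts, cpi_by_class):
    """CPI = sum(CPIi x Ci) / total instruction count."""
    cycles = sum(cpi_by_class[c] * n for c, n in counts.items())
    return cycles / sum(counts.values())

print(cpi(seq1, cpi_class))   # 10 cycles / 5 instructions = 2.0
print(cpi(seq2, cpi_class))   # 9 cycles / 6 instructions = 1.5
```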

Page 39: Instruction Frequency & CPI
• Given a program with n types or classes of instructions with the following characteristics:
Ci = count of instructions of type i
CPIi = average cycles per instruction of type i
Fi = frequency of instruction type i = Ci / total instruction count
Then:
CPI = Σ (i = 1 to n) CPIi x Fi

Page 40: Instruction Type Frequency & CPI: A RISC Example
Base Machine (Reg / Reg), typical mix:

Op       Freq   Cycles   CPIi x Fi   % Time
ALU      50%    1        0.5         23%
Load     20%    5        1.0         45%
Store    10%    3        0.3         14%
Branch   20%    2        0.4         18%

CPI = Σ (i = 1 to n) CPIi x Fi = 0.5 x 1 + 0.2 x 5 + 0.1 x 3 + 0.2 x 2 = 2.2
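The frequency-weighted CPI and the % Time column can both be computed from the mix (frequencies and cycle counts taken from the table):

```python
# Frequency-weighted CPI for the base machine mix.
mix = {"ALU": (0.50, 1), "Load": (0.20, 5), "Store": (0.10, 3), "Branch": (0.20, 2)}

cpi = sum(freq * cycles for freq, cycles in mix.values())
print(round(cpi, 2))   # 2.2

# Fraction of execution time spent in each instruction type (CPIi x Fi / CPI):
for op, (freq, cycles) in mix.items():
    print(f"{op}: {freq * cycles / cpi:.0%}")
```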

Page 41: Metrics of Computer Performance

[Diagram: levels of the system (application, programming language, compiler, ISA, datapath, control, function units, transistors, wires, pins) and the metrics used at each: execution time on a target workload (SPEC95, etc.); (millions of) instructions per second (MIPS); (millions of) floating-point operations per second (MFLOP/s); megabytes per second; cycles per second (clock rate)]
Each metric has a purpose, and each can be misused.

Page 42: Choosing Programs To Evaluate Performance
Levels of programs or benchmarks that could be used to evaluate performance:

– Actual Target Workload: Full applications that run on the target machine.
– Real Full Program-based Benchmarks:
• Select a specific mix or suite of programs that are typical of targeted applications or workload (e.g., SPEC95).
– Small "Kernel" Benchmarks:
• Key computationally-intensive pieces extracted from real programs.
– Examples: matrix factorization, FFT, tree search, etc.
• Best used to test specific aspects of the machine.
– Microbenchmarks:
• Small, specially written programs that isolate a specific aspect of performance characteristics: processing (integer, floating point), local memory, input/output, etc.

Page 43: Types of Benchmarks

Benchmark type               Pros                                      Cons
Actual Target Workload       Representative.                           Very specific; non-portable; complex:
                                                                       difficult to run or measure.
Full Application Benchmarks  Portable; widely used; measurements       Less representative than actual workload.
                             useful in reality.
Small "Kernel" Benchmarks    Easy to run, early in the design cycle.   Easy to "fool" by designing hardware
                                                                       to run them well.
Microbenchmarks              Identify peak performance and             Peak performance results may be a long way
                             potential bottlenecks.                    from real application performance.


SPEC: System Performance Evaluation Cooperative

• The most popular and industry-standard set of CPU benchmarks.
• SPECmarks, 1989:
  – 10 programs yielding a single number (“SPECmarks”).
• SPEC92, 1992:
  – SPECint92 (6 integer programs) and SPECfp92 (14 floating-point programs).
• SPEC95, 1995:
  – Eighteen new application benchmarks selected (with given inputs) reflecting a technical computing workload.
  – SPECint95 (8 integer programs):
    • go, m88ksim, gcc, compress, li, ijpeg, perl, vortex
  – SPECfp95 (10 floating-point intensive programs):
    • tomcatv, swim, su2cor, hydro2d, mgrid, applu, turb3d, apsi, fpppp, wave5
  – Source code must be compiled with standard compiler flags.


SPEC95 Benchmarks

Integer:
  go        Artificial intelligence; plays the game of Go
  m88ksim   Motorola 88k chip simulator; runs test program
  gcc       The GNU C compiler generating SPARC code
  compress  Compresses and decompresses file in memory
  li        Lisp interpreter
  ijpeg     Graphic compression and decompression
  perl      Manipulates strings and prime numbers in the special-purpose programming language Perl
  vortex    A database program

Floating Point:
  tomcatv   A mesh-generation program
  swim      Shallow water model with 513 x 513 grid
  su2cor    Quantum physics; Monte Carlo simulation
  hydro2d   Astrophysics; hydrodynamic Navier-Stokes equations
  mgrid     Multigrid solver in 3-D potential field
  applu     Parabolic/elliptic partial differential equations
  turb3d    Simulates isotropic, homogeneous turbulence in a cube
  apsi      Solves problems regarding temperature, wind velocity, and distribution of pollutants
  fpppp     Quantum chemistry
  wave5     Plasma physics; electromagnetic particle simulation


SPEC95 For High-End CPUs, Fourth Quarter 1999


SPEC95 For High-End CPUs, First Quarter 2000


Comparing and Summarizing Performance

• Total execution time of the compared machines.

• If n program runs or n programs are used:

  – Arithmetic mean:

      Arithmetic mean = (1/n) x Sum (i = 1 to n) of Time(i)

  – Weighted execution time:

      Weighted time = Sum (i = 1 to n) of Weight(i) x Time(i)

  – Normalized execution time (arithmetic or geometric mean). Formula for the geometric mean:

      Geometric mean = n-th root of [ Product (i = 1 to n) of Execution_time_ratio(i) ]
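The three summary statistics above can be sketched in Python (a minimal illustration; the execution times, weights, and ratios below are made-up values, not measurements from the lecture):

```python
from math import prod

def arithmetic_mean(times):
    # (1/n) x sum of execution times
    return sum(times) / len(times)

def weighted_time(times, weights):
    # Sum of Weight(i) x Time(i); weights should sum to 1
    return sum(w * t for w, t in zip(weights, times))

def geometric_mean(ratios):
    # n-th root of the product of execution time ratios
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical execution times (seconds) for 3 programs:
times = [10.0, 20.0, 40.0]
print(arithmetic_mean(times))                  # 23.333...
print(weighted_time(times, [0.5, 0.3, 0.2]))   # 19.0
print(geometric_mean([1.0, 2.0, 4.0]))         # 2.0
```

Note how the geometric mean of ratios is unchanged if every ratio is inverted and the mean reciprocated, which is why it is used for normalized execution times.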


Computer Performance Measures: MIPS (Million Instructions Per Second)

• For a specific program running on a specific computer, MIPS is a measure of millions of instructions executed per second:

  MIPS = Instruction count / (Execution Time x 10^6)
       = Instruction count / (CPU clocks x Cycle time x 10^6)
       = (Instruction count x Clock rate) / (Instruction count x CPI x 10^6)
       = Clock rate / (CPI x 10^6)

• Faster execution time usually means a higher MIPS rating.

• Problems:

  – No account for the instruction set used.
  – Program-dependent: a single machine does not have a single MIPS rating.
  – Cannot be used to compare computers with different instruction sets.
  – A higher MIPS rating in some cases may not mean higher performance or better execution time, e.g. due to compiler design variations.
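The two equivalent forms of the MIPS formula can be checked against each other in a few lines (a minimal sketch; the 100 MHz / CPI = 2 machine is a made-up example):

```python
def mips(instruction_count, exec_time_s):
    # MIPS = instruction count / (execution time x 10^6)
    return instruction_count / (exec_time_s * 1e6)

def mips_from_cpi(clock_rate_hz, cpi):
    # Equivalent form: clock rate / (CPI x 10^6)
    return clock_rate_hz / (cpi * 1e6)

# A hypothetical 100 MHz machine with CPI = 2 retires 50M instructions
# per second, so both routes give the same rating:
print(mips_from_cpi(100e6, 2.0))  # 50.0
print(mips(50e6, 1.0))            # 50.0
```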


Compiler Variations, MIPS, Performance: An Example

• For a machine with the following instruction classes:

    Instruction class    CPI
    A                    1
    B                    2
    C                    3

• For a given program, two compilers produced the following instruction counts:

    Instruction counts (in millions) for each instruction class
    Code from:     A      B      C
    Compiler 1     5      1      1
    Compiler 2     10     1      1

• The machine is assumed to run at a clock rate of 100 MHz.


Compiler Variations, MIPS, Performance: An Example (Continued)

MIPS = Clock rate / (CPI x 10^6) = (100 x 10^6) / (CPI x 10^6) = 100 / CPI

CPI = CPU execution cycles / Instruction count, where

  CPU clock cycles = Sum (i = 1 to n) of CPI(i) x C(i)

CPU time = Instruction count x CPI / Clock rate

• For compiler 1:

  – CPI(1) = (5 x 1 + 1 x 2 + 1 x 3) / (5 + 1 + 1) = 10 / 7 = 1.43
  – MIPS(1) = 100 / 1.43 = 70.0
  – CPU time(1) = ((5 + 1 + 1) x 10^6 x 1.43) / (100 x 10^6) = 0.10 seconds

• For compiler 2:

  – CPI(2) = (10 x 1 + 1 x 2 + 1 x 3) / (10 + 1 + 1) = 15 / 12 = 1.25
  – MIPS(2) = 100 / 1.25 = 80.0
  – CPU time(2) = ((10 + 1 + 1) x 10^6 x 1.25) / (100 x 10^6) = 0.15 seconds
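The example above can be reproduced directly from the CPU equation (a small sketch using the instruction counts and per-class CPIs from the slide):

```python
def cpi(counts, class_cpis):
    # CPI = total clock cycles / total instruction count
    cycles = sum(c * p for c, p in zip(counts, class_cpis))
    return cycles / sum(counts)

def cpu_time(counts, class_cpis, clock_rate_hz):
    # CPU time = instruction count x CPI / clock rate
    return sum(counts) * cpi(counts, class_cpis) / clock_rate_hz

class_cpis = [1, 2, 3]        # classes A, B, C
c1 = [5e6, 1e6, 1e6]          # compiler 1 counts
c2 = [10e6, 1e6, 1e6]         # compiler 2 counts
clock = 100e6                 # 100 MHz

print(round(cpi(c1, class_cpis), 2), cpu_time(c1, class_cpis, clock))  # 1.43 0.1
print(round(cpi(c2, class_cpis), 2), cpu_time(c2, class_cpis, clock))  # 1.25 0.15
```

Compiler 2 has the higher MIPS rating (80 vs. 70) but the longer execution time, which is exactly the pitfall the MIPS slide warns about.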


Computer Performance Measures: MFLOPS (Million FLOating-point Operations Per Second)

• A floating-point operation is an addition, subtraction, multiplication, or division operation applied to numbers represented in a single- or double-precision floating-point representation.

• MFLOPS, for a specific program running on a specific computer, is a measure of millions of floating-point operations (megaflops) per second:

  MFLOPS = Number of floating-point operations / (Execution time x 10^6)

• A better comparison measure between different machines than MIPS.

• Program-dependent: different programs have different percentages of floating-point operations present, e.g. compilers have no such operations and yield a MFLOPS rating of zero.

• Dependent on the type of floating-point operations present in the program.


Quantitative Principles of Computer Design

• Amdahl’s Law:

  The performance gain from improving some portion of a computer is calculated by:

  Speedup = Performance for entire task using the enhancement
            / Performance for entire task without using the enhancement

  or

  Speedup = Execution time for entire task without the enhancement
            / Execution time for entire task using the enhancement


Performance Enhancement Calculations: Amdahl's Law

• The performance enhancement possible due to a given design improvement is limited by the amount that the improved feature is used.

• Amdahl’s Law:

  Performance improvement or speedup due to enhancement E:

                Execution Time without E       Performance with E
  Speedup(E) =  ------------------------   =   ---------------------
                Execution Time with E          Performance without E

  – Suppose that enhancement E accelerates a fraction F of the execution time by a factor S and the remainder of the time is unaffected. Then:

    Execution Time with E = ((1 - F) + F/S) x Execution Time without E

  Hence the speedup is given by:

                     Execution Time without E                       1
  Speedup(E) = ------------------------------------------  =  ---------------
               ((1 - F) + F/S) x Execution Time without E      (1 - F) + F/S
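The closed form is a one-liner; a small sketch also makes the key consequence visible — even an enormous S cannot push the speedup past 1 / (1 - F) (the F = .45, S = 2.5 values are just an illustration):

```python
def amdahl_speedup(f, s):
    # Amdahl's Law: Speedup(E) = 1 / ((1 - F) + F/S)
    return 1.0 / ((1.0 - f) + f / s)

# Enhancing 45% of execution time by a factor of 2.5:
print(round(amdahl_speedup(0.45, 2.5), 2))   # 1.37
# Even with a near-infinite enhancement factor, the speedup is
# capped by the unaffected 55% at 1/0.55:
print(round(amdahl_speedup(0.45, 1e12), 2))  # 1.82
```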


Pictorial Depiction of Amdahl’s Law

Enhancement E accelerates fraction F of execution time by a factor of S.

Before: Execution time without enhancement E:

  | Unaffected, fraction: (1 - F)  |  Affected fraction: F  |

After: Execution time with enhancement E:

  | Unaffected, fraction: (1 - F)  |  F/S  |
         (unchanged)

               Execution Time without enhancement E            1
  Speedup(E) = -------------------------------------  =  ---------------
               Execution Time with enhancement E          (1 - F) + F/S


Performance Enhancement Example

• For the RISC machine with the following instruction mix given earlier (CPI = 2.2):

    Op       Freq   Cycles   CPI(i)   % Time
    ALU      50%    1        .5       23%
    Load     20%    5        1.0      45%
    Store    10%    3        .3       14%
    Branch   20%    2        .4       18%

• If a CPU design enhancement improves the CPI of load instructions from 5 to 2, what is the resulting performance improvement from this enhancement?

  Fraction enhanced = F = 45% or .45
  Unaffected fraction = 100% - 45% = 55% or .55
  Factor of enhancement = S = 5/2 = 2.5

  Using Amdahl’s Law:

                      1                  1
  Speedup(E) = ---------------  =  ---------------  =  1.37
                (1 - F) + F/S       .55 + .45/2.5


An Alternative Solution Using the CPU Equation

• For the same instruction mix (CPI = 2.2):

    Op       Freq   Cycles   CPI(i)   % Time
    ALU      50%    1        .5       23%
    Load     20%    5        1.0      45%
    Store    10%    3        .3       14%
    Branch   20%    2        .4       18%

• If a CPU design enhancement improves the CPI of load instructions from 5 to 2, what is the resulting performance improvement from this enhancement?

  Old CPI = 2.2
  New CPI = .5 x 1 + .2 x 2 + .1 x 3 + .2 x 2 = 1.6

               Original Execution Time     Instruction count x old CPI x clock cycle
  Speedup(E) = -----------------------  =  -----------------------------------------
               New Execution Time          Instruction count x new CPI x clock cycle

               old CPI     2.2
            =  -------  =  -----  =  1.37
               new CPI     1.6

Which is the same speedup obtained from Amdahl’s Law in the first solution.
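The agreement between the two routes can be checked numerically (a small sketch; note that the .45 fraction on the Amdahl slide is itself rounded — with the exact load fraction F = 1.0/2.2, the two routes agree to the last digit):

```python
def speedup_amdahl(f, s):
    # Amdahl's Law: 1 / ((1 - F) + F/S)
    return 1.0 / ((1.0 - f) + f / s)

def speedup_cpi(old_cpi, new_cpi):
    # Same instruction count and clock rate -> speedup is the CPI ratio
    return old_cpi / new_cpi

# Amdahl route with the rounded fraction from the slide (F = .45, S = 2.5):
print(round(speedup_amdahl(0.45, 2.5), 3))       # 1.37
# CPU-equation route (old CPI 2.2, new CPI 1.6):
print(round(speedup_cpi(2.2, 1.6), 3))           # 1.375
# With the exact load-time fraction F = 1.0/2.2, the routes agree exactly:
print(round(speedup_amdahl(1.0 / 2.2, 2.5), 3))  # 1.375
```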


Performance Enhancement Example

• A program runs in 100 seconds on a machine, with multiply operations responsible for 80 seconds of this time. By how much must the speed of multiplication be improved to make the program four times faster?

                               100
  Desired speedup = 4 = ---------------------------------
                        Execution Time with enhancement

  → Execution time with enhancement = 25 seconds

  25 seconds = (100 - 80) seconds + 80 seconds / n
  25 seconds = 20 seconds + 80 seconds / n

  → 5 = 80 seconds / n
  → n = 80/5 = 16

Hence multiplication must be 16 times faster to get an overall speedup of 4.


Performance Enhancement Example

• For the previous example, with a program running in 100 seconds on a machine with multiply operations responsible for 80 seconds of this time: by how much must the speed of multiplication be improved to make the program five times faster?

                               100
  Desired speedup = 5 = ---------------------------------
                        Execution Time with enhancement

  → Execution time with enhancement = 20 seconds

  20 seconds = (100 - 80) seconds + 80 seconds / n
  20 seconds = 20 seconds + 80 seconds / n

  → 0 = 80 seconds / n

No amount of multiplication speed improvement can achieve this.
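Both of the worked examples above solve the same equation for n; a small helper makes the feasibility boundary explicit (a sketch, with the 100 s / 80 s numbers from the slides):

```python
def required_factor(total_time, affected_time, target_speedup):
    # Solve  total/target = (total - affected) + affected/n  for n.
    new_time = total_time / target_speedup
    unaffected = total_time - affected_time
    if new_time <= unaffected:
        # Unattainable: even n -> infinity still leaves `unaffected` seconds.
        return None
    return affected_time / (new_time - unaffected)

print(required_factor(100, 80, 4))  # 16.0
print(required_factor(100, 80, 5))  # None
```

The boundary is exactly Amdahl's limit: with 20 unaffected seconds, the best possible speedup is 100/20 = 5, approached but never reached.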


Extending Amdahl's Law To Multiple Enhancements

• Suppose that enhancement E(i) accelerates a fraction F(i) of the execution time by a factor S(i) and the remainder of the time is unaffected. Then:

                              Original Execution Time
  Speedup = --------------------------------------------------------------
            ((1 - Sum of F(i)) + Sum of F(i)/S(i)) x Original Execution Time

                               1
  Speedup = -------------------------------------
            (1 - Sum of F(i)) + Sum of F(i)/S(i)

Note: All fractions refer to the original execution time.


Amdahl's Law With Multiple Enhancements: Example

• Three CPU performance enhancements are proposed with the following speedups and percentages of the code execution time affected:

  Speedup(1) = S1 = 10    Percentage(1) = F1 = 20%
  Speedup(2) = S2 = 15    Percentage(2) = F2 = 15%
  Speedup(3) = S3 = 30    Percentage(3) = F3 = 10%

• While all three enhancements are in place in the new design, each enhancement affects a different portion of the code and only one enhancement can be used at a time.

• What is the resulting overall speedup?

                               1
  Speedup = -------------------------------------
            (1 - Sum of F(i)) + Sum of F(i)/S(i)

• Speedup = 1 / [(1 - .2 - .15 - .1) + (.2/10 + .15/15 + .1/30)]
          = 1 / [.55 + .0333]
          = 1 / .5833 = 1.71
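The extended formula generalizes directly to any number of enhancements; a short sketch reproduces the 1.71 result from the example above:

```python
def multi_speedup(fractions, speedups):
    # Extended Amdahl's Law: 1 / ((1 - sum F_i) + sum F_i/S_i)
    # All fractions refer to the ORIGINAL execution time.
    unaffected = 1.0 - sum(fractions)
    enhanced = sum(f / s for f, s in zip(fractions, speedups))
    return 1.0 / (unaffected + enhanced)

# The three-enhancement example: F = 20%/15%/10%, S = 10/15/30
print(round(multi_speedup([0.20, 0.15, 0.10], [10, 15, 30]), 2))  # 1.71
```

Again the .55 unaffected fraction dominates: even with all three speedup factors made infinite, the overall speedup could not exceed 1/.55 ≈ 1.82.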


Pictorial Depiction of Example

Before: Execution time with no enhancements: 1

  | Unaffected, fraction: .55 | F1 = .2  | F2 = .15 | F3 = .1 |
                                S1 = 10    S2 = 15    S3 = 30

After: Execution time with enhancements:

  | Unaffected, fraction: .55 | .2/10 | .15/15 | .1/30 |
         (unchanged)

  .55 + .02 + .01 + .00333 = .5833

Speedup = 1 / .5833 = 1.71

Note: All fractions refer to the original execution time.


Instruction Set Architecture (ISA)

“... the attributes of a [computing] system as seen by the programmer, i.e. the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation.”
                                        – Amdahl, Blaauw, and Brooks, 1964.

The instruction set architecture is concerned with:

• Organization of programmable storage (memory & registers): includes the amount of addressable memory and the number of available registers.

• Data types & data structures: encodings & representations.

• Instruction set: what operations are specified.

• Instruction formats and encoding.

• Modes of addressing and accessing data items and instructions.

• Exceptional conditions.


Evolution of Instruction Sets

Single Accumulator (EDSAC 1950)
        |
Accumulator + Index Registers (Manchester Mark I, IBM 700 series 1953)
        |
Separation of Programming Model from Implementation:
  High-level Language Based (B5000 1963)     Concept of a Family (IBM 360 1964)
        |
General Purpose Register Machines:
  Complex Instruction Sets                   Load/Store Architecture
  (VAX, Intel 432, 1977-80)                  (CDC 6600, Cray 1, 1963-76)
                                                     |
                                             RISC (MIPS, SPARC, HP-PA, IBM RS6000, ... 1987)


Types of Instruction Set Architectures According To Operand Addressing Fields

Memory-To-Memory Machines:
  – Operands are obtained from memory and results are stored back in memory by any instruction that requires operands.
  – No local CPU registers are used in the CPU datapath.
  – Include:
    • The 4-address machine.
    • The 3-address machine.
    • The 2-address machine.

The 1-address (Accumulator) Machine:
  – A single local CPU special-purpose register (the accumulator) is used as the source of one operand and as the result destination.

The 0-address or Stack Machine:
  – A push-down stack is used in the CPU.

General Purpose Register (GPR) Machines:
  – The CPU datapath contains several local general-purpose registers which can be used as operand sources and as result destinations.
  – A large number of possible addressing modes.
  – Load-Store or Register-To-Register Machines: GPR machines where only data movement instructions (loads, stores) can obtain operands from memory and store results to memory.


Code Sequence C = A + B for Four Instruction Sets

  Stack      Accumulator    Register               Register
                            (register-memory)      (load-store)

  Push A     Load A         Load R1, A             Load R1, A
  Push B     Add B          Add R1, B              Load R2, B
  Add        Store C        Store C, R1            Add R3, R1, R2
  Pop C                                            Store C, R3
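The stack (0-address) column can be sketched as a tiny interpreter (a minimal illustration; the opcode names and the dictionary standing in for memory are made up for this sketch):

```python
def run_stack(program, memory):
    # Minimal 0-address (stack) machine: PUSH/POP name memory locations,
    # ADD takes both operands from the stack and pushes the result.
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(memory[args[0]])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "POP":
            memory[args[0]] = stack.pop()
    return memory

# C = A + B, exactly as in the stack column above:
mem = {"A": 2, "B": 3, "C": 0}
run_stack([("PUSH", "A"), ("PUSH", "B"), ("ADD",), ("POP", "C")], mem)
print(mem["C"])  # 5
```

Note that the Add instruction itself names no operands at all, which is the defining property of the 0-address machine.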


General-Purpose Register (GPR) Machines

• Every machine designed after 1980 uses a load-store GPR architecture.

• Registers, like any other storage form internal to the CPU, are faster than memory.

• Registers are easier for a compiler to use.

• GPR architectures are divided into several types depending on two factors:

  – Whether an ALU instruction has two or three operands.
  – How many of the operands in ALU instructions may be memory addresses.


General-Purpose Register Machines


ISA Examples

  Machine           Number of General    Architecture         Year
                    Purpose Registers

  EDSAC             1                    accumulator          1949
  IBM 701           1                    accumulator          1953
  CDC 6600          8                    load-store           1963
  IBM 360           16                   register-memory      1964
  DEC PDP-11        8                    register-memory      1970
  DEC VAX           16                   register-memory,     1977
                                         memory-memory
  Motorola 68000    16                   register-memory      1980
  MIPS              32                   load-store           1985
  SPARC             32                   load-store           1987


Examples of GPR Machines

  Number of          Maximum number
  memory addresses   of operands allowed

  0                  3                     SPARC, MIPS, PowerPC, Alpha
  1                  2                     Intel 80x86, Motorola 68000
  2                  2                     VAX
  3                  3                     VAX


Typical Memory Addressing Modes

  Addressing Mode     Sample Instruction     Meaning

  Register            Add R4, R3             Regs[R4] ← Regs[R4] + Regs[R3]
  Immediate           Add R4, #3             Regs[R4] ← Regs[R4] + 3
  Displacement        Add R4, 10(R1)         Regs[R4] ← Regs[R4] + Mem[10 + Regs[R1]]
  Register indirect   Add R4, (R1)           Regs[R4] ← Regs[R4] + Mem[Regs[R1]]
  Indexed             Add R3, (R1 + R2)      Regs[R3] ← Regs[R3] + Mem[Regs[R1] + Regs[R2]]
  Absolute            Add R1, (1001)         Regs[R1] ← Regs[R1] + Mem[1001]
  Memory indirect     Add R1, @(R3)          Regs[R1] ← Regs[R1] + Mem[Mem[Regs[R3]]]
  Autoincrement       Add R1, (R2)+          Regs[R1] ← Regs[R1] + Mem[Regs[R2]];
                                             Regs[R2] ← Regs[R2] + d
  Autodecrement       Add R1, -(R2)          Regs[R2] ← Regs[R2] - d;
                                             Regs[R1] ← Regs[R1] + Mem[Regs[R2]]
  Scaled              Add R1, 100(R2)[R3]    Regs[R1] ← Regs[R1] + Mem[100 + Regs[R2] + Regs[R3] x d]
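The operand-fetch rules in the table can be sketched as plain functions over a register file and a memory (a toy model: the dictionaries, register numbers, and values below are made up to illustrate two of the modes):

```python
def fetch_displacement(regs, mem, base_reg, disp):
    # Displacement mode operand: Mem[disp + Regs[base_reg]]
    return mem[disp + regs[base_reg]]

def fetch_memory_indirect(regs, mem, reg):
    # Memory indirect operand: Mem[Mem[Regs[reg]]] -- two memory accesses
    return mem[mem[regs[reg]]]

regs = {1: 100, 3: 200}
mem = {110: 7, 200: 300, 300: 9}

print(fetch_displacement(regs, mem, 1, 10))   # Mem[10 + 100] = 7
print(fetch_memory_indirect(regs, mem, 3))    # Mem[Mem[200]] = Mem[300] = 9
```

The sketch makes the cost difference visible: displacement needs one memory access per operand, while memory indirect needs two.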


Addressing Modes Usage Example

For 3 programs running on a VAX, ignoring direct register mode:

  Displacement:                  42% avg, 32% to 55%
  Immediate:                     33% avg, 17% to 43%
  Register deferred (indirect):  13% avg,  3% to 24%
  Scaled:                         7% avg,  0% to 16%
  Memory indirect:                3% avg,  1% to  6%
  Misc:                           2% avg,  0% to  3%

  75% displacement & immediate
  88% displacement, immediate & register indirect

Observation: in addition to register direct mode, the displacement, immediate, and register indirect addressing modes are the most important.


Utilization of Memory Addressing Modes


Displacement Address Size Example

Average of 5 SPECint92 programs vs. average of 5 SPECfp92 programs.

[Figure: percentage of displacements (0%-30%) vs. number of displacement address bits needed (0-15), Int. Avg. and FP Avg.]

  – 12 to 16 bits of displacement are needed.
  – 1% of addresses are > 16 bits.


Immediate Addressing Mode


Operation Types in The Instruction Set

  Operator Type            Examples

  Arithmetic and logical   Integer arithmetic and logical operations: add, or
  Data transfer            Loads, stores (moves on machines with memory addressing)
  Control                  Branch, jump, procedure call and return, traps
  System                   Operating system call, virtual memory management instructions
  Floating point           Floating-point operations: add, multiply
  Decimal                  Decimal add, decimal multiply, decimal-to-character conversion
  String                   String move, string compare, string search
  Graphics                 Pixel operations, compression/decompression operations


Instruction Usage Example: Top 10 Intel x86 Instructions

  Rank   Instruction               Integer average (% total executed)

  1      load                      22%
  2      conditional branch        20%
  3      compare                   16%
  4      store                     12%
  5      add                        8%
  6      and                        6%
  7      sub                        5%
  8      move register-register     4%
  9      call                       1%
  10     return                     1%

         Total                     96%

Observation: Simple instructions dominate instruction usage frequency.


Instructions for Control Flow


Type and Size of Operands

• Common operand types include (assuming a 32-bit CPU):
  – Character (1 byte)
  – Half word (16 bits)
  – Word (32 bits)

• IEEE standard 754: single-precision floating point (1 word), double-precision floating point (2 words).

• For business applications, some architectures support a decimal format (packed decimal, or binary-coded decimal, BCD).


Type and Size of Operands


Instruction Set Encoding

Considerations affecting instruction set encoding:

  – To have as many registers and addressing modes as possible.

  – The impact of the size of the register and addressing mode fields on the average instruction size and on the average program size.

  – To encode instructions into lengths that will be easy to handle in the implementation: at a minimum, a multiple of bytes.


Three Examples of Instruction Set Encoding

Variable: VAX (1-53 bytes)

  | Operation & no. of operands | Address specifier 1 | Address field 1 | ... | Address specifier n | Address field n |

Fixed: DLX, MIPS, PowerPC, SPARC

  | Operation | Address field 1 | Address field 2 | Address field 3 |

Hybrid: IBM 360/370, Intel 80x86

  | Operation | Address specifier | Address field |
  | Operation | Address specifier 1 | Address specifier 2 | Address field |
  | Operation | Address specifier | Address field 1 | Address field 2 |


Complex Instruction Set Computer (CISC)

• Emphasizes doing more with each instruction.

• Motivated by the high cost of memory and hard disk capacity when the original CISC architectures were proposed:
  – When the M6800 was introduced: 16K RAM = $500, 40M hard disk = $55,000.
  – When the MC68000 was introduced: 64K RAM = $200, 10M hard disk = $5,000.

• Original CISC architectures evolved with faster, more complex CPU designs, but backward instruction set compatibility had to be maintained.

• Wide variety of addressing modes:
  – 14 in the MC68000, 25 in the MC68020.

• A number of instruction modes for the location and number of operands:
  – The VAX has 0- through 3-address instructions.

• Variable-length instruction encoding.


Example CISC ISA: Motorola 680x0

18 addressing modes:
  • Data register direct.
  • Address register direct.
  • Immediate.
  • Absolute short.
  • Absolute long.
  • Address register indirect.
  • Address register indirect with postincrement.
  • Address register indirect with predecrement.
  • Address register indirect with displacement.
  • Address register indirect with index (8-bit).
  • Address register indirect with index (base).
  • Memory indirect postindexed.
  • Memory indirect preindexed.
  • Program counter indirect with index (8-bit).
  • Program counter indirect with index (base).
  • Program counter indirect with displacement.
  • Program counter memory indirect postindexed.
  • Program counter memory indirect preindexed.

Operand sizes:
  • Range from 1 to 32 bits; 1, 2, 4, 8, 10, or 16 bytes.

Instruction encoding:
  • Instructions are stored in 16-bit words.
  • The smallest instruction is 2 bytes (one word).
  • The longest instruction is 5 words (10 bytes) in length.


Example CISC ISA:Example CISC ISA:

Intel X86, 386/486/PentiumIntel X86, 386/486/Pentium12 addressing modes:

• Register.

• Immediate.

• Direct.

• Base.

• Base + Displacement.

• Index + Displacement.

• Scaled Index + Displacement.

• Based Index.

• Based Scaled Index.

• Based Index + Displacement.

• Based Scaled Index + Displacement.

• Relative.
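The memory modes above are all special cases of the general 386+ effective-address computation base + index × scale + displacement. A minimal Python sketch (the `effective_address` helper is hypothetical, written for illustration, not an Intel API):

```python
# Sketch of the x86 effective-address formula; absent components default to zero.
def effective_address(base=0, index=0, scale=1, disp=0):
    assert scale in (1, 2, 4, 8), "x86 allows only these scale factors"
    return base + index * scale + disp

# Direct: displacement only.
assert effective_address(disp=0x1000) == 0x1000
# Base + Displacement, e.g. [EBP + 8]:
assert effective_address(base=0x2000, disp=8) == 0x2008
# Based Scaled Index + Displacement, e.g. [EBX + ESI*4 + 12]:
assert effective_address(base=0x100, index=3, scale=4, disp=12) == 280
```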

Operand sizes:
• Can be 8, 16, 32, 48, 64, or 80 bits long.

• Also supports string operations.

Instruction Encoding:
• The smallest instruction is one byte.
• The longest instruction is 12 bytes long.
• The first bytes generally contain the opcode, mode specifiers, and register fields.
• The remaining bytes are for address displacement and immediate data.


Reduced Instruction Set Computer (RISC)
• Focuses on reducing the number and complexity of instructions of the machine.

• Reduced CPI. Goal: At least one instruction per clock cycle.

• Designed with pipelining in mind.

• Fixed-length instruction encoding.

• Only load and store instructions access memory.

• Simplified addressing modes.

– Usually limited to immediate, register indirect, register displacement, indexed.

• Delayed loads and branches.

• Instruction pre-fetch and speculative execution.

• Examples: MIPS, SPARC, PowerPC, Alpha


Example RISC ISA: PowerPC

8 addressing modes:

• Register direct.

• Immediate.

• Register indirect.

• Register indirect with immediate index (loads and stores).

• Register indirect with register index (loads and stores).

• Absolute (jumps).

• Link register indirect (calls).

• Count register indirect (branches).

Operand sizes:
• Four operand sizes: 1, 2, 4 or 8 bytes.

Instruction Encoding:
• Instruction set has 15 different formats with many minor variations.
• All are 32 bits in length.


Example RISC ISA: HP Precision Architecture, HP-PA

7 addressing modes:
• Register.
• Immediate.
• Base with displacement.
• Base with scaled index and displacement.
• Predecrement.
• Postincrement.
• PC-relative.

Operand sizes:
• Five operand sizes ranging in powers of two from 1 to 16 bytes.

Instruction Encoding:
• Instruction set has 12 different formats.
• All are 32 bits in length.


Example RISC ISA: SPARC

Operand sizes:
• Four operand sizes: 1, 2, 4 or 8 bytes.

Instruction Encoding:
• Instruction set has 3 basic instruction formats with 3 minor variations.

• All are 32 bits in length.

5 addressing modes:
• Register indirect with immediate displacement.

• Register indirect indexed by another register.

• Register direct.

• Immediate.

• PC relative.


Example RISC ISA: Compaq Alpha AXP

4 addressing modes:
• Register direct.

• Immediate.

• Register indirect with displacement.

• PC-relative.

Operand sizes:
• Four operand sizes: 1, 2, 4 or 8 bytes.

Instruction Encoding:
• Instruction set has 7 different formats.

• All are 32 bits in length.


RISC ISA Example: MIPS R3000

Instruction Categories:
• Load/Store.
• Computational.
• Jump and Branch.
• Floating Point (using coprocessor).
• Memory Management.
• Special.

Instruction formats (figure):
OP | rs | rt | rd | sa | funct
OP | rs | rt | immediate
OP | jump target

Instruction Encoding: 3 Instruction Formats, all 32 bits wide.

Registers (figure): R0 - R31, PC, HI, LO

4 Addressing Modes:
• Base register + immediate offset (loads and stores).

• Register direct (arithmetic).

• Immediate (jumps).

• PC relative (branches).

Operand Sizes:
• Memory accesses in any multiple between 1 and 8 bytes.


A RISC ISA Example: MIPS

Register-Register (R-type):
31-26: Op | 25-21: rs | 20-16: rt | 15-11: rd | 10-6: sa | 5-0: funct

Register-Immediate (I-type):
31-26: Op | 25-21: rs | 20-16: rt | 15-0: immediate

Branch:
31-26: Op | 25-21: rs | 20-16: rt | 15-0: displacement

Jump / Call (J-type):
31-26: Op | 25-0: target
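The field boundaries in the MIPS formats can be checked with a short Python sketch that extracts each field by shifting and masking (`decode` is an invented helper; field extraction only, no opcode table is modeled):

```python
# Pull the fields of the three MIPS formats out of a 32-bit instruction word.
def decode(word):
    return {
        "op": (word >> 26) & 0x3F,        # bits 31-26
        "rs": (word >> 21) & 0x1F,        # bits 25-21
        "rt": (word >> 16) & 0x1F,        # bits 20-16
        "rd": (word >> 11) & 0x1F,        # R-type only, bits 15-11
        "sa": (word >> 6) & 0x1F,         # R-type only, bits 10-6
        "funct": word & 0x3F,             # R-type only, bits 5-0
        "immediate": word & 0xFFFF,       # I-type immediate / branch displacement
        "target": word & 0x3FFFFFF,       # J-type, bits 25-0
    }

# add $3, $1, $2 encodes as op=0, rs=1, rt=2, rd=3, sa=0, funct=0x20:
fields = decode((1 << 21) | (2 << 16) | (3 << 11) | 0x20)
assert (fields["op"], fields["rs"], fields["rt"], fields["rd"]) == (0, 1, 2, 3)
assert fields["funct"] == 0x20
```

Fixed field positions are exactly why fixed-length RISC encodings decode in one step: every field is at a known bit offset regardless of opcode.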


The Role of Compilers

The Structure of Recent Compilers:

Front-end per Language
– Dependencies: language dependent; machine independent.
– Function: transform language to common intermediate form.

High-level Optimizations
– Dependencies: somewhat language dependent; largely machine independent.
– Function: for example, procedure inlining and loop transformations.

Global Optimizer
– Dependencies: small language dependencies; machine dependencies slight (e.g. register counts/types).
– Function: include global and local optimizations + register allocation.

Code generator
– Dependencies: highly machine dependent; language independent.
– Function: detailed instruction selection and machine-dependent optimizations; may include or be followed by assembler.


Major Types of Compiler Optimization


Compiler Optimization and Instruction Count


An Instruction Set Example: The DLX Architecture
• A RISC-type instruction set architecture based on instruction set design considerations of chapter 2:

– Use general-purpose registers with a load/store architecture to access memory.

– Reduced number of addressing modes: displacement (offset size of 12 to 16 bits), immediate (8 to 16 bits), register deferred.

– Data sizes: 8, 16, 32 bit integers and 64-bit IEEE 754 floating-point numbers.

– Use fixed instruction encoding for performance and variable instruction encoding for code size.

– 32, 32-bit general-purpose registers, R0, …, R31. R0 always has a value of zero.

– Separate floating point registers: can be used as 32 single-precision registers, F0, F1, …, F31. Each odd-even pair can be used as a single 64-bit double-precision register: F0, F2, …, F30.
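The odd-even pairing of the floating-point registers can be sketched as follows (`double_to_singles` is an invented helper for illustration):

```python
# Map a DLX double-precision register to the pair of single-precision
# registers it occupies: Fn (n even) spans Fn and Fn+1.
def double_to_singles(n):
    assert n % 2 == 0, "double-precision registers are the even-numbered ones"
    return (f"F{n}", f"F{n + 1}")

assert double_to_singles(0) == ("F0", "F1")
assert double_to_singles(30) == ("F30", "F31")
```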


DLX Instruction Format

I-type instruction: Opcode (6 bits) | rs1 (5) | rd (5) | Immediate (16)
Encodes: Loads and stores of bytes, words, half words. All immediates (rd ← rs1 op immediate). Conditional branch instructions (rs1 is register, rd unused). Jump register, jump and link register (rd = 0, rs = destination, immediate = 0).

R-type instruction: Opcode (6 bits) | rs1 (5) | rs2 (5) | rd (5) | func (11)
Register-register ALU operations: rd ← rs1 func rs2. Function encodes the data path operation: Add, Sub, … Read/write special registers and moves.

J-type instruction: Opcode (6 bits) | Offset added to PC (26)
Jump and jump and link. Trap and return from exception.
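A sketch of packing a DLX I-type word from the fields above, Opcode (6) | rs1 (5) | rd (5) | Immediate (16) (`encode_i_type` is a hypothetical helper; the opcode value in the example is illustrative, not taken from a DLX opcode table):

```python
# Pack a DLX I-type instruction word from its fields.
def encode_i_type(opcode, rs1, rd, immediate):
    assert 0 <= opcode < 64 and 0 <= rs1 < 32 and 0 <= rd < 32
    imm16 = immediate & 0xFFFF          # 16-bit two's-complement immediate
    return (opcode << 26) | (rs1 << 21) | (rd << 16) | imm16

# e.g. LW R1, 30(R2) with an illustrative opcode value:
word = encode_i_type(opcode=0x23, rs1=2, rd=1, immediate=30)
assert word >> 26 == 0x23              # opcode field
assert (word >> 16) & 0x1F == 1        # rd field
assert word & 0xFFFF == 30             # immediate field
```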


DLX Instructions: Load and Store

LW R1, 30(R2) Load word Regs[R1] ←32 Mem[30+Regs[R2]]

LW R1, 1000(R0) Load word Regs[R1] ←32 Mem[1000+0]

LB R1, 40(R3) Load byte Regs[R1] ←32 (Mem[40+Regs[R3]]_0)^24 ## Mem[40+Regs[R3]]

LBU R1, 40(R3) Load byte unsigned Regs[R1] ←32 0^24 ## Mem[40+Regs[R3]]

LH R1, 40(R3) Load half word Regs[R1] ←32 (Mem[40+Regs[R3]]_0)^16 ## Mem[40+Regs[R3]] ## Mem[41+Regs[R3]]

LF F0, 50(R3) Load float Regs[F0] ←32 Mem[50+Regs[R3]]

LD F0, 50(R2) Load double Regs[F0] ## Regs[F1] ←64 Mem[50+Regs[R2]]

SW 500(R4), R3 Store word Mem[500+Regs[R4]] ←32 Regs[R3]

SF 40(R3), F0 Store float Mem[40+Regs[R3]] ←32 Regs[F0]

SD 40(R3), F0 Store double Mem[40+Regs[R3]] ←32 Regs[F0]; Mem[44+Regs[R3]] ←32 Regs[F1]

SH 502(R2), R3 Store half Mem[502+Regs[R2]] ←16 Regs[R3]_16…31

SB 41(R3), R2 Store byte Mem[41+Regs[R3]] ←8 Regs[R2]_24…31
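The difference between LB and LBU above is sign extension versus zero fill of the upper 24 bits. A minimal Python sketch (`lb` and `lbu` are invented helpers operating on a single byte value, not DLX software):

```python
# LB replicates the byte's sign bit into the upper bits; LBU zero-fills.
def lb(byte):          # load byte, signed
    assert 0 <= byte <= 0xFF
    return byte - 0x100 if byte & 0x80 else byte

def lbu(byte):         # load byte, unsigned
    assert 0 <= byte <= 0xFF
    return byte

assert lb(0xFF) == -1              # sign bit set: extended with ones
assert lbu(0xFF) == 255            # zero-extended
assert lb(0x7F) == lbu(0x7F) == 127
```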


DLX Instructions: Arithmetic/Logical

ADD R1, R2, R3 Add Regs[R1] ← Regs[R2] + Regs[R3]

ADDI R1, R2, #3 Add immediate Regs[R1] ← Regs[R2] + 3

LHI R1, #42 Load high immediate Regs[R1] ← 42 ## 0^16

SLLI R1, R2, #5 Shift left logical immediate Regs[R1] ← Regs[R2] << 5

SLT R1, R2, R3 Set less than if (Regs[R2] < Regs[R3]) Regs[R1] ← 1 else Regs[R1] ← 0
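The LHI and SLT definitions above can be sketched with the register file modeled as a Python dict (`lhi` and `slt` are invented helpers; register names are strings here for readability):

```python
# Toy register file and two DLX operation sketches.
regs = {}

def lhi(rd, imm16):
    # rd <- imm16 ## 0^16 : the immediate fills the upper half-word
    regs[rd] = (imm16 & 0xFFFF) << 16

def slt(rd, rs1, rs2):
    # set rd to 1 if rs1 < rs2, else 0
    regs[rd] = 1 if regs[rs1] < regs[rs2] else 0

lhi("R1", 42)
assert regs["R1"] == 42 << 16

regs["R2"], regs["R3"] = 5, 9
slt("R1", "R2", "R3")
assert regs["R1"] == 1     # 5 < 9
slt("R1", "R3", "R2")
assert regs["R1"] == 0
```

LHI paired with an OR or ADDI of the low half-word is how a full 32-bit constant is built from 16-bit immediate fields.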


DLX Instructions: Control-Flow

J name Jump PC ← name; ((PC+4) - 2^25) ≤ name < ((PC+4) + 2^25)

JAL name Jump and link Regs[R31] ← PC+4; PC ← name;
((PC+4) - 2^25) ≤ name < ((PC+4) + 2^25)

JALR R2 Jump and link register Regs[R31] ← PC+4; PC ← Regs[R2]

JR R3 Jump register PC ← Regs[R3]

BEQZ R4, name Branch equal zero if (Regs[R4] == 0) PC ← name;
((PC+4) - 2^15) ≤ name < ((PC+4) + 2^15)

BNEZ R4, name Branch not equal zero if (Regs[R4] != 0) PC ← name;
((PC+4) - 2^15) ≤ name < ((PC+4) + 2^15)
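The range conditions above say a PC-relative target must lie within 2^15 of PC+4 for branches (16-bit field) or 2^25 for jumps (26-bit field). A sketch of the encodability test (`in_range` is an invented helper; instruction-alignment details are ignored):

```python
# Is a target encodable in a PC-relative offset field of the given width?
def in_range(pc, target, offset_bits):
    half = 1 << (offset_bits - 1)          # e.g. 2^15 for a 16-bit field
    return (pc + 4) - half <= target < (pc + 4) + half

assert in_range(pc=1000, target=1000 + 4 + 2**15 - 1, offset_bits=16)
assert not in_range(pc=1000, target=1000 + 4 + 2**15, offset_bits=16)   # one too far
assert in_range(pc=1000, target=0, offset_bits=26)                      # jumps reach farther
```

When a branch target falls outside this window, the compiler or assembler must rewrite it, e.g. as an inverted branch over an unconditional jump.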


Sample DLX Instruction Distribution Using SPECint92


DLX Instruction Distribution Using SPECfp92