COMP25212 Lecture 5: Pipelining - Reducing Instruction Execution Time


Page 1

Pipelining

Reducing Instruction Execution Time

Page 2

The Fetch-Execute Cycle

• Instruction execution is a simple repetitive cycle

[Figure: CPU and Memory connected in a loop - Fetch Instruction, then Execute Instruction, repeated]

Page 3

Cycles of Operation

• Most logic circuits are driven by a clock

• In its simplest form one operation would take one clock cycle

• This assumes that getting an instruction and accessing data memory can each be done in 1/5th of a cycle (i.e. a cache hit)

Page 4

Fetch-Execute Detail

The two parts of the cycle can be subdivided

• Fetch
  – Get instruction from memory
  – Decode instruction & select registers

• Execute
  – Perform operation or calculate address
  – Access an operand in data memory
  – Write result to a register

Page 5

Processor Detail

[Figure: processor datapath - PC, Instruction Cache, Register Bank, MUX, ALU and Data Cache - divided into five stages: IF (Instruction Fetch), ID (Instruction Decode), EX (Execute Instruction), MEM (Access Memory), WB (Write Back)]

Page 6

Logic to do this

• Each stage will do its work and pass it to the next
• Each block is only doing work during 1/5th of each cycle

[Figure: Fetch Logic, Decode Logic, Exec Logic, Mem Logic and Write Logic in series; the Fetch Logic reads the Inst Cache and the Mem Logic accesses the Data Cache]

Page 7

Can We Overlap Operations?

• E.g. while decoding one instruction we could be fetching the next

Clock cycle:    1     2     3     4     5     6     7
Inst a         IF    ID    EX   MEM    WB
Inst b               IF    ID    EX   MEM    WB
Inst c                     IF    ID    EX   MEM    WB
Inst d                           IF    ID    EX   MEM
Inst e                                 IF    ID    EX
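To make the overlap concrete, here is a minimal sketch (Python, not from the original slides) that prints the stage each instruction occupies in each clock cycle, assuming one instruction enters the pipeline per cycle and nothing stalls:

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_diagram(instructions, cycles):
    # Ideal flow: instruction i enters IF in cycle i+1 and moves one
    # stage per cycle, so its stage index in cycle c is c - 1 - i.
    print("      " + "".join(f"{c:>5}" for c in range(1, cycles + 1)))
    for i, name in enumerate(instructions):
        row = ""
        for c in range(1, cycles + 1):
            s = c - 1 - i
            cell = STAGES[s] if 0 <= s < len(STAGES) else ""
            row += f"{cell:>5}"
        print(f"{name:>6}" + row)

pipeline_diagram(["a", "b", "c", "d", "e"], cycles=7)   # reproduces the table above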

Page 8

Insert Buffers Between Stages

• Instead of direct connection between stages – use extra buffers to hold state

• Clock buffers once per cycle

[Figure: the same five logic blocks, now with pipeline buffers (e.g. the Instruction Register) inserted between stages and clocked once per cycle; Inst Cache and Data Cache attached as before]

Page 9

This is a Pipeline

• Just like a car production line: one stage puts the engine in, the next puts the wheels on, etc.

• We still execute one instruction every cycle

• We can now increase the clock speed by 5x

• 5x faster!

• But it isn’t quite that easy!
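A back-of-envelope version of the 5x claim (a sketch with made-up stage delays, not figures from the lecture): the pipelined clock period is set by the slowest stage, so the full 5x only appears when the five stages are perfectly balanced.

# Hypothetical stage delays in ns (illustrative assumption only)
delays = {"IF": 2.0, "ID": 2.0, "EX": 2.0, "MEM": 2.0, "WB": 2.0}

unpipelined = sum(delays.values())   # one instruction per long cycle
pipelined = max(delays.values())     # clock limited by the slowest stage
print(unpipelined / pipelined)       # 5.0 with balanced stages

If MEM took 3.0 ns instead, the speedup would drop to 11.0/3.0, roughly 3.7x - one reason the stages are chosen to be of roughly equal complexity (next slide).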

Page 10

Why 5 Stages?

• Simply because the designers of early pipelined processors found that dividing execution into these 5 stages of roughly equal complexity was appropriate

• Some recent processors have used more than 30 pipeline stages

• We will consider 5 for simplicity at the moment

Page 11

Control Hazards

Page 12

The Control Transfer Problem

• The obvious way to fetch instructions is in serial program order (i.e. just incrementing the PC)

• What if we fetch a branch?
• We only know it's a branch when we decode it in the second stage of the pipeline
• By that time we are already fetching the next instruction in serial order

Page 13

A Pipeline ‘Bubble’

[Figure: instructions Inst 1, Inst 2, Inst 3, Branch n, Inst 5, ... flowing down the pipeline; the branch is only recognised when it reaches the decode stage, by which time Inst 5 (its sequential successor) has already been fetched, and Inst n enters a cycle late]

We must mark Inst 5 as unwanted and ignore it as it goes down the pipeline. But we have wasted a cycle.

Page 14

Conditional Branches

• It gets worse!

• Suppose we have a conditional branch

• It is possible that we might not be able to determine the branch outcome until the execute (3rd) stage

• We would then have 2 ‘bubbles’

• We can often avoid this by reading registers during the decode stage.

Page 15

Longer Pipelines

• ‘Bubbles’ due to branches are usually called Control Hazards

• They occur when it takes one or more pipeline stages to detect the branch

• The more stages there are, the less each one does

• So branch detection is more likely to take multiple stages

• Longer pipelines usually suffer more degradation from control hazards

Page 16

Branch Prediction

• In most programs a branch instruction is executed many times

• Also, the branch instruction will be at the same (virtual) address in memory each time

• What if, when a branch was executed
  – We 'remembered' its address
  – We 'remembered' the address that was fetched next

Page 17

Branch Target Buffer

• We could do this with some sort of cache

• As we fetch the branch we look it up in the buffer
• If there is a valid entry in the buffer we use it to fetch the next instruction

Address            Data
Branch Address     Target Address

Page 18

Branch Target Buffer

• For an unconditional branch we would always get it right

• For a conditional branch it depends on the probability that the branch goes the same way as it did the previous time

• E.g. for a 'for' loop which jumps back many times, we will get it right most of the time

• But it is only a prediction; if we get it wrong we correct it next cycle (and suffer a 'bubble')

Page 19

Outline Implementation

[Figure: outline fetch stage - the PC addresses both the Inst Cache and the Branch Target Buffer in parallel; if the BTB entry is valid its target becomes the next PC, otherwise the PC is simply incremented]
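A minimal functional sketch of that fetch-stage lookup (Python; a real BTB is a small hardware cache, and the 4-byte instruction size here is an assumption):

class BranchTargetBuffer:
    def __init__(self):
        self.entries = {}                 # branch address -> target address

    def predict_next(self, pc):
        # Valid entry: fetch from the remembered target next.
        # No entry: fall through to sequential fetch (pc + 4).
        return self.entries.get(pc, pc + 4)

    def update(self, pc, actual_next_pc):
        # Remember where the branch at 'pc' actually went this time.
        self.entries[pc] = actual_next_pc

btb = BranchTargetBuffer()
btb.update(0x100, 0x040)                  # branch at 0x100 jumped to 0x040
assert btb.predict_next(0x100) == 0x040   # predicted from the buffer
assert btb.predict_next(0x104) == 0x108   # no entry: just increment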

Page 20

Other Branch Prediction

• BTB is simple to understand but expensive to implement

• Also, as described, it just uses the branch's most recent outcome to predict

• In practice branch prediction depends on
  – More history (several previous branches)
  – Context (how did we get to this branch)

• Real branch predictors are more complex and vital to performance (especially with long pipelines)
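As one concrete example of using more history (a standard scheme, though not one the slides spell out): a 2-bit saturating counter per branch must be wrong twice before its prediction flips, so a loop branch mispredicts only once per loop exit rather than on every iteration. A sketch in Python:

class TwoBitPredictor:
    def __init__(self):
        self.counters = {}                     # branch address -> 0..3

    def predict_taken(self, pc):
        return self.counters.get(pc, 1) >= 2   # 0,1: not taken; 2,3: taken

    def train(self, pc, taken):
        c = self.counters.get(pc, 1)
        self.counters[pc] = min(c + 1, 3) if taken else max(c - 1, 0)

p = TwoBitPredictor()
wrong = 0
for taken in [True] * 9 + [False]:   # loop branch: taken 9 times, then exits
    if p.predict_taken(0x100) != taken:
        wrong += 1
    p.train(0x100, taken)
print(wrong)                         # 2: once warming up, once at the exit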

Page 21

Data Hazards

Page 22

Data Hazards

• The pipeline can cause other problems

• Consider

ADD R1,R2,R3
MUL R0,R1,R1

• The ADD instruction is producing a value in R1

• The following MUL instruction uses R1 as input

Page 23

Instructions in the Pipeline

[Figure: the five-stage datapath (IF ID EX MEM WB) with ADD R1,R2,R3 one stage ahead of MUL R0,R1,R1 as they move down the pipeline]

Page 24

The Data isn’t Ready

• At the end of its ID cycle, the MUL instruction should have selected the value of R1 to put into the buffer at the input of the EX stage

• But the correct value of R1, from the ADD instruction, is only being put into the buffer at the output of the EX stage at this time

• It won't get to the input of the Register Bank until one cycle later, and then it probably takes another cycle to be written into R1

Page 25

Insert Delays?

• One solution is to detect such data dependencies in hardware and hold the instruction in the decode stage until the data is ready – 'bubbles' & wasted cycles again (the check is sketched below)

• Another is to use the compiler to try to reorder instructions

• This only works if we can find something useful to do, otherwise we must insert NOPs, which is wasteful
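The first option amounts to a simple check in the decode stage; a minimal Python sketch (register names are illustrative):

def must_stall(src_regs, in_flight_dests):
    # Hold the instruction in decode if any of its source registers is
    # the destination of an instruction still in the pipeline ahead of it.
    return any(src in in_flight_dests for src in src_regs)

# ADD R1,R2,R3 is still in flight when MUL R0,R1,R1 reaches decode:
assert must_stall(("R1", "R1"), in_flight_dests={"R1"})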

Page 26

Forwarding

[Figure: the datapath with an extra forwarding path from the buffer at the ALU output straight back to the ALU input, bypassing the Register Bank; ADD R1,R2,R3 forwards its result to MUL R0,R1,R1]

• We can add extra paths for specific cases
• Control becomes more complex
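A functional sketch of the forwarding choice for one ALU input, assuming results can be picked up from the buffers at the EX and MEM stage outputs (newest value wins):

def alu_operand(src, regfile, ex_out, mem_out):
    # ex_out / mem_out: (dest_register, value) held in the pipeline
    # buffers, or None. Prefer the newest in-flight value.
    if ex_out is not None and ex_out[0] == src:
        return ex_out[1]        # forward straight from the ALU output
    if mem_out is not None and mem_out[0] == src:
        return mem_out[1]       # forward from the memory-stage buffer
    return regfile[src]         # no hazard: read the register bank

regs = {"R1": 0, "R2": 3, "R3": 4}
# ADD R1,R2,R3 has just left EX with R1 = 7; the register bank still
# holds the stale 0, but MUL R0,R1,R1 gets 7 via the forwarding path.
assert alu_operand("R1", regs, ex_out=("R1", 7), mem_out=None) == 7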

Page 27

Why did it Occur?

• Due to the design of our pipeline
• In this case, the result we want is ready one stage ahead of where it is needed, so why pass it down the pipeline?
• But what if we have the sequence

LDR R1,[R2,R3]
MUL R0,R1,R1

• The LDR instruction means load R1 from memory address R2+R3

Page 28

Pipeline Sequence for LDR

• Fetch

• Decode and read registers (R2 & R3)

• Execute – add R2+R3 to form address

• Memory access, read from address

• Now we can write the value into register R1

• We have designed the ‘worst case’ pipeline to work for all instructions

Page 29

Forwarding

[Figure: the same forwarding datapath, now with LDR R1,[R2,R3] followed by a NOP and then MUL R0,R1,R1; the loaded value is forwarded from the Data Cache output to the ALU input]

• We can add extra paths for specific cases
• Control becomes more complex
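The load-use case is the one hazard forwarding cannot fully hide; the usual interlock condition (a textbook formulation, not verbatim from the slides) can be sketched as:

def load_use_stall(ex_is_load, ex_dest, id_srcs):
    # The loaded value only exists after the MEM stage, so an
    # immediately following consumer must wait one cycle (the NOP above).
    return ex_is_load and ex_dest in id_srcs

# LDR R1,[R2,R3] in EX while MUL R0,R1,R1 is in ID -> insert a bubble
assert load_use_stall(True, "R1", ("R1", "R1"))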

Page 30

Longer Pipelines

• As mentioned previously we can go to longer pipelines
  – Do less per pipeline stage
  – Each step takes less time
  – So can increase clock frequency
  – But greater penalty for hazards
  – More complex control

• Negative returns?

Page 31

Where Next?

• Despite these difficulties it is possible to build processors which approach 1 cycle per instruction (cpi)

• Given that the computational model is one of serial instruction execution can we do any better than this?

Page 32

Instruction Level Parallelism

Page 33

Instruction Level Parallelism (ILP)

• Suppose we have an expression of the form x = (a+b) * (c-d)

• Assuming a,b,c & d are in registers, this might turn into

ADD R0, R2, R3

SUB R1, R4, R5

MUL R0, R0, R1

STR R0, x

Page 34

ILP (cont)

• The MUL has a dependence on the ADD and the SUB, and the STR has a dependence on the MUL

• However, the ADD and SUB are independent

• In theory, we could execute them in parallel, even out of order

ADD R0, R2, R3
SUB R1, R4, R5
MUL R0, R0, R1
STR R0, x

Page 35

The Data Flow Graph

• We can see this more clearly if we draw the data flow graph

[Figure: data flow graph - R2 and R3 feed the ADD, R4 and R5 feed the SUB, both results feed the MUL, and the MUL result is stored to x]

As long as R2, R3, R4 & R5 are available, we can execute the ADD & SUB in parallel.
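The graph can be built mechanically from the register names; here is a Python sketch that recovers the dependences for the four instructions above (read-after-write only, for simplicity):

def dependences(instrs):
    # instrs: list of (dest, sources). Instruction j depends on the most
    # recent earlier instruction that writes one of j's source registers.
    deps = {j: set() for j in range(len(instrs))}
    for j, (_, srcs) in enumerate(instrs):
        for src in srcs:
            for i in range(j - 1, -1, -1):
                if instrs[i][0] == src:
                    deps[j].add(i)
                    break
    return deps

prog = [("R0", ("R2", "R3")),    # ADD R0, R2, R3
        ("R1", ("R4", "R5")),    # SUB R1, R4, R5
        ("R0", ("R0", "R1")),    # MUL R0, R0, R1
        ("x",  ("R0",))]         # STR R0, x  (store target treated as dest)
print(dependences(prog))         # {0: set(), 1: set(), 2: {0, 1}, 3: {2}}

Instructions 0 and 1 have no dependences, which is exactly the ADD/SUB parallelism shown in the graph.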

Page 36

Amount of ILP?

• This is obviously a very simple example

• However, real programs often have quite a few independent instructions which could be executed in parallel

• The exact number is clearly program dependent, but analysis has shown that an ILP of around 4 is not uncommon (in parts of the program, anyway)

Page 37

How to Exploit?

• We need to fetch multiple instructions per cycle – wider instruction fetch

• Need to decode multiple instructions per cycle

• But they must use common registers – they are logically the same registers

• Need multiple ALUs for execution

• But also access common data cache

Page 38

Dual Issue Pipeline Structure

• Two instructions can now execute in parallel
• (Potentially) double the execution rate
• Called a 'Superscalar' architecture

[Figure: dual-issue datapath - the Instruction Cache supplies two instructions I1 and I2 per cycle; they share the PC, Register Bank and Data Cache, but each has its own MUX and ALU]

Page 39

Register & Cache Access

• Note the access rate to both registers & cache will be doubled

• To cope with this we may need a dual ported register bank & dual ported cache.

• This can be done either by duplicating the access circuitry or even by duplicating the whole register & cache structure

Page 40

Selecting Instructions

• To get the doubled performance out of this structure, we need to have independent instructions

• We can have a ‘dispatch unit’ in the fetch stage which uses hardware to examine the instruction dependencies and only issue two in parallel if they are independent
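A sketch of what such a dispatch unit checks for an in-order pair of instructions (registers only; real issue logic also has to worry about memory and execution resources):

def can_dual_issue(first, second):
    # first/second: (dest, src1, src2). Refuse to pair them if the
    # second reads the first's result (read-after-write) or both
    # write the same register (write-after-write).
    d1 = first[0]
    d2, s1, s2 = second
    return d1 not in (s1, s2) and d1 != d2

# ADD R1,R1,R0 ; MUL R0,R1,R1 -> MUL reads R1: must issue alone
assert not can_dual_issue(("R1", "R1", "R0"), ("R0", "R1", "R1"))
# ADD R1,R1,R0 ; ADD R3,R4,R5 -> independent: can issue together
assert can_dual_issue(("R1", "R1", "R0"), ("R3", "R4", "R5"))

This is exactly why the reordered sequence on the following slides issues in pairs while the original order cannot.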

Page 41

Instruction Order

• If we had

ADD R1,R1,R0

MUL R0,R1,R1

ADD R3,R4,R5

MUL R4,R3,R3

• Issued in pairs as above

• We wouldn’t be able to issue any in parallel because of dependencies

Page 42

Compiler Optimisation

• But if the compiler had examined dependencies and produced

ADD R1,R1,R0

ADD R3,R4,R5

MUL R0,R1,R1

MUL R4,R3,R3

• We can now execute pairs in parallel (assuming appropriate forwarding logic)

Page 43

Relying on the Compiler

• If the compiler can't manage to reorder the instructions, we still need hardware to avoid issuing conflicting pairs

• But if we could rely on the compiler, we could get rid of expensive checking logic

• This is the principle of VLIW (Very Long Instruction Word)

• Compiler must add NOPs if necessary

Page 44

Out of Order Execution

• There are arguments against relying on the compiler
  – Legacy binaries: optimum code is tied to a particular hardware configuration
  – 'Code bloat' in VLIW: useless NOPs

• Instead rely on hardware to re-order instructions if necessary

• Complex but effective

Page 45

Out of Order Execution Processor

• An instruction buffer needs to be added to store all issued instructions

• A scheduler is in charge of sending conflict-free instructions to execute

• Memory and register accesses need to be delayed until all older instructions have finished, to comply with the program's semantics

Page 46

Out of Order Execution

• Instruction Dispatching and Scheduling
• Memory and register accesses deferred

[Figure: out-of-order datapath - the PC and Instr. Cache feed an Instruction Buffer; Dispatch and Schedule logic sends ready instructions to the ALU; a Register Queue and a Memory Queue delay updates to the Register Bank and Data Cache until older instructions have finished]

Page 47

Programmer Assisted ILP / Vector Instructions

• Linear Algebra operations such as Vector Product, Matrix Multiplication have LOTS of parallelism

• This can be hard to detect in languages like C

• Instructions can be too separated for hardware detection.

• The programmer can use types such as float4
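float4 is the style of type found in shader and OpenCL-like languages; as a rough illustration of the same idea in Python, a NumPy array states the four-wide parallelism explicitly, so it does not have to be rediscovered by the hardware:

import numpy as np

# Four-wide vectors: one expression operates on all four lanes,
# the way a float4 addition would in a shader-style language.
a = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
b = np.array([10.0, 20.0, 30.0, 40.0], dtype=np.float32)
c = a + b          # all four additions expressed at once
print(c)           # [11. 22. 33. 44.]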

Page 48

Limits of ILP

• Modern processors are up to 4 way superscalar (but rarely achieve 4x speed)

• Not much beyond this
  – Hardware complexity
  – Limited amounts of ILP in real programs

• Limited ILP is not surprising: conventional programs are written assuming a serial execution model – what next?