
Chapter 2 Memory Hierarchy Design

• Introduction

• Cache performance

• Advanced cache optimizations

• Memory technology and DRAM optimizations

• Virtual machines

• Conclusion


Many Levels in Memory Hierarchy

• Pipeline registers

• Register file

• 1st-level cache (on-chip)

• 2nd-level cache (on same MCM as CPU)

• Physical memory (usu. mounted on same board as CPU)

• Virtual memory (on hard disk, often in same enclosure as CPU)

• Disk files (on hard disk, often in same enclosure as CPU)

• Network-accessible disk files (often in the same building as the CPU)

• Tape backup/archive system (often in the same building as the CPU)

• Data warehouse: robotically-accessed room full of shelves of tapes (usually on the same planet as the CPU)

Annotations from the figure: these levels are our focus in this chapter; the upper levels are usually made invisible to the programmer (even assembly programmers), the next ones are invisible only to high-level-language programmers, and there can also be a 3rd (or more) cache level between the 2nd-level cache and physical memory.


Simple Hierarchy Example

• Note the many orders-of-magnitude changes in characteristics between levels. From the figure: capacity grows by roughly ×128, ×8192, and ×200 between successive levels, while random-access time grows by roughly ×4, ×100, and ×50,000 (e.g., 2 GB main memory; 1 TB disk with ~10 ms random access).


Why More on Memory Hierarchy?

(Figure: relative performance of processor vs. memory, 1980–2010, log scale from 1 to 100,000. The processor–memory performance gap keeps growing.)


Three Types of Misses

• Compulsory
– During a program, the very first access to a block will not be in the cache (unless pre-fetched)

• Capacity
– The working set of blocks accessed by the program is too large to fit in the cache

• Conflict
– Unless the cache is fully associative, blocks may sometimes be evicted too early (compared to fully-associative) because too many frequently-accessed blocks map to the same limited set of frames (see the sketch below)
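To make the conflict case concrete, here is a minimal C sketch (cache and block sizes invented for illustration) of how a direct-mapped cache maps a block address to a frame, and why two hot addresses can keep evicting each other:

#include <stdio.h>
#include <stdint.h>

#define CACHE_SIZE 32768                      /* 32 KB, hypothetical */
#define BLOCK_SIZE 64                         /* 64-byte blocks, hypothetical */
#define NUM_FRAMES (CACHE_SIZE / BLOCK_SIZE)  /* 512 frames */

/* Direct-mapped: each block address maps to exactly one frame. */
static unsigned frame_of(uintptr_t addr) {
    return (unsigned)((addr / BLOCK_SIZE) % NUM_FRAMES);
}

int main(void) {
    /* Two addresses exactly CACHE_SIZE apart share a frame, so
       alternating accesses evict each other: conflict misses. */
    uintptr_t a = 0x10000, b = a + CACHE_SIZE;
    printf("frame(a)=%u, frame(b)=%u\n", frame_of(a), frame_of(b));
    return 0;
}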


Misses by Type

(Figure: miss rate by miss type vs. cache size and associativity.)

Conflict misses are significant in a direct-mapped cache.
Going from direct-mapped to 2-way helps about as much as doubling the cache size.
Going from direct-mapped to 4-way is better than doubling the cache size.


(Figure: the same miss-type data, shown as a fraction of total misses.)


Cache Performance

• Consider memory delays in calculating CPU time:

CPU time = IC × (CPI_execution + Memory stall cycles per instruction) × Cycle time

Memory stall cycles per instruction = Miss rate × (Memory accesses per instruction) × Miss penalty,
so CPU time = IC × (CPI_execution + Miss rate × Memory accesses per instruction × Miss penalty) × Cycle time

• Example: ideal CPI = 1, 1.5 memory references per instruction, miss rate = 2%, miss penalty = 100 CPU cycles:

CPU time = IC × (1.0 + 2% × 1.5 × 100) × Cycle time = 4.0 × IC × CT

• An in-order pipeline is assumed
– The lower the ideal CPI, the larger the relative impact of cache misses
– Because the penalty is measured in CPU cycles, a faster cycle time means more penalty cycles per miss


Cache Performance Example

• Ideal-L1 CPI = 2.0, 1.5 references per instruction, cache size = 64 KB, miss penalty = 75 ns, hit time = 1 clock cycle

• Compare the performance of two caches:
– Direct-mapped (1-way): cycle time = 1 ns, miss rate = 1.4%
– 2-way: cycle time = 1.25 ns, miss rate = 1.0%

Miss penalty(1-way) = 75 ns / 1 ns = 75 cycles
Miss penalty(2-way) = 75 ns / 1.25 ns = 60 cycles

CPU time(1-way) = IC × (2.0 + 1.4% × 1.5 × 75) × 1 ns = 3.575 × IC ns
CPU time(2-way) = IC × (2.0 + 1.0% × 1.5 × 60) × 1.25 ns = 3.625 × IC ns
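As a sanity check, here is a minimal C sketch of the CPU-time formula applied to this example (all constants come from the slide above):

#include <stdio.h>

/* CPU time per instruction, in ns: (CPI_exec + miss-rate * refs/inst *
   miss-penalty-cycles) * cycle-time. */
static double cpu_time_per_inst(double cpi_exec, double refs_per_inst,
                                double miss_rate, double penalty_cycles,
                                double cycle_ns) {
    return (cpi_exec + miss_rate * refs_per_inst * penalty_cycles) * cycle_ns;
}

int main(void) {
    printf("1-way: %.3f ns/inst\n",
           cpu_time_per_inst(2.0, 1.5, 0.014, 75.0 / 1.0, 1.0));   /* 3.575 */
    printf("2-way: %.3f ns/inst\n",
           cpu_time_per_inst(2.0, 1.5, 0.010, 75.0 / 1.25, 1.25)); /* 3.625 */
    return 0;
}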


Out-Of-Order Processor

• Define a new “miss penalty” that accounts for overlap
– Need to decide what counts as memory latency and what counts as overlapped latency
– Not straightforward

Memory stall cycles per instruction = (Misses / Instruction) × (Total miss latency − Overlapped miss latency)

• Example (from the previous slide)
– Assume 30% of the 75 ns penalty can be overlapped, but the 1-way design needs the longer 1.25 ns cycle due to OOO

Miss penalty(1-way) = 75 ns × 70% / 1.25 ns = 42 cycles
CPU time(1-way, OOO) = IC × (2.0 + 1.4% × 1.5 × 42) × 1.25 ns = 3.60 × IC ns


An Alternative Metric

Tacc = Thit + fmiss × T+miss
(Average memory access time) = (Hit time) + (Miss rate)×(Miss penalty)

• The times Tacc, Thit, and T+miss can be either:
– real time (e.g., nanoseconds) or a number of clock cycles
– T+miss means the extra (not total) time (or cycles) for a miss
• in addition to Thit, which is incurred by all accesses

• Average memory access time does not factor the other (non-memory) instructions into the formula, so the same example gives the opposite ranking:

(Diagram: CPU → cache → lower levels of the hierarchy; hit time is at the cache, miss penalty below it.)

Average mem access time(1-way) = 1.0 ns + (1.4% × 75 × 1 ns) = 2.05 ns
Average mem access time(2-way) = 1.0 × 1.25 ns + (1% × 60 × 1.25 ns) = 2.0 ns


Another View (Multi-cycle Cache)

• Instead of increasing the cycle time, the 2-way cache can use a two-cycle cache access. Then

Average mem access time(2-way) = 1.0 × 1.25 ns + (1% × 60 × 1.25 ns) = 2.0 ns

becomes:

Average mem access time(2-way) = 2.0 × 1 ns + (1% × 75 × 1 ns) = 2.75 ns

• In reality, not every second cycle of a 2-cycle load stalls a dependent instruction; if only 20% do (note the formula above assumes the extra cycle impacts all accesses):

Average mem access time(2-way) = 1.2 × 1 ns + (1% × 75 × 1 ns) = 1.95 ns

• Note: the cycle time always impacts all accesses.


Cache Performance

• Consider the cache performance equation:

(Average memory access time) = (Hit time) + (Miss rate)×(Miss penalty)

where (Miss rate)×(Miss penalty) is the “amortized miss penalty”

• It follows that there are four basic ways to improve cache performance:
– Reducing miss penalty
– Reducing miss rate
– Reducing miss penalty/rate via parallelism
– Reducing hit time

• Note that by Amdahl’s Law, there will be diminishing returns from reducing only the hit time or only the amortized miss penalty, instead of both together.


6 Basic Cache Optimizations

Reducing hit time
1. Giving Reads Priority over Writes
• E.g., a read completes before earlier writes still in the write buffer
2. Avoiding Address Translation during Cache Indexing

Reducing Miss Penalty
3. Multilevel Caches

Reducing Miss Rate
4. Larger Block size (compulsory misses)
5. Larger Cache size (capacity misses)
6. Higher Associativity (conflict misses)


Multiple-Level Caches

• Avg mem access time = Hit time(L1) + Miss rate(L1) × Miss penalty(L1)

• Miss penalty(L1) = Hit time(L2) + Miss rate(L2) × Miss penalty(L2)

• Plugging the 2nd equation into the first:
– Avg mem access time = Hit time(L1) + Miss rate(L1) × (Hit time(L2) + Miss rate(L2) × Miss penalty(L2))


Multi-level Cache Terminology

• “Local miss rate”
– The miss rate of one hierarchy level by itself
– # of misses at that level / # of accesses to that level
– e.g., Miss rate(L1), Miss rate(L2)

• “Global miss rate”
– The miss rate of a whole group of hierarchy levels
– # of accesses coming out of that group (to lower levels) / # of accesses into that group
– Generally this is the product of the miss rates at each level in the group
– Global L2 miss rate = Miss rate(L1) × Local miss rate(L2) (worked example below)
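A worked two-level example as a small C sketch; the hit times and miss rates here are invented for illustration. It computes the global L2 miss rate and the two-level average memory access time from the equations above:

#include <stdio.h>

int main(void) {
    double l1_hit = 1, l2_hit = 10, mem = 100;   /* cycles, assumed */
    double mr_l1 = 0.04, local_mr_l2 = 0.20;     /* assumed miss rates */

    double global_mr_l2 = mr_l1 * local_mr_l2;   /* 0.8% of all references */
    double amat = l1_hit + mr_l1 * (l2_hit + local_mr_l2 * mem);

    printf("global L2 miss rate = %.1f%%\n", global_mr_l2 * 100);
    printf("AMAT = %.1f cycles\n", amat);  /* 1 + 0.04*(10 + 0.20*100) = 2.2 */
    return 0;
}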


Effect of 2-level Caching

• L2 size is usually much bigger than L1
– Provides a reasonable hit rate
– Decreases the miss penalty of the 1st-level cache
– May increase the L2 miss penalty

• Multiple-level cache inclusion property
– Inclusive cache: L1 is a subset of L2; simplifies the cache coherence mechanism; effective cache size = L2
– Exclusive cache: L1 and L2 contents are disjoint; increases the effective cache size = L1 + L2
– Enforcing the inclusion property: backward invalidation on L2 replacement


11 Advanced Cache Optimizations

• Reducing hit time
– Small and simple caches
– Way prediction
– Trace caches

• Increasing cache bandwidth
– Pipelined caches
– Multibanked caches
– Nonblocking caches

• Reducing miss penalty
– Critical word first
– Merging write buffers

• Reducing miss rate
– Compiler optimizations

• Reducing miss penalty or miss rate via parallelism
– Hardware prefetching
– Compiler prefetching


1. Fast Hit times via Small and Simple Caches

• Indexing the tag memory and then comparing takes time
• A small cache helps hit time, since a smaller memory takes less time to index
– E.g., L1 caches stayed the same size across 3 generations of AMD microprocessors: K6, Athlon, and Opteron
– Also, an L2 cache small enough to fit on chip with the processor avoids the time penalty of going off chip
• Simple direct mapping
– Can overlap the tag check with data transmission, since there is no choice of way
• Access time estimates for 90 nm using the CACTI 4.0 model
– Median ratios of access time relative to the direct-mapped caches are 1.32, 1.39, and 1.43 for 2-way, 4-way, and 8-way caches

(Figure: access time in ns, 0 to 2.50, vs. cache size, 16 KB to 1 MB, for 1-way, 2-way, 4-way, and 8-way caches.)


2. Fast Hit times via Way Prediction

• How can we combine the fast hit time of a direct-mapped cache with the lower conflict misses of a 2-way set-associative cache?
• Way prediction: keep extra bits in the cache to predict the “way,” or block within the set, of the next cache access
– The multiplexor is set early to select the desired block, and only 1 tag comparison is performed that clock cycle, in parallel with reading the cache data
– On a miss of the predicted way, check the other blocks for matches in the next clock cycle
• Accuracy ≈ 85% (can be higher with a bigger history); expected hit time is computed below
• Drawback: the CPU pipeline is hard to design if a hit can take 1 or 2 cycles
– Used for instruction caches rather than data caches

(Timeline: a predicted way hits in the normal hit time; a way-miss adds a cycle; a true miss pays the miss penalty.)
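Using the slide’s numbers, a tiny C sketch of the expected hit time under way prediction, assuming a way-miss costs exactly one extra cycle as described above:

#include <stdio.h>

int main(void) {
    double accuracy = 0.85;            /* prediction accuracy from the slide */
    double hit = 1.0, way_miss = 2.0;  /* cycles: predicted hit vs. way-miss */
    printf("avg hit time = %.2f cycles\n",
           accuracy * hit + (1 - accuracy) * way_miss);  /* 1.15 */
    return 0;
}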


3. Fast Hit times via Trace Cache (Pentium 4 only; and last time?)

• Find more instruction-level parallelism? How to avoid translation from x86 to micro-ops?
• Trace cache in the Pentium 4
• Caches dynamic traces of the executed instructions, rather than static sequences of instructions as determined by layout in memory
– Built-in branch predictor
• Caches the micro-ops rather than x86 instructions
– Decode/translate from x86 to micro-ops on a trace cache miss
• +1. Better utilizes long blocks (don’t exit in the middle of a block, don’t enter at a label in the middle of a block)
• −1. Complicated address mapping, since addresses are no longer aligned to power-of-2 multiples of the word size
• −1. Instructions may appear multiple times in multiple dynamic traces due to different branch outcomes


4: Increasing Cache Bandwidth by Pipelining

• Pipeline cache accesses to maintain bandwidth, at the cost of higher latency
• Instruction cache access pipeline stages:
– 1: Pentium
– 2: Pentium Pro through Pentium III
– 4: Pentium 4
→ greater penalty on mispredicted branches, and more clock cycles between the issue of a load and the use of its data


5. Increasing Cache Bandwidth: Non-Blocking Caches

• A non-blocking cache (lockup-free cache) allows the data cache to continue to supply cache hits during a miss
– requires full/empty (F/E) bits on registers, or out-of-order execution
– requires multi-bank memories
• “Hit under miss” reduces the effective miss penalty by working during a miss instead of ignoring CPU requests
• “Hit under multiple miss” or “miss under miss” may further lower the effective miss penalty by overlapping multiple misses
– Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses
– Requires multiple memory banks (otherwise it cannot be supported)
– The Pentium Pro allows 4 outstanding memory misses


Nonblocking Cache Stats

(Figure: average stall time as a percentage of a blocking cache, for hit-under-1-miss, hit-under-2-misses, and hit-under-64-misses, with averages.)


Value of Hit Under Miss for SPEC

• FP programs on average: AMAT = 0.68 → 0.52 → 0.34 → 0.26
• Int programs on average: AMAT = 0.24 → 0.20 → 0.19 → 0.19
• 8 KB data cache, direct mapped, 32 B blocks, 16-cycle miss, SPEC 92

(Figure: average memory access time per benchmark under “hit under n misses,” with n going 0→1, 1→2, 2→64, plus the base case. Integer: eqntott, espresso, xlisp, compress, mdljsp2; floating point: ear, fpppp, tomcatv, swm256, doduc, su2cor, wave5, mdljdp2, hydro2d, alvinn, nasa7, spice2g6, ora.)


6: Increasing Cache Bandwidth via Multiple Banks

• Rather than treating the cache as a single monolithic block, divide it into independent banks that can support simultaneous accesses
– E.g., the T1 (“Niagara”) L2 has 4 banks
• Banking works best when the accesses naturally spread themselves across the banks, so the mapping of addresses to banks affects the behavior of the memory system
• A simple mapping that works well is “sequential interleaving” (see the sketch below)
– Spread block addresses sequentially across the banks
– E.g., with 4 banks, bank 0 has all blocks whose address modulo 4 is 0, bank 1 has all blocks whose address modulo 4 is 1, …
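A minimal C sketch of sequential interleaving, with the bank count taken from the T1 example above:

#include <stdio.h>

#define NBANKS 4  /* e.g., the T1's 4 L2 banks */

/* Sequential interleaving: block addresses spread round-robin over banks. */
static unsigned bank_of(unsigned long block_addr) {
    return (unsigned)(block_addr % NBANKS);
}

int main(void) {
    for (unsigned long b = 0; b < 8; b++)
        printf("block %lu -> bank %u\n", b, bank_of(b));
    return 0;
}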


7. Reduce Miss Penalty: Early Restart and Critical Word First

• Don’t wait for the full block before restarting the CPU
• Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
– Spatial locality means the CPU tends to want the next sequential word soon, so the size of the benefit of just early restart is not clear
• Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block
– Long blocks are more popular today, so critical word first is widely used


8. Merging Write Buffer to Reduce Miss Penalty

• A write buffer allows the processor to continue while waiting for writes to reach memory
• If the buffer contains modified blocks, the addresses can be checked to see whether the address of new data matches the address of a valid write buffer entry
• If so, the new data are combined with that entry (“write merging”; a sketch follows)
• This increases the effective block size of writes for write-through caches that write sequential words or bytes, since multiword writes are more efficient to memory
• The Sun T1 (Niagara) processor, among many others, uses write merging
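Here is a minimal C sketch of the merge check (the entry count, block size, and structure are invented for illustration, not any particular machine’s buffer): an incoming store that matches the block address of a valid entry writes its word into that entry instead of allocating a new one.

#include <stdint.h>
#include <stdbool.h>

#define ENTRIES 4
#define WORDS_PER_ENTRY 4  /* one 16-byte block per entry, 4-byte words */

struct wb_entry {
    bool     valid;
    uint64_t block_addr;                 /* which block this entry holds */
    bool     word_valid[WORDS_PER_ENTRY];
    uint32_t data[WORDS_PER_ENTRY];
};

static struct wb_entry buf[ENTRIES];

bool write_buffer_put(uint64_t addr, uint32_t value) {
    uint64_t block = addr / (4 * WORDS_PER_ENTRY);
    unsigned word  = (unsigned)((addr / 4) % WORDS_PER_ENTRY);

    for (int i = 0; i < ENTRIES; i++)      /* try to merge first */
        if (buf[i].valid && buf[i].block_addr == block) {
            buf[i].data[word] = value;
            buf[i].word_valid[word] = true;
            return true;
        }
    for (int i = 0; i < ENTRIES; i++)      /* otherwise allocate an entry */
        if (!buf[i].valid) {
            buf[i].valid = true;
            buf[i].block_addr = block;
            buf[i].data[word] = value;
            buf[i].word_valid[word] = true;
            return true;
        }
    return false;  /* buffer full: the processor must stall */
}

int main(void) {
    write_buffer_put(0x100, 1);   /* allocates an entry for block 0x10 */
    write_buffer_put(0x104, 2);   /* merges into the same entry */
    return 0;
}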


9. Reducing Misses by Compiler Optimizations

• McFarling [1989] reduced cache misses by 75% on an 8 KB direct-mapped cache with 4-byte blocks, in software

• Instructions
– Reorder procedures in memory so as to reduce conflict misses
– Profiling to look at conflicts (using tools they developed)

• Data
– Merging arrays: improve spatial locality with a single array of compound elements instead of 2 arrays
– Loop interchange: change the nesting of loops to access data in the order it is stored in memory
– Loop fusion: combine 2 independent loops that have the same looping and some variable overlap
– Blocking: improve temporal locality by accessing “blocks” of data repeatedly instead of going down whole columns or rows


Merging Arrays Example

/* Before: 2 sequential arrays */

int val[SIZE];

int key[SIZE];

/* After: 1 array of structures */

struct merge {

int val;

int key;

};

struct merge merged_array[SIZE];

• Reduces conflicts between val & key; improves spatial locality when they are accessed in an interleaved fashion


Loop Interchange Example

/* Before */

for (k = 0; k < 100; k = k+1)

for (j = 0; j < 100; j = j+1)

for (i = 0; i < 5000; i = i+1)

x[i][j] = 2 * x[i][j];

/* After */

for (k = 0; k < 100; k = k+1)

for (i = 0; i < 5000; i = i+1)

for (j = 0; j < 100; j = j+1)

x[i][j] = 2 * x[i][j];

• Sequential accesses instead of striding through memory every 100 words; improved spatial locality


Loop Fusion Example

/* Before */

for (i = 0; i < N; i = i+1)

for (j = 0; j < N; j = j+1)

a[i][j] = 1/b[i][j] * c[i][j];

for (i = 0; i < N; i = i+1)

for (j = 0; j < N; j = j+1)

d[i][j] = a[i][j] + c[i][j];

/* After */

for (i = 0; i < N; i = i+1)

for (j = 0; j < N; j = j+1)

{ a[i][j] = 1/b[i][j] * c[i][j];

d[i][j] = a[i][j] + c[i][j];}

• 2 misses per access to a & c vs. one miss per access; improves temporal locality (a[i][j] and c[i][j] are reused while still in the cache)


Blocking Example

/* Before */

for (i = 0; i < N; i = i+1)

for (j = 0; j < N; j = j+1)

{r = 0;

for (k = 0; k < N; k = k+1){

r = r + y[i][k]*z[k][j];};

x[i][j] = r;

};

• Two inner loops:
– Read all N×N elements of z[]
– Read N elements of 1 row of y[] repeatedly
– Write N elements of 1 row of x[]

• Capacity misses are a function of N & cache size:
– 2N³ + N² words accessed (assuming no conflicts; otherwise more)

• Idea: compute on a B×B submatrix that fits in the cache


Blocking Example

/* After */

for (jj = 0; jj < N; jj = jj+B)

for (kk = 0; kk < N; kk = kk+B)

for (i = 0; i < N; i = i+1)

for (j = jj; j < min(jj+B,N); j = j+1)

{r = 0;

for (k = kk; k < min(kk+B,N); k = k+1) {

r = r + y[i][k]*z[k][j];};

x[i][j] = x[i][j] + r;

};

• B is called the blocking factor
• Capacity misses drop from 2N³ + N² to 2N³/B + N²
• Conflict misses too?


Loop Blocking – Matrix Multiply

(Figure: access patterns of matrix multiply before and after blocking.)


Reducing Conflict Misses by Blocking

• Conflict misses in caches that are not fully associative, vs. blocking size
– Lam et al. [1991]: a blocking factor of 24 had a fifth the misses of 48, despite both fitting in the cache

(Figure: miss rate, 0 to 0.1, vs. blocking factor, 0 to 150, for a fully associative cache and a direct-mapped cache.)


Summary of Compiler Optimizations to Reduce Cache Misses (by hand)

(Figure: performance improvement, 1× to 3×, for compress, cholesky (nasa7), spice, mxm (nasa7), btrix (nasa7), tomcatv, gmty (nasa7), and vpenta (nasa7), broken down by merged arrays, loop interchange, loop fusion, and blocking.)


10. Reducing Misses by Hardware Prefetching of Instructions & Data

• Prefetching relies on having extra memory bandwidth that can be used without penalty

• Instruction prefetching
– Typically, the CPU fetches 2 blocks on a miss: the requested block and the next consecutive block
– The requested block is placed in the instruction cache when it returns, and the prefetched block is placed into an instruction stream buffer (modeled in the sketch after the figure)

• Data prefetching
– The Pentium 4 can prefetch data into the L2 cache from up to 8 streams from 8 different 4 KB pages
– Prefetching is invoked on 2 successive L2 cache misses to a page, if the distance between those cache blocks is < 256 bytes

(Figure: performance improvement from hardware prefetching on a Pentium 4, ranging from 1.16 to 1.97 — 1.16, 1.18, 1.20, 1.21, 1.26, 1.29, 1.32, 1.40, 1.45, 1.49, 1.97 — across SPECint2000 and SPECfp2000 benchmarks.)
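A toy C model of the instruction-prefetch scheme above, with a one-entry stream buffer and a tiny direct-mapped cache (both invented; a real prefetcher works on physical blocks with real timing):

#include <stdio.h>
#include <stdint.h>

#define FRAMES 64
static int64_t cache[FRAMES];     /* toy direct-mapped i-cache (block tags) */
static int64_t stream_buf = -1;   /* one prefetched block, or -1 if empty */
static int mem_fetches = 0;

/* On a miss for block b: fetch b, and prefetch b+1 into the stream buffer. */
static void fetch_block(int64_t b) {
    if (cache[b % FRAMES] == b) return;   /* ordinary cache hit */
    if (b == stream_buf) {
        cache[b % FRAMES] = b;            /* hit in stream buffer: no stall */
    } else {
        mem_fetches++;                    /* true miss: demand fetch */
        cache[b % FRAMES] = b;
    }
    stream_buf = b + 1;                   /* prefetch the next block */
    mem_fetches++;                        /* prefetch uses spare bandwidth */
}

int main(void) {
    for (int i = 0; i < FRAMES; i++) cache[i] = -1;
    for (int64_t b = 0; b < 16; b++) fetch_block(b);  /* sequential fetch */
    printf("memory fetches: %d\n", mem_fetches);  /* only 1 demand miss */
    return 0;
}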


11. Reducing Misses by Software Prefetching Data

• Data prefetch
– Register prefetch: load data into a register (HP PA-RISC loads)
– Cache prefetch: load into the cache (MIPS IV, PowerPC, SPARC v9); see the sketch below
– Special prefetching instructions cannot cause faults; a form of speculative execution

• Issuing prefetch instructions takes time
– Is the cost of issuing prefetches < the savings from reduced misses?
– Wider superscalar issue reduces the difficulty of finding issue bandwidth for them
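For the cache-prefetch flavor, a short C sketch using GCC/Clang’s __builtin_prefetch, which, as the slide says of prefetch instructions, is a hint that cannot fault; the prefetch distance of 16 iterations is an assumed tuning value:

void scale(double *x, int n) {
    for (int i = 0; i < n; i++) {
        /* Hint: prefetch x[i+16] for write, with good temporal locality.
           Prefetching past the end of x is harmless: hints cannot fault. */
        __builtin_prefetch(&x[i + 16], 1, 3);
        x[i] *= 2.0;
    }
}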


Compiler Optimization vs. Memory Hierarchy Search

• The compiler tries to figure out memory hierarchy optimizations

• New approach: “auto-tuners” first run variations of the program on the computer to find the best combinations of optimizations (blocking, padding, …) and algorithms, then produce C code to be compiled for that computer

• “Auto-tuners” are targeted at numerical methods
– E.g., PHiPAC (BLAS), Atlas (BLAS), Sparsity (sparse linear algebra), Spiral (DSP), FFT-W



Main Memory

• Some definitions:
– Bandwidth (BW): bytes read or written per unit time
– Latency: described by
• Access time: delay between access initiation and completion
– For reads: from presenting the address until the result is ready
• Cycle time: minimum interval between separate requests to memory
– Address lines: separate bus CPU↔Mem to carry addresses (not usually counted in BW figures)
– RAS (Row Access Strobe)
• First half of the address, sent first
– CAS (Column Access Strobe)
• Second half of the address, sent second


RAS vs. CAS (saves address pins)

DRAM bit-cell array operation:
1. RAS selects a row
2. Parallel readout of all row data
3. CAS selects a column to read
4. The selected bit is written to the memory bus


Types of Memory

• DRAM (Dynamic Random Access Memory)
– Cell design needs only 1 transistor per bit stored
– Cell charges leak away and may dynamically (over time) drift from their initial levels
– Requires periodic refreshing to correct the drift
• e.g., every 8 ms
• Time spent refreshing is kept to < 5% of BW

• SRAM (Static Random Access Memory)
– Cell voltages are statically (unchangingly) tied to power supply references; no drift, no refresh
– But needs 4–6 transistors per bit

• DRAM: 4–8× larger, 8–16× slower, 8–16× cheaper per bit


Typical DRAM Organization (256 Mbit)

(Figure: the 28-bit cell address is sent in halves over the same pins; the high 14 bits select the row, the low 14 bits select the column. See the address-split sketch below.)
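For the 256 Mbit organization above, a small C sketch of the address split that RAS/CAS multiplexing implies (the example address is illustrative):

#include <stdio.h>

int main(void) {
    unsigned addr = 0xABCDEF & 0x0FFFFFFF;  /* a 28-bit cell address */
    unsigned row  = (addr >> 14) & 0x3FFF;  /* high 14 bits, sent with RAS */
    unsigned col  = addr & 0x3FFF;          /* low 14 bits, sent with CAS */
    printf("row=%u col=%u\n", row, col);
    return 0;
}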


Amdahl/Case Rule

• Memory size (and I/O bandwidth) should grow linearly with CPU speed
– Typical: 1 MB main memory and 1 Mbps I/O bandwidth per 1 MIPS of CPU performance

• It takes a fairly constant ~8 seconds to scan all of memory (if memory bandwidth = I/O bandwidth, 4 bytes/load, 1 load per 4 instructions, and latency is not a problem); a quick check follows

• Moore’s Law:
– DRAM size doubles every 18 months (up 60%/yr)
– Tracks processor speed improvements

• Unfortunately, DRAM latency has only decreased 7%/yr! Latency is a big deal.
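One way to reproduce the ~8 seconds figure, under the slide’s assumption that memory bandwidth equals the 1 Mbps I/O bandwidth (a C sketch; both quantities scale with MIPS, so the scan time stays constant):

#include <stdio.h>

int main(void) {
    double mem_bytes = 1e6;       /* 1 MB of memory per MIPS (Amdahl/Case) */
    double io_bits_per_s = 1e6;   /* 1 Mbps of I/O bandwidth per MIPS */
    double seconds = mem_bytes / (io_bits_per_s / 8.0);
    printf("%.0f s to scan memory\n", seconds);  /* prints 8 */
    return 0;
}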


Memory Optimizations

DDR:
– DDR2
• Lower power (2.5 V → 1.8 V)
• Higher clock rates (266 MHz, 333 MHz, 400 MHz)
– DDR3
• 1.5 V
• 800 MHz
– DDR4
• 1–1.2 V
• 1600 MHz

GDDR5 is graphics memory based on DDR3


Memory Optimizations

Graphics memory:
– Achieves 2–5× the bandwidth per DRAM vs. DDR3
• Wider interfaces (32 vs. 16 bits)
• Higher clock rate
– Possible because they are attached via soldering instead of socketed DIMM modules

Reducing power in SDRAMs:
– Lower voltage
– Low-power mode (ignores the clock, but continues to refresh)


Memory Power Consumption

(Figure: SDRAM power consumption by operating mode.)


Flash Memory

A type of EEPROM
• Must be erased (in blocks) before being overwritten
• Non-volatile
• Limited number of write cycles
• Cheaper than SDRAM, more expensive than disk
• Slower than SDRAM, faster than disk


Memory Dependability

Memory is susceptible to cosmic rays

Soft errors: dynamic errors
– Detected and fixed by error-correcting codes (ECC)

Hard errors: permanent errors
– Use spare rows to replace defective rows

Chipkill: a RAID-like error recovery technique


Virtual Memory

Protection via virtual memory
– Keeps processes in their own memory space

Role of architecture:
– Provide user mode and supervisor mode
– Protect certain aspects of CPU state
– Provide mechanisms for switching between user mode and supervisor mode
– Provide mechanisms to limit memory accesses
– Provide a TLB to translate addresses


Virtual Machines

Supports isolation and security

Sharing a computer among many unrelated users

Enabled by the raw speed of processors, which makes the overhead more acceptable

Allows different ISAs and operating systems to be presented to user programs
– “System Virtual Machines”
– SVM software is called a “virtual machine monitor” or “hypervisor”
– Individual virtual machines run under the monitor are called “guest VMs”


Impact of VMs on Virtual Memory

Each guest OS maintains its own set of page tables
– The VMM adds a level of memory between physical and virtual memory called “real memory”
– The VMM maintains a shadow page table that maps guest virtual addresses to host physical addresses (sketched below)
• Requires the VMM to detect the guest’s changes to its own page table
• Occurs naturally if accessing the page table pointer is a privileged operation
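To make the shadow-table idea concrete, here is a minimal C sketch (toy data structures and handler name, not any real hypervisor’s API), assuming the guest’s page-table writes already trap into the VMM because they are privileged:

#include <stdio.h>
#include <stdint.h>

#define NPAGES 1024  /* toy address-space size */

static uint64_t guest_pt[NPAGES];      /* guest: virtual page -> guest "real" page */
static uint64_t real_to_phys[NPAGES];  /* VMM: guest real page -> host physical page */
static uint64_t shadow_pt[NPAGES];     /* the table the hardware actually walks */

/* Invoked by the VMM when it traps a guest write to its own page table. */
static void on_guest_pt_write(unsigned vpn, uint64_t real_page) {
    guest_pt[vpn]  = real_page;                 /* emulate the guest's update */
    shadow_pt[vpn] = real_to_phys[real_page];   /* keep the shadow consistent */
}

int main(void) {
    real_to_phys[5] = 42;      /* VMM placed guest real page 5 at host page 42 */
    on_guest_pt_write(0, 5);   /* guest maps its virtual page 0 to real page 5 */
    printf("vpn 0 -> host page %llu\n", (unsigned long long)shadow_pt[0]);
    return 0;
}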
