Computer Organization and Design: Memory Hierarchy. Montek Singh. Wed, April 27, 2011. Lecture 17.


Page 1

Computer Organization and Design
Memory Hierarchy

Montek Singh
Wed, April 27, 2011
Lecture 17

Page 2

Memory Hierarchy
- Memory Flavors
- Principle of Locality
- Program Traces
- Memory Hierarchies
- Associativity

Reading: Ch. 5.1-5.4

Page 3

What Do We Want in a Memory?

[Figure: miniMIPS CPU connected to MEMORY -- PC/INST port for instruction fetch, MADDR/MDATA with R/W for data access]

             Capacity          Latency    Cost
Register     1000's of bits    10 ps      $$$$
SRAM         1-4 MB            0.2 ns     $$$
DRAM         1-4 GB            5 ns       $
Hard disk*   100's of GB       10 ms      ¢

Want?        2-10 GB           0.2 ns     cheap!

*non-volatile

Page 4

Best of Both Worlds
What we REALLY want: A BIG, FAST memory!
- Keep everything within instant access
We'd like to have a memory system that:
- Performs like 2-10 GB of fast SRAM (typical SRAM sizes are in MB, not GB)
- Costs like 1-4 GB of DRAM (typical DRAMs are an order of magnitude slower than SRAM)
SURPRISE: We can (nearly) get our wish!
Key Idea: Use a hierarchy of memory technologies:

[Figure: CPU <-> SRAM <-> MAIN MEM <-> DISK]

Page 5

Key Idea
Key Idea: Exploit the "Principle of Locality"
- Keep data used often in a small fast SRAM, called "CACHE", often on the same chip as the CPU
- Keep all data in a bigger but slower DRAM, called "main memory", usually a separate chip
- Access main memory only rarely, for the remaining data
The reason this strategy works: LOCALITY
- if you access something now, you will likely access it again (or its neighbors) soon

Locality of Reference: a reference to location X at time t implies that a reference to location X+ΔX at time t+Δt is likely, for small ΔX and Δt.

Page 6

Cache
cache (kash) n.
1. A hiding place used especially for storing provisions.
2. A place for concealment and safekeeping, as of valuables.
3. The store of goods or valuables concealed in a hiding place.
4. Computer Science. A fast storage buffer in the central processing unit of a computer. In this sense, also called cache memory.
v. tr. cached, cach·ing, cach·es. To hide or store in a cache.

Page 7

Cache Analogy
You are writing a term paper for your history class at a table in the library.
- As you work you realize you need a book
- You stop writing, fetch the reference, continue writing
- You don't immediately return the book; maybe you'll need it again
- Soon you have a few books at your table, and you can work smoothly without needing to fetch more books from the shelves
The table is a CACHE for the rest of the library.

Now you switch to doing your biology homework.
- You need to fetch your biology textbook from the shelf
- If your table is full, you need to return one of the history books back to the shelf to make room for the biology book

Page 8

Typical Memory Reference Patterns
Memory Trace: a temporal sequence of memory references (addresses) from a real program.
Temporal Locality: if an item is referenced, it will tend to be referenced again soon.
Spatial Locality: if an item is referenced, nearby items will tend to be referenced soon.

[Figure: memory trace, address vs. time, showing distinct data, stack, and program regions]

Page 9

Exploiting the Memory Hierarchy
Approach 1 (Cray, others): Expose Hierarchy
- Registers, Main Memory, Disk each available as explicit storage alternatives
- Tell programmers: "Use them cleverly"
Approach 2: Hide Hierarchy
- Programming model: a SINGLE kind of memory, single address space
- Transparent to programmer: the machine AUTOMATICALLY assigns locations, depending on runtime usage patterns

[Figure: CPU <-> small SRAM ("CACHE") <-> dynamic RAM ("MAIN MEMORY") <-> hard disk]

Page 10

Exploiting the Memory Hierarchy
CPU speed is dominated by memory performance
- More significant than: ISA, circuit optimization, pipelining, etc.
TRICK #1: Make slow MAIN MEMORY appear faster
- Technique: CACHING
TRICK #2: Make small MAIN MEMORY appear bigger
- Technique: VIRTUAL MEMORY

[Figure: CPU <-> small SRAM ("CACHE") <-> dynamic RAM ("MAIN MEMORY") <-> hard disk ("VIRTUAL MEMORY", "SWAP SPACE")]

Page 11

The Cache Idea
Program-Transparent Memory Hierarchy:
- The cache contains TEMPORARY COPIES of selected main memory locations, e.g. Mem[100] = 37
Two Goals:
- Improve the average memory access time
  - HIT RATIO (α): fraction of references found in the cache
  - MISS RATIO (1−α): the remaining references
  - the average total access time depends on these parameters
- Transparency (compatibility, programming ease)

[Figure: CPU <-> Cache <-> Main Memory (dynamic RAM); a fraction α of accesses hit in the cache, the remaining (1−α) go to memory; Mem[100] = 37]

Challenge: to make the hit ratio as high as possible.

t_ave = α·t_c + (1−α)(t_c + t_m) = t_c + (1−α)·t_m
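As a numeric check, the access-time formula above can be evaluated directly. This is an illustrative Python sketch, not part of the slides:

```python
def avg_access_time(alpha, t_cache, t_mem):
    """t_ave = alpha*t_c + (1-alpha)*(t_c + t_m): hits cost t_c, misses cost t_c + t_m."""
    return alpha * t_cache + (1 - alpha) * (t_cache + t_mem)

# With a 0.8 ns cache, 10 ns DRAM, and a 95% hit ratio:
print(avg_access_time(0.95, 0.8, 10.0))  # ~1.3 ns
```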

Page 12

How High of a Hit Ratio?
Suppose we can easily build an on-chip SRAM with a 0.8 ns access time, but the fastest DRAM we can buy for main memory has an access time of 10 ns. How high of a hit rate do we need to sustain an average total access time of 1 ns?

t_ave = t_c + (1−α)·t_m
1 = 0.8 + (1−α)·10  ⇒  α = 1 − (1 − 0.8)/10 = 98%

Wow, a cache really needs to be good!
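Solving the same equation for α gives the required hit ratio; a small illustrative sketch:

```python
def required_hit_ratio(t_target, t_cache, t_mem):
    """Solve t_target = t_c + (1 - alpha) * t_m for alpha."""
    return 1 - (t_target - t_cache) / t_mem

print(required_hit_ratio(1.0, 0.8, 10.0))  # ~0.98, i.e. a 98% hit rate
```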

Page 13

Cache
- Sits between CPU and main memory
- A very fast table that stores a TAG and DATA
  - TAG is the memory address
  - DATA is a copy of the memory contents at the address given by TAG

Cache:
Tag   Data
1000  17
1040  1
1032  97
1008  11

Main Memory:
Addr  Value
1000  17
1004  23
1008  11
1012  5
1016  29
1020  38
1024  44
1028  99
1032  97
1036  25
1040  1
1044  4

Page 14

Cache Access
On load (lw) we look in the TAG entries for the address we're loading
- Found ⇒ a HIT: return the DATA
- Not Found ⇒ a MISS: go to memory for the data, and put it and its address (TAG) in the cache

Cache:
Tag   Data
1000  17
1040  1
1032  97
1008  11

Main Memory:
Addr  Value
1000  17
1004  23
1008  11
1012  5
1016  29
1020  38
1024  44
1028  99
1032  97
1036  25
1040  1
1044  4
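The hit/miss behavior on this slide can be modeled with a toy fully-associative lookup. This is a hypothetical sketch (real caches do this in hardware):

```python
# Main memory contents from the slide; the cache starts empty.
memory = {1000: 17, 1004: 23, 1008: 11, 1012: 5, 1016: 29, 1020: 38,
          1024: 44, 1028: 99, 1032: 97, 1036: 25, 1040: 1, 1044: 4}
cache = {}  # TAG (address) -> DATA

def load(addr):
    if addr in cache:                 # TAG found: HIT, return cached DATA
        return cache[addr], "HIT"
    data = memory[addr]               # MISS: fetch from main memory...
    cache[addr] = data                # ...and install (TAG, DATA) in the cache
    return data, "MISS"

print(load(1000))  # (17, 'MISS') -- first access goes to memory
print(load(1000))  # (17, 'HIT')  -- second access is served by the cache
```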

Page 15

Cache Lines
Usually we get more data than requested
- a LINE is the unit of memory stored in the cache
- usually much bigger than 1 word; 32 bytes per line is common
- a bigger LINE means fewer misses, because of spatial locality
- but a bigger LINE means a longer time on a miss

Cache (2 words per line):
Tag   Data
1000  17  23
1040  1   4
1032  97  25
1008  11  5

Main Memory:
Addr  Value
1000  17
1004  23
1008  11
1012  5
1016  29
1020  38
1024  44
1028  99
1032  97
1036  25
1040  1
1044  4

Page 16

Finding the TAG in the Cache
Suppose the requested data could be anywhere in the cache:
- This is called a Fully Associative cache
- A 1 MB cache may have 32k different lines, each of 32 bytes
- We can't afford to sequentially search the 32k different tags
- A fully associative cache uses hardware to compare the address to all the tags in parallel, but it is expensive

[Figure: incoming address compared against every (TAG, Data) entry in parallel; any match raises HIT and drives Data Out]

Page 17

Finding the TAG in the Cache
Fully Associative:
- Requested data could be anywhere in the cache
- Uses hardware to compare the address to the tags in parallel, but it is expensive
- 1 MB is thus unlikely; typically smaller
Direct Mapped Cache:
- Directly computes the cache entry from the address
  - multiple addresses will map to the same cache line
  - use the TAG to determine if it is the right one
- Choose some bits from the address to determine the cache entry
  - the low 5 bits determine which byte within the line of 32 bytes
  - we need 15 bits to determine which of the 32k different lines has the data
  - which of the 32 − 5 = 27 remaining bits should we use?

Page 18

Direct-Mapping Example
Suppose: 2 words/line, 4 lines, bytes are being read
- With 8-byte lines, the bottom 3 bits determine the byte within the line
- With 4 cache lines, the next 2 bits determine which line to use
  - 1024d = 100000 00 000b ⇒ line = 00b = 0d
  - 1000d = 11111 01 000b ⇒ line = 01b = 1d
  - 1040d = 100000 10 000b ⇒ line = 10b = 2d

Cache:
Line  Tag   Data
0     1024  44  99
1     1000  17  23
2     1040  1   4
3     1016  29  38

Main memory:
Addr  Value
1000  17
1004  23
1008  11
1012  5
1016  29
1020  38
1024  44
1028  99
1032  97
1036  25
1040  1
1044  4
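The bit-slicing in the example (3 offset bits, 2 index bits) can be written out explicitly; an illustrative sketch:

```python
def split_address(addr, offset_bits=3, index_bits=2):
    """Direct-mapped decomposition: the low bits select the byte within
    the line, the next bits select the cache line, the rest form the TAG."""
    offset = addr & ((1 << offset_bits) - 1)
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)
    tag = addr >> (offset_bits + index_bits)
    return tag, index, offset

for addr in (1024, 1000, 1040):
    _, line, _ = split_address(addr)
    print(addr, "-> line", line)  # 1024 -> line 0, 1000 -> line 1, 1040 -> line 2
```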

Page 19

Direct Mapping Miss
What happens when we now ask for address 1008?
- 1008d = 11111 10 000b ⇒ line = 10b = 2d
- but earlier we put 1040d there: 1040d = 100000 10 000b ⇒ line = 10b = 2d
- so evict 1040d, and put 1008d in that entry

Cache (after the miss, 1040 evicted from line 2):
Line  Tag   Data
0     1024  44  99
1     1000  17  23
2     1008  11  5
3     1016  29  38

Memory:
Addr  Value
1000  17
1004  23
1008  11
1012  5
1016  29
1020  38
1024  44
1028  99
1032  97
1036  25
1040  1
1044  4

Page 20

Miss Penalty and Rate
How much time do you lose on a Miss?
- MISS PENALTY is the time it takes to read main memory if the data was not found in the cache
  - 50 to 100 clock cycles is common
- MISS RATE is the fraction of accesses which MISS
- HIT RATE is the fraction of accesses which HIT
- MISS RATE + HIT RATE = 1
Example:
- Suppose a particular cache has a MISS PENALTY of 100 cycles and a HIT RATE of 95%. The CPI for a load on a HIT is 5, but on a MISS it is 105. What is the average CPI for a load?
  - Average CPI = 5 × 0.95 + 105 × 0.05 = 10
- What if MISS PENALTY = 120 cycles (so a MISS costs 125)?
  - Average CPI = 5 × 0.95 + 125 × 0.05 = 11
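The averaging works the same way as the access-time formula earlier; a quick illustrative sketch:

```python
def avg_cpi(cpi_hit, miss_penalty, hit_rate):
    """A load takes cpi_hit cycles on a HIT, and cpi_hit + miss_penalty on a MISS."""
    return cpi_hit * hit_rate + (cpi_hit + miss_penalty) * (1 - hit_rate)

print(avg_cpi(5, 100, 0.95))  # ~10 cycles
print(avg_cpi(5, 120, 0.95))  # ~11 cycles
```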

Page 21

Continuum of Associativity

Fully associative:
- compares the address with ALL tags simultaneously
- location A can be stored in any cache line
- on a MISS: finds one entry out of the entire cache to evict

N-way set associative:
- compares the address with N tags simultaneously
- data can be stored in any of the N cache lines belonging to a "set", like N direct-mapped caches
- on a MISS: finds one entry out of N in a particular set to evict

Direct-mapped:
- compares the address with only ONE tag
- location A can be stored in exactly one cache line
- on a MISS: there is only one place it can go

Page 22

Three Replacement Strategies
When an entry has to be evicted, how to pick the victim?
- LRU (least-recently used)
  - replaces the item that has gone UNACCESSED the LONGEST
  - favors the most recently accessed data
- FIFO/LRR (first-in, first-out / least-recently replaced)
  - replaces the OLDEST item in the cache
  - favors recently loaded items over older STALE items
- Random
  - replaces some item at RANDOM
  - no favoritism: uniform distribution
  - no "pathological" reference streams causing worst-case results
  - use a pseudo-random generator to get reproducible behavior
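An LRU policy like the first bullet can be sketched in a few lines with Python's OrderedDict. This is illustrative only; hardware caches use cheap approximations of LRU:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # tag -> data, least recently used first

    def access(self, tag, fetch):
        if tag in self.entries:
            self.entries.move_to_end(tag)       # HIT: now most recently used
            return self.entries[tag]
        if len(self.entries) >= self.capacity:  # MISS with a full cache:
            self.entries.popitem(last=False)    # evict the LRU victim
        self.entries[tag] = fetch(tag)          # install the new entry
        return self.entries[tag]
```

FIFO/LRR would simply omit the move_to_end on a hit, so an entry's age is set only when it is loaded.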

Page 23

Handling WRITES
Observation: most (80+%) of memory accesses are reads, but writes are essential. How should we handle writes? Two different policies:
- WRITE-THROUGH: CPU writes are cached, but also written to main memory (stalling the CPU until the write is completed). Memory always holds the latest values.
- WRITE-BACK: CPU writes are cached, but not immediately written to main memory. Main memory contents can become "stale". Only when a value has to be evicted from the cache, and only if it had been modified (i.e., is "dirty"), is it written to main memory.
Pros and Cons?
- WRITE-BACK typically has higher performance
- WRITE-THROUGH typically causes fewer consistency problems
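A minimal write-back sketch (hypothetical; a write-through cache would instead update memory inside write rather than waiting for eviction):

```python
class WriteBackCache:
    def __init__(self, memory):
        self.memory = memory  # backing store: addr -> value
        self.data = {}        # cached copies
        self.dirty = set()    # addresses modified since being cached

    def write(self, addr, value):
        self.data[addr] = value
        self.dirty.add(addr)        # main memory is now stale

    def evict(self, addr):
        if addr in self.dirty:      # write back only if modified
            self.memory[addr] = self.data[addr]
            self.dirty.discard(addr)
        del self.data[addr]

mem = {100: 37}
c = WriteBackCache(mem)
c.write(100, 42)
print(mem[100])  # 37 -- memory is stale until eviction
c.evict(100)
print(mem[100])  # 42 -- the dirty line was written back
```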

Page 24

Memory Hierarchy Summary
Give the illusion of fast, big memory
- a small fast cache makes the entire memory appear fast
- a large main memory provides ample storage
- an even larger hard drive provides huge virtual memory (TB)
Various design decisions affect caching
- total cache size, line size, replacement strategy, write policy
Performance
- Put your money on bigger caches, then bigger main memory.
- Don't put your money on CPU clock speed (GHz), because memory is usually the culprit!

Page 25

That's it folks!

So, what did we learn this semester?

Page 26

What we learnt this semester
You now have a pretty good idea about how computers are designed and how they work:
- How data and instructions are represented
- How arithmetic and logic operations are performed
- How ALU and control circuits are implemented
- How registers and the memory hierarchy are implemented
- How performance is measured
- (Self Study) How performance is increased via pipelining
Lots of low-level programming experience: C and MIPS
- This is how programs are actually executed!
- This is how OS/networking code is actually written!
- Java and other higher-level languages are convenient high-level abstractions. You probably have a new appreciation for them!

Page 27

Grades?

Your final grades will be on Blackboard shortly, then Connect Carolina.

Don't forget to submit your course evaluation!