
Page 1: Chapter 7 Memory Systems

Chapter 7: Memory Systems

Page 2: Chapter 7 Memory Systems

Spring 2005 ELEC 5200/6200, from Patterson/Hennessy slides

Memory Hierarchy

[Figure: the memory hierarchy. CPU registers connect to Cache Memory (M_C), then Main Memory (M_M), then Secondary Memory, Disk (M_D), then Secondary Memory, Archival (M_A).]

Memory Parameters:

• Access time: increases with distance from the CPU
• Cost/bit: decreases with distance from the CPU
• Capacity: increases with distance from the CPU

Memory content: M_C ⊆ M_M ⊆ M_D ⊆ M_A (each level's contents are a subset of the next level's).

Page 3: Chapter 7 Memory Systems

Exploiting Memory Hierarchy

Users want large and fast memories!
• SRAM access times are 0.5-5 ns, at a cost of $4000 to $10,000 per GB.
• DRAM access times are 50-70 ns, at a cost of $100 to $200 per GB.
• Disk access times are 5 to 20 million ns, at a cost of $0.50 to $2 per GB.
(2004 figures)

Try to give it to them anyway: build a memory hierarchy.

[Figure: levels in the memory hierarchy, from the CPU at Level 1 down to Level n; access time increases with distance from the CPU, as does the size of the memory at each level.]

Page 4: Chapter 7 Memory Systems

Locality of Reference

A principle that makes having a memory hierarchy a good idea.

If an item is referenced:
• temporal locality: it will tend to be referenced again soon
• spatial locality: nearby items will tend to be referenced soon

Why does code have locality? Loops re-execute the same instructions (temporal), and instructions and data are laid out sequentially (spatial); see the sketch below.

Our initial focus: two levels (upper, lower)
• block: the minimum unit of data
• hit: the data requested is in the upper level
• miss: the data requested is not in the upper level
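A minimal sketch of spatial locality in action (hypothetical, not from the slides): summing a matrix row by row touches adjacent elements, while column-by-column traversal jumps across rows. CPython's interpreter overhead mutes the cache effect, but the row-major loop is typically still measurably faster.

    import time

    N = 2000
    matrix = [[1] * N for _ in range(N)]

    def row_major_sum(m):
        # consecutive elements of a row are adjacent, so most
        # accesses enjoy spatial locality
        return sum(x for row in m for x in row)

    def col_major_sum(m):
        # each step jumps a full row ahead, defeating spatial locality
        return sum(m[i][j] for j in range(N) for i in range(N))

    for f in (row_major_sum, col_major_sum):
        t0 = time.perf_counter()
        f(matrix)
        print(f.__name__, round(time.perf_counter() - t0, 3), "s")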

Page 5: Chapter 7 Memory Systems

Memory Performance

Access time (latency): from the initiation of a memory read to valid data returned to the CPU.
• Cache: T_A typically = CPU cycle time
• Main: T_A typically = multiple CPU cycles
• Disk: T_A typically several orders of magnitude greater than the CPU cycle time (software controlled)

Bandwidth (throughput) = number of bytes transferred per second:

BW = (bytes/transfer) × (transfers/second)

Example: Synchronous DRAM "burst transfers".
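A hedged worked example of the bandwidth formula; the bus width and transfer rate below are illustrative assumptions, not figures from the slides.

    bytes_per_transfer = 8           # assumed 64-bit data bus
    transfers_per_second = 100e6     # assumed 100 million transfers/s burst rate
    bw = bytes_per_transfer * transfers_per_second
    print(f"bandwidth = {bw / 1e6:.0f} MB/s")   # -> 800 MB/s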

Page 6: Chapter 7 Memory Systems

Memory Access Modes

Random access: locations can be accessed in any order, with access time independent of location.
• SRAM
• ROM/EPROM/EEPROM/Flash technologies
• DRAM (some special modes available)
• Cache

Serial access: locations must be accessed in a predetermined sequence (e.g., tape).
• Disk: position the read element over a track, then serial access within the track
• CD-ROM: data recorded in a "spiral"

Page 7: Chapter 7 Memory Systems

Memory Write Strategies

Writing a new value of an item in a hierarchical memory that has copies in multiple memory levels:

"Write-through" strategy
• Update all levels of memory on each write
• All levels "consistent" or "coherent" at all times

"Write-back" strategy
• Update the closest (fastest) copy during the write operation
• Copy the value to other levels at a "convenient time"
• Memory state is temporarily inconsistent
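A minimal sketch (hypothetical, not from the slides) contrasting the two strategies for a one-level cache over a backing store:

    class Cache:
        def __init__(self, memory, write_back=False):
            self.memory = memory          # backing store: address -> value
            self.lines = {}               # cached copies: address -> value
            self.dirty = set()            # addresses modified since last flush
            self.write_back = write_back

        def write(self, addr, value):
            self.lines[addr] = value
            if self.write_back:
                self.dirty.add(addr)      # defer: levels temporarily inconsistent
            else:
                self.memory[addr] = value # write-through: update every level now

        def flush(self):
            # the "convenient time": push dirty lines back to memory
            for addr in self.dirty:
                self.memory[addr] = self.lines[addr]
            self.dirty.clear()

    mem = {0x10: 1}
    c = Cache(mem, write_back=True)
    c.write(0x10, 42)
    print(mem[0x10])   # 1  -- main memory is stale until the flush
    c.flush()
    print(mem[0x10])   # 42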

Page 8: Chapter 7 Memory Systems

Cache

Two issues:
• How do we know if a data item is in the cache?
• If it is, how do we find it?

Three basic cache organizations:
• Fully associative
• Direct mapped
• Set associative

Assume the block size is one word of data.

Page 9: Chapter 7 Memory Systems

Fully Associative Cache

Associative memory: access by "content" rather than by address. The address from the CPU is concurrently compared (=) against all cache address fields; a valid bit (V) marks each entry in use.

V  Address  Data
1  3        D3
1  7        D7
1  5        D5
0  ?        ?

[Figure: this four-line cache alongside a main memory holding D1..D10 at addresses 1..10.]
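A minimal sketch (hypothetical, not from the slides) of the fully associative lookup: every valid line's address field is compared against the CPU address, which hardware does in parallel and this sketch does with a scan.

    lines = [              # (valid, address, data), mirroring the table above
        (1, 3, "D3"),
        (1, 7, "D7"),
        (1, 5, "D5"),
        (0, None, None),
    ]

    def lookup(cpu_addr):
        for valid, addr, data in lines:   # hardware compares all lines at once
            if valid and addr == cpu_addr:
                return ("hit", data)
        return ("miss", None)             # go to main memory

    print(lookup(5))   # ('hit', 'D5')
    print(lookup(8))   # ('miss', None)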

Page 10: Chapter 7 Memory Systems

Hits vs. Misses

Read hits: this is what we want!

Read misses: stall the CPU, fetch the block from memory, deliver it to the cache, restart.

Write hits:
• replace the data in cache and memory (write-through), or
• write the data only into the cache (write the cache back later)

Write misses: read the entire block into the cache, then write the word.

Page 11: Chapter 7 Memory Systems

Fully Associative Cache

[Figure: a CPU memory read from address 5. The address from the CPU is compared against all cache address fields; the valid line holding address 5 matches, a "hit", and D5 is delivered to the CPU. The cache holds (3, D3), (7, D7), (5, D5) and one invalid line; main memory holds D1..D10 at addresses 1..10.]

Page 12: Chapter 7 Memory Systems

Fully Associative Cache

[Figure: a CPU memory read from address 8. No valid cache address field matches, a "miss" (no "hit"), so the CPU reads main memory after the cache miss.]

Page 13: Chapter 7 Memory Systems

Cache Memory Structures

[Figure: the cache is updated with the data from address 8 by using the "open" (invalid) cache line; the cache now holds (3, D3), (7, D7), (5, D5), and (8, D8), all valid.]

Page 14: Chapter 7 Memory Systems

Cache Memory Structures

[Figure: a read from address 2 is a "miss" (no "hit"), but now there are no open cache lines; the cache must replace the data in some line. Where should the CPU place the data from address 2?]

Page 15: Chapter 7 Memory Systems

Multi-Word Blocks

A cache line holds a multi-word block from main memory, taking advantage of spatial locality of reference. The CPU address is split into a block number and a word offset.

V  Block #  Data
1  1        D2-D3
1  3        D6-D7
1  0        D0-D1
0  ?        ?

[Figure: main memory holds D0..D9 at addresses 0..9, grouped into two-word blocks 0..4; the cache above holds blocks 1, 3, and 0.]

Page 16: Chapter 7 Memory Systems

Direct Mapped Cache

Mapping: the cache line index is the memory address modulo the number of blocks in the cache.

[Figure: an eight-line cache with indices 000..111. Memory addresses 00001, 01001, 10001, and 11001 (ending in 001) all map to cache line 001; addresses 00101, 01101, 10101, and 11101 (ending in 101) all map to line 101. The main memory address splits into a tag and an index that selects the cache line.]
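A minimal sketch (hypothetical, not from the slides) of the direct-mapped address split for a cache with 8 one-word lines:

    NUM_LINES = 8                  # must be a power of two
    INDEX_BITS = 3                 # log2(NUM_LINES)

    def split(addr):
        index = addr % NUM_LINES   # low-order bits select the cache line
        tag = addr >> INDEX_BITS   # remaining high bits are stored as the tag
        return tag, index

    for addr in (0b00001, 0b01001, 0b10101, 0b11101):
        tag, index = split(addr)
        print(f"addr {addr:05b} -> tag {tag:02b}, line {index:03b}")
    # addresses ending in 001 land in line 001; those ending in 101 in line 101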

Page 17: Chapter 7 Memory Systems

Direct Mapped Cache

For MIPS: [figure of the MIPS direct-mapped cache datapath omitted]

What kind of locality are we taking advantage of? (With one-word blocks: temporal locality; a fetched word stays cached for reuse.)

Page 18: Chapter 7 Memory Systems

Direct Mapped Cache

Taking advantage of spatial locality: [figure of a direct-mapped cache with multi-word blocks omitted]

Page 19: Chapter 7 Memory Systems


Motorola 68020/68030 Cache

Page 20: Chapter 7 Memory Systems

Cache Misses
• Compulsory: the cache is "empty" at startup
• Capacity: the cache is unable to hold the entire working set
• Conflict: two memory locations map to the same cache line

[Fig. 7.31: miss rate by cache size, broken into components; conflict misses are shown for 1-way, 2-way, and 8-way associativity, and compulsory misses are "flat" but negligible in this example.]

Page 21: Chapter 7 Memory Systems

Decreasing Miss Ratio with Associativity

Problems:
• Fully associative: high cost; misses occur only when capacity is reached, since any item can go in any line
• Direct-mapped: less expensive, but it can hold only one item with a given index, leading to conflicts

Compromise: the set associative cache.
• An "N-way" set associative cache has N direct-mapped caches
• One "set" = the N lines with a particular index
• An item with index K can be placed in line K of any of the N direct-mapped caches
• This results in fewer misses due to conflicts

Page 22: Chapter 7 Memory Systems

Example: 4-way Set-Associative Cache

[Figure: a 32-bit address (bits 31..0) is split into a 22-bit tag and an 8-bit index. The index selects one of 256 sets (0..255); each set holds four (V, Tag, Data) entries, one per way. The four tags are compared in parallel, and a 4-to-1 multiplexor steers the matching way's data out along with the Hit signal.]
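A minimal sketch (hypothetical, not from the slides) of the 4-way lookup with the figure's field widths, simplified to word addresses: an 8-bit index selects one of 256 sets, and the remaining 22 high bits form the tag.

    NUM_SETS = 256
    WAYS = 4
    INDEX_BITS = 8

    # sets[index] holds WAYS entries of [valid, tag, data]
    sets = [[[0, None, None] for _ in range(WAYS)] for _ in range(NUM_SETS)]

    def lookup(addr):
        index = addr & (NUM_SETS - 1)   # low 8 bits
        tag = addr >> INDEX_BITS        # high 22 bits
        for way in sets[index]:         # the four comparators work in parallel
            if way[0] and way[1] == tag:
                return ("hit", way[2])  # the 4-to-1 mux selects this way's data
        return ("miss", None)

    sets[5][0] = [1, 0x3FF, "some word"]       # fill one way of set 5
    print(lookup((0x3FF << INDEX_BITS) | 5))   # ('hit', 'some word')
    print(lookup(5))                           # ('miss', None): tag differs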

Page 23: Chapter 7 Memory Systems


VAX 11/780 Cache

Page 24: Chapter 7 Memory Systems

Intel 80486/Pentium L1 Cache

• 80486: 4-way set associative (Pentium: 2-way)
• 128 lines (sets) per way: 2K per way (486), 4K per way (Pentium)
• Data line = 16 bytes (486), 32 bytes (Pentium)
• Tags are 20 bits, with 3 LRU bits and 4 valid bits per set
• Instruction cache: 1 valid bit per line
• Data cache: 2-bit MESI state per line
• Write-back/write-through programmable
• Total size: 80486, 8K unified I/D; Pentium, 8K/8K I/D; Pentium II/III, 16K/16K I/D

Page 25: Chapter 7 Memory Systems

MESI Cache Line State

MESI = a 2-bit cache line "state" for a "write-back" cache:
• I = Invalid (the line does not contain valid data)
• S = Line valid, with shared access; no writes allowed
• E = Line valid, with exclusive access; data may be written
• M = Line valid and modified since it was read from main memory (it must be rewritten to main memory)

[State diagram: a read takes I to S or E; a write takes the line to M; an invalidate returns S, E, or M to I.]
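A minimal sketch (hypothetical, not from the slides) of those transitions as a lookup table; the shared/exclusive split on reads depends on whether another cache holds the line.

    TRANSITIONS = {
        # (state, event) -> next state
        ("I", "read_shared"):    "S",  # some other cache also holds the line
        ("I", "read_exclusive"): "E",  # no other cache holds the line
        ("E", "write"):          "M",  # an exclusive line may be written
        ("S", "invalidate"):     "I",
        ("E", "invalidate"):     "I",
        ("M", "invalidate"):     "I",  # modified data is written back first
    }

    def step(state, event):
        return TRANSITIONS.get((state, event), state)

    s = "I"
    for event in ("read_exclusive", "write", "invalidate"):
        s = step(s, event)
        print(event, "->", s)   # E, then M, then I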

Page 26: Chapter 7 Memory Systems

Intel Approximated-LRU Replacement

A 3-bit number B2 B1 B0 is assigned to each "set" of 4 lines:
• an access to L0/L1 sets B0=1; an access to L2/L3 sets B0=0
• an access to L0 sets B1=1; an access to L1 sets B1=0
• an access to L2 sets B2=1; an access to L3 sets B2=0

Choosing a victim:
• B0 B1 = 00: replace L0; B0 B1 = 01: replace L1
• B0 B2 = 10: replace L2; B0 B2 = 11: replace L3

[Figure: the decision tree; B0 selects between the L0/L1 and L2/L3 pairs, then B1 or B2 selects the line within the pair.]
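A minimal sketch (hypothetical, not from the slides) of this pseudo-LRU scheme:

    class PseudoLRU:
        def __init__(self):
            self.b = [0, 0, 0]               # B0, B1, B2

        def access(self, line):              # line in 0..3
            self.b[0] = 1 if line in (0, 1) else 0
            if line == 0: self.b[1] = 1
            if line == 1: self.b[1] = 0
            if line == 2: self.b[2] = 1
            if line == 3: self.b[2] = 0

        def victim(self):
            if self.b[0] == 0:               # L0/L1 pair used least recently
                return 0 if self.b[1] == 0 else 1
            return 2 if self.b[2] == 0 else 3

    p = PseudoLRU()
    for line in (0, 2, 1):
        p.access(line)
    print(p.victim())   # 3: B0=1 and B2=1, and L3 was indeed never accessed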

Page 27: Chapter 7 Memory Systems

Intel Pentium P4 vs. AMD Opteron (Figure 7.35)

Characteristic          Intel Pentium P4                              AMD Opteron
L1 cache org.           Split instr/data cache                        Split instr/data cache
L1 cache size           8KB data, 96KB trace cache (12K RISC instr)   64KB each I/D
L1 cache associativity  4-way set-associative                         2-way set-associative
L1 replacement          Approximated LRU                              LRU
L1 block size           64 bytes                                      64 bytes
L1 write policy         Write-through                                 Write-back
L2 cache org.           Unified I/D                                   Unified I/D
L2 cache size           512KB                                         1024KB
L2 cache associativity  8-way set-associative                         16-way set-associative
L2 replacement          Approximated LRU                              Approximated LRU
L2 block size           128 bytes                                     64 bytes
L2 write policy         Write-back                                    Write-back

Page 28: Chapter 7 Memory Systems

Average Memory Access Time

Assume main memory is accessed only after a cache miss is detected. Let T_C and T_M be the cache and main memory access times, and H_C the cache hit ratio. Then:

T_avg = T_C*H_C + (1 - H_C)(T_C + T_M)
      = T_C + (1 - H_C)*T_M

where (1 - H_C)*T_M is the miss penalty term.

Extending to a 3rd level (disk), with H_M the main memory hit ratio and T_D the disk access time:

T_avg = T_C + (1 - H_C)*T_M + (1 - H_C)(1 - H_M)*T_D

Note that T_M << T_D.
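A hedged worked example of the two-level formula; the access times and hit ratio below are illustrative assumptions, not figures from the slides.

    def t_avg(t_cache, t_main, hit_ratio):
        # T_avg = T_C + (1 - H_C) * T_M
        return t_cache + (1 - hit_ratio) * t_main

    # assume a 1 ns cache, 60 ns main memory, and a 95% hit ratio
    print(t_avg(1.0, 60.0, 0.95))   # 4.0 ns average access time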

Page 29: Chapter 7 Memory Systems

Average Memory Access Time

Now assume main memory is accessed concurrently with the cache access:
• abort the main memory access on a cache hit
• on a cache miss, the main memory access is already in progress

T_avg = T_C*H_C + (1 - H_C)*T_M

This is called a "look-aside" cache (the previous scheme is "look-through").

Problem: main memory is kept busy with aborted accesses (unacceptable if the memory is shared).

Page 30: Chapter 7 Memory Systems

Performance

Increasing the block size tends to decrease the miss rate:

[Figure: miss rate (0% to 40%) vs. block size in bytes, for cache sizes of 1 KB, 8 KB, 16 KB, 64 KB, and 256 KB.]

Use split caches because there is more spatial locality in code:

Program  Block size (words)  Instruction miss rate  Data miss rate  Effective combined miss rate
gcc      1                   6.1%                   2.1%            5.4%
gcc      4                   2.0%                   1.7%            1.9%
spice    1                   1.2%                   1.3%            1.2%
spice    4                   0.3%                   0.6%            0.4%

Page 31: Chapter 7 Memory Systems

Performance

[Figure: miss rate (0% to 15%) vs. associativity (one-way, two-way, four-way, eight-way), for cache sizes from 1 KB to 128 KB.]

Page 32: Chapter 7 Memory Systems

Performance

Simplified model:

execution time = (execution cycles + stall cycles) × cycle time
stall cycles = number of instructions × miss ratio × miss penalty

Two ways of improving performance:
• decreasing the miss ratio
• decreasing the miss penalty

What happens if we increase block size? (The miss ratio tends to drop, but the miss penalty grows, since a bigger block takes longer to fetch.)

Page 33: Chapter 7 Memory Systems

Example: the "gcc" compiler on MIPS
• Instruction count = IC
• Instruction cache miss rate = 5%
• Data cache miss rate = 10%
• Clocks per instruction (CPI) = 4 with no misses/stalls
• Miss penalty = 12 cycles (all misses)
• Instruction frequencies: lw = 22% of instructions executed, sw = 11% of instructions executed

Question: What is the effect of cache misses on CPU performance (e.g., on CPI)? A worked sketch follows.
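A hedged worked answer using the slide's numbers and the simplified stall model from the Performance slide:

    cpi_base = 4.0
    miss_penalty = 12

    # every instruction is fetched, so each one risks an I-cache miss
    i_stalls = 1.00 * 0.05 * miss_penalty            # 0.60 cycles/instruction

    # only lw (22%) and sw (11%) access the data cache
    d_stalls = (0.22 + 0.11) * 0.10 * miss_penalty   # ~0.40 cycles/instruction

    print(round(cpi_base + i_stalls + d_stalls, 2))
    # ~5.0: misses add about one cycle per instruction, roughly a 25% slowdown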

Page 34: Chapter 7 Memory Systems

Decreasing Miss Penalty with Multilevel Caches

Add a second-level cache:
• often the primary cache is on the same chip as the processor
• use SRAMs to add another cache above primary memory (DRAM)
• the miss penalty goes down if the data is found in the 2nd-level cache

Example: CPI of 1.0 on a 5 GHz machine with a 5% miss rate and 100 ns DRAM access; adding a 2nd-level cache with a 5 ns access time decreases the miss rate to main memory to 0.5% (a worked sketch follows this list).

Using multilevel caches:
• try to optimize the hit time on the 1st-level cache
• try to optimize the miss rate on the 2nd-level cache
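A hedged worked sketch of the example above, assuming the 0.5% is the rate of misses that go all the way to DRAM:

    cycle_ns = 1 / 5.0                 # 5 GHz -> 0.2 ns per cycle
    main_penalty = 100 / cycle_ns      # 100 ns DRAM -> 500 cycles
    l2_penalty = 5 / cycle_ns          # 5 ns L2     -> 25 cycles

    cpi_no_l2 = 1.0 + 0.05 * main_penalty
    cpi_with_l2 = 1.0 + 0.05 * l2_penalty + 0.005 * main_penalty

    print(cpi_no_l2)     # 26.0
    print(cpi_with_l2)   # 4.75 -- more than 5x faster with the L2 cache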

Page 35: Chapter 7 Memory Systems

Hardware Issues

Make reading multiple words easier by using banks of memory. It can get a lot more complicated...

[Figure: three CPU-cache-bus-memory organizations: (a) one-word-wide memory organization; (b) wide memory organization, with a multiplexor between the cache and the wide memory; (c) interleaved memory organization, with four banks (memory bank 0 through memory bank 3) on the bus.]

Page 36: Chapter 7 Memory Systems

Cache Complexities

Not always easy to understand the implications of caches:

[Figures: theoretical behavior of Radix sort vs. Quicksort, and observed behavior of Radix sort vs. Quicksort, each plotted against size (K items to sort, 4 to 4096).]

Page 37: Chapter 7 Memory Systems

Cache Complexities

Here is why: [figure: Radix sort vs. Quicksort again, plotted against size (K items to sort, 4 to 4096)]

• Memory system performance is often the critical factor
• multilevel caches and pipelined processors make it harder to predict outcomes
• compiler optimizations to increase locality sometimes hurt ILP

It is difficult to predict the best algorithm: you need experimental data.

Page 38: Chapter 7 Memory Systems

Virtual Memory

Main memory can act as a cache for the secondary storage (disk).

Advantages:
• the illusion of having more physical memory
• program relocation
• protection

[Figure: virtual addresses pass through address translation to physical addresses, or map to disk addresses.]

Page 39: Chapter 7 Memory Systems

Segmentation vs. Paging

Page: transparent to the programmer
• The virtual address is partitioned into fixed-size pages
• Physical memory is likewise partitioned
• Virtual pages map exactly onto physical pages

Segment: defined by the programmer/compiler
• The compiler partitions program/data into segments of related information (code, data, etc.)
• Segments are easily protected (read-only, execute-only, etc.)
• Segments can be of variable size
• The memory manager must find sufficient free memory to hold each segment (fragmentation may occur)
• Less common than paging

Page 40: Chapter 7 Memory Systems

Pages: Virtual Memory Blocks

Page faults: the data is not in memory, so retrieve it from disk.
• huge miss penalty, thus pages should be fairly large (e.g., 4KB)
• reducing page faults is important (LRU is worth the price)
• the faults can be handled in software instead of hardware
• write-through is too expensive, so we use write-back

Page 41: Chapter 7 Memory Systems

Page Tables for Address Translation

[Figure: the virtual page number indexes the page table; each entry holds a valid bit and a physical page number or disk address. Valid entries point into physical memory, while invalid entries point to disk storage.]
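A minimal sketch (hypothetical, not from the slides) of page-table address translation with 4 KB pages:

    PAGE_SIZE = 4096
    OFFSET_BITS = 12                     # log2(PAGE_SIZE)

    # page_table[vpn] = (valid, physical page number or disk address)
    page_table = {0: (1, 7), 1: (0, "disk:block42"), 2: (1, 3)}

    def translate(vaddr):
        vpn = vaddr >> OFFSET_BITS       # virtual page number
        offset = vaddr & (PAGE_SIZE - 1)
        valid, target = page_table[vpn]
        if not valid:
            raise RuntimeError(f"page fault: fetch {target} from disk")
        return (target << OFFSET_BITS) | offset

    print(hex(translate(0x0123)))   # vpn 0 -> physical page 7 -> 0x7123
    translate(0x1008)               # vpn 1 is invalid -> raises "page fault"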

Page 42: Chapter 7 Memory Systems

Page Tables

[Figure: the page descriptor format.]

Page 43: Chapter 7 Memory Systems

Segmented Memory

[Figure: virtual memory holds segments S1..S4. The Memory Management Unit (MMU) maps them through a segment table into physical memory, where S1 and S3 are loaded with free regions between them. There is sufficient free memory for segment S2 in total, but it is fragmented, preventing the loading of S2.]

Page 44: Chapter 7 Memory Systems

Mapping a Segmented Address

[Figure: the virtual address is split into a segment number and a displacement. The segment number indexes the segment table, whose entries hold V (valid), protection, limit, and base fields. The base is added (+) to the displacement to form the physical address of the datum; protection is checked, and the segment limit is compared against the displacement.]

Page 45: Chapter 7 Memory Systems

Combining Segmentation and Paging

• Partition segments into pages
• Combine the best features of both

[Figure: the virtual address is split into a segment number, a page number, and an offset from the top of the page. The segment table entry (V, prot, limit, base) locates a page table; the page table entry (V, prot, page #) supplies the physical page. The physical address is the physical page number followed by the offset, which is the same as in the virtual address.]

Page 46: Chapter 7 Memory Systems

Making Address Translation Fast

A cache for address translations: the translation-lookaside buffer (TLB).

Typical values: 16-512 entries; miss rate 0.01%-1%; miss penalty 10-100 cycles. A lookup sketch follows.
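A minimal sketch (hypothetical, not from the slides): a TLB as a small cache of recent vpn -> ppn translations in front of the page-table walk from the earlier sketch (a real TLB is small and must evict entries; this dict never does).

    tlb = {}

    def tlb_translate(vpn, page_table):
        if vpn in tlb:
            return tlb[vpn]              # TLB hit: no page-table access
        valid, ppn = page_table[vpn]     # TLB miss: walk the page table
        if not valid:
            raise RuntimeError("page fault")
        tlb[vpn] = ppn                   # cache the translation
        return ppn

    page_table = {0: (1, 7), 2: (1, 3)}
    print(tlb_translate(2, page_table))  # miss, fill, return 3
    print(tlb_translate(2, page_table))  # hit, return 3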

Page 47: Chapter 7 Memory Systems

TLBs and Caches

[Flowchart: a virtual address goes to the TLB access. On a TLB miss, raise a TLB miss exception. On a TLB hit, form the physical address and check whether the operation is a write. If not a write, try to read the data from the cache: on a cache hit, deliver the data to the CPU; on a cache miss, stall while the block is read. If it is a write, check the write access bit: if it is off, raise a write protection exception; otherwise try to write the data to the cache, stalling on a cache miss while the block is read, then write the data into the cache, update the dirty bit, and put the data and the address into the write buffer.]

Page 48: Chapter 7 Memory Systems


TLBs and Caches

Page 49: Chapter 7 Memory Systems

Concurrent TLB & Cache Access

Normally the cache is accessed only after translating the logical address to a physical address. The two can be done concurrently to save time:

• the page offset is the same in the virtual and the physical address
• the index (part of the offset) is available for cache-line access while translation occurs
• the tag is checked after the translation and the cache-line access complete

[Figure: the virtual address is (virtual page # | offset); translation produces the physical page #, which forms the tag, while the offset's index and byte fields select the cache line.]

Page 50: Chapter 7 Memory Systems


Modern Systems

Page 51: Chapter 7 Memory Systems

Pentium Microarchitecture

[Figure from the Intel Pentium Reference Manual.]

Page 52: Chapter 7 Memory Systems

Pentium Caches

[Figure from the Intel Pentium Reference Manual.]

Page 53: Chapter 7 Memory Systems

Pentium Memory Mapping

[Figure from the Intel Pentium Reference Manual; 13-bit index.]

Page 54: Chapter 7 Memory Systems

Pentium Paging Details

[Figure from the Intel Pentium Reference Manual.]

Page 55: Chapter 7 Memory Systems

Pentium Segment Descriptor

[Figure from the Intel Pentium Reference Manual.]

Page 56: Chapter 7 Memory Systems

Pentium Protection (Privilege) Levels

[Figure from the Intel Pentium Reference Manual.]

Page 57: Chapter 7 Memory Systems

Pentium Privilege-Level Protection

[Figure from the Intel Pentium Reference Manual: current privilege level vs. minimum required privilege level.]

Page 58: Chapter 7 Memory Systems

Pentium Page Descriptors

[Figure from the Intel Pentium Reference Manual.]

Page 59: Chapter 7 Memory Systems


Modern Systems

Things are getting complicated!

Page 60: Chapter 7 Memory Systems

Some Issues

Processor speeds continue to increase very fast, much faster than either DRAM or disk access times.

Design challenge: dealing with this growing disparity.
• Prefetching?
• 3rd-level caches and more?
• Memory design?

[Figure: performance (1 to 100,000, log scale) vs. year for the CPU and for memory; the gap between the two curves keeps widening.]