CSCE 432/832 High Performance ---- An Introduction to Multicore Memory Hierarchy Dongyuan Zhan

Page 1

CSCE 432/832 High Performance

---- An Introduction to

Multicore Memory Hierarchy

Dongyuan Zhan

Page 2


What We Learnt from the Video

• The Motivation for Multi-core Processors

– Better utilization of on-chip transistor resources as technology scales

– Use thread-level parallelism to increase throughput

• Two Models of Multi-core Processors

– Homogeneous vs. heterogeneous CMPs

• Communication & Synchronization among Cores

– Cores communicate with each other via the shared cache/memory

– Reads/writes are synchronized via locks, mutexes, or transactional memory

• How to Program Multi-core Processors

– Using OpenMP to write parallel programs (a minimal sketch follows)
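
As a hedged illustration (mine, not from the video), a minimal OpenMP program of the kind alluded to above might look like this; the array, its size, and the reduction are chosen only for the example:

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000
    static double a[N];

    int main(void) {
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            a[i] = 1.0;

        /* The loop iterations are divided among the cores; the
         * reduction clause combines the per-thread partial sums. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.0f, max threads = %d\n", sum, omp_get_max_threads());
        return 0;
    }

Compiled with -fopenmp, the same source runs serially or in parallel depending on the flag, which is much of OpenMP's appeal.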

Page 3

From Teraflop Multiprocessor to Teraflop Multicore


ASCI Red (1997~2005)

Page 4

Intel Teraflop Multicore Prototype


Page 5

From Teraflop Multiprocessor to Teraflop Multicore

• Pictured here is ASCI Red, the first computer to reach a teraflops of processing: one trillion floating-point calculations per second.

– It used about 10,000 Pentium processors running at 200 MHz,

– consumed 500 kW of power for computation and another 500 kW for cooling,

– and occupied a very large room.

• Just over 10 years later, Intel announced the world's first processor that delivers the same teraflops performance on a single chip:

– 80 cores on one chip running at 5 GHz,

– consuming only 62 watts of power,

– small enough to rest on the tip of your finger.


Page 6

A Commodity Many-core Processor

• Tile64 Multicore Processor (2007~now)


Page 7

The Schematic Design of Tile64


4 essential components

• Processor Core

• on-chip Cache

• Network-on-Chip (NoC)

• I/O controllers

[Figure: Tile64 block diagram. Each tile combines a processor (three pipelines P0-P2 plus a register file), an L1 instruction cache and L1 data cache with ITLB/DTLB, a unified L2 cache, a 2D DMA engine, and a switch serving five on-chip networks (STN, MDN, TDN, UDN, IDN). Around the tile array sit four DDR2 memory controllers, two PCIe MAC/PHY and two XAUI MAC/PHY interfaces (each with SerDes), two GbE ports, flexible I/O, and UART, HPI, JTAG, I2C, and SPI controllers.]

Page 8

Agenda Today

• An Introduction to the Multi-core Memory Hierarchy

– Why do we need a memory hierarchy in any processor?

» It trades off capacity against latency

» It makes the common case fast by exploiting programs' locality (a general principle in computer architecture)

– How do the memory hierarchies of single-core and multi-core CPUs differ?

» They are quite distinct in their on-chip caches

– Managing the CMP caches is of paramount importance for performance

» The capacity and latency issues arise again for CMP caches

» How to keep the CMP caches coherent

» Hardware & software management schemes


Page 9

The Motivation for Mem Hierarchy


Trading off between capacity and latency:

    Level         Capacity        Access time   Cost        Managed by         Transfer unit (to level above)
    Registers     100s of bytes   0.3-0.5 ns    -           program/compiler   instr. operands, 4-8 bytes
    L1/L2 cache   10s-100s of KB  ~1 ns-~10 ns  -           cache controller   blocks: 32/64 B (L1), 64/128 B (L2)
    Main memory   GBytes          200-300 ns    ~$15/GB     OS                 pages, 4 KB-64 KB
    Disk          1s-10s of TB    ~10 ms        ~$0.15/GB   -                  -

Registers and the L1/L2 caches are on chip; main memory and disk are off chip. Each step down the hierarchy trades speed for capacity: upper levels are faster, lower levels are larger.

Page 10

Programs’ Locality

• Two Kinds of Basic Locality

– Temporal:

» if a memory location is referenced, then it is likely that the same memory location will be referenced again in the near future. In the fragment below, i, j, and the loop instructions themselves are reused on every iteration:

    int i;
    register int j;
    for (i = 0; i < 20000; i++)
        for (j = 0; j < 300; j++)
            ;  /* empty body; the variables and loop code are reused */

– Spatial:

» if a memory location is referenced, then it is likely that nearby memory locations will be referenced in the near future (see the array-scan sketch below).
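
A small sketch of spatial locality (my example, not the slide's): a sequential array scan touches adjacent addresses, so each cache block fetched on a miss serves several subsequent accesses.

    int a[1024];

    long sum_array(void) {
        long sum = 0;
        /* With 64-byte blocks, the miss on a[i] brings in 16 ints,
         * so the next 15 accesses hit in the cache. */
        for (int i = 0; i < 1024; i++)
            sum += a[i];
        return sum;
    }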

• Locality + smaller, faster hardware for the common case = the memory hierarchy


Page 11

The Challenges of Memory Wall

• The Truths:

– In many applications, 30-40% of all instructions are memory operations.

– CPU speed has scaled much faster than DRAM speed:

» In 1980, CPUs and DRAMs operated at almost the same speed, about 4 MHz~8 MHz;

» CPU clock frequency has doubled every 2 years;

» DRAM speed has only doubled about every 6 years.


Page 12

Memory Wall

– DRAM bandwidth is quite limited: two DDR2-800 modules reach 12.8 GB/s in total, about 6.4 bytes per CPU cycle if the CPU runs at 2 GHz. So in a multicore processor, when multiple 64-bit cores access memory at the same time, they exacerbate contention for DRAM bandwidth.
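
The slide's figures follow directly from the module specs: a DDR2-800 module performs 800 mega-transfers per second over a 64-bit (8-byte) bus, so

$$
2 \times 800\,\mathrm{MT/s} \times 8\,\mathrm{B/transfer} = 12.8\,\mathrm{GB/s},
\qquad
\frac{12.8\,\mathrm{GB/s}}{2 \times 10^{9}\,\mathrm{cycles/s}} = 6.4\,\mathrm{B/cycle}.
$$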

– Memory Wall: the CPU spends a lot of time on off-chip memory accesses. E.g., the Intel XScale spends on average 35% of its total execution time on memory accesses. The high latency and low bandwidth of the DRAM system become a bottleneck for CPUs.


Page 13

Solutions

• How to alleviate the memory-wall problem

– Hide the memory-access latency: prefetching (see the sketch below)

– Reduce the latency by moving memory closer to the CPU: 3D-stacked on-chip DRAM

– Increase the bandwidth: optical I/O

– Reduce the number of memory accesses: keep as much reusable data in the cache as possible
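
As one concrete instance of latency hiding (my sketch, using GCC/Clang's __builtin_prefetch; the distance of 16 iterations is an assumed tuning knob, not from the slides):

    /* Prefetch a block PREFETCH_DIST iterations ahead so the DRAM
     * access overlaps with useful work instead of stalling the core. */
    #define PREFETCH_DIST 16

    void scale(double *x, long n, double k) {
        for (long i = 0; i < n; i++) {
            if (i + PREFETCH_DIST < n)
                __builtin_prefetch(&x[i + PREFETCH_DIST]);  /* a hint; may be dropped */
            x[i] *= k;
        }
    }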


Page 14

CMP Cache Organizations (Shared L2 Cache)


Page 15

CMP Cache Organizations (Private L2 Cache)


Page 16

How to Address Blocks in a CMP

• How to address blocks in a single-core processor

– L1 caches are typically virtually indexed but physically tagged, while L2 caches are mostly physically indexed and tagged (a consequence of virtual memory).

• How to address blocks in a CMP

– L1 caches are accessed the same way as in a single-core processor.

– If the L2 caches are private, a block is still addressed the same way.

– If the L2 cache is shared among all of the cores, then part of the block number also selects the tile whose L2 bank holds the block:

04/18/23 CSCE 432/832, CMP Memory Hierarchy 16

Private L2 / single-core (32-bit physical address):

    Tag (16 bits) | Set index (10 bits) | Block offset (6 bits)

    block number = tag + set index

Shared L2 distributed over 4 tiles:

    Tag (14 bits) | Set index (10 bits) | Tile # (TN, 2 bits) | Block offset (6 bits)

    the 2-bit tile number is carved out of the block number
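
A sketch of the decoding under the bit widths above (64 B blocks, 1024 sets per bank, 4 tiles); the helper names are mine:

    #include <stdint.h>

    #define OFFSET_BITS 6   /* 64-byte blocks         */
    #define TILE_BITS   2   /* 4 tiles (shared L2)    */
    #define SET_BITS    10  /* 1024 sets per L2 bank  */

    /* Shared L2: the low bits of the block number select the home tile. */
    static inline unsigned tile_of(uint32_t paddr) {
        return (paddr >> OFFSET_BITS) & ((1u << TILE_BITS) - 1);
    }
    static inline unsigned set_of(uint32_t paddr) {
        return (paddr >> (OFFSET_BITS + TILE_BITS)) & ((1u << SET_BITS) - 1);
    }
    static inline uint32_t tag_of(uint32_t paddr) {
        return paddr >> (OFFSET_BITS + TILE_BITS + SET_BITS);
    }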

Page 17

How to Address Blocks in a CMP


[Figure: a 4-tile CMP, each tile holding a core with private L1 caches, an L2 bank, and a router. A physical address issued by Core 1 splits into tag, set index, and block offset; consecutive block numbers 00, 01, 02, 03, 10, ... are interleaved across Tiles 0-3, so the low bits of the block number pick the home tile.]

Page 18

How to Address Blocks in a CMP


[Figure: the same 4-tile CMP, with the physical address now split into tag, set index, tile index, and block offset before being issued to the L2 controller. The 2-bit tile index routes the request: consecutive blocks 000, 001, 002, 003, 010, ... map to Tiles 0, 1, 2, 3, 0, ...]

Page 19

CMP Cache Coherence

• Snoop-based:

– Every cache on the bus snoops it to determine whether it holds a copy of the block being requested. Multiple copies of a block can be read without any coherence problem; to write, however, a processor must gain exclusive access to the block by invalidating or updating all other copies.

– Sufficient for small-scale CMPs with a bus interconnect.

• Directory-based:

– Shared data is tracked in a common directory that maintains coherence between caches. When a cache line is changed, the directory either updates or invalidates the other cached copies of that line.

– Necessary for many-core CMPs with interconnects such as a mesh (a sketch of a directory entry follows).
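
To make the directory idea concrete (my sketch, not a real protocol implementation), an entry can be a state plus a sharer bit-vector; on a write, every other sharer is invalidated:

    #include <stdint.h>

    typedef enum { UNCACHED, SHARED, MODIFIED } dir_state_t;

    typedef struct {
        dir_state_t state;
        uint64_t    sharers;   /* bit i set => core i holds a copy */
    } dir_entry_t;

    /* Grant core `writer` exclusive access to the block tracked by `e`. */
    void on_write(dir_entry_t *e, int writer, void (*send_inval)(int core)) {
        for (int i = 0; i < 64; i++)
            if (((e->sharers >> i) & 1) && i != writer)
                send_inval(i);              /* invalidate the other copies */
        e->sharers = 1ull << writer;
        e->state   = MODIFIED;
    }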


Page 20

Non-Uniform Cache Access Time in Shared L2 Caches


Page 21

Non-Uniform Cache Access Time in Shared L2 Caches

• Let's assume that Core 0 needs to access a data block stored in Tile 15 of the 4x4 mesh:

– Assume accessing an L2 cache bank takes 10 cycles;

– Assume transferring a data block between adjacent routers takes 2 cycles;

– Then the remote access to the block in Tile 15 takes 10 + 2*(2*6) = 34 cycles (6 hops each way), far more than a local L2 access. The sketch below generalizes this arithmetic.
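
A sketch of the latency model implied by these numbers (X-Y routing distance on the mesh; the helper names are mine):

    #include <stdlib.h>

    #define BANK_CYCLES 10  /* L2 bank access           */
    #define HOP_CYCLES  2   /* per router-to-router hop */
    #define MESH_W      4   /* 4x4 mesh                 */

    int nuca_latency(int src, int dst) {
        int hops = abs(src % MESH_W - dst % MESH_W)
                 + abs(src / MESH_W - dst / MESH_W);
        return BANK_CYCLES + HOP_CYCLES * 2 * hops;  /* round trip */
    }
    /* nuca_latency(0, 15) = 10 + 2*2*(3+3) = 34 cycles */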

• Non-Uniform Cache Access (NUCA) means that the latency of a cache access is a function of the physical locations of both the requesting core and the cache bank being accessed.


Page 22

How to Reduce the Latency of Remote Cache Accesses

• At least two solutions:

– Place the data close to the requesting core

» Victim replication [1]: place L1 victim blocks in the local L2 cache;

» Change the layout of the data: I will talk about one such approach shortly;

– Use faster transmission

» Use special on-chip interconnects that carry data as radio-wave or light-wave signals


Page 23

The RF-Interconnect [2]


Page 24

Interference in Caching in Shared L2 Caches

• The Problem: because a shared L2 cache is accessible to all cores, one core can interfere with another when placing blocks in it.

– For example, in a dual-core CMP, if a streaming application such as a video player is co-scheduled with a scientific-computation application that has good locality, the aggressive streaming application will continuously install new blocks in the L2 cache and evict the computation application's cached blocks, hurting its performance.

• A Solution:

– Regulate each core's usage of the L2 cache according to the utility the core derives from it [3].


Page 25

The Capacity Problems in Private L2 Caches

• The Problems:

– The L2 capacity accessible to each core is fixed, regardless of the core's actual cache-capacity demand. E.g., if two applications are co-scheduled on a dual-core CMP with two 1 MB private L2 caches, and one application needs 0.5 MB of cache while the other asks for 1.5 MB, then one private L2 cache is underutilized while the other is overwhelmed.

– If a parallel program runs on the CMP, different cores will have a lot of data in common. The private L2 organization, however, requires each core to keep its own copy of that common data in its local cache, causing heavy redundancy and degrading the effective cache capacity.

• A Solution: Cooperative Caching [4]


Page 26

A Comparison Between Shared and Private L2 Caches


• Set mapping

– L2S: first locate the tile, then index the set within its bank

– L2P: the same as in a single-core CPU

• Coherence directory

– L2S: each L2 entry has its own directory bits; no separate directory caches

– L2P: independent directory caches, distributed with the same tile-mapping scheme as the shared design

• Capacity

– L2S: high aggregate capacity available to any core

– L2P: relatively low capacity for each core

• Latency

– L2S: due to the distributed mapping, much of the requested on-chip data sits in non-local L2 banks

– L2P: the requested on-chip data is in the private (closest) L2

• Sharing

– L2S: capacity & data

– L2P: none

• Performance isolation

– L2S: severe contention in L2 capacity allocation among cores

– L2P: no interference among cores

• Commodity CMPs

– L2S: Intel Core 2 Duo E6600, Sun SPARC Niagara 2, Tilera Tile64 (64 cores)

– L2P: AMD Athlon64 6400+, Intel Pentium D 840

Page 27

Using OS to Manage CMP Caches [5]

• Two kinds of address space: virtual (or logical) and physical

• Page coloring: there is a fixed correspondence between a physical page's number and its location in the cache

• In CMPs with a shared L2 cache, by changing this mapping the OS can decide where a virtual page requested by a core is placed in the L2 cache:

– Tile# (where a page is cached) = physical page number mod #Tiles (a sketch follows)
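
A sketch of the rule (the helper names are mine): the OS controls a page's home tile simply by choosing which physical frame backs it.

    #define NUM_TILES 16

    /* Home tile of a physical page under the slide's mapping rule. */
    unsigned home_tile(unsigned long ppn) {
        return ppn % NUM_TILES;
    }

    /* To place a virtual page near `tile`, the OS allocates any free
     * physical frame whose number satisfies ppn % NUM_TILES == tile. */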


Page 28

Using OS to Manage CMP Caches


[Figure: a 4-core CMP (Core 0-Core 3) with physical pages numbered 0, 1, 2, 3, 4, ... The virtual pages required by Core 0 are mapped by the OS onto physical pages that all map (mod 4) to Tile 0, so their cached copies land in Core 0's local L2 bank.]

Page 29

Using OS to Manage CMP Caches

• The Benefits

– Improved Data Proximity

– Capacity Sharing

– Data Sharing (to be introduced next time)


Page 30

Summary

• What we covered in this class:

– The memory-wall problem for CMPs

– The two basic cache organizations for CMPs

– HW & SW approaches to managing the last-level cache


Page 31

References

[1] M. Zhang, et al. Victim Replication: Maximizing Capacity while Hiding Wire Delay in Tiled Chip Multiprocessors. ISCA’05.

[2] F. Chang, et al. CMP Network-on-Chip Overlaid With Multi-Band RF-Interconnect. HPCA’08.

[3] A. Jaleel, et al. Adaptive Insertion Policies for Managing Shared Caches. PACT’08.

[4] J. Chang, et al. Cooperative Caching for Chip Multiprocessors. ISCA’06.

[5] S. Cho, et al. Managing Distributed, Shared L2 Caches through OS-Level Page Allocation. MICRO’06.
