
Page 1:

PGAS Languages
Compilation Technology for Computational Science

Kathy Yelick
Lawrence Berkeley National Laboratory and UC Berkeley

Joint work with:

The Titanium Group: S. Graham, P. Hilfinger, P. Colella, D. Bonachea, K. Datta, E. Givelberg, A. Kamil, N. Mai, A. Solar, J. Su, T. Wen

The Berkeley UPC Group: C. Bell, D. Bonachea, W. Chen, J. Duell, P. Hargrove, P. Husbands, C. Iancu, R. Nishtala, M. Welcome

Page 2:

Parallelism Everywhere
• Single-processor Moore's Law scaling is ending
  • Power density limitations; device physics below 90 nm
• Multicore is becoming the norm
  • AMD, IBM, Intel, and Sun all offer multicore
  • Number of cores per chip is likely to increase with density
• Fundamental software change
  • Parallelism is exposed to software
  • Performance is no longer solely a hardware problem
• What has the HPC community learned?
  • Caveat: scale and applications differ

Page 3:

Phillip Colella's "Seven Dwarfs"

High-end simulation in the physical sciences reduces to 7 methods:
1. Structured grids (including adaptive mesh refinement)
2. Unstructured grids
3. Spectral methods (FFTs, etc.)
4. Dense linear algebra
5. Sparse linear algebra
6. Particles
7. Monte Carlo simulation

Note: data sizes (8-bit to 32-bit) and types (integer, character) differ, but the algorithms are the same; games/entertainment workloads are close to scientific computing.

Add 4 more for embedded computing (covers all 41 EEMBC benchmarks):
8. Search/sort
9. Filter
10. Combinational logic
11. Finite state machine

Slide source: Phillip Colella, 2004 and Dave Patterson, 2006

Page 4:

Parallel Programming Models
• Parallel software is still an unsolved problem!
• Most parallel programs are written using either:
  • Message passing with an SPMD model
    • Used for scientific applications; scales easily
  • Shared memory with threads in OpenMP, Pthreads, or Java
    • Used for non-scientific applications; easier to program
• Partitioned Global Address Space (PGAS) languages offer 3 features:
  • Productivity: easy to understand and use
  • Performance: primary requirement in HPC
  • Portability: must run everywhere

Page 5:

Partitioned Global Address Space
• Global address space: any thread/process may directly read/write data allocated by another
• Partitioned: data is designated as local (near) or global (possibly far); the programmer controls layout

[Figure: a global address space spanning threads p0, p1, ..., pn; each thread holds private variables (e.g., x and y), private pointers (l:), and global pointers (g:) that may refer to data on other threads.]

By default:
• Object heaps are shared
• Program stacks are private

• 3 current languages: UPC, CAF, and Titanium
• Emphasis in this talk is on UPC and Titanium (based on Java)

Page 6:

PGAS Language Overview
• Many common concepts, although specifics differ
  • Consistent with the base language
• Both private and shared data
  • int x[10]; and shared int y[10];
• Support for distributed data structures
  • Distributed arrays; local and global pointers/references
• One-sided shared-memory communication
  • Simple assignment statements: x[i] = y[i]; or t = *p;
  • Bulk operations: memcpy in UPC, array ops in Titanium and CAF
• Synchronization
  • Global barriers, locks, memory fences
• Collective communication, I/O libraries, etc.
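To make these pieces concrete, here is a minimal UPC sketch (assuming any UPC 1.2 compiler; upc.h, THREADS, MYTHREAD, and upc_barrier are standard UPC) combining private data, shared data, and a one-sided read:

    #include <upc.h>
    #include <stdio.h>

    shared int y[10*THREADS];   /* shared: distributed cyclically across threads */
    int x[10];                  /* private: one copy per thread */

    int main(void) {
        upc_barrier;                  /* global barrier */
        for (int i = 0; i < 10; i++)
            x[i] = y[i];              /* one-sided read; y[i] may be remote */
        if (MYTHREAD == 0)
            printf("thread 0 of %d done\n", THREADS);
        return 0;
    }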

Page 7:

Example: Titanium Arrays
• Ti arrays are created using Domains and indexed using Points:

    double [3d] gridA = new double [[0,0,0]:[10,10,10]];

• Eliminates some loop-bound errors using foreach:

    foreach (p in gridA.domain())
        gridA[p] = gridA[p]*c + gridB[p];

• A rich domain calculus allows slicing, subarrays, transpose, and other operations without data copies
• Array copy operations automatically work on the intersection:

    data[neighborPos].copy(mydata);

[Figure: mydata overlaps data[neighborPos]; the "restrict"-ed (non-ghost) cells, ghost cells, and the intersection (copied area) are marked.]

Page 8:

Productivity: Line Count Comparison
• Comparison of NAS Parallel Benchmarks
  • The UPC version has modest programming effort relative to C
  • Titanium is even more compact, especially for MG, which uses multi-dimensional arrays
  • Caveat: Titanium FT has a user-defined Complex type and uses cross-language support to call FFTW for the serial 1D FFTs

UPC results from Tarek El-Ghazawi et al.; CAF from Chamberlain et al.; Titanium joint with Kaushik Datta and Dan Bonachea

[Chart: lines of code (0-2000) for NPB-CG, NPB-EP, NPB-FT, NPB-IS, and NPB-MG in Fortran, C, MPI+Fortran, CAF, UPC, and Titanium.]

Page 9:

Case Study 1: Block-Structured AMR
• Adaptive Mesh Refinement (AMR) is challenging
  • Irregular data accesses and control from boundaries
  • A mixed global/local view is useful

AMR Titanium work by Tong Wen and Phillip Colella
Titanium AMR benchmarks available

Page 10:

AMR in Titanium
C++/Fortran/MPI AMR:
• Chombo package from LBNL
• Bulk-synchronous communication:
  • Pack boundary data between procs

Titanium AMR:
• Entirely in Titanium
• Finer-grained communication
  • No explicit pack/unpack code
  • Automated in the runtime system

Code size in lines:
                        C++/Fortran/MPI   Titanium
  AMR data structures        35000          2000
  AMR operations              6500          1200
  Elliptic PDE solver         4200*         1500

10X reduction in lines of code!
* Somewhat more functionality in the PDE part of the Chombo code

Work by Tong Wen and Phillip Colella; communication optimizations joint with Jimmy Su

Page 11:

Performance of Titanium AMR
[Chart: speedup (0-80) vs. #procs (16, 28, 36, 56, 112) for Titanium and Chombo; comparable performance.]
• Serial: Titanium is within a few % of C++/Fortran, and sometimes faster!
• Parallel: Titanium scaling is comparable with generic optimizations
  - additional optimizations (namely overlap) not yet implemented

Page 12:

Immersed Boundary Simulation in Titanium
• Models elastic structures in an incompressible fluid
  • Blood flow in the heart, blood clotting, the inner ear, embryo growth, and many more
• Complicated parallelization
  • Particle/mesh method
  • "Particles" connected into materials

Code size in lines: Fortran 8000, Titanium 4000

[Chart: time per timestep (0-100 s) vs. #procs (1-128) on Power3/SP at 256^3 and 512^3 and on P4/Myrinet at 512^2x256.]

Joint work with Ed Givelberg and Armando Solar-Lezama

Page 13:

High Performance
Strategy for acceptance of a new language:
• Within HPC: make it run faster than anything else

Approaches to high performance:
• Language support for performance:
  • Allow programmers sufficient control over resources for tuning
  • Non-blocking data transfers, cross-language calls, etc.
  • Control over layout, load balancing, and synchronization
• Compiler optimizations reduce the need for hand tuning
  • Automate non-blocking memory operations, relaxed memory, ...
  • Productivity gains through parallel analysis and optimizations
• Runtime support exposes the best possible performance
  • Berkeley UPC and Titanium use the GASNet communication layer
  • Dynamic optimizations based on runtime information

Page 14:

One-Sided vs. Two-Sided
• A one-sided put/get message can be handled directly by a network interface with RDMA support
  • Avoids interrupting the CPU or staging data through the CPU (pre-posts)
• A two-sided message needs to be matched with a receive to identify the memory address where the data goes
  • Matching can be offloaded to the network interface, as in networks like Quadrics
  • Requires downloading match tables to the interface (from the host)

[Figure: a one-sided put message carries the destination address with its data payload, so the network interface can place it directly in memory; a two-sided message carries only a message id, so the host CPU must match it with a receive before the data can be placed.]
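A schematic C fragment of how a runtime drives an RDMA-capable network through GASNet's extended API (gasnet_put_nb and gasnet_wait_syncnb are from the public GASNet-1 spec; this sketch assumes gasnet_init/gasnet_attach have already run and that remote_addr points into the peer's registered segment):

    #include <gasnet.h>

    /* Start a one-sided put, overlap it with independent work,
       then complete it before reusing the source buffer. */
    void put_and_overlap(gasnet_node_t peer, void *remote_addr,
                         void *local_buf, size_t nbytes) {
        gasnet_handle_t h = gasnet_put_nb(peer, remote_addr, local_buf, nbytes);
        /* ... independent computation here; the NIC moves the data ... */
        gasnet_wait_syncnb(h);   /* local_buf may be reused after this */
    }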

Page 15:

Performance Advantage of One-Sided Communication: GASNet vs. MPI
[Chart: flood bandwidth (MB/s, 0-900; up is good) vs. message size (10 B to 10 MB) for GASNet non-blocking put vs. MPI, plus the relative bandwidth ratio (GASNet/MPI, 1.0-2.4).]
• Opteron/InfiniBand (Jacquard at NERSC):
  • GASNet's vapi-conduit and OSU MPI 0.9.5 MVAPICH
• The half-power point (N1/2) differs by one order of magnitude

Joint work with Paul Hargrove and Dan Bonachea

Page 16:

GASNet: Portability and High Performance
GASNet is better for latency across machines.
[Chart: 8-byte roundtrip latency (usec, 0-25; down is good) for MPI ping-pong vs. GASNet put+sync on Elan3/Alpha, Elan4/IA64, Myrinet/x86, IB/G5, IB/Opteron, and SP/Federation.]

Joint work with the UPC group; GASNet design by Dan Bonachea

Page 17:

GASNet: Portability and High Performance
GASNet is at least as fast (comparable) for large messages.
[Chart: flood bandwidth for 2 MB messages as percent of hardware peak (up is good) for MPI vs. GASNet on Elan3/Alpha, Elan4/IA64, Myrinet/x86, IB/G5, IB/Opteron, and SP/Federation.]

Joint work with the UPC group; GASNet design by Dan Bonachea

Page 18:

GASNet: Portability and High Performance
GASNet excels at mid-range sizes, which is important for overlap.
[Chart: flood bandwidth for 4 KB messages as percent of hardware peak (up is good) for MPI vs. GASNet on Elan3/Alpha, Elan4/IA64, Myrinet/x86, IB/G5, IB/Opteron, and SP/Federation.]

Joint work with the UPC group; GASNet design by Dan Bonachea

Page 19:

Case Study 2: NAS FT
• Performance of the exchange (all-to-all) is critical
  • 1D FFTs in each dimension, 3 phases
  • Transpose after the first 2 for locality
  • Bisection bandwidth-limited
    • The problem grows as #procs grows
• Three approaches:
  • Exchange: wait for the 2nd-dimension FFTs to finish, send 1 message per processor pair
  • Slab: wait for a chunk of rows destined for 1 processor, send when ready
  • Pencil: send each row as it completes

Joint work with Chris Bell, Rajesh Nishtala, and Dan Bonachea

Page 20:

Overlapping Communication
• Goal: make use of "all the wires, all the time"
  • Schedule communication to avoid network backup
• Trade-off: overhead vs. overlap
  • Exchange has the fewest messages, so the least message overhead
  • Slabs and pencils have more overlap; pencils the most
• Example message sizes (Class D problem on 256 processors):

  Exchange (all data at once)                       512 KB
  Slabs (contiguous rows that go to 1 processor)     64 KB
  Pencils (single row)                               16 KB

Joint work with Chris Bell, Rajesh Nishtala, and Dan Bonachea
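A sketch of the pencil strategy using Berkeley UPC's non-blocking memcpy extensions (bupc_memput_async and bupc_waitsync are documented Berkeley UPC extensions, though header placement varies by release; compute_pencil, NPENCILS, and the destination layout are hypothetical):

    #include <upc.h>

    #define NPENCILS     64
    #define PENCIL_BYTES 16384            /* 16 KB per pencil, as above */

    void compute_pencil(char *row);       /* hypothetical user code */

    void send_pencils(shared [] char *dst,
                      char pencils[NPENCILS][PENCIL_BYTES]) {
        bupc_handle_t h[NPENCILS];
        for (int i = 0; i < NPENCILS; i++) {
            compute_pencil(pencils[i]);   /* finish row i */
            h[i] = bupc_memput_async(dst + (size_t)i * PENCIL_BYTES,
                                     pencils[i], PENCIL_BYTES);
        }
        for (int i = 0; i < NPENCILS; i++)
            bupc_waitsync(h[i]);          /* drain all outstanding puts */
    }

Each put is injected as soon as its row is ready, so transfers overlap the remaining computation instead of waiting for a whole slab.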

Page 21:

NAS FT Variants: Performance Summary
• Slab is always best for MPI; the small-message cost is too high
• Pencil is always best for UPC; more overlap
[Chart: best MFlop rates per thread (0-1000) for all NAS FT benchmark versions (best NAS Fortran/MPI, best MPI, best UPC) on Myrinet 64, InfiniBand 256, Elan3 256/512, and Elan4 256/512; the best aggregate rate is about 0.5 TFlops.]

Joint work with Chris Bell, Rajesh Nishtala, and Dan Bonachea

Page 22:

Case Study 3: LU Factorization
• Direct methods have complicated dependencies
  • Especially with pivoting (unpredictable communication)
  • Especially for sparse matrices (dependence graph with holes)
• LU factorization in UPC
  • Uses overlap ideas and multithreading to mask latency
  • Multithreaded: UPC threads + user threads + threaded BLAS
    • Panel factorization, including pivoting
    • Update to a block of U
    • Trailing submatrix updates
• Status:
  • Dense LU done: HPL-compliant
  • Sparse version underway

Joint work with Parry Husbands

Page 23:

UPC HPL Performance
• Comparison to ScaLAPACK on an Altix, on a 2x4 process grid:
  • ScaLAPACK: 25.25 GFlop/s (block size 64; several block sizes tried)
  • UPC LU: 33.60 GFlop/s (block size 256); 26.47 GFlop/s (block size 64)
• n = 32000 on a 4x4 process grid:
  • ScaLAPACK: 43.34 GFlop/s (block size 64)
  • UPC: 70.26 GFlop/s (block size 200)
[Charts: Linpack GFlop/s, MPI/HPL vs. UPC, on the Cray X1 (X1/64, X1/128), an Opteron cluster (Opt/64), and an Altix (Alt/32).]
• MPI HPL numbers are from the HPCC database
• Large scaling: 2.2 TFlops on 512p and 4.4 TFlops on 1024p (Thunder)

Joint work with Parry Husbands

Page 24:

Automating Support for Optimizations
• The previous examples are hand-optimized
  • Non-blocking put/get on distributed memory
  • Relaxed memory consistency on shared memory
• What analyses are needed to optimize parallel codes?
  • Concurrency analysis: determine which blocks of code could run in parallel
  • Alias analysis: determine which variables could access the same location
  • Synchronization analysis: align matching barriers, locks, ...
  • Locality analysis: determine when a general (global) pointer is used only locally, so it can be converted to a cheaper local pointer

Joint work with Amir Kamil and Jimmy Su

Page 25:

Reordering in Parallel Programs
In parallel programs, a reordering can change the semantics even if no local dependencies exist.

Initially, flag = data = 0.

    T1 (original):    T1 (reordered):    T2:
    data = 1;         flag = 1;          f = flag;
    flag = 1;         data = 1;          d = data;

After the reordering, {f == 1, d == 0} is possible; it is not possible in the original. The compiler, runtime, and hardware can all produce such reorderings.

Joint work with Amir Kamil and Jimmy Su
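The same example rendered as a deliberately racy C/pthreads program; because data and flag are plain variables, the compiler and hardware may reorder the stores or loads, so f == 1, d == 0 is a possible output:

    #include <pthread.h>
    #include <stdio.h>

    int data = 0, flag = 0;     /* plain, non-atomic shared variables */

    void *t1(void *arg) {       /* T1: data = 1; flag = 1; */
        data = 1;
        flag = 1;
        return arg;
    }

    void *t2(void *arg) {       /* T2: f = flag; d = data; */
        int f = flag;
        int d = data;
        printf("f == %d, d == %d\n", f, d);  /* f==1, d==0 is possible */
        return arg;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, t1, NULL);
        pthread_create(&b, NULL, t2, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }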

Page 26:

Memory Models
• Sequential consistency: a reordering is illegal if it can be observed by another thread
• Relaxed consistency: reordering may be observed, but local dependencies and synchronization are preserved (roughly)
• Titanium, Java, and UPC are not sequentially consistent
  • The perceived cost of enforcing it is too high
  • For Titanium and UPC, network latency is the cost
  • For Java, shared-memory fences and restricted code transformations are the cost

Joint work with Amir Kamil and Jimmy Su

Page 27:

Conflicts
• A reordering of an access is observable only if it conflicts with some other access:
  • The accesses can be to the same memory location
  • At least one access is a write
  • The accesses can run concurrently
• Fences (compiler and hardware) need to be inserted around accesses that conflict

    T1:          T2:
    data = 1;    f = flag;
    flag = 1;    d = data;

(The data accesses conflict with each other, as do the flag accesses.)

Joint work with Amir Kamil and Jimmy Su
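One way to insert the needed fences by hand, using C11 acquire/release atomics (the compilers discussed here insert equivalent fences automatically; this sketch only illustrates the effect):

    #include <stdatomic.h>
    #include <stdio.h>

    int data = 0;                /* ordinary data */
    atomic_int flag = 0;         /* the conflicting access, made atomic */

    void producer(void) {
        data = 1;
        /* release store: the write to data cannot sink below it */
        atomic_store_explicit(&flag, 1, memory_order_release);
    }

    void consumer(void) {
        /* acquire load: the read of data cannot hoist above it */
        if (atomic_load_explicit(&flag, memory_order_acquire) == 1)
            printf("d == %d (guaranteed to be 1)\n", data);
    }

    int main(void) {
        producer();   /* run in separate threads in a real program */
        consumer();
        return 0;
    }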

Page 28:

Sequential Consistency in Titanium
• Minimize the number of fences: allow the same optimizations as the relaxed model
• Concurrency analysis identifies concurrent accesses
  • Relies on Titanium's textual barriers and single-valued expressions
• Alias analysis identifies accesses to the same location
  • Relies on the SPMD nature of Titanium

Joint work with Amir Kamil and Jimmy Su

Page 29:

Barrier Alignment
• Many parallel languages make no attempt to ensure that barriers line up
• Example code that is legal but will deadlock:

    if (Ti.thisProc() % 2 == 0)
        Ti.barrier();  // even ID threads
    else
        ;              // odd ID threads

Joint work with Amir Kamil and Jimmy Su

Page 30:

Structural Correctness
• Aiken and Gay introduced structural correctness (POPL '98)
• Ensures that every thread executes the same number of barriers
• Example of structurally correct code:

    if (Ti.thisProc() % 2 == 0)
        Ti.barrier();  // even ID threads
    else
        Ti.barrier();  // odd ID threads

Joint work with Amir Kamil and Jimmy Su

Page 31:

Textual Barrier Alignment
• Titanium has textual barriers: all threads must execute the same textual sequence of barriers
• A stronger guarantee than structural correctness; this example is illegal:

    if (Ti.thisProc() % 2 == 0)
        Ti.barrier();  // even ID threads
    else
        Ti.barrier();  // odd ID threads

• Single-valued expressions are used to enforce textual barriers

Joint work with Amir Kamil and Jimmy Su

Page 32:

Single-Valued Expressions
• A single-valued expression has the same value on all threads when evaluated
  • Example: Ti.numProcs() > 1
• All threads are guaranteed to take the same branch of a conditional guarded by a single-valued expression
• Only single-valued conditionals may contain barriers
• Example of legal barrier use:

    if (Ti.numProcs() > 1)
        Ti.barrier();  // multiple threads
    else
        ;              // only one thread total

Joint work with Amir Kamil and Jimmy Su

Page 33:

Concurrency Analysis
• A graph is generated from the program as follows:
  • A node is added for each code segment between barriers and single-valued conditionals
  • Edges are added to represent control flow between segments

    // code segment 1
    if ([single])
        // code segment 2
    else
        // code segment 3
    // code segment 4
    Ti.barrier()
    // code segment 5

[Graph: nodes 1-5 with edges 1->2, 1->3, 2->4, 3->4, and a barrier edge 4->5.]

Joint work with Amir Kamil and Jimmy Su

Page 34:

Concurrency Analysis (II)
• Two accesses can run concurrently if:
  • They are in the same node, or
  • One access's node is reachable from the other access's node without hitting a barrier
• Algorithm: remove barrier edges, then do a DFS

Concurrent segments (X = may run concurrently):

        1  2  3  4  5
    1   X  X  X  X
    2   X  X     X
    3   X     X  X
    4   X  X  X  X
    5               X

Joint work with Amir Kamil and Jimmy Su
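A small runnable C version of this pass on the example graph (node numbering from the slide; the barrier edge 4->5 is simply omitted):

    #include <stdio.h>

    #define NSEG 5

    static int adj[NSEG][NSEG];     /* control-flow edges, barrier edges removed */
    static int reach[NSEG][NSEG];   /* reach[s][t]: t reachable from s */

    static void dfs(int root, int v) {
        for (int w = 0; w < NSEG; w++)
            if (adj[v][w] && !reach[root][w]) {
                reach[root][w] = 1;
                dfs(root, w);
            }
    }

    int main(void) {
        adj[0][1] = adj[0][2] = 1;  /* 1 -> 2, 1 -> 3 (conditional) */
        adj[1][3] = adj[2][3] = 1;  /* 2 -> 4, 3 -> 4 (merge) */
        /* the barrier edge 4 -> 5 is deleted before the search */

        for (int v = 0; v < NSEG; v++) dfs(v, v);

        /* s and t may run concurrently if s == t or one reaches
           the other without crossing a barrier */
        for (int s = 0; s < NSEG; s++)
            for (int t = s; t < NSEG; t++)
                if (s == t || reach[s][t] || reach[t][s])
                    printf("segments %d and %d may be concurrent\n",
                           s + 1, t + 1);
        return 0;
    }

Its output reproduces the table above: segments 2 and 3 are never concurrent (all threads take the same branch of the single-valued conditional), and nothing before the barrier is concurrent with segment 5.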

Page 35:

Alias Analysis
• Allocation sites correspond to abstract locations (a-locs)
• All explicit and implicit program variables have points-to sets
• A-locs are typed and have a points-to set for each field of the corresponding type
  • Arrays have a single points-to set for all indices
• The analysis is flow- and context-insensitive
  • An experimental call-site-sensitive version doesn't seem to help much

Joint work with Amir Kamil and Jimmy Su

Page 36:

Thread-Aware Alias Analysis
• Two types of abstract locations: local and remote
  • Local locations reside in the local thread's memory
  • Remote locations reside on another thread
• Exploits the SPMD property
  • Results are a summary over all threads
  • Independent of the number of threads at runtime

Joint work with Amir Kamil and Jimmy Su

Page 37:

Alias Analysis: Allocation
• Allocation creates a new local abstract location
  • The result of an allocation must reside in local memory

    class Foo { Object z; }

    static void bar() {
    L1:  Foo a = new Foo();
         Foo b = broadcast a from 0;
         Foo c = a;
    L2:  a.z = new Object();
    }

A-locs 1 and 2 are created at the allocations labeled L1 and L2; the points-to sets for a, b, and c are filled in by the following steps.

Joint work with Amir Kamil and Jimmy Su

Page 38:

Alias Analysis: Assignment
• Assignment copies the source's abstract locations into the points-to set of the target

    (same code as the previous slide)

Points-to sets:
    a   -> {1}
    b   -> {}
    c   -> {1}
    1.z -> {2}
A-locs: 1, 2

Joint work with Amir Kamil and Jimmy Su

Page 39:

Alias Analysis: Broadcast
• Broadcast produces both local and remote versions of the source abstract location
  • The remote a-loc points to the remote analog of what the local a-loc points to

    (same code as the previous slide)

Points-to sets:
    a    -> {1}
    b    -> {1, 1r}
    c    -> {1}
    1.z  -> {2}
    1r.z -> {2r}
A-locs: 1, 2, 1r

Joint work with Amir Kamil and Jimmy Su

Page 40:

Aliasing Results
• Two variables A and B may alias if:
    there exists x such that x is in pointsTo(A) and x is in pointsTo(B)
• Two variables A and B may alias across threads if:
    there exists x such that x is in pointsTo(A) and R(x) is in pointsTo(B)
  (where R(x) is the remote counterpart of x)

Points-to sets:
    a -> {1}
    b -> {1, 1r}
    c -> {1}

    Variable   Aliases   Aliases across threads
    a          b, c      b
    b          a, c      a, c
    c          a, b      b

Joint work with Amir Kamil and Jimmy Su
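The two membership tests are simple set operations; here is a self-contained C sketch (the string encoding of a-locs and the R(x) = x + "r" convention are illustrative only):

    #include <stdio.h>
    #include <string.h>

    #define MAXLOC 8

    typedef struct { int n; const char *locs[MAXLOC]; } PtsSet;

    static int contains(const PtsSet *s, const char *x) {
        for (int i = 0; i < s->n; i++)
            if (strcmp(s->locs[i], x) == 0) return 1;
        return 0;
    }

    /* may alias: some x is in pointsTo(A) and in pointsTo(B) */
    static int may_alias(const PtsSet *a, const PtsSet *b) {
        for (int i = 0; i < a->n; i++)
            if (contains(b, a->locs[i])) return 1;
        return 0;
    }

    /* across threads: some x in pointsTo(A) has R(x) in pointsTo(B) */
    static int may_alias_across(const PtsSet *a, const PtsSet *b) {
        char rx[16];
        for (int i = 0; i < a->n; i++) {
            snprintf(rx, sizeof rx, "%sr", a->locs[i]);
            if (contains(b, rx)) return 1;
        }
        return 0;
    }

    int main(void) {
        PtsSet a = {1, {"1"}}, b = {2, {"1", "1r"}}, c = {1, {"1"}};
        printf("a,b: alias=%d across=%d\n", may_alias(&a, &b),
               may_alias_across(&a, &b));
        printf("a,c: alias=%d across=%d\n", may_alias(&a, &c),
               may_alias_across(&a, &c));
        return 0;
    }

On the sets above this reports that a and b alias both locally and across threads, while a and c alias locally only, matching the table.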

Page 41:

Benchmarks

    Benchmark     Lines¹   Description
    pi               56    Monte Carlo integration
    demv            122    Dense matrix-vector multiply
    sample-sort     321    Parallel sort
    lu-fact         420    Dense linear algebra
    3d-fft          614    Fourier transform
    gsrb           1090    Computational fluid dynamics kernel
    gsrb*          1099    Slightly modified version of gsrb
    spmv           1493    Sparse matrix-vector multiply
    gas            8841    Hyperbolic solver for gas dynamics

¹ Line counts do not include the reachable portion of the 37,000-line Titanium/Java 1.0 libraries

Joint work with Amir Kamil and Jimmy Su

Page 42:

Analysis Levels
• We tested analyses of varying levels of precision:

    Analysis            Description
    naïve               All heap accesses
    sharing             All shared accesses
    concur              Concurrency analysis + type-based alias analysis
    concur/saa          Concurrency analysis + sequential alias analysis
    concur/taa          Concurrency analysis + thread-aware alias analysis
    concur/taa/cycle    Concurrency analysis + thread-aware alias analysis + cycle detection

Joint work with Amir Kamil and Jimmy Su

Page 43:

Static Fence Removal
[Chart: static (logical) fences removed, as a percentage relative to naïve (0-120%; higher is better), for pi, demv, sample-sort, lu-fact, 3d-fft, gsrb, gsrb*, spmv, and gas.]
Percentages are the number of static fences removed relative to naïve.

Joint work with Amir Kamil and Jimmy Su

Page 44:

Dynamic Fence Removal
[Chart: dynamic (executed) fences removed, as a percentage relative to naïve (0-120%; higher is better), for pi, demv, sample-sort, lu-fact, 3d-fft, gsrb, gsrb*, spmv, and gas.]
Percentages are the number of dynamic fences removed relative to naïve.

Joint work with Amir Kamil and Jimmy Su

Page 45:

Dynamic Fences: gsrb
• gsrb relies on dynamic locality checks
• A slight modification to remove the checks (gsrb*) greatly increases the precision of the analysis
[Chart: gsrb dynamic fence removal percentage (0-120%; higher is better) for gsrb vs. gsrb*.]

Joint work with Amir Kamil and Jimmy Su

Page 46:

Two Example Optimizations
• Consider two optimizations for GAS languages:
  1. Overlap bulk memory copies
  2. Communication aggregation for irregular array accesses (i.e., a[b[i]])
• Both optimizations reorder accesses, so sequential consistency can inhibit them
• Both address network performance, so the potential payoff is high

Joint work with Amir Kamil and Jimmy Su

Page 47:

Array Copies in Titanium
• Array copy operations are commonly used:

    dst.copy(src);

• Content in the domain intersection of the two arrays is copied from src to dst
• Communication (possibly with packing) is required if the arrays reside on different threads
• The processor blocks until the operation is complete

[Figure: overlapping src and dst arrays.]

Joint work with Amir Kamil and Jimmy Su

Page 48:

Non-Blocking Array Copy Optimization
• Automatically convert blocking array copies into non-blocking array copies
  • Push the sync as far down the instruction stream as possible to allow overlap with computation
  • Interprocedural: syncs can be moved across method boundaries
• The optimization reorders memory accesses, so it may be illegal under sequential consistency
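The shape of the transformation, as a runnable C sketch with hypothetical split-phase names (copy_nb and copy_sync stand in for what the compiler generates; they are not Titanium API):

    #include <string.h>
    #include <stdio.h>

    typedef struct { int started; } handle_t;   /* illustrative handle */

    static double dst[1024], src[1024];

    /* stands in for initiating a one-sided transfer */
    static handle_t copy_nb(double *d, const double *s, size_t n) {
        memcpy(d, s, n * sizeof *d);
        return (handle_t){1};
    }
    static void copy_sync(handle_t h) { (void)h; /* wait for completion */ }

    static void unrelated_computation(void) { /* independent work */ }

    int main(void) {
        /* original:  dst.copy(src);  followed by  unrelated_computation(); */
        handle_t h = copy_nb(dst, src, 1024);  /* start the transfer        */
        unrelated_computation();               /* overlapped: sync was sunk */
        copy_sync(h);                          /* first point dst is needed */
        printf("dst[0] = %g\n", dst[0]);
        return 0;
    }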

Joint work with Amir Kamil and Jimmy Su

Page 49:

Communication Aggregation on Irregular Array Accesses (Inspector/Executor)
• A loop containing indirect array accesses is split into phases:
  • The inspector examines the loop and computes the reference targets
  • The required remote data is gathered in a bulk operation
  • The executor uses the gathered data to perform the actual computation
• Can be illegal under sequential consistency

    // original loop
    for (...) {
        a[i] = remote[b[i]];
        // other accesses
    }

    // after the transformation
    schd = inspect(remote, b);
    tmp  = get(remote, schd);
    for (...) {
        a[i] = tmp[i];
        // other accesses
    }
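A self-contained C sketch of the three phases; the remote array is simulated locally here, and in a real PGAS program the gather would be one bulk get instead of N fine-grained ones:

    #include <stdio.h>

    #define N        8
    #define REMOTE_N 16

    static double remote[REMOTE_N];   /* simulated remote data */

    int main(void) {
        int b[N] = {3, 7, 3, 15, 0, 7, 9, 2};  /* irregular indices */
        double a[N], tmp[N];

        for (int i = 0; i < REMOTE_N; i++) remote[i] = 100.0 + i;

        /* inspector: the reference targets b[i] are computed up front */
        /* gather: fetch all needed elements in one bulk operation     */
        for (int i = 0; i < N; i++) tmp[i] = remote[b[i]];

        /* executor: the original loop runs on the prefetched copies   */
        for (int i = 0; i < N; i++) a[i] = tmp[i];

        for (int i = 0; i < N; i++) printf("a[%d] = %g\n", i, a[i]);
        return 0;
    }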

Joint work with Amir Kamil and Jimmy Su

Page 50:

Relaxed + SC with 3 Analyses
• We tested performance using analyses of varying levels of precision:

    Name               Description
    relaxed            Uses Titanium's relaxed memory model
    naïve              Sequential consistency; fences around every heap access
    sharing            Sequential consistency; fences around every shared heap access
    concur/taa/cycle   Sequential consistency; uses our most aggressive analysis

Joint work with Amir Kamil and Jimmy Su

Page 51:

Dense Matrix-Vector Multiply
[Chart: speedup vs. number of processors (1-16) for relaxed, naïve, sharing, and concur/taa/cycle.]
• The non-blocking array copy optimization is applied
• The strongest analysis is necessary: the other SC implementations suffer relative to relaxed

Joint work with Amir Kamil and Jimmy Su

Page 52:

Sparse Matrix-Vector Multiply
[Chart: speedup vs. number of processors (1-16) for relaxed, naïve, sharing, and concur/taa/cycle.]
• The inspector/executor optimization is applied
• The strongest analysis is again necessary and sufficient

Joint work with Amir Kamil and Jimmy Su

Page 53:

Portability of Titanium and UPC
• Titanium and the Berkeley UPC translator use a similar model
  • Source-to-source translator (generates ISO C)
  • Runtime layer implements global pointers, etc.
  • Common communication layer (GASNet); also used by gcc/upc
• Both run on most PCs, SMPs, clusters, and supercomputers
• Supported operating systems:
  • Linux, FreeBSD, Tru64, AIX, IRIX, HPUX, Solaris, Cygwin, MacOSX, Unicos, SuperUX
  • The UPC translator is somewhat less portable: we provide an HTTP-based compile server
• Supported CPUs:
  • x86, Itanium, Alpha, Sparc, PowerPC, PA-RISC, Opteron
• GASNet communication:
  • Myrinet GM, Quadrics Elan, Mellanox InfiniBand VAPI, IBM LAPI, Cray X1, SGI Altix, Cray/SGI SHMEM, and (for portability) MPI and UDP
• Specific supercomputer platforms:
  • HP AlphaServer, Cray X1, IBM SP, NEC SX-6, Cluster X (Big Mac), SGI Altix 3000
  • Underway: Cray XT3, BG/L (both run over MPI)
• Can be mixed with MPI, C/C++, and Fortran

Joint work with Titanium and UPC groups

Page 54:

Portability of PGAS Languages
Other compilers also exist for PGAS languages:
• UPC
  • GCC/UPC by Intrepid: runs on GASNet
  • HP UPC for AlphaServers, clusters, ...
  • MTU UPC uses the HP compiler on MPI (source to source)
  • Cray UPC
• Co-Array Fortran:
  • Cray CAF compiler: X1, X1E
  • Rice CAF compiler (on ARMCI or GASNet), John Mellor-Crummey
    • Source to source
    • Processors: Pentium, Itanium2, Alpha, MIPS
    • Networks: Myrinet, Quadrics, Altix, Origin, Ethernet
    • OS: Linux32 RedHat, IRIX, Tru64

NB: source-to-source requires cooperation from backend compilers

Page 55:

Summary
• PGAS languages offer a productivity advantage
  • An order of magnitude in line counts for grid-based code in Titanium
  • Push decisions about packing (or not) into the runtime for portability (an advantage of a language with a translator vs. a library approach)
  • Significant work in the compiler can make programming easier
• PGAS languages offer performance advantages
  • Good match to RDMA support in networks
  • Smaller messages may be faster:
    • Make better use of the network: postpone the bisection-bandwidth pain
    • Can also prevent cache thrashing from packing
  • Locality advantages that may help even on SMPs
• Source-to-source translation
  • The way to ubiquity
  • Complements highly tuned machine-specific compilers

Page 56:

End of Slides

Page 57:

Productizing BUPC
• Recent Berkeley UPC release
  • Supports the full 1.2 language spec
  • Supports collectives (tuning ongoing) and memory-model compliance
  • Supports UPC I/O (naïve reference implementation)
• Large effort in quality assurance and robustness
  • Test suite: 600+ tests run nightly on 20+ platform configurations
    • Tests correct compilation and execution of UPC and GASNet
    • >30,000 UPC compilations and >20,000 UPC test runs per night
    • Online reporting of results and hookup with the bug database
  • Test-suite infrastructure extended to support any UPC compiler
    • Now running nightly with GCC/UPC + UPCR
    • Also supports HP UPC, Cray UPC, ...
• Online bug-reporting database
  • Over 1100 reports since Jan '03
  • >90% fixed (excluding enhancement requests)

Page 58:

Benchmarking
• The next few UPC and MPI application benchmarks use the following systems:
  • Myrinet: Myrinet 2000 PCI64B, P4-Xeon 2.2 GHz
  • InfiniBand: IB Mellanox Cougar 4X HCA, Opteron 2.2 GHz
  • Elan3: Quadrics QsNet1, Alpha 1 GHz
  • Elan4: Quadrics QsNet2, Itanium2 1.4 GHz

Page 59:

PGAS Languages: Key to High Performance
One way to gain acceptance of a new language: make it run faster than anything else.
Keys to high performance:
• Parallelism:
  • Scaling the number of processors
• Maximize single-node performance
  • Generate friendly code or use tuned libraries (BLAS, FFTW, etc.)
• Avoid (unnecessary) communication cost
  • Latency, bandwidth, overhead
• Avoid unnecessary delays due to dependencies
  • Load balance
  • Pipeline algorithmic dependencies

Page 60:

Hardware Latency
• Network latency is not expected to improve significantly
  • Overlap communication automatically (Chen)
  • Overlap manually in the UPC applications (Husbands, Welcome, Bell, Nishtala)
  • Language support for overlap (Bonachea)

Page 61:

Effective Latency
Communication wait time also comes from other factors (a cost-model sketch follows below):
• Algorithmic dependencies
  • Use finer-grained parallelism, pipeline tasks (Husbands)
• Communication bandwidth bottleneck
  • Message time is: latency + size/bandwidth
  • Too much aggregation hurts: you wait on the bandwidth term
  • De-aggregation optimization: automatic (Iancu)
• Bisection bandwidth bottlenecks
  • Spread communication throughout the computation (Bell)
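A toy C rendering of that cost model, with assumed latency and bandwidth numbers, showing why overlapping k smaller chunks can beat one aggregated message:

    #include <stdio.h>

    /* message time model from the slide: t(n) = L + n/B */
    static double msg_time(double L, double B, double n) {
        return L + n / B;
    }

    int main(void) {
        double L = 10e-6;        /* 10 us latency (assumed)      */
        double B = 500e6;        /* 500 MB/s bandwidth (assumed) */
        double total = 1 << 20;  /* 1 MB to move                 */
        int k = 64;              /* split into 64 chunks         */

        /* one big message pays the full bandwidth term up front;
           with perfect overlap only the last chunk is exposed   */
        double aggregated = msg_time(L, B, total);
        double exposed    = msg_time(L, B, total / k);

        printf("aggregated: %.1f us, overlapped (last chunk): %.1f us\n",
               aggregated * 1e6, exposed * 1e6);
        return 0;
    }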

Page 62:

Fine-Grained UPC vs. Bulk-Synchronous MPI
• How to waste money on supercomputers:
  • Pack all communication into a single message (spend memory bandwidth)
  • Save all communication until the last piece is ready (add effective latency)
  • Send it all at once (spend bisection bandwidth)
• Or, to use what you have efficiently:
  • Avoid long wait times: send early and often
  • Use "all the wires, all the time"
  • This requires having low overhead!

Page 63:

What You Won't Hear Much About
• Compiler/runtime/GASNet bug fixes, performance tuning, testing, ...
  • >13,000 e-mail messages regarding CVS check-ins
• Nightly regression testing
  • 25 platforms, 3 compilers (head, opt-branch, gcc-upc)
• Bug reporting
  • 1177 bug reports, 1027 fixed
• Release scheduled for later this summer
  • Beta is available
  • Process significantly streamlined

Page 64:

Take-Home Messages
• Titanium offers tremendous gains in productivity
  • High-level, domain-specific array abstractions
  • Titanium is being used for real applications, not just toy problems
• Titanium and UPC are both highly portable
  • Run on essentially any machine
  • Rigorously tested and supported
• PGAS languages are faster than two-sided MPI
  • Better match to most HPC networks
• Berkeley UPC and Titanium benchmarks
  • Designed from scratch with the one-sided PGAS model
  • Focus on 2 scalability challenges: AMR and sparse LU

Page 65:

Titanium Background
• Based on Java, a cleaner C++
  • Classes, automatic memory management, etc.
  • Compiled to C and then machine code; no JVM
• Same parallelism model as UPC and CAF
  • SPMD parallelism
  • Dynamic Java threads are not supported
• Optimizing compiler
  • Analyzes global synchronization
  • Optimizes pointers, communication, and memory

Page 66:

Do these Features Yield Productivity?
[Charts: MG and CG line-count comparisons (productive lines of code split into computation, communication, and declarations) for Fortran w/ MPI vs. Titanium, plus MG Class D speedup on Opteron/IB and CG Class D speedup on G5/IB for Fortran w/ MPI and Titanium against linear speedup.]

Joint work with Kaushik Datta, Dan Bonachea

Page 67:

GASNet/X1 Performance
• GASNet/X1 improves small-message performance over shmem and MPI
• Leverages global pointers on the X1
• Highlights the advantage of the language vs. library approach
[Charts: single-word put per-message gap and single-word get latency (microseconds) vs. message size (1-2048 bytes) for shmem, GASNet, and MPI on the X1, with RMW, scalar, vector, and bcopy() regimes marked.]

Joint work with Christian Bell, Wei Chen and Dan Bonachea

Page 68:

High-Level Optimizations in Titanium
• Irregular communication can be expensive
  • The "best" strategy differs by data size/distribution and machine parameters
  • E.g., packing, sending bounding boxes, or fine-grained access
• Use of runtime optimizations
  • Inspector/executor
• Performance on sparse matrix-vector multiply
  • Results: the best strategy differs even within one machine on a single matrix (~50% better)
[Chart: Itanium/Myrinet speedup comparison over 22 matrices; average and maximum speedup of the Titanium version relative to the Aztec version on 1 to 16 processors. Speedup is relative to the MPI code (Aztec library).]

Joint work with Jimmy Su

Page 69:

Source-to-Source Strategy
• Source-to-source translation strategy
  • Tremendous portability advantage
  • Still can perform significant optimizations
• Relies on high-quality back-end compilers and some coaxing in code generation:
  • Use of "restrict" pointers in C
  • Understanding of multi-D array indexing (an Intel/Itanium issue)
  • Support for pragmas like IVDEP
  • Robust vectorizers (X1, SSE, NEC, ...)
• On machines with integrated shared-memory hardware, we need access to shared-memory operations
[Chart: Livermore Loops on a Cray X1 (single node); performance ratio of C to UPC across 24 loops, up to 48x.]

Joint work with Jimmy Su