
Towards a Science of Parallel Programming


Page 1: Towards a Science  of  Parallel Programming

1

Keshav Pingali, University of Texas, Austin

Towards a Science of Parallel Programming

Page 2: Towards a Science  of  Parallel Programming

2

Problem Statement
• Community has worked on parallel programming for more than 30 years
  – programming models
  – machine models
  – programming languages
  – …
• However, parallel programming is still a research problem
  – matrix computations, stencil computations, FFTs etc. are well-understood
  – each new application is a “new phenomenon”
    • few insights for irregular applications
• Thesis: we need a science of parallel programming
  – analysis: framework for thinking about parallelism in application
  – synthesis: produce an efficient parallel implementation of application

“The Alchemist”, Cornelius Bega (1663)

Page 3: Towards a Science  of  Parallel Programming

3

Analogy: science of electro-magnetism

Seemingly unrelated phenomena
Unifying abstractions
Specialized models that exploit structure

Page 4: Towards a Science  of  Parallel Programming

4

Organization of talk
• Seemingly unrelated parallel algorithms and data structures
  – Stencil codes
  – Delaunay mesh refinement
  – Event-driven simulation
  – Graph reduction of functional languages
  – …
• Unifying abstractions
  – Operator formulation of algorithms
  – Amorphous data-parallelism
  – Galois programming model
  – Baseline parallel implementation
• Specialized implementations that exploit structure
  – Structure of algorithms
  – Optimized compiler and runtime system support for different kinds of structure
• Ongoing work

Page 5: Towards a Science  of  Parallel Programming

5

Some parallel algorithms

Page 6: Towards a Science  of  Parallel Programming

6

Examples

Application/domain: Algorithm
• Meshing: generation / refinement / partitioning
• Compilers: iterative and elimination-based dataflow algorithms
• Functional interpreters: graph reduction, static and dynamic dataflow
• Maxflow: preflow-push, augmenting paths
• Minimal spanning trees: Prim, Kruskal, Boruvka
• Event-driven simulation: Chandy-Misra-Bryant, Jefferson Timewarp
• AI: message-passing algorithms
• Stencil computations: Jacobi, Gauss-Seidel, red-black ordering
• Sparse linear solvers: sparse MVM, sparse Cholesky factorization

Page 7: Towards a Science  of  Parallel Programming

7

Stencil computation: Jacobi iteration
• Finite-difference method for solving PDEs
  – discrete representation of domain: grid
• Values at interior points are updated using values at neighbors
  – values at boundary points are fixed
• Data structure:
  – dense arrays
• Parallelism:
  – values at all interior points can be computed simultaneously
  – parallelism is not dependent on input values
• Compiler can find the parallelism
  – spatial loops are DO-ALL loops

  // Jacobi iteration with 5-point stencil
  // initialize array A
  for time = 1, nsteps
    for <i,j> in [2,n-1] x [2,n-1]
      temp(i,j) = 0.25*(A(i-1,j)+A(i+1,j)+A(i,j-1)+A(i,j+1))
    for <i,j> in [2,n-1] x [2,n-1]
      A(i,j) = temp(i,j)

5-point stencil: point (i,j) and its neighbors (i-1,j), (i+1,j), (i,j-1), (i,j+1)
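As a concrete illustration, here is a minimal Java sketch of the same computation; the array layout (an (n+2)×(n+2) grid with a fixed boundary ring) is an assumption for the example, not something specified on the slide.

  // Minimal Jacobi sketch: (n+2)x(n+2) grid, boundary rows/columns of A stay fixed.
  class JacobiSketch {
      static void jacobi(double[][] A, int nsteps) {
          int n = A.length - 2;                         // number of interior points per dimension
          double[][] temp = new double[n + 2][n + 2];
          for (int t = 0; t < nsteps; t++) {
              for (int i = 1; i <= n; i++)
                  for (int j = 1; j <= n; j++)          // every interior point is independent: a DO-ALL loop
                      temp[i][j] = 0.25 * (A[i-1][j] + A[i+1][j] + A[i][j-1] + A[i][j+1]);
              for (int i = 1; i <= n; i++)
                  for (int j = 1; j <= n; j++)
                      A[i][j] = temp[i][j];
          }
      }
  }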

Page 8: Towards a Science  of  Parallel Programming

8

Delaunay Mesh Refinement
• Iterative refinement to remove badly shaped triangles:

  while there are bad triangles do {
    pick a bad triangle;
    find its cavity;
    retriangulate cavity;   // may create new bad triangles
  }

• Don’t-care non-determinism:
  – final mesh depends on order in which bad triangles are processed
  – applications do not care which mesh is produced
• Data structure:
  – graph in which nodes represent triangles and edges represent triangle adjacencies
• Parallelism:
  – bad triangles with cavities that do not overlap can be processed in parallel
  – parallelism is very “input-dependent”
    • compilers cannot determine this parallelism
  – (Miller et al.) at runtime, repeatedly build interference graph and find maximal independent sets for parallel execution

  Mesh m = /* read in mesh */
  WorkList wl;
  wl.add(m.badTriangles());
  while (!wl.empty()) {
    Element e = wl.get();
    if (e no longer in mesh) continue;
    Cavity c = new Cavity(e);   // determine new cavity
    c.expand();
    c.retriangulate();          // re-triangulate region
    m.update(c);                // update mesh
    wl.add(c.badTriangles());
  }

Page 9: Towards a Science  of  Parallel Programming

9

Event-driven simulation
• Stations communicate by sending messages with time-stamps on FIFO channels
• Stations have internal state that is updated when a message is processed
• Messages must be processed in time-order at each station
• Data structure:
  – messages in event-queue, sorted in time-order
• Parallelism:
  – conservative: Chandy-Misra-Bryant
    • station fires when it has messages on all incoming edges and processes earliest message
    • requires null messages to avoid deadlock
  – optimistic: Jefferson time-warp
    • station can fire when it has an incoming message on any edge
    • requires roll-back if speculative conflict is detected

[Figure: two stations A and B with time-stamped messages in their input channels]
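For concreteness, here is a sequential sketch of an event-driven simulation loop, the baseline that both schemes parallelize; the Station and Message types below are placeholders for this example, not taken from the slides.

  import java.util.*;

  // Sequential event-driven simulation: process messages globally in time-stamp order.
  // The parallel schemes above relax this global loop while preserving per-station time order.
  class Sim {
      record Message(int time, int dest) {}
      interface Station { List<Message> process(Message m); }   // updates internal state, may emit messages

      static void run(List<Station> stations, List<Message> initial) {
          PriorityQueue<Message> events = new PriorityQueue<>(Comparator.comparingInt(Message::time));
          events.addAll(initial);
          while (!events.isEmpty()) {
              Message m = events.poll();                          // earliest pending message
              events.addAll(stations.get(m.dest()).process(m));   // deliver it and enqueue any new messages
          }
      }
  }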

Page 10: Towards a Science  of  Parallel Programming

10

Remarks on algorithms

• Diverse algorithms and data structures
• Exploiting parallelism in irregular algorithms is very complex
  – Miller et al. DMR implementation: interference graph + maximal independent sets
  – Jefferson Timewarp algorithm for event-driven simulation
• Algorithms:
  – parallelism can be very input-dependent
    • DMR, event-driven simulation, graph reduction, …
  – don’t-care non-determinism
    • has nothing to do with concurrency
    • DMR, graph reduction
  – activities created dynamically may interfere with existing activities
    • event-driven simulation, …
• Data structures:
  – relatively few algorithms use dense arrays
  – more common: graphs, trees, lists, priority queues, …

Page 11: Towards a Science  of  Parallel Programming

11

Organization of talk
• Seemingly unrelated parallel algorithms and data structures
  – Stencil codes
  – Delaunay mesh refinement
  – Event-driven simulation
  – Graph reduction of functional languages
  – …
• Unifying abstractions
  – Amorphous data-parallelism
  – Baseline parallel implementation for exploiting amorphous data-parallelism
• Specialized implementations that exploit structure
  – Structure of algorithms
  – Optimized compiler and runtime system support for different kinds of structure
• Ongoing work

Page 12: Towards a Science  of  Parallel Programming

12

Requirements

• Provide a model of parallelism in irregular algorithms
• Unified treatment of parallelism in regular and irregular algorithms
  – parallelism in regular algorithms must emerge as a special case of general model
  – (cf.) correspondence principles in Physics
• Abstractions should be effective
  – should be possible to write an interpreter to execute algorithms in parallel

Page 13: Towards a Science  of  Parallel Programming

13

Traditional abstraction
• Computation graph
  – nodes are computations
  – edges are dependences
• Parallelism
  – width of the computation graph
• Effective parallel computation graph model
  – dataflow model of Dennis, Arvind et al.
• Inadequate for irregular applications
  – dependences between computations are a function of input data
  – don’t-care non-determinism
  – conflicting work may be created dynamically
  – …
• Data structures play almost no role in this abstraction
  – in most programs, parallelism comes from data-parallelism (concurrent operations on data structure elements)
• New abstraction
  – data-centric: data structures play a central role
  – we will use graph ADT to illustrate concepts

Page 14: Towards a Science  of  Parallel Programming

14


Operator formulation of algorithms
• Algorithm = repeated application of operator to graph
  – active element:
    • node or edge where operator is applied
      – Jacobi: interior nodes of mesh
      – DMR: nodes representing bad triangles
      – Event-driven simulation: station with incoming message
  – neighborhood:
    • set of nodes and edges read/written to perform computation
      – Jacobi: nodes in stencil
      – DMR: cavity of bad triangle
      – Event-driven simulation: station
    • usually distinct from neighbors in graph
  – ordering:
    • order in which active elements must be executed in a sequential implementation
      – any order (Jacobi, DMR, graph reduction)
      – some problem-dependent order (event-driven simulation)

[Figure: graph with active nodes i1–i5; legend: active node, neighborhood]
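One way to make the operator formulation concrete is as an interface; the sketch below is purely illustrative (the names Operator, neighborhood, and apply are assumptions for this example, not the Galois API).

  import java.util.Collection;
  import java.util.Set;

  // Illustrative rendering of the operator formulation: an algorithm is a worklist of
  // active elements plus an operator that is applied to each one.
  interface Operator<G, E> {
      Set<E> neighborhood(G graph, E active);   // graph elements read/written by one application
      Collection<E> apply(G graph, E active);   // perform the update; return newly activated elements
  }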

Page 15: Towards a Science  of  Parallel Programming

15

Parallelism
• Amorphous data-parallelism:
  – parallelism in processing active nodes subject to
    • neighborhood constraints
    • ordering constraints
• Computations at two active elements are independent if
  – neighborhoods do not overlap
  – more generally, neither of them writes to an element in the intersection of the neighborhoods
• Unordered active elements
  – in principle, independent active elements can be processed in parallel
  – how do we find independent active elements?
• Ordered active elements
  – independence is not enough since elements can become active dynamically (see example)
  – how do we determine what is safe to execute in parallel?
• How do we make this model effective?

[Figures: graph with active nodes i1–i5; event-driven simulation example with stations A, B, C and time-stamped messages]
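A sketch of the independence test stated above, assuming each activity exposes hypothetical read and write sets over graph elements (these sets are an illustration device, not a slide detail).

  import java.util.Collections;
  import java.util.Set;

  // Two activities are independent if neither writes an element that the other reads or writes.
  class IndependenceSketch {
      static <E> boolean independent(Set<E> reads1, Set<E> writes1, Set<E> reads2, Set<E> writes2) {
          return Collections.disjoint(writes1, writes2)
              && Collections.disjoint(writes1, reads2)
              && Collections.disjoint(writes2, reads1);
      }
  }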

Page 16: Towards a Science  of  Parallel Programming

16

Galois programming model (PLDI 2007)
• Program written in terms of abstractions in model
• Programming model: sequential, OO
• Graph class: provided by Galois library
  – specialized versions to exploit structure (see later)
• Galois set iterators: for iterating over unordered and ordered sets of active elements
  – for each e in Set S do B(e)
    • evaluate B(e) for each element in set S
    • no a priori order on iterations
    • set S may get new elements during execution
  – for each e in OrderedSet S do B(e)
    • evaluate B(e) for each element in set S
    • perform iterations in order specified by OrderedSet
    • set S may get new elements during execution

DMR using Galois iterators:

  Mesh m = /* read in mesh */
  Set ws;
  ws.add(m.badTriangles());            // initialize ws
  for each tr in Set ws do {           // unordered Set iterator
    if (tr no longer in mesh) continue;
    Cavity c = new Cavity(tr);
    c.expand();
    c.retriangulate();
    m.update(c);
    ws.add(c.badTriangles());          // bad triangles
  }
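Sequentially, the unordered set iterator behaves like a worklist that may grow while it is being drained. A minimal sketch of that semantics follows; passing the loop body as a function is an illustration choice, not the Galois syntax.

  import java.util.ArrayDeque;
  import java.util.Collection;
  import java.util.Deque;
  import java.util.function.Function;

  // Sequential semantics of "for each e in Set S do B(e)":
  // elements are processed in no particular order, and B(e) may add new elements to S.
  class UnorderedIteratorSketch {
      static <E> void foreachUnordered(Collection<E> initial, Function<E, Collection<E>> body) {
          Deque<E> worklist = new ArrayDeque<>(initial);
          while (!worklist.isEmpty()) {
              E e = worklist.poll();
              worklist.addAll(body.apply(e));   // newly created active elements join the set
          }
      }
  }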

Page 17: Towards a Science  of  Parallel Programming

17

Galois parallel execution model

[Figure: master thread running the main program; worker threads execute "for each" iterations over shared memory]

• Parallel execution model:
  – shared-memory
  – optimistic execution of Galois iterators
• Implementation:
  – master thread begins execution of program
  – when it encounters iterator, worker threads help by executing iterations concurrently
  – barrier synchronization at end of iterator
• Independence of neighborhoods:
  – software TM variety
  – logical locks on nodes and edges
• Ordering constraints for ordered set iterator:
  – execute iterations out of order but commit in order
  – cf. out-of-order CPUs


Page 18: Towards a Science  of  Parallel Programming

18

Parameter tool (PPoPP 2009)
• Idealized execution model:
  – unbounded number of processors
  – applying operator at an active node takes one time step
  – execute a maximal set of active nodes, subject to neighborhood and ordering constraints
• Measures amorphous data-parallelism in irregular program execution
• Useful as an analysis tool
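A simplified sketch of this idealized measurement for unordered algorithms (illustrative only, not the actual tool): at each time step, greedily pick a maximal set of activities with disjoint neighborhoods, "execute" them, and record how many ran.

  import java.util.*;
  import java.util.function.Function;

  // Idealized parallelism profile for an unordered algorithm.
  class ParallelismProfileSketch {
      static <E> List<Integer> parallelismProfile(Collection<E> initial,
                                                  Function<E, Set<Object>> neighborhood,
                                                  Function<E, Collection<E>> execute) {
          List<Integer> profile = new ArrayList<>();
          Set<E> active = new LinkedHashSet<>(initial);
          while (!active.isEmpty()) {
              Set<Object> touched = new HashSet<>();
              List<E> selected = new ArrayList<>();
              for (E e : active) {                      // greedily choose non-conflicting activities
                  Set<Object> nbh = neighborhood.apply(e);
                  if (Collections.disjoint(touched, nbh)) { touched.addAll(nbh); selected.add(e); }
              }
              profile.add(selected.size());             // available parallelism at this step
              active.removeAll(selected);
              for (E e : selected) active.addAll(execute.apply(e));   // execution may create new activities
          }
          return profile;
      }
  }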

Page 19: Towards a Science  of  Parallel Programming

19

Example: DMR
• Input mesh:
  – produced by Triangle (Shewchuk)
  – 550K triangles
  – roughly half are badly shaped
• Available parallelism:
  – how many non-conflicting triangles can be expanded at each time step?
• Parallelism intensity:
  – what fraction of the total number of bad triangles can be expanded at each step?

Page 20: Towards a Science  of  Parallel Programming

20

Examples
• Boruvka MST algorithm
  – builds MST bottom-up
  – unordered active elements
• Agglomerative clustering (AC)
  – data-mining algorithm
  – ordered active elements
• Similarity in parallelism profiles arises from similarity in algorithmic structure

[Parallelism profiles: AC on 20K random points in 2D; Boruvka on a 10K-node graph, average degree 5]

Page 21: Towards a Science  of  Parallel Programming

21

Summary
• Old abstraction: computation graphs
• New abstraction: operator formulation of algorithms
  – active elements
  – neighborhoods
  – ordering of active elements
• Amorphous data-parallelism
  – generalizes conventional data-parallelism
• Baseline execution model
  – Galois programming model
    • sequential, OO
    • uses new abstractions
  – optimistic parallel execution
• Parameter tool
  – provides estimates of amorphous data-parallelism in programs written using Galois programming model


Page 22: Towards a Science  of  Parallel Programming

22

Organization of talk
• Seemingly unrelated parallel algorithms and data structures
  – Stencil codes
  – Delaunay mesh refinement
  – Event-driven simulation
  – Graph reduction of functional languages
  – …
• Unifying abstractions
  – Operator formulation of algorithms
  – Amorphous data-parallelism
  – Galois programming model
  – Baseline parallel implementation
• Specialized implementations that exploit structure
  – Structure of algorithms
  – Optimized compiler and runtime system support for different kinds of structure
• Ongoing work

Page 23: Towards a Science  of  Parallel Programming

23

Key idea

• Baseline implementation is general but usually inefficient
  – (e.g.) dynamic scheduling of iterations is not needed for Jacobi since grid structure is known at compile-time
  – (e.g.) hand-written parallel implementations of DMR and Jacobi do not buffer updates to neighborhood until commit point
• Efficient execution requires exploiting structure in algorithms and data structures
• How do we talk about structure in algorithms?
  – Previous approaches: like descriptive biology
    • Mattson et al. book
    • Parallel programming patterns (PPP): Snir et al.
    • Berkeley dwarfs
    • …
  – Our approach: like molecular biology
    • based on amorphous data-parallelism framework

Page 24: Towards a Science  of  Parallel Programming

24

Algorithm abstractions

Iterative algorithms are classified along three dimensions:
• topology: general graph, grid, tree
• operator:
  – morph: modifies structure of graph
  – local computation: only updates values on nodes/edges
  – reader: does not modify graph in any way
• ordering: unordered, ordered

Jacobi: topology: grid, operator: local computation, ordering: unordered
DMR, graph reduction: topology: graph, operator: morph, ordering: unordered
Event-driven simulation: topology: graph, operator: local computation, ordering: ordered
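The classification can be written down directly in code; the type names below are illustrative only, not part of the Galois system.

  // Illustrative encoding of the classification above.
  enum Topology { GENERAL_GRAPH, GRID, TREE }
  enum OperatorKind { MORPH, LOCAL_COMPUTATION, READER }
  enum Ordering { UNORDERED, ORDERED }

  record AlgorithmStructure(Topology topology, OperatorKind operator, Ordering ordering) {}

  // Examples from the slide:
  //   Jacobi                  -> (GRID, LOCAL_COMPUTATION, UNORDERED)
  //   DMR, graph reduction    -> (GENERAL_GRAPH, MORPH, UNORDERED)
  //   Event-driven simulation -> (GENERAL_GRAPH, LOCAL_COMPUTATION, ORDERED)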

Page 25: Towards a Science  of  Parallel Programming

25

Morphs

A morph operator modifies the structure of the graph. Kinds of morphs:
• coarsening:
  – node elimination: sparse Cholesky factorization
  – edge contraction: Metis, Kruskal MST, Boruvka MST, AC
  – sub-graph elimination: elimination-based dataflow analysis
• refinement: DMR, Prim MST, Barnes-Hut tree building
• general: graph reduction
• …

[Figures: node elimination and edge contraction on a small example graph]

Page 26: Towards a Science  of  Parallel Programming

26

Reducing Overheads of Optimistic Parallel Execution

Page 27: Towards a Science  of  Parallel Programming

27

Graph partitioning (ASPLOS 2008)
• Algorithm structure:
  – general graph/grid + unordered active elements
• Optimization I:
  – partition the graph/grid and work-set between cores
  – data-centric work assignment: core gets active elements from its own partition
• Pros and cons:
  – eliminates contention for worklist
  – improves locality and can dramatically reduce conflicts
  – dynamic load-balancing may be needed
• Optimization II:
  – lock coarsening: associate logical locks with partitions, not graph elements
  – reduces overhead of lock management
• Over-decomposition may improve core utilization

[Figure: graph partitioned into per-core regions]
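A sketch of data-centric work assignment under an assumed partition mapping (not the ASPLOS 2008 implementation): the active elements are split into one worklist per partition, and each core drains only its own list.

  import java.util.*;
  import java.util.function.ToIntFunction;

  // Split the active elements into one worklist per partition; core p processes list p,
  // eliminating contention on a single shared worklist and improving locality.
  class PartitionedWorklistSketch {
      static <E> List<Queue<E>> distributeWork(Collection<E> active, int numPartitions,
                                               ToIntFunction<E> partitionOf) {
          List<Queue<E>> perPartition = new ArrayList<>();
          for (int p = 0; p < numPartitions; p++) perPartition.add(new ArrayDeque<>());
          for (E e : active) perPartition.get(partitionOf.applyAsInt(e)).add(e);
          return perPartition;
      }
  }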

Page 28: Towards a Science  of  Parallel Programming

28

Zero-copy implementation
• Cautious operator:
  – reads all the elements in its neighborhood before modifying any of them
  – (e.g.) Delaunay mesh refinement
• Algorithm structure:
  – cautious operator + unordered active elements
• Optimization: optimistic execution w/o buffering updates
  – grab locks on elements during read phase
    • conflict: someone else has lock, so release your locks
  – once update phase begins, no new locks will be acquired
    • update in-place w/o making copies
    • note: this is not two-phase locking
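A simplified sketch of this scheme for a cautious operator (the Element class and update callback are placeholders, not the Galois runtime): lock the whole neighborhood during the read phase; if any lock is unavailable, release everything and retry the iteration later; otherwise update in place with no copies.

  import java.util.ArrayList;
  import java.util.List;
  import java.util.concurrent.locks.ReentrantLock;

  class ZeroCopySketch {
      static class Element { final ReentrantLock lock = new ReentrantLock(); }

      // Returns false on conflict so the iteration can be retried; no updates are buffered.
      static boolean tryIteration(List<Element> neighborhood, Runnable updateInPlace) {
          List<Element> held = new ArrayList<>();
          for (Element x : neighborhood) {              // read phase: acquire logical locks
              if (!x.lock.tryLock()) {
                  held.forEach(e -> e.lock.unlock());   // conflict: release everything acquired so far
                  return false;
              }
              held.add(x);
          }
          try {
              updateInPlace.run();                      // update phase: no new locks, no copies
              return true;
          } finally {
              held.forEach(e -> e.lock.unlock());
          }
      }
  }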

Page 29: Towards a Science  of  Parallel Programming

29

Delaunay mesh refinement
• Algorithm structure:
  – general graph/grid + cautious operator + unordered active elements
• Optimizations:
  – partitioning + lock-coarsening + zero-buffering
  – very efficient implementations possible
• Maverick@TACC
  – 128-core Sun Fire E25K, 1.05 GHz
  – 64 dual-core processors
  – Sun Solaris
• Speed-up of 20 on 32 cores for refinement
• Mesh partitioning is still sequential
  – time for mesh partitioning starts to dominate after 8 processors (32 partitions)
  – need parallel mesh partitioning

Page 30: Towards a Science  of  Parallel Programming

30

Survey propagation on Maverick
• SP is a heuristic for solving difficult SAT problems
• SP: general graph + cautious operator + unordered elements
• Implementation:
  – partitioning
  – lock coarsening
  – zero-buffering

[Speed-up plot: survey propagation on Maverick, roughly 1500 clauses, 250-500 variables]

Page 31: Towards a Science  of  Parallel Programming

31

Eliminating the Need for Optimistic Parallel Execution

Page 32: Towards a Science  of  Parallel Programming

32

Scheduling

• Baseline implementation
  – autonomous scheduling: no coordination between execution of different active elements
• Global coordination possible for some algorithms
  – Run-time scheduling: cautious operator + unordered active elements
    • execute all activities partially to determine neighborhoods
    • create interference graph and find independent set of activities
    • execute independent set of activities in parallel w/o synchronization
    • used in Gary Miller’s implementation of DMR
  – Just-in-time scheduling: local computation + structure-driven + cautious, unordered (e.g.) sparse MVM
    • inspector-executor approach
  – Compile-time scheduling: previous case + graph is known at compile-time (e.g.) Jacobi
    • make all scheduling decisions at compile-time
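A sketch of one run-time scheduling round for cautious, unordered operators (illustrative, not Miller et al.'s code): after partially executing activities to obtain their neighborhoods, pick a greedy maximal set of mutually non-interfering activities, which can then run in parallel without synchronization.

  import java.util.*;

  // Greedy maximal independent set over activities whose neighborhoods overlap.
  class RuntimeSchedulingSketch {
      static <E> List<E> scheduleRound(List<E> activities, Map<E, Set<Object>> neighborhoods) {
          List<E> chosen = new ArrayList<>();
          Set<Object> claimed = new HashSet<>();
          for (E a : activities) {
              Set<Object> nbh = neighborhoods.get(a);
              if (Collections.disjoint(claimed, nbh)) {   // does not interfere with anything chosen so far
                  claimed.addAll(nbh);
                  chosen.add(a);
              }
          }
          return chosen;   // these run in parallel w/o locks; the rest wait for a later round
      }
  }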

Page 33: Towards a Science  of  Parallel Programming

33

Ongoing work

• Algorithm studies:
  – divide-and-conquer algorithms
  – transforming ordered algorithms into unordered algorithms
  – intra-operator parallelism
    • important for some algorithms on dense graphs
  – locality
• Language/programming model:
  – incorporating scheduling information into Galois
    • program refinements?
• Compiler analysis
  – analyze and optimize code for operators
• Runtime system
  – adaptive control system for managing threads
• Application studies
  – case studies of hand-optimized codes
    • understand hand optimizations
    • figure out how to incorporate them into system
  – Lonestar benchmark suite for irregular programs
    • joint work with Calin Cascaval’s group at IBM Yorktown Heights


Page 34: Towards a Science  of  Parallel Programming

34

Acknowledgements (in alphabetical order)

• Kavita Bala (Cornell)
• Martin Burtscher (UT Austin)
• Patrick Carribault (UT Austin)
• Calin Cascaval (IBM)
• Paul Chew (Cornell)
• Amber Hassaan (UT Austin)
• Tony Ingraffea (Cornell)
• Milind Kulkarni (UT Austin)
• Mario Mendez (UT Austin)
• Rajasekhar Inkulu (UT Austin)
• Donald Nguyen (UT Austin)
• Dimitrios Prountzos (UT Austin)
• Ganesh Ramanarayanan (Microsoft)
• Xin Sui (UT Austin)
• Bruce Walter (Cornell)
• Zifei Zhong (UT Austin)

Page 35: Towards a Science  of  Parallel Programming

35

Science of Parallel Programming

Seemingly unrelated algorithms
Unifying abstractions
Specialized models that exploit structure