Avoiding Communication in Linear Algebra
Grey Ballard
Microsoft Research Faculty Summit, July 15, 2014

Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000. (SAND2014-5649)


Page 1

Avoiding Communication in Linear Algebra

Grey Ballard

Microsoft Research Faculty Summit

July 15, 2014

Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.

(SAND2014-5649)

Page 2

Summary

Communication is Expensive
in terms of time and energy

Avoiding Communication
some communication is necessary: we can prove lower bounds
theoretical analysis identifies suboptimal algorithms and spurs algorithmic innovation
minimizing communication leads to speedups in practice

New Algorithms
(sometimes) require careful implementation to navigate tradeoffs
(sometimes) require numerical analysis to ensure correctness

Grey Ballard 1

Page 3

Can we improve dense matrix multiplication?

Here’s a strong-scaling plot, for fixed matrix dimension: n = 94,080

[Plot: Effective GFLOPS per node, computed as 2n³/(P · time), at P = 49, 343, and 2401, benchmarked on a Cray XT4; y-axis 0 to 50; a horizontal line marks machine peak (for classical algorithms). Series shown: ScaLAPACK (classical).]

Grey Ballard 2

Page 4

Can we improve dense matrix multiplication?

[Same strong-scaling plot as Page 3, adding a series: New Classical]

Grey Ballard 2

Page 5

Can we improve dense matrix multiplication?

[Same strong-scaling plot as Page 3, adding series: New Classical, (Old) Strassen]

Grey Ballard 2

Page 6

Can we improve dense matrix multiplication?

[Same strong-scaling plot as Page 3, adding series: New Classical, (Old) Strassen, New Strassen]

Grey Ballard 2

Page 7

We must consider communication

By communication, I mean
moving data within the memory hierarchy on a sequential computer
moving data between processors on a parallel computer

For high-level analysis, we’ll use these simple memory models:

[Diagram: sequential model with FAST and SLOW memory; parallel model with several processors, each with its own Local memory, connected by a network]

Grey Ballard 3

Page 8

Runtime Model

Measure computation in terms of # flops performed

Time per flop: γ

Measure communication in terms of # words communicated

Time per word: β

Total running time of an algorithm (ignoring overlap):

γ · (# flops) + β · (# words)

β ≫ γ as measured in time and energy, and the relative cost of communication is increasing

Grey Ballard 4
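The model above can be sketched in a few lines. The constants γ and β below are illustrative assumptions, not measured values from the talk:

```python
# A minimal sketch of the runtime model: time = gamma*(# flops) + beta*(# words).
# gamma and beta here are assumed, illustrative constants (seconds per flop/word).
def runtime(flops, words, gamma=1e-11, beta=1e-9):
    """Model running time, ignoring overlap of computation and communication."""
    return gamma * flops + beta * words

# With beta >> gamma, an algorithm that moves fewer words can win even when it
# performs exactly the same number of flops:
t_more_comm = runtime(flops=1e12, words=1e9)
t_less_comm = runtime(flops=1e12, words=1e8)   # 10x fewer words moved
assert t_less_comm < t_more_comm
```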

Page 9

Why avoid communication

Annual Improvements in Time
  Flop rate (γ):          59% per year
  DRAM bandwidth (β):     23% per year
  Network bandwidth (β):  26% per year

[Figure: energy cost comparisons]

Grey Ballard 5

Source: John Shalf

Page 10

Costs of matrix multiplication algorithms

n = matrix dimension
P = number of processors
M = size of the local memory

                             Computation   Communication
“2D” Algorithm (ScaLAPACK)   O(n³/P)       O(n²/√P)
“2.5D” Algorithm             O(n³/P)       O(n³/(P√M))
Lower Bound                  Ω(n³/P)       Ω(n³/(P√M))

2D algorithm is suboptimal if M ≫ n²/P (extra memory available)

Takeaway: tradeoff extra memory for reduced communication

Grey Ballard 6

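The table above can be checked numerically. This hedged sketch ignores the constant factors hidden by the big-O notation:

```python
# Comparing per-processor communication volumes for 2D vs. 2.5D matrix
# multiplication, with big-O constants dropped (illustrative only).
from math import sqrt

def words_2d(n, P):
    return n**2 / sqrt(P)          # O(n^2 / sqrt(P))

def words_25d(n, P, M):
    return n**3 / (P * sqrt(M))    # O(n^3 / (P sqrt(M)))

n, P = 10_000, 100
M_min = n**2 / P                   # minimal memory: one copy of the data
# With minimal memory, the two expressions coincide...
assert abs(words_2d(n, P) - words_25d(n, P, M_min)) < 1e-6 * words_2d(n, P)
# ...but with c extra copies (M = c*n^2/P), 2.5D moves sqrt(c) times fewer words:
c = 4
assert abs(sqrt(c) * words_25d(n, P, c * M_min) - words_2d(n, P)) < 1e-6 * words_2d(n, P)
```

This is exactly the memory-for-communication tradeoff stated in the takeaway.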


Page 12

Performance improvement in practice

[Bar chart: Matrix Multiplication on 16,384 nodes of BG/P. Execution time normalized by 2D, for n = 8192 and n = 131072, comparing 2D vs. 2.5D; each bar is broken down into computation, idle, and communication time.]

Grey Ballard 7

Page 13

Lower bounds for classical matrix multiplication

Assume a Θ(n³) algorithm.

Sequential case, with fast memory of size M:
lower bound on words moved between fast and slow memory:

  Ω(n³/√M)   [Hong & Kung 81]

attained by the blocked algorithm

Parallel case, with P processors (local memory of size M):
lower bound on words communicated (along the critical path):

  Ω(n³/(P√M))   [Toledo et al. 04]

also attainable

Grey Ballard 8

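As a concrete illustration, here is a minimal sketch of the blocked algorithm that attains the sequential bound; in a real implementation the block size b is chosen so that three b×b blocks fit in fast memory (b ≈ √(M/3)):

```python
# Blocked (tiled) classical matrix multiplication. Each block product reads
# O(b^2) words and performs O(b^3) flops, giving ~2n^3/sqrt(M) words moved
# overall when b ~ sqrt(M/3).
import numpy as np

def blocked_matmul(A, B, b):
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                # accumulate one block product into the C block
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C

rng = np.random.default_rng(0)
A, B = rng.random((8, 8)), rng.random((8, 8))
assert np.allclose(blocked_matmul(A, B, 4), A @ B)
```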

Page 14

Extensions to the rest of linear algebra

Grey Ballard 9

Theorem (Ballard, Demmel, Holtz, Schwartz 11)
If a computation “smells” like 3 nested loops, it must communicate

  # words = Ω(# flops / √(memory size))

This result applies to
dense or sparse problems
sequential or parallel computers

This work was recognized with the SIAM Linear Algebra Prize, given to the best paper from the years 2009-2011

Page 15

Extensions to the rest of linear algebra

Grey Ballard 9

Theorem (Ballard, Demmel, Holtz, Schwartz 11)
If a computation “smells” like 3 nested loops, it must communicate

  # words = Ω(# flops / √(memory size))

What smells like 3 nested loops?

the rest of BLAS 3 (e.g. matrix multiplication, triangular solve)
Cholesky, LU, LDLᵀ, LTLᵀ decompositions
QR decomposition
eigenvalue and SVD reductions
sequences of algorithms (e.g. repeated matrix squaring)
graph algorithms (e.g. all pairs shortest paths)

This work was recognized with the SIAM Linear Algebra Prize, given to the best paper from the years 2009-2011

Page 16

Optimal Algorithms - Sequential O(n³) Linear Algebra

Computation            Optimal Algorithm
BLAS 3                 blocked algorithms [Gustavson 97]
Cholesky               LAPACK, [Ahmed & Pingali 00], [BDHS10]
Symmetric Indefinite   LAPACK (rarely), [BDD+12a]
LU                     LAPACK (rarely), [Toledo 97]∗, [Grigori et al. 11]
QR                     LAPACK (rarely), [Frens & Wise 03], [Elmroth & Gustavson 98]∗, [Hoemmen et al. 12]∗
Eig, SVD               [BDK12a], [BDD12b]

Grey Ballard 10

[Diagram: sequential FAST/SLOW memory model]

Parallel Case

Page 17

Example: Symmetric Indefinite Linear Solve

Suppose we want to solve Ax = b where A
is symmetric (save half the storage and flops)
but indefinite (need to permute rows/cols for numerical stability)

We generally want to compute a factorization

  PAPᵀ = LTLᵀ

where P is a permutation, L is triangular, and T is symmetric and “simpler”

Grey Ballard 11

Page 18

Symmetric Indefinite Factorization

We’re solving Ax = b where A = Aᵀ but A is indefinite

Standard approach is to compute PAPᵀ = LDLᵀ
L is lower triangular and D is block diagonal (1×1 and 2×2 blocks)
requires complicated pivoting; harder to do tournament pivoting

Alternative approach is to compute PAPᵀ = LTLᵀ [Aas71]
L is lower triangular and T is tridiagonal
pivoting is more like LU (the nonsymmetric case)

[Diagram: A = L · T · Lᵀ, computed via A = L · H with H = T · Lᵀ]

Grey Ballard 12

Page 19

Reducing communication improves performance

Performance of symmetric indefinite linear system solvers on a 48-core AMD Opteron node

[Plot: Performance (GFLOPS), y-axis 0 to 200, vs. matrix dimension, 0 to 50,000; the New Algorithm outperforms LAPACK]

Implemented within the PLASMA library [BBD+13]

This work received a Best Paper Award at IPDPS ’13

Grey Ballard 13

Page 20

Example Application: Video Background Subtraction

Idea: use the Robust PCA algorithm [Candes et al. 09] to subtract the constant background from the action of a surveillance video

Given a matrix M whose columns represent frames, compute

  M = L + S

where L is low-rank and S is sparse

Grey Ballard 14

Page 21

Example Application: Video Background Subtraction

[Diagram: SVD computed via an initial QR factorization, R = U · Σ · Vᵀ; threshold these singular values]

Compute:

  M = L + S

where L is low-rank and S is sparse

The algorithm works iteratively; each iteration requires a singular value decomposition (SVD)

M is 110,000 × 100

Communication-avoiding algorithm provided a 3× speedup over the best GPU implementation [ABDK11]

Grey Ballard 15
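Each iteration's SVD is used for singular value soft-thresholding. The following is a hedged sketch of that one kernel only (not the full iterative algorithm of [Candes et al. 09]), with an arbitrary threshold tau:

```python
# Singular value soft-thresholding: the SVD-based kernel inside each
# Robust PCA iteration. tau is an illustrative threshold, not a value
# prescribed by the algorithm referenced above.
import numpy as np

def svd_soft_threshold(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)        # shrink singular values toward zero
    return (U * s) @ Vt                  # low-rank estimate L

rng = np.random.default_rng(2)
# a tall-skinny rank-5 matrix, shaped like a stack of video frames
M = rng.random((50, 5)) @ rng.random((5, 20))
L = svd_soft_threshold(M, tau=1e-8)
assert np.linalg.matrix_rank(L) <= 5
assert np.allclose(L, M, atol=1e-5)      # tiny tau barely perturbs M
```

For very tall-skinny matrices like the 110,000 × 100 case above, the SVD itself is the communication bottleneck, which is where the communication-avoiding algorithm helps.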

Page 22

Let’s go back to matrix multiplication

Can we do better than the “2.5D” algorithm?

Given the computation involved, it minimized communication. . .

. . . but what if we change the computation?

It’s possible to reduce both computation and communication

Grey Ballard 16


Page 24

Strassen’s Algorithm

Strassen showed how to use 7 multiplies instead of 8 for 2×2 multiplication:

  [C11 C12]   [A11 A12] [B11 B12]
  [C21 C22] = [A21 A22] [B21 B22]

Classical Algorithm

M1 = A11 · B11

M2 = A12 · B21

M3 = A11 · B12

M4 = A12 · B22

M5 = A21 · B11

M6 = A22 · B21

M7 = A21 · B12

M8 = A22 · B22

C11 = M1 + M2

C12 = M3 + M4

C21 = M5 + M6

C22 = M7 + M8

Strassen’s Algorithm

M1 = (A11 + A22) · (B11 + B22)

M2 = (A21 + A22) · B11

M3 = A11 · (B12 − B22)

M4 = A22 · (B21 − B11)

M5 = (A11 + A12) · B22

M6 = (A21 − A11) · (B11 + B12)

M7 = (A12 − A22) · (B21 + B22)

C11 = M1 + M4 − M5 + M7

C12 = M3 + M5

C21 = M2 + M4

C22 = M1 − M2 + M3 + M6

Grey Ballard 17
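The seven-multiply recursion listed above translates directly into code. This minimal sketch assumes n is a power of two; real implementations switch to classical multiplication below a cutoff size:

```python
# Strassen's algorithm: 7 recursive multiplies plus matrix additions,
# following the M1..M7 and C11..C22 formulas on this slide.
import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty((n, n))
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(1)
A, B = rng.random((8, 8)), rng.random((8, 8))
assert np.allclose(strassen(A, B), A @ B)
```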

Page 25

Strassen’s Algorithm

Strassen showed how to use 7 multiplies instead of 8 for 2×2 multiplication

Flop count recurrence:

  F(n) = 7 · F(n/2) + Θ(n²)

  F(n) = Θ(n^(log₂ 7))

  log₂ 7 ≈ 2.81


Grey Ballard 17
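The recurrence can be checked numerically. The count of 18 half-size matrix additions per step is an assumption for illustration (the exact constant depends on the variant); the asymptotic exponent does not depend on it:

```python
# Numerically checking F(n) = 7*F(n/2) + Theta(n^2): the empirical growth
# exponent log2(F(2n)/F(n)) approaches log2(7) ~ 2.81.
from math import log2

def flops(n):
    if n == 1:
        return 1
    h = n // 2
    # 7 recursive multiplies plus (assumed) 18 additions of h-by-h matrices
    return 7 * flops(h) + 18 * h * h

e = log2(flops(1024) / flops(512))
assert 2.80 < e < 2.90   # close to log2(7) = 2.807...
```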

Page 26

Sequential Communication Costs

If you implement Strassen’s algorithm recursively on a sequential computer:

            Computation      Communication
Classical   O(n³)            O((n/√M)³ · M)   (blocked)
Strassen    O(n^(log₂ 7))    O((n/√M)^(log₂ 7) · M)

Can we reduce Strassen’s communication cost further?

Grey Ballard 18



Page 28

Lower Bounds for Strassen’s Algorithm

Theorem (Ballard, Demmel, Holtz, Schwartz 12)
On a sequential machine, Strassen’s algorithm must communicate

  # words = Ω((n/√M)^(log₂ 7) · M)

and on a parallel machine, it must communicate

  # words = Ω((n/√M)^(log₂ 7) · M/P)

This work received the SPAA Best Paper Award [BDHS11] and appeared as a Research Highlight in the Communications of the ACM

Grey Ballard 19


Page 30

Optimal Parallel Algorithm?

This lower bound proves that the sequential recursive algorithm is communication-optimal

What about the parallel case?

Earlier attempts to parallelize Strassen had communication costs that exceeded the lower bound

We developed a new algorithm that is communication-optimal, called Communication-Avoiding Parallel Strassen (CAPS) [BDH+12]

Grey Ballard 20


Page 32

Main idea of CAPS algorithm

At each level of the recursion tree, choose either breadth-first or depth-first traversal:

Breadth-First-Search (BFS)
Runs all 7 multiplies in parallel, each using P/7 processors
Requires 7/4 as much extra memory
Requires communication, but minimizes communication in subtrees

Depth-First-Search (DFS)
Runs all 7 multiplies sequentially, each using all P processors
Requires 1/4 as much extra memory
Increases communication by a factor of 7/4 in subtrees

Grey Ballard 21

Page 33

Performance of CAPS on a large problem

Strong-scaling on a Cray XT4, n = 94,080

[Plot: Effective GFLOPS per node at P = 49, 343, 2401, y-axis 0 to 50; a horizontal line marks machine peak (for classical algorithms). Series: ScaLAPACK (classical), 2.5D Classical, (Old) Strassen, CAPS.]

Grey Ballard 22

More details

Page 34

Can we beat Strassen?

Strassen’s algorithm allows for less computation and communication than the classical O(n³) algorithm

We have algorithms that attain its communication lower bounds and perform well on highly parallel machines

Can we do any better?

Grey Ballard 23

Page 35

Can we beat Strassen?

Exponent of matrix multiplication over time: FMM(n) = O(n^?)

[Plot: exponent vs. year, 1960 to 2010, y-axis 2 to 3; points for classical (3), Strassen, Bini et al., Schonhage, Strassen, Coppersmith-Winograd, Williams]

Unfortunately, most of these improvements are only theoretical (i.e., not practical) because they
involve approximations
are existence proofs
have large constants

Grey Ballard 24


Page 37

Yes, it’s possible!

Other practical fast algorithms exist (with slightly better exponents)
Smaller arithmetic exponent means less communication
Rectangular matrix multiplication prefers a rectangular base case

[Plot: Sequential outer product performance, n × 1600 × n; Effective GFLOPS (16 to 28) vs. dimension (0 to 8000); series: MKL, STRASSEN, HK−323424]

Grey Ballard 25


Page 39

Collaborators

Michael Anderson (UC Berkeley)

Austin Benson (Stanford)

Aydin Buluc (LBNL)

James Demmel (UC Berkeley)

Alex Druinsky (Tel-Aviv U)

Ioana Dumitriu (U Washington)

Andrew Gearhart (UC Berkeley)

Laura Grigori (INRIA)

Olga Holtz (UC Berkeley/TU Berlin)

Jonathan Hu (Sandia NL)

Mathias Jacquelin (LBNL)

Nicholas Knight (UC Berkeley)

Kurt Keutzer (UC Berkeley)

Tamara Kolda (Sandia NL)

Benjamin Lipshitz (Google)

Inon Peled (Tel-Aviv U)

Todd Plantenga (Sandia NL)

Oded Schwartz (UC Berkeley)

Chris Siefert (Sandia NL)

Edgar Solomonik (UC Berkeley)

Sivan Toledo (Tel-Aviv U)

Ichitaro Yamazaki (UT Knoxville)

Grey Ballard 26

Page 40

For more details...

For a more comprehensive (150+ page) survey, see our

  “Communication lower bounds and optimal algorithms for numerical linear algebra”

in the most recent Acta Numerica volume [BCD+14]

Grey Ballard 27

Page 41

Avoiding Communication in Linear Algebra

Grey Ballard

Grey Ballard 28

Thank You!

www.sandia.gov/~gmballa

Page 42

Extra Slides

Grey Ballard 29

Page 43

Main Idea of Classical Lower Bound Proof

Crux of proof based on a geometric inequality [Loomis & Whitney 49]

[Diagram: an x × y × z box and its three shadows A, B, C on the coordinate planes]

Volume of box:

  V = xyz = √(xz · yz · xy)

Volume of a 3D set with shadows A, B, C:

  V ≤ √area(A shadow) · √area(B shadow) · √area(C shadow)

Given a limited set of data, how much useful computation can be done?

Grey Ballard 30
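The inequality is easy to check numerically for finite sets of lattice points, which is how it enters the lower bound proof (points correspond to scalar multiplications, shadows to the matrix entries they touch):

```python
# A small numeric check of the Loomis-Whitney inequality:
# |V| <= sqrt(|A| * |B| * |C|), where A, B, C are the three
# axis-aligned shadows (projections) of a finite 3D point set V.
from itertools import product
from math import sqrt

def shadows_bound(V):
    A = {(y, z) for (x, y, z) in V}   # shadow onto the yz-plane
    B = {(x, z) for (x, y, z) in V}   # shadow onto the xz-plane
    C = {(x, y) for (x, y, z) in V}   # shadow onto the xy-plane
    return sqrt(len(A) * len(B) * len(C))

box = set(product(range(2), range(3), range(4)))   # a 2x3x4 box: equality case
assert len(box) == 24
assert abs(shadows_bound(box) - 24) < 1e-9

ragged = box - {(0, 0, 0), (1, 2, 3)}              # a general set: strict inequality
assert len(ragged) <= shadows_bound(ragged)
```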

Page 44

Memory-Independent Lower Bounds

Summary for matrix multiplication (here ω is the algorithm’s exponent: 3 for classical, log₂ 7 for Strassen):

                               Classical             Strassen
Memory-dependent lower bound   Ω(n³/(P√M))           Ω(n^ω/(P · M^(ω/2−1)))
Memory-independent lower bound Ω(n²/P^(2/3))         Ω(n²/P^(2/ω))
Perfect strong scaling range   P = O(n³/M^(3/2))     P = O(n^ω/M^(ω/2))
Attaining algorithm            [SD11]                [BDH+12]

Grey Ballard 31

Page 45

Algorithms - Parallel O(n³) Linear Algebra

Algorithm         Reference   Factor exceeding         Factor exceeding
                              lower bound, # words     lower bound, # messages
Matrix Multiply   [Can69]     1                        1
Cholesky          ScaLAPACK   log P                    log P
Symmetric         [BDD+12a]   ?                        ?
Indefinite        ScaLAPACK   log P                    (N/P^(1/2)) log P
LU                [GDX11]     log P                    log P
                  ScaLAPACK   log P                    (N/P^(1/2)) log P
QR                [DGHL12]    log P                    log³ P
                  ScaLAPACK   log P                    (N/P^(1/2)) log P
SymEig, SVD       [BDK12a]    ?                        ?
                  ScaLAPACK   log P                    N/P^(1/2)
NonsymEig         [BDD12b]    log P                    log³ P
                  ScaLAPACK   P^(1/2) log P            N log P

*This table assumes that one copy of the data is distributed evenly across processors

Red = not optimal (in the original slide, the suboptimal ScaLAPACK entries are marked in red)

Grey Ballard 32

Back

Page 46

Example: Compute Eigenvalues of a Band Matrix

Suppose we want to solve Ax = λx where A
is symmetric (save half the storage and flops)
has band structure (exploit sparsity, ignore zeros)

We generally want to compute a factorization

  A = QTQᵀ

where Q is an orthogonal matrix and T is symmetric tridiagonal

Grey Ballard 33

Page 47

Successive Band Reduction (bulge-chasing)

[Diagram: a sequence of orthogonal similarity transformations Q1, ..., Q5 chasing a bulge down the band of the matrix]

b = bandwidth
c = columns
d = diagonals

constraint: c + d ≤ b

Grey Ballard 34

Page 48

Implementation of Band Eigensolver (CASBR)

Speedup of parallel CASBR (10 threads) over the PLASMA library

[Heatmap: speedups ranging from 3.0× to 6.7× across bandwidths b = 50 to 300 and matrix dimensions n = 4000 to 24000]

Benchmarked on a 10-core Intel Westmere [BDK12a]

Grey Ballard 35

Algorithmic Details

Page 49

Symmetric Eigenproblem and SVD via SBR

We’re solving the symmetric eigenproblem via reduction to tridiagonal form

Conventional approach (e.g. LAPACK) is direct tridiagonalization

Two-phase approach reduces first to band form, then band to tridiagonal

[Diagram: Direct: A → T. Two-step: A → B → T.]

first phase can be done efficiently
second phase is trickier, requires successive band reduction (SBR) [BLS00]
involves “bulge-chasing”
we’ve improved it to reduce communication [BDK12b]

Grey Ballard 36

Back

Page 50

Communication-Avoiding SBR - theory

          Flops   Words Moved    Data Re-use
Schwarz   4n²b    O(n²b)         O(1)
M-H       6n²b    O(n²b)         O(1)
B-L-S*    5n²b    O(n² log b)    O(b/log b)
CA-SBR†   5n²b    O(n²b²/M)      O(M/b)

*with optimal parameter choices
†assuming 1 ≤ b ≤ √(M/3)

Grey Ballard 37

Back

Page 51

Performance of CAPS on large problems

Strong-scaling on Intrepid (IBM BG/P), n = 65,856.

[Plot: Effective performance as a fraction of peak (0 to 1.8) vs. number of cores (5e2 to 5e4), with the strong-scaling range marked; series: CAPS, 2.5D-Strassen, 2D-Strassen, Strassen-2D, 2.5D, 2D, and the classical peak.]

Grey Ballard 38

Back

Page 52

Performance of CAPS on large problems

Strong-scaling on Intrepid (IBM BG/P), n = 65,856.

[Plot: as on Page 51, but with the y-axis extended to 4.5 and the Strassen-Winograd peak marked alongside the classical peak]

Grey Ballard 39

Back

Page 53

Performance: Model vs Actual

[Plot: Effective performance as a fraction of peak (0 to 1.4) vs. number of cores (5e2 to 5e4); series: CAPS Model, 2.5D Model, 2D Model, CAPS no cont., CAPS, 2.5D, 2D]

Comparison of the parallel models with the algorithms in strong scaling of matrix dimension n = 65,856 on Intrepid.

Grey Ballard 40

Back
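The CAPS results above are built on the 7-multiply Strassen recursion. A minimal sequential sketch of that recursion (classic Strassen form; the implementation measured in the plots is a parallel Strassen-Winograd variant, which reorganizes the additions, and the cutoff value here is an arbitrary choice for illustration):

```python
# Sequential Strassen recursion: 7 half-size multiplies instead of 8,
# giving O(n^log2(7)) = O(n^2.81) flops. Assumes square, power-of-two n.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                    # classical base case
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((256, 256))
B = rng.standard_normal((256, 256))
assert np.allclose(strassen(A, B), A @ B)
```

CAPS distributes exactly this recursion tree across processors, alternating between splitting the seven subproblems among processor groups and executing them with all processors.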

Page 54

Can an n × n linear system of equations Ax = b be solved in O(n^(2+ε)) operations, where ε is arbitrarily small?

". . . if solved affirmatively, [this] would change the world."

"It is an article of faith for some of us that if O(n^(2+ε)) is ever achieved, the big idea that achieves it will correspond to an algorithm that is really practical."

- Nick Trefethen, 2012 SIAM President


Page 55

Solving the base case. . .

2 × 2 × 2:

    [a11 a12] [b11 b12]   [c11 c12]
    [a21 a22] [b21 b22] = [c21 c22]

    multiplies    6           7           8
    flop count    O(n^2.58)   O(n^2.81)   O(n^3)

3 × 3 × 3:

    [a11 a12 a13] [b11 b12 b13]   [c11 c12 c13]
    [a21 a22 a23] [b21 b22 b23] = [c21 c22 c23]
    [a31 a32 a33] [b31 b32 b33]   [c31 c32 c33]

    multiplies    19          21          23          27
    flop count    O(n^2.68)   O(n^2.77)   O(n^2.85)   O(n^3)
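The flop-count exponents in the tables above follow directly from the recursion: a k × k × k base case computed with m multiplies yields an O(n^(log_k m)) algorithm. A quick check:

```python
# Exponent of the recursive algorithm induced by a k x k x k base case
# that uses m multiplies: running time is O(n^log_k(m)).
import math

def exponent(k, m):
    return math.log(m, k)

assert round(exponent(2, 7), 2) == 2.81   # Strassen: 7 multiplies, 2x2
assert round(exponent(2, 8), 2) == 3.0    # classical 2x2
assert round(exponent(2, 6), 2) == 2.58   # hypothetical 6-multiply 2x2
assert round(exponent(3, 23), 2) == 2.85  # 23-multiply 3x3
assert round(exponent(3, 19), 2) == 2.68  # hypothetical 19-multiply 3x3
```

This is why a 6-multiply 2 × 2 algorithm, or a 19-multiply 3 × 3 algorithm, would beat Strassen's exponent of log2(7) ≈ 2.81.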

Page 58

References I

J. O. Aasen. On the reduction of a symmetric matrix to tridiagonal form. BIT Numerical Mathematics, 11:233–242, 1971. doi:10.1007/BF01931804.

M. Anderson, G. Ballard, J. Demmel, and K. Keutzer. Communication-avoiding QR decomposition for GPUs. In Proceedings of the 2011 IEEE International Parallel & Distributed Processing Symposium, IPDPS '11, pages 48–58, Washington, DC, USA, 2011. IEEE Computer Society.

G. Ballard, D. Becker, J. Demmel, J. Dongarra, A. Druinsky, I. Peled, O. Schwartz, S. Toledo, and I. Yamazaki. Implementing a blocked Aasen's algorithm with a dynamic scheduler on multicore architectures, 2013. To appear.

G. Ballard, E. Carson, J. Demmel, M. Hoemmen, N. Knight, and O. Schwartz. Communication lower bounds and optimal algorithms for numerical linear algebra. Acta Numerica, 23:1–155, May 2014.

G. Ballard, J. Demmel, A. Druinsky, I. Peled, O. Schwartz, and S. Toledo. Communication avoiding symmetric indefinite factorization, 2012. In preparation.

G. Ballard, J. Demmel, and I. Dumitriu. Communication-optimal parallel and sequential nonsymmetric eigenvalue algorithm, 2012. In preparation.

Page 59

References II

G. Ballard, J. Demmel, O. Holtz, B. Lipshitz, and O. Schwartz. Communication-optimal parallel algorithm for Strassen's matrix multiplication. In Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA '12, pages 193–204, New York, NY, USA, 2012. ACM.

G. Ballard, J. Demmel, O. Holtz, and O. Schwartz. Communication-optimal parallel and sequential Cholesky decomposition. SIAM Journal on Scientific Computing, 32(6):3495–3523, 2010.

G. Ballard, J. Demmel, O. Holtz, and O. Schwartz. Graph expansion and communication costs of fast matrix multiplication. In Proceedings of the 23rd ACM Symposium on Parallelism in Algorithms and Architectures, SPAA '11, pages 1–12. ACM, 2011.

G. Ballard, J. Demmel, and N. Knight. Avoiding communication in the symmetric eigenproblem and SVD, 2012. In preparation.

G. Ballard, J. Demmel, and N. Knight. Communication avoiding successive band reduction. In Proceedings of the 17th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP '12, pages 35–44, New York, NY, USA, 2012. ACM.

C. Bischof, B. Lang, and X. Sun. A framework for symmetric band reduction. ACM Transactions on Mathematical Software, 26(4):581–601, December 2000.

Page 60

References III

L. Cannon. A cellular computer to implement the Kalman filter algorithm. PhD thesis, Montana State University, Bozeman, MT, 1969.

J. Demmel, L. Grigori, M. Hoemmen, and J. Langou. Communication-optimal parallel and sequential QR and LU factorizations. SIAM Journal on Scientific Computing, 34(1):A206–A239, 2012.

L. Grigori, J. Demmel, and H. Xiang. CALU: A communication optimal LU factorization algorithm. SIAM Journal on Matrix Analysis and Applications, 32(4):1317–1350, 2011.

E. Solomonik and J. Demmel. Communication-optimal parallel 2.5D matrix multiplication and LU factorization algorithms. In Emmanuel Jeannot, Raymond Namyst, and Jean Roman, editors, Euro-Par 2011 Parallel Processing, volume 6853 of Lecture Notes in Computer Science, pages 90–109. Springer Berlin / Heidelberg, 2011.