Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel www.cs.berkeley.edu/~demmel/cs267_Spr09


Page 1: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Automatic Performance Tuning of
Sparse-Matrix-Vector-Multiplication (SpMV)
and Iterative Sparse Solvers

James Demmel

www.cs.berkeley.edu/~demmel/cs267_Spr09

Page 2: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Outline

• Motivation for Automatic Performance Tuning
• Results for sparse matrix kernels
  – Sparse Matrix Vector Multiplication (SpMV)
  – Sequential and Multicore results
• OSKI = Optimized Sparse Kernel Interface
• Tuning Entire Sparse Solvers

Page 3: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Berkeley Benchmarking and OPtimization (BeBOP)

• Prof. Katherine Yelick
• Current members
  – Kaushik Datta, Mark Hoemmen, Marghoob Mohiyuddin, Shoaib Kamil, Rajesh Nishtala, Vasily Volkov, Sam Williams, …
• Previous members
  – Hormozd Gahvari, Eun-Jin Im, Ankit Jain, Rich Vuduc, many undergrads, …
• Many results here from current and previous students
• bebop.cs.berkeley.edu

Page 4: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Automatic Performance Tuning

• Goal: Let the machine do the hard work of writing fast code
• What does tuning of dense BLAS, FFTs, signal processing, have in common?
  – Can do the tuning off-line: once per architecture, algorithm
  – Can take as much time as necessary (hours, a week…)
  – At run-time, algorithm choice may depend only on a few parameters (matrix dimensions, size of FFT, etc.)
• Can't always do tuning off-line
  – Algorithm and implementation may strongly depend on data only known at run-time
  – Ex: Sparse matrix nonzero pattern determines both the best data structure and the implementation of sparse-matrix-vector multiplication (SpMV)
  – Part of the search for the best algorithm must be done (very quickly!) at run-time

Page 5: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Source: Accelerator Cavity Design Problem (Ko via Husbands)

Page 6: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Linear Programming Matrix

Page 7: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

A Sparse Matrix You Encounter Every Day

Page 8: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

SpMV with Compressed Sparse Row (CSR) Storage

Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)*x(j)

for each row i
  for k = ptr[i] to ptr[i+1]-1 do
    y[i] = y[i] + val[k]*x[ind[k]]

Only 2 flops per 2 mem_refs: limited by getting data from memory
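A minimal C version of the kernel above (a sketch only; it follows the slide's ptr/ind/val arrays and assumes 0-based indexing):

/* y = y + A*x for an m-row sparse matrix A in CSR format.
   ptr[i] .. ptr[i+1]-1 index the nonzeros of row i; ind[k] is the
   column index of val[k]. */
void spmv_csr(int m, const int *ptr, const int *ind,
              const double *val, const double *x, double *y)
{
    for (int i = 0; i < m; i++) {
        double yi = y[i];
        for (int k = ptr[i]; k < ptr[i+1]; k++)
            yi += val[k] * x[ind[k]];   /* 2 flops per (val, ind) pair read */
        y[i] = yi;
    }
}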

Page 9: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: The Difficulty of Tuning

• n = 21200
• nnz = 1.5 M
• Kernel: SpMV
• Source: NASA structural analysis problem

Page 10: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: The Difficulty of Tuning

• n = 21200
• nnz = 1.5 M
• Kernel: SpMV
• Source: NASA structural analysis problem
• 8x8 dense substructure: exploit this to limit #mem_refs

Page 11: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Speedups on Itanium 2: The Need for Search

[Figure: SpMV register profile on Itanium 2, performance in Mflop/s; Reference (unblocked CSR) vs. Best (4x2 register blocking)]

Page 12: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Register Profile: Itanium 2

[Figure: Itanium 2 register profile; performance ranges from 190 Mflop/s (worst block size) to 1190 Mflop/s (best)]

Page 13: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Register Profiles: IBM and Intel IA-64

[Figure: register profiles, best fraction of peak in parentheses.
 Power3: 122-252 Mflop/s (17%); Power4: 459-820 Mflop/s (16%);
 Itanium 1: 107-247 Mflop/s (8%); Itanium 2: 190 Mflop/s-1.2 Gflop/s (33%)]

Page 14: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Register Profiles: Sun and Intel x86

[Figure: register profiles, best fraction of peak in parentheses.
 Ultra 2i: 35-72 Mflop/s (11%); Ultra 3: 50-90 Mflop/s (5%);
 Pentium III: 42-108 Mflop/s (21%); Pentium III-M: 58-122 Mflop/s (15%)]

Page 15: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Another example of tuning challenges

• More complicated non-zero structure in general
• N = 16614
• NNZ = 1.1M

Page 16: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Zoom in to top corner

• More complicated non-zero structure in general
• N = 16614
• NNZ = 1.1M

Page 17: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

3x3 blocks look natural, but…

• More complicated non-zero structure in general
• Example: 3x3 blocking
  – Logical grid of 3x3 cells
• But would lead to lots of "fill-in"

Page 18: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Extra Work Can Improve Efficiency!

• More complicated non-zero structure in general
• Example: 3x3 blocking
  – Logical grid of 3x3 cells
  – Fill in explicit zeros
  – Unroll 3x3 block multiplies (see the sketch below)
  – "Fill ratio" = 1.5
• On Pentium III: 1.5x speedup!
  – Actual Mflop rate is 1.5² = 2.25x higher
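What the unrolled 3x3-blocked kernel looks like, as a sketch (block-CSR-style arrays with explicitly filled zeros; names are illustrative, not OSKI's internals). Each stored block does one unrolled 3x3 multiply, so one column index is read per 9 values:

/* y = y + A*x, A stored as 3x3 blocks (block compressed sparse row).
   brow_ptr[I] .. brow_ptr[I+1]-1 index the blocks of block row I;
   bcol_ind[K] is the block-column index; bval holds 9 values per block
   (row-major), including any explicitly filled zeros. */
void spmv_bcsr_3x3(int mb, const int *brow_ptr, const int *bcol_ind,
                   const double *bval, const double *x, double *y)
{
    for (int I = 0; I < mb; I++) {
        double y0 = 0.0, y1 = 0.0, y2 = 0.0;
        for (int K = brow_ptr[I]; K < brow_ptr[I+1]; K++) {
            const double *b  = bval + 9*K;          /* this 3x3 block   */
            const double *xp = x + 3*bcol_ind[K];   /* matching x block */
            y0 += b[0]*xp[0] + b[1]*xp[1] + b[2]*xp[2];
            y1 += b[3]*xp[0] + b[4]*xp[1] + b[5]*xp[2];
            y2 += b[6]*xp[0] + b[7]*xp[1] + b[8]*xp[2];
        }
        y[3*I]   += y0;
        y[3*I+1] += y1;
        y[3*I+2] += y2;
    }
}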

Page 19: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Automatic Register Block Size Selection

• Selecting the r x c block size
  – Off-line benchmark
    • Precompute Mflops(r,c) using dense A for each r x c
    • Once per machine/architecture
  – Run-time "search"
    • Sample A to estimate Fill(r,c) for each r x c
  – Run-time heuristic model
    • Choose r, c to minimize time ~ Fill(r,c) / Mflops(r,c) (sketch below)
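A sketch of the selection step in C (not OSKI's internal code): assume the off-line benchmark produced a table mflops[r][c] measured on a dense matrix stored in r x c blocks, and estimate_fill(r,c) is some routine (hypothetical here) returning the sampled fill ratio of the user's matrix.

/* Choose the register block size r x c minimizing estimated
   time ~ Fill(r,c) / Mflops(r,c).  RMAX/CMAX bound the search. */
#define RMAX 8
#define CMAX 8

void choose_block_size(double mflops[RMAX+1][CMAX+1],
                       double (*estimate_fill)(int r, int c),
                       int *r_best, int *c_best)
{
    double best_time = 1e300;
    *r_best = 1; *c_best = 1;
    for (int r = 1; r <= RMAX; r++)
        for (int c = 1; c <= CMAX; c++) {
            double t = estimate_fill(r, c) / mflops[r][c];
            if (t < best_time) { best_time = t; *r_best = r; *c_best = c; }
        }
}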

Page 20: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Accurate and Efficient Adaptive Fill Estimation
• Idea: Sample the matrix (see the sketch below)
  – Fraction of matrix to sample: s ∈ [0,1]
  – Control cost = O(s * nnz) by controlling s
  – Search at run-time: the constant matters!
• Control s automatically by computing statistical confidence intervals, by monitoring variance
• Cost of tuning
  – Lower bound: converting the matrix costs 5 to 40 unblocked SpMVs
  – Heuristic: 1 to 11 SpMVs
• Tuning only useful when we do many SpMVs
  – Common case, e.g. in sparse solvers
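One way the sampling could be realized (a sketch under stated assumptions, not OSKI's actual estimator): visit a random fraction s of the block rows, count how many aligned r x c blocks they would create, and return stored entries divided by true nonzeros over the sample.

#include <stdlib.h>

/* Estimate Fill(r,c) for an m x n CSR matrix (ptr, ind) by sampling a
   fraction s of its block rows.  Fill = (r*c * blocks created) / nnz,
   measured over the sampled block rows only. */
double estimate_fill_rc(int m, int n, const int *ptr, const int *ind,
                        int r, int c, double s)
{
    int nbrows = (m + r - 1) / r;
    char *seen = calloc((size_t)(n + c - 1) / c, 1);   /* block-column marks */
    long blocks = 0, nnz = 0;

    for (int I = 0; I < nbrows; I++) {
        if ((double)rand() / RAND_MAX > s) continue;   /* skip unsampled block rows */
        int i_lo = I * r, i_hi = (I + 1) * r < m ? (I + 1) * r : m;
        for (int i = i_lo; i < i_hi; i++)              /* mark touched block columns */
            for (int k = ptr[i]; k < ptr[i+1]; k++) {
                nnz++;
                if (!seen[ind[k] / c]) { seen[ind[k] / c] = 1; blocks++; }
            }
        for (int i = i_lo; i < i_hi; i++)              /* clear marks for next block row */
            for (int k = ptr[i]; k < ptr[i+1]; k++)
                seen[ind[k] / c] = 0;
    }
    free(seen);
    return nnz ? (double)(blocks * r * c) / (double)nnz : 1.0;
}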

Page 21: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Accuracy of the Tuning Heuristics (1/4)

NOTE: "Fair" flops used (ops on explicit zeros not counted as "work"). See p. 375 of Vuduc's thesis for the matrices.

Page 22: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Accuracy of the Tuning Heuristics (2/4)

Page 23: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Accuracy of the Tuning Heuristics (2/4), DGEMV

Page 24: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Upper Bounds on Performance for blocked SpMV

• P = (flops) / (time)
  – Flops = 2 * nnz(A)
• Upper bound on speed: two main assumptions
  – 1. Count memory ops only (streaming)
  – 2. Count only compulsory, capacity misses: ignore conflicts
• Account for line sizes
• Account for matrix size and nnz
• Charge minimum access "latency" α_i at the Li cache, α_mem at memory
  – e.g., Saavedra-Barrera and PMaC MAPS benchmarks

Time ≥ Σ_i Hits_i · α_i + Hits_mem · α_mem
     = Loads · α_1 + Σ_i Misses_i · (α_(i+1) - α_i) + Misses_mem · (α_mem - α_last)
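The bound can be evaluated directly in code; a minimal sketch (alpha[i] is the charged latency of cache level i+1, alpha_mem the memory latency, and miss counts come from the compulsory/capacity model assumed above):

/* Lower bound on execution time from the streaming model:
   Time = Loads*alpha_1 + sum_i Misses_i*(alpha_{i+1} - alpha_i)
          + Misses_last*(alpha_mem - alpha_last),
   where Misses_last (last cache level) equals the memory accesses. */
double time_lower_bound(long loads, const long *misses, const double *alpha,
                        int nlevels, double alpha_mem)
{
    double t = (double)loads * alpha[0];
    for (int i = 0; i + 1 < nlevels; i++)
        t += (double)misses[i] * (alpha[i+1] - alpha[i]);
    t += (double)misses[nlevels - 1] * (alpha_mem - alpha[nlevels - 1]);
    return t;
}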

Page 25: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Bounds on Itanium 2

Page 26: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Bounds on Itanium 2

Page 27: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Bounds on Itanium 2

Page 28: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Summary of Other Performance Optimizations

• Optimizations for SpMV
  – Register blocking (RB): up to 4x over CSR
  – Variable block splitting: 2.1x over CSR, 1.8x over RB
  – Diagonals: 2x over CSR
  – Reordering to create dense structure + splitting: 2x over CSR
  – Symmetry: 2.8x over CSR, 2.6x over RB
  – Cache blocking: 2.8x over CSR
  – Multiple vectors (SpMM): 7x over CSR
  – And combinations…
• Sparse triangular solve
  – Hybrid sparse/dense data structure: 1.8x over CSR
• Higher-level kernels
  – A·A^T·x, A^T·A·x: 4x over CSR, 1.8x over RB
  – A^2·x: 2x over CSR, 1.5x over RB
  – [Ax, A^2x, A^3x, …, A^k x]: more to say later

Page 29: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Potential Impact on Applications: Omega3P
• Application: accelerator cavity design [Ko]
• Relevant optimization techniques
  – Symmetric storage
  – Register blocking
  – Reordering, to create more dense blocks
    • Reverse Cuthill-McKee ordering to reduce bandwidth
      – Do breadth-first search, number nodes in reverse order visited (see the sketch after this list)
    • Traveling Salesman Problem-based ordering to create blocks
      – Nodes = columns of A
      – Weights(u, v) = no. of nonzeros u, v have in common
      – Tour = ordering of columns
      – Choose maximum-weight tour
      – See [Pinar & Heath '97]
• 2.1x speedup on IBM Power 4
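A simplified C sketch of the RCM step described above (breadth-first search, then reverse the visit order). Full RCM also visits neighbors in increasing-degree order and picks a pseudo-peripheral start node; both are omitted here, the graph is assumed connected, and adjacency is given in CSR-style arrays:

#include <stdlib.h>

/* perm[k] = k-th node in the reverse-Cuthill-McKee-style ordering. */
void rcm_order(int n, const int *adj_ptr, const int *adj,
               int start, int *perm)
{
    int  *order   = malloc(n * sizeof(int));   /* BFS visit order */
    char *visited = calloc(n, 1);
    int head = 0, tail = 0;

    order[tail++] = start;  visited[start] = 1;
    while (head < tail) {
        int u = order[head++];
        for (int k = adj_ptr[u]; k < adj_ptr[u+1]; k++)
            if (!visited[adj[k]]) {
                visited[adj[k]] = 1;
                order[tail++] = adj[k];
            }
    }
    /* Cuthill-McKee order is 'order'; reverse it to get RCM. */
    for (int k = 0; k < tail; k++)
        perm[k] = order[tail - 1 - k];

    free(order); free(visited);
}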

Page 30: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Source: Accelerator Cavity Design Problem (Ko via Husbands)

Page 31: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Post-RCM Reordering

Page 32: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

100x100 Submatrix Along Diagonal

Page 33: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Before: Green + Red; After: Green + Blue

“Microscopic” Effect of RCM Reordering

Page 34: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

“Microscopic” Effect of Combined RCM+TSP Reordering

Before: Green + Red; After: Green + Blue

Page 35: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

(Omega3P)

Page 36: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Optimized Sparse Kernel Interface - OSKI

• Provides sparse kernels automatically tuned for the user's matrix & machine
  – BLAS-style functionality: SpMV, Ax & A^T y, TrSV
  – Hides complexity of run-time tuning
  – Includes new, faster locality-aware kernels: A^T·A·x, A^k·x
• Faster than standard implementations
  – Up to 4x faster matvec, 1.8x trisolve, 4x A^T·A·x
• For "advanced" users & solver library writers
  – Available as stand-alone library (OSKI 1.0.1h, 6/07)
  – Available as PETSc extension (OSKI-PETSc .1d, 3/06)
  – bebop.cs.berkeley.edu/oski

Page 37: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

How the OSKI Tunes (Overview)

Library install-time (offline):
  1. Build for target arch.
  2. Benchmark → benchmark data + generated code variants

Application run-time:
  1. Evaluate heuristic models (inputs: benchmark data, the user's matrix, workload from program monitoring, history)
  2. Select data structure & code → to user: matrix handle for kernel calls

Extensibility: advanced users may write & dynamically add "code variants" and "heuristic models" to the system.

Page 38: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

How to Call OSKI: Basic Usage

• May gradually migrate existing apps
  – Step 1: "Wrap" existing data structures
  – Step 2: Make BLAS-like kernel calls

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …;                   /* Let x and y be two dense vectors */

/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, α, x, β, y );

Page 39: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

How to Call OSKI: Basic Usage

• May gradually migrate existing apps
  – Step 1: "Wrap" existing data structures
  – Step 2: Make BLAS-like kernel calls

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …;                   /* Let x and y be two dense vectors */

/* Step 1: Create OSKI wrappers around this data */
oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows,
    num_cols, SHARE_INPUTMAT, …);
oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);

/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  my_matmult( ptr, ind, val, α, x, β, y );

Page 40: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

How to Call OSKI: Basic Usage

• May gradually migrate existing apps
  – Step 1: "Wrap" existing data structures
  – Step 2: Make BLAS-like kernel calls

int* ptr = …, *ind = …; double* val = …; /* Matrix, in CSR format */
double* x = …, *y = …;                   /* Let x and y be two dense vectors */

/* Step 1: Create OSKI wrappers around this data */
oski_matrix_t A_tunable = oski_CreateMatCSR(ptr, ind, val, num_rows,
    num_cols, SHARE_INPUTMAT, …);
oski_vecview_t x_view = oski_CreateVecView(x, num_cols, UNIT_STRIDE);
oski_vecview_t y_view = oski_CreateVecView(y, num_rows, UNIT_STRIDE);

/* Compute y = β·y + α·A·x, 500 times */
for( i = 0; i < 500; i++ )
  oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view); /* Step 2 */

Page 41: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

How to Call OSKI: Tune with Explicit Hints

• User calls "tune" routine
  – May provide explicit tuning hints (OPTIONAL)

oski_matrix_t A_tunable = oski_CreateMatCSR( … );
/* … */

/* Tell OSKI we will call SpMV 500 times (workload hint) */
oski_SetHintMatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view, 500);
/* Tell OSKI we think the matrix has 8x8 blocks (structural hint) */
oski_SetHint(A_tunable, HINT_SINGLE_BLOCKSIZE, 8, 8);

oski_TuneMat(A_tunable); /* Ask OSKI to tune */

for( i = 0; i < 500; i++ )
  oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);

Page 42: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

How the User Calls OSKI: Implicit Tuning

• Ask library to infer workload
  – Library profiles all kernel calls
  – May periodically re-tune

oski_matrix_t A_tunable = oski_CreateMatCSR( … );
/* … */

for( i = 0; i < 500; i++ ) {
  oski_MatMult(A_tunable, OP_NORMAL, α, x_view, β, y_view);
  oski_TuneMat(A_tunable); /* Ask OSKI to tune */
}

Page 43: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09


Multicore SMPs Used for Tuning SpMV
AMD Opteron 2356 (Barcelona), Intel Xeon E5345 (Clovertown)
IBM QS20 Cell Blade, Sun T2+ T5140 (Victoria Falls)

Page 44: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09


Multicore SMPs with Conventional cache-based memory hierarchy

AMD Opteron 2356 (Barcelona), Intel Xeon E5345 (Clovertown)
IBM QS20 Cell Blade, Sun T2+ T5140 (Victoria Falls)

Page 45: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09


Multicore SMPs with local store-based memory hierarchy

AMD Opteron 2356 (Barcelona), Intel Xeon E5345 (Clovertown)

IBM QS20 Cell Blade

Sun T2+ T5140 (Victoria Falls)

Page 46: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09


Multicore SMPs with CMT = Chip-MultiThreading

AMD Opteron 2356 (Barcelona), Intel Xeon E5345 (Clovertown)
IBM QS20 Cell Blade, Sun T2+ T5140 (Victoria Falls)

Page 47: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Multicore SMPs: Number of threads
AMD Opteron 2356 (Barcelona): 8 threads; Intel Xeon E5345 (Clovertown): 8 threads
IBM QS20 Cell Blade: 16* threads; Sun T2+ T5140 (Victoria Falls): 128 threads
*SPEs only

Page 48: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Multicore SMPs: peak double precision flops

Intel Xeon E5345 (Clovertown): 75 GFlop/s; AMD Opteron 2356 (Barcelona): 74 GFlop/s
IBM QS20 Cell Blade: 29* GFlop/s; Sun T2+ T5140 (Victoria Falls): 19 GFlop/s
*SPEs only

Page 49: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Multicore SMPs: total DRAM bandwidth

Intel Xeon E5345 (Clovertown): 21 GB/s (read), 10 GB/s (write); AMD Opteron 2356 (Barcelona): 21 GB/s
IBM QS20 Cell Blade: 51 GB/s; Sun T2+ T5140 (Victoria Falls): 42 GB/s (read), 21 GB/s (write)

Page 50: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Multicore SMPs with Non-Uniform Memory Access - NUMA

AMD Opteron 2356 (Barcelona), Intel Xeon E5345 (Clovertown)
IBM QS20 Cell Blade, Sun T2+ T5140 (Victoria Falls)

Page 51: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09


Set of 14 test matrices

• All bigger than the caches of our SMPs

Matrices: Dense, Protein, FEM/Spheres, FEM/Cantilever, Wind Tunnel, FEM/Harbor, QCD, FEM/Ship, Economics, Epidemiology, FEM/Accelerator, Circuit, webbase, LP

[Figure annotations: "2K x 2K dense matrix stored in sparse format"; "Well structured (sorted by nonzeros/row)"; "Poorly structured hodgepodge"; "Extreme aspect ratio (linear programming)"]

Page 52: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09


SpMV Performance: Naive parallelization

• Out-of-the-box SpMV performance on a suite of 14 matrices
• Scalability isn't great: compare to # threads (8, 8, 128, 16)
[Figure legend: Naïve Pthreads, Naïve]

Page 53: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

SpMV Performance: NUMA and Software Prefetching


NUMA-aware allocation is essential on NUMA SMPs.

Explicit software prefetching can boost bandwidth and change cache replacement policies

used exhaustive search

Page 54: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

SpMV Performance: “Matrix Compression”


Compression includes register blocking, other formats, and smaller indices

Use heuristic rather than search

Page 55: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09


SpMV Performance: cache and TLB blocking

[Figure legend, cumulative optimizations: Naïve, Naïve Pthreads, +NUMA/Affinity, +SW Prefetching, +Matrix Compression, +Cache/LS/TLB Blocking]

Page 56: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09


SpMV Performance: Architecture specific optimizations

[Figure legend, cumulative optimizations: Naïve, Naïve Pthreads, +NUMA/Affinity, +SW Prefetching, +Matrix Compression, +Cache/LS/TLB Blocking]

Page 57: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09


SpMV Performance: max speedup

• Fully auto-tuned SpMV performance across the suite of matrices
• Included SPE/local store optimized version
• Why do some optimizations work better on some architectures?

[Figure legend, cumulative optimizations: Naïve, Naïve Pthreads, +NUMA/Affinity, +SW Prefetching, +Matrix Compression, +Cache/LS/TLB Blocking]

Maximum speedups over naïve: 2.7x, 4.0x, 2.9x, 35x

Page 58: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Avoiding Communication in Sparse Linear Algebra

• k steps of a typical iterative solver for Ax=b or Ax=λx
  – Does k SpMVs with starting vector (e.g. with b, if solving Ax=b)
  – Finds "best" solution among all linear combinations of these k+1 vectors
  – Many such "Krylov Subspace Methods"
    • Conjugate Gradients, GMRES, Lanczos, Arnoldi, …
• Goal: minimize communication in Krylov Subspace Methods
  – Assume matrix "well-partitioned," with modest surface-to-volume ratio
  – Parallel implementation
    • Conventional: O(k log p) messages, because of k calls to SpMV
    • New: O(log p) messages, which is optimal
  – Serial implementation
    • Conventional: O(k) moves of data from slow to fast memory
    • New: O(1) moves of data, which is optimal
• Lots of speedup possible (modeled and measured)
  – Price: some redundant computation

Page 59: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

[Figure: entries of the vectors x, Ax, A^2x, …, A^8x along the 1D mesh]

Locally Dependent Entries for [x,Ax], A tridiagonal, 2 processors

Can be computed without communication

Proc 1 Proc 2

Page 60: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

[Figure: entries of the vectors x, Ax, A^2x, …, A^8x along the 1D mesh]

Can be computed without communication

Proc 1 Proc 2

Locally Dependent Entries for [x,Ax,A2x], A tridiagonal, 2 processors

Page 61: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

[Figure: entries of the vectors x, Ax, A^2x, …, A^8x along the 1D mesh]

Can be computed without communication

Proc 1 Proc 2

Locally Dependent Entries for [x,Ax,…,A3x], A tridiagonal, 2 processors

Page 62: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

[Figure: entries of the vectors x, Ax, A^2x, …, A^8x along the 1D mesh]

Can be computed without communication

Proc 1 Proc 2

Locally Dependent Entries for [x,Ax,…,A4x], A tridiagonal, 2 processors

Page 63: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

[Figure: entries of the vectors x, Ax, A^2x, …, A^8x along the 1D mesh]

Locally Dependent Entries for [x,Ax,…,A8x], A tridiagonal, 2 processors

Can be computed without communication; k=8-fold reuse of A

Proc 1 Proc 2

Page 64: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Remotely Dependent Entries for [x,Ax,…,A8x], A tridiagonal, 2 processors

[Figure: entries of the vectors x, Ax, A^2x, …, A^8x along the 1D mesh]

One message to get the data needed to compute remotely dependent entries, not k=8
Minimizes number of messages = latency cost
Price: redundant work ∝ "surface/volume ratio" (see the sketch below)

Proc 1 Proc 2
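To make the picture concrete, here is a serial sketch of what one (interior) processor does for the model stencil matrix A = tridiag(-1,2,-1): it receives k ghost entries from each neighbor in a single message, then computes its part of x, Ax, …, A^k x with no further communication, redundantly recomputing entries near the partition boundary. The layout and names are illustrative, not taken from any particular library.

/* One processor's k-step computation of [x, Ax, ..., A^k x] for
   A = tridiag(-1, 2, -1), given its nloc owned entries of x plus k ghost
   entries from each neighbor (received in one message per neighbor).
   levels[j] must point to storage of length nloc + 2*k; on return,
   levels[j][k .. k+nloc-1] holds the owned part of A^j x.  Entries near
   the ends of each intermediate level are computed redundantly. */
void k_step_tridiag(int nloc, int k,
                    const double *x_local,      /* owned entries            */
                    const double *left_ghost,   /* k entries, x[lo-k..lo-1] */
                    const double *right_ghost,  /* k entries, x[hi..hi+k-1] */
                    double **levels)
{
    int width = nloc + 2*k;

    /* Level 0: [left ghosts | owned | right ghosts] */
    for (int i = 0; i < k; i++)    levels[0][i]            = left_ghost[i];
    for (int i = 0; i < nloc; i++) levels[0][k + i]        = x_local[i];
    for (int i = 0; i < k; i++)    levels[0][k + nloc + i] = right_ghost[i];

    /* Level j is computable wherever both neighbors are available:
       indices j .. width-1-j.  No further communication is needed. */
    for (int j = 1; j <= k; j++)
        for (int i = j; i <= width - 1 - j; i++)
            levels[j][i] = -levels[j-1][i-1] + 2.0*levels[j-1][i]
                           - levels[j-1][i+1];
}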

Page 65: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Remotely Dependent Entries for [x,Ax,A2x,A3x], A irregular, multiple processors

Page 66: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Sequential [x,Ax,…,A4x], with memory hierarchy

One read of the matrix from slow memory, not k=4
Minimizes words moved = bandwidth cost
No redundant work

Page 67: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Performance results on 8-Core Clovertown

Page 68: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Optimizing Communication Complexity of Sparse Solvers

• Example: GMRES for Ax=b on "2D Mesh"
  – x lives on an n-by-n mesh
  – Partitioned on a p^(1/2)-by-p^(1/2) grid of p processors
  – A has "5-point stencil" (Laplacian)
    • (Ax)(i,j) = linear_combination(x(i,j), x(i,j±1), x(i±1,j)) (sketch below)
  – Ex: 18-by-18 mesh on 3-by-3 grid of 9 processors
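For reference, a serial C sketch of applying a 5-point Laplacian stencil to an n-by-n mesh vector (row-major storage; the standard coefficients 4 and -1 with zero Dirichlet boundary are assumed here, since the slide only says "linear combination", and the p^(1/2)-by-p^(1/2) partitioning is ignored):

/* y = A*x for the 5-point Laplacian on an n-by-n mesh:
   (Ax)(i,j) = 4x(i,j) - x(i,j-1) - x(i,j+1) - x(i-1,j) - x(i+1,j). */
void laplacian_5pt(int n, const double *x, double *y)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double v = 4.0 * x[i*n + j];
            if (j > 0)     v -= x[i*n + j - 1];
            if (j < n - 1) v -= x[i*n + j + 1];
            if (i > 0)     v -= x[(i-1)*n + j];
            if (i < n - 1) v -= x[(i+1)*n + j];
            y[i*n + j] = v;
        }
}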

Page 69: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Minimizing Communication of GMRES
• What is the cost = (#flops, #words, #messages) of k steps of standard GMRES?

GMRES, ver. 1:
  for i = 1 to k
    w = A * v(i-1)
    MGS(w, v(0), …, v(i-1))
    update v(i), H
  endfor
  solve LSQ problem with H

[Figure: each processor owns an (n/p^(1/2))-by-(n/p^(1/2)) block of the mesh]

• Cost(A * v) = k * ( 9n^2/p, 4n/p^(1/2), 4 )
• Cost(MGS) = k^2/2 * ( 4n^2/p, log p, log p )
• Total cost ~ Cost(A * v) + Cost(MGS)
• Can we reduce the latency?

Page 70: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Minimizing Communication of GMRES
• Cost(GMRES, ver. 1) = Cost(A*v) + Cost(MGS)
  = ( 9kn^2/p, 4kn/p^(1/2), 4k ) + ( 2k^2n^2/p, k^2 log p / 2, k^2 log p / 2 )
• How much latency cost from A*v can you avoid? Almost all

GMRES, ver. 2:
  W = [ v, Av, A^2v, …, A^k v ]
  [Q,R] = MGS(W)
  Build H from R, solve LSQ problem

k = 3

• Cost(W) = ( ~same, ~same, 8 )
• Latency cost independent of k – optimal
• Cost(MGS) unchanged
• Can we reduce the latency more?

Page 71: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Minimizing Communication of GMRES
• Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)
  = ( 9kn^2/p, 4kn/p^(1/2), 8 ) + ( 2k^2n^2/p, k^2 log p / 2, k^2 log p / 2 )
• How much latency cost from MGS can you avoid? Almost all

GMRES, ver. 3:
  W = [ v, Av, A^2v, …, A^k v ]
  [Q,R] = TSQR(W)   … "Tall Skinny QR"
  Build H from R, solve LSQ problem

TSQR tree: W = [ W1; W2; W3; W4 ]; local QRs give R1, R2, R3, R4; combine pairwise into R12, R34, then R1234.

• Cost(TSQR) = ( ~same, ~same, log p )
• Latency cost independent of s – optimal

Page 72: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Minimizing Communication of GMRES
• Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)
  = ( 9kn^2/p, 4kn/p^(1/2), 8 ) + ( 2k^2n^2/p, k^2 log p / 2, k^2 log p / 2 )
• How much latency cost from MGS can you avoid? Almost all

GMRES, ver. 3:
  W = [ v, Av, A^2v, …, A^k v ]
  [Q,R] = TSQR(W)   … "Tall Skinny QR"
  Build H from R, solve LSQ problem

TSQR tree: W = [ W1; W2; W3; W4 ]; local QRs give R1, R2, R3, R4; combine pairwise into R12, R34, then R1234.

• Cost(TSQR) = ( ~same, ~same, log p )
• Oops

Page 73: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Minimizing Communication of GMRES
• Cost(GMRES, ver. 2) = Cost(W) + Cost(MGS)
  = ( 9kn^2/p, 4kn/p^(1/2), 8 ) + ( 2k^2n^2/p, k^2 log p / 2, k^2 log p / 2 )
• How much latency cost from MGS can you avoid? Almost all

GMRES, ver. 3:
  W = [ v, Av, A^2v, …, A^k v ]
  [Q,R] = TSQR(W)   … "Tall Skinny QR"
  Build H from R, solve LSQ problem

TSQR tree: W = [ W1; W2; W3; W4 ]; local QRs give R1, R2, R3, R4; combine pairwise into R12, R34, then R1234.

• Cost(TSQR) = ( ~same, ~same, log p )
• Oops – W from power method, precision lost!

Page 74: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09
Page 75: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Minimizing Communication of GMRES

• Cost(GMRES, ver. 3) = Cost(W) + Cost(TSQR)
  = ( 9kn^2/p, 4kn/p^(1/2), 8 ) + ( 2k^2n^2/p, k^2 log p / 2, log p )
• Latency cost independent of k, just log p – optimal
• Oops – W from power method, so precision lost – What to do?
• Use a different polynomial basis (sketch below)
  – Not the monomial basis W = [v, Av, A^2v, …], instead …
  – Newton basis W_N = [v, (A – θ1 I)v, (A – θ2 I)(A – θ1 I)v, …], or
  – Chebyshev basis W_C = [v, T1(v), T2(v), …]
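A sketch of generating the Newton basis from a user-supplied SpMV routine (the shifts theta[] are assumed given, e.g. Ritz-value estimates; spmv can be any y = A*x routine such as the CSR kernel sketched earlier; names are illustrative):

#include <stddef.h>

/* Build W = [ v, (A - theta_1 I)v, (A - theta_2 I)(A - theta_1 I)v, ... ],
   k+1 columns of length n, stored column-major in W.
   spmv(ctx, x, y) must compute y = A*x. */
void newton_basis(int n, int k, const double *v, const double *theta,
                  void (*spmv)(void *ctx, const double *x, double *y),
                  void *ctx, double *W)
{
    for (int i = 0; i < n; i++) W[i] = v[i];           /* column 0 = v */
    for (int j = 1; j <= k; j++) {
        const double *prev = W + (size_t)(j - 1) * n;
        double       *cur  = W + (size_t)j * n;
        spmv(ctx, prev, cur);                          /* cur = A*prev        */
        for (int i = 0; i < n; i++)
            cur[i] -= theta[j - 1] * prev[i];          /* cur -= theta_j*prev */
    }
}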

Page 76: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09
Page 77: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Speedups on 8-core Clovertown

Page 78: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Conclusions
• Fast code must minimize communication
  – Especially for sparse matrix computations, because communication dominates
• Generating fast code for a single SpMV
  – Design space of possible algorithms must be searched at run-time, when the sparse matrix is available
  – Design space should be searched automatically
• Biggest speedups from minimizing communication in an entire sparse solver
  – Many more opportunities to minimize communication in multiple SpMVs than in one
  – Requires transforming the entire algorithm
  – Lots of open problems

Page 79: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

EXTRA SLIDES

Page 80: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Quick-and-dirty Parallelism: OSKI-PETSc

• Extend PETSc's distributed memory SpMV (MATMPIAIJ)
[Figure: matrix rows distributed across processes p0, p1, p2, p3]
• PETSc
  – Each process stores diag (all-local) and off-diag submatrices
• OSKI-PETSc:
  – Add OSKI wrappers
  – Each submatrix tuned independently

Page 81: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

OSKI-PETSc Proof-of-Concept Results

• Matrix 1: Accelerator cavity design (R. Lee @ SLAC)
  – N ~ 1 M, ~40 M non-zeros
  – 2x2 dense block substructure
  – Symmetric
• Matrix 2: Linear programming (Italian Railways)
  – Short-and-fat: 4k x 1M, ~11M non-zeros
  – Highly unstructured
  – Big speedup from cache-blocking: no native PETSc format
• Evaluation machine: Xeon cluster
  – Peak: 4.8 Gflop/s per node

Page 82: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Accelerator Cavity Matrix

Page 83: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

OSKI-PETSc Performance: Accel. Cavity

Page 84: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Linear Programming Matrix

Page 85: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

OSKI-PETSc Performance: LP Matrix

Page 86: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Performance Results
• Measured multicore (Clovertown) speedups up to 6.4x
• Measured/modeled sequential OOC (out-of-core) speedup up to 3x
• Modeled parallel Petascale speedup up to 6.9x
• Modeled parallel Grid speedup up to 22x
• Sequential speedup due to bandwidth; works for many problem sizes
• Parallel speedup due to latency; works for smaller problems on many processors

Page 87: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Speedups on Intel Clovertown (8 core)

Page 88: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Extensions

• Other Krylov methods
  – Arnoldi, CG, Lanczos, …
• Preconditioning
  – Solve MAx=Mb where the preconditioning matrix M is chosen to make the system "easier"
    • M approximates A^(-1) somehow, but cheaply, to accelerate convergence
  – Cheap as long as contributions from "distant" parts of the system can be compressed
    • Sparsity
    • Low rank

Page 89: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Design Space for [x,Ax,…,Akx] (1/3)

• Mathematical operation
  – Keep all vectors
    • Krylov Subspace Methods
  – Improving conditioning of the basis
    • W = [x, p1(A)x, p2(A)x, …, pk(A)x]
    • pi(A) = degree-i polynomial chosen to reduce cond(W)
  – Preconditioning (Ay=b → MAy=Mb)
    • [x, Ax, MAx, AMAx, MAMAx, …, (MA)^k x]
  – Keep last vector A^k x only
    • Jacobi, Gauss-Seidel

Page 90: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Design Space for [x,Ax,…,Akx] (2/3)

• Representation of sparse A
  – Zero pattern may be explicit or implicit
  – Nonzero entries may be explicit or implicit
  – Implicit saves memory, communication

                      Explicit pattern                Implicit pattern
  Explicit nonzeros   General sparse matrix           Image segmentation
  Implicit nonzeros   Laplacian(graph), for graph     "Stencil matrix",
                      partitioning                    e.g. tridiag(-1,2,-1)

• Representation of dense preconditioners M
  – Low-rank off-diagonal blocks (semiseparable)

Page 91: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Design Space for [x,Ax,…,Akx] (3/3)

• Parallel implementation
  – From simple indexing, with redundant flops ∝ surface/volume ratio
  – To complicated indexing, with fewer redundant flops
• Sequential implementation
  – Depends on whether vectors fit in fast memory
• Reordering rows, columns of A
  – Important in parallel and sequential cases
  – Can be reduced to a pair of Traveling Salesman Problems
• Plus all the optimizations for one SpMV!

Page 92: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Summary
• Communication-Avoiding Linear Algebra (CALA)
• Lots of related work
  – Some going back to the 1960's
  – Reports discuss this comprehensively, not here
• Our contributions
  – Several new algorithms, improvements on old ones
  – Unifying parallel and sequential approaches to avoiding communication
  – The time for these algorithms has come, because of growing communication costs
  – Why avoid communication just for linear algebra motifs?

Page 93: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Possible Class Projects

• Come to BeBOP meetings (T 9 - 10:30, 606 Soda)
• Incorporate multicore optimizations into OSKI
• Experiment with SpMV on GPU
  – Which optimizations are most effective?
• Try to speed up particular matrices of interest
  – Data mining
• Experiment with the new [x, Ax, …, A^k x] kernel
  – GPU, multicore, distributed memory
  – On matrices of interest
  – Experiment with new solvers using this kernel

Page 94: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Extra Slides

Page 95: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Tuning Higher Level Algorithms

• So far we have tuned a single sparse matrix kernel
• y = A^T*A*x motivated by a higher-level algorithm (SVD)
• What can we do by extending tuning to a higher level?
• Consider Krylov subspace methods for Ax=b, Ax=λx
  – Conjugate Gradients (CG), GMRES, Lanczos, …
  – Inner loop does y=A*x, dot products, saxpys, scalar ops
  – Inner loop costs at least O(1) messages
  – k iterations cost at least O(k) messages
• Our goal: show how to do k iterations with O(1) messages
  – Possible payoff: make Krylov subspace methods much faster on machines with slow networks
  – Memory bandwidth improvements too (not discussed)
  – Obstacles: numerical stability, preconditioning, …

Page 96: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Krylov Subspace Methods for Solving Ax=b

• Compute a basis for a subspace V by doing y = A*x k times

• Find “best” solution in that Krylov subspace V

• Given starting vector x1, V spanned by x2 = A*x1, x3 = A*x2 , … , xk = A*xk-1

• GMRES finds an orthogonal basis of V by Gram-Schmidt, so it actually does a different set of SpMVs than in last bullet

Page 97: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Standard GMRES

r = b - A*x1,  β = length(r),  v1 = r / β        … length(r) = sqrt( Σ ri^2 )
for m = 1 to k do
  w = A*vm                                       … at least O(1) messages
  for i = 1 to m do                              … Gram-Schmidt
    him = dotproduct(vi, w)                      … at least O(1) messages, or log(p)
    w = w – him * vi
  end for
  hm+1,m = length(w)                             … at least O(1) messages, or log(p)
  vm+1 = w / hm+1,m
end for
find y minimizing length( Hk * y – e1 )          … small, local problem
new x = x1 + Vk * y                              … Vk = [ v1, v2, …, vk ]

O(k^2), or O(k^2 log p), messages altogether

Page 98: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Latency-Avoiding GMRES (1)

r = b - A*x1,  β = length(r),  v1 = r / β                … O(log p) messages
Wk+1 = [ v1, A * v1, A^2 * v1, … , A^k * v1 ]            … O(1) messages
[Q, R] = qr(Wk+1)                                        … QR decomposition, O(log p) messages
Hk = R(:, 2:k+1) * (R(1:k,1:k))^(-1)                     … small, local problem
find y minimizing length( Hk * y – e1 )                  … small, local problem
new x = x1 + Qk * y                                      … local problem

O(log p) messages altogether – independent of k

Page 99: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Latency-Avoiding GMRES (2)

[Q, R] = qr(Wk+1)   … QR decomposition, O(log p) messages

Easy, but not so stable way to do it:

X(myproc) = Wk+1^T(myproc) * Wk+1(myproc)     … local computation
Y = sum_reduction( X(myproc) )                … O(log p) messages
                                              … Y = Wk+1^T * Wk+1
R = (cholesky(Y))^T                           … small, local computation
Q(myproc) = Wk+1(myproc) * R^(-1)             … local computation
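A serial C sketch of this "easy but not so stable" Cholesky-QR (in the parallel version, Y would be the sum-reduction of the per-processor W^T W pieces and everything else stays local; names and the column-major layout are illustrative):

#include <math.h>

/* Y = W^T W;  factor Y = R^T R (R upper triangular);  Q = W * R^(-1).
   W and Q are m x k, column-major; R is k x k, column-major (only the
   upper triangle is written). */
void cholesky_qr(int m, int k, const double *W, double *Q, double *R)
{
    /* Y = W^T W, stored in the upper triangle of R */
    for (int j = 0; j < k; j++)
        for (int i = 0; i <= j; i++) {
            double s = 0.0;
            for (int t = 0; t < m; t++) s += W[t + i*m] * W[t + j*m];
            R[i + j*k] = s;
        }
    /* In-place Cholesky so that Y = R^T R */
    for (int i = 0; i < k; i++) {
        double d = R[i + i*k];
        for (int t = 0; t < i; t++) d -= R[t + i*k] * R[t + i*k];
        d = sqrt(d);
        R[i + i*k] = d;
        for (int j = i + 1; j < k; j++) {
            double s = R[i + j*k];
            for (int t = 0; t < i; t++) s -= R[t + i*k] * R[t + j*k];
            R[i + j*k] = s / d;
        }
    }
    /* Q = W * R^(-1): substitute column by column */
    for (int j = 0; j < k; j++) {
        for (int t = 0; t < m; t++) Q[t + j*m] = W[t + j*m];
        for (int i = 0; i < j; i++) {
            double r = R[i + j*k];
            for (int t = 0; t < m; t++) Q[t + j*m] -= r * Q[t + i*m];
        }
        for (int t = 0; t < m; t++) Q[t + j*m] /= R[j + j*k];
    }
}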

Page 100: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Numerical example (1)

Diagonal matrix with n=1000, Aii ranging from 1 down to 10^(-5)

Instability as k grows, after many iterations

Page 101: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Numerical Example (2)
Partial remedy: restarting periodically (every 120 iterations)
Other remedies: high precision, different basis than [ v, A*v, …, A^k*v ]

Page 102: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Operation Counts for [Ax, A^2x, A^3x, …, A^k x] on p procs

Problem                   Per-proc cost   Standard                          Optimized
1D mesh (tridiagonal)     #messages       2k                                2
                          #words sent     2k                                2k
                          #flops          5kn/p                             5kn/p + 5k^2
                          memory          (k+4)n/p                          (k+4)n/p + 8k
3D mesh (27-pt stencil)   #messages       26k                               26
                          #words sent     6kn^2 p^(-2/3) + 12kn p^(-1/3)    6kn^2 p^(-2/3) + 12k^2 n p^(-1/3)
                                            + O(k)                            + O(k^3)
                          #flops          53kn^3/p                          53kn^3/p + O(k^2 n^2 p^(-2/3))
                          memory          (k+28)n^3/p + 6n^2 p^(-2/3)       (k+28)n^3/p + 168k n^2 p^(-2/3)
                                            + O(n p^(-1/3))                   + O(k^2 n p^(-1/3))

Page 103: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Summary and Future Work

• Dense
  – LAPACK
  – ScaLAPACK
• Communication primitives
• Sparse
  – Kernels, stencils
  – Higher level algorithms
• All of the above on new architectures
  – Vector, SMPs, multicore, Cell, …
• High level support for tuning
  – Specification language
  – Integration into compilers

Page 104: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Extra Slides

Page 105: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

A Sparse Matrix You Encounter Every Day

Who am I?

I am aBig Repository

Of usefulAnd uselessFacts alike.

Who am I?

(Hint: Not your e-mail inbox.)

Page 106: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

What about the Google Matrix?

• Google approach
  – Approx. once a month: rank all pages using connectivity structure
    • Find dominant eigenvector of a matrix
  – At query-time: return list of pages ordered by rank
• Matrix: A = αG + (1-α)(1/n)uu^T
  – Markov model: surfer follows a link with probability α, jumps to a random page with probability 1-α
  – G is an n x n connectivity matrix [n ~ billions]
    • gij is non-zero if page i links to page j
    • Normalized so each column sums to 1
    • Very sparse: about 7-8 non-zeros per row (power-law distribution)
  – u is a vector of all 1 values
  – Steady-state probability xi of landing on page i is the solution to x = Ax
• Approximate x by the power method: x = A^k x0 (sketch below)
  – In practice, k ≈ 25
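A sketch of one power-method step with the Google matrix applied implicitly: since u is the all-ones vector, A·x = αG·x + (1-α)·(Σ_i x_i / n)·u, so the dense rank-1 part is never formed. G is taken in CSR form here; dangling-node handling and normalization details are omitted, and names are illustrative.

/* One power-method step: y = alpha*G*x + (1 - alpha)*(sum_i x_i / n)*u,
   with G (ptr, ind, gval) stored in CSR and column-stochastic as above. */
void pagerank_step(int n, const int *ptr, const int *ind,
                   const double *gval, double alpha,
                   const double *x, double *y)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += x[i];
    double teleport = (1.0 - alpha) * s / n;   /* contribution of the rank-1 part */

    for (int i = 0; i < n; i++) {
        double yi = 0.0;
        for (int k = ptr[i]; k < ptr[i+1]; k++)
            yi += gval[k] * x[ind[k]];
        y[i] = alpha * yi + teleport;
    }
}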

Page 107: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Current Work

• Public software release
• Impact on library designs: Sparse BLAS, Trilinos, PETSc, …
• Integration in large-scale applications
  – DOE: accelerator design; plasma physics
  – Geophysical simulation based on Block Lanczos (A^T A*X; LBL)
• Systematic heuristics for data structure selection?
• Evaluation of emerging architectures
  – Revisiting vector micros
• Other sparse kernels
  – Matrix triple products, A^k*x
• Parallelism
• Sparse benchmarks (with UTK) [Gahvari & Hoemmen]
• Automatic tuning of MPI collective ops [Nishtala, et al.]

Page 108: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Summary of High-Level Themes

• "Kernel-centric" optimization
  – Vs. basic block, trace, path optimization, for instance
  – Aggressive use of domain-specific knowledge
• Performance bounds modeling
  – Evaluating software quality
  – Architectural characterizations and consequences
• Empirical search
  – Hybrid on-line/run-time models
• Statistical performance models
  – Exploit information from sampling, measuring

Page 109: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Related Work

• My bibliography: 337 entries so far
• Sample area 1: Code generation
  – Generative & generic programming
  – Sparse compilers
  – Domain-specific generators
• Sample area 2: Empirical search-based tuning
  – Kernel-centric
    • linear algebra, signal processing, sorting, MPI, …
  – Compiler-centric
    • profiling + FDO, iterative compilation, superoptimizers, self-tuning compilers, continuous program optimization

Page 110: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Future Directions (A Bag of Flaky Ideas)

• Composable code generators and search spaces
• New application domains
  – PageRank: multilevel block algorithms for topic-sensitive search?
• New kernels: cryptokernels
  – Rich mathematical structure germane to performance; lots of hardware
• New tuning environments
  – Parallel, Grid, "whole systems"
• Statistical models of application performance
  – Statistical learning of concise parametric models from traces for architectural evaluation
  – Compiler/automatic derivation of parametric models

Page 111: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Possible Future Work

• Different architectures
  – New FP instruction sets (SSE2)
  – SMP / multicore platforms
  – Vector architectures
• Different kernels
• Higher-level algorithms
  – Parallel versions of kernels, with optimized communication
  – Block algorithms (e.g. Lanczos)
  – XBLAS (extra precision)
• Produce benchmarks
  – Augment HPCC Benchmark
• Make it possible to combine optimizations easily for any kernel
• Related tuning activities (LAPACK & ScaLAPACK)

Page 112: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Review of Tuning by Illustration

(Extra Slides)

Page 113: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Splitting for Variable Blocks and Diagonals

• Decompose A = A1 + A2 + … + At
  – Detect "canonical" structures (sampling)
  – Split
  – Tune each Ai
  – Improve performance and save storage
• New data structures
  – Unaligned block CSR
    • Relax alignment in rows & columns
  – Row-segmented diagonals

Page 114: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Variable Block Row (Matrix #12)

2.1x over CSR, 1.8x over RB

Page 115: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Row-Segmented Diagonals

2x over CSR

Page 116: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Mixed Diagonal and Block Structure

Page 117: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Summary

• Automated block size selection
  – Empirical modeling and search
  – Register blocking for SpMV, triangular solve, A^T A*x
• Not fully automated
  – Given a matrix, select splittings and transformations
• Lots of combinatorial problems
  – TSP reordering to create dense blocks (Pinar '97; Moon, et al. '04)

Page 118: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Extra Slides

Page 119: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

A Sparse Matrix You Encounter Every Day

Who am I?

I am aBig Repository

Of usefulAnd uselessFacts alike.

Who am I?

(Hint: Not your e-mail inbox.)

Page 120: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Problem Context

• Sparse kernels abound
  – Models of buildings, cars, bridges, economies, …
  – Google PageRank algorithm
• Historical trends
  – Sparse matrix-vector multiply (SpMV): 10% of peak
  – 2x faster with "hand-tuning"
  – Tuning becoming more difficult over time
  – Promise of automatic tuning: PHiPAC/ATLAS, FFTW, …
• Challenges to high performance
  – Not dense linear algebra!
  – Complex data structures: indirect, irregular memory access
  – Performance depends strongly on run-time inputs

Page 121: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Key Questions, Ideas, Conclusions

• How to tune basic sparse kernels automatically?
  – Empirical modeling and search
    • Up to 4x speedups for SpMV
    • 1.8x for triangular solve
    • 4x for A^T A*x; 2x for A^2*x
    • 7x for multiple vectors
• What are the fundamental limits on performance?
  – Kernel-, machine-, and matrix-specific upper bounds
    • Achieve 75% or more for SpMV, limiting low-level tuning
    • Consequences for architecture?
• General techniques for empirical search-based tuning?
  – Statistical models of performance

Page 122: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Road Map

• Sparse matrix-vector multiply (SpMV) in a nutshell
• Historical trends and the need for search
• Automatic tuning techniques
• Upper bounds on performance
• Statistical models of performance

Page 123: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Compressed Sparse Row (CSR) Storage

Matrix-vector multiply kernel: y(i) ← y(i) + A(i,j)*x(j)

for each row i
  for k = ptr[i] to ptr[i+1]-1 do
    y[i] = y[i] + val[k]*x[ind[k]]

Page 124: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Road Map

• Sparse matrix-vector multiply (SpMV) in a nutshell
• Historical trends and the need for search
• Automatic tuning techniques
• Upper bounds on performance
• Statistical models of performance

Page 125: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Historical Trends in SpMV Performance

• The data
  – Uniprocessor SpMV performance since 1987
  – "Untuned" and "Tuned" implementations
  – Cache-based superscalar micros; some vectors
  – LINPACK

Page 126: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

SpMV Historical Trends: Mflop/s

Page 127: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: The Difficulty of Tuning

• n = 21216
• nnz = 1.5 M
• Kernel: SpMV
• Source: NASA structural analysis problem

Page 128: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Still More Surprises

• More complicated non-zero structure in general

Page 129: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Still More Surprises

• More complicated non-zero structure in general

• Example: 3x3 blocking
  – Logical grid of 3x3 cells

Page 130: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Historical Trends: Mixed News

• Observations
  – Good news: Moore's law-like behavior
  – Bad news: "Untuned" is 10% of peak or less, and worsening
  – Good news: "Tuned" roughly 2x better today, and improving
  – Bad news: tuning is complex
  – (Not really news: SpMV is not LINPACK)
• Questions
  – Application: automatic tuning?
  – Architect: what machines are good for SpMV?

Page 131: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Road Map

• Sparse matrix-vector multiply (SpMV) in a nutshell
• Historical trends and the need for search
• Automatic tuning techniques
  – SpMV [SC'02; IJHPCA '04b]
  – Sparse triangular solve (SpTS) [ICS/POHLL '02]
  – A^T A*x [ICCS/WoPLA '03]
• Upper bounds on performance
• Statistical models of performance

Page 132: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

SPARSITY: Framework for Tuning SpMV

• SPARSITY: Automatic tuning for SpMV [Im & Yelick '99]
  – General approach
    • Identify and generate implementation space
    • Search space using empirical models & experiments
  – Prototype library and heuristic for choosing register block size
  – Also: cache-level blocking, multiple vectors
• What's new?
  – New block size selection heuristic
    • Within 10% of optimal; replaces previous version
  – Expanded implementation space
    • Variable block splitting, diagonals, combinations
  – New kernels: sparse triangular solve, A^T A*x, A^2*x

Page 133: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Automatic Register Block Size Selection

• Selecting the r x c block size
  – Off-line benchmark: characterize the machine
    • Precompute Mflops(r,c) using a dense matrix for each r x c
    • Once per machine/architecture
  – Run-time "search": characterize the matrix
    • Sample A to estimate Fill(r,c) for each r x c
  – Run-time heuristic model
    • Choose r, c to maximize Mflops(r,c) / Fill(r,c)
• Run-time costs
  – Up to ~40 SpMVs (empirical worst case)

Page 134: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Accuracy of the Tuning Heuristics (1/4)

NOTE: “Fair” flops used (ops on explicit zeros not counted as “work”)

DGEMV

Page 135: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Accuracy of the Tuning Heuristics (2/4), DGEMV

Page 136: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Accuracy of the Tuning Heuristics (3/4), DGEMV

Page 137: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Accuracy of the Tuning Heuristics (4/4), DGEMV

Page 138: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Road Map

• Sparse matrix-vector multiply (SpMV) in a nutshell
• Historical trends and the need for search
• Automatic tuning techniques
• Upper bounds on performance

– SC’02

• Statistical models of performance

Page 139: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Motivation for Upper Bounds Model

• Questions
  – Speedups are good, but what is the speed limit?
    • Independent of instruction scheduling, selection
  – What machines are "good" for SpMV?

Page 140: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Upper Bounds on Performance: Blocked SpMV

• P = (flops) / (time)
  – Flops = 2 * nnz(A)
• Lower bound on time: two main assumptions
  – 1. Count memory ops only (streaming)
  – 2. Count only compulsory, capacity misses: ignore conflicts
• Account for line sizes
• Account for matrix size and nnz
• Charge minimum access "latency" α_i at the Li cache, α_mem at memory
  – e.g., Saavedra-Barrera and PMaC MAPS benchmarks

Time ≥ Σ_i Hits_i · α_i + Hits_mem · α_mem
     = Loads · α_1 + Σ_i Misses_i · (α_(i+1) - α_i) + Misses_mem · (α_mem - α_last)

Page 141: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Bounds on Itanium 2

Page 142: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Bounds on Itanium 2

Page 143: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Bounds on Itanium 2

Page 144: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Fraction of Upper Bound Across Platforms

Page 145: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Achieved Performance and Machine Balance

• Machine balance [Callahan ’88; McCalpin ’95]– Balance = Peak Flop Rate / Bandwidth (flops / double)

• Ideal balance for mat-vec: 2 flops / double– For SpMV, even less

• SpMV ~ streaming– 1 / (avg load time to stream 1 array) ~ (bandwidth)– “Sustained” balance = peak flops / model bandwidth

Time = (Loads - Misses_1) · α_1 + Σ_i (Misses_i - Misses_{i+1}) · α_{i+1} + Misses_κ · α_mem   (time model from the upper-bounds slide)
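As a worked example of the balance idea, with illustrative numbers that are not from the slides: a machine with 4 Gflop/s peak and 4 GB/s sustained bandwidth streams 0.5 Gdoubles/s, so its balance is 8 flops/double, while streaming mat-vec can only use about 2 flops per matrix double.

    /* Illustrative balance calculation (assumed numbers, not measured data). */
    #include <stdio.h>

    int main(void)
    {
        double peak_gflops = 4.0;                /* assumed peak flop rate       */
        double bw_gdoubles = 4.0 / 8.0;          /* assumed 4 GB/s, in doubles/s */
        double balance     = peak_gflops / bw_gdoubles;   /* flops per double    */
        double spmv_bound  = 2.0 * bw_gdoubles;  /* ~2 flops per matrix double   */
        printf("balance = %g flops/double, SpMV bound ~ %g Gflop/s\n",
               balance, spmv_bound);
        return 0;
    }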

Page 146: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09
Page 147: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Where Does the Time Go?

• Most time assigned to memory• Caches “disappear” when line sizes are equal

– Strictly increasing line sizes

Time = Σ_i Hits_i · α_i + Hits_mem · α_mem

Page 148: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Execution Time Breakdown: Matrix 40

Page 149: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Speedups with Increasing Line Size

Page 150: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Summary: Performance Upper Bounds

• What is the best we can do for SpMV?– Limits to low-level tuning of blocked implementations– Refinements?

• What machines are good for SpMV?– Partial answer: balance characterization

• Architectural consequences?– Example: Strictly increasing line sizes

Page 151: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Road Map

• Sparse matrix-vector multiply (SpMV) in a nutshell• Historical trends and the need for search• Automatic tuning techniques• Upper bounds on performance• Tuning other sparse kernels• Statistical models of performance

– FDO ’00; IJHPCA ’04a

Page 152: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Statistical Models for Automatic Tuning

• Idea 1: Statistical criterion for stopping a search– A general search model

• Generate implementation• Measure performance• Repeat

– Stop when the probability of being within ε of optimal falls below a threshold

• Can estimate distribution on-line

• Idea 2: Statistical performance models– Problem: Choose 1 among m implementations at run-

time– Sample performance off-line, build statistical model

Page 153: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Select a Matmul Implementation

Page 154: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Support Vector Classification

Page 155: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Road Map

• Sparse matrix-vector multiply (SpMV) in a nutshell• Historical trends and the need for search• Automatic tuning techniques• Upper bounds on performance• Tuning other sparse kernels• Statistical models of performance• Summary and Future Work

Page 156: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Summary of High-Level Themes

• “Kernel-centric” optimization– Vs. basic block, trace, path optimization, for instance– Aggressive use of domain-specific knowledge

• Performance bounds modeling– Evaluating software quality– Architectural characterizations and consequences

• Empirical search– Hybrid on-line/run-time models

• Statistical performance models– Exploit information from sampling, measuring

Page 157: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Related Work

• My bibliography: 337 entries so far• Sample area 1: Code generation

– Generative & generic programming– Sparse compilers– Domain-specific generators

• Sample area 2: Empirical search-based tuning– Kernel-centric

• linear algebra, signal processing, sorting, MPI, …

– Compiler-centric• profiling + FDO, iterative compilation, superoptimizers,

self-tuning compilers, continuous program optimization

Page 158: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Future Directions (A Bag of Flaky Ideas)

• Composable code generators and search spaces• New application domains

– PageRank: multilevel block algorithms for topic-sensitive search?

• New kernels: cryptokernels– rich mathematical structure germane to performance; lots

of hardware

• New tuning environments– Parallel, Grid, “whole systems”

• Statistical models of application performance– Statistical learning of concise parametric models from

traces for architectural evaluation– Compiler/automatic derivation of parametric models

Page 159: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Acknowledgements

• Super-advisors: Jim and Kathy• Undergraduate R.A.s: Attila, Ben, Jen, Jin,

Michael, Rajesh, Shoaib, Sriram, Tuyet-Linh• See pages xvi—xvii of dissertation.

Page 160: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

TSP-based Reordering: Before

(Pinar ’97;Moon, et al ‘04)

Page 161: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

TSP-based Reordering: After

(Pinar ’97;Moon, et al ‘04)

Up to 2x speedups over CSR

Page 162: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: L2 Misses on Itanium 2

Misses measured using PAPI [Browne ’00]

Page 163: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Distribution of Blocked Non-Zeros

Page 164: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Sparse/Dense Partitioning for SpTS

• Partition L into sparse (L1,L2) and dense LD:

    [ L1   0  ] [ x1 ]   [ b1 ]
    [ L2   LD ] [ x2 ] = [ b2 ]

• Perform SpTS in three steps:

(1) Solve L1 · x1 = b1          (sparse triangular solve)
(2) b2_hat = b2 - L2 · x1       (sparse matrix-vector multiply)
(3) Solve LD · x2 = b2_hat      (dense triangular solve, DTRSV)

• Sparsity optimizations for (1)—(2); DTRSV for (3)• Tuning parameters: block size, size of dense triangle
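A minimal serial sketch of the three-step solve, assuming L1 and L2 are in CSR (with the diagonal stored last in each row of L1) and LD is a dense row-major lower triangle; the layout and names are illustrative, not the tuned Sparsity/OSKI code.

    /* x and b have length n1+n2; x1 = x[0..n1-1], x2 = x[n1..n1+n2-1]. */
    void spts_split(int n1, int n2,
                    const int *L1p, const int *L1i, const double *L1v, /* L1, CSR */
                    const int *L2p, const int *L2i, const double *L2v, /* L2, CSR */
                    const double *LD,                                  /* dense   */
                    const double *b, double *x)
    {
        /* (1) Sparse triangular solve L1*x1 = b1 (diagonal last in each row) */
        for (int i = 0; i < n1; i++) {
            double s = b[i];
            for (int p = L1p[i]; p < L1p[i+1] - 1; p++)
                s -= L1v[p] * x[L1i[p]];
            x[i] = s / L1v[L1p[i+1] - 1];
        }
        /* (2) b2_hat = b2 - L2*x1 (sparse mat-vec), stored in the tail of x */
        double *b2 = x + n1;
        for (int i = 0; i < n2; i++) {
            double s = b[n1 + i];
            for (int p = L2p[i]; p < L2p[i+1]; p++)
                s -= L2v[p] * x[L2i[p]];
            b2[i] = s;
        }
        /* (3) Dense triangular solve LD*x2 = b2_hat, in place (DTRSV in practice) */
        for (int i = 0; i < n2; i++) {
            double s = b2[i];
            for (int j = 0; j < i; j++)
                s -= LD[i*n2 + j] * x[n1 + j];
            x[n1 + i] = s / LD[i*n2 + i];
        }
    }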

Page 165: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

SpTS Performance: Power3

Page 166: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09
Page 167: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Summary of SpTS and AAT*x Results

• SpTS — Similar to SpMV– 1.8x speedups; limited benefit from low-level tuning

• AATx, ATAx– Cache interleaving only: up to 1.6x speedups– Reg + cache: up to 4x speedups

• 1.8x speedup over register only

– Similar heuristic; same accuracy (within ~10% of optimal)– Further from upper bounds: 60—80%

• Opportunity for better low-level tuning a la PHiPAC/ATLAS

• Matrix triple products? Ak*x?– Preliminary work

Page 168: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Register Blocking: Speedup

Page 169: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Register Blocking: Performance

Page 170: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Register Blocking: Fraction of Peak

Page 171: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Confidence Interval Estimation

Page 172: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Costs of Tuning

Page 173: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Splitting + UBCSR: Pentium III

Page 174: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Splitting + UBCSR: Power4

Page 175: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Splitting+UBCSR Storage: Power4

Page 176: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09
Page 177: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Variable Block Row (Matrix #13)

Page 178: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09
Page 179: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Preliminary Results (Matrix Set 2): Itanium 2

Matrix categories: Dense, FEM, FEM (var), Bio, Econ, Stat, LP, Web/IR

Page 180: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Multiple Vector Performance

Page 181: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09
Page 182: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

What about the Google Matrix?

• Google approach– Approx. once a month: rank all pages using connectivity

structure• Find dominant eigenvector of a matrix

– At query-time: return a list of pages ordered by rank• Matrix: A = αG + (1-α)(1/n)uuT

– Markov model: surfer follows a link with probability α, jumps to a random page with probability 1-α

– G is the n x n connectivity matrix [n ≈ 3 billion]• gij is non-zero if page i links to page j• Normalized so each column sums to 1• Very sparse: about 7—8 non-zeros per row (power law dist.)

– u is a vector of all 1 values– Steady-state probability xi of landing on page i is the solution to x = Ax

• Approximate x by the power method: x = A^k x0

– In practice, k ≈ 25
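A minimal sketch of one power-method step under these assumptions, with G in CSR and its columns normalized to sum to 1 as described above; the function and variable names are illustrative.

    /* One step x_new = A*x with A = alpha*G + (1-alpha)*(1/n)*u*u^T. */
    void pagerank_step(int n, const int *Gp, const int *Gi, const double *Gv,
                       double alpha, const double *x, double *x_new)
    {
        double s = 0.0;                        /* u^T x = sum of all entries of x */
        for (int j = 0; j < n; j++) s += x[j];
        double jump = (1.0 - alpha) * s / n;   /* random-jump term, same per page */
        for (int i = 0; i < n; i++) {
            double gx = 0.0;                   /* (G*x)_i via CSR row i           */
            for (int p = Gp[i]; p < Gp[i+1]; p++)
                gx += Gv[p] * x[Gi[p]];
            x_new[i] = alpha * gx + jump;
        }
    }
    /* Repeat ~k = 25 times, swapping x and x_new each iteration. */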

Page 183: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09
Page 184: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

MAPS Benchmark Example: Power4

Page 185: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

MAPS Benchmark Example: Itanium 2

Page 186: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Saavedra-Barrera Example: Ultra 2i

Page 187: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Execution Time Breakdown (PAPI): Matrix 40

Page 188: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Preliminary Results (Matrix Set 1): Itanium 2

Matrix categories: Dense, FEM, FEM (var), Assorted, LP

Page 189: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Tuning Sparse Triangular Solve (SpTS)

• Compute x=L-1*b where L sparse lower triangular, x & b dense

• L from sparse LU has rich dense substructure– Dense trailing triangle can account for 20—90% of

matrix non-zeros

• SpTS optimizations– Split into sparse trapezoid and dense trailing triangle– Use tuned dense BLAS (DTRSV) on dense triangle– Use Sparsity register blocking on sparse part

• Tuning parameters– Size of dense trailing triangle– Register block size

Page 190: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Sparse Kernels and Optimizations

• Kernels– Sparse matrix-vector multiply (SpMV): y=A*x– Sparse triangular solve (SpTS): x=T-1*b– y=AAT*x, y=ATA*x– Powers (y=Ak*x), sparse triple-product (R*A*RT), …

• Optimization techniques (implementation space)– Register blocking– Cache blocking– Multiple dense vectors (x)– A has special structure (e.g., symmetric, banded, …)– Hybrid data structures (e.g., splitting, switch-to-

dense, …)– Matrix reordering

• How and when do we search?– Off-line: Benchmark implementations– Run-time: Estimate matrix properties, evaluate

performance models based on benchmark data

Page 191: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Cache Blocked SpMV on LSI Matrix: Ultra 2i

A: 10k x 255k, 3.7M non-zeros

Baseline: 16 Mflop/s

Best block size & performance: 16k x 64k, 28 Mflop/s

Page 192: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Cache Blocking on LSI Matrix: Pentium 4

A: 10k x 255k, 3.7M non-zeros

Baseline: 44 Mflop/s

Best block size & performance: 16k x 16k, 210 Mflop/s

Page 193: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Cache Blocked SpMV on LSI Matrix: Itanium

A: 10k x 255k, 3.7M non-zeros

Baseline: 25 Mflop/s

Best block size & performance: 16k x 32k, 72 Mflop/s

Page 194: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Cache Blocked SpMV on LSI Matrix: Itanium 2

A: 10k x 255k, 3.7M non-zeros

Baseline: 170 Mflop/s

Best block size & performance: 16k x 65k, 275 Mflop/s

Page 195: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Inter-Iteration Sparse Tiling (1/3)

• [Strout, et al., ‘01]• Let A be 6x6 tridiagonal• Consider y=A2x

– t=Ax, y=At

• Nodes: vector elements• Edges: matrix elements

[Figure: dependence graph; nodes are vector elements x1…x5, t1…t5, y1…y5, and edges are matrix elements aij]

Page 196: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Inter-Iteration Sparse Tiling (2/3)

• [Strout, et al., ‘01]• Let A be 6x6 tridiagonal• Consider y=A2x

– t=Ax, y=At

• Nodes: vector elements• Edges: matrix elements


• Orange = everything needed to compute y1

– Reuse a11, a12


Page 197: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Inter-Iteration Sparse Tiling (3/3)

• [Strout, et al., ‘01]• Let A be 6x6 tridiagonal• Consider y=A2x

– t=Ax, y=At

• Nodes: vector elements• Edges: matrix elements


• Orange = everything needed to compute y1

– Reuse a11, a12

• Grey = y2, y3

– Reuse a23, a33, a43

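Putting the three panels together for the tridiagonal example, a minimal serial sketch of the fused computation keeps only a three-element window of t = Ax and reuses each row of A across both products; the diagonals dl, d, du and the interface are illustrative, not the code from the cited work.

    /* y = A*(A*x) for tridiagonal A: dl[i]*x[i-1] + d[i]*x[i] + du[i]*x[i+1],
       with dl[0] and du[n-1] unused. t is never fully materialized.          */
    void tridiag_Asq_x(int n, const double *dl, const double *d, const double *du,
                       const double *x, double *y)
    {
        double t_prev = 0.0, t_curr, t_next;
        t_curr = d[0]*x[0] + (n > 1 ? du[0]*x[1] : 0.0);          /* t[0] */
        for (int i = 0; i < n; i++) {
            if (i + 1 < n)                                        /* t[i+1] */
                t_next = dl[i+1]*x[i] + d[i+1]*x[i+1]
                       + (i + 2 < n ? du[i+1]*x[i+2] : 0.0);
            else
                t_next = 0.0;
            /* y[i] reuses the same row entries that produced t[i] one step ago */
            y[i] = (i > 0 ? dl[i]*t_prev : 0.0) + d[i]*t_curr
                 + (i + 1 < n ? du[i]*t_next : 0.0);
            t_prev = t_curr;
            t_curr = t_next;
        }
    }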

Page 198: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Inter-Iteration Sparse Tiling: Issues

• Tile sizes (colored regions) grow with no. of iterations and increasing out-degree– G likely to have a few

nodes with high out-degree (e.g., Yahoo)

• Mathematical tricks to limit tile size?– Judicious dropping of

edges [Ng’01]


Page 199: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Summary and Questions

• Need to understand matrix structure and machine– BeBOP: suite of techniques to deal with different sparse

structures and architectures• Google matrix problem

– Established techniques within an iteration– Ideas for inter-iteration optimizations– Mathematical structure of problem may help

• Questions– Structure of G?– What are the computational bottlenecks?– Enabling future computations?

• E.g., topic-sensitive PageRank multiple vector version [Haveliwala ’02]

– See www.cs.berkeley.edu/~richie/bebop/intel/google for more info, including more complete Itanium 2 results.

Page 200: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Exploiting Matrix Structure

• Symmetry (numerical or structural)– Reuse matrix entries (see the symmetric SpMV sketch after this list)– Can combine with register blocking, multiple vectors, …

• Matrix splitting– Split the matrix, e.g., into r x c and 1 x 1– No fill overhead

• Large matrices with random structure– E.g., Latent Semantic Indexing (LSI) matrices– Technique: cache blocking

• Store matrix as 2^i x 2^j sparse submatrices• Effective when the x vector is large• Currently, search to find the fastest size
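A minimal sketch of the symmetry optimization from the first bullet, assuming only the diagonal and strict upper triangle of A are stored in CSR (names illustrative): each stored off-diagonal entry updates both y_i and y_j, roughly halving the matrix data fetched from memory.

    /* y = A*x for symmetric A, upper triangle + diagonal stored in CSR. */
    void sym_spmv(int n, const int *Ap, const int *Ai, const double *Av,
                  const double *x, double *y)
    {
        for (int i = 0; i < n; i++) y[i] = 0.0;
        for (int i = 0; i < n; i++) {
            for (int p = Ap[i]; p < Ap[i+1]; p++) {
                int j = Ai[p];
                double a = Av[p];
                y[i] += a * x[j];
                if (j != i)            /* reuse the entry for the mirrored a_ji */
                    y[j] += a * x[i];
            }
        }
    }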

Page 201: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Symmetric SpMV Performance: Pentium 4

Page 202: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

SpMV with Split Matrices: Ultra 2i

Page 203: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Cache Blocking on Random Matrices: Itanium

Speedup on four banded random matrices.

Page 204: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Sparse Kernels and Optimizations

• Kernels– Sparse matrix-vector multiply (SpMV): y=A*x– Sparse triangular solve (SpTS): x=T-1*b– y=AAT*x, y=ATA*x– Powers (y=Ak*x), sparse triple-product (R*A*RT), …

• Optimization techniques (implementation space)– Register blocking– Cache blocking– Multiple dense vectors (x)– A has special structure (e.g., symmetric, banded, …)– Hybrid data structures (e.g., splitting, switch-to-dense, …)– Matrix reordering

• How and when do we search?– Off-line: Benchmark implementations– Run-time: Estimate matrix properties, evaluate

performance models based on benchmark data

Page 205: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Register Blocked SpMV: Pentium III

Page 206: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Register Blocked SpMV: Ultra 2i

Page 207: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Register Blocked SpMV: Power3

Page 208: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Register Blocked SpMV: Itanium

Page 209: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Possible Optimization Techniques

• Within an iteration, i.e., computing (G+uuT)*x once– Cache block G*x

• On linear programming matrices and matrices with random structure (e.g., LSI), 1.5—4x speedups

• Best block size is matrix and machine dependent

– Reordering and/or splitting of G to separate dense structure (rows, columns, blocks)

• Between iterations, e.g., (G+uuT)2x– (G+uuT)2x = G2x + (Gu)uTx + u(uTG)x + u(uTu)uTx

• Compute Gu, uTG, uTu once for all iterations• G2x: Inter-iteration tiling to read G only once

Page 210: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Multiple Vector Performance: Itanium

Page 211: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Sparse Kernels and Optimizations

• Kernels– Sparse matrix-vector multiply (SpMV): y=A*x– Sparse triangular solve (SpTS): x=T-1*b– y=AAT*x, y=ATA*x– Powers (y=Ak*x), sparse triple-product (R*A*RT), …

• Optimization techniques (implementation space)– Register blocking– Cache blocking– Multiple dense vectors (x)– A has special structure (e.g., symmetric, banded, …)– Hybrid data structures (e.g., splitting, switch-to-

dense, …)– Matrix reordering

• How and when do we search?– Off-line: Benchmark implementations– Run-time: Estimate matrix properties, evaluate

performance models based on benchmark data

Page 212: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

SpTS Performance: Itanium

(See POHLL ’02 workshop paper, at ICS ’02.)

Page 213: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Sparse Kernels and Optimizations

• Kernels– Sparse matrix-vector multiply (SpMV): y=A*x– Sparse triangular solve (SpTS): x=T-1*b– y=AAT*x, y=ATA*x– Powers (y=Ak*x), sparse triple-product (R*A*RT), …

• Optimization techniques (implementation space)– Register blocking– Cache blocking– Multiple dense vectors (x)– A has special structure (e.g., symmetric, banded, …)– Hybrid data structures (e.g., splitting, switch-to-dense, …)– Matrix reordering

• How and when do we search?– Off-line: Benchmark implementations– Run-time: Estimate matrix properties, evaluate

performance models based on benchmark data

Page 214: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Optimizing AAT*x

• Kernel: y=AAT*x, where A is sparse, x & y dense– Arises in linear programming, computation of SVD– Conventional implementation: compute z=AT*x, y=A*z

• Elements of A can be reused:

y = A·AT·x = [a1 … an] · [a1T; …; anT] · x = Σ_{k=1..n} ak (akT x),

where ak denotes the k-th column of A.

• When ak represent blocks of columns, can apply register blocking.
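A minimal sketch of the reuse, assuming A is stored by columns (CSC) so that each column a_k is fetched once and used both for the dot product t = akT·x and for the update y += t·ak; the names are illustrative.

    /* y = A*A^T*x with A (m x n) in CSC: colptr, rowind, val. */
    void aat_x_csc(int m, int n, const int *colptr, const int *rowind,
                   const double *val, const double *x, double *y)
    {
        for (int i = 0; i < m; i++) y[i] = 0.0;
        for (int k = 0; k < n; k++) {
            double t = 0.0;
            for (int p = colptr[k]; p < colptr[k+1]; p++)   /* t = a_k^T * x */
                t += val[p] * x[rowind[p]];
            for (int p = colptr[k]; p < colptr[k+1]; p++)   /* y += t * a_k  */
                y[rowind[p]] += t * val[p];
        }
    }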

Page 215: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Optimized AAT*x Performance: Pentium III

Page 216: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Current Directions

• Applying new optimizations– Other split data structures (variable block, diagonal,

…)– Matrix reordering to create block structure– Structural symmetry

• New kernels (triple product RART, powers Ak, …)• Tuning parameter selection• Building an automatically tuned sparse matrix

library– Extending the Sparse BLAS– Leverage existing sparse compilers as code

generation infrastructure– More thoughts on this topic tomorrow

Page 217: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Related Work

• Automatic performance tuning systems– PHiPAC [Bilmes, et al., ’97], ATLAS [Whaley & Dongarra

’98]– FFTW [Frigo & Johnson ’98], SPIRAL [Pueschel, et al.,

’00], UHFFT [Mirkovic and Johnsson ’00]– MPI collective operations [Vadhiyar & Dongarra ’01]

• Code generation– FLAME [Gunnels & van de Geijn, ’01]– Sparse compilers: [Bik ’99], Bernoulli [Pingali, et al., ’97]– Generic programming: Blitz++ [Veldhuizen ’98], MTL

[Siek & Lumsdaine ’98], GMCL [Czarnecki, et al. ’98], …

• Sparse performance modeling– [Temam & Jalby ’92], [White & Saddayappan ’97],

[Navarro, et al., ’96], [Heras, et al., ’99], [Fraguela, et al., ’99], …

Page 218: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

More Related Work

• Compiler analysis, models– CROPS [Carter, Ferrante, et al.]; Serial sparse tiling

[Strout ’01]– TUNE [Chatterjee, et al.]– Iterative compilation [O’Boyle, et al., ’98]– Broadway compiler [Guyer & Lin, ’99]– [Brewer ’95], ADAPT [Voss ’00]

• Sparse BLAS interfaces– BLAST Forum (Chapter 3)– NIST Sparse BLAS [Remington & Pozo ’94];

SparseLib++– SPARSKIT [Saad ’94]– Parallel Sparse BLAS [Filippone, et al. ’96]

Page 219: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Context: Creating High-Performance Libraries

• Application performance dominated by a few computational kernels

• Today: Kernels hand-tuned by vendor or user• Performance tuning challenges

– Performance is a complicated function of kernel, architecture, compiler, and workload

– Tedious and time-consuming

• Successful automated approaches– Dense linear algebra: ATLAS/PHiPAC– Signal processing: FFTW/SPIRAL/UHFFT

Page 220: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Cache Blocked SpMV on LSI Matrix: Itanium

Page 221: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Sustainable Memory Bandwidth

Page 222: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Multiple Vector Performance: Pentium 4

Page 223: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Multiple Vector Performance: Itanium

Page 224: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Multiple Vector Performance: Pentium 4

Page 225: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Optimized AAT*x Performance: Ultra 2i

Page 226: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Optimized AAT*x Performance: Pentium 4

Page 227: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

High Precision GEMV (XBLAS)

Page 228: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

High Precision Algorithms (XBLAS)• Double-double (High precision word represented as pair of doubles)

– Many variations on these algorithms; we currently use Bailey’s• Exploiting Extra-wide Registers

– Suppose s(1), …, s(n) have f-bit fractions, and SUM has an F>f bit fraction– Consider the following algorithm for S = Σ_{i=1..n} s(i)

• Sort so that |s(1)| ≥ |s(2)| ≥ … ≥ |s(n)|• SUM = 0, for i = 1 to n SUM = SUM + s(i), end for, sum = SUM

– Theorem (D., Hida): Suppose F < 2f (less than double precision)• If n ≤ 2^(F-f) + 1, then the error is ≤ 1.5 ulps• If n = 2^(F-f) + 2, then the error is ≤ 2^(2f-F) ulps (can be ≥ 1)• If n ≥ 2^(F-f) + 3, then the error can be arbitrary (S ≠ 0 but sum = 0)

– Examples• s(i) double (f=53), SUM double extended (F=64)

– accurate if n ≤ 2^11 + 1 = 2049• Dot product of single precision x(i) and y(i)

– s(i) = x(i)*y(i) (f=2*24=48), SUM double extended (F=64) – accurate if n ≤ 2^16 + 1 = 65537
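A minimal sketch of the extra-wide-register summation above, under the assumption that long double maps to the 80-bit extended format (F = 64); with f = 53 this stays within ~1.5 ulps for up to 2^11 + 1 = 2049 summands.

    #include <stdlib.h>
    #include <math.h>

    /* Sort by decreasing magnitude, then accumulate in an extended register. */
    static int cmp_abs_desc(const void *a, const void *b)
    {
        long double x = fabsl(*(const double *)a);
        long double y = fabsl(*(const double *)b);
        return (x < y) - (x > y);            /* descending |s(i)| */
    }

    double sum_extended(double *s, int n)
    {
        qsort(s, n, sizeof(double), cmp_abs_desc);
        long double SUM = 0.0L;              /* assumed F = 64 bit fraction */
        for (int i = 0; i < n; i++)
            SUM += s[i];
        return (double)SUM;
    }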

Page 229: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

More Extra Slides

Page 230: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Awards• Best Paper, Intern. Conf. Parallel Processing, 2004

– “Performance models for evaluation and automatic performance tuning of symmetric sparse matrix-vector multiply”

• Best Student Paper, Intern. Conf. Supercomputing, Workshop on Performance Optimization via High-Level Languages and Libraries, 2003– Best Student Presentation too, to Richard Vuduc– “Automatic performance tuning and analysis of sparse triangular solve”

• Finalist, Best Student Paper, Supercomputing 2002– To Richard Vuduc– “Performance Optimization and Bounds for Sparse Matrix-vector Multiply”

• Best Presentation Prize, MICRO-33: 3rd ACM Workshop on Feedback-Directed Dynamic Optimization, 2000– To Richard Vuduc– “Statistical Modeling of Feedback Data in an Automatic Tuning System”

Page 231: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Accuracy of the Tuning Heuristics (4/4)

Page 232: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Can Match DGEMV Performance

Page 233: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Fraction of Upper Bound Across Platforms

Page 234: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Achieved Performance and Machine Balance

• Machine balance [Callahan ’88; McCalpin ’95]– Balance = Peak Flop Rate / Bandwidth (flops / double)– Lower is better (i.e., can hope to get a higher fraction of peak flop rate)

• Ideal balance for mat-vec: 2 flops / double– For SpMV, even less

Page 235: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09
Page 236: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Where Does the Time Go?

• Most time assigned to memory• Caches “disappear” when line sizes are equal

– Strictly increasing line sizes

Time = Σ_i Hits_i · α_i + Hits_mem · α_mem

Page 237: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Execution Time Breakdown: Matrix 40

Page 238: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Execution Time Breakdown (PAPI): Matrix 40

Page 239: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Speedups with Increasing Line Size

Page 240: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Summary: Performance Upper Bounds

• What is the best we can do for SpMV?– Limits to low-level tuning of blocked implementations– Refinements?

• What machines are good for SpMV?– Partial answer: balance characterization

• Architectural consequences?– Help to have strictly increasing line sizes

Page 241: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Evaluating algorithms and machines for SpMV

• Metrics– Speedups– Mflop/s (“fair” flops)– Fraction of peak

• Questions– Speedups are good, but what is “the best?”

• Independent of instruction scheduling, selection• Can SpMV be further improved or not?

– What machines are “good” for SpMV?

Page 242: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09
Page 243: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Statistical Models for Automatic Tuning

• Idea 1: Statistical criterion for stopping a search– A general search model

• Generate implementation• Measure performance• Repeat

– Stop when the probability of being within ε of optimal falls below a threshold

• Can estimate distribution on-line

• Idea 2: Statistical performance models– Problem: Choose 1 among m implementations at run-

time– Sample performance off-line, build statistical model

Page 244: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

SpMV Historical Trends: Fraction of Peak

Page 245: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Motivation for Automatic Performance Tuning of SpMV

• Historical trends– Sparse matrix-vector multiply (SpMV): 10% of peak or

less

• Performance depends on machine, kernel, matrix– Matrix known at run-time– Best data structure + implementation can be surprising

• Our approach: empirical performance modeling and algorithm search

Page 246: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Sparse CALA Summary

• Optimizing one SpMV too limiting• Take k steps of Krylov subspace method

– GMRES, CG, Lanczos, Arnoldi– Assume matrix “well-partitioned,” with modest surface-to-

volume ratio– Parallel implementation

• Conventional: O(k log p) messages• CALA: O(log p) messages - optimal

– Serial implementation• Conventional: O(k) moves of data from slow to fast memory• CALA: O(1) moves of data – optimal

– Can incorporate some preconditioners• Need to be able to “compress” interactions between distant i, j• Hierarchical, semiseparable matrices …

– Lots of speed up possible (measured and modeled)• Price: some redundant computation

Page 247: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Motivation for Automatic Performance Tuning• Writing high performance software is hard

– Make programming easier while getting high speed

• Ideal: program in your favorite high level language (Matlab, Python, PETSc…) and get a high fraction of peak performance

• Reality: Best algorithm (and its implementation) can depend strongly on the problem, computer architecture, compiler,…– Best choice can depend on knowing a lot of applied

mathematics and computer science

• How much of this can we teach?• How much of this can we automate?

Page 248: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Examples of Automatic Performance Tuning (1)

• Dense BLAS– Sequential– PHiPAC (UCB), then ATLAS (UTK) (used in Matlab)– math-atlas.sourceforge.net/

• Fast Fourier Transform (FFT) & variations– Sequential and Parallel– FFTW (MIT)– www.fftw.org

• Digital Signal Processing– SPIRAL: www.spiral.net (CMU)

• Communication Collectives (UCB, UTK)• Rose (LLNL), Bernoulli (Cornell), Telescoping Languages

(Rice), …• More projects, conferences, government reports, …

Page 249: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Potential Impact on Applications: T3P

• Application: accelerator design [Ko] • 80% of time spent in SpMV• Relevant optimization techniques

– Symmetric storage– Register blocking

• On Single Processor Itanium 2– 1.68x speedup

• 532 Mflops, or 15% of 3.6 GFlop peak– 4.4x speedup with multiple (8) vectors

• 1380 Mflops, or 38% of peak

Page 250: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Examples of Automatic Performance Tuning (2)

• What do dense BLAS, FFTs, signal processing, MPI reductions have in common?– Can do the tuning off-line: once per architecture,

algorithm– Can take as much time as necessary (hours, a week…)– At run-time, algorithm choice may depend only on few

parameters• Matrix dimension, size of FFT, etc.

Page 251: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

SpMV Performance (Matrix #2): Generation 2

Ultra 2i - 9%, Ultra 3 - 5%
Pentium III-M - 15%, Pentium III - 19%

[Plot data labels: 63, 35, 109, 53, 96, 42, 120, 58 Mflop/s]

Page 252: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

SpMV Performance (Matrix #2): Generation 1

Power3 - 13%, Power4 - 14%
Itanium 2 - 31%, Itanium 1 - 7%

[Plot data labels: 195, 100, 703, 469, 225, 103, 276 Mflop/s; 1.1 Gflop/s]

Page 253: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Taking advantage of block structure in SpMV

• Bottleneck is time to get matrix from memory• Don’t store each nonzero with index, instead

store each nonzero r-by-c block with index– Storage drops by up to 2x, if rc >> 1, all 32-bit

quantities– Time to fetch matrix from memory decreases

• Change both data structure and algorithm– Need to pick r and c– Need to change algorithm accordingly

• In example, is r=c=8 best choice?– Minimizes storage, so looks like a good idea…

Page 254: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: L2 Misses on Itanium 2

Misses measured using PAPI [Browne ’00]

Page 255: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Sparse Triangular Factor

• Raefsky4 (structural problem) + SuperLU + colmmd

• N=19779, nnz=12.6 M

Dense trailing triangle: dim=2268, 20% of total nz

Can be as high as 90+%! 1.8x over CSR

Page 256: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Cache Optimizations for AAT*x

• Cache-level: Interleave multiplication by A, AT

– Only fetch A from memory once

• Register-level: choose aiT to be an r x c block row, or a diagonal row

A·AT·x = [a1 … an] · [a1T; …; anT] · x = Σ_{i=1..n} ai (aiT x)

(aiT·x is a dot product; ai·(aiT x) is an “axpy”; both use column ai while it is still in cache)

Page 257: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Combining Optimizations (1/2)

• Register blocking, symmetry, multiple (k) vectors– Three low-level tuning parameters: r, c, v

[Figure: Y += A · X, with A stored in r x c register blocks and the k vectors of X, Y processed v at a time]

Page 258: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Example: Combining Optimizations (2/2)

• Register blocking, symmetry, and multiple vectors [Ben Lee @ UCB]– Symmetric, blocked, 1 vector

• Up to 2.6x over nonsymmetric, blocked, 1 vector

– Symmetric, blocked, k vectors• Up to 2.1x over nonsymmetric, blocked, k vecs.• Up to 7.3x over nonsymmetric, nonblocked, 1 vector

– Symmetric Storage: up to 64.7% savings

Page 259: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

How the OSKI Tunes (Overview)

• At library build/install-time– Pre-generate and compile code variants into dynamic

libraries– Collect benchmark data

• Measures and records speed of possible sparse data structure and code variants on target architecture

– Installation process uses standard, portable GNU AutoTools• At run-time

– Library “tunes” using heuristic models• Models analyze user’s matrix & benchmark data to choose

optimized data structure and code– Non-trivial tuning cost: up to ~40 mat-vecs

• Library limits the time it spends tuning based on estimated workload

– provided by user or inferred by library• User may reduce cost by saving tuning results for application

on future runs with same or similar matrix
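For orientation, here is a sketch of the typical call sequence; the routine names and signatures are recalled from the OSKI 1.0 User's Guide and should be treated as assumptions to check against the actual oski/oski.h headers.

    #include <oski/oski.h>

    /* Create a tunable matrix from CSR data, give a workload hint, tune,
       then call the (possibly re-blocked) SpMV kernel repeatedly.        */
    void spmv_with_oski(int m, int n, int *Aptr, int *Aind, double *Aval,
                        double *x, double *y, int num_calls)
    {
        oski_Init();
        oski_matrix_t A_tun = oski_CreateMatCSR(Aptr, Aind, Aval, m, n,
                                                SHARE_INPUTMAT, 1,
                                                INDEX_ZERO_BASED);
        oski_vecview_t xv = oski_CreateVecView(x, n, STRIDE_UNIT);
        oski_vecview_t yv = oski_CreateVecView(y, m, STRIDE_UNIT);

        oski_SetHintMatMult(A_tun, OP_NORMAL, 1.0, xv, 0.0, yv, num_calls);
        oski_TuneMat(A_tun);                       /* heuristic tuning step */

        for (int i = 0; i < num_calls; i++)        /* y = A*x, tuned kernel */
            oski_MatMult(A_tun, OP_NORMAL, 1.0, xv, 0.0, yv);

        oski_DestroyVecView(xv);
        oski_DestroyVecView(yv);
        oski_DestroyMat(A_tun);
        oski_Close();
    }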

Page 260: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Optimizations in OSKI, so far

• Fully automatic heuristics for– Sparse matrix-vector multiply

• Register-level blocking• Register-level blocking + symmetry + multiple vectors• Cache-level blocking

– Sparse triangular solve with register-level blocking and “switch-to-dense” optimization

– Sparse ATA*x with register-level blocking• User may select other optimizations manually

– Diagonal storage optimizations, reordering, splitting; tiled matrix powers kernel (Ak*x)

– All available in dynamic libraries– Accessible via high-level embedded script language

• “Plug-in” extensibility– Very advanced users may write their own heuristics, create new

data structures/code variants and dynamically add them to the system

Page 261: Automatic Performance Tuning of Sparse-Matrix-Vector-Multiplication (SpMV) and Iterative Sparse Solvers James Demmel demmel/cs267_Spr09

Performance Results• Measured

– Sequential/OOC speedup up to 3x

• Modeled – Sequential/multicore speedup up to 2.5x– Parallel/Petascale speedup up to 6.9x – Parallel/Grid speedup up to 22x

• See bebop.cs.berkeley.edu/#pubs