
Scalable Compression and Replay of Communication Traces in Massively Parallel Environments


Page 1: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

Mike Noeth, Frank Mueller
North Carolina State University

Martin Schulz, Bronis R. de Supinski
Lawrence Livermore National Laboratory

Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

Page 2: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

2

Outline

Introduction

Intra-Node Compression Framework

Inter-Node Compression Framework

Replay Mechanism

Experimental Results

Conclusion and Future Work

Page 3: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

3

Introduction

Contemporary HPC systems
— Size > 1,000 processors
— e.g., IBM Blue Gene/L: 64K processors

Challenges on HPC Systems (large-scale scientific applications)

— Communication scaling (MPI)
— Communication analysis
— Task mapping

Procurements also require performance prediction of future systems

Page 4: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

4

Communication Analysis: existing approaches and their shortcomings

— Source code analysis
  + Does not require machine time
  - Wrong abstraction level: source code is often complicated
  - No dynamic information

— Lightweight statistical analysis (mpiP)
  + Low instrumentation cost
  - Less information available (i.e., aggregate metrics only)

— Fully captured (lossless) traces (Vampir, VNG)
  + Full trace available for offline analysis
  - Traces generated per task: not scalable
  - Gathers traces only on a subset of nodes
  - Uses a cluster of visualization and I/O nodes

Page 5: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

5

Our Approach

Trace-driven approach to analyze MPI communication

Goals
— Extract entire communication trace
— Maintain structure
— Full replay (independent of original application)
— Scalable
— Lossless
— Rapid instrumentation
— MPI implementation independent

Page 6: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

6

Design Overview

Two Parts:

Recording traces
— Use MPI profiling layer
— Compress at the task level
— Compress across all nodes

Replaying traces

Page 7: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

7

Outline

Introduction

Intra-Node Compression Framework

Inter-Node Compression Framework

Replay Mechanism

Experimental Results

Conclusion and Future Work

Page 8: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

8

Intra-Node Compression Framework

Umpire [SC’00] wrapper generator for the MPI profiling layer
— Initialization wrapper
— Tracing wrapper
— Termination wrapper

Intra-node compression of MPI calls
— Provides load scalability

Interoperability with cross-node framework

Event aggregation
— Special handling of MPI_Waitsome

Maintain structure of call sequences via stack walk signatures
— XOR signature for speed
— An XOR match is necessary (but not sufficient) for the same call sequence
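As a point of reference, a tracing wrapper on the MPI profiling layer can look roughly like the minimal sketch below (the record_op hook is a hypothetical name; the actual wrappers are generated by the Umpire wrapper generator):

```c
#include <mpi.h>

/* Hypothetical trace hook: records one operation in the task-local queue,
 * together with its stack-walk signature. */
void record_op(const char *name, int peer, int count, MPI_Datatype type);

/* Tracing wrapper (MPI-3 style signature): the application calls MPI_Send
 * as usual, the profiling layer records the parameters, then forwards the
 * call to the real implementation through the PMPI entry point. */
int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    record_op("MPI_Send", dest, count, datatype);
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}
```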

Page 9: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

9

Intra-Node Compression Example

Consider the MPI operation stream:

op1, op2, op3, op4, op5, op3, op4, op5

[Figure: step-by-step compression of the queue. Target head/tail and merge head/tail pointers delimit the candidate sequence op3, op4, op5; each incoming op is matched against the earlier occurrence until the whole tail matches.]

Page 10: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

10

Intra-Node Compression Example

Consider the MPI operation stream:

op1, op2, op3, op4, op5, op3, op4, op5

Algorithm in the paper

Compressed result: op1, op2, (op3, op4, op5 with iters = 2)
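A minimal sketch of the tail-repeat folding idea (not the paper's full algorithm, which operates on the task-local queue with head/tail pointers; op_t and op_equal are illustrative names):

```c
/* One queue entry; real entries also carry call parameters and the
 * stack-walk signature. */
typedef struct { int id; } op_t;

static int op_equal(const op_t *a, const op_t *b) { return a->id == b->id; }

/* If the last `len` ops of ops[0..n-1] repeat the `len` ops before them,
 * drop the repeated tail and bump the iteration count of the kept copy.
 * Returns the new queue length. */
static int try_fold_tail(op_t *ops, int n, int len, int *iters)
{
    if (n < 2 * len)
        return n;
    for (int i = 0; i < len; i++)
        if (!op_equal(&ops[n - len + i], &ops[n - 2 * len + i]))
            return n;                 /* no repetition: keep queue as is */
    *iters += 1;                      /* (op3 op4 op5) repeats once more */
    return n - len;
}
```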

Page 11: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

11

Event Aggregation

Domain-specific encoding improves compression

MPI_Waitsome(incount, array_of_requests, outcount, …) blocks until one or more requests are satisfied

Number of Waitsome calls in loop is nondeterministic

Take advantage of typical usage to delay compression (see the sketch below)
— MPI_Waitsome records are not compressed until a different operation is executed
— Accumulate the output parameter outcount
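A hedged sketch of how such a wrapper might accumulate outcount instead of emitting one event per call (pending_waitsome_total is an illustrative name; flushing the accumulated value into the trace when a different operation arrives is omitted):

```c
#include <mpi.h>

/* Accumulated outcount of the current run of MPI_Waitsome calls. */
static int pending_waitsome_total = 0;

int MPI_Waitsome(int incount, MPI_Request requests[], int *outcount,
                 int indices[], MPI_Status statuses[])
{
    int rc = PMPI_Waitsome(incount, requests, outcount, indices, statuses);
    /* Delay compression: accumulate the nondeterministic output parameter
     * rather than recording a separate trace event per call. */
    if (*outcount != MPI_UNDEFINED)
        pending_waitsome_total += *outcount;
    return rc;
}
```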

Page 12: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

12

Outline

Introduction

Intra-Node Compression Framework

Inter-Node Compression Framework

Replay Mechanism

Experimental Results

Conclusion and Future Work

Page 13: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

13

Inter-Node Framework Interoperability

Single Program, Multiple Data (SPMD) nature of MPI codes

Match operations across nodes by manipulating parameters
— Source / destination offsets
— Request offsets

Page 14: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

14

Location-Independent Encoding

Point-to-point communication specifies targets by MPI rank
— MPI rank parameters will not match across tasks, so offsets are used instead

16-processor (4x4) 2D stencil example (see the sketch below)
— MPI rank targets (source/destination)
  – Rank 9 communicates with 8, 5, 10, 13
  – Rank 10 communicates with 9, 6, 11, 14
— MPI offsets
  – Ranks 9 and 10 both communicate with -1, -4, +1, +4
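A minimal sketch of the offset encoding (function names are illustrative):

```c
/* Record peers as offsets from the caller's own rank so that identical
 * stencil patterns produce identical parameters on every task. */
static inline int encode_offset(int my_rank, int peer_rank)
{
    return peer_rank - my_rank;   /* e.g. rank 9 sending to 8 records -1 */
}

/* Replay re-derives the concrete peer rank from the recorded offset. */
static inline int decode_offset(int my_rank, int offset)
{
    return my_rank + offset;
}
```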

Page 15: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

15

Request Handles

Asynchronous MPI operations are associated with an MPI_Request handle
— Handles are nondeterministic across tasks, causing parameter mismatches in the inter-node framework

Handled with a circular request buffer of pre-set size (see the sketch after the figure)
— On an asynchronous MPI operation, store the handle in the buffer
— On lookup of a handle, use its offset from the current position in the buffer
— Requires special handling in the replay mechanism

[Figure: circular request buffer holding handles H1, H2, H3; the current position advances as handles are stored, and a later lookup of H1 two slots back yields offset -2.]
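A sketch of the circular request buffer idea, assuming a fixed buffer size and illustrative names (the recorded trace stores the negative offset, as in the "Lookup H1 = -2" example above):

```c
#include <mpi.h>

#define REQ_BUF_SIZE 128                 /* pre-set buffer size (assumed) */

static MPI_Request req_buf[REQ_BUF_SIZE];
static int req_cur = 0;                  /* next slot to be written       */

/* Called when an asynchronous operation returns a request handle. */
static void remember_request(MPI_Request r)
{
    req_buf[req_cur] = r;
    req_cur = (req_cur + 1) % REQ_BUF_SIZE;
}

/* Called when a handle is consumed (e.g. by a wait): returns its offset
 * relative to the current position, or 0 if it is not in the buffer. */
static int lookup_request_offset(MPI_Request r)
{
    for (int back = 1; back <= REQ_BUF_SIZE; back++) {
        int slot = (req_cur - back + REQ_BUF_SIZE) % REQ_BUF_SIZE;
        if (req_buf[slot] == r)
            return -back;
    }
    return 0;
}
```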

Page 16: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

16

Inter-Node Compression Framework

Invoked after all computation is done (in the MPI_Finalize wrapper)

Merges the operation queues produced by the task-level framework

Provides job-size scalability

[Figure: operation queues of Task 0 through Task 3, each containing op1 op2 op3 followed by op4 op5 op6; matching sequences across the queues are merged.]

Page 17: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

17

Inter-Node Compression Framework

Invoked after all computation is done (in the MPI_Finalize wrapper)

Merges the operation queues produced by the task-level framework

Provides job-size scalability

There is more: relaxed reordering of events from different nodes (dependence check); see the paper.

[Figure: the same four task queues after merging; the matching op4 op5 op6 sequences are merged as well.]

Page 18: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

18

Reduction over Binary Radix Tree

Cross-node framework merges operation queues of each task

The merge algorithm merges two queues at a time (details in the paper)

The radix layout facilitates compression (constant stride between nodes)

A control mechanism is needed to order the merging process (see the sketch below)
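An illustrative sketch of such a control schedule over a binary radix tree (not the exact control code; it only shows the constant-stride pairing per round, with the fully merged queue ending up at task 0):

```c
#include <stdio.h>

/* In the round with stride 2^k, every task whose rank is a multiple of
 * 2^(k+1) merges the queue of the partner at constant stride 2^k. */
void print_merge_schedule(int ntasks)
{
    for (int stride = 1; stride < ntasks; stride *= 2)
        for (int rank = 0; rank + stride < ntasks; rank += 2 * stride)
            printf("stride %d: task %d merges queue of task %d\n",
                   stride, rank, rank + stride);
}
```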

Page 19: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

19

Outline

Introduction

Intra-Node Compression Framework

Inter-Node Compression Framework

Replay Mechanism

Experimental Results

Conclusion and Future Work

Page 20: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

20

Replay Mechanism

Motivation
— Traces can be replayed on any architecture
  – Useful for rapid prototyping and procurements
  – Communication tuning (Miranda, SC’05)
  – Communication analysis (patterns)
  – Communication tuning (inefficiencies)

Replay design (see the sketch below)
— Replays the comprehensive trace produced by the recording framework
— Parses the trace and loads the task-level op queues (inverse of the merge algorithm)
— Replays on the fly (inverse of the compression algorithm)
— Timing deltas
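A simplified sketch of what replaying one decompressed operation could look like (the op_t layout and replay_op are illustrative names, not the tool's actual data structures):

```c
#include <mpi.h>

typedef enum { OP_SEND, OP_RECV, OP_BARRIER } op_kind_t;

typedef struct {
    op_kind_t kind;
    int       peer_offset;   /* location-independent peer encoding */
    int       count;         /* payload size, replayed as MPI_BYTE */
} op_t;

/* Re-issue one recorded operation on this task. */
static void replay_op(const op_t *op, int my_rank, char *buf)
{
    int peer = my_rank + op->peer_offset;
    switch (op->kind) {
    case OP_SEND:
        MPI_Send(buf, op->count, MPI_BYTE, peer, 0, MPI_COMM_WORLD);
        break;
    case OP_RECV:
        MPI_Recv(buf, op->count, MPI_BYTE, peer, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        break;
    case OP_BARRIER:
        MPI_Barrier(MPI_COMM_WORLD);
        break;
    }
}
```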

Page 21: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

21

Experimental Results

Environment
— 1,024-node BG/L at Lawrence Livermore National Laboratory
— Stencil micro-benchmarks
— Raptor, a real-world application [Greenough’03]

Page 22: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

22

Trace System Validation

Uncompressed trace dumps were compared

Replay
— Matching profiles from mpiP

Page 23: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

23

Task Scalability Performance

Varied:
— Strong (task) scaling: number of nodes

Examined metrics:
— Trace file size
— Memory usage
— Compression time (or write time)

Results for:
— 1D/2D/3D stencils
— Raptor application
— NAS Parallel Benchmarks

Page 24: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

24

Trace File Size – 3D Stencil

Constant size for fully compressed traces (inter-node)

[Figure: trace file size on a log scale, comparing uncompressed, intra-node compressed, and fully compressed traces; annotated sizes range from roughly 100 MB down to 0.5 MB and 10 kB.]

Page 25: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

25

Memory Usage – 3D Stencil

Constant memory usage for fully compressed traces (inter-node)
— min = leaves, avg = middle layer (decreases with node count), max ~ task 0

Average memory usage decreases with more processors

[Figure: memory usage per node, with an annotated value of 0.5 MB.]

Page 26: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

26

Trace File Size – Raptor App

Sub-linear increase for fully compressed traces (inter-node)

[Figure: trace file size on a linear (not log) scale, with annotated sizes of 10 kB, 80 MB, and 93 MB.]

Page 27: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

27

Memory Usage – Raptor

Constant memory usage for fully compressed traces (inter-node)

Average memory usage decreases with more processors

[Figure: memory usage per node, with an annotated value of 500 MB.]

Page 28: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

28

Load Scaling – 3D Stencil

Both intra- and inter-node compression result in constant size

[Figure: trace file size versus load on a log scale.]

Page 29: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

29

Trace File Size – NAS PB Codes

Log scale file size [Bytes], 32-512 CPUs

3 categories for none / intra- / inter-node

Focus: blue = full compression

Near-constant size (EP, also IS, DT)
— Instead of exponential growth

Sub-linear (MG, also LU)
— Still good

Non-scalable (FT, also BT, CG)
— Still 2-4 orders of magnitude smaller
— But could improve
— Due to complex communication patterns
  – Along the diagonal of the 2D layout
  – Even a varying number of endpoints

Page 30: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

30

Memory Usage – NAS PB Codes

Log scale memory [Bytes], 32-512 CPUs

3 categories: min, avg, max, and root (task 0)

Near-constant size (EP, also IS, DT)
— Also constant in memory

Sub-linear (MG, also LU)
— Sometimes constant in memory

Non-scalable (FT, also BT, CG)
— Non-scalable in memory

Page 31: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

31

Compression/Write Overhead – NAS PB

Log scale time [ms], 32-512 CPUs

none / intra- / inter-node (full compression)

Near-constant size (EP, also IS, DT)
— Inter-node compression is fastest

Sub-linear (LU, also MG)
— Intra-node compression is faster than inter-node

Non-scalable (FT, also BT, CG)
— Not competitive; writing is faster with intra-node compression

Page 32: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

32

Conclusion

Contributions:

Scalable approach to capturing a full trace of communication
— Near-constant trace sizes for some apps (others need more work)
— Near-constant memory requirement

Rapid analysis via replay mechanism

Lossless MPI tracing of any number of nodes is feasible
— MPI traces may be stored and visualized on a desktop

Future Work:
— Task layout model (e.g., Miranda)
— Post-analysis stencil identification
— Tuning: detect non-scalable MPI usage
— Support for procurements
— Scalable replay
— Offload compression to I/O nodes

Page 33: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

33

Acknowledgements

Mike Noeth (NCSU)

Martin Schulz (LLNL)

Bronis R. de Supinski (LLNL)

Prasun Ratn (NCSU)

Availability under BSD license: moss.csc.ncsu.edu/~mueller/scala.html

Funded in part by NSF CCF-0429653, CNS-0410203, CAREER CCR-0237570

Part of this work was performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory, under contract No. W-7405-Eng-48, UCRL-???

Page 34: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

34

Global Compression/Write Time [ms]

[Figure: average time per node and maximum time for the NAS benchmarks BT, CG, DT, EP, FT, IS, LU, and MG.]

Page 35: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

35

Trace File Size – Raptor

Near constant size for fully compressed traces

Linear scale

Page 36: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

36

Memory Usage – Raptor Results

Same as 3D stencil

Investigating the min memory usage for 1024 tasks

Page 37: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

37

Intra-Node Compression Algorithm

Intercept MPI call

Identify target

Identify merger

Match verification

Compression

Page 38: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

38

Call Sequence ID: Stack Walk Signature

Maintain structure by distinguishing between operations

XOR signature for speed
— An XOR match is necessary (but not sufficient) for the same calling context
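A minimal sketch of such a signature using glibc's backtrace (the real tool's stack walk may differ):

```c
#include <execinfo.h>
#include <stdint.h>

/* XOR the return addresses of the current call stack into one word.
 * Equal signatures are necessary, but not sufficient, for an identical
 * calling context. */
static uintptr_t stack_signature(void)
{
    void *frames[64];
    int depth = backtrace(frames, 64);
    uintptr_t sig = 0;
    for (int i = 0; i < depth; i++)
        sig ^= (uintptr_t)frames[i];
    return sig;
}
```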

Page 39: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

39

Inter-Node Merge Algorithm

Iterate through both queues

Find match

Maintain order

Compress ops
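A strongly simplified sketch of these steps (the helpers seq_match, add_participants, and append_to_master are illustrative names; the paper's algorithm additionally enforces the ordering and dependency rules discussed later):

```c
typedef struct seq {
    struct seq *next;
    /* compressed op sequence and participant list omitted */
} seq_t;

int  seq_match(const seq_t *a, const seq_t *b);        /* same op sequence? */
void add_participants(seq_t *master, const seq_t *slave);
void append_to_master(seq_t **master_head, seq_t *s);  /* keeps slave order */

void merge_queues(seq_t **master_head, seq_t *slave_head)
{
    seq_t *s = slave_head;
    while (s != NULL) {
        seq_t *next = s->next;            /* s may be re-linked below */
        seq_t *m;
        for (m = *master_head; m != NULL; m = m->next)
            if (seq_match(m, s))
                break;
        if (m != NULL)
            add_participants(m, s);           /* matched: record once,
                                                 merge participant lists */
        else
            append_to_master(master_head, s); /* unmatched: carry over   */
        s = next;
    }
}
```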

Page 40: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

40

Inter-Node Merge Example

Consider two tasks, each with its own operation queue:

[Figure: master queue holding Sequence 1 and Sequence 4 (participants: Task 0); slave queue holding Sequences 1 through 4 (participants: Task 1). The master iterator (MI), slave iterator (SI), and slave head (SH) locate a match on Sequence 1, and Task 1 is added to its participant list.]

Page 41: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

41

Inter-Node Merge Example

Consider two tasks, each with its own operation queue:

[Figure: the iterators advance past the unmatched slave sequences and locate the next match; Task 1 is again added to the participant list of the matched sequence.]

Page 42: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

42

Cross-Node Merge Example

Consider two tasks, each with its own operation queue:

[Figure: the slave queue's remaining Sequences 2, 3, and 4 (participants: Task 1) are processed; Sequence 4 matches the master's Sequence 4, and the unmatched sequences are carried over in order.]

Page 43: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

43

Temporal Cross-Node Reordering

Requirement: queue maintains order of operations

The merge algorithm maintains the order of operations too strictly
— Unmatched sequences in the slave queue are always moved to the master
— This results in poorer compression

Solution: only move operations that must be moved (see the sketch below)
— Intersect the task participation lists of matched and unmatched ops
— If the intersection is empty, there is no dependency
— Otherwise, the ops must be moved
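A minimal sketch of the dependency test, assuming participant lists are kept as sorted rank arrays (an illustrative representation):

```c
#include <stdbool.h>

/* An unmatched slave sequence only has to be moved if its participant set
 * intersects that of the matched sequence; an empty intersection means
 * there is no dependency. */
static bool participants_intersect(const int *a, int na,
                                   const int *b, int nb)
{
    int i = 0, j = 0;
    while (i < na && j < nb) {
        if (a[i] == b[j]) return true;   /* shared task: dependency   */
        if (a[i] < b[j])  i++;
        else              j++;
    }
    return false;                        /* disjoint: safe to reorder */
}
```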

Page 44: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

44

Dependency Example

Consider a 4-task job (tasks 1 and 3 have already been merged):

[Figure: master queue with Sequence 1 and Sequence 2 (participants: Task 0); slave queue with Sequence 2 (participants: Task 1) and Sequence 1 (participants: Task 3). The iterators locate a match.]

Page 45: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

45

Dependency Example

Consider a 4-task job (tasks 1 and 3 have already been merged):

[Figure: the same queues; the participant lists of the matched and unmatched sequences are compared.]

Page 46: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

46

Dependency Example

Consider a 4-task job (tasks 1 and 3 have already been merged):

[Figure: the same queues; Task 3 appears in the participant check, and strictly moving unmatched sequences would create duplicates.]

Page 47: Scalable Compression and Replay of Communication Traces in Massively Parallel Environments

47

Dependency Example

Consider a 4-task job (tasks 1 and 3 have already been merged):

[Figure: the resulting merged queue, with Task 3 and Task 1 recorded as participants.]