Program Analysis and Tuning
The German High Performance Computing Centre for Climate and Earth System Research (DKRZ)
Panagiotis Adamidis


Slide 1: Title slide

Slide 2: Climate Simulation
We use a computer model of the climate system: a computer program that simulates an abstract model (a mathematical representation) of the climate system, reproducing its relevant features on the basis of
- theoretical principles (e.g. laws of nature)
- observed relationships

Slide 3: Blizzard, the IBM Power6 System
High performance computing system Blizzard at DKRZ: compute nodes (orange), Infiniband switch (red), disks (green).
- Peak performance: 158 TeraFlop/s (158 trillion floating point operations per second)
- 264 IBM Power6 nodes
- 16 dual-core CPUs per node (8,448 compute cores altogether)
- more than 20 TeraByte of memory
- 7,000 TeraByte of disk space by 2011
- Infiniband network: 7.6 TeraByte/s (aggregated)

Slide 4: Message Passing and the Hybrid World
Diagram: message passing between nodes, OpenMP within each node.

Slide 5: Parallel Compiler
Why can't I just say "f90 -Parallel mycode.f" and have everything work fine? Because the compiler must respect
- logical dependencies
- data dependencies
(A small dependence example follows after the slides.)

Slide 6: Multiprocessor: Shared Memory
Diagram: several CPUs connected through a network to a set of shared memory modules.

Slide 7: Concepts: Shared Memory Directives
A single process starts with a master thread. On entering a parallel region the master thread forks a team of threads; at the end of the region the team joins back into the master thread. (An OpenMP sketch of this fork/join pattern follows after the slides.)

Slide 8: Amdahl's Law
(The formula is restated after the slides.)

Slide 9: Message Passing and the Hybrid World (same diagram as Slide 4)

Slide 10: Processes and Threads
Message passing works with processes; OpenMP works with threads.

Slide 11: Blizzard, the IBM Power6 System (repeat of Slide 3)

Slide 12: Bottlenecks of Massively Parallel Computing Systems
- memory bandwidth
- communication network
- idle processors

Slide 13: Memory Hierarchy
Registers, then the L1/L2/L3 caches, then main memory. (A cache-friendly loop example follows after the slides.)

Slide 14: Data Movement (diagram)

Slide 15: Data Movement in Parallel Systems (diagram)

Slide 16: Message Passing and the Hybrid World (same diagram as Slide 4; a hybrid MPI+OpenMP sketch follows after the slides)

Slide 17: The World of MPI
Diagram: each CPU has its own memory module, and the CPUs are connected only through the network, so all data exchange happens via messages. (A minimal MPI example follows after the slides.)

Slide 18: Processes and Threads (repeat of Slide 10)

Slide 19: Motivation: Typical Workflow
Goal: improve the efficiency of a parallel program running on high performance computers. The typical workflow is a cycle:
1. development of a parallel program
2. measurement and runtime analysis of the code
3. optimizing the code

Slide 20: Performance Engineering
- Profiling: summarizes performance data per process/thread during execution (statistical analysis).
- Tracing: records events with performance data and a timestamp per process/thread, e.g. individual MPI messages.
(A minimal timing sketch follows after the slides.)

Slide 21: Optimization
- Compilers cannot optimize everything automatically.
- Optimization is not just finding the right compiler flag.
- Sometimes major algorithmic changes are necessary.
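
To make the dependence point of Slide 5 concrete, here is a minimal sketch of the two loop types; it uses C rather than Fortran, and the arrays a and b are purely illustrative:

    #include <stdio.h>

    #define N 8

    int main(void) {
        double a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
        double b[N];

        /* Independent iterations: each b[i] depends only on a[i],
           so a compiler (or OpenMP) may safely run them in parallel. */
        for (int i = 0; i < N; i++)
            b[i] = 2.0 * a[i];

        /* Loop-carried data dependence: a[i] needs the a[i-1] written
           in the previous iteration, so these iterations cannot simply
           be distributed across threads in this form. */
        for (int i = 1; i < N; i++)
            a[i] = a[i] + a[i - 1];

        printf("%f %f\n", b[N - 1], a[N - 1]);
        return 0;
    }

An auto-parallelizing compiler has to prove such dependencies absent before it may parallelize a loop, which it often cannot do automatically; this is exactly why a single compiler switch is not enough.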
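
The fork/join picture of Slide 7 corresponds directly to an OpenMP parallel region. A minimal C sketch (with GCC, compile with -fopenmp):

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        printf("serial part: master thread only\n");

        /* Parallel region: the master thread forks a team of threads;
           at the closing brace the team joins back into the master. */
        #pragma omp parallel
        {
            printf("hello from thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }

        printf("serial part again: back to the master thread\n");
        return 0;
    }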
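
Slide 8 invokes Amdahl's law; in its standard form, with p the fraction of the runtime that can be parallelized and N the number of processors, the achievable speedup is

    \[
        S(N) = \frac{1}{(1 - p) + p/N},
        \qquad
        \lim_{N \to \infty} S(N) = \frac{1}{1 - p}.
    \]

For example, even if 95% of the runtime parallelizes perfectly (p = 0.95), no number of processors can deliver more than a 20-fold speedup: the serial 5% dominates.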
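
A practical consequence of the memory hierarchy of Slides 13 to 15 is that loop order decides how much of each cache line moved up the hierarchy is actually used. A C sketch (C arrays are row-major; in Fortran, which stores column-major, the fast index is the first one):

    #include <stdio.h>

    #define N 1024

    static double m[N][N];

    int main(void) {
        double sum = 0.0;

        /* Row-major traversal: consecutive j values are adjacent in
           memory, so every cache line fetched from main memory is
           fully consumed before it is evicted. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += m[i][j];

        /* Swapping the loops (j outer, i inner) strides N*8 bytes
           between accesses and typically misses in cache every time. */
        printf("%f\n", sum);
        return 0;
    }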
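
Slide 17's picture is the MPI programming model: each process owns its memory module, and data moves between processes only in explicit messages over the network. A minimal sketch (compile with mpicc, run with at least two processes, e.g. mpirun -np 2):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double x = 0.0;
        if (rank == 0 && size > 1) {
            x = 3.14;
            /* No shared memory: rank 1 can see x only via a message. */
            MPI_Send(&x, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&x, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %f\n", x);
        }

        MPI_Finalize();
        return 0;
    }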
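
The hybrid world of Slides 4, 9 and 16 combines both models: typically one MPI process per node (or per socket) and an OpenMP thread team inside each process. A minimal C sketch; MPI_THREAD_FUNNELED declares that only each process's master thread will make MPI calls:

    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char **argv) {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (provided < MPI_THREAD_FUNNELED && rank == 0)
            printf("warning: requested thread support not available\n");

        /* Message passing between processes, threads within each one. */
        #pragma omp parallel
        printf("MPI rank %d, OpenMP thread %d\n",
               rank, omp_get_thread_num());

        MPI_Finalize();
        return 0;
    }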
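
Profiling and tracing (Slide 20) are normally done with dedicated tools that instrument the whole program, but the underlying measurement can be shown with a hand-inserted timer. A minimal sketch using MPI_Wtime around a hypothetical code region; a profiler aggregates such numbers per function, while a tracer would instead log each event with its timestamp:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        /* ... region to be measured, e.g. a compute kernel ... */
        double t1 = MPI_Wtime();

        /* One number per process, like a (very small) profile. */
        printf("rank %d: region took %f s\n", rank, t1 - t0);

        MPI_Finalize();
        return 0;
    }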