Lab Exercises: Lab 1 (Performance measurement)

  • Published on
    15-Jan-2016


DESCRIPTION

Programming Multi-Core Processors based Embedded Systems: A Hands-On Experience on Cavium Octeon based Platforms. Lab Exercises: Lab 1 (Performance Measurement): parallel programming and performance measurement using MPAC.

Transcript

  • Programming Multi-Core Processors based Embedded Systems

    A Hands-On Experience on Cavium Octeon based Platforms

    Lab Exercises: Lab 1 (Performance measurement)

  • Lab 1: Parallel Programming and Performance Measurement using MPAC

  • Lab 1 Goals

    Objective:
      - Use MPAC benchmarks to measure the performance of different subsystems of multi-core based systems
      - Use MPAC to learn to develop parallel programs

    Mechanism:
      - MPAC CPU and memory benchmarks exercise the processor and memory unit by generating compute- and memory-intensive workloads

  • What to Look For

    Observations:
      - Observe the throughput with an increasing number of threads for compute- and memory-intensive workloads
      - Identify performance bottlenecks

  • Measurement of Execution Time

    Measuring the elapsed time from the start of a task until its completion is straightforward for a sequential task. It becomes complex when the same task is executed concurrently by n threads on n distinct processors or cores: the threads are not guaranteed to all start, or all complete, at the same time, so the measurement is imprecise due to the concurrent nature of the tasks.

  • Measurement of Execution Time (cont.)

    Execution time can be measured either globally or locally. In global measurement, the execution time is the difference between time stamps taken at the global fork and join instants. Alternatively, each of the n threads can measure and record its own local time; after the threads join, the maximum of these individual execution times gives an estimate of the overall execution time.

  • Definitions

    LETE: Local Execution Time Estimation

    GETE: Global Execution Time Estimation

  • Cont. (slide figures illustrating LETE and GETE)

  • The Problem: Lack of Precision

      - Some tasks finish before others
      - Synchronization issues with a large number of cores
      - Results are not repeatable

  • Performance Measurement Methodologies

      - For the multithreaded case
      - For the sequential case

  • Accurate LETE Measurement Methodology

  • Measurement Observations

  • Accurate MINMAX Approach

      - Repeat for N iterations
      - Store the local execution time of each thread for each iteration
      - For each iteration, keep the largest execution time among the threads
      - This yields N largest-execution-time values
      - Choose the minimum of these values as the execution time: the MINMAX value

  • Compile and Run (Memory Benchmark)

    $ cd //mpac_1.2
    $ ./configure
    $ make clean
    $ make
    $ cd benchmarks/mem
    $ ./mpac_mem_bm n -s -r -t

    For help:

    $ ./mpac_mem_bm h

  • Compile and Run (CPU Benchmark)

    $ cd //mpac_1.2
    $ ./configure
    $ make clean
    $ make
    $ cd benchmarks/cpu
    $ ./mpac_cpu_bm n -r

    For help:

    $ ./mpac_cpu_bm h

  • Performance Measurements (CPU)

    The Integer Unit (summation), Floating Point Unit (sine), and Logical Unit (string operations) of the processor are exercised.

    Intel Xeon, AMD Opteron (x86), and Cavium Octeon (MIPS64) are used as Systems Under Test (SUT).

    Throughput scales linearly with the number of threads in all cases.

  • Performance Measurements (Memory)

    With concurrent symmetric threads, one expects memory-to-memory throughput to scale with the number of threads.

    With data sizes of 4 KB, 16 KB, and 1 MB, most memory accesses hit the L2 caches rather than main memory.

    For these cases, throughput scales linearly.

  • Performance Measurements (Memory)

      - Copying 16 MB requires extensive memory accesses
      - On the Intel SUT a shared bus is used, so throughput is lower than in the cases where accesses hit the L2 caches, and it saturates as the bus becomes a bottleneck
      - Memory copy throughput saturates at around 40 Gbps, roughly half of the available bus bandwidth (64 bits x 1333 MHz = 85.3 Gbps)
      - On the AMD and Cavium based SUTs, throughput scales linearly even for the 16 MB case, because they use low-latency on-chip memory controllers instead of a shared system bus

  • MPAC Fork and Join Infrastructure

      - In MPAC based applications, initialization and argument handling are performed by the main thread
      - The tasks to be run in parallel are forked to worker threads
      - The worker threads join after completing their tasks
      - Final processing is done by the main thread

  • MPAC code structure

  • MPAC Hello World

    Objective:
      - Write a simple Hello World program using MPAC

    Mechanism:
      - The user specifies the number of worker threads on the command line
      - Each worker thread prints "Hello World" and exits

  • Compile and Run

    $ cd //mpac_1.2/apps/hello
    $ make clean
    $ make
    $ ./mpac_hello_app n

