Basic Concepts of Parallel Programming


What is parallel computing?

It is a form of computation where many calculations can be carried out simultaneously. In parallel computing we divide a large program into smaller parts and perform the computations simultaneously.


Why parallel computing?

Save time and/or money
Solve larger problems
Provide concurrency


Parallel computing is used across many fields:

Physics - applied, nuclear, particle, condensed matter, high pressure, fusion, photonics
Bioscience, Biotechnology, Genetics
Chemistry, Molecular Sciences
Geology, Seismology
Mechanical Engineering - from prosthetics to spacecraft
Electrical Engineering, Circuit Design, Microelectronics
Computer Science, Mathematics


    Threads

A thread is a single stream of control in the flow of a program.

Example 1: Consider the following code, which computes the product of two dense matrices of size n x n; the loop nest is sketched below.
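The slide's snippet is cut off in this copy; a minimal sketch of a serial dense matrix product consistent with the discussion that follows (the array names a, b, c, the index variables and the explicit inner dot-product loop are assumptions, and all are assumed to be declared) is:

    /* c = a * b for n x n matrices; each (row, column) iteration is independent */
    for (row = 0; row < n; row++)
        for (column = 0; column < n; column++) {
            c[row][column] = 0;
            for (k = 0; k < n; k++)   /* dot product of a row of a and a column of b */
                c[row][column] += a[row][k] * b[k][column];
        }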


The loop nest in this code fragment has n^2 iterations (over row and column), each of which can be executed independently. Such an independent sequence of instructions is referred to as a thread. In the above example there are n^2 threads, one for each iteration of the loop nest. Since each thread can be executed independently of the others, they can be scheduled concurrently on multiple processors.

Example 2: here we use a function called create_thread to provide a mechanism for specifying a C function as a thread; see the sketch below.

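The snippet is again truncated; a sketch in the spirit of the text, where create_thread is the function named on the slide and dot_product, get_row and get_col are assumed helper names (none of these are standard C library functions), might look like:

    /* hand each independent (row, column) dot product to its own thread */
    for (row = 0; row < n; row++)
        for (column = 0; column < n; column++)
            c[row][column] =
                create_thread(dot_product(get_row(a, row), get_col(b, column)));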


Thread: the basic unit of execution; a lightweight process. A process can have multiple threads.
Why threads? To improve CPU utilization.

Note: all threads share the same memory.


OpenMP: an Application Program Interface (API) that may be used to explicitly direct multi-threaded, shared-memory parallelism.

OpenMP is an API that can be used with FORTRAN, C and C++ for programming shared address space machines.

OpenMP directives provide support for concurrency, synchronization and data handling.

The OpenMP directives in C and C++ are based on #pragma compiler directives.


OpenMP programs execute serially until they encounter the parallel directive. This directive is responsible for creating a group of threads. The main thread that encounters the parallel directive becomes the master of this group of threads and is assigned ID 0 within the group.


Fork-Join Model: OpenMP uses the fork-join model of parallel execution. All OpenMP programs begin as a single process: the master thread.


Format:

#pragma omp directive-name [clause, ...] newline

#pragma omp: required for all OpenMP C/C++ directives.
directive-name: a valid OpenMP directive; must appear after the pragma and before any clauses.
[clause, ...]: optional; clauses can be in any order, and repeated as necessary unless otherwise restricted.
newline: required; precedes the structured block which is enclosed by this directive.

Ex: #pragma omp parallel default(shared)


General Rules:
Directives are case sensitive.
Directives follow conventions of the C/C++ standards for compiler directives.
Only one directive-name may be specified per directive.
Each directive applies to at most one succeeding statement, which must be a structured block.
Long directive lines can be "continued" on succeeding lines by escaping the newline character with a backslash ("\") at the end of a directive line.
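As a small sketch of the last rule (assuming arrays a, b, c and length n are already declared), a directive can be split across two lines like this:

    #pragma omp parallel for default(shared) \
        private(i)
    for (i = 0; i < n; i++)
        c[i] = a[i] + b[i];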


A parallel region is a block of code that will be executed by multiple threads.
When a thread reaches a PARALLEL directive, it creates a team of threads and becomes the master of the team. The master thread has ID 0.
Starting from the beginning of the parallel region, the code is duplicated and all threads execute that code.
If any thread terminates within a parallel region, all threads in the team terminate.


The number of threads in a parallel region is determined by the following factors:
Setting of the NUM_THREADS clause
Use of the omp_set_num_threads() library function
Setting of the OMP_NUM_THREADS environment variable
Implementation default - usually the number of CPUs on a node

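A brief sketch of the first two mechanisms (the num_threads clause takes precedence over the omp_set_num_threads() call, which in turn overrides the OMP_NUM_THREADS environment variable):

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        omp_set_num_threads(4);              /* library call: request 4 threads */

        #pragma omp parallel num_threads(2)  /* clause overrides the library call */
        {
            #pragma omp master
            printf("team size = %d\n", omp_get_num_threads());
        }
        return 0;
    }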

    #pragma omp parallel

This parallel directive creates a group of threads. Using it we can perform operations in parallel.


tid = omp_get_thread_num()

Used to get the unique ID of each thread created.
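A minimal, self-contained sketch that combines the two constructs above; it prints one line per thread and uses ID 0 to identify the master:

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        #pragma omp parallel
        {
            int tid = omp_get_thread_num();   /* unique ID of this thread within the team */
            if (tid == 0)
                printf("master thread, team of %d threads\n", omp_get_num_threads());
            else
                printf("worker thread %d\n", tid);
        }   /* threads join at the end of the parallel region; only the master continues */
        return 0;
    }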


L11. Design, develop and execute a parallel program in C to add, element-wise, two one-dimensional arrays A and B of N integer elements and store the result in another one-dimensional array C of N integer elements.


#include <stdio.h>
#include <omp.h>

int main()
{
    int a[10], b[10], c[10], i, n;

    printf("\nenter the number of elements");
    scanf("%d", &n);

    printf("\nEnter the elements of 1st array");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);


    printf("enter t h e elements of 2nd array");for(i=0 ;i


    printf("\nt h e contents of array B\n");

    for(i=0 ;i


    #pragma omp parallel for shared(c)
    for (i = 0; i < n; i++)
        c[i] = a[i] + b[i];


    printf("\nt h e new array is====");for(i=0 ;i