
Introduction to MPI


Page 1: Introduction to MPI

Lecture 1

Introduction to MPI

Dr. Muhammad Hanif Durad

Department of Computer and Information Sciences

Pakistan Institute of Engineering and Applied Sciences

[email protected]

Some slides have been adapted, with thanks, from other lectures available on the Internet.

Page 2: Introduction to MPI


Lecture Outline

• Models for communication for parallel programming
• MPI libraries
• Novel features of MPI
• Programming with MPI
• Using the MPI manual
• Compilation and running a program
• Basic concepts
• Homework


Page 3: Introduction to MPI

Models for Communication for Parallel Programming

Parallel program = program composed of tasks (processes) which communicate to accomplish an overall computational goal

Two prevalent models for communication:
• Shared memory (SM)
• Message passing (MP)


Page 4: Introduction to MPI

Shared Memory Communication

Processes in a shared memory program communicate by accessing shared variables and data structures.

Basic shared memory primitives:
• Read from a shared variable
• Write to a shared variable


[Figure: basic shared memory multiprocessor architecture, processors and memories connected through an interconnection medium]


Page 5: Introduction to MPI

Accessing Shared Variables


Conflicts may arise if multiple processes want to write to a shared variable at the same time.

The programmer, language, and/or architecture must provide a means of resolving such conflicts.

[Figure: processes A and B each add 1 to a shared variable x; each executes read x, compute x+1, write x]
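This lost-update race can be demonstrated directly. Below is a minimal C sketch (an illustrative addition, not from the original slides) in which two POSIX threads play the roles of processes A and B; compile with cc -std=c99 -pthread:

#include <pthread.h>
#include <stdio.h>

long x = 0;                        /* the shared variable, unprotected */

static void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        x = x + 1;                 /* read x, compute x+1, write x: not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("x = %ld\n", x);        /* frequently less than 2000000 */
    return 0;
}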


Page 6: Introduction to MPI

Message Passing Communication


• Processes in a message passing program communicate by passing messages
• Basic message passing primitives:
  – Send(parameter list)
  – Receive(parameter list)
  – Parameters depend on the software and can be complex

[Figure: a message passed from process A to process B]
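In MPI, introduced below, these primitives appear as MPI_Send and MPI_Recv. Here is a minimal C sketch (an added preview, not from the original slides; run with at least two processes), with A as rank 0 and B as rank 1:

#include "mpi.h"
#include <stdio.h>

int main( int argc, char *argv[] )
{
    int rank, value;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    if (rank == 0) {
        value = 42;   /* arbitrary payload */
        MPI_Send( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
    } else if (rank == 1) {
        MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
        printf( "B received %d from A\n", value );
    }
    MPI_Finalize();
    return 0;
}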


Page 7: Introduction to MPI

MPI Libraries (1/2)

• All communication and synchronization require subroutine calls
• There are no shared variables
• Programs run on a single processor just like any uniprocessor program, except for calls to the message passing library


Page 8: Introduction to MPI

MPI Libraries (2/2)

Subroutines for:
• Communication
  – Pairwise or point-to-point: Send and Receive
  – Collectives: all processes get together to
    move data (Broadcast, Scatter/Gather), or to
    compute and move (sum, product, max, … of data on many processors)
• Synchronization
  – Barrier
  – No locks, because there are no shared variables to protect
• Enquiries
  – How many processes? Which one am I? Any messages waiting?
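As a small illustration of a "compute and move" collective (an added sketch, not from the original slides), MPI_Reduce sums one value from every process onto process 0:

#include "mpi.h"
#include <stdio.h>

int main( int argc, char *argv[] )
{
    int rank, sum;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    /* Each process contributes its rank; rank 0 receives the total */
    MPI_Reduce( &rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD );
    if (rank == 0)
        printf( "sum of ranks = %d\n", sum );
    MPI_Finalize();
    return 0;
}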


Page 9: Introduction to MPI

Novel Features of MPI

• Communicators encapsulate communication spaces for library safety
• Datatypes reduce copying costs and permit heterogeneity
• Multiple communication modes allow precise buffer management
• Extensive collective operations for scalable global communication
• Process topologies permit efficient process placement and user views of process layout
• Profiling interface encourages portable tools
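To make the datatypes point concrete, here is a short sketch (an added illustration, not from the original slides; run with at least two processes) that defines a derived datatype for three consecutive ints and sends it as a single unit:

#include "mpi.h"
#include <stdio.h>

int main( int argc, char *argv[] )
{
    int rank, triple[3] = { 1, 2, 3 };
    MPI_Datatype triple_t;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Type_contiguous( 3, MPI_INT, &triple_t );   /* describe the layout once */
    MPI_Type_commit( &triple_t );
    if (rank == 0)
        MPI_Send( triple, 1, triple_t, 1, 0, MPI_COMM_WORLD );
    else if (rank == 1) {
        MPI_Recv( triple, 1, triple_t, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
        printf( "got %d %d %d\n", triple[0], triple[1], triple[2] );
    }
    MPI_Type_free( &triple_t );
    MPI_Finalize();
    return 0;
}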


Page 10: Introduction to MPI

Programming With MPI

• MPI is a library: all operations are performed with routine calls
• Basic definitions are in:
  – mpi.h for C/C++
  – mpif.h for Fortran 77 and 90
  – the MPI module for Fortran 90 (optional)
• First program:
  – Write out the process number
  – Write out some variables (to illustrate the separate name spaces)


Page 11: Introduction to MPI

Finding Out About the Environment

Two important questions arise early in a parallel program:
• How many processes are participating in this computation?
• Which one am I?

MPI provides functions to answer these questions:
• MPI_Comm_size reports the number of processes
• MPI_Comm_rank reports the rank, a number between 0 and size-1, identifying the calling process


Page 12: Introduction to MPI

Hello (C)

#include "mpi.h"
#include <stdio.h>

int main( int argc, char *argv[] )
{
    int rank, size;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    printf( "I am %d of %d\n", rank, size );
    MPI_Finalize();
    return 0;
}


Page 13: Introduction to MPI

Hello (Fortran)

program main
include 'mpif.h'
integer ierr, rank, size

call MPI_INIT( ierr )
call MPI_COMM_RANK( MPI_COMM_WORLD, rank, ierr )
call MPI_COMM_SIZE( MPI_COMM_WORLD, size, ierr )
print *, 'I am ', rank, ' of ', size
call MPI_FINALIZE( ierr )
end


Page 14: Introduction to MPI

Hello (C++)


#include "mpi.h"#include <iostream>

int main( int argc, char *argv[] ){ int rank, size; MPI::Init(argc, argv); rank = MPI::COMM_WORLD.Get_rank(); size = MPI::COMM_WORLD.Get_size(); std::cout << "I am " << rank << " of " << size <<

"\n"; MPI::Finalize(); return 0;}


Page 15: Introduction to MPI

Using MPI Manual

Linux manual command:
  man ls

MPI manual command (MPICH):
  hanif@virtu:~> mpiman
  What manual page do you want?
  mpiman -help
  mpiman -V

Listing the MPICH binaries:
  cd /opt/mpich/ch-p4/bin
  ls
  mpif77  mpif90  mpicc  mpirun  mpiCC

Page 16: Introduction to MPI

Compilation in C

See mpiman mpicc for details.

To compile a single file foo.c, use:
  mpicc -c foo.c

To link the output and make an executable, use:
  mpicc -o foo foo.o

Combining compilation and linking in a single command, as shown below, is a convenient way to build simple programs.
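For example, this one command compiles and links foo.c directly into the executable foo:

  mpicc -o foo foo.c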


Page 17: Introduction to MPI

Compilation and running in C

mpicc -o helloc hello1.c
mpirun -np 4 ./helloc

I am 0 of 4
I am 2 of 4
I am 3 of 4
I am 1 of 4


Page 18: Introduction to MPI

Compilation and running in Fortran 90

mpif90 -o hellof hello1.f90
mpirun -np 4 ./hellof

I am 0 of 4
I am 2 of 4
I am 1 of 4
I am 3 of 4


Page 19: Introduction to MPI

Compilation and running in C++

mpiCC -o hellocpp hello1.cpp
mpirun -np 4 ./hellocpp

I am 0 of 4
I am 2 of 4
I am 1 of 4
I am 3 of 4


Page 20: Introduction to MPI

Notes on Hello World

• All MPI programs begin with MPI_Init and end with MPI_Finalize
• MPI_COMM_WORLD is defined by mpi.h (in C) or mpif.h (in Fortran) and designates all processes in the MPI "job"
• Each statement executes independently in each process, including the printf/print statements
• I/O is not part of MPI-1 but is in MPI-2
  – print and write to standard output or error are not part of either MPI-1 or MPI-2
  – output order is undefined (it may be interleaved by character, line, or blocks of characters)
• The MPI-1 Standard does not specify how to run an MPI program, but many implementations provide
  mpirun -np 4 a.out


Page 21: Introduction to MPI

What have you learnt?


Page 22: Introduction to MPI

Some Basic Concepts

• Processes can be collected into groups
• Each message is sent in a context, and must be received in the same context
  – This provides the necessary support for libraries
• A group and a context together form a communicator
• A process is identified by its rank in the group associated with a communicator
• There is a default communicator, called MPI_COMM_WORLD, whose group contains all initial processes
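Communicators other than MPI_COMM_WORLD can be created from groups of existing processes. As an added sketch (not from the original slides), MPI_Comm_split partitions MPI_COMM_WORLD; here even and odd ranks end up in separate communicators:

#include "mpi.h"
#include <stdio.h>

int main( int argc, char *argv[] )
{
    int world_rank, sub_rank;
    MPI_Comm sub;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &world_rank );
    /* Processes with the same "color" join the same new communicator */
    MPI_Comm_split( MPI_COMM_WORLD, world_rank % 2, world_rank, &sub );
    MPI_Comm_rank( sub, &sub_rank );
    printf( "world rank %d has rank %d in its sub-communicator\n",
            world_rank, sub_rank );
    MPI_Comm_free( &sub );
    MPI_Finalize();
    return 0;
}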

Page 23: Introduction to MPI

General MPI Program Structure


https://computing.llnl.gov/tutorials/mpi/
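The figure at the link above is not reproduced here; in outline, every MPI program in this lecture follows the same shape (a minimal C skeleton):

#include "mpi.h"                   /* MPI include file */

int main( int argc, char *argv[] )
{
    /* serial code: declarations, non-MPI setup */
    MPI_Init( &argc, &argv );      /* initialize the MPI environment */
    /* parallel region: computation and message passing */
    MPI_Finalize();                /* terminate the MPI environment */
    /* serial code */
    return 0;
}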

Page 24: Introduction to MPI

Homework

Modify the previous program so that the name of the processor executing each process is also printed out.

Use the MPI routine
  MPI_Get_processor_name(processor_name, &namelen)
but in a C++ context.
