1. Introduction - Languages for Describing Parallel Algorithms. Concurrency and Synchronization. Atomicity




Algoritmi Paraleli și Distribuiți (Parallel and Distributed Algorithms). Ciprian Dobre, [email protected]

Rules, grading

About the course (1): http://curs.cs.pub.ro

Materials, Forum, Assignments.

Lab schedule (Monday to Friday; room and teaching assistant per slot):
08-10: 331CC, Dragos Comaneci
10-12: 331CC, Mihai Carabas; 333CC, Dragos Comaneci
12-14: 333CC, Eliana Tirsa
14-16: 332CC, Eliana Tirsa; 332CC, Gabriel Gutu
16-18: 334CC, Alecsandru Patrascu
18-20: 334CC, Alecsandru Patrascu

About the course (2). Grade: 10 = 5 (Lab) + 1.5 (Lecture) + 0.5 (Bonus) + 4 (Exam); Lab + Lecture >= 3, Exam >= 2

Lab: 4 assignments (4 pts) + activity (1 pt)

Lecture: in-lecture work (1 pt) + lab tests (0.5 pt)

Exam: written (4 pts) + extra problem (0.5 pt)

Questions?

About the course (3). Friday, 10-12 AM. Interactivity: dialog, teamwork.
Open office hours: ????, EG403
The rules are posted on the course page.
Points earned during the semester or on the exam can be kept for one academic year (no accumulation).
Assignments can be submitted only during the first semester of the academic year.
A grade increase is not possible without retaking the exam.

About me

Graduate of the Faculty of Automatic Control and Computers

E-mail: [email protected]
Web: http://www.facebook.com/ciprian.dobre, http://cipsm.hpc.pub.ro

About the course

Mainframe

Mini Computer

Workstation

PC

Bibliography

G.R. Andrews, Concurrent Programming: Principles and Practice, The Benjamin/Cummings Publishing Company, Inc., 1991

G.R. Andrews, Foundations of Multithreaded, Parallel, and Distributed Programming, Addison-Wesley, Inc., 2000

F. Thomson Leighton, Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes, Morgan Kaufmann Publishers, San Mateo, California, 1992

S.G. Akl, Parallel Computation: Models and Methods, Prentice Hall, 1997

M.J. Quinn, Parallel Computing: Theory and Practice, McGraw-Hill, 1994

Claudia Leopold, Parallel and Distributed Computing, John Wiley & Sons, 2001

A.Y.H. Zomaya, Parallel and Distributed Computing, McGraw-Hill, 1996

Gerard Tel, Introduction to Distributed Algorithms, Cambridge University Press, 1994

Robert W. Sebesta, Concepts of Programming Languages (Second Edition), The Benjamin/Cummings Publishing Company, 1993

S.A. Williams, Programming Models for Parallel Systems, John Wiley & Sons, 1990

K.M. Chandy, J. Misra, Parallel Program Design: A Foundation, Addison-Wesley Publishing Company, 1988

M. Ben-Ari, Principles of Concurrent and Distributed Programming, Prentice Hall, 1990

H. Bal, Programming Distributed Systems, Prentice Hall, 1990

C.A.R. Hoare, Communicating Sequential Processes, Communications of the ACM, 1978

S.G. Akl, The Design and Analysis of Parallel Algorithms, Prentice Hall, 1989

G.R. Andrews, R.A. Olsson, The SR Programming Language: Concurrency in Practice, The Benjamin/Cummings Publishing Company, Inc., 1993

Ian Foster, Designing and Building Parallel Programs, Addison-Wesley Publishing Company, 1995

A.S. Tanenbaum, Structured Computer Organization (Fourth Edition), Prentice Hall, 1999

Lecture outline
1. Introduction: languages for describing parallel algorithms. Concurrency and synchronization. Atomicity. Barriers. (3 hours)
2. Data parallelism: prefix computations. Processing of lists and matrices. (3 hours)
3. Complexity of parallel computation: performance measures. Computing complexity. Properties of the evaluation model. The work-depth model. (3 hours)
4. Developing algorithms using shared variables. (2 hours)
5. Developing applications for PRAM models: parallel search. Parallel selection. (3 hours)
6. Message passing: libraries for distributed programming. MPI. (3 hours)
7. Complexity of distributed computation: the Foster model. The LogP model. (2 hours)

Lecture outline (2)
8. Logical clocks and the ordering of events: vector timestamps. (3 hours)
9. Wave algorithms: description and properties. The ring, tree, and echo algorithms. The phase algorithm. Finn's algorithm. (3 hours)
10. Establishing the topology. (2 hours)
11. Termination of distributed programs. (3 hours)
12. Algorithms for fault-tolerant systems. (3 hours)
13. Leader election. (3 hours)
14. Mutual exclusion algorithms. (3 hours)

Objective: acquiring the skills needed to solve problems through parallel or distributed solutions.

Parallel computing = dividing an application into tasks executed simultaneously.
Distributed computing = dividing an application into tasks executed on different systems (with different resources).

Parallel and distributed algorithms differ from sequential algorithms:
They are built on different concepts: Communicating Sequential Processes (Hoare), concurrency, atomicity, synchronization.
They use programming models in which processes communicate through shared data or through message passing.
Parallel and distributed algorithms are NOT simple extensions or versions of sequential ones; different approaches are used.

Example: replicated databases. Two replicated accounts, in New York (NY) and San Francisco (SF). Two updates arrive at the same time, starting from a current balance of $1,000: Update 1 adds $100 at SF; Update 2 adds 1% interest at NY. Each site applies the updates in the order it receives them (1 then 2 at one site, 2 then 1 at the other). Whoops, inconsistent states! (A worked sketch follows the list below.)

By the end you will know:
The basic concepts
The programming models
Methods for designing parallel and distributed solutions
Ways to implement solutions using programming languages / libraries: Java concurrency, MPI
Methods for improving the performance of solutions using complexity models
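To make the inconsistency concrete, here is a minimal Java sketch (my own illustration, not from the course; the class and variable names are invented) showing that the two updates do not commute:

    // The two updates do not commute: applying them in different orders at
    // the two replicas leaves NY and SF with different balances.
    public class ReplicaDivergence {
        public static void main(String[] args) {
            double sf = 1000.0, ny = 1000.0;   // replicated balance, $1,000

            sf = (sf + 100.0) * 1.01;          // SF: Update 1 first, then Update 2
            ny = ny * 1.01 + 100.0;            // NY: Update 2 first, then Update 1

            System.out.println("SF replica: " + sf);  // 1111.0
            System.out.println("NY replica: " + ny);  // 1110.0
        }
    }

Whatever consistency protocol the sites use must therefore agree on a single update order; applying updates in arrival order at each replica is not enough.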

What is parallel computing? The serial approach:

High-end computing affects all fields.

Traditionally, software has been written for serial computation: to be run on a single computer having a single Central Processing Unit (CPU); a problem is broken into a discrete series of instructions; instructions are executed one after another; only one instruction may execute at any moment in time.

The parallel approach:

(simplified): the simultaneous use of multiple computing resources to solve a computational problem

What is parallel computing? (2)

In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem. To be run using multiple CPUs; a problem is broken into discrete parts that can be solved concurrently; each part is further broken down to a series of instructions; instructions from each part execute simultaneously on different CPUs.

DEFINITION: CONCURRENT PROGRAM

Computing resources:
A single computer with multiple CPUs (DualCore, QuadCore, HPC)

Multiple computers connected in a network (clusters)

A combination of the two (grids, cloud systems)

Computational problem. Characteristics:

Can be divided into discrete parts that can be solved simultaneously

Can execute multiple instructions concurrently

Can be solved in less time with multiple computing resources than with a single resource

Why parallel computing? (2)
Less time (wall clock time)
Larger problems
Concurrency (simultaneous processing)
Use of non-local resources
Cost reduction
Overcoming memory constraints

Other reasons might include: taking advantage of non-local resources, i.e. using available compute resources on a wide area network, or even the Internet, when local compute resources are scarce; cost savings, by using multiple "cheap" computing resources instead of paying for time on a supercomputer; overcoming memory constraints, since single computers have very finite memory resources, and for large problems using the memories of multiple computers may overcome this obstacle.

Why parallel computing? (3)
Limitations of serial architectures:
Transmission speeds depend on hardware
Miniaturization limits, even in molecular computing!
Economic limits: high costs to build a faster processor (e.g., Intel, IBM Cell)

Grand Challenge problems:
Nuclear physics
Climate, weather
Biology: the human genome
Geology: seismic activity
Electronics: circuits
Medicine: imaging

Commercial applications:
Parallel databases, data mining
Search engines
Collaborative work
Virtual reality (gaming), graphics
Networked video
Aviation: modeling

Who and what?

Who and what? (2)
The ALICE experiment at CERN:
One of the 4 LHC experiments, dedicated to heavy-ion physics
Data volume: 1 month of Pb-Pb runs ~ 1 PByte; 11 months of p-p runs ~ 1 PByte
Simulation: 1 Pb-Pb event ~ 24 hours
Data reconstruction, filtering, analysis, calibration

Parallel vs. Distributed
Parallel computing: dividing an application into tasks executed simultaneously.
Distributed computing: dividing an application into tasks executed on different systems (with different resources).

Parallel-distributed convergence:
They increasingly use the same architectures
Distributed systems are used for parallel computing
Parallel computers are used as high-performance servers
They share application areas
Their research problems intertwine and are tackled jointly
The umbrella term high-performance computing (HPC) is used for both

Parallel architectures. Flynn's taxonomy (1966):
SISD: Single Instruction Stream, Single Data Stream
SIMD: Single Instruction Stream, Multiple Data Stream
MISD: Multiple Instruction Stream, Single Data Stream
MIMD: Multiple Instruction Stream, Multiple Data Stream
Important for the implementation of parallel algorithms.

Flynn's taxonomy distinguishes multi-processor computer architectures according to how they can be classified along the two independent dimensions of Instruction and Data. Each of these dimensions can have only one of two possible states: Single or Multiple.

SISD: the classic von Neumann model

[Figure: control unit, processor, memory; instruction stream and data stream]

A serial (non-parallel) computer. Single instruction: only one instruction stream is being acted on by the CPU during any one clock cycle. Single data: only one data stream is being used as input during any one clock cycle. Deterministic execution. This is the oldest and, until recently, the most prevalent form of computer. Examples: most PCs, single-CPU workstations and mainframes.

SIMD

Implemented as shared-memory systems (PRAM) or as interconnected multiprocessors.

[Figure: control unit, memory modules, processors]

A type of parallel computer. Single instruction: all processing units execute the same instruction at any given clock cycle. Multiple data: each processing unit can operate on a different data element. This type of machine typically has an instruction dispatcher, a very high-bandwidth internal network, and a very large array of very small-capacity instruction units. Best suited for specialized problems characterized by a high degree of regularity, such as image processing. Synchronous (lockstep) and deterministic execution. Two varieties: processor arrays and vector pipelines. Examples: processor arrays: Connection Machine CM-2, MasPar MP-1, MP-2; vector pipelines: IBM 9000, Cray C90, Fujitsu VP, NEC SX-2, Hitachi S820.

SIMD (2)
Shared memory (Parallel Random Access Machine, PRAM):
EREW: Exclusive Read, Exclusive Write
CREW: Concurrent Read, Exclusive Write
ERCW: Exclusive Read, Concurrent Write
CRCW: Concurrent Read, Concurrent Write
The variant influences performance.
Example: reading the value of a shared variable takes one step under CREW or CRCW, but log N steps under EREW or ERCW.

SIMD (3)
The value is in variable A in the shared memory.
Use X0, X1, ... in the shared memory.

Process P0 reads the value of A and writes it to X0.
Process P1 reads X0 and writes it to X1.
Processes P2 and P3 read X0 and X1 and write to X2 and X3.

The number of copies doubles at each step, so for N processors the operation finishes after log N steps.

[Figure: steps 1-3 of the broadcast; shared memory cells X0..X5 progressively filled with the value a from A, processors P0..P4]

SIMD (4)
Interconnection networks. Topologies: array, tree, cube, hypercube. The configuration depends on: the application, the desired performance, the number of available processors. Examples: IBM 9000, Cray C90, Fujitsu VP.

[Figure: tree topology over nodes 0-7]

MISD
No practical relevance.
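As a hedged illustration of the doubling broadcast above, here is a minimal sequential Java simulation (the step loop stands in for the synchronous PRAM clock; the class and variable names are mine):

    // EREW broadcast by doubling: after step k there are 2^k copies of the
    // value, so N processors all hold it after ceil(log2 N) steps.
    public class ErewBroadcast {
        public static void main(String[] args) {
            final int n = 8;
            final int a = 42;                 // the value stored in shared variable A
            Integer[] x = new Integer[n];     // shared cells X0 .. X(n-1)
            x[0] = a;                         // step 0: P0 reads A and writes X0

            int copies = 1, steps = 0;
            while (copies < n) {
                // Processor P(copies+i) reads cell Xi; every cell is read by at
                // most one processor per step, so accesses are exclusive (EREW).
                for (int i = 0; i < copies && copies + i < n; i++)
                    x[copies + i] = x[i];
                copies *= 2;
                steps++;
            }
            System.out.println(n + " copies after " + steps + " steps"); // 3 = log2 8
        }
    }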

[Figure: a single memory feeding multiple processing units]

A single data stream is fed into multiple processing units. Each processing unit operates on the data independently via independent instruction streams. Few actual examples of this class of parallel computer have ever existed. One is the experimental Carnegie-Mellon C.mmp computer (1971). Some conceivable uses might be: multiple frequency filters operating on a single signal stream; multiple cryptography algorithms attempting to crack a single coded message.

MIMD
Implemented as multi-computers (distributed memory) or multi-processors (shared memory).

Currently, the most common type of parallel computer; most modern computers fall into this category. Multiple instruction: every processor may be executing a different instruction stream. Multiple data: every processor may be working with a different data stream. Execution can be synchronous or asynchronous, deterministic or non-deterministic. Examples: most current supercomputers, networked parallel computer "grids", and multi-processor SMP computers, including some types of PCs.

MIMD (2)"Shared Memory"Uniform Memory Access (UMA)Non-Uniform Memory Access (NUMA)Cache coherent Non-Uniform Memory Access (ccNUMA)

Advantages:
Global address space: ease of programming
Fast data sharing between processes due to the proximity of memory to the CPUs

Disadvantages:
Lack of scalability between memory and CPUs
Synchronization is the programmer's responsibility
Expensive

Shared memory parallel computers vary widely, but generally have in common the ability for all processors to access all memory as a global address space. Multiple processors can operate independently but share the same memory resources; changes in a memory location effected by one processor are visible to all other processors. Shared memory machines can be divided into two main classes based upon memory access times: UMA and NUMA.

Uniform Memory Access (UMA): most commonly represented today by Symmetric Multiprocessor (SMP) machines. Identical processors with equal access and access times to memory. Sometimes called CC-UMA (Cache Coherent UMA): cache coherent means that if one processor updates a location in shared memory, all the other processors know about the update. Cache coherency is accomplished at the hardware level.

Non-Uniform Memory Access (NUMA): often made by physically linking two or more SMPs. One SMP can directly access memory of another SMP. Not all processors have equal access time to all memories; memory access across the link is slower. If cache coherency is maintained, it may also be called CC-NUMA (Cache Coherent NUMA).

Advantages: the global address space provides a user-friendly programming perspective to memory; data sharing between tasks is both fast and uniform due to the proximity of memory to CPUs.

Disadvantages: the primary disadvantage is the lack of scalability between memory and CPUs. Adding more CPUs can geometrically increase traffic on the shared memory-CPU path and, for cache coherent systems, geometrically increase traffic associated with cache/memory management. The programmer is responsible for synchronization constructs that ensure "correct" access to global memory. Expense: it becomes increasingly difficult and expensive to design and produce shared memory machines with ever increasing numbers of processors.

MIMD (3)"Multi-Computer"Massively Parallel Processors (MPP)Network Of Workstations (NOW)

Advantages:
Memory scales with the number of CPUs
Fast access to local memory
Reduced costs: commodity processors + networking

Disadvantages:
The programmer is responsible for inter-processor communication
Global data structures are difficult to map

Distributed memory systems require a communication network to connect inter-processor memory. Processors have their own local memory; memory addresses in one processor do not map to another processor, so there is no concept of a global address space across all processors. Because each processor has its own local memory, it operates independently: changes it makes to its local memory have no effect on the memory of other processors, so the concept of cache coherency does not apply. When a processor needs access to data in another processor, it is usually the task of the programmer to explicitly define how and when data is communicated. Synchronization between tasks is likewise the programmer's responsibility. The network "fabric" used for data transfer varies widely, though it can be as simple as Ethernet.

Advantages: memory is scalable with the number of processors (increase the number of processors and the size of memory increases proportionately); each processor can rapidly access its own memory without interference and without the overhead incurred when trying to maintain cache coherency; cost effectiveness: can use commodity, off-the-shelf processors and networking.

Disadvantages: the programmer is responsible for many of the details associated with data communication between processors; it may be difficult to map existing data structures, based on global memory, to this memory organization; non-uniform memory access (NUMA) times.

Programming methods: shared data

[Figure: memory modules shared by several CPUs]

Programming methods (2): message passing

[Figure: CPU-memory pairs connected by a network]

Shared memory model on a distributed memory machine: the Kendall Square Research (KSR) ALLCACHE approach. Machine memory was physically distributed but appeared to the user as a single shared memory (global address space); generically, this approach is referred to as "virtual shared memory". Note: although KSR is no longer in business, there is no reason to suggest that a similar implementation will not be made available by another vendor in the future.

Message passing model on a shared memory machine: MPI on the SGI Origin. The SGI Origin employed the CC-NUMA type of shared memory architecture, where every task has direct access to global memory. However, the ability to send and receive messages with MPI, as is commonly done over a network of distributed memory machines, is not only implemented but very commonly used.

A programming model
A parallel / distributed program = a collection of communicating parallel processes (Communicating Sequential Processes).
Based on Hoare's CSP model.
Used in many parallel / distributed languages and libraries.
Adapted for both message passing and shared data.

The notation adopted for parallel programs in this course follows Hoare's CSP model: Pascal-like constructs plus notations for concurrency, synchronization, and communication.

Basic types:
bool: boolean
int: integer
real: real number
char: character
string: character string

Variable declarations: var id1: type1 := val1, ..., idn: typen := valn;
Constant definitions: const id1 = val1, ..., idn = valn;
Variable initializations are optional.

Arrays

var vector: array [1:10] of int; matrice: array [1:n,1:m] of real;

Constructor

var forks: array [1:5] of bool := ([5]false);

Basic types (2)
Contiguous elements can be processed in parallel: 1:10 denotes the first 10 contiguous elements, which can be processed in parallel.

Records

type student = rec (nume: string[20]; varsta: int; clase: array [1:5] of string[15]);

var queue: rec (front: int :=1; rear: int :=1; size: int :=0; contents: array [1:n] of int);

Basic types (3)
Assignment: x := e

Swap: x1 :=: x2

Guarded command: B → S

The compound statement [S] is a sequence of statements.

Selection (if)

if B1 → S1
[] B2 → S2
...
[] Bn → Sn
fi

Statements
S is executed only if B evaluates to true.

B → S: B is a boolean expression, S is a statement (simple or compound). S cannot be executed unless B is true.

If several guards are true, the choice of which statement executes is arbitrary.

Iteration (do):
do B1 → S1 [] B2 → S2 ... [] Bn → Sn od

Counted loop (fa): fa quantifiers → statements af
A quantifier has the form: variable := init_val to final_val st B, where st B is an optional "such that" filter.

Examples:
fa i := 1 to n, j := i+1 to n → m[i,j] :=: m[j,i] af
fa i := 1 to n, j := i+1 to n st a[i] > a[j] → a[i] :=: a[j] af

Statements (2)
A do loop terminates when no guard is true anymore.

procedure p(f1: t1; ...; fn: tn) returns r: tr;
  declarations
  statements
end;

Example: computing the factorial.

procedure fact(i: int) returns f: int;
  if i < 0 → f := -1
  [] i = 0 or i = 1 → f := 1
  [] i > 1 → f := i * fact(i-1)
  fi
end;
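For comparison, a direct Java rendering of the same procedure (my own sketch; note that the guarded if chooses arbitrarily among true guards, while Java fixes the order; here the guards are mutually exclusive, so the behavior is identical):

    // Direct Java rendering of the guarded-command factorial; each branch
    // corresponds to one guard of the if ... fi above.
    public class Fact {
        static int fact(int i) {
            if (i < 0) return -1;               // i < 0
            if (i == 0 || i == 1) return 1;     // i = 0 or i = 1
            return i * fact(i - 1);             // i > 1
        }

        public static void main(String[] args) {
            System.out.println(fact(5));        // prints 120
        }
    }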

Procedures

p is the procedure name; f1: t1, ... are the formal parameters, passed by value (the default) or by reference (prefix the parameter with var); returns r: tr gives the name and type of the returned value (optional).

Execution:
call p(e1, ..., en)
x := p(e1, ..., en)

Concurrent execution: co S1 || S2 || ... || Sn oc

Ex. 1: x := 0; y := 0;
co x := x+1 || y := y+1 oc
z := x+y;

Ex. 2: co j := 1 to n → a[j] := 0 oc


Ex. 3: matrix product

var a,b,c: array [1:n,1:n] of real;

co Prod(i: 1..n, j: 1..n)::
  var sum: real := 0;
  fa k := 1 to n → sum := sum + a[i,k] * b[k,j] af
  c[i,j] := sum
oc
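A minimal Java sketch of the same computation, with one logical task per element of c, mirroring the co Prod(i, j) processes (it uses parallel streams instead of the co/oc notation; the sizes and names are illustrative):

    import java.util.stream.IntStream;

    // Parallel matrix product: one task per element c[i][j]. Tasks only
    // read a and b and write disjoint cells of c, so no synchronization
    // between them is needed.
    public class ParMatMul {
        public static void main(String[] args) {
            final int n = 4;
            double[][] a = new double[n][n], b = new double[n][n], c = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) { a[i][j] = i + j; b[i][j] = (i == j) ? 1 : 0; }

            IntStream.range(0, n * n).parallel().forEach(t -> {
                int i = t / n, j = t % n;
                double sum = 0;
                for (int k = 0; k < n; k++) sum += a[i][k] * b[k][j];
                c[i][j] = sum;
            });
            System.out.println("c[1][2] = " + c[1][2]); // b is the identity, so c equals a
        }
    }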

Concurrent execution (2)

Atomic actions
Indivisible actions that examine or modify the system state / program state.
No intermediate state arising in the implementation of an atomic action may be visible to the other processes.
Example: machine load and store instructions.
Reads and writes are atomic actions; each process has its own register set, and the intermediate states of evaluating a complex expression are stored in the process's own registers.
The execution of a concurrent program consists of an interleaving of the sequences of atomic actions executed by the individual processes.
History (trace): s0 → s1 → ... → sn

Even parallel execution can be modeled as a linear trace: the effect of executing a set of atomic actions in parallel is the same as executing them in some serial order, because they are indivisible. Unacceptable interactions are ruled out through synchronization.

Variables can be partitioned into:
R: variables that are read but not modified
W: variables that are modified (and possibly also read)
Two parts of a program are independent if the W set of each part is disjoint from both the R and W sets of the other part.

Critical reference: a variable in an expression that is modified by another process.

Atomic actions (2)
y := 0; z := 0;
co x := y+z || y := 1 || z := 2 oc;

At the end, x can have any of the values 0, 1, 2, or 3.

Correctness requires that, while an expression is being evaluated, its variables are not modified by other processes.

Most statements in concurrent programs do not satisfy this condition!
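A hedged Java demonstration of the race above (my own sketch; which values the first thread reads depends on scheduling, so repeated runs can print different results):

    // Race on x := y + z running concurrently with y := 1 and z := 2:
    // depending on the interleaving, x ends up 0, 1, 2 or 3.
    public class RaceDemo {
        static volatile int x, y, z;

        public static void main(String[] args) throws InterruptedException {
            for (int run = 0; run < 10; run++) {
                x = 0; y = 0; z = 0;
                Thread t1 = new Thread(() -> x = y + z); // two critical references
                Thread t2 = new Thread(() -> y = 1);
                Thread t3 = new Thread(() -> z = 2);
                t1.start(); t2.start(); t3.start();
                t1.join(); t2.join(); t3.join();
                System.out.println("x = " + x);
            }
        }
    }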

Atomic actions (3): atomic evaluation
x := e satisfies the property if either:
e contains at most one critical reference and x is not read by other processes, or
e contains no critical references (so x may be read by other processes).

If this condition is not met, we use synchronization mechanisms to ensure atomicity.

At-Most-Once
At most one shared variable may be referenced!

If the property holds, execution will appear atomic: the shared variable is read / written only once.

Examples (x and y initially 0, each pair running concurrently):
x := x+1 || y := y+1: final values x = 1, y = 1 (both satisfy at-most-once)
x := y+1 || y := y+1: x can be 1 or 2, y = 1
x := y+1 || y := x+1: possible results (x, y) are (1, 2), (2, 1) or (1, 1)

Synchronization
The solution:
Atomic actions: <S>
Conditional synchronization using await: <await B → S>

Conditional synchronization only: <await B>, a spin loop / busy waiting!

When can we write S instead of <S>? When S is a single statement that satisfies at-most-once.

If B satisfies at-most-once, how can <await B> be implemented? As a spin loop: while (not B);
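A minimal Java sketch of this reduction, assuming the condition reads a single volatile flag (so it satisfies at-most-once); the class name and the 100 ms delay are illustrative:

    // Busy-waiting implementation of <await flag>: legal here because the
    // condition reads a single volatile variable, i.e. it is at-most-once.
    public class SpinAwait {
        static volatile boolean flag = false;

        public static void main(String[] args) throws InterruptedException {
            Thread waiter = new Thread(() -> {
                while (!flag) { /* spin: re-test the condition */ }
                System.out.println("condition observed, proceeding");
            });
            waiter.start();
            Thread.sleep(100);   // some other process eventually makes B true
            flag = true;
            waiter.join();
        }
    }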

Example: producer - consumer

[Figure: Producer → Buffer → Consumer]

var buf: int, p: int := 0, c: int := 0;

co Producer:: var a: array [1:n] of int;
  do p < n → <await p = c>; buf := a[p+1]; p := p+1 od
oc

co Consumer:: var b: array [1:n] of int;
  do c < n → <await p > c>; b[c+1] := buf; c := c+1 od
oc

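A hedged Java sketch of the same one-slot protocol: instead of spinning on <await p = c> and <await p > c>, it blocks on a monitor with wait/notifyAll (the class and method names are mine):

    // One-slot producer/consumer: the producer waits until the slot is
    // empty (p = c), the consumer waits until it is full (p > c).
    public class OneSlotBuffer {
        private int buf, p = 0, c = 0;

        synchronized void put(int v) throws InterruptedException {
            while (p > c) wait();          // <await p = c>
            buf = v; p++;
            notifyAll();
        }

        synchronized int get() throws InterruptedException {
            while (p == c) wait();         // <await p > c>
            int v = buf; c++;
            notifyAll();
            return v;
        }

        public static void main(String[] args) {
            OneSlotBuffer slot = new OneSlotBuffer();
            final int n = 5;
            new Thread(() -> {
                try { for (int i = 1; i <= n; i++) slot.put(i); }
                catch (InterruptedException e) { }
            }).start();
            new Thread(() -> {
                try { for (int i = 1; i <= n; i++) System.out.println(slot.get()); }
                catch (InterruptedException e) { }
            }).start();
        }
    }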

Data parallelism, iterative algorithms:
do true →
  co i := 1 to n → code_process_i oc
od

co Process(i: 1..n)::
  do true →
    code_process_i
    barrier
  od
oc

All processes synchronize at the end of each cycle.
Useful when each iteration depends entirely on the results of the previous iteration.

Barrier synchronization
The first solution is inefficient: processes are created and destroyed by co ... oc on every iteration.

[Figure: code_process_i between successive barriers inside a single co ... oc]

Barrier synchronization (2)

[Figure: repeated co ... oc blocks along the time axis]

var grila, nou: array [0:n+1, 0:n+1] of real;
var converge: boolean := false;
co CalculGrila(i: 1..n, j: 1..n)::
  do not converge →
    nou[i,j] := (grila[i-1,j] + grila[i+1,j] + grila[i,j-1] + grila[i,j+1]) / 4;
    barrier
    test_convergent;
    barrier
    grila[i,j] := nou[i,j];
    barrier
  od
oc
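A minimal Java sketch of the same barrier pattern using java.util.concurrent.CyclicBarrier, simplified to one worker per row and a fixed iteration count instead of the convergence test (names such as GridBarrier are mine):

    import java.util.concurrent.CyclicBarrier;

    // Jacobi-style grid update with barriers: every worker finishes writing
    // 'nou' before any worker copies it back into 'grila'.
    public class GridBarrier {
        static final int N = 6, STEPS = 50;
        static final double[][] grila = new double[N + 2][N + 2];
        static final double[][] nou = new double[N + 2][N + 2];

        public static void main(String[] args) {
            for (int j = 0; j < N + 2; j++) grila[0][j] = 1.0; // fixed hot boundary
            CyclicBarrier barrier = new CyclicBarrier(N);
            for (int row = 1; row <= N; row++) {
                final int i = row;
                new Thread(() -> {
                    try {
                        for (int s = 0; s < STEPS; s++) {
                            for (int j = 1; j <= N; j++)
                                nou[i][j] = (grila[i - 1][j] + grila[i + 1][j]
                                           + grila[i][j - 1] + grila[i][j + 1]) / 4;
                            barrier.await();  // all of 'nou' written
                            for (int j = 1; j <= N; j++) grila[i][j] = nou[i][j];
                            barrier.await();  // copy-back complete, next iteration
                        }
                        if (i == 1) System.out.println("grila[1][1] = " + grila[1][1]);
                    } catch (Exception e) { Thread.currentThread().interrupt(); }
                }).start();
            }
        }
    }

CyclicBarrier.await() guarantees that writes made before a barrier trip are visible to all threads after it, which is exactly the role of barrier in the pseudocode.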

Example: grid computation. Used, for instance, to solve Laplace's equation (∇²u = 0) with the finite-difference method (e.g., Jacobi iteration), but also in graphics.

Summary
The problem space of parallel and distributed computing
Parallel architectures
Programming methods
Languages for describing parallel algorithms
Concurrency and synchronization
Atomicity
Barriers

Questions?