Lecture 08: Principles of Parallel Algorithm Design
Concurrent and Multicore Programming, CSE 436/536
Department of Computer Science and Engineering, Yonghong Yan
[email protected] www.secs.oakland.edu/~yan


Page 1: Lecture 08: Principles of Parallel Algorithm Design

Lecture 08: Principles of Parallel Algorithm Design

Concurrent and Multicore Programming, CSE 436/536

Department of Computer Science and Engineering, Yonghong Yan

[email protected]
www.secs.oakland.edu/~yan

Page 2: Lecture 08: Principles of Parallel Algorithm Design

Last lecture: Algorithms and Concurrency

• Introduction to Parallel Algorithms
  – Tasks and Decomposition
  – Processes and Mapping

• Decomposition Techniques
  – Recursive Decomposition (divide and conquer)
  – Data Decomposition (input, output, input + output, intermediate)

• Terms and concepts
  – Task dependency graph, task granularity, degree of concurrency
  – Task interaction graph, critical path

• Examples:
  – Dense vector addition, matrix-vector product
  – Dense matrix-matrix product
  – Database query
  – Quicksort, MIN

Page 3: Lecture 08: Principles of Parallel Algorithm Design

Today's lecture

• Decomposition Techniques (continued)
  – Exploratory Decomposition
  – Hybrid Decomposition

Mapping tasks to processes/cores/CPUs/PEs:
• Characteristics of Tasks and Interactions
  – Task Generation, Granularity, and Context
  – Characteristics of Task Interactions
• Mapping Techniques for Load Balancing
  – Static and Dynamic Mapping
• Methods for Minimizing Interaction Overheads
• Parallel Algorithm Design Models

Page 4: Lecture 08: Principles of Parallel Algorithm Design

Exploratory Decomposition

• In data and recursive decomposition, the decomposition is fixed/static from the design
• Exploratory decomposition: exploration (search) of a state space of solutions
  – the problem decomposition reflects the shape of the execution
  – decomposition goes hand-in-hand with execution
• Examples
  – discrete optimization, e.g., 0/1 integer programming
  – theorem proving
  – game playing

Page 5: Lecture 08: Principles of Parallel Algorithm Design

Exploratory Decomposition: Example

Solve a 15-puzzle
• A sequence of three moves takes state (a) to the final state (d)
• From an arbitrary state, we must search for a solution

Page 6: Lecture 08: Principles of Parallel Algorithm Design

Exploratory Decomposition: Example

Solving a 15-puzzle
• Search
  – generate the successor states of the current state
  – explore each as an independent task (a sketch follows)
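A minimal sketch of this exploratory decomposition using OpenMP tasks. The `State` layout, `is_goal()`, and `expand()` are hypothetical placeholders for the 15-puzzle specifics; the shared `found` flag stops the search once any task reaches a solution:

```c
#include <omp.h>

typedef struct { unsigned char tiles[16]; } State;  /* hypothetical board layout */
extern int is_goal(const State *s);                 /* assumed: tests for a solution */
extern int expand(const State *s, State succ[], int max); /* generates successors */

int found = 0;                       /* set once any task finds a goal */

void search(const State *s, int depth) {
    int f;
    #pragma omp atomic read
    f = found;
    if (f) return;                   /* another task already succeeded */
    if (is_goal(s)) {
        #pragma omp atomic write
        found = 1;
        return;
    }
    if (depth == 0) return;

    State succ[4];                   /* a 15-puzzle state has at most 4 moves */
    int n = expand(s, succ, 4);
    for (int i = 0; i < n; i++) {
        #pragma omp task firstprivate(i) shared(succ)  /* one task per successor */
        search(&succ[i], depth - 1);
    }
    #pragma omp taskwait             /* keeps succ[] alive for the child tasks */
}
/* usage:
 *   #pragma omp parallel
 *   #pragma omp single
 *   search(&start, MAX_DEPTH);
 */
```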

Page 7: Lecture 08: Principles of Parallel Algorithm Design

Exploratory Decomposition Speedup

Solve a 15-puzzle
• The decomposition behaves according to the parallel formulation
  – It may change the total amount of work done
Execution terminates when a solution is found

Page 8: Lecture 08: Principles of Parallel Algorithm Design

Speculative Decomposition

• Dependencies between tasks are not known a priori
  – It is impossible to identify independent tasks
• Two approaches
  – Conservative approaches, which identify independent tasks only when they are guaranteed to have no dependencies
    • May yield little concurrency
  – Optimistic approaches, which schedule tasks even when they may potentially be inter-dependent
    • Roll back the changes in case of an error (a sketch follows)
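A minimal sketch of the optimistic approach, assuming both continuations are side-effect free so that "rollback" reduces to discarding the unused result; `slow_cond`, `branch_a`, and `branch_b` are hypothetical placeholders:

```c
#include <omp.h>

extern int    slow_cond(void);  /* expensive predicate                   */
extern double branch_a(void);   /* work needed only if the cond is true  */
extern double branch_b(void);   /* work needed only if the cond is false */

double speculate(void) {
    int cond;
    double a, b;
    #pragma omp parallel sections
    {
        #pragma omp section
        cond = slow_cond();
        #pragma omp section
        a = branch_a();         /* speculative: may be wasted work */
        #pragma omp section
        b = branch_b();         /* speculative: may be wasted work */
    }
    return cond ? a : b;        /* "rollback" = discard the loser  */
}
```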

Page 9: Lecture 08: Principles of Parallel Algorithm Design

Speculative Decomposition: Example

Discrete event simulation
• Centralized time-ordered event list
  – you get up → get ready → drive to work → work → eat lunch → work some more → drive back → eat dinner → sleep
• Simulation
  – extract the next event in time order
  – process the event
  – if required, insert new events into the event list
• Optimistic event scheduling
  – assume the outcomes of all prior events
  – speculatively process the next event
  – if the assumption is incorrect, roll back its effects and continue

Page 10: Lecture 08: Principles of Parallel Algorithm Design

Speculative Decomposition: Example

Simulation of a network of nodes
• Simulate network behavior for various inputs and node delays
  – The inputs change dynamically
    • Thus the task dependencies are unknown
• Speculative execution: guess the tasks' inputs
  – Correct: parallelism
  – Incorrect: roll back and redo

Page 11: Lecture 08: Principles of Parallel Algorithm Design

Speculative vs. Exploratory

• Exploratory decomposition
  – The output of the multiple tasks from a branch is unknown
  – The parallel program may perform more, less, or the same amount of work as the serial program
• Speculative decomposition
  – The input at a branch leading to multiple parallel tasks is unknown
  – The parallel program performs more or the same amount of work as the serial algorithm

Page 12: Lecture 08: Principles of Parallel Algorithm Design

Hybrid Decompositions

Use multiple decomposition techniques together
• One decomposition alone may not be optimal for concurrency
  – Quicksort's recursive decomposition limits concurrency (Why?)
• Combined recursive and data decomposition for MIN

Page 13: Lecture 08: Principles of Parallel Algorithm Design

Today's lecture

• Decomposition Techniques (continued)
  – Exploratory Decomposition
  – Hybrid Decomposition
Mapping tasks to processes/cores/CPUs/PEs:
• Characteristics of Tasks and Interactions
  – Task Generation, Granularity, and Context
  – Characteristics of Task Interactions
• Mapping Techniques for Load Balancing
  – Static and Dynamic Mapping
• Methods for Minimizing Interaction Overheads
• Parallel Algorithm Design Models

Page 14: Lecture 08: Principles of Parallel Algorithm Design

Characteristics of Tasks

• Theory
  – Decomposition: how to parallelize, in theory
    • Concurrency available in a problem
• Practice
  – Task creation, interactions, and mapping to PEs
    • Realizing the concurrency in practice
  – Characteristics of tasks and task interactions
    • Impact the choice and performance of parallelism

• Characteristics of tasks
  – Task generation strategies
  – Task sizes (the amount of work, e.g., FLOPs)
  – Size of data associated with tasks

Page 15: Lecture 08: Principles of Parallel Algorithm Design

Task Generation

• Static task generation
  – Concurrent tasks and the task graph are known a priori (before execution)
  – Typically arises from recursive or data decomposition
  – Examples
    • Matrix operations
    • Graph algorithms
    • Image processing applications
    • Other regularly structured problems
• Dynamic task generation
  – Computations formulate the concurrent tasks and the task graph on the fly
    • Not explicit a priori, though high-level rules or guidelines are known
  – Typically arises from exploratory or speculative decompositions
    • Also possible with recursive decomposition, e.g., quicksort
  – A classic example: game playing
    • 15-puzzle board

Page 16: Lecture 08: Principles of Parallel Algorithm Design

Task Sizes / Granularity

• The amount of work → the amount of time to complete
  – E.g., FLOPs, number of memory accesses
• Uniform:
  – Often results from even data decomposition, i.e., regular
• Non-uniform
  – Quicksort: depends on the choice of pivot

Page 17: Lecture 08: Principles of Parallel Algorithm Design

Size of Data Associated with Tasks

• May be small or large compared to the task sizes
  – How it relates to the input and/or output data sizes
  – Examples:
    • size(input) < size(computation), e.g., 15-puzzle
    • size(input) = size(computation) > size(output), e.g., MIN
    • size(input) = size(output) < size(computation), e.g., sort
• Consider the effort needed to reconstruct the same task context
  – small data: small effort: the task can easily migrate to another process
  – large data: large effort: ties the task to a process
• Reconstructing the context vs. communicating it
  – It depends

Page 18: Lecture 08: Principles of Parallel Algorithm Design

Characteristics of Task Interactions

• Aspects of interactions
  – What: shared data or synchronizations, and the sizes of the media
  – When: the timing
  – Who: with which task(s), and the overall topology/patterns
  – Do we know the details of the above three before execution?
  – How: does it involve one task or both?
    • An implementation concern: implicit or explicit

Orthogonal classifications
• Static vs. dynamic
• Regular vs. irregular
• Read-only vs. read-write
• One-sided vs. two-sided

Page 19: Lecture 08: Principles of Parallel Algorithm Design

Characteristics of Task Interactions

• Aspects of interactions
  – What: shared data or synchronizations, and the sizes of the media
  – When: the timing
  – Who: with which task(s), and the overall topology/patterns
  – Do we know the details of the above three before execution?
  – How: does it involve one task or both?

• Static interactions
  – Partners and timing (and the rest) are known a priori
  – Relatively simpler to code into programs
• Dynamic interactions
  – The timing or the interacting tasks cannot be determined a priori
  – Harder to code, especially using explicit interactions

Page 20: Lecture 08: Principles of Parallel Algorithm Design

Characteristics of Task Interactions

• Aspects of interactions
  – What: shared data or synchronizations, and the sizes of the media
  – When: the timing
  – Who: with which task(s), and the overall topology/patterns
  – Do we know the details of the above three before execution?
  – How: does it involve one task or both?

• Regular interactions
  – The interactions follow a definite pattern
    • E.g., a mesh or a ring
  – Can be exploited for an efficient implementation
• Irregular interactions
  – Lack well-defined topologies
  – Modeled as a graph

Page 21: Lecture 08: Principles of Parallel Algorithm Design

Example of Regular Static Interaction

Image processing algorithms: dithering, edge detection
• Nearest-neighbor interactions on a 2-D mesh

Page 22: Lecture 08: Principles of Parallel Algorithm Design

Example of Irregular Static Interaction

Sparse matrix-vector multiplication

Page 23: Lecture 08: Principles of Parallel Algorithm Design

Characteristics of Task Interactions

• Aspects of interactions
  – What: shared data or synchronizations, and the sizes of the media

• Read-only interactions
  – Tasks only read data items associated with other tasks
• Read-write interactions
  – Tasks read, as well as modify, data items associated with other tasks
  – Harder to code
    • Require additional synchronization primitives
      – to avoid read-write and write-write ordering races (a sketch follows)

(Figure: tasks T1 and T2 accessing shared data; T1 writes, T2 reads)
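A minimal sketch of a read-write interaction made safe with a synchronization primitive; the shared accumulator is a hypothetical example:

```c
#include <omp.h>

double shared_sum = 0.0;   /* data item read and written by many tasks */

void accumulate(const double *x, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        #pragma omp atomic      /* serializes just this one update */
        shared_sum += x[i];
    }
}
```

For this particular pattern a `reduction(+ : shared_sum)` clause would avoid most of the synchronization; the atomic is shown only to make the ordering requirement explicit.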

Page 24: Lecture 08: Principles of Parallel Algorithm Design

Characteristics of Task Interactions

• Aspects of interactions
  – What: shared data or synchronizations, and the sizes of the media
  – When: the timing
  – Who: with which task(s), and the overall topology/patterns
  – Do we know the details of the above three before execution?
  – How: does it involve one task or both?
    • An implementation concern: implicit or explicit

• One-sided
  – initiated and completed independently by one of the two interacting tasks
    • GET and PUT
• Two-sided
  – both tasks coordinate in an interaction
    • SEND + RECV (a sketch follows)
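A minimal sketch contrasting the two styles in MPI (assuming at least two ranks; error checking omitted). The SEND/RECV pair needs both tasks to participate, while the PUT is issued by rank 0 alone:

```c
#include <mpi.h>

void exchange(int rank) {
    double buf = rank, remote = -1.0;

    /* two-sided: both tasks take part (SEND + RECV) */
    if (rank == 0)
        MPI_Send(&buf, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&remote, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    /* one-sided: rank 0 alone PUTs into rank 1's window */
    double target = 0.0;
    MPI_Win win;
    MPI_Win_create(&target, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);          /* fences are collective */
    if (rank == 0)
        MPI_Put(&buf, 1, MPI_DOUBLE, /*target rank*/ 1,
                /*displacement*/ 0, 1, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);          /* completes the PUT everywhere */
    MPI_Win_free(&win);
}
```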

Page 25: Lecture 08: Principles of Parallel Algorithm Design

Today's lecture

• Decomposition Techniques (continued)
  – Exploratory Decomposition
  – Hybrid Decomposition
• Characteristics of Tasks and Interactions
  – Task Generation, Granularity, and Context
  – Characteristics of Task Interactions
• Mapping Techniques for Load Balancing
  – Static and Dynamic Mapping
• Methods for Minimizing Interaction Overheads
• Parallel Algorithm Design Models

Page 26: Lecture 08: Principles of Parallel Algorithm Design

Mapping Techniques

• Parallel algorithm design so far
  – Program decomposed
  – Characteristics of tasks and interactions identified

Assign the large number of concurrent tasks to an equal or relatively small number of processes for execution
• Though often we do a 1:1 mapping

Page 27: Lecture 08: Principles of Parallel Algorithm Design

Mapping Techniques

• Goal of mapping: minimize overheads
  – There is a cost to doing parallelism
    • Interactions and idling (serialization)

• Contradicting objectives: interactions vs. idling
  – Idling (serialization) increases with insufficient parallelism
  – Interactions increase with excessive concurrency
  – E.g., assigning all work to one processor trivially minimizes interaction, at the expense of significant idling

Page 28: Lecture 08: Principles of Parallel Algorithm Design

Mapping Techniques for Minimum Idling

• Execution alternates between stages of computation and interaction

• Mapping must simultaneously minimize idling and balance the load
  – Idling means not doing useful work
  – Load balance means every process does the same amount of work

• Merely balancing the load does not minimize idling

(Figure: a poor mapping that wastes 50% of the time)

Page 29: Lecture 08: Principles of Parallel Algorithm Design

Mapping Techniques for Minimum Idling

Static or dynamic mapping
• Static mapping
  – Tasks are mapped to processes a priori
  – Needs a good estimate of task sizes
  – Finding an optimal mapping may be NP-complete
• Dynamic mapping
  – Tasks are mapped to processes at runtime
  – Because:
    • Tasks are generated at runtime
    • Their sizes are not known a priori

• Other factors determining the choice of mapping technique
  – the size of the data associated with a task
  – the characteristics of inter-task interactions
  – even the programming models and target architectures

Page 30: Lecture 08: Principles of Parallel Algorithm Design

Schemes for Static Mapping

• Mappings based on data decomposition
  – Mostly 1-1 mapping
• Mappings based on task graph partitioning
• Hybrid mappings

Page 31: Lecture 08: Principles of Parallel Algorithm Design

Mappings Based on Data Partitioning

• Partition the computation using a combination of
  – Data decomposition
  – The "owner-computes" rule

Example: 1-D block distribution of a 2-D dense matrix, with a 1-1 mapping of tasks/data to processes (a sketch follows)
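A minimal sketch of such a 1-D block distribution combined with the owner-computes rule; the helpers and the row-scaling kernel are hypothetical illustrations, assuming n rows distributed over p processes:

```c
/* first and one-past-last rows owned by `rank`
 * (balanced even when p does not divide n) */
static inline int block_lo(int rank, int n, int p) {
    return (int)((long long)rank * n / p);
}
static inline int block_hi(int rank, int n, int p) {
    return (int)((long long)(rank + 1) * n / p);
}

/* owner-computes: each process updates only the rows it owns */
void scale_rows(double *A, int n, int m, int rank, int p, double c) {
    for (int i = block_lo(rank, n, p); i < block_hi(rank, n, p); i++)
        for (int j = 0; j < m; j++)
            A[i * m + j] *= c;      /* A stored row-major, n x m */
}
```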

Page 32: Lecture 08: Principles of Parallel Algorithm Design

Block Array Distribution Schemes

Multi-dimensional block distribution

In general, a higher-dimensional decomposition allows the use of a larger number of processes.

Page 33: Lecture 08: Principles of Parallel Algorithm Design

Block Array Distribution Schemes: Examples

Multiplying two dense matrices: A * B = C
• Partition the output matrix C using a block decomposition
  – Load balance: each task computes the same number of elements of C
    • Note: each element of C corresponds to a single dot product
  – The choice of precise decomposition: 1-D (row/column) or 2-D
    • Determined by the associated communication overhead

Page 34: Lecture 08: Principles of Parallel Algorithm Design

Block Distribution and Data Sharing for Dense Matrix Multiplication

(Figure: A x B = C with the blocks mapped to processes P0-P3)

• Row-based 1-D
• Column-based 1-D
• Row/column-based 2-D

(A sketch of the row-based 1-D case follows.)
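A minimal shared-memory sketch of the row-based 1-D decomposition, assuming square n x n row-major matrices: each thread owns a block of rows of C and reads all of B:

```c
#include <omp.h>

void matmul_rows(const double *A, const double *B, double *C, int n) {
    #pragma omp parallel for schedule(static)  /* one block of rows per thread */
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double dot = 0.0;       /* each element of C is one dot product */
            for (int k = 0; k < n; k++)
                dot += A[i * n + k] * B[k * n + j];
            C[i * n + j] = dot;
        }
}
```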

Page 35: Lecture 08: Principles of Parallel Algorithm Design

Cyclic and Block-Cyclic Distributions

• Consider a block distribution for LU decomposition (Gaussian elimination)
  – The amount of computation per data item varies
  – A block decomposition would lead to significant load imbalance

Page 36: Lecture 08: Principles of Parallel Algorithm Design

LU Factorization of a Dense Matrix

(Figure: the decomposition's tasks, numbered 1-14)

A decomposition of LU factorization into 14 tasks

Page 37: Lecture 08: Principles of Parallel Algorithm Design

Block Distribution for LU

Notice the significant load imbalance

Page 38: Lecture 08: Principles of Parallel Algorithm Design

Block-Cyclic Distributions

• A variation of the block distribution scheme
  – Partition an array into many more blocks (i.e., tasks) than the number of available processes
  – Blocks are assigned to processes in a round-robin manner so that each process gets several non-adjacent blocks
  – An N-1 mapping of tasks to processes

• Used to alleviate the load-imbalance and idling problems.

Page 39: Lecture 08: Principles of Parallel Algorithm Design

Block-Cyclic Distribution for Gaussian Elimination

• The active submatrix shrinks as elimination progresses
• Assigning blocks in a block-cyclic fashion
  – Each PE receives blocks from different parts of the matrix
  – In one batch of the mapping, the PE doing the most work will most likely receive the least in the next batch

Page 40: Lecture 08: Principles of Parallel Algorithm Design

Block-Cyclic Distribution

• A cyclic distribution: a special case with block size = 1
• A block distribution: a special case with block size = n/p
  – n is the dimension of the matrix and p is the number of processes (a sketch follows)
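A minimal sketch of 1-D block-cyclic ownership; `owner` is a hypothetical helper that maps a row (or column) index to its process:

```c
/* row i belongs to process (i / b) mod p for block size b:
 * b = 1   gives the cyclic distribution,
 * b = n/p gives the plain block distribution. */
static inline int owner(int i, int b, int p) {
    return (i / b) % p;
}
```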


Page 41: Lecture 08: Principles of Parallel Algorithm Design

Block Partitioning and Random Mapping

Sparse matrix computations
• Load imbalance when using block-cyclic partitioning/mapping
  – more non-zero blocks go to the diagonal processes P0, P5, P10, and P15 than to the others
  – P12 gets nothing

Page 42: Lecture 08: Principles of Parallel Algorithm Design

Block Partitioning and Random Mapping

Page 43: Lecture 08: Principles of Parallel Algorithm Design

Graph Partitioning Based Data Decomposition

• Array-based partitioning and static mapping
  – Regular domains, i.e., rectangular, mostly dense matrices
  – Structured and regular interaction patterns
  – Quite effective in balancing the computations and minimizing the interactions

• Irregular domains
  – Sparse-matrix-related computations
  – Numerical simulations of physical phenomena
    • Cars, water/blood flow, geographic domains
• Partition the irregular domain so as to
  – Assign an equal number of nodes to each process
  – Minimize the edge count of the partition

Page 44: Lecture 08: Principles of Parallel Algorithm Design

Partitioning the Graph of Lake Superior

Random partitioning

Partitioning for minimum edge-cut

• Each mesh point has the same amount of computation
  – Easy for load balancing
• Minimize the number of edges cut
• Finding an optimal partition is NP-complete
  – Use heuristics

Page 45: Lecture 08: Principles of Parallel Algorithm Design

Mappings Based on Task Partitioning

• Schemes for static mapping
  – Mappings based on data partitioning
    • Mostly 1-1 mapping
  – Mappings based on task graph partitioning
  – Hybrid mappings

• Data partitioning
  – Data decomposition and then a 1-1 mapping of tasks to PEs

Partitioning a given task-dependency graph across processes
• Finding an optimal mapping for a general task-dependency graph is an NP-complete problem
• Excellent heuristics exist for structured graphs

Page 46: Lecture 08: Principles of Parallel Algorithm Design

Mapping a Binary Tree Dependency Graph

Mapping the dependency graph of quicksort to processes in a hypercube

• Hypercube: the n-dimensional analogue of a square and a cube
  – nodes whose numbers differ in exactly one bit are adjacent (a sketch follows)

Page 47: Lecture 08: Principles of Parallel Algorithm Design

Mapping a Sparse Graph

Sparse matrix-vector multiplication using data partitioning

Page 48: Lecture 08: Principles of Parallel Algorithm Design

Mapping a Sparse Graph

Sparse matrix-vector multiplication using task graph partitioning

13 items to communicate

Process 0: 0, 4, 5, 8
Process 1: 1, 2, 3, 7
Process 2: 6, 9, 10, 11

Page 49: Lecture 08: Principles of Parallel Algorithm Design

Hierarchical/Hybrid Mappings

• A single mapping may be inadequate
  – E.g., the task graph mapping of the binary tree (quicksort) cannot use a large number of processors
• Hierarchical mapping
  – Task graph mapping at the top level
  – Data partitioning within each level

Page 50: Lecture 08: Principles of Parallel Algorithm Design

Today's lecture

• Decomposition Techniques (continued)
  – Exploratory Decomposition
  – Hybrid Decomposition
• Characteristics of Tasks and Interactions
  – Task Generation, Granularity, and Context
  – Characteristics of Task Interactions
• Mapping Techniques for Load Balancing
  – Static Mapping
  – Dynamic Mapping
• Methods for Minimizing Interaction Overheads
• Parallel Algorithm Design Models

Page 51: Lecture 08: Principles of Parallel Algorithm Design

Schemes for Dynamic Mapping

• Also referred to as dynamic load balancing
  – Load balancing is the primary motivation for dynamic mapping
• Dynamic mapping schemes can be
  – Centralized
  – Distributed

Page 52: Lecture 08: Principles of Parallel Algorithm Design

Centralized Dynamic Mapping

• Processes are designated as masters or slaves
  – Workers ("slave" is politically incorrect)
• General strategy
  – The master holds the pool of tasks and acts as the central dispatcher
  – When a worker runs out of work, it requests more work from the master
• Challenge
  – As the number of processes increases, the master may become the bottleneck
• Approach
  – Chunk scheduling: a process picks up multiple tasks at once
  – Chunk size:
    • Large chunk sizes may lead to significant load imbalances as well
    • Schemes gradually decrease the chunk size as the computation progresses (a sketch follows)
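A minimal OpenMP sketch of chunk scheduling from a central pool; `do_task` is a hypothetical unit of work. `schedule(dynamic, chunk)` hands out fixed-size chunks, while `schedule(guided)` shrinks the chunk size as the computation progresses, as the last bullet suggests:

```c
#include <omp.h>

extern void do_task(int t);   /* hypothetical, possibly irregular, task */

void run_centralized(int ntasks) {
    /* idle threads grab the next chunk of iterations from a shared pool */
    #pragma omp parallel for schedule(guided)
    for (int t = 0; t < ntasks; t++)
        do_task(t);
}
```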


Page 53: Lecture 08: Principles of Parallel Algorithm Design

Distributed Dynamic Mapping

• All processes are created equal
  – Each can send work to, or receive work from, the others
• Alleviates the bottleneck of centralized schemes
• Four critical design questions:
  – how are sending and receiving processes paired together?
  – who initiates the work transfer?
  – how much work is transferred?
  – when is a transfer triggered?
• The answers are generally application specific

• Work stealing (a sketch follows)
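A minimal sketch of delegating distributed dynamic mapping to a tasking runtime; OpenMP and Cilk-style runtimes typically balance such task pools by work stealing, with idle threads stealing queued tasks from busy ones (`do_task` is again a hypothetical placeholder):

```c
#include <omp.h>

extern void do_task(int t);    /* hypothetical irregular task */

void run_distributed(int ntasks) {
    #pragma omp parallel
    #pragma omp single         /* one thread creates the task pool */
    for (int t = 0; t < ntasks; t++) {
        #pragma omp task firstprivate(t)
        do_task(t);            /* idle threads steal queued tasks */
    }                          /* implicit barrier: all tasks done */
}
```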


Page 54: Lecture 08: Principles of Parallel Algorithm Design

Today's lecture

• Decomposition Techniques (continued)
  – Exploratory Decomposition
  – Hybrid Decomposition
• Characteristics of Tasks and Interactions
  – Task Generation, Granularity, and Context
  – Characteristics of Task Interactions
• Mapping Techniques for Load Balancing
  – Static Mapping
  – Dynamic Mapping
• Methods for Minimizing Interaction Overheads
• Parallel Algorithm Design Models

Page 55: Lecture 08: Principles of Parallel Algorithm Design

Minimizing Interaction Overheads

Rules of thumb
• Maximize data locality
  – Where possible, reuse intermediate data
  – Restructure the computation so that data can be reused in smaller time windows
• Minimize the volume of data exchanged
  – partition the interaction graph to minimize edge crossings
• Minimize the frequency of interactions
  – Merge multiple interactions into one, e.g., aggregate small messages
• Minimize contention and hot-spots
  – Use decentralized techniques
  – Replicate data where necessary

Page 56: Lecture 08: Principles of Parallel Algorithm Design

Minimizing Interaction Overheads (continued)

Techniques
• Overlap computations with interactions
  – Use non-blocking communications (a sketch follows this list)
  – Multithreading
  – Prefetching to hide latencies
• Replicate data or computations to reduce communication
• Use group communications instead of point-to-point primitives
• Overlap interactions with other interactions
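A minimal halo-exchange-flavored sketch of overlapping computation with interaction via non-blocking MPI; the `left`/`right` ranks and the compute hooks are hypothetical, and error checking is omitted:

```c
#include <mpi.h>

void step(double *halo_in, double *halo_out, int n, int left, int right) {
    MPI_Request reqs[2];

    /* start the interaction ... */
    MPI_Irecv(halo_in,  n, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(halo_out, n, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... compute on interior points that need no remote data ... */
    /* compute_interior(); */

    /* ... and wait only when the boundary values are actually needed */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    /* compute_boundary(halo_in); */
}
```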


Page 57: Lecture 08: Principles of Parallel Algorithm Design

Today's lecture

• Decomposition Techniques (continued)
  – Exploratory Decomposition
  – Hybrid Decomposition
• Characteristics of Tasks and Interactions
  – Task Generation, Granularity, and Context
  – Characteristics of Task Interactions
• Mapping Techniques for Load Balancing
  – Static Mapping
  – Dynamic Mapping
• Methods for Minimizing Interaction Overheads
• Parallel Algorithm Design Models

Page 58: Lecture 08: Principles of Parallel Algorithm Design

Parallel Algorithm Models

• Ways of structuring a parallel algorithm
  – Decomposition technique
  – Mapping technique
  – Strategy to minimize interactions

• Data-Parallel Model
  – Each task performs similar operations on different data
  – Tasks are statically (or semi-statically) mapped to processes

• Task Graph Model
  – Use the task-dependency graph to guide the model toward better locality or lower interaction costs

Page 59: Lecture 08: Principles of Parallel Algorithm Design

Parallel Algorithm Models (continued)

• Master-Slave Model
  – The master (one or more) generates work
  – Dispatches work to workers
  – Dispatching may be static or dynamic

• Pipeline / Producer-Consumer Model
  – A stream of data is passed through a succession of processes, each of which performs some task on it
  – Multiple streams run concurrently

• Hybrid Models
  – Applying multiple models hierarchically
  – Applying multiple models sequentially to different phases of a parallel algorithm

Page 60: Lecture 08: Principles of Parallel Algorithm Design

References

• Adapted from the slides "Principles of Parallel Algorithm Design" by Ananth Grama
• Based on Chapter 3 of Introduction to Parallel Computing by Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar. Addison-Wesley, 2003.