L22: Parallel Programming Language Features (Chapel and MapReduce)

Transcript

  • L22: Parallel Programming Language Features (Chapel and MapReduce) December 1, 2009

  • Administrative
    Schedule for the rest of the semester:
      Midterm Quiz = long homework
        Handed out over the holiday (Tuesday, Dec. 1)
        Return by Dec. 15
      Projects
        1-page status report on Dec. 3: handin cs4961 pdesc
        Poster session dry run (to see material) Dec. 8
        Poster details (next slide)
    Mailing list: cs4961@list.eng.utah.edu

  • Poster Details
    I am providing: foam core, tape, push pins, easels
    Plan on 2 ft by 3 ft or so of material (9-12 slides)
    Content:
      Problem description and why it is important
      Parallelization challenges
      Parallel algorithm
      How are the two programming models combined?
      Performance results (speedup over sequential)

  • Outline
    Global view languages
    Chapel programming language
    MapReduce (popularized by Google)
    Reading: Ch. 8 and 9 in textbook
    Sources for today's lecture: Brad Chamberlain (Cray), John Gilbert (UCSB)

  • Shifting Gears
    What are some important features of parallel programming languages (Ch. 9)?
      Correctness
      Performance
      Scalability
      Portability
    And what about ease of programming?

  • Global View Versus Local View
    P-independence: a program is P-independent if and only if it always produces the same output on the same input regardless of the number or arrangement of processors
    Global view: a language construct that preserves P-independence
      Example: today's lecture
    Local view: does not preserve P-independent program behavior
      Example from the previous lecture?

  • What is a PGAS Language?
    PGAS = Partitioned Global Address Space
    Presents a global address space to the application developer
      May still run on a distributed-memory architecture
    Examples: Co-Array Fortran, Unified Parallel C
    Modern parallel programming languages present a global address space abstraction
      Performance? Portability?
    A closer look at a new global-view language, Chapel
      From the DARPA High Productivity Computing Systems program
      Language design initiated around 2003
      Also X10 (IBM) and Fortress (Sun)

  • Chapel Domain Example: Sparse Matrices
    Recall the sparse matrix-vector multiply computation from P1 & P2:

    for (j=0; j
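
    As a point of reference, here is a minimal C++ sketch of the sequential compressed-sparse-row (CSR) sparse matrix-vector multiply t = A*x assumed here; the array names (rowstr, colind, a, x, t) are illustrative and not necessarily the project's actual variables:

      // CSR sparse matrix-vector multiply: t = A * x.
      // rowstr[j] .. rowstr[j+1]-1 index the nonzeros of row j; colind gives their columns.
      void spmv(int nr, const int* rowstr, const int* colind,
                const double* a, const double* x, double* t) {
          for (int j = 0; j < nr; j++) {                 // one row at a time
              t[j] = 0.0;
              for (int k = rowstr[j]; k < rowstr[j + 1]; k++)
                  t[j] += a[k] * x[colind[k]];           // accumulate row j's nonzeros
          }
      }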

  • Chapel Formulation
    Declare a dense domain for the sparse matrix:
      const dnsDom = [1..n, 1..n];
    Declare a sparse domain and an array over it:
      var spsDom: sparse subdomain(dnsDom);
      var spsArr: [spsDom] real;
    Now you need to initialize spsDom. As an example:
      spsDom = [(1,2),(2,3),(2,7),(3,6),(4,8),(5,9),(6,4),(9,8)];
    Iterate over the sparse domain:
      forall (i,j) in spsDom do
        result[i] = result[i] + spsArr(i,j) * input[j];

  • I. MapReduce
    What is MapReduce?
    Example computing environment
    How it works
    Fault tolerance
    Debugging
    Performance

  • What is MapReduce?
    Parallel programming model meant for large clusters
      User implements Map() and Reduce()
    Parallel computing framework
      Libraries take care of EVERYTHING else:
        Parallelization
        Fault tolerance
        Data distribution
        Load balancing
    Useful model for many practical tasks (large data)

  • Map and Reduce
    Borrowed from functional programming languages (e.g., Lisp)
    Map(): process a key/value pair to generate intermediate key/value pairs
    Reduce(): merge all intermediate values associated with the same key

  • Example: Counting Words
    Map()
      Input: a document's contents
      Parses the file and emits <word, count> pairs, e.g., <"the", 1>
    Reduce()
      Sums values for the same key and emits <word, total>, e.g., <"the", 1>, <"the", 1> => <"the", 2>

  • Example Use of MapReduce
    Counting words in a large set of documents:

    map(string key, string value)
      // key: document name
      // value: document contents
      for each word w in value
        EmitIntermediate(w, 1);

    reduce(string key, iterator values)
      // key: word
      // values: list of counts
      int result = 0;
      for each v in values
        result += ParseInt(v);
      Emit(AsString(result));
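
    A minimal single-machine C++ sketch of the same word-count pipeline, with an in-memory map standing in for the framework's intermediate files; the function names (mapFn, reduceFn) are illustrative and this is not the actual MapReduce library:

      #include <iostream>
      #include <map>
      #include <sstream>
      #include <string>
      #include <vector>

      // In-memory stand-in for the framework's intermediate key/value storage.
      static std::map<std::string, std::vector<int>> intermediate;

      // map(): emit a <word, 1> pair for every word in the document contents.
      void mapFn(const std::string& /*docName*/, const std::string& contents) {
          std::istringstream in(contents);
          std::string word;
          while (in >> word)
              intermediate[word].push_back(1);          // EmitIntermediate(w, 1)
      }

      // reduce(): sum all counts emitted for one word and emit the total.
      void reduceFn(const std::string& word, const std::vector<int>& counts) {
          int result = 0;
          for (int v : counts) result += v;
          std::cout << word << " " << result << "\n";   // Emit(AsString(result))
      }

      int main() {
          mapFn("doc1", "the quick fox jumps over the lazy dog");
          mapFn("doc2", "the dog barks");
          for (const auto& kv : intermediate)           // one reduce call per unique word
              reduceFn(kv.first, kv.second);
          return 0;
      }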

  • Google Computing Environment
    Typical clusters contain 1000s of machines
      Dual-processor x86s running Linux with 2-4 GB memory
    Commodity networking
      Typically 100 Mb/s or 1 Gb/s
    IDE drives connected to individual machines
    Distributed file system

  • How MapReduce Works
    User to-do list:
      Indicate:
        Input/output files
        M: number of map tasks
        R: number of reduce tasks
        W: number of machines
      Write map and reduce functions
      Submit the job
    This requires no knowledge of parallel/distributed systems!!!
    What about everything else?
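
    A hypothetical C++ job-description struct collecting the items on this to-do list; the names and types are illustrative only, not Google's actual MapReduce API:

      #include <functional>
      #include <string>
      #include <vector>

      struct JobSpec {
          std::vector<std::string> inputFiles;   // input on the distributed file system
          std::string outputDir;                 // where the R output files will go
          int M;                                 // number of map tasks
          int R;                                 // number of reduce tasks
          int W;                                 // number of worker machines
          // The two user-written functions; the library handles everything else.
          std::function<void(const std::string&, const std::string&)> map;
          std::function<void(const std::string&,
                             const std::vector<std::string>&)> reduce;
      };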

  • Data Distribution
    Input files are split into M pieces on the distributed file system
      Typically ~64 MB blocks
    Intermediate files created by map tasks are written to local disk
    Output files are written to the distributed file system
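
    A back-of-the-envelope C++ sketch, assuming the ~64 MB block size above, of how many splits a file of a given size produces; the function name is illustrative:

      // Number of ~64 MB input splits for a file of fileBytes bytes.
      long long numSplits(long long fileBytes) {
          const long long kBlock = 64LL * 1024 * 1024;   // ~64 MB per split
          return (fileBytes + kBlock - 1) / kBlock;      // ceiling division
      }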

  • Assigning Tasks
    Many copies of the user program are started
    Tries to exploit data locality by running map tasks on machines that hold the data
    One instance becomes the master
    The master finds idle machines and assigns them tasks

  • Execution (map)
    Map workers read the contents of their corresponding input partition
    Perform the user-defined map computation to create intermediate <key, value> pairs
    Buffered output pairs are periodically written to local disk
      Partitioned into R regions by a partitioning function

  • Partition Function
    Example partition function: hash(key) mod R
    Why do we need this?
    Example scenario:
      Want to do word counting on 10 documents
      5 map tasks, 2 reduce tasks
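
    A minimal C++ sketch of the hash(key) mod R idea, using std::hash as an assumed hash function; with R = 2 every occurrence of a given word lands at the same reducer, so its counts can be summed in one place:

      #include <functional>
      #include <iostream>
      #include <string>

      // Decide which of the R reduce tasks receives a given key.
      int partition(const std::string& key, int R) {
          return static_cast<int>(std::hash<std::string>{}(key) % R);
      }

      int main() {
          // With 2 reduce tasks, each word maps deterministically to reducer 0 or 1.
          for (const char* w : {"the", "quick", "fox"})
              std::cout << w << " -> reducer " << partition(w, 2) << "\n";
          return 0;
      }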

  • Execution (reduce)
    Reduce workers iterate over the ordered intermediate data
      For each unique key encountered, its values are passed to the user's reduce function, e.g., <key, list of values>
    Output of the user's reduce function is written to an output file on the global file system
    When all tasks have completed, the master wakes up the user program
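
    A C++ sketch, under the same illustrative names as above, of how a reduce worker could order the intermediate pairs and hand each unique key's values to the user's reduce function in a single call:

      #include <algorithm>
      #include <string>
      #include <utility>
      #include <vector>

      using KV = std::pair<std::string, std::string>;

      void reducePhase(std::vector<KV> intermediate,
                       void (*userReduce)(const std::string&,
                                          const std::vector<std::string>&)) {
          std::sort(intermediate.begin(), intermediate.end());   // order by key
          std::size_t i = 0;
          while (i < intermediate.size()) {
              const std::string key = intermediate[i].first;
              std::vector<std::string> values;
              while (i < intermediate.size() && intermediate[i].first == key)
                  values.push_back(intermediate[i++].second);     // gather this key's values
              userReduce(key, values);                            // one call per unique key
          }
      }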

  • Observations
    No reduce can begin until map is complete
    Tasks are scheduled based on the location of data
    If a map worker fails any time before reduce finishes, its task must be completely rerun
    The master must communicate the locations of intermediate files
    The MapReduce library does most of the hard work for us!

  • Fault Tolerance
    Workers are periodically pinged by the master
      No response = failed worker
    The master writes periodic checkpoints
    On errors, workers send a "last gasp" UDP packet to the master
      Records that cause deterministic crashes are detected and skipped

  • Fault Tolerance
    Input file blocks are stored on multiple machines
    When the computation is almost done, in-progress tasks are rescheduled
      Avoids stragglers

  • Debugging
    Offers human-readable status info on an HTTP server
      Users can see jobs completed, jobs in progress, processing rates, etc.
    Sequential implementation
      Executes the program sequentially on a single machine
      Allows use of gdb and other debugging tools

  • Performance
    Tests run on 1,800 machines
      4 GB memory
      Dual-processor 2 GHz Xeons with Hyperthreading
      Dual 160 GB IDE disks
      Gigabit Ethernet per machine
    Run over a weekend when the machines were mostly idle
    Benchmark: Sort
      Sort 10^10 100-byte records (roughly 1 TB of data)

  • Performance
    [Figures comparing three runs: Normal, No Backup Tasks, 200 Processes Killed]

  • MapReduce Conclusions
    Simplifies large-scale computations that fit this model
    Allows the user to focus on the problem without worrying about details
    Computer architecture is not very important
      Portable model

  • References
    Jeffrey Dean and Sanjay Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters"
    Josh Carter, http://multipart-mixed.com/software/mapreduce_presentation.pdf
    Ralf Lammel, "Google's MapReduce Programming Model Revisited"
    http://code.google.com/edu/parallel/mapreduce-tutorial.html

    Related: Sawzall, Pig, Hadoop
