What is Parallel Computing (1)


  • 7/31/2019 What is Parallel Computing (1)

    1/21

    What is Parallel Computing? Traditionally, software has been written for serial computation:

o To be run on a single computer having a single Central Processing Unit (CPU);
    o A problem is broken into a discrete series of instructions;
    o Instructions are executed one after another;
    o Only one instruction may execute at any moment in time.

In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:

o To be run using multiple CPUs;
    o A problem is broken into discrete parts that can be solved concurrently;
    o Each part is further broken down to a series of instructions;
    o Instructions from each part execute simultaneously on different CPUs.
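The decomposition above can be sketched in Python. The chunk-sum worker and the four-way split are illustrative choices, not part of the definition; a thread pool stands in for the multiple CPUs (for CPU-bound work a process pool would give true hardware parallelism):

```python
from concurrent.futures import ThreadPoolExecutor

def sum_part(part):
    # Each discrete part is a slice of the data; its instructions
    # (the additions) can run on a separate compute resource.
    return sum(part)

def parallel_sum(data, workers=4):
    # Break the problem into discrete parts that can be solved concurrently.
    size = max(1, (len(data) + workers - 1) // workers)
    parts = [data[i:i + size] for i in range(0, len(data), size)]
    # The parts execute simultaneously on the pool's workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum_part, parts))
    # Combine the partial results.
    return sum(partials)

print(parallel_sum(list(range(1000))))  # 499500
```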


The compute resources can include:
    o A single computer with multiple processors;
    o An arbitrary number of computers connected by a network;
    o A combination of both.

The computational problem usually demonstrates characteristics such as the ability to be:
    o Broken apart into discrete pieces of work that can be solved simultaneously;
    o Executed as multiple program instructions at any moment in time;
    o Solved in less time with multiple compute resources than with a single compute resource.

    The Universe is Parallel:

Parallel computing is an evolution of serial computing that attempts to emulate what has always been the state of affairs in the natural world: many complex, interrelated events happening at the same time, yet within a sequence. For example:

    o Galaxy formation
    o Planetary movement
    o Weather and ocean patterns
    o Tectonic plate drift
    o Rush hour traffic
    o Automobile assembly lines
    o Building a space shuttle
    o Ordering a hamburger at the drive-through.

    The Real World is Massively Parallel


    Uses for Parallel Computing:

Historically, parallel computing has been considered to be "the high end of computing", and has been used to model difficult scientific and engineering problems found in the real world. Some examples:

    o Atmosphere, Earth, Environment
    o Physics - applied, nuclear, particle, condensed matter, high pressure, fusion, photonics
    o Bioscience, Biotechnology, Genetics
    o Chemistry, Molecular Sciences
    o Geology, Seismology
    o Mechanical Engineering - from prosthetics to spacecraft


    Why Use Parallel Computing?

    Main Reasons:

Save time and/or money: In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings. Parallel clusters can be built from cheap, commodity components.

Solve larger problems: Many problems are so large and/or complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory. For example:

    o "Grand Challenge" ( en.wikipedia.org/wiki/Grand_Challenge ) problems requiringPetaFLOPS and PetaBytes of computing resources.

    o Web search engines/databases processing millions of transactions per second


Limits to serial computing:

    o Transmission speeds - the speed of a serial computer depends directly on how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements.

    o Limits to miniaturization - processor technology is allowing an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.

    o Economic limitations - it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.

Current computer architectures are increasingly relying upon hardware-level parallelism to improve performance:

    o Multiple execution units
    o Pipelined instructions
    o Multi-core

RAM model

    The Random Access Machine is the favorite model of a sequential computer. Its main features are:

    1. A computation unit with a user-defined program.
    2. A read-only input tape and a write-only output tape.
    3. An unbounded number of local memory cells.
    4. Each memory cell is capable of holding an integer of unbounded size.
    5. The instruction set includes operations for moving data between memory cells, comparisons and conditional branches, and simple arithmetic operations.
    6. Execution starts with the first instruction and ends when a HALT instruction is executed.
    7. All operations take unit time regardless of the lengths of operands.
    8. Time complexity = the number of instructions executed.
    9. Space complexity = the number of memory cells accessed.
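Since every operation takes unit time, time complexity on the RAM is simply an instruction count. A small illustrative sketch (the step counter is hypothetical instrumentation, not part of the model):

```python
def ram_sum(values):
    # Sum n numbers while counting unit-cost RAM instructions.
    steps = 0
    total = 0
    steps += 1            # initialise the accumulator
    for v in values:
        total += v        # one arithmetic operation per element
        steps += 1
    return total, steps

total, steps = ram_sum([3, 1, 4, 1, 5])
# steps grows linearly with n, so the time complexity is O(n).
print(total, steps)  # 14 6
```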


PRAM model

    The Parallel Random Access Machine is a straightforward and natural generalization of the RAM. It is an idealized model of a shared-memory SIMD machine. Its main features are:

    1. An unbounded collection of numbered RAM processors P0, P1, P2, ... (without tapes).
    2. An unbounded collection of shared memory cells M[0], M[1], M[2], ... .
    3. Each Pi has its own (unbounded) local memory (registers) and knows its index i.
    4. Each processor can access any shared memory cell (unless there is an access conflict, see below) in unit time.
    5. The input of a PRAM algorithm consists of n items stored in (usually the first) n shared memory cells.
    6. The output of a PRAM algorithm consists of n' items stored in n' shared memory cells.


7. PRAM instructions execute in 3-phase cycles:
       1. Read (if any) from a shared memory cell.
       2. Local computation (if any).
       3. Write (if any) to a shared memory cell.
    8. Processors execute these 3-phase PRAM instructions synchronously.
    9. Special assumptions have to be made about R-R and W-W shared memory access conflicts.
    10. The only way processors can exchange data is by writing into and reading from memory cells.
    11. P0 has a special activation register specifying the maximum index of an active processor. Initially, only P0 is active; it computes the number of required active processors, loads this register, and then the other corresponding processors start executing their programs.
    12. Computation proceeds until P0 halts, at which time all other active processors are halted.
    13. Parallel time complexity = the time elapsed for P0's computation.
    14. Space complexity = the number of shared memory cells accessed.
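The 3-phase discipline can be sketched directly: keeping the phases separate guarantees that every read completes before any write, which is exactly what synchronous execution requires. The helper below is illustrative, not a standard API:

```python
def pram_step(shared, read_idx, compute, write_idx):
    """Simulate one synchronous 3-phase PRAM cycle for p processors.

    Processor i reads shared[read_idx[i]], computes locally, and
    writes its result to shared[write_idx[i]].
    """
    p = len(read_idx)
    local = [shared[read_idx[i]] for i in range(p)]      # phase 1: all reads
    results = [compute(i, local[i]) for i in range(p)]   # phase 2: local computation
    for i in range(p):                                   # phase 3: all writes
        shared[write_idx[i]] = results[i]
    return shared

# Example: 4 processors each double their own cell (an EREW access pattern:
# no two processors touch the same cell).
M = [1, 2, 3, 4]
pram_step(M, [0, 1, 2, 3], lambda i, x: 2 * x, [0, 1, 2, 3])
print(M)  # [2, 4, 6, 8]
```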

    PRAM is an attractive and important model for designers of parallel algorithms. Why?

1. It is natural: the number of operations executed per one cycle on p processors is at most p.
    2. It is strong: any processor can read or write any shared memory cell in unit time.
    3. It is simple: it abstracts from any communication or synchronization overhead, which makes the complexity and correctness analysis of PRAM algorithms easier. Therefore,
    4. It can be used as a benchmark: if a problem has no feasible/efficient solution on the PRAM, it has no feasible/efficient solution on any parallel machine.
    5. It is useful: it is an idealization of existing (and nowadays more and more abundant) shared-memory parallel machines.


Simulation From One PRAM Model To Another


Prefix Sum


The sequential for loop executes ⌈log n⌉ times. Hence, the overall execution time is O(log n).
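A sequential simulation makes the ⌈log n⌉ rounds concrete. This sketch assumes the standard pointer-doubling (Hillis-Steele) prefix-sum scheme with one conceptual processor per element:

```python
import math

def parallel_prefix_sum(a):
    # Simulate the PRAM prefix sum: ceil(log2 n) synchronous rounds.
    # In round t, each processor i >= 2**t adds a[i - 2**t] to a[i].
    x = list(a)
    n = len(x)
    for t in range(math.ceil(math.log2(n))):
        d = 2 ** t
        # All reads happen before any write (the 3-phase discipline),
        # so read from a snapshot of the previous round.
        prev = list(x)
        for i in range(d, n):
            x[i] = prev[i] + prev[i - d]
    return x

print(parallel_prefix_sum([1, 2, 3, 4, 5, 6, 7, 8]))
# [1, 3, 6, 10, 15, 21, 28, 36]
```

With n processors each round takes O(1), so the simulated inner loop disappears and only the ⌈log n⌉ rounds remain.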

    List Ranking Algorithm

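A minimal sketch of the standard pointer-jumping approach to list ranking, assuming the list is given as a successor array whose tail points to itself; each round corresponds to one synchronous PRAM step per node:

```python
def list_rank(succ):
    # succ[i] is the next node in the linked list; the tail points
    # to itself. rank[i] = distance from node i to the tail.
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    # O(log n) pointer-jumping rounds: each round doubles the
    # distance every pointer has already covered.
    for _ in range(max(1, n).bit_length()):
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank

# List 0 -> 2 -> 1 -> 3 (node 3 is the tail):
print(list_rank([2, 3, 1, 3]))  # [3, 1, 2, 0]
```

Both updates in a round read only the previous round's arrays, mirroring the read-before-write phases of a synchronous PRAM.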


Merging Two Sorted Lists
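A common PRAM formulation assigns one processor per element: an element's final position is its index in its own list plus its rank (found by binary search) in the other list, giving O(log n) time with n processors. A sequential sketch of that idea:

```python
import bisect

def parallel_merge(a, b):
    # Conceptually, one processor per element computes that element's
    # final position independently of all the others.
    out = [None] * (len(a) + len(b))
    for i, x in enumerate(a):
        # Position = own index + rank of x in b (binary search).
        out[i + bisect.bisect_left(b, x)] = x
    for j, y in enumerate(b):
        # bisect_right here vs bisect_left above breaks ties, so equal
        # elements from the two lists never collide on a position.
        out[j + bisect.bisect_right(a, y)] = y
    return out

print(parallel_merge([1, 4, 7], [2, 3, 7, 9]))
# [1, 2, 3, 4, 7, 7, 9]
```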


    Cost Optimal Parallel Algorithms
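A parallel algorithm is cost optimal when processors × parallel time matches the best sequential time. A classic example is prefix sum with p = n/log n processors: each processor sequentially sums a block of about log n elements, the block totals are prefix-summed in parallel, and each processor then fixes up its block, for a cost of p × O(log n) = O(n). A sequential sketch, with the block size chosen to mirror that argument:

```python
import math

def cost_optimal_prefix_sum(a):
    # Conceptual scheme: p = n / log n processors, each owning a
    # block of about log n elements.
    n = len(a)
    block = max(1, math.ceil(math.log2(n + 1)))
    blocks = [a[i:i + block] for i in range(0, n, block)]
    # Step 1: each processor computes a local sequential prefix sum
    # of its block, O(log n) time in parallel.
    for blk in blocks:
        for i in range(1, len(blk)):
            blk[i] += blk[i - 1]
    # Step 2: prefix-sum the p block totals (O(log p) on a PRAM).
    offset = 0
    offsets = []
    for blk in blocks:
        offsets.append(offset)
        offset += blk[-1]
    # Step 3: each processor adds its block's offset, O(log n) time.
    return [v + offsets[k] for k, blk in enumerate(blocks) for v in blk]

print(cost_optimal_prefix_sum([1, 2, 3, 4, 5, 6, 7, 8]))
# [1, 3, 6, 10, 15, 21, 28, 36]
```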
