
Parallel programming model, language and compiler in ACA


A programming model is a collection of program abstractions that provides the programmer a simplified and transparent view of the computer hardware and software.

Parallel programming models are designed for vector computers.

Fundamental issues in parallel programming include process creation, suspension, reactivation, and termination.

Five models have been designed to exploit parallelism:

Shared-variable model.

Message-passing model.

Data parallel model.

Object-oriented model.

Functional and logic model.

In the shared-variable model, parallelism depends on how IPC (interprocess communication) is implemented.

IPC is implemented in parallel programming in two ways:

IPC using shared variables.

IPC using message passing.

Key issues in IPC with shared variables (a minimal critical-section sketch follows this list):

Critical sections.

Memory consistency.

Atomicity of memory operations.

Fast synchronization.

Shared data structures.
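
A minimal critical-section sketch in Python (an assumed illustration, not part of the original slides): a lock enforces mutual exclusion, so the read-modify-write on the shared variable is atomic and no increments are lost.

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n):
        global counter
        for _ in range(n):
            with lock:          # enter critical section (mutual exclusion)
                counter += 1    # protected read-modify-write on shared data

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)              # always 400000 while the lock is held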

Two processes communicate with each other by passing messages through a network.

The delay caused by message passing is much longer than in the shared-variable model within the same memory.

Two message-passing approaches are introduced here.

Synchronous message passing (a small rendezvous sketch follows this list):

It synchronizes the sender and receiver processes in time and space, just like a telephone call.

No shared memory.

No need for mutual exclusion.

No buffers are used in the communication channel.

It can be blocked by the channel being busy.
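
A rough rendezvous sketch in Python (an assumed illustration; queue.Queue.join only approximates an unbuffered channel): the sender blocks until the receiver has actually consumed the message, so both sides meet in time.

    import queue
    import threading

    chan = queue.Queue()

    def sender():
        chan.put("hello")   # deposit the message
        chan.join()         # block until the receiver has consumed it
        print("sender: message delivered")

    def receiver():
        msg = chan.get()    # block until a message arrives
        print("receiver got:", msg)
        chan.task_done()    # release the waiting sender

    r = threading.Thread(target=receiver)
    s = threading.Thread(target=sender)
    r.start(); s.start()
    r.join(); s.join()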

Asynchronous message passing (a buffered-channel sketch follows this list):

It does not need to synchronize the sender and receiver in time or space.

Non-blocking communication can be achieved.

Buffers are used to hold messages along the path of the connecting channel.
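
A minimal buffered-channel sketch in Python (an assumed illustration): the bounded queue holds messages along the channel, so the sender proceeds without waiting for the receiver unless the buffer is full.

    import queue
    import threading

    chan = queue.Queue(maxsize=4)      # buffer along the connecting channel

    def sender():
        for i in range(8):
            chan.put(f"msg {i}")       # returns as soon as the message is buffered
        chan.put(None)                 # sentinel: end of stream

    def receiver():
        while True:
            msg = chan.get()
            if msg is None:
                break
            print("received:", msg)

    s = threading.Thread(target=sender)
    r = threading.Thread(target=receiver)
    s.start(); r.start()
    s.join(); r.join()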

Message-passing programming is gradually changing once the virtual memory from all nodes is combined.

The data-parallel model requires the use of pre-distributed data sets.

Interconnected data structures are also needed to facilitate data exchange operations.

It emphasizes local computation and data routing operations such as permutation, replication, reduction, and parallel prefix; a small reduction and parallel-prefix sketch follows below.

It can be implemented on either SIMD or SPMD multicomputers, depending on the grain size of the program.
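
A small sketch of two of these routing operations, reduction and parallel prefix (an assumed illustration: the lockstep steps run sequentially here, whereas an SIMD machine would apply each step to all elements at once).

    def parallel_prefix(xs):
        # Hillis-Steele inclusive scan: log2(n) lockstep steps; after the
        # step with distance d, element i holds the sum of xs[i-2d+1..i].
        ys = list(xs)
        d = 1
        while d < len(ys):
            ys = [ys[i] + ys[i - d] if i >= d else ys[i] for i in range(len(ys))]
            d *= 2
        return ys

    data = [3, 1, 4, 1, 5, 9, 2, 6]
    print(parallel_prefix(data))       # [3, 4, 8, 9, 14, 23, 25, 31]
    print(parallel_prefix(data)[-1])   # reduction: the total, 31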

Objects are created and manipulated dynamically.

Processing is performed using objects.

Concurrent programming models are built up from low-level objects such as processes, queues, and semaphores.

C-OOP achieves parallelism using three methods (a divide-and-conquer sketch follows this list):

Pipeline concurrency.

Divide-and-conquer concurrency.

Cooperative problem solving.
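
A minimal divide-and-conquer concurrency sketch in Python (an assumed illustration, not from the slides): the problem is split in half, the halves are solved by separate worker processes, and the partial results are combined.

    from concurrent.futures import ProcessPoolExecutor

    def total(xs):
        return sum(xs)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        mid = len(data) // 2
        with ProcessPoolExecutor(max_workers=2) as pool:
            left = pool.submit(total, data[:mid])     # solve left half concurrently
            right = pool.submit(total, data[mid:])    # solve right half concurrently
            print(left.result() + right.result())     # combine the sub-results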

Two language-oriented approaches to parallel programming have been proposed:

Functional programming models such as LISP, SISAL, and Strand 88.

Logic programming models such as Prolog.

Based on predicate logic, logic programming is

suitable for solving large database queries.

Language features for parallel programming are divided into six categories according to functionality.

Optimization features

These are used for program restructuring and compilation directives.

They convert sequentially coded programs into parallel code.

Automated parallelization.

Semi-automated parallelization.

Availability features

These are used to enhance user-friendliness.

They make the language portable to a large class of parallel computers.

Scalability.

Compatibility.

Portability.

Synchronization/communication features

Shared variables for IPC.

Single-assignment languages.

Send/receive for message passing.

Logical shared memory, such as the tuple space in Linda (a tuple-space sketch follows this list).

Remote procedure call.

Dataflow languages such as Id.
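
A toy tuple-space sketch in the spirit of Linda (an assumed illustration; real Linda provides out/in/rd as language primitives, and the class here is hypothetical): processes communicate through a logically shared pool of tuples rather than direct channels.

    import threading

    class TupleSpace:
        def __init__(self):
            self._tuples = []
            self._cond = threading.Condition()

        def out(self, tup):
            # Deposit a tuple into the logically shared memory.
            with self._cond:
                self._tuples.append(tup)
                self._cond.notify_all()

        def in_(self, match):
            # Withdraw (blocking) the first tuple satisfying `match`.
            with self._cond:
                while True:
                    for t in self._tuples:
                        if match(t):
                            self._tuples.remove(t)
                            return t
                    self._cond.wait()

    space = TupleSpace()
    space.out(("sum", 42))
    print(space.in_(lambda t: t[0] == "sum"))   # ('sum', 42)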


Control of parallelism

Coarse, medium or fine grain.

Explicit versus implicit parallelism.

Loop parallelism across iterations.

Shared task queue.

Divide and conquer paradigm.

Shared abstract data type.

Data parallelism features

These specify how data are accessed and distributed.

Runtime automatic decomposition.

Mapping specification.

Virtual processor support.

Direct access to shared data.

Process management features

These features are needed to support the efficient creation of parallel processes.

Implementation of multithreading or multitasking.

Dynamic process creation at runtime.

Automatic load balancing.

Lightweight processes.

Special language constructs and data array expressions are used for exploiting parallelism in programs.

The first is FORTRAN 90 array notation (an analogous sketch follows).
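
An analogous whole-array expression in Python using NumPy (an assumed analogue, not Fortran itself): the expression applies elementwise without an explicit loop, just as Fortran 90 array notation does.

    import numpy as np

    a = np.arange(8)
    b = np.arange(8, 16)
    c = a + 2 * b        # whole-array expression, like Fortran 90's C = A + 2*B
    print(c)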

Parallel flow control is achieved using Doacross- and Doall-style constructs found in parallel Fortran dialects.

The FORK and JOIN method is also used (a fork/join sketch follows below).
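
A minimal fork/join sketch using Python threads (an assumed analogue of the FORK and JOIN constructs named above): the parent forks concurrent children, then joins to wait for their termination.

    import threading

    def worker(i):
        print(f"child {i} running")

    # FORK: spawn concurrent child activities.
    children = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in children:
        t.start()

    # JOIN: the parent waits for every child to terminate.
    for t in children:
        t.join()
    print("all children joined")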

The role of the compiler is to remove the burden of program optimization and code generation from the programmer.

A parallelizing compiler consists of three major phases (a toy dependence-test sketch follows this list):

Flow analysis.

Optimization.

Code generation.
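
A toy version of one test used during flow analysis, the GCD dependence test (an assumed illustration, not from the slides): writes to A[a*i + b] and reads of A[c*i + d] can touch the same element only if gcd(a, c) divides (d - b); when it does not, the loop's iterations are independent and may run in parallel.

    from math import gcd

    def may_depend(a, b, c, d):
        # Integer solutions to a*i1 + b = c*i2 + d exist
        # only if gcd(a, c) divides (d - b).
        return (d - b) % gcd(a, c) == 0

    # A[2*i] is written and A[2*i + 1] is read: gcd(2, 2) = 2 does not
    # divide 1, so the iterations are independent and parallelizable.
    print(may_depend(2, 0, 2, 1))   # False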

“Compilation phases in parallel code generation”

THANK YOU