Programming Languages: Parallel Programming Languages



  • Programming Languages

  • Parallel Programming Languages

  • Objectives: This lecture discusses the concept of parallel, or concurrent, programming. The major reason for investigating concurrent programming is that it provides a distinct way of conceptualizing the solution to a problem. A second reason is to take advantage of parallelism in the underlying hardware to achieve a significant speedup.

  • Concepts: A sequential program specifies the execution of a sequence of statements that comprise the program. A process is a program in execution. As such, each process has its own state, independent of the state of any other process or program.

    A process also has attached resources, such as files, memory, and so on. Part of the state of a process includes memory and the location of the current instruction being executed. Such an extended state is termed an execution context.

  • A parallel program is a program designed to have two or more execution contexts. Such a program is said to be multi-threaded, since it has more than one execution context. A parallel program is a concurrent program in which more than one execution context, or thread, is active simultaneously. Semantically, there is no difference between a concurrent program and a parallel one.

  • In a multiprocessing operating system the same program can be executed by multiple processes, each resulting in its own state or execution context, separate from the other processes. This is distinctly different from a multithreaded program in which some of the data resides simultaneously in each execution context. In a multi-threaded program, part of the program state is shared among the threads, while part of the state including the flow of control is unique to each thread.

  • Concurrent execution of a program can either occur using separate processors or be logically interleaved on a single processor using time slicing. In both Java and Ada, separate threads are applied to functions or methods, rather than being at the operation or statement level.

  • A thread can be in any one of the following states:

    1. Created: exists, but is not yet ready to run.

    2. Runnable (ready): is ready to run, but awaits getting a processor to run on.

    3. Running: is actually executing on a processor.

    4. Blocked (waiting): is either waiting to gain access to a critical section or has voluntarily given up the processor.

    5. Terminated: has been stopped and will not execute again.

  • These states and the transitions between them are pictured in the following figure: [state-transition diagram: Created → Runnable → Running → Terminated, with Running → Blocked, Blocked → Runnable, and Running → Runnable on preemption]

  • Communication: All concurrent programs involve inter-thread communication or interaction. This occurs for two reasons:

    1. Threads compete for exclusive access to shared resources, such as physical devices, files, or data.

    2. Threads communicate to exchange data.

  • In both cases it is necessary for threads to synchronize their execution, either to avoid conflict when acquiring resources or to make contact when exchanging data. A thread can communicate with other threads through:

    1. Non-local shared variables: the primary mechanism used by Java; it can also be used by Ada.

    2. Message passing: the primary mechanism used by Ada.

    3. Parameters: used by Ada in conjunction with message passing.

  • Threads normally cooperate with one another to solve a problem. Thus, even in the simplest cases, communication between threads is essential; it is unusual for a thread not to communicate with other threads. However, it is highly desirable to keep communication between threads to a minimum: this makes the code easier to understand and allows each thread to run at its own speed, without being slowed down by the coordination of communication.

  • The fundamental problem in sharing access to a variable is termed a race condition. This occurs when the function computed by a program depends on the order in which its operations occur. In the presence of such non-determinism, faults in a concurrent program may appear as transient errors: the error may or may not occur, even for the same data, depending on the execution paths of the various threads.
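The classic race condition is the lost update: two threads each read a shared variable, then each write back an incremented value, so one increment disappears. This sketch (in Python, not the lecture's Concurrent Pascal; the deliberate `sleep` is an illustrative device to force the bad interleaving) makes the transient error reproducible:

```python
import threading
import time

counter = 0  # shared variable; unsynchronized access to it is the hazard

def unsafe_increment():
    """Read-modify-write with a deliberate pause between the read and the
    write, forcing both threads to read the same stale value."""
    global counter
    local = counter      # read shared state
    time.sleep(0.05)     # the other thread reads the same stale value here
    counter = local + 1  # write back, clobbering the other thread's update

threads = [threading.Thread(target=unsafe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 1, not 2: one increment was lost
```

Without the artificial pause the same fault exists but surfaces only occasionally, which is exactly why race conditions appear as transient errors.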

  • Thus, a great skill in designing a concurrent program is the ability to express it in a form that guarantees correct program behavior in the presence of non-determinism. If a thread is unable to acquire a resource, its execution is normally suspended until the resource becomes available. Resource acquisition should normally be administered so that no thread is unduly delayed.

  • Code that accesses a shared variable or other shared resource is termed a critical section. For a thread to safely execute a critical section, there needs to be a locking mechanism such that it can test and set a lock as a single atomic instruction. Such a mechanism is used to ensure that only a single thread is executing a critical section at a time.
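As an illustrative Python sketch, a `threading.Lock` supplies the atomic test-and-set described above, so the read-modify-write critical section executes under mutual exclusion:

```python
import threading

counter = 0
lock = threading.Lock()  # provides the atomic test-and-set the text describes

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:        # enter critical section: at most one thread inside
            counter += 1  # the read-modify-write is now effectively atomic

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no updates are lost
```

The `with lock:` block acquires the lock on entry and releases it on exit, so every interleaving of the four threads produces the same result.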

  • Deadlock and Unfairness: A thread is said to be in a state of deadlock if it is waiting for an event that will never happen. Deadlock normally involves several threads, each waiting for resources held by the others. A deadlock can occur whenever two or more threads compete for resources.

  • A thread is said to be indefinitely postponed if it is delayed awaiting an event that may never occur. Such a situation can arise if the algorithm that allocates resources to requesting threads makes no allowance for a thread's waiting time. Allocating resources on a first-in-first-out basis is a simple solution that eliminates indefinite postponement.

  • Analogous to indefinite postponement is the concept of unfairness. In such a case no attempt is made to ensure that threads of equal status make equal progress in acquiring resources. A neglect of fairness in designing a concurrent system may lead to indefinite postponement, thereby rendering the system incorrect. A simple fairness criterion is that when an open choice of action is to be made, any action should be equally likely.

  • Semaphores: Basically, a semaphore is an integer variable with an associated thread-queuing mechanism, manipulated by two atomic operations:

    P(s): if s > 0 then set s = s - 1, else the calling thread is blocked (enqueued).

    V(s): if a thread T is blocked on the semaphore s, then wake up T, else set s = s + 1.

    A binary semaphore takes only the values 0 or 1; a counting semaphore takes arbitrary nonnegative values.
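As a concrete illustration, Python's `threading.Semaphore` implements the counting semaphore just described: `acquire()` plays the role of P(s) and `release()` plays the role of V(s):

```python
import threading

# threading.Semaphore is a counting semaphore:
# acquire() is P(s), release() is V(s).
s = threading.Semaphore(2)   # initialized with s = 2

s.acquire()                  # P: s goes 2 -> 1
s.acquire()                  # P: s goes 1 -> 0
# P on s = 0 would block the thread; a non-blocking probe fails instead:
got_third = s.acquire(blocking=False)
print(got_third)             # False: the semaphore is exhausted

s.release()                  # V: no thread is blocked, so s goes 0 -> 1
got_again = s.acquire(blocking=False)
print(got_again)             # True: a permit is available again
```

The non-blocking `acquire(blocking=False)` is used here only so the exhausted case can be observed without actually suspending the thread.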

  • Producer Consumer Operation: A classic example occurs in the case of producer-consumer cooperation, where the single producer task produces information for the single consumer task to consume. The producer waits (via a P) for the buffer to be empty, deposits product, then signals (via a V) that the buffer is full. The consumer waits (via a P) for the buffer to be full, then removes the product from the buffer, and signals (via a V) that the buffer is empty.

  • Using semaphores in Concurrent Pascal:

        program SimpleProducerConsumer;
        var
          buffer : string;
          full   : semaphore = 0;
          empty  : semaphore = 1;

        procedure Producer;
        var tmp : string;
        begin
          while true do begin
            produce(tmp);
            P(empty);        { begin critical section }
            buffer := tmp;
            V(full);         { end critical section }
          end;
        end;

  • Using semaphores in Concurrent Pascal (continued):

        procedure Consumer;
        var tmp : string;
        begin
          while true do begin
            P(full);         { begin critical section }
            tmp := buffer;
            V(empty);        { end critical section }
            consume(tmp);
          end;
        end;

        begin
          cobegin
            Producer; Consumer;
          coend;
        end.
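The same single-slot producer/consumer scheme can be sketched in Python (an illustrative translation, not the lecture's Concurrent Pascal): `empty` starts at 1 because the buffer begins empty, and `full` starts at 0 because there is nothing yet to consume.

```python
import threading

buffer = None                     # the single-slot shared buffer
empty = threading.Semaphore(1)    # 1: the slot starts out empty
full = threading.Semaphore(0)     # 0: nothing to consume yet
consumed = []                     # record of what the consumer removed

def producer(items):
    global buffer
    for item in items:
        empty.acquire()           # P(empty): wait for the slot to be free
        buffer = item             # critical section: deposit the product
        full.release()            # V(full): signal that the buffer is full

def consumer(n):
    for _ in range(n):
        full.acquire()            # P(full): wait for a product
        consumed.append(buffer)   # critical section: remove the product
        empty.release()           # V(empty): signal that the buffer is empty

items = list(range(5))
t1 = threading.Thread(target=producer, args=(items,))
t2 = threading.Thread(target=consumer, args=(len(items),))
t1.start(); t2.start()
t1.join(); t2.join()

print(consumed)  # [0, 1, 2, 3, 4]: each item consumed exactly once, in order
```

Because the two semaphores force the threads to strictly alternate around the single slot, no item is lost or duplicated regardless of scheduling.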

  • Monitors: Monitors provide the basis for synchronization in Java. A monitor's purpose is to encapsulate a shared variable and the operations on that variable. This encapsulation is combined with an automatic locking mechanism on the operations, so that at most one thread can be executing an operation at a time.

  • Using a monitor for the producer/consumer operation:

        monitor Buffer;
        const size = 5;
        var
          buffer : array[1..size] of string;
          in    : integer = 0;
          out   : integer = 0;
          count : integer = 0;
          nonfull  : condition;
          nonempty : condition;

  • Using a monitor for the producer/consumer operation (continued):

        procedure put(s : string);
        begin
          if count = size then wait(nonfull);
          in := in mod size + 1;
          buffer[in] := s;
          count := count + 1;
          signal(nonempty);
        end;

  • Using a monitor for the producer/consumer operation (continued):

        function get : string;
        var tmp : string;
        begin
          if count = 0 then wait(nonempty);
          out := out mod size + 1;
          tmp := buffer[out];
          count := count - 1;
          signal(nonfull);
          get := tmp;
        end;
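A monitor-style bounded buffer can be sketched in Python (again an illustrative translation): one lock stands in for the monitor's implicit lock, and two `Condition` objects built on that lock play the roles of `nonfull` and `nonempty`. Note that Python's condition variables require re-checking the guard in a `while` loop after each wake-up, whereas the classic monitor pseudocode above assumes the signaled condition still holds.

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style bounded buffer: one lock guards the shared state,
    and condition variables play the roles of nonfull/nonempty."""
    def __init__(self, size=5):
        self.size = size
        self.items = deque()
        self.lock = threading.Lock()             # the monitor's implicit lock
        self.nonfull = threading.Condition(self.lock)
        self.nonempty = threading.Condition(self.lock)

    def put(self, s):
        with self.lock:                          # enter the monitor
            while len(self.items) == self.size:  # re-check after each wake-up
                self.nonfull.wait()              # wait(nonfull)
            self.items.append(s)
            self.nonempty.notify()               # signal(nonempty)

    def get(self):
        with self.lock:
            while not self.items:
                self.nonempty.wait()             # wait(nonempty)
            tmp = self.items.popleft()
            self.nonfull.notify()                # signal(nonfull)
            return tmp

buf = BoundedBuffer(size=2)
results = []
consumer = threading.Thread(
    target=lambda: [results.append(buf.get()) for _ in range(4)])
consumer.start()
for s in ["a", "b", "c", "d"]:
    buf.put(s)   # blocks on nonfull whenever the 2-slot buffer is full
consumer.join()

print(results)  # ['a', 'b', 'c', 'd']
```

Because both conditions share the monitor's single lock, `wait()` atomically releases the lock while the thread sleeps and reacquires it before returning, just as the monitor discipline requires.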

  • Conclusion: Avoiding deadlock and achieving fairness in a concurrent system should be considered at the design level. Parallel programs synchronize their execution to avoid conflict when acquiring resources, or to make contact when exchanging data. Monitors and semaphores are equivalent mechanisms in power, in that a monitor can be implemented using semaphores and a semaphore can be implemented using a monitor.
