
CS 471 - Lecture 3

Process Synchronization & Deadlock

Ch. 6 (6.1-6.7), 7

Fall 2009

Process Synchronization

Race Conditions
The Critical-Section Problem
Synchronization Hardware
Classical Problems of Synchronization
Semaphores
Monitors
Deadlock

Concurrent Access to Shared Data

Suppose that two processes A and B have access to a shared variable “Balance”.

   PROCESS A:                   PROCESS B:
   Balance = Balance - 100      Balance = Balance + 200

Further, assume that Process A and Process B are executing concurrently in a time-shared, multi-programmed system.

Concurrent Access to Shared Data

The statement “Balance = Balance – 100” is implemented by several machine-level instructions, such as:

   A1. lw  $t0, BALANCE
   A2. sub $t0, $t0, 100
   A3. sw  $t0, BALANCE

Similarly, “Balance = Balance + 200” can be implemented by the following:

   B1. lw  $t1, BALANCE
   B2. add $t1, $t1, 200
   B3. sw  $t1, BALANCE

Race Conditions

Scenario 1:

   A1. lw  $t0, BALANCE
   A2. sub $t0, $t0, 100
   A3. sw  $t0, BALANCE
       --- context switch ---
   B1. lw  $t1, BALANCE
   B2. add $t1, $t1, 200
   B3. sw  $t1, BALANCE

   Result: Balance is increased by 100 (both updates take effect).

Scenario 2:

   A1. lw  $t0, BALANCE
   A2. sub $t0, $t0, 100
       --- context switch ---
   B1. lw  $t1, BALANCE
   B2. add $t1, $t1, 200
   B3. sw  $t1, BALANCE
       --- context switch ---
   A3. sw  $t0, BALANCE

   Result: Balance is decreased by 100 (B’s deposit is lost).

Observe: in a time-shared system the exact instruction execution order cannot be predicted.

Race Conditions

Situations where multiple processes read and write shared data and the final result depends on exactly who runs when are called race conditions.

Race conditions are a serious problem for most concurrent systems that use shared variables. Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.

We must make sure that some high-level code sections are executed atomically.

An atomic operation completes in its entirety without interruption by any other potentially conflicting process.
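The following is a minimal C sketch (not from the slides) that exhibits this race: two POSIX threads repeatedly update a shared balance with no synchronization. The starting value and iteration count are arbitrary choices for illustration.

/* Compile with: gcc race.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static int balance = 1000;              /* shared variable */

static void *withdraw(void *arg) {
    for (int i = 0; i < 100000; i++)
        balance = balance - 100;        /* load, subtract, store: not atomic */
    return NULL;
}

static void *deposit(void *arg) {
    for (int i = 0; i < 100000; i++)
        balance = balance + 100;        /* interleaves with withdraw */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, withdraw, NULL);
    pthread_create(&b, NULL, deposit, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("balance = %d\n", balance);  /* expected 1000; lost updates may leave another value */
    return 0;
}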

The Critical-Section Problem

n processes all compete to use some shared data.

Each process has a code segment, called its critical section (critical region), in which the shared data is accessed.

Problem: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

The executions of the critical sections by the processes must be mutually exclusive in time.

Mutual Exclusion

Solving the Critical-Section Problem

Any solution to the problem must satisfy three conditions:

1. Mutual Exclusion: no two processes may be simultaneously inside the same critical section.

2. Bounded Waiting: no process should have to wait forever to enter a critical section.

3. Progress: no process executing a code segment unrelated to a given critical section can block another process trying to enter the same critical section.

Arbitrary Speed: in addition, no assumption can be made about the relative speed of different processes (though all processes have a non-zero speed).

Attempts to Solve Mutual Exclusion

do {
   entry section
      critical section
   exit section
      remainder section
} while (1);

General structure as above. Two processes, P1 and P2. Processes may share common variables. Assume each statement is executed atomically (is this realistic?).

Algorithm 1

Shared variables:
• int turn; initially turn = 0
• turn == i means Pi can enter its critical section

Process Pi:

do {
   while (turn != i) ;   // busy waiting; may spin forever
      critical section
   turn = j;
      remainder section
} while (1);

Satisfies mutual exclusion, but not progress (strict alternation), and relies on busy waiting.

Algorithm 2

Shared variables:
• boolean flag[2]; initially flag[0] = flag[1] = false
• flag[i] = true means Pi wants to enter its critical region

Process Pi:

do {
   flag[i] = true;
   while (flag[j]) ;   // Pi and Pj may both end up waiting here
      critical section
   flag[i] = false;
      remainder section
} while (1);

Satisfies mutual exclusion, but not progress.

Algorithm 3: Peterson’s Solution

Combine the shared variables of the previous attempts.

Process Pi:

do {
   flag[i] = true;
   turn = j;
   while (flag[j] && turn == j) ;
      critical section
   flag[i] = false;
      remainder section
} while (1);

Meets all three requirements (assuming each individual statement is executed atomically): Pi enters its critical section only when flag[j] == false or turn == i.
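Below is a sketch of Peterson’s algorithm as a lock/unlock pair in C11 (an illustration, not part of the slides). Sequentially consistent atomics stand in for the slides’ assumption that each statement executes atomically; with plain variables a modern compiler or CPU may reorder the loads and stores and break the algorithm.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool flag[2];
static atomic_int  turn;

void lock(int i) {                 /* entry section for thread i (0 or 1) */
    int j = 1 - i;
    atomic_store(&flag[i], true);  /* I want to enter */
    atomic_store(&turn, j);        /* but you go first if we tie */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                          /* busy wait */
}

void unlock(int i) {               /* exit section for thread i */
    atomic_store(&flag[i], false);
}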

Synchronization Hardware

Many machines provide special hardware instructions that help to achieve mutual exclusion.

The TestAndSet (TAS) instruction tests and modifies the content of a memory word atomically.

TAS R1, LOCK reads the contents of the memory word LOCK into register R1, and stores a nonzero value (e.g. 1) at the memory word LOCK (atomically).

• If LOCK = 0, calling TAS R1, LOCK will set R1 to 0 and set LOCK to 1.
• If LOCK = 1, calling TAS R1, LOCK will set R1 to 1 and leave LOCK set to 1.


Mutual Exclusion with Test-and-Set

Initially, shared memory word LOCK = 0.

Process Pi

do {

entry_section: TAS R1, LOCK

CMP R1, #0 /* was LOCK = 0? */

JNE entry_section

critical section

MOVE LOCK, #0 /* exit section */

remainder section

} while(1);
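For comparison, here is a C11 sketch of the same entry/exit protocol: atomic_flag_test_and_set() plays the role of the TAS instruction, atomically setting the flag and returning its previous value (the type and function names come from the standard <stdatomic.h> header, not from the slides).

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* LOCK = 0 (clear) */

void enter_section(void) {
    while (atomic_flag_test_and_set(&lock))   /* TAS; retry while the old value was 1 */
        ;                                     /* busy wait (spin) */
}

void exit_section(void) {
    atomic_flag_clear(&lock);                 /* LOCK = 0 */
}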


Classical Problems of Synchronization

Producer-Consumer Problem

Readers-Writers Problem

Dining-Philosophers Problem

We will develop solutions using semaphores and monitors as synchronization tools (no busy waiting).

Producer/Consumer Problem

(Figure: a Producer and a Consumer communicating through a bounded-size buffer of N slots.)

Producer and Consumer execute concurrently, but only one should be accessing the buffer at a time.
The Producer puts items into the buffer area but should not be allowed to put items into a full buffer.
The Consumer consumes items from the buffer but cannot remove information from an empty buffer.


Readers-Writers Problem

A data object (e.g. a file) is to be shared among several concurrent processes.

A writer process must have exclusive access to the data object.

Multiple reader processes may access the shared data simultaneously without a problem

Several variations on this general problem


Dining-Philosophers Problem

Five philosophers share a common circular table. There are five chopsticks and a bowl of rice (in the middle). When a philosopher gets hungry, he tries to pick up the closest chopsticks.

A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in use. When done, he puts down both of his chopsticks, one after the other.

Synchronization

Most synchronization can be regarded as either:

• Mutual exclusion (competition): making sure that only one process at a time is executing a CRITICAL SECTION (touching a variable or data structure, for example), or

• Condition synchronization (cooperation): making sure that a given process does not proceed until some condition holds (e.g. that a variable contains a given value).

The sample problems will illustrate this.

Semaphores

A language-level synchronization construct introduced by E. W. Dijkstra (1965).

Motivation: avoid busy waiting by blocking a process’ execution until some condition is satisfied.

Each semaphore has an integer value and a queue. Two operations are defined on a semaphore variable s:

• wait(s)   (also called P(s) or down(s))
• signal(s) (also called V(s) or up(s))

We will assume that these are the only user-visible operations on a semaphore. Semaphores are typically available in thread implementations.

Semaphore Operations

Conceptually a semaphore has an integer value greater than or equal to 0.

   wait(s):   wait/block until s.value > 0, then s.value-- ;   /* executed atomically */
   signal(s): s.value++ ;                                      /* executed atomically */

A process executing the wait operation on a semaphore with value 0 is blocked (put on the semaphore’s queue) until the semaphore’s value becomes greater than 0; there is no busy waiting.

Semaphore Operations (cont.)

If multiple processes are blocked on the same semaphore s, only one of them will be awakened when another process performs the signal(s) operation. Who will have priority?

Binary semaphores only have value 0 or 1. Counting semaphores can have any non-negative value. (We will see some of these later.)

Critical Section Problem with Semaphores

Shared data:

   semaphore mutex;   /* initially mutex = 1 */

Process Pi:

do {
   wait(mutex);
      critical section
   signal(mutex);
      remainder section
} while (1);

Re-visiting the “Simultaneous Balance Update” Problem

Shared data:

   int Balance;
   semaphore mutex;   // initially mutex = 1

   Process A:                      Process B:

   …                               …
   wait(mutex);                    wait(mutex);
   Balance = Balance – 100;        Balance = Balance + 200;
   signal(mutex);                  signal(mutex);
   …                               …
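A compilable sketch of this slide using a POSIX unnamed semaphore (sem_t) as the binary semaphore mutex; the required initialization call is shown in a comment.

#include <semaphore.h>

static int balance;
static sem_t mutex;               /* initialize once: sem_init(&mutex, 0, 1); */

void process_A(void) {
    sem_wait(&mutex);             /* wait(mutex) */
    balance = balance - 100;      /* critical section */
    sem_post(&mutex);             /* signal(mutex) */
}

void process_B(void) {
    sem_wait(&mutex);
    balance = balance + 200;
    sem_post(&mutex);
}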

Semaphore as a General Synchronization Tool

Suppose we need to execute statement B in Pj only after statement A has executed in Pi.

Use a semaphore flag initialized to 0. Code:

   Pi:                Pj:
      A                  wait(flag);
      signal(flag);      B
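The same ordering idiom sketched with a POSIX semaphore; do_A() and do_B() are hypothetical placeholders for the statements A and B.

#include <semaphore.h>

void do_A(void);                  /* hypothetical stub for statement A */
void do_B(void);                  /* hypothetical stub for statement B */

static sem_t flag;                /* initialize once: sem_init(&flag, 0, 0); */

void thread_i(void) {
    do_A();                       /* A */
    sem_post(&flag);              /* signal(flag) */
}

void thread_j(void) {
    sem_wait(&flag);              /* wait(flag): blocks until A is done */
    do_B();                       /* B */
}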

Returning to Producer/Consumer

(Figure: Producer and Consumer sharing a bounded-size buffer of N slots.)

Producer and Consumer execute concurrently, but only one should be accessing the buffer at a time. The Producer puts items into the buffer area but should not be allowed to put items into a full buffer; the Consumer consumes items from the buffer but cannot remove information from an empty buffer.

Producer-Consumer Problem (Cont.)

Make sure that:

1. The producer and the consumer do not access the buffer area and related variables at the same time (competition).

2. No item is made available to the consumer if all the buffer slots are empty (cooperation).

3. No slot in the buffer is made available to the producer if all the buffer slots are full (cooperation).

Producer-Consumer Problem

Shared data:

   semaphore full, empty, mutex;

Initially:

   full  = 0   /* the number of full buffer slots (counting semaphore) */
   empty = n   /* the number of empty buffer slots (counting semaphore) */
   mutex = 1   /* controls access to the buffer pool (binary semaphore) */

Producer Process

do {
   …
   produce an item in p
   …
   wait(empty);
   wait(mutex);
   …
   add p to buffer
   …
   signal(mutex);
   signal(full);
} while (1);

Consumer Process

do {
   wait(full);
   wait(mutex);
   …
   remove an item from buffer to c
   …
   signal(mutex);
   signal(empty);
   …
   consume the item in c
   …
} while (1);
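Putting the two loops together, here is one possible self-contained C sketch with POSIX threads and semaphores. The buffer size N, the int item type, and the fixed number of iterations are illustrative choices, not part of the slides.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8
static int buffer[N];
static int in = 0, out = 0;               /* insert/remove positions */
static sem_t empty, full, mutex;          /* counting, counting, binary */

static void *producer(void *arg) {
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty);                 /* wait for a free slot */
        sem_wait(&mutex);                 /* lock the buffer */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);                  /* one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int k = 0; k < 100; k++) {
        sem_wait(&full);                  /* wait for an item */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);                 /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}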

Readers-Writers Problem

A data object (e.g. a file) is to be shared among several concurrent processes. A writer process must have exclusive access to the data object. Multiple reader processes may access the shared data simultaneously without a problem.

Shared data:

   semaphore mutex, wrt;
   int readcount;

Initially:

   mutex = 1, readcount = 0, wrt = 1;

Readers-Writers Problem: Writer Process

   wait(wrt);
   …
   writing is performed
   …
   signal(wrt);

Readers-Writers Problem: Reader Process

   wait(mutex);
   readcount++;
   if (readcount == 1)
      wait(wrt);
   signal(mutex);
   …
   reading is performed
   …
   wait(mutex);
   readcount--;
   if (readcount == 0)
      signal(wrt);
   signal(mutex);

Is this solution o.k.?
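A C sketch of the two protocols above with POSIX semaphores; read_data() and write_data() are hypothetical stubs standing in for the actual reading and writing.

#include <semaphore.h>

void read_data(void);      /* hypothetical stub */
void write_data(void);     /* hypothetical stub */

static sem_t mutex;        /* protects readcount; sem_init(&mutex, 0, 1); */
static sem_t wrt;          /* writers' exclusion;  sem_init(&wrt, 0, 1);   */
static int readcount = 0;

void writer(void) {
    sem_wait(&wrt);
    write_data();          /* writing is performed */
    sem_post(&wrt);
}

void reader(void) {
    sem_wait(&mutex);
    if (++readcount == 1)  /* first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    read_data();           /* reading is performed */

    sem_wait(&mutex);
    if (--readcount == 0)  /* last reader lets writers back in */
        sem_post(&wrt);
    sem_post(&mutex);
}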


Dining-Philosophers Problem

Shared data

semaphore chopstick[5];

Initially all semaphore values are 1

Five philosophers share a common circular table. There are five chopsticks and a bowl of rice (in the middle). When a philosopher gets hungry, he tries to pick up the closest chopsticks.

A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in use. When done, he puts down both of his chopsticks, one after the other.

Dining-Philosophers Problem

Philosopher i:

do {
   wait(chopstick[i]);
   wait(chopstick[(i+1) % 5]);
   …
   eat
   …
   signal(chopstick[i]);
   signal(chopstick[(i+1) % 5]);
   …
   think
   …
} while (1);

Is this solution o.k.?

Semaphores

It is generally assumed that semaphores are fair, in the sense that processes complete semaphore operations in the same order they start them.

Problems with semaphores:

• They are pretty low-level. When using them for mutual exclusion, for example (the most common usage), it is easy to forget a wait or a release, especially when they do not occur in strictly matched pairs.

• Their use is scattered. If you want to change how processes synchronize access to a data structure, you have to find all the places in the code where they touch that structure, which is difficult and error-prone.

• Order matters. What could happen if we switch the two wait instructions in the consumer (or the producer)?

Deadlock and Starvation

Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.

Let S and Q be two semaphores initialized to 1.

   P0:              P1:
   wait(S);         wait(Q);
   wait(Q);         wait(S);
     …                …
   signal(S);       signal(Q);
   signal(Q);       signal(S);

Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
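Returning to the S and Q example above, here is a C sketch with POSIX semaphores: if P0 acquires S and P1 acquires Q before either reaches its second wait, both block forever.

#include <semaphore.h>

static sem_t S, Q;                 /* sem_init(&S, 0, 1); sem_init(&Q, 0, 1); */

void *P0(void *arg) {
    sem_wait(&S);
    sem_wait(&Q);                  /* may block forever if P1 already holds Q */
    /* ... use both resources ... */
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

void *P1(void *arg) {
    sem_wait(&Q);
    sem_wait(&S);                  /* may block forever if P0 already holds S */
    /* ... */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}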

High-Level Synchronization Mechanisms

Several high-level mechanisms that are easier to use have been proposed:
• Monitors
• Critical Regions
• Read/Write locks

We will study monitors (Java and Pthreads provide synchronization mechanisms based on monitors).

Monitors were suggested by Dijkstra, developed more thoroughly by Brinch Hansen, and formalized nicely by Tony Hoare in the early 1970s.

Several parallel programming languages have incorporated some version of monitors as their fundamental synchronization mechanism.

Monitors

A monitor is a shared object with operations, internal state, and a number of condition queues. Only one operation of a given monitor may be active at a given point in time.

A process that calls a busy monitor is delayed until the monitor is free.

• On behalf of its calling process, any operation may suspend itself by waiting on a condition.
• An operation may also signal a condition, in which case one of the waiting processes is resumed, usually the one that waited first.

Monitors

Mutual exclusion (competition) synchronization with monitors:
• Access to the shared data in the monitor is limited by the implementation to a single process at a time; therefore, mutually exclusive access is inherent in the semantic definition of the monitor.
• Multiple calls are queued.

Condition (cooperation) synchronization with monitors:
• delay takes a queue type parameter; it puts the process that calls it in the specified queue and removes its exclusive access rights to the monitor’s data structure.
• continue takes a queue type parameter; it disconnects the caller from the monitor, thus freeing the monitor for use by another process. It also takes a process from the parameter queue (if the queue isn’t empty) and starts it.

Shared Data with Monitors

Monitor SharedData
{
   int balance;

   void init(int startValue)      { balance = startValue; }
   void updateBalance(int amount) { balance += amount; }
   int  getBalance()              { return balance; }
}

Monitors

To allow a process to wait within the monitor, a condition variable must be declared, as in:

   condition x, y;

A condition variable can only be used with the operations wait and signal.

• The operation x.wait(); means that the process invoking it is suspended until another process invokes x.signal();
• The x.signal operation resumes exactly one suspended process on condition variable x. If no process is suspended on condition variable x, then the signal operation has no effect.

The wait and signal operations of monitors are not the same as the semaphore wait and signal operations!

Monitor with Condition Variables

When a process P “signals” to wake up a process Q that was waiting on a condition, potentially both of them can be active. However, monitor rules require that at most one process be active within the monitor. Who will go first?

• Signal-and-wait: P waits until Q leaves the monitor (or until Q waits for another condition).
• Signal-and-continue: Q waits until P leaves the monitor (or until P waits for another condition).
• Signal-and-leave: P has to leave the monitor after signaling (Concurrent Pascal).

The design decision differs among programming languages.

Monitor with Condition Variables (figure)

Producer-Consumer Problem with Monitors

Monitor Producer-consumer
{
   condition full, empty;
   int count;

   void insert(int item);   // defined on the following slide
   int  remove();           // defined on the following slide

   void init() { count = 0; }
}

Producer-Consumer Problem with Monitors (Cont.)

void insert(int item)
{
   if (count == N) full.wait();
   insert_item(item);        // add the new item
   count++;
   if (count == 1) empty.signal();
}

int remove()
{
   int m;
   if (count == 0) empty.wait();
   m = remove_item();        // retrieve one item
   count--;
   if (count == N-1) full.signal();
   return m;
}

Producer-Consumer Problem with Monitors (Cont.)

void producer() {            // producer process
   while (1) {
      item = Produce_Item();
      Producer-consumer.insert(item);
   }
}

void consumer() {            // consumer process
   while (1) {
      item = Producer-consumer.remove();
      consume_item(item);
   }
}
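For reference, one way to express this monitor in C is with a pthread mutex as the implicit monitor lock and condition variables for full/empty (an illustration, not the slides’ notation). Pthread condition variables follow signal-and-continue semantics, so the waits below use while where the slides use if; N, the circular-buffer bookkeeping, and the pc_insert/pc_remove names are illustrative.

#include <pthread.h>

#define N 8
static int buffer[N], count = 0, in = 0, out = 0;
static pthread_mutex_t monlock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

void pc_insert(int item) {                    /* the monitor's insert operation */
    pthread_mutex_lock(&monlock);             /* enter the monitor */
    while (count == N)
        pthread_cond_wait(&not_full, &monlock);
    buffer[in] = item;
    in = (in + 1) % N;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&monlock);           /* leave the monitor */
}

int pc_remove(void) {                         /* the monitor's remove operation */
    pthread_mutex_lock(&monlock);
    while (count == 0)
        pthread_cond_wait(&not_empty, &monlock);
    int m = buffer[out];
    out = (out + 1) % N;
    count--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&monlock);
    return m;
}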

Monitors

Evaluation of monitors:

• Strong support for mutual exclusion synchronization.
• Support for condition synchronization is very similar to that of semaphores, so it has the same problems.
• Building a correct monitor requires that one think about the “monitor invariant”, a predicate that captures the notion “the state of the monitor is consistent.” It needs to be true initially and at monitor exit, and it also needs to be true at every wait statement.
• Monitors can be implemented in terms of semaphores, so semaphores can do anything monitors can.
• The inverse is also true; it is straightforward to build semaphores from monitors.

Dining-Philosophers Problem with Monitors

monitor dp
{
   enum {thinking, hungry, eating} state[5];
   condition self[5];

   void pickup(int i);    // following slides
   void putdown(int i);   // following slides
   void test(int i);      // following slides

   void init() {
      for (int i = 0; i < 5; i++)
         state[i] = thinking;
   }
}

Each philosopher i will perform:

   dp.pickup(i);
   … eat …
   dp.putdown(i);

Solving the Dining-Philosophers Problem with Monitors

void pickup(int i) {
   state[i] = hungry;
   test(i);
   if (state[i] != eating)
      self[i].wait();
}

void putdown(int i) {
   state[i] = thinking;
   // test left and right neighbors; wake them up if possible
   test((i+4) % 5);
   test((i+1) % 5);
}

void test(int i) {
   if (   (state[(i + 4) % 5] != eating)
       && (state[i] == hungry)
       && (state[(i + 1) % 5] != eating) ) {
      state[i] = eating;
      self[i].signal();
   }
}

Is this solution o.k.?


Dining Philosophers problem (Cont.)

Philosopher1 arrives and starts eating

Philosopher2 arrives; he is suspended

Philosopher3 arrives and starts eating

Philosopher1 puts down the chopsticks, wakes up Philosopher2 (suspended once again)

Philosopher1 re-arrives, and starts eating

Philosopher3 puts down the chopsticks, wakes up Philosopher2 (suspended once again)

Philosopher3 re-arrives, and starts eating

……


Deadlock

In a multi-programmed environment, processes/threads compete for (exclusive) use of a finite set of resources


Necessary Conditions for Deadlock

1. Mutual exclusion – At least one resource must be held in a non-sharable mode

2. Hold and wait – A process holds at least one resource while waiting to acquire additional resources that are being held by other processes

3. No preemption – a resource is only released voluntarily from a process

4. Circular wait: A set of processes {P0,…,Pn} such that Pi is waiting for a resource held by Pi+1 and Pn is waiting for a resource held by P0

Resource-Allocation Graphs

Legend (figure):
• Process Pi
• Resource type Rj with 4 instances
• Request edge Pi → Rj: Pi requests an instance of Rj
• Assignment edge Rj → Pi: Pi is holding an instance of Rj

Resource-Allocation Graph (figure: a graph with request and assignment edges in which the processes are deadlocked)

Graph With a Cycle But No Deadlock (figure)


Handling Deadlock

Deadlock prevention – ensure that at least one of the necessary conditions cannot occur

Deadlock avoidance – analysis determines whether a new request could lead toward a deadlock situation

Deadlock detection – detect and recover from any deadlocks that occur

Deadlock Prevention

Prevent deadlocks by ensuring that one of the required conditions cannot occur:

• Mutual exclusion – this condition typically cannot be removed for non-sharable resources.
• Hold and wait – elimination requires that processes request and acquire all their resources in a single atomic action.
• No preemption – add preemption: if a process must wait for a resource, resources it currently holds may be preempted when some other process needs them.
• Circular wait – require processes to acquire resources in some fixed order (see the sketch below).
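A tiny C sketch of the circular-wait rule: both mutexes are always acquired in the same order (A before B), so the wait-for cycle from the earlier semaphore example cannot form. The names A and B are illustrative.

#include <pthread.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;   /* ordering rule: A first */
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;   /* then B */

void worker(void) {
    pthread_mutex_lock(&A);     /* every thread follows the same acquisition order */
    pthread_mutex_lock(&B);
    /* ... use both resources ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
}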

Deadlock Avoidance

When a process requests an available resource, the system must decide whether immediate allocation leaves the system in a safe state.

The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes in the system such that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj with j < i.

If a system is in a safe state ⇒ no deadlocks.
If a system is in an unsafe state ⇒ possibility of deadlock.
Avoidance ⇒ ensure that the system will never enter an unsafe state.

Example: Deadlock Avoidance

The system has 12 tape drives and 3 processes:

            Max Needs   Currently Allocated
   P0          10              5
   P1           4              2
   P2           9              2

   3 free drives

Safe state, with safe sequence <P1, P0, P2>:
• P1 only needs 2 more drives, which the 3 remaining can supply.
• P0 needs 5 more, covered by the 3 remaining plus the drives P1 releases.
• P2 needs 7 more, covered once P1 and then P0 release their drives.

Example (cont’d)

The system has 12 tape drives and 3 processes. Now P2 requests 1 more drive; are we still in a safe state if we grant this request?

            Max Needs   Currently Allocated
   P0          10              5
   P1           4              2
   P2           9              3

   2 free drives

Unsafe state – only P1 can be granted its full request, and once it terminates there are still not enough drives for P0 and P2 ⇒ potential deadlock.

The avoidance algorithm would therefore make P2 wait.

Avoidance Algorithms

• Single instance of each resource type: use a resource-allocation graph.
• Multiple instances of a resource type: use the banker’s algorithm.

Resource-Allocation Graph Scheme

Claim edge Pi → Rj indicates that process Pi may request resource Rj; it is represented by a dashed line.

A claim edge converts to a request edge when the process requests the resource.

A request edge converts to an assignment edge when the resource is allocated to the process.

When a resource is released by a process, the assignment edge reconverts to a claim edge.

Resources must be claimed a priori in the system.

Resource-Allocation Graph

(Figure) P1 has been assigned R1; P2 has requested R1; both P1 and P2 may request R2 in the future (claim edges).

A request can be granted only if converting the request edge to an assignment edge does not result in the formation of a cycle in the resource-allocation graph.

Unsafe State in a Resource-Allocation Graph (figure)

Banker’s Algorithm

• Handles multiple instances of each resource type.
• Each process must claim its maximum use a priori.
• When a process requests a resource, it may have to wait.
• When a process gets all its resources, it must return them in a finite amount of time.

Banker’s Algorithm

The system has m = 3 resource types (A = 10, B = 5, C = 7 instances) and n = 5 processes:

         Allocation    Max
         A  B  C       A  B  C
   P0    0  1  0       7  5  3
   P1    2  0  0       3  2  2
   P2    3  0  2       9  0  2
   P3    2  1  1       2  2  2
   P4    0  0  2       4  3  3

Is the system in a safe state?

Data Structures for the Banker’s Algorithm

Let n = the number of processes and m = the number of resource types.

• Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
• Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
• Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
• Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.

   Need[i,j] = Max[i,j] – Allocation[i,j]

Banker’s Algorithm (cont’d)

The system has 3 resource types (A = 10, B = 5, C = 7) and 5 processes:

         Allocation    Max        Need
         A  B  C       A  B  C    A  B  C
   P0    0  1  0       7  5  3    7  4  3
   P1    2  0  0       3  2  2    1  2  2
   P2    3  0  2       9  0  2    6  0  0
   P3    2  1  1       2  2  2    0  1  1
   P4    0  0  2       4  3  3    4  3  1

   Available: 3 3 2

The state is safe: <P1, P3, P4, P2, P0> is a safe sequence.

Banker’s Algorithm (cont’d)

Stepping through the safe sequence <P1, P3, P4, P2, P0> (the Allocation, Max and Need table above is unchanged; only Work grows as finished processes release their allocations):

• All of P1’s remaining requests (Need = 1 2 2) can be granted from Available = 3 3 2. Once P1 ends, Work = 5 3 2.
• P3’s requests (Need = 0 1 1) can then be granted. Once P3 ends, Work = 7 4 3.
• P4’s requests (Need = 4 3 1) can then be granted. Once P4 ends, Work = 7 4 5.
• P2’s requests (Need = 6 0 0) can then be granted. Once P2 ends, Work = 10 4 7.
• P0’s requests (Need = 7 4 3) can then be granted, so the sequence completes and the state is safe.

Safety Algorithm (Sec. 7.5.3.1)

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
      Work = Available
      Finish[i] = false for i = 0, 1, …, n-1.

2. Find an i such that both:
      (a) Finish[i] == false
      (b) Need_i <= Work
   If no such i exists, go to step 4.

3. Work = Work + Allocation_i
   Finish[i] = true
   Go to step 2.

4. If Finish[i] == true for all i, then the system is in a safe state.

May require O(m x n^2) operations.
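A sketch of the safety algorithm in C, sized for the 5-process, 3-resource example above (NPROC and NRES are illustrative constants); it returns true exactly when a safe sequence exists.

#include <stdbool.h>

#define NPROC 5
#define NRES  3

bool is_safe(int available[NRES],
             int allocation[NPROC][NRES],
             int need[NPROC][NRES]) {
    int  work[NRES];
    bool finish[NPROC] = { false };

    for (int j = 0; j < NRES; j++)            /* 1. Work = Available */
        work[j] = available[j];

    for (;;) {
        bool progressed = false;
        for (int i = 0; i < NPROC; i++) {     /* 2. find i with !Finish[i] and Need_i <= Work */
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < NRES; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (!fits) continue;
            for (int j = 0; j < NRES; j++)    /* 3. Work = Work + Allocation_i */
                work[j] += allocation[i][j];
            finish[i] = true;
            progressed = true;
        }
        if (!progressed) break;               /* 4. no further candidate exists */
    }
    for (int i = 0; i < NPROC; i++)           /* safe iff every process could finish */
        if (!finish[i]) return false;
    return true;
}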

Banker’s Algorithm (cont’d)

Starting from the previous state, suppose P1 requests (1,0,2). Should this be granted?

Request_1 = (1,0,2) <= Need_1 = (1,2,2) and <= Available = (3,3,2), so pretend to allocate:

         Allocation    Max        Need
         A  B  C       A  B  C    A  B  C
   P0    0  1  0       7  5  3    7  4  3
   P1    3  0  2       3  2  2    0  2  0
   P2    3  0  2       9  0  2    6  0  0
   P3    2  1  1       2  2  2    0  1  1
   P4    0  0  2       4  3  3    4  3  1

   Available: 2 3 0

Now run the safety algorithm on this pretended state, starting with <P1>.

Banker’s Algorithm (cont’d)

P1 requests (1,0,2). Should this be granted? Running the safety algorithm on the pretended state:

• <P1>: P1’s remaining need (0 2 0) fits in Available = 2 3 0; once P1 ends, Work = 5 3 2.
• <P1, P3>: P3 (Need = 0 1 1) can finish next; Work = 7 4 3.
• <P1, P3, P0>: P0 (Need = 7 4 3) can finish; Work = 7 5 3.
• <P1, P3, P0, P2>: P2 (Need = 6 0 0) can finish; Work = 10 5 5.
• <P1, P3, P0, P2, P4>: P4 (Need = 4 3 1) can finish, so the state is safe and P1’s request can be granted.

Banker’s Algorithm (cont’d)

Now, from this state (Available = 2 3 0), P4 requests (3,3,0). Should this be granted?

No – the request exceeds the available resources, so P4 must wait.

Banker’s Algorithm (cont’d)

Now, P0 requests (0,2,0). Should this be granted?

The request is within P0’s need and within Available, so pretend to allocate: P0’s allocation becomes 0 3 0 and Available becomes 2 1 0. Run the safety algorithm on this state to decide.

Resource-Request Algorithm for Process Pi (Sec. 7.5.3.2)

Request_i is the request vector for process Pi. If Request_i[j] = k, then process Pi wants k instances of resource type Rj.

1. If Request_i <= Need_i, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.

2. If Request_i <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.

3. Pretend to allocate the requested resources to Pi by modifying the state as follows:

      Available    = Available – Request_i
      Allocation_i = Allocation_i + Request_i
      Need_i       = Need_i – Request_i

   If the resulting state is safe ⇒ the resources are allocated to Pi.
   If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored.
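Continuing the sketch from the safety algorithm above (it reuses is_safe(), NPROC and NRES), a possible resource-request check in C. For simplicity it reports an excessive claim by returning false instead of raising an error, and it rolls the pretended allocation back when the resulting state is unsafe.

#include <stdbool.h>

/* assumes the definitions from the earlier is_safe() sketch */
bool try_request(int i, int request[NRES],
                 int available[NRES],
                 int allocation[NPROC][NRES],
                 int need[NPROC][NRES]) {
    for (int j = 0; j < NRES; j++)
        if (request[j] > need[i][j]) return false;    /* exceeds maximum claim */
    for (int j = 0; j < NRES; j++)
        if (request[j] > available[j]) return false;  /* resources not available: wait */

    for (int j = 0; j < NRES; j++) {                  /* pretend to allocate */
        available[j]     -= request[j];
        allocation[i][j] += request[j];
        need[i][j]       -= request[j];
    }
    if (is_safe(available, allocation, need))
        return true;                                  /* safe: grant the request */

    for (int j = 0; j < NRES; j++) {                  /* unsafe: restore the old state */
        available[j]     += request[j];
        allocation[i][j] -= request[j];
        need[i][j]       += request[j];
    }
    return false;
}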

Deadlock Detection and Recovery

Allow the system to enter a deadlock state, then use:

• A detection algorithm
   – Single instance of each resource type: look for cycles in the wait-for graph.
   – Multiple instances: an algorithm related to the banker’s algorithm; it requires on the order of O(m x n^2) operations to detect whether the system is in a deadlocked state.

• A recovery scheme

Resource-Allocation Graph and Wait-for Graph

(Figure: a resource-allocation graph and the corresponding wait-for graph.)

Detection-Algorithm Usage

When, and how often, to invoke the detection algorithm depends on:
• How often a deadlock is likely to occur.
• How many processes will need to be rolled back (one for each disjoint cycle).

If the detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph, and we would not be able to tell which of the many deadlocked processes “caused” the deadlock.

Recovery from Deadlock: Process Termination

Abort all deadlocked processes.

Abort one process at a time until the deadlock cycle is eliminated.

In which order should we choose to abort?

Priority of the process.

How long process has computed, and how much longer to completion.

Resources the process has used.

Resources process needs to complete.

How many processes will need to be terminated.

Is process interactive or batch?

Recovery from Deadlock: Resource Preemption

Selecting a victim – minimize cost.

Rollback – return to some safe state, restart process for that state.

Starvation – same process may always be picked as victim, include number of rollbacks in cost factor.