
Background


Page 1: Background

Background

Concurrent access to shared data may result in data inconsistency

Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes (i.e., processes share memory or use message passing to cooperate).

Example: Recall the Producer-Consumer problem from Chapter 4, which could only use n-1 elements of the buffer. The new solution adds a variable to track the number of items in the buffer.

Page 2: Background

Race Conditions

global shared int counter = 0, BUFFER_SIZE = 10 ;

Producer:
while (1) {
    while (counter == BUFFER_SIZE) ;   // do nothing
    // produce an item and put it in nextProduced
    buffer[in] = nextProduced ;
    in = (in + 1) % BUFFER_SIZE ;
    counter++ ;
}

Page 3: Background

Race Conditions

Consumer:
while (1) {
    while (counter == 0) ;   // do nothing
    nextConsumed = buffer[out] ;
    out = (out + 1) % BUFFER_SIZE ;
    counter-- ;
    // consume the item
}

Page 4: Background

Executed separately, the producer and consumer work as expected. Executed concurrently, they cause a race condition:

Two or more processes (threads) access shared data concurrently and the outcome of the execution depends on the particular order in which the access takes place.

Page 5: Background

Consider this execution interleaving of counter++ and counter-- :

S0: producer executes register1 = counter         {register1 = 5}
S1: producer executes register1 = register1 + 1   {register1 = 6}
S2: consumer executes register2 = counter          {register2 = 5}
S3: consumer executes register2 = register2 - 1    {register2 = 4}
S4: producer executes counter = register1          {counter = 6}
S5: consumer executes counter = register2          {counter = 4}   // final value for this execution sequence
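
To see the race in practice, here is a minimal C sketch, assuming POSIX threads; the iteration count is arbitrary, and counter is marked volatile only so the compiler does not collapse the loops (volatile does not make ++ or -- atomic). The final value is usually not 0, for exactly the reason traced above.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

volatile int counter = 0;          // shared, unprotected

void *producer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                 // load, add, store: not atomic
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter--;                 // races with the producer's increment
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d (expected 0)\n", counter);   // usually nonzero
    return 0;
}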

Page 6: Background

Critical Section Problem

The part of a process's (thread's) code that modifies shared data is termed its critical section.

The critical section problem is to design a protocol that the processes can use to handle shared data without causing race conditions.

Each process must request permission to enter its critical section (entry section). The protocol may also require an exit section.

The rest of the program code is termed the remainder section. We are not concerned with this code.

Page 7: Background

Program Model

while (1) {
    <entry section>
    <critical section>
    <exit section>
    <remainder>
}

Page 8: Background

Consumer

while (1) {
    while (counter == 0) ;               // entry section
    nextConsumed = buffer[out] ;
    out = (out + 1) % BUFFER_SIZE ;
    counter-- ;                          // critical section
    // consume the item                  (remainder section)
}

Page 9: Background

Producer

while (1) {
    while (counter == BUFFER_SIZE) ;     // entry section
    // produce an item and put it in nextProduced
    buffer[in] = nextProduced ;
    in = (in + 1) % BUFFER_SIZE ;
    counter++ ;                          // critical section
}

Page 10: Background

Correct Solutions to Critical-Section Problem

1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

2. No assumptions may be made about speed or number of CPUs.

3. Progress – No process executing in its remainder section can block another process from entering its critical section.

4. Bounded Waiting – No process should have to wait forever to enter its critical section.

Page 11: Background

Attempt 1

shared int turn = 0 ;

P0:
while (1) {
    while (turn != 0) ;    // entry section
    critical section ;
    turn = 1 ;             // exit section
    <remainder>
}

P1:
while (1) {
    while (turn != 1) ;    // entry section
    critical section ;
    turn = 0 ;             // exit section
    <remainder>
}
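
A runnable sketch of Attempt 1, assuming POSIX threads and C11 atomics (turn is made atomic only so the compiler cannot optimize the spin loop away; the thread bodies and iteration counts are illustrative). Mutual exclusion holds, but notice that the two threads are forced to alternate strictly.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int turn = 0;        // whose turn it is to enter
int shared_count = 0;       // data protected by the critical section

void *p0(void *arg) {
    for (int i = 0; i < 100000; i++) {
        while (atomic_load(&turn) != 0) ;   // entry section: spin until it is our turn
        shared_count++;                     // critical section
        atomic_store(&turn, 1);             // exit section: hand the turn to P1
    }
    return NULL;
}

void *p1(void *arg) {
    for (int i = 0; i < 100000; i++) {
        while (atomic_load(&turn) != 1) ;   // entry section
        shared_count++;                     // critical section
        atomic_store(&turn, 0);             // exit section
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, p0, NULL);
    pthread_create(&b, NULL, p1, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared_count = %d\n", shared_count);   // always 200000: mutual exclusion holds
    return 0;
}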

Page 12: Background

shared int turn = 0 ;

P0:
while (1) {
    while (turn != 0) ;
    critical_region() ;
    turn = 1 ;
    <remainder>
}

P1:
while (1) {
    while (turn != 1) ;
    critical_region() ;
    turn = 0 ;
    <remainder>
}

1) P0 enters the CR, then sets turn to 1.
2) P1 enters the CR, then sets turn to 0. In its remainder section it reads the binary equivalent of War and Peace.
3) P0 enters the CR again, then sets turn to 1.
4) P0 attempts to enter the CR but cannot (until after the book report is due).

Page 13: Background

Does it meet the requirements?

Mutual exclusion? Yes. But it violates rule three (progress): a process in its remainder section (P1 above, still reading its book) keeps the other process from entering its critical section.

Page 14: Background

Attempt 2: Using flag to indicate interest

global shared int flag[2] = {0,0} ;

P0:
while (1) {
    flag[0] = 1 ;
    while (flag[1] == 1) ;   // entry section
    <critical section>
    flag[0] = 0 ;            // exit section
    <remainder>
}

P1:
while (1) {
    flag[1] = 1 ;
    while (flag[0] == 1) ;   // entry section
    <critical section>
    flag[1] = 0 ;            // exit section
    <remainder>
}

Page 15: Background

Attempt 2: Using flag to indicate interest

global shared int flag[2] = {0,0} ;

P0:
while (1) {
    flag[0] = 1 ;
    while (flag[1] == 1) ;   // busy wait
    <critical section>
    flag[0] = 0 ;
    <remainder>
}

P1:
while (1) {
    flag[1] = 1 ;
    while (flag[0] == 1) ;   // busy wait
    <critical section>
    flag[1] = 0 ;
    <remainder>
}

1. P0 sets flag[0] = 1, gets preempted.
2. P1 sets flag[1] = 1, gets preempted.

Page 16: Background

Problem?

Mutual Exclusion? Progress? Bounded Wait?

Page 17: Background

Attempt 2.1: Using flag to indicate interest

global shared int flag[2] = {0,0} ;

P0:
while (1) {
    while (flag[1] == 1) ;
    flag[0] = 1 ;
    <critical section>
    flag[0] = 0 ;
    <remainder>
}

P1:
while (1) {
    while (flag[0] == 1) ;
    flag[1] = 1 ;
    <critical section>
    flag[1] = 0 ;
    <remainder>
}

Page 18: Background

Attempt 2.1: Using flag to indicate interest

global shared int flag[2] = {0,0} ;

P0:
while (1) {
    while (flag[1] == 1) ;
    flag[0] = 1 ;
    <critical section>
    flag[0] = 0 ;
    <remainder>
}

P1:
while (1) {
    while (flag[0] == 1) ;
    flag[1] = 1 ;
    <critical section>
    flag[1] = 0 ;
    <remainder>
}

1. P0 reads flag[1] == 0, gets preempted.
2. P1 reads flag[0] == 0, gets preempted.
3. P0 sets flag[0] = 1 and enters its critical region.
4. P1 sets flag[1] = 1 and enters its critical region.

Page 19: Background

Problem?

Mutual Exclusion? Progress? Bounded Wait?

Page 20: Background

Correct for Two Processes

P0:
while (1) {
    flag[0] = 1 ;
    turn = 1 ;
    while (flag[1] && turn == 1) ;
    <critical section>
    flag[0] = 0 ;
}

P1:
while (1) {
    flag[1] = 1 ;
    turn = 0 ;
    while (flag[0] && turn == 0) ;
    <critical section>
    flag[1] = 0 ;
}
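
A runnable sketch of this two-process solution (Peterson's algorithm), assuming POSIX threads and C11 atomics for flag and turn; the worker body and iteration count are illustrative. The algorithm relies on sequentially consistent accesses to flag and turn, which the default atomic operations provide.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int flag[2] = {0, 0};   // flag[i] == 1 means Pi wants to enter
atomic_int turn = 0;           // tie-breaker when both want to enter
int shared_count = 0;

void *worker(void *arg) {
    int me    = *(int *)arg;   // 0 or 1
    int other = 1 - me;
    for (int i = 0; i < 100000; i++) {
        atomic_store(&flag[me], 1);                // entry section
        atomic_store(&turn, other);                // politely give the other the turn
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other) ;
        shared_count++;                            // critical section
        atomic_store(&flag[me], 0);                // exit section
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared_count = %d\n", shared_count);   // 200000: mutual exclusion holds
    return 0;
}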

Page 21: Background

Correct for Two Processes

P0:
while (1) {
    flag[0] = 1 ;
    turn = 1 ;
    while (flag[1] && turn == 1) ;   // entry section
    <critical section>
    flag[0] = 0 ;                    // exit section
    do_boring_things() ;             // remainder section
}

P1:
while (1) {
    flag[1] = 1 ;
    turn = 0 ;
    while (flag[0] && turn == 0) ;   // entry section
    <critical section>
    flag[1] = 0 ;                    // exit section
    do_interesting_things() ;        // remainder section
}

1. P0 sets flag[0] to 1 and computes the new value of turn (e.g., reg1 = 1, ready to store 1 to turn). Gets preempted.

2. P1 sets flag[1] to 1 and computes the new value of turn (e.g., reg1 = 0, ready to store 0 to turn). Gets preempted.

3. What happens if P0 stores turn = 1 and falls into the while loop?

4. What happens if P1 then stores turn = 0?

Page 22: Background

Goal of Critical Sections (Regions)

Mutual exclusion using critical regions

Page 23: Background

Hardware Solutions

Many systems provide hardware support for critical-section code.

Simple: Disable interrupts during critical section. Would this provide mutual exclusion??

Problems:

Page 24: Background

Hardware Solutions

Many systems provide hardware support for critical-section code.

Simple: Disable interrupts during critical section. Would this provide mutual exclusion??

Problems: We do not want to do this at user level, and it does not scale to multiprocessor systems.

It is OK for the OS to disable interrupts for short periods of time, but not for user processes.

Page 25: Background

Most machines provide special atomic hardware instructions

Atomic = non-interruptible. The instruction either tests a memory word and sets its value, or swaps the contents of two memory words.

Page 26: Background

Test and Set Instruction (Busy Waiting)

This is a software description of the TestAndSet hardware instruction.

int lock = 0 ;   // global integer variable, initially set to 0

int TestAndSet (int *lock)
{
    int return_val = *lock ;
    *lock = 1 ;
    return return_val ;
}

Page 27: Background

Usage:

shared int lock = 0 ;

while (TestAndSet(&lock)) ;    // entry code (busy wait)
<critical section>
lock = 0 ;                     // exit code, required for correctness
Do_something_else() ;          // remainder section
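
In portable C, the hardware TestAndSet can be approximated with the C11 atomic_flag type; the sketch below mirrors the usage on this slide (the acquire/release names are illustrative, not part of any standard API).

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   // clear == unlocked

void acquire(void) {
    // atomic test-and-set: returns the previous value and sets the flag
    while (atomic_flag_test_and_set(&lock)) ;   // busy wait (spin)
}

void release(void) {
    atomic_flag_clear(&lock);          // the "lock = 0" exit code
}

// usage, mirroring the slide:
//     acquire();             // entry code
//     ... critical section ...
//     release();             // required for correctness
//     do_something_else();   // remainder section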

Page 28: Background

Solutions that minimize busy waiting

Producer-consumer problem (Revisited)

Page 29: Background

Deadlock?

Page 30: Background

Deadlock?

Producer:
    if (count == N)      // assume the condition is true
    [preempted before it can call sleep()]

Consumer:
    count-- ;
    if (count == N-1)
        wakeup(producer) ;   // the wakeup is lost: the producer is not asleep yet

Producer:
    sleep() ;            // taking the dirt nap: sleeps forever

Page 31: Background

Semaphore

Synchronization tool that does not require busy waiting (spin lock)

Semaphore S – an integer variable.
Two standard operations modify S: wait() and signal().
Originally called P() and V(); called down and up in the text.
S can only be accessed via these two indivisible (atomic) operations.

Page 32: Background

typedef struct {
    int count ;     // for mutual exclusion between 2 processes, initialize to 1
    PCB *Sem_Q ;    // queue of processes blocked on this semaphore
} Semaphore ;

Page 33: Background

Use of Semaphores

Semaphore exclusion = 1 ;

while (1) {
    wait(exclusion) ;     // Note: not a busy wait!
    <critical section>
    signal(exclusion) ;
    <remainder section>
}
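
POSIX semaphores provide exactly this wait()/signal() pair as sem_wait()/sem_post(). A minimal sketch, assuming <semaphore.h> and POSIX threads; the worker body and iteration count are illustrative.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t exclusion;       // plays the role of the Semaphore above
int shared_count = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&exclusion);     // wait(): blocks, does not spin
        shared_count++;           // critical section
        sem_post(&exclusion);     // signal()
    }
    return NULL;
}

int main(void) {
    sem_init(&exclusion, 0, 1);   // count initialized to 1, as on the slide
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, NULL);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared_count = %d\n", shared_count);   // 200000
    sem_destroy(&exclusion);
    return 0;
}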

Page 34: Background

Semaphore Sem = 1 ;   // initialize counter variable

wait(Sem)
{
    Sem.count-- ;
    if (Sem.count < 0)
    {
        Place process on Sem_Q ;
        Block process ;
    }
}

Page 35: Background

signal(Sem)
{
    Sem.count++ ;
    if (Sem.count <= 0)   // someone is sleeping on Sem_Q
    {
        Remove a process from Sem_Q ;
        Place it on the ready queue ;
    }
}

Page 36: Background

Example

Semaphore Sem = 1 ;

Process   Operation   Sem.count   Result
P0        wait         0          Passes through
P1        wait        -1          Blocked, placed on Sem_Q
P0        signal       0          Removes P1 from Sem_Q; P1 passes through the semaphore

Page 37: Background

Ensuring Mutual Exclusion of Sem Code.

shared int lock = 0 ;
Semaphore Sem = 1 ;   // initialize count

wait(Sem)
{
    while (TestAndSet(&lock)) ;   // busy loop. Why not a problem??
    Sem.count-- ;
    if (Sem.count < 0)
    {
        Place process on Sem_Q ;
        Block process and set lock to 0 ;
    }
    else
        lock = 0 ;
}

Page 38: Background

Signal(Sem)

{ while (TestAndSet(lock)) ; //busy wait Sem.count++ ; if (Sem.count <= 0 )

{ Remove process from SemQ; Place on ready queue ;}

lock = 0 ; }

Page 39: Background

Deadlock and Starvation

Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.

Let S and Q be two semaphores initialized to 1

Page 40: Background

Semaphore S = 1, Q = 1 ;

P0:
wait(S) ;
wait(Q) ;
<CS>
signal(S) ;
signal(Q) ;

P1:
wait(Q) ;
wait(S) ;
<CS>
signal(Q) ;
signal(S) ;

Page 41: Background

Semaphore S = 1, Q = 1 ;

P0: wait(S) ;   // S.count = 0
P1: wait(Q) ;   // Q.count = 0
P0: wait(Q) ;   // Q.count = -1 ; recall: if Q.count < 0, the process is put on Q's semaphore queue
P1: wait(S) ;   // S.count = -1 ; recall: if S.count < 0, the process is put on S's semaphore queue

Each process is now waiting for a signal that only the other (blocked) process can issue: deadlock.
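
This interleaving can be forced in practice. A small sketch with POSIX semaphores; the sleep() calls exist only to make the bad interleaving (almost) certain. The program is expected to hang and must be interrupted by hand.

#include <semaphore.h>
#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

sem_t S, Q;   // both initialized to 1, as on the slide

void *p0(void *arg) {
    sem_wait(&S);      // S.count = 0
    sleep(1);          // encourage the bad interleaving traced above
    sem_wait(&Q);      // blocks forever once P1 holds Q
    // <CS> is never reached under the bad interleaving
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

void *p1(void *arg) {
    sem_wait(&Q);      // Q.count = 0
    sleep(1);
    sem_wait(&S);      // blocks forever once P0 holds S
    // <CS>
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

int main(void) {
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    printf("expect this program to hang (deadlock); interrupt it with Ctrl-C\n");
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}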

Page 42: Background

Semaphores for Synchronization

Want statement S1 in P0 to execute before statement S2 in P1.

Semaphore Syncher = 0 ; // What does this do?

P1:

wait(Syncher) ; // Syncher.count = -1

S2 ;

Page 43: Background

Semaphores for Synchronization

Want Statement S1 in P0 to execute before statement S2 in P1.

Semaphore Syncher = 0 ; // What does this do?

P0:
S1 ;
signal(Syncher) ;

P1:
wait(Syncher) ;
S2 ;

Page 44: Background

Semaphores for Synchronization

Case 1: P1 executes wait first:

P1:
wait(Syncher) ;     // Syncher.count = -1 ; P1 is put on Syncher's queue

P0:
S1 ;
signal(Syncher) ;   // Syncher.count = 0 ; take P1 off Syncher's queue and place it on the ready queue

// Synchronization accomplished!!

Page 45: Background

Semaphores for Synchronization

Case 2: P0 executes signal first.

P0:
S1 ;
signal(Syncher) ;   // Syncher.count = 1 ; no one is waiting on Syncher's queue

P1:
wait(Syncher) ;     // Syncher.count = 0 ; P1 does not have to wait and executes statement S2

// Synchronization accomplished!!

Page 46: Background

Semaphores for Synchronization

Want statement S1 in P0 to execute before statement S2 in P1, and S2 before statement S3 in P2.

Semaphore Syncher = 0 ;
Semaphore Synch1 = 0 ;

P0:
S1 ;
signal(Syncher) ;

P1:
wait(Syncher) ;
S2 ;
signal(Synch1) ;

P2:
wait(Synch1) ;
S3 ;
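
The same ordering, sketched with POSIX semaphores. S1, S2, and S3 are stand-ins (here just prints), and the thread names are illustrative. Whatever order the threads happen to start or run in, the output is always S1, S2, S3.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t syncher, synch1;    // both initialized to 0, as on the slide

void *p0(void *arg) { printf("S1\n"); sem_post(&syncher); return NULL; }
void *p1(void *arg) { sem_wait(&syncher); printf("S2\n"); sem_post(&synch1); return NULL; }
void *p2(void *arg) { sem_wait(&synch1); printf("S3\n"); return NULL; }

int main(void) {
    sem_init(&syncher, 0, 0);
    sem_init(&synch1, 0, 0);
    pthread_t t0, t1, t2;
    pthread_create(&t2, NULL, p2, NULL);   // start order does not matter
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t0, NULL, p0, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}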

Page 47: Background

global shared int counter = 0, BUFFER_SIZE = 10 ;

Producer:
while (1) {
    while (counter == BUFFER_SIZE) ;   // do nothing
    // produce an item and put it in nextProduced
    buffer[in] = nextProduced ;
    in = (in + 1) % BUFFER_SIZE ;
    counter++ ;
}

Page 48: Background

Consumer:
while (1) {
    while (counter == 0) ;   // do nothing
    nextConsumed = buffer[out] ;
    out = (out + 1) % BUFFER_SIZE ;
    counter-- ;
    // consume the item
}

Page 49: Background

How can this busy waiting be (largely) avoided?

Page 50: Background

How can this busy waiting be (largely) avoided?

Use semaphores. How many semaphores?

Page 51: Background

How can this busy waiting be (largely) avoided?

Use semaphores. How many semaphores?

2: One for producer and one for consumer (call them full and empty)

To what should they be initialized? That is, how many times can a process fall through the semaphore before it becomes a problem?

Page 52: Background

Semaphore full = ? ;

Semaphore empty = ? ;

Page 53: Background

Semaphore full = 10 ;

Semaphore empty = 0 ;

Think about how you would use these semaphores in the bounded buffer problem.
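
One possible answer, sketched with POSIX semaphores and kept close to the slides' naming and initial values (full = 10, empty = 0). The extra mutex protecting buffer, in, and out is an assumption beyond the slides; it is only strictly needed if there can be more than one producer or consumer, and is harmless otherwise.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define BUFFER_SIZE 10
#define ITEMS 100

int buffer[BUFFER_SIZE];
int in = 0, out = 0;

sem_t full;    // initialized to 10, as on the slide: slots the producer may still fill
sem_t empty;   // initialized to 0, as on the slide: items available to the consumer
sem_t mutex;   // assumption beyond the slides: protects buffer, in, and out

void *producer(void *arg) {
    for (int item = 0; item < ITEMS; item++) {
        sem_wait(&full);                 // replaces the busy wait on counter == BUFFER_SIZE
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);                // wake the consumer if it is waiting for an item
    }
    return NULL;
}

void *consumer(void *arg) {
    long sum = 0;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);                // replaces the busy wait on counter == 0
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&full);                 // a slot has been freed for the producer
        sum += item;
    }
    printf("consumed %d items, sum = %ld\n", ITEMS, sum);
    return NULL;
}

int main(void) {
    sem_init(&full, 0, BUFFER_SIZE);     // full = 10
    sem_init(&empty, 0, 0);              // empty = 0
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}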
