Chapter 6: Synchronization
Silberschatz, Galvin and Gagne ©2005, Operating System Concepts

Module 6: Synchronization
■ Background
■ The Critical-Section Problem
■ Peterson's Solution
■ Synchronization Hardware
■ Semaphores
■ Classic Problems of Synchronization
■ Monitors
■ Synchronization Examples
■ Atomic Transactions
Background
■ Concurrent access to shared data may result in data inconsistency
■ Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
■ Suppose that we want to provide a solution to the producer-consumer problem that fills all the buffers. We can do so by having an integer count that keeps track of the number of full buffers. Initially, count is set to 0. It is incremented by the producer after it produces a new buffer and is decremented by the consumer after it consumes a buffer.
Producer
while (true) {
    /* produce an item and put in nextProduced */
    while (count == BUFFER_SIZE)
        ;  // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}
Consumer
while (true) {
    while (count == 0)
        ;  // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}
Race Condition
■ count++ could be implemented as

    register1 = count
    register1 = register1 + 1
    count = register1

■ count-- could be implemented as

    register2 = count
    register2 = register2 - 1
    count = register2
■ Consider this execution interleaving with "count = 5" initially:

    S0: producer executes register1 = count           {register1 = 5}
    S1: producer executes register1 = register1 + 1   {register1 = 6}
    S2: consumer executes register2 = count           {register2 = 5}
    S3: consumer executes register2 = register2 - 1   {register2 = 4}
    S4: producer executes count = register1           {count = 6}
    S5: consumer executes count = register2           {count = 4}
■ Race condition: the situation where several processes access and manipulate shared data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place.
■ To prevent race conditions, concurrent processes must be synchronized.
The Critical-Section Problem
■ n processes all competing to use some shared data
■ Each process has a code segment, called its critical section, in which the shared data is accessed.
■ Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
Solution to Critical-Section Problem
■ A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section
2. Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision of which will enter next, and this selection cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
General structure of a typical process Pi
do {
    entry section
        critical section
    exit section
        remainder section
} while (true);
Peterson's Solution
■ Two-process solution
■ Assume that the LOAD and STORE instructions are atomic; that is, they cannot be interrupted.
■ The two processes share two variables:
    ● int turn;
    ● boolean flag[2];
■ The variable turn indicates whose turn it is to enter the critical section.
■ The flag array is used to indicate if a process is ready to enter the critical section. flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;
        CRITICAL SECTION
    flag[i] = FALSE;
        REMAINDER SECTION
} while (TRUE);
Algorithm 1

■ Shared variables:
    ● int turn; initially turn = 0
    ● turn == i ⇒ Pi can enter its critical section
■ Process Pi

do {
    while (turn != i)
        ;
        critical section
    turn = j;
        remainder section
} while (true);

■ Satisfies mutual exclusion, but not progress
■ Execution sequence: Pi, Pj, Pi, Pj, Pi, Pj, … (strict alternation)
Algorithm 2

■ Shared variables
    ● boolean flag[2]; initially flag[0] = flag[1] = false
    ● flag[i] = true ⇒ Pi ready to enter its critical section
■ Process Pi

do {
    flag[i] = true;
    while (flag[j])
        ;
        critical section
    flag[i] = false;
        remainder section
} while (true);

■ Satisfies mutual exclusion, but not the progress requirement.
■ May loop indefinitely (if both processes set their flags before either tests the other's)
■ If the order is changed to

    while (flag[j])
        ;
    flag[i] = true;

then mutual exclusion will not hold.
Synchronization Hardware
■ Many systems provide hardware support for critical section code
■ Uniprocessors - could disable interrupts
    ● Currently running code would execute without preemption
    ● Generally too inefficient on multiprocessor systems
         Operating systems using this are not broadly scalable
■ Modern machines provide special atomic hardware instructions
     Atomic = non-interruptible
    ● Either test memory word and set value
    ● Or swap contents of two memory words
TestAndSet Instruction
■ Definition:

boolean TestAndSet(boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
Solution using TestAndSet
■ Shared boolean variable lock, initialized to FALSE
■ Solution:

do {
    while (TestAndSet(&lock))
        ;  /* do nothing */

    // critical section

    lock = FALSE;

    // remainder section
} while (TRUE);
Swap Instruction
■ Definition:

void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
Solution using Swap

■ Shared boolean variable lock initialized to FALSE; each process has a local boolean variable key
■ Solution:

do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);

    // critical section

    lock = FALSE;

    // remainder section
} while (TRUE);
Bounded-waiting Mutual Exclusion with TestAndSet()
■ The hardware-based solutions above do not satisfy the bounded-waiting requirement
■ Another algorithm using the TestAndSet() instruction satisfies all three CS requirements, with shared data:

    boolean waiting[n];  /* initialized to false */
    boolean lock;        /* initialized to false */

■ A process Pi can enter its critical section only if either waiting[i] == false or key == false.
do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;

        // critical section

    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;

    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;

        // remainder section
} while (TRUE);
Semaphore
■ Synchronization tool that does not require busy waiting
■ Semaphore S - integer variable
■ Two standard operations modify S: wait() and signal()
    ● Originally called P() (test) and V() (increment)
■ Less complicated
■ Can only be accessed via two indivisible (atomic) operations:

wait(S) {
    while (S <= 0)
        ;  // no-op (busy waiting)
    S--;
}

signal(S) {
    S++;
}
Semaphore as General Synchronization Tool
■ Counting semaphore - integer value can range over an unrestricted domain
■ Binary semaphore - integer value can range only between 0 and 1; can be simpler to implement
    ● Also known as mutex locks
■ Can implement a counting semaphore S as a binary semaphore
■ Provides mutual exclusion:

    Semaphore S;   // initialized to 1
    wait(S);
        Critical Section
    signal(S);
A more complicated example:
begin
parbegin /* Initially, all semaphores a ~ g are 0 */
    begin S1; signal(a); signal(b); end;
    begin wait(a); S2; S4; signal(c); signal(d); end;
    begin wait(b); S3; signal(e); end;
    begin wait(c); S5; signal(f); end;
    begin wait(d); wait(e); S6; signal(g); end;
    begin wait(f); wait(g); S7; end;
parend
end
(Figure: the corresponding precedence graph over S1…S7, with semaphores a-g labeling the edges between the statements.)
Semaphore Implementation
■ Must guarantee that no two processes can execute wait () and signal () on the same semaphore at the same time
■ Thus, implementation becomes the critical section problem where the wait and signal code are placed in the critical section.
    ● Could now have busy waiting in the critical section implementation
         But the implementation code is short
         Little busy waiting if the critical section is rarely occupied
■ Note that applications may spend lots of time in critical sections and therefore this is not a good solution.
■ The main disadvantage of these mutual-exclusion solutions is busy waiting (wasting CPU cycles in the loop of the entry section).
■ The semaphore defined before has the same problem. Thus, this type of semaphore is also called a spinlock ("spin" while waiting for the lock).
■ To overcome the need for busy waiting, we can modify the definition of P and V using block and wakeup operations, and define a semaphore as a record:

typedef struct {
    int value;
    struct process *list;
} semaphore;
■ Semaphore operations now defined as

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
■ signal(S) and wait(S) should be atomic. This is itself a CS problem (no two processes can execute wait() and signal() on the same semaphore simultaneously).
■ Solutions
    ● uniprocessor system: inhibit interrupts during the execution of the signal and wait operations
    ● multiprocessor system: disabling interrupts does not work here (use spinlocks to ensure that wait and signal are performed atomically)
■ Note that busy waiting has not been completely eliminated.
    ● It is removed from the entry to the CSs of application programs.
    ● It is limited to the CSs of the signal and wait operations, whose code is short (<10 instructions).
    ● Thus, that CS is almost never occupied, and busy waiting occurs rarely, and only for a short time.
Semaphore as a General Synchronization Tool
■ Execute B in Pj only after A has executed in Pi
■ Use semaphore flag initialized to 0
■ Code:

        Pi                  Pj
        A                   wait(flag);
        signal(flag);       B
Deadlock and Starvation
■ Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes
■ Let S and Q be two semaphores initialized to 1

        P0                  P1
        wait(S);            wait(Q);
        wait(Q);            wait(S);
        ...                 ...
        signal(S);          signal(Q);
        signal(Q);          signal(S);
■ Starvation - indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended. Ex: a semaphore whose queue is served in LIFO order
Classical Problems of Synchronization
■ Bounded-Buffer Problem
■ Readers and Writers Problem
■ Dining-Philosophers Problem
■ The three problems are important, because they are
● examples for a large class of concurrency-control problems.
● used for testing nearly every newly proposed synchronization scheme.
■ Semaphores are used for synchronization in our solutions.
Bounded-Buffer Problem
■ Shared data
    semaphore full, empty, mutex;
■ Initially:
    full = 0      /* # of full slots */
    empty = n     /* # of empty slots */
    mutex = 1

(Figure: a producer and a consumer sharing a buffer of size n.)
Bounded-Buffer Problem (Cont.)
■ The structure of the producer process

do {
    // produce an item

    wait(empty);
    wait(mutex);

    // add the item to the buffer

    signal(mutex);
    signal(full);
} while (true);
Bounded-Buffer Problem (Cont.)
■ The structure of the consumer process

do {
    wait(full);
    wait(mutex);

    // remove an item from the buffer

    signal(mutex);
    signal(empty);

    // consume the removed item
} while (true);
Readers-Writers Problem

■ A data set is shared among a number of concurrent processes
    ● Readers - only read the data set; they do not perform any updates
    ● Writers - can both read and write
■ Problem - allow multiple readers to read at the same time, but allow only a single writer to access the shared data at a time
■ Shared data
    ● Data set
    ● Semaphore mutex initialized to 1
    ● Semaphore wrt initialized to 1
    ● Integer readcount initialized to 0
■ Two variations (priorities of readers and writers):
    1. No reader will be kept waiting unless a writer has already obtained permission to use the shared object. (Thus, no reader should wait for other readers to finish just because a writer is waiting.) Writers may starve.
    2. Once a writer is ready, that writer performs its write as soon as possible. (Thus, if a writer is waiting to access the object, no new readers may start reading.) Readers may starve.
■ The structure of a writer process

do {
    wait(wrt);

    // writing is performed

    signal(wrt);
} while (true);
■ The structure of a reader process

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);

    // reading is performed

    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (true);
■ If a writer is in the CS and n readers are waiting, then
    ● the first reader is queued on wrt
    ● the other n-1 readers are queued on mutex
■ When a writer executes signal(wrt), either the waiting readers or a single waiting writer may be resumed. The selection is made by the scheduler.
Dining-Philosophers Problem
■ Shared data
    ● Bowl of rice (data set)
    ● Semaphore chopstick[5] initialized to 1

(Figure: five philosophers around a table, with five chopsticks numbered 1-5 between them.)
Dining-Philosophers Problem (Cont.)
■ The structure of philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);

    // eat

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

    // think

} while (true);   // May deadlock!
■ Possible solutions to the deadlock problem:
    ● Allow at most four philosophers to be sitting simultaneously at the table.
    ● Allow a philosopher to pick up her chopsticks only if both chopsticks are available (note that she must pick them up in a critical section).
    ● Use an asymmetric solution; that is, an odd philosopher picks up the left chopstick first and then the right, while an even philosopher picks up the right first and then the left.
■ Besides deadlock, any satisfactory solution to the DPP must avoid the problem of starvation.
Problems with Semaphores

■ Semaphores provide a convenient and effective mechanism for process synchronization.
■ However, incorrect use may result in timing errors:

    signal(mutex);
    ... critical section ...
    wait(mutex);
        // incorrect order (not mutually exclusive)

    wait(mutex);
    ... critical section ...
    wait(mutex);
        // typing error (deadlock)

    wait(mutex);
    ... critical section ...
        // signal(mutex) forgotten

    ... critical section ...
    signal(mutex);
        // wait(mutex) forgotten
Monitor: A High-level Language Construct
■ A monitor is an approach to synchronize two or more processes that use a shared resource, usually a hardware device or a set of variables.
■ With monitor-based concurrency, the compiler or interpreter transparently inserts locking and unlocking code to appropriately designated procedures, instead of the programmer having to access concurrency primitives explicitly.
■ The representation of a monitor type consists of
    ● declarations of variables whose values define the state of an instance of the type
    ● procedures or functions that implement operations on the type
■ A procedure within a monitor can access only the variables defined in the monitor and its formal parameters.
■ The local variables of a monitor can be used only by the local procedures.
■ The monitor construct ensures that only one process at a time can be active within the monitor.
Monitors

■ High-level synchronization construct that allows the safe sharing of an abstract data type among concurrent processes.
monitor monitor-name
{
    // shared variable declarations

    procedure body P1 (…) {
        . . .
    }
    procedure body P2 (…) {
        . . .
    }
    procedure body Pn (…) {
        . . .
    }
    {
        initialization code
    }
}
Monitors - condition variables

■ To avoid entering a busy waiting state, processes must be able to signal each other about events of interest.
■ Monitors provide this capability through condition variables.
■ When a monitor function requires a particular condition to be true before it can proceed, it waits on an associated condition variable.
■ By waiting, it gives up the lock and is removed from the set of runnable processes.
■ Any process that subsequently causes the condition to be true may then use the condition variable to notify a process waiting for the condition. A process that has been notified regains the lock and can proceed.
■ To allow a process to wait within the monitor, a condition variable must be declared, as

    condition x, y;

■ Condition variables can only be used with the operations wait and signal.
    ● The operation
          x.wait();
      means that the process invoking this operation is suspended until another process invokes
          x.signal();
    ● The x.signal operation resumes exactly one suspended process. If no process is suspended, then the signal operation has no effect.
(Figure: schematic view of a monitor - an entry queue of processes, the shared data, and the monitor operations.)

(Figure: monitor with condition variables - additional queues associated with conditions x and y.)
Monitor with Condition Variables

■ Suppose Q executes x.wait and P executes x.signal.
■ Both P and Q are in the monitor, one must wait.
■ Two possibilities for the implementation:
    (a) Signal and wait: P either waits until Q leaves the monitor, or waits for another condition
    (b) Signal and continue: Q either waits until P leaves the monitor, or waits for another condition
■ Possibility (b) seems more reasonable
■ However, if Q waits, the logical condition for which Q was waiting may no longer hold by the time Q is resumed.
■ Concurrent Pascal adopts a compromise:
    ● When P executes x.signal, it leaves the monitor immediately, and Q is resumed immediately.
Dining Philosophers Example (deadlock-free)
monitor dp
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i)
    void putdown(int i)
    void test(int i)

    void init() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
void pickup(int i) {
    state[i] = HUNGRY;
    test(i);                  // try to eat
    if (state[i] != EATING)
        self[i].wait();
}

void putdown(int i) {
    state[i] = THINKING;
    /* allow neighbors to eat: test left and right neighbors */
    test((i + 4) % 5);        // left
    test((i + 1) % 5);        // right
}
void test(int i) {
    /* try to let Pi eat (if it is hungry) */
    if ((state[(i + 4) % 5] != EATING) &&
        (state[i] == HUNGRY) &&
        (state[(i + 1) % 5] != EATING)) {
        state[i] = EATING;
        self[i].signal();  // if Pi is suspended, resume it; otherwise no effect
    }
}
An illustration

■ P1 calls dp.pickup(1) and starts eating; its neighbors P0 and P3 are thinking.
■ P2 calls dp.pickup(2), becomes hungry, and is suspended by self[2].wait().
■ P1 calls dp.putdown(1), which runs test(0) and test(2); since P3 is not eating, test(2) executes self[2].signal() and P2 can eat.
■ This solution cannot solve starvation!
Monitor Implementation Using Semaphores
■ Variables:

    semaphore mutex;   // (initially = 1)
    semaphore next;    // (initially = 0)
    int next-count = 0;

■ Each external procedure F will be replaced (by the compiler) with:

    wait(mutex);
        body of F;
    if (next-count > 0)
        signal(next);    // wake up a suspended process
    else
        signal(mutex);   // allow others to enter the monitor

■ Mutual exclusion within a monitor is ensured.
■ For each condition variable x, we have:

    semaphore x-sem;   // (initially = 0)
    int x-count = 0;

■ The operation x.wait can be implemented as:

    x-count++;
    if (next-count > 0)
        signal(next);
    else
        signal(mutex);
    wait(x-sem);
    x-count--;

■ A signaling process must wait until the resumed process either leaves or waits.
■ next: the semaphore on which signaling processes suspend themselves.
Monitor Implementation
■ The operation x.signal can be implemented as:

    if (x-count > 0) {       // no effect if no process is waiting
        next-count++;
        signal(x-sem);       // wake up a waiting process
        wait(next);
        next-count--;
    }
monitor ResourceAllocator
{
    boolean busy;
    condition x;

    void acquire(int time) {
        if (busy)
            x.wait(time);
        busy = TRUE;
    }

    void release() {
        busy = FALSE;
        x.signal();
    }

    initialization_code() {
        busy = FALSE;
    }
}

Usage:
    R.acquire(t);
    ...
    access the resource;
    ...
    R.release();
Monitor ResourceAllocator

■ Conditional-wait construct: x.wait(c);
    ● c - an integer expression evaluated when the wait operation is executed
    ● the value of c (a priority number) is stored with the name of the suspended process
    ● when x.signal is executed, the process with the smallest associated priority number is resumed next
■ Two conditions must be checked to establish the correctness of the system:
    ● User processes must always make their calls on the monitor in a correct sequence.
    ● Must ensure that an uncooperative process does not ignore the mutual-exclusion gateway provided by the monitor and try to access the shared resource directly, without using the access protocols.
Synchronization Examples
■ Solaris
■ Windows XP
■ Linux
■ Pthreads
Solaris Synchronization
■ Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing
■ Uses adaptive mutexes for efficiency when protecting data from short code segments
■ Uses condition variables and readers-writers locks when longer sections of code need access to data
■ Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or reader-writer lock
Windows XP Synchronization
■ Uses interrupt masks to protect access to global resources on uniprocessor systems
■ Uses spinlocks on multiprocessor systems
■ Also provides dispatcher objects which may act as either mutexes or semaphores
■ Dispatcher objects may also provide events
● An event acts much like a condition variable
Linux Synchronization
■ Linux:
● disables interrupts to implement short critical sections
■ Linux provides:
● semaphores
● spin locks
Pthreads Synchronization
■ Pthreads API is OS-independent
■ It provides:
● mutex locks
● condition variables
■ Non-portable extensions include:
● read-write locks
● spin locks
Serializability and Concurrency-Control Algorithms
■ Serializability
■ Locking protocol
■ Timestamp-based protocols
Serializability
■ serial schedule: a schedule where each transaction is executed atomically.

    Schedule A
    T0            T1
    read(A)
    write(A)
    read(B)
    write(B)
                  read(A)
                  write(A)
                  read(B)
                  write(B)

■ nonserial schedule: allows two transactions to overlap their execution.

    Schedule B
    T0            T1
    read(A)
    write(A)
                  read(A)
                  write(A)
    read(B)
    write(B)
                  read(B)
                  write(B)
■ Operations Oi and Oj conflict if they access the same data item and at least one of them is a write.
■ Conflict serializable: a schedule S can be transformed into a serial schedule S′ by a series of swaps of nonconflicting operations.
■ Schedule B ⇒ Schedule A:

    … r(A)1 w(A)1 r(B)0 w(B)0 … ⇒ … r(B)0 w(B)0 r(A)1 w(A)1 …

    ● Swap the read(B) of T0 with the write(A) of T1
    ● Swap the read(B) of T0 with the read(A) of T1
    ● Swap the write(B) of T0 with the write(A) of T1
    ● Swap the write(B) of T0 with the read(A) of T1

■ Schedule B will produce the same final state as some serial schedule.
Locking Protocol
■ One way to ensure serializability is to associate with each data item a lock, and require that each transaction follow a locking protocol that governs how locks are acquired and released.
■ Two lock modes
    ● Shared: allows read but not write
    ● Exclusive: allows both read and write
■ Two-phase locking protocol: requires that each transaction issue lock and unlock in two phases: growing and shrinking.
■ The execution order between every pair of conflicting transactions is determined at execution time.
■ growing phase: may obtain locks, but may not release any lock.
■ shrinking phase: may release locks, but may not obtain any new lock.
■ Initially, a transaction is in the growing phase. The transaction acquires locks as needed. Once the transaction releases a lock, it enters the shrinking phase, and no more lock requests can be issued.
■ The two-phase locking protocol ensures conflict serializability. It does not, however, ensure freedom from deadlock.
Timestamp-based Protocols

■ Select the order among transactions in advance
■ Timestamps could be
● System clock
● Logical counter
■ Implementation
● W-timestamp(Q): denotes the largest timestamp of any transaction that executed write(Q) successfully.
● R-timestamp(Q): denotes the largest timestamp of any transaction that executed read(Q) successfully.
These timestamps are updated whenever a new read(Q) or write(Q) instruction is executed.
■ Rules of the timestamp-ordering protocol:
    ● Suppose Ti issues read(Q):
         If TS(Ti) < W-timestamp(Q), Ti would read a value of Q that was already overwritten: read(Q) is rejected, and Ti is rolled back.
         If TS(Ti) >= W-timestamp(Q), read(Q) is executed, and R-timestamp(Q) is set to the maximum of R-timestamp(Q) and TS(Ti).
    ● Suppose Ti issues write(Q):
         If TS(Ti) < R-timestamp(Q), the value of Q that Ti is producing was needed previously, and Ti assumed that this value would never be produced: write(Q) is rejected, and Ti is rolled back.
         If TS(Ti) < W-timestamp(Q), Ti is attempting to write an obsolete value of Q: write(Q) is rejected, and Ti is rolled back.
         Otherwise, write(Q) is executed.
■ A transaction that is rolled back by the concurrency-control scheme as a result of issuing either a read or a write is assigned a new timestamp and is restarted.
■ Example with TS(T2) < TS(T3):

    T2 (10:00)    T3 (10:02)
    read(B)
                  read(B)
                  write(B)
    read(A)
                  read(A)
                  write(A)
■ The timestamp-ordering protocol ensures conflict serializability. The protocol ensures freedom from deadlock because no transaction ever waits.
Homework
■ 1, 3, 7, 9, 11, 13, 18
■ Term Project: Producer-Consumer Problem (Due: Dec. 31, 2007)
End of Chapter 6
Bakery Algorithm
Critical section for n processes
■ Before entering its critical section, process receives a number. Holder of the smallest number enters the critical section.
■ If processes Pi and Pj receive the same number: if i < j, then Pi is served first; else Pj is served first.
■ The numbering scheme always generates numbers in increasing order of enumeration; i.e., 1,2,3,3,3,3,4,5...
■ Notation
    ● lexicographical order on (ticket #, process id #):
          (a, b) < (c, d) if a < c, or if a = c and b < d
    ● max(a0, …, an-1): the largest of a0, …, an-1
■ Shared data

    boolean choosing[n];
    int number[n];

  initialized to false and 0, respectively.
do {
    choosing[i] = true;
    number[i] = max(number[0], number[1], …, number[n - 1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j])
            ;  /* wait while Pj is choosing its number */
        while ((number[j] != 0) &&
               ((number[j], j) < (number[i], i)))
            ;  /* smallest (number, id) pair goes first */
    }
        critical section
    number[i] = 0;
        remainder section
} while (1);
■ Key to showing correctness: if Pi is in its CS, then every other Pk has either

    number[k] == 0, or
    (number[i], i) < (number[k], k)

■ mutual exclusion: OK
■ progress: OK (smallest first)
■ bounded waiting: OK
    ● Note that processes enter their CSs on a FCFS basis
    ● How many times may others enter first? n - 1