8/10/2019 CONCURRENCIA.pdf
1/160
Principles of Concurrent Programming

M. Ben-Ari
Tel-Aviv University

Englewood Cliffs, New Jersey  London  New Delhi  Singapore  Sydney  Tokyo  Toronto  Wellington
Library of Congress Cataloging in Publication Data

Ben-Ari, M., 1948-
Principles of concurrent programming.
Bibliography: p.
Includes index.
1. Parallel processing (Electronic computers)
I. Title
QA76.6.B437  001.64'2  82-3650
ISBN 0-13-701078-8 (pbk.)  AACR2

British Library Cataloguing in Publication Data

Ben-Ari, M.
Principles of concurrent programming.
1. Parallel processing (Electronic computers)
2. Operating systems (Computers)
I. Title
001.64'25  QA76.6
ISBN 0-13-701078-8
© 1982 by PRENTICE-HALL INTERNATIONAL, INC.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of Prentice-Hall International, Inc. For permission within the United States contact Prentice-Hall Inc., Englewood Cliffs, N.J. 07632.

ISBN 0-13-701078-8

PRENTICE-HALL INTERNATIONAL, INC., London
PRENTICE-HALL OF AUSTRALIA PTY., LTD., Sydney
PRENTICE-HALL CANADA, INC., Toronto
PRENTICE-HALL OF INDIA PRIVATE LIMITED, New Delhi
PRENTICE-HALL OF JAPAN, INC., Tokyo
PRENTICE-HALL OF SOUTHEAST ASIA PTE., LTD., Singapore
PRENTICE-HALL INC., Englewood Cliffs, New Jersey
WHITEHALL BOOKS LIMITED, Wellington, New Zealand

Printed in the United States of America

10 9 8 7 6 5 4 3
CONTENTS

Preface
Acknowledgements

1 What Is Concurrent Programming?
1.1 From Sequential to Concurrent Programming
1.2 Concurrent Programming
1.3 Correctness of Concurrent Programs
1.4 Interleaving
1.5 The Origin of Operating Systems
1.6 Operating Systems and Concurrent Programming
1.7 An Overview of the Book
1.8 Program Notation
1.9 Exercises

2 The Concurrent Programming Abstraction
2.1 Introduction
2.2 Mutual Exclusion
2.3 Correctness
2.4 Timing
2.5 Implementing Primitive Instructions
2.6 Concurrent Programming in Pascal-S
2.7 Summary
2.8 Exercises
3 The Mutual Exclusion Problem
3.1 Introduction
3.2 First Attempt
3.3 Second Attempt
3.4 Third Attempt
3.5 Fourth Attempt
3.6 Dekker's Algorithm
3.7 A Proof of Dekker's Algorithm
3.8 Conclusion
3.9 Exercises

4 Semaphores
4.1 Introduction
4.2 Mutual Exclusion
4.3 The Producer-Consumer Problem
4.4 More on the Producer-Consumer Problem
4.5 The Sleeping Barber
4.6 The Bounded Buffer
4.7 Exercises

5 Monitors
5.1 Introduction
5.2 Definition of Monitors
5.3 Simulation of the Semaphore
5.4 The Readers and Writers Problem
5.5 Proving Properties of Monitors
5.6 The Simulation of Monitors by Semaphores
5.7 Unrestricted Signals
5.8 Exercises

6 The Ada Rendezvous
6.1 Introduction
6.2 The Accept Statement
6.3 The Select Statement
6.4 Proving Properties of the Rendezvous
6.5 Exercises
7 The Dining Philosophers
7.1 Introduction
7.2 First Attempt
7.3 Second Attempt
7.4 A Correct Solution
7.5 Conditional Critical Regions
7.6 Exercises

Appendix: Implementation Kit
A.1 Introduction
A.2 The Compiler
A.3 The P-Code
A.4 Procedure Calls
A.5 Concurrency
A.6 Semaphores
A.7 Randomization
A.8 Program Listing

Bibliography
Textbooks
Sources

Index
PREFACE
Concurrent programming, the programming tools and techniques for dealing with parallel processes, has traditionally been a topic in operating systems theory texts. There are several reasons why concurrent programming deserves a book of its own and should be the core of an advanced computer science systems course:

1. Concurrent programming is what distinguishes operating systems and real-time systems from other software systems. Computer science students who have taken courses in programming, data structures, computer architecture and probability can easily master the applications of these disciplines in operating systems, but they need to be introduced to techniques that will enable them to deal with parallelism.
2. I doubt if many of my students will ever design or construct a multiprocessing time-sharing system where algorithms for paging and scheduling are of prime importance. I am certain that they will be designing and constructing real-time systems for mini- and microcomputers. A sound knowledge of concurrent programming will enable them to cope with real-time systems, in particular with the severe reliability requirements that are imposed.
3. There is a trend towards increasing use of abstract concurrency that has nothing to do with the parallelism of actual systems. Data flow diagrams used in software engineering are nothing more than networks of concurrent processes. Traditionally, such a design must be implemented in a sequential language, but UNIX† is a programming system which encourages the use of concurrent processes. Ada‡, the new language designed for the U.S. Department of Defense, includes concurrent programming features as an integral part of the language.

† UNIX is a trademark of Bell Laboratories.
‡ Ada is a trademark of the United States Dept. of Defense.
4. Finally, concurrent programming is an important topic in computer science research. The basic results in the field are still scattered throughout the literature and deserve to be collected into one volume for a newcomer to the field.
The book requires no prerequisites as such other than computer science maturity (to borrow the term from mathematics). The book is aimed at advanced undergraduate students of computer science. At the Tel Aviv University, we arrange the course of study so that a student has four semesters of computer science courses which include extensive programming exercises. The material is also appropriate for practicing systems and real-time programmers who are looking for a more formal treatment of the tools of their trade.

I have used the material for half of a weekly four-hour semester course in operating systems; the second half is devoted to the classical subjects: memory management, etc. I should like to see curricula evolve that would devote a quarter or trimester to (theoretical) concurrent programming followed by a project-oriented course on operating systems or real-time systems.

The book is not oriented to any particular system or technique. I have tried to give equal time to the most widespread and successful tools for concurrent programming: memory arbiters, semaphores, monitors and rendezvous. Only the most elementary features of Pascal and Ada are used; they should be understandable by anyone with experience in a modern programming language.

Much of the presentation closely follows the original research articles by E. W. Dijkstra and C. A. R. Hoare. In particular, Dijkstra's "Co-operating Sequential Processes" reads like a novel and it was always great fun to lecture the material. This book is an attempt to explain and expand on their work.

Verification of concurrent programs is one of the most exciting areas of research today. Concurrent programs are notorious for the hidden and sophisticated bugs they contain. Formal verification seems indispensable. A novel feature of this book is its attempt to verify concurrent programs rigorously though informally. Hopefully, a student who is introduced early to verification will be prepared to study formal methods at a more advanced level.

One cannot learn any programming technique without practicing it. There are several programming systems that can be used for class exercise. For institutions where no such system is available, the Appendix contains the description and listing of a simple Pascal-S based concurrent programming system that I have used for class exercise.
ACKNOWLEDGEMENTS
I am grateful to Amiram Yehudai for the many discussions that we have had over the past few years on programming and how to teach it. He was also kind enough to read the entire manuscript and suggest improvements.

I should also like to thank Amir Pnueli for teaching me programming logic and verification, and Henry Hirschberg of Prentice/Hall International for his encouragement during the transition from notes to book.

Also, special thanks to my wife Margalit for drawing the sketches on which the illustrations were based.
1 WHAT IS CONCURRENT PROGRAMMING?
1.1 FROM SEQUENTIAL TO CONCURRENT PROGRAMMING
Figure 1.1 shows an interchange sort program. The program can be compiled into a set of machine language instructions and then executed on a computer. The program is sequential; for any given input (of 40 integers) the computer will always execute the same sequence of machine instructions. If we suspect that there is a bug in the program then we can debug by tracing (listing the sequence of instructions executed) or by breakpoints and snapshots (suspending the execution of the program to list the values of the variables).
There are better sequential sorting algorithms (see Aho et al., 1974) but we are going to improve the performance of this algorithm by exploiting the possibility of executing portions of the sort in parallel. Suppose that (for n = 10) the input sequence is: 4, 2, 7, 6, 1, 8, 5, 0, 3, 9. Divide the array into two halves: 4, 2, 7, 6, 1 and 8, 5, 0, 3, 9; get two colleagues to sort the halves simultaneously: 1, 2, 4, 6, 7 and 0, 3, 5, 8, 9; and finally, with a brief inspection of the data, merge the two halves:

0
0, 1
0, 1, 2
. . .

A simple complexity analysis will now show that even without the help of colleagues, the parallel algorithm can still be more efficient than the sequential algorithm. In the inner loop of an interchange sort, there are (n-1) + (n-2) + . . . + 1 = n(n-1)/2 comparisons. This is approximately n²/2. To sort n/2 elements thus requires only about (n/2)²/2 = n²/8 comparisons, so even if the two half-sorts are executed one after the other, about n²/4 comparisons (plus roughly n steps for the merge) suffice, an improvement on n²/2.
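As a quick sanity check of this arithmetic, the comparison counts can be computed directly. This is a sketch of ours, not from the book; the helper name is illustrative:

```python
def interchange_comparisons(n):
    # inner-loop comparisons of an interchange sort:
    # (n-1) + (n-2) + ... + 1 = n(n-1)/2
    return n * (n - 1) // 2

n = 40
sequential = interchange_comparisons(n)           # sort all n at once
split = 2 * interchange_comparisons(n // 2) + n   # two half-sorts plus ~n merge steps

print(sequential)  # 780
print(split)       # 420, roughly n²/4 + n, already better than n²/2
```

Even run one after the other, the two half-sorts plus the merge need barely more than half the comparisons of the single full sort.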
program sortprogram;
const n = 40;
var a: array[1..n] of integer;
    k: integer;
procedure sort(low, high: integer);
var i, j, temp: integer;
begin
  for i := low to high - 1 do
    for j := i + 1 to high do
      if a[j] < a[i] then
        begin
          temp := a[i]; a[i] := a[j]; a[j] := temp
        end
end;
begin (* main program *)
  for k := 1 to n do read(a[k]);
  sort(1, n);
  for k := 1 to n do write(a[k])
end.

Fig. 1.1.
program sortprogram;
const n = 20;
      twon = 40;
var a: array[1..twon] of integer;
    k: integer;
procedure sort(low, high: integer);
  (* as before *)
procedure merge(low, middle, high: integer);
var count1, count2: integer;
    k, index1, index2: integer;
begin
  count1 := low;
  count2 := middle;
  while count1 < middle do
    if a[count1] < a[count2] then
      begin
        write(a[count1]);
        count1 := count1 + 1;
        if count1 >= middle then
          for index2 := count2 to high do
            write(a[index2])
      end
    else
      begin
        write(a[count2]);
        count2 := count2 + 1;
        if count2 > high then
          begin
            for index1 := count1 to middle - 1 do
              write(a[index1]);
            count1 := middle (* terminate *)
          end
      end
end;
begin (* main program *)
  for k := 1 to twon do read(a[k]);
  sort(1, n);
  sort(n + 1, twon);
  merge(1, n + 1, twon)
end.

Fig. 1.3.
Suppose that the program is to be run on a multiprocessor computer, that is, a computer with more than one CPU. Then we need some notation that can express the fact that the calls sort(1, n) and sort(n + 1, twon) can be executed in parallel. Such a notation is the cobegin-coend bracket shown in Fig. 1.4. cobegin p1; . . . ; pn coend means: suspend the execution of the main program; initiate the execution of procedures p1, . . . , pn on multiple computers; when all of p1, . . . , pn have terminated then resume the main program.
program sortprogram;
(* declarations as before *)
begin (* main program *)
  for k := 1 to twon do read(a[k]);
  cobegin
    sort(1, n);
    sort(n + 1, twon)
  coend;
  merge(1, n + 1, twon)
end.

Fig. 1.4.
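The effect of cobegin-coend can be imitated in a modern language with threads. The sketch below is our own illustration, not from the book: Python's threading module stands in for the multiple computers, the two half-sorts run concurrently, and the join calls play the role of coend before the sequential merge:

```python
import threading

def sort_region(a, low, high):
    # interchange sort of a[low..high] inclusive, as in Fig. 1.1
    for i in range(low, high):
        for j in range(i + 1, high + 1):
            if a[j] < a[i]:
                a[i], a[j] = a[j], a[i]

def concurrent_sort(a):
    n = len(a) // 2
    # "cobegin": initiate both sorts
    t1 = threading.Thread(target=sort_region, args=(a, 0, n - 1))
    t2 = threading.Thread(target=sort_region, args=(a, n, len(a) - 1))
    t1.start(); t2.start()
    # "coend": resume only when both have terminated
    t1.join(); t2.join()

def merge(a, middle):
    # sequential merge of the two sorted halves, as in Fig. 1.3
    out, i, j = [], 0, middle
    while i < middle and j < len(a):
        if a[i] < a[j]:
            out.append(a[i]); i += 1
        else:
            out.append(a[j]); j += 1
    return out + a[i:middle] + a[j:]

a = [4, 2, 7, 6, 1, 8, 5, 0, 3, 9]
concurrent_sort(a)
print(merge(a, len(a) // 2))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The two sorts touch disjoint regions of the array, so they do not interfere; the merge is safe only because it starts after both joins complete.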
The programs of Figs. 1.3 and 1.4 are identical except for the cobegin-coend in Fig. 1.4. There would be no need for both versions if the definition of cobegin-coend was modified. Instead of requiring that the procedures be executed in parallel, cobegin-coend becomes a declaration that the procedures may be executed in parallel. It is left to the implementation (the system hardware and software) to decide if parallel execution will be done. Processors may be added or removed from the system without affecting the correctness of the program, only the time that it would take to execute the program.

The word concurrent is used to describe processes that have the potential for parallel execution. We have shown how an algorithm can be improved by identifying procedures that may be executed concurrently. While the greatest improvement is obtained only under true parallel execution, it is possible to ignore this implementation detail without affecting the superiority of the concurrent algorithm over the sequential algorithm.
1.2 CONCURRENT PROGRAMMING
Concurrent programming is the name given to programming notations and techniques for expressing potential parallelism and for solving the resulting synchronization and communication problems. Implementation of parallelism is a topic in computer systems (hardware and software) that is essentially
independent of concurrent programming. Concurrent programming is important because it provides an abstract setting in which to study parallelism without getting bogged down in the implementation details. This abstraction has proved to be so useful in writing clear, correct software that modern programming languages offer facilities for concurrent programming.
The basic problem in writing a concurrent program is to identify which activities may be done concurrently. If the merge procedure is also included in the cobegin-coend bracket (Fig. 1.5), the program is no longer correct. If you merge the data in parallel with the sorting done by your two colleagues, the scenario of Fig. 1.6 might occur.
cobegin
  sort(1, n);
  sort(n + 1, twon);
  merge(1, n + 1, twon)
coend

Fig. 1.5.
                      Colleague1       Colleague2       You
Initially             4, 2, 7, 6, 1    8, 5, 0, 3, 9    -
Colleague1 exchanges  2, 4, 7, 6, 1    8, 5, 0, 3, 9    -
Colleague2 exchanges  2, 4, 7, 6, 1    5, 8, 0, 3, 9    -
You merge             . .              . .              2
You merge                                               2, 4
You merge                                               2, 4, 5

Fig. 1.6.
However, merge could be a concurrent process if there were some way of synchronizing its execution with the execution of the sort processes (Fig. 1.7).
while count1 < middle do
  wait until i of procedure call sort(1, n) is greater than count1 and i of
  procedure call sort(n + 1, twon) is greater than count2 and only then:
    if a[count1] < a[count2] then . . .

Fig. 1.7.
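One way to realize the "wait until" of Fig. 1.7 in a modern language is a condition variable: the sorting processes publish how far they have advanced, and the merger blocks until its predicate holds. This is a minimal sketch of ours, not the book's notation; the names progress, sorter and the threshold are illustrative assumptions:

```python
import threading

progress = {"i1": 0, "i2": 0}   # how far each sort process has advanced
cond = threading.Condition()

def sorter(key, steps):
    # stand-in for a sort process: advance its index one step at a time
    for k in range(1, steps + 1):
        with cond:
            progress[key] = k
            cond.notify_all()   # wake anyone waiting on our progress

def wait_until(pred):
    # the "wait until ... and only then" of Fig. 1.7
    with cond:
        cond.wait_for(pred)

t = threading.Thread(target=sorter, args=("i1", 5))
t.start()
wait_until(lambda: progress["i1"] > 3)   # safe to inspect a[count1] now
t.join()
print(progress["i1"])  # 5
```

The predicate is always evaluated while holding the lock, so the merger can never observe a half-updated counter; this lock-and-wait discipline is exactly what the informal "wait until" glosses over.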
Parallelism is important not only for the improvement in performance that can be obtained; it can also improve the structure and clarity of a program. Consider the
following problem:

Read 80-column cards and print them on 125-character lines. However, every run of n = 1 to 9 blank spaces is to be replaced by a single blank followed by the numeral n.
This program is difficult to write as a sequential program. There are many interacting special cases: a run of blanks overlapping the end of a card, the blank and numeral pair overlapping the end of a line, and so on. One way to improve the clarity of the program would be to write three separate programs: one to read cards and write a stream of characters onto a temporary file; a second program to read this character stream and modify runs of blanks, writing the new stream onto a second temporary file; and a third program to read the second temporary file and print lines of 125 characters each.

This solution is not acceptable because of the high overhead of the temporary files. However, if the three programs could be run concurrently (not necessarily in parallel) and communications paths could be established between them, then the programs would be both efficient and elegant.
[Figure: three processes P1, P2, P3 connected in sequence from INPUT (cards) to OUTPUT (lines); the processes may execute concurrently]
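The three-program decomposition can be sketched with threads and queues taking the place of the temporary files. This is a hedged illustration under our own naming (the book uses no such library), with the line width shortened to 5 for the demonstration:

```python
import queue
import threading

END = None  # end-of-stream marker

def reader(cards, out):
    # process 1: read cards and emit a stream of characters
    for card in cards:
        for ch in card:
            out.put(ch)
    out.put(END)

def squash(inp, out):
    # process 2: replace each run of 1-9 blanks by a blank and the count
    run = 0
    while True:
        ch = inp.get()
        if ch == " ":
            run += 1
            continue
        if run:
            out.put(" "); out.put(str(run)); run = 0
        out.put(ch)
        if ch is END:
            return

def printer(inp, lines, width=125):
    # process 3: assemble the stream into fixed-width output lines
    buf = []
    while True:
        ch = inp.get()
        if ch is END:
            if buf:
                lines.append("".join(buf))
            return
        buf.append(ch)
        if len(buf) == width:
            lines.append("".join(buf)); buf = []

q1, q2, lines = queue.Queue(), queue.Queue(), []
threads = [threading.Thread(target=reader, args=(["AB  CD", "E F"], q1)),
           threading.Thread(target=squash, args=(q1, q2)),
           threading.Thread(target=printer, args=(q2, lines, 5))]
for t in threads: t.start()
for t in threads: t.join()
print(lines)  # ['AB 2C', 'DE 1F']
```

Each stage sees only a stream of characters, so all the special cases about card and line boundaries are confined to the one stage they concern; the queues supply the communications paths the text asks for.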
1.3 CORRECTNESS OF CONCURRENT PROGRAMS
Concurrent programming is much more difficult than sequential programming because of the difficulty of ensuring that a concurrent program is correct. Consider the sequential sort programs of Figs. 1.1 and 1.3: if they were tested on several sets of input data then we would feel confident that they are correct. Guidelines for testing would be to include a sorted input, a reversed input, an input all of whose elements are identical, and so on. A run-of-the-mill bug (such as an incorrect for-loop limit) would seldom escape detection.
The scenario in Fig. 1.6 illustrates that the concurrent program of Fig. 1.5 is incorrect. However, this program is not a sequential program and other scenarios exist. If the processors assigned to sort are sufficiently rapid then merge may always be working on sorted data. In that case, no amount of testing would detect any problem. One day (perhaps months after the program has been put into production) an improved component in the computer system causes the merge to speed up and then the program gives incorrect answers as demonstrated in Fig. 1.6. Of course the natural reaction is:
"This program worked yesterday so the new component must be at fault."
A scenario is a description of a possible execution sequence of a program and shows how a computer might act out a program. It is usually used to show that a program is incorrect: since the computer may execute the program in a manner that produces the wrong answer, the program cannot be correct.

Conversely, how can we show that the concurrent program in Fig. 1.4 is correct? It no longer makes sense to look for and test paths that can be execution sequences; there may be many such sequences caused by the parallel execution of the algorithm.

Sequential programming has a well-developed proof theory. Assertions are made about the state of the computer (i.e. the values of the variables and the program counter) before and after executing an instruction, and these are then combined into a logical proof. In concurrent programming, this method needs to be modified because the programs can interfere with each other.
The correctness assertions for procedures sort and merge of the previous sections are elementary to state and prove:

sort input assertion: a is an array of integers.
sort output assertion: a is sorted, i.e. a now contains a permutation of the original elements and they are in ascending order.
merge input assertion: the two halves of a are sorted.
merge output assertion: the elements of a have been written in ascending order.
The correctness of the program in Fig. 1.1 is immediate from the correctness of procedure sort. The correctness of Fig. 1.3 is easily obtained by concatenating the correctness proofs of sort and merge. The correctness of Fig. 1.4 needs a new technique: we have to be able to express the fact that the two instances of sort do not interfere with one another. The program in Fig. 1.5 is incorrect though the procedures comprising it are correct; unfortunately, they interact in a manner which makes the program incorrect. The program in Fig. 1.7 is correct but new ideas are needed to be able to reason about synchronization.
1.4 INTERLEAVING
Interleaving is a logical device that makes it possible to analyze the correctness of concurrent programs. Suppose that a concurrent program P consists of two processes P1 and P2. Then we say that P executes any one of the execution sequences that can be obtained by interleaving the execution sequences of the two processes. It is as if some supernatural being were to execute the instructions one at a time, each time flipping a coin to decide whether the next instruction will be from P1 or P2.
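The set of interleavings can be enumerated mechanically. The sketch below is an illustration of ours, not the book's notation: it merges two instruction sequences in every order that preserves each process's internal order, and the count is the binomial coefficient C(n+m, n):

```python
from itertools import combinations

def interleavings(p, q):
    # every merged sequence that keeps p's and q's internal orders
    n, m = len(p), len(q)
    result = []
    for slots in combinations(range(n + m), n):
        seq, pi, qi = [], iter(p), iter(q)
        chosen = set(slots)
        for k in range(n + m):
            seq.append(next(pi) if k in chosen else next(qi))
        result.append("".join(seq))
    return result

print(interleavings("ab", "cd"))
# ['abcd', 'acbd', 'acdb', 'cabd', 'cadb', 'cdab']
```

Two 2-instruction processes already give C(4, 2) = 6 sequences; for realistic process lengths the count grows combinatorially, which is exactly why the text calls the number of execution sequences astronomical.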
We claim that these execution sequences exhaust the possible behaviors of P. Consider any instructions I1 and I2 from P1 and P2, respectively. If I1 and I2 do not access the same memory cell or register then it certainly does not matter if I1 is executed before I2, after I2, or even simultaneously with I2 (if the hardware so allows). Suppose on the other hand that I1 is "Store 1 into memory cell M" and that I2 is "Store 2 into memory cell M". If I1 and I2 are executed simultaneously then the only reasonable assumption is that the result is consistent. That is, cell M will contain either 1 or 2 and the computer does not store another value (such as 3) of its own volition.

If this were not true then it would be impossible to reason about concurrent programs. The result of an individual instruction on any given data cannot depend upon the circumstances of its execution. Only the external behavior of the system may change, depending upon the interaction of the instructions through the common data. In fact, computer hardware is built so that the result of executing an individual instruction is consistent in the way just defined.

Thus if 2 is the result of the simultaneous execution of I1 and I2, then this is the same as saying that I1 occurred before I2 in an interleaving, and conversely if the result is 1.

Interleaving does not make the analysis of concurrent programs simple. The number of possible execution sequences can be astronomical. Nevertheless, interleaved execution sequences are amenable to formal methods and will allow us to demonstrate the correctness of concurrent programs.
1.5 THE ORIGIN OF OPERATING SYSTEMS
Concurrent programming, though generally applicable, grew out of problems associated with operating systems. This section outlines the development of such systems so that the background to the growth of concurrent programming can be appreciated.

It is not often that an obsolete technology reappears. Programmable pocket calculators have resurrected machine language programming: absolute addresses must be used for data and labels. On the other hand, the owner is not constrained to working during the hours that the computer center is open.

While the pocket calculator is a marvel of electronics, machine language programming directly on the computer is slow and difficult. In the 1950s, when computers were few and expensive, there was great concern over the waste caused by this method. If you signed up to sit at the computer console from 0130 to 0200 and you spent 25 minutes looking for a bug, these 25 minutes of computer idle time could not be recovered. Nor was your colleague who signed up for 0200-0230 likely to let you start another run at 0158.

If we analyze what is happening in the terms of the previous sections we
see that the manual procedures that must be performed (mounting tapes, setting up card decks, or changing places at the console) are disjoint from the actual computation and can be performed concurrently with the computer's processing.

The second generation of computers used a supervisor program to batch jobs. A professional computer operator sat at the console. Programmers prepared card decks which were concatenated into "batches" that were fed into the computer once an hour or so. The increase in throughput (a measure of the efficiency of a computer; it is the number of jobs, suitably weighted, that can be run in a given time period) was enormous: the jobs were run one after another with no lost minutes. The programmers, however, lost the ability to dynamically track the progress of their programs since they no longer sat at the computer console. In the event of an error in one job, the computer simply commenced execution of the next job in the batch, leaving the programmer to puzzle out what happened from core dumps. With a turnaround time (the amount of time that elapses between a job being submitted for execution and the results being printed) of hours or days, the task of programming became more difficult even though certain aspects were improved by high-level languages and program libraries.
Despite this improvement in throughput, systems designers had noticed another source of inefficiency not apparent to the human eye. Suppose that a computer can execute one million instructions per second and that it is connected to a card reader which can read 300 cards per minute (= one card in 1/5 second). Then from the time the read instruction is issued until the time the card has been read, 200,000 instructions could have been executed. A program to read a deck of cards and print the average of the numbers punched in the cards will spend over 99% of its time doing nothing even though 5 cards per second seems very fast.
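The arithmetic behind that 99% figure can be checked directly. This is a sketch; the per-card processing cost of a few hundred instructions is our own assumption for illustration:

```python
instr_per_second = 1_000_000
cards_per_minute = 300

seconds_per_card = 60 / cards_per_minute          # 0.2 s of waiting per card
lost_per_card = instr_per_second * seconds_per_card
print(int(lost_per_card))                         # 200000

useful_per_card = 500   # assumed instructions to add one number to a total
busy_fraction = useful_per_card / lost_per_card
print(busy_fraction < 0.01)                       # True: the CPU is >99% idle
```

Even if the averaging program needed ten times as many instructions per card, the processor would still be idle more than 97% of the time.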
The first solution to this problem was spooling. The I/O speed of a magnetic tape is much greater than that of the card reader and the line printer that are the interface between the computer and the programmer. We can decompose the operation of the computer into three processes: a process to read cards to tape; a process to execute the programs on the tape and write the results onto a second tape; and a process to print the information from the second tape. Since these processes are disjoint (except for the exchange of the tapes after processing a batch), the throughput can be greatly increased by running each process on a separate computer. Since very simple computers can be used to transfer information to and from the magnetic tape, the increase in cost is not very great compared to the savings achieved by more efficient use of the main computer.

Later generations of computer systems have attacked these problems by switching the computer among several computations whose programs and data are held simultaneously in memory. This is known as multiprogramming. While I/O is in progress for program P1, the computer will execute
several thousand instructions of program P2 and then return to process the data obtained for P1. Similarly, while one programmer sitting at the terminal of a time-sharing system† is thinking, the computer will switch itself to execute the program requested by a second programmer. In fact, modern computer systems are so powerful that they can switch themselves among dozens or even hundreds of I/O devices and terminals. Even a minicomputer can deal with a dozen terminals.

The importance of the concept of interleaved computations mentioned in the previous section has its roots in these multiprogrammed systems. Rather than attempt to deal with the global behavior of the switched computer, we will consider the actual processor to be merely a means of interleaving the computations of several processes. Even though multiprocessor systems (systems with more than one computer working simultaneously) are becoming more common, the interleaved computation model is still appropriate.

The sophisticated software systems that are responsible for multiprogramming are called operating systems. The term operating system is often used to cover all manufacturer-provided software such as I/O programs and compilers and not just the software responsible for the multiprogramming.

While the original concern of operating system designers was to improve throughput, it soon turned out that the throughput was affected by numerous system crashes when the system stopped functioning as it was supposed to, and extensive recovery and restart measures delayed execution of jobs. These defects in the operating systems were caused by our inadequate understanding of how to execute several programs simultaneously, and new design and programming techniques are needed to prevent them.
1.6 OPERATING SYSTEMS AND CONCURRENT PROGRAMMING
If you could sense the operation of a computer that is switching itself every few milliseconds among dozens of tasks, you would certainly agree that the computer seems to be performing these tasks simultaneously even though we know that the computer is interleaving the computations of the various tasks. I now argue that it is more than a useful fiction to assume that the computer is in fact performing its tasks concurrently. To see why this is so, let us consider task switching in greater detail. Most computers use interrupts

† A time-sharing system is a computer system that allows many programmers to work simultaneously at terminals. Each programmer may work under the illusion that the computer is working for him alone (though the computer may seem to be working slowly if too many terminals are connected).
for this purpose. A typical scenario for a task switch by interrupts is as follows. Program P1 makes a read request and then has its execution suspended. The CPU may now execute program P2. When the read requested by P1 has been completed, the I/O device will interrupt the execution of P2 to allow the operating system to record the completion of the read. Now the execution of either P1 or P2 may be resumed.

The interrupts occur asynchronously during the execution of programs by the CPU. By this is meant that there is no way of predicting or coordinating the occurrence of the interrupt with the execution of any arbitrary instruction by the CPU. For example, if the operator who is mounting a magnetic tape happens to sneeze, it may delay the "tape ready" signal by 8.254387 seconds. However, if he is "slow" with his handkerchief, the delay might be 8.254709 seconds. Insignificant as that difference may seem, it is sufficient for the CPU to execute dozens of instructions. Thus for all practical purposes it makes no sense to ask: "What is the program that the computer is executing?" The computer is executing any one of a vast number of execution sequences that may be obtained by arbitrarily interleaving the execution of the instructions of a number of computer programs and I/O device handlers.
This reasoning justifies the abstraction that an operating system consists of many processes executing concurrently. The use of the term process rather than program emphasizes the fact that we need not differentiate between ordinary programs and external devices such as terminals. They are all independent processes that may, however, need to communicate with each other.

The abstraction will try to ignore as many details of the actual application as possible. For example, we will study the producer-consumer problem which is an abstraction both of a program producing data for consumption by a printer and of a card reader producing data for consumption by a program. The synchronization and communication requirements are the same for both problems even though the details of programming an input routine are rather different from the details of an output routine. Even as new I/O devices are invented, the input and output routines can be designed within the framework of the general producer-consumer problem.

On the other hand, we assume that each process is a sequential process. It is always possible to refine the description of a system until it is given in terms of sequential processes.

The concurrent programming paradigm is applicable to a wide range of systems, not just to the large multiprogramming operating systems that gave rise to this viewpoint. Moreover, every computer (except perhaps a calculator or the simplest microcomputer) is executing programs that can be considered to be interleaved concurrent processes. Minicomputers are supplied with small multiprogramming systems. If not, they may be embedded in
real-time systems† where they are expected to concurrently absorb and process dozens of different asynchronous external signals and operator commands. Finally, networks of interconnected computers are becoming common. In this case true parallel processing is occurring. Another term used is distributed processing to emphasize that the connected computers may be physically separated. While the abstract concurrency that models switched systems is now well understood, the behavior of distributed systems is an area of current research.
1.7 AN OVERVIEW OF THE BOOK
Within the overall context of writing correct software, this book treats the single, but extremely important, technical point of synchronization and communication in concurrent programming. The problems are very subtle; ignoring the details can give rise to spectacular bugs. In Chapter 2 we shall define the concurrent programming abstraction and the arguments that justify each point in the definition. The abstraction is sufficiently general that it can be applied without difficulty to real systems. On the other hand it is sufficiently simple to allow a precise specification of both good and bad behavior of these programs.
Formal logics exist which can formulate specifications and prove properties of concurrent programs in this abstraction, though we will limit ourselves to informal or at most semi-formal discussions. The fact that the discussion is informal must not be construed as meaning that the discussion is imprecise. A mathematical argument is considered to be precise even if it is not formalized in logic and set theory.
The basic concurrent programming problem is that of mutual exclusion. Several processes compete for the use of a certain resource such as a tape drive, but the nature of the resource requires that only one process at a time actually access the resource. In other words, the use of the resource by one process excludes other processes from using the resource. Chapter 3 presents a series of attempts to solve this problem, culminating in the solution known as Dekker's algorithm. The unsuccessful attempts will each point out a possible "bad" behavior of a concurrent program and will highlight the differences between concurrent and sequential programs.
Dekker's algorithm is itself too complex to serve as a model for more complex programs. Instead, synchronization primitives are introduced. Just as a disk file can be copied onto tape by a single control language command
† Whereas a time-sharing system gives the user the ability to use all the resources of a computer, the term real-time system is usually restricted to systems that are required to respond to specific pre-defined requests from a user or an external sensor. Examples would be air-traffic control systems and hospital monitoring systems.
or a file can be read by writing read in a high level language, so we can define programming language constructs for synchronization by their semantic definition (what they are supposed to do) and not by their implementation. We shall indicate in general terms how these primitives can be implemented, but the details vary so much from system to system that to fully describe them would defeat our purpose of studying an abstraction. Hopefully, it should be possible for a "casual" systems programmer to write concurrent programs without knowing how the primitives are implemented. A model implementation is described in the Appendix.
Chapter 4 commences the study of high level primitives with E. W. Dijkstra's semaphore. The semaphore has proved extraordinarily successful as the basic synchronization primitive in terms of which all others can be defined. The semaphore has become the standard of comparison. It is sufficiently powerful that interesting problems have elegant solutions by semaphores, and it is sufficiently elementary that it can be successfully studied by formal methods. The chapter is based on the producer-consumer problem mentioned above; the mutual exclusion problem can be trivially solved by semaphores.
Most operating systems have been based on monolithic monitors. A central executive, supervisor or kernel program is given sole authority over synchronization. Monitors, a generalization of this concept formalized by Hoare, are the subject of Chapter 5. The monitor is a powerful conceptual notion that aids in the development of well structured, reliable programs. The problem studied in this chapter is the problem of the readers and the writers. This is a variant of the mutual exclusion problem in which there are two classes of processes: writers, which need exclusive access to a resource, and readers, which need not exclude one another (though as a class they must exclude all writers).
The advent of distributed systems has posed new problems for concurrent programming. C. A. R. Hoare has proposed a method of synchronization by communication (also known as synchronization by rendezvous) appropriate for this type of system. The designers of the Ada programming language have chosen to incorporate in the language a variant of Hoare's system. Anticipating the future importance of the Ada language, Chapter 6 studies the Ada rendezvous.
A classic problem in concurrent programming is that of the Dining Philosophers. Though the problem is of greater entertainment value than practical value, it is sufficiently difficult to afford a vehicle for the comparison of synchronization primitives and a standing challenge to proposers of new systems. Chapter 7 reviews the various primitives studied by examining solutions to the problem of the Dining Philosophers.
Programming cannot be learned without practice, and concurrent programming is no exception. If you are fortunate enough to have easy access to
a minicomputer or to a sophisticated simulation program, there may be no difficulty in practicing these new concepts. If not, the Appendix describes in full detail an extremely simple simulator of concurrency that can be used for class exercise. In any case, the Appendix can serve as an introduction to the implementation of concurrency.
The book ends with an annotated bibliography suggesting further study of concurrent programming.
1.8 PROGRAM NOTATION
The examples in the text will be written in a restricted subset of Pascal-S, which is itself a highly restricted subset of Pascal. This subset must of course be augmented by constructs for concurrent programming. It is intended that the examples be legible to any programmer with experience in Pascal, Ada, C, Algol, or PL/I.
The implementation kit in the Appendix describes an interpreter for this language that will execute the examples and that can be used to program the exercises. The language in the kit contains more Pascal language features than are used in the text of the book, and thus users of the kit are assumed to be able to program in sequential Pascal. These extra features are necessary in order to use the kit to solve the exercises, although the exercises themselves could be programmed in other languages that provide facilities for concurrent programming.
The examples in the chapter on monitors are standard and can be adapted to the many systems that provide the monitor facility such as Concurrent Pascal, Pascal-Plus, or CSP/k. The examples in the chapter on the Ada rendezvous are executable in Ada.

We now present a sketch of the language that should be sufficient to enable programmers unfamiliar with Pascal to understand the examples.

1. Comments are inserted between (* and *).
2. The first line in a program should be

   program name;

3. Symbolic names for constants may be declared by the word const followed by the constant itself:

   const n = 40;

4. All variables in each procedure and in the main program must be declared by the word var followed by the names of the variables and a type:

   var i, j, temp: integer;
       found: boolean;
   The available types are: integer, boolean (with constants true and false) and arrays:

   var a: array[lowindex..highindex] of integer;
5. Following the declaration of the variables, procedures and functions may be declared: procedure name(formal parameters); and function name(formal parameters): returntype;. The formal parameter definition has the same form as that of a variable list:

   procedure sort(low, high: integer);
   function last(index: integer): boolean;

6. The body of the main program or of procedures is a sequence of statements separated by semicolons between begin and end. The main program body is terminated by a period and the procedure bodies by semicolons. The usual rules on nested scopes apply.
7. The statements are:

   assignment-statement
   if boolean-expression then statement
   if boolean-expression then statement else statement
   for index-variable := lowindex to highindex do statement
   while boolean-expression do statement
   repeat sequence-of-statements until boolean-expression

   The syntactic difference between while and repeat is that while takes a single statement and repeat takes a sequence of statements (separated by semicolons). The semantic difference is that the while tests before the loop is done and repeat tests afterwards. Thus repeat executes its loop at least once.
8. A sequence of statements may be substituted for "statement" in the above forms by enclosing the sequence of statements in the brackets begin ... end to form a single compound statement:

   if a[j] < a[i] then
     begin
       temp := a[j];
       a[j] := a[i];
       a[i] := temp
     end

   In detail this is read: if the boolean expression (a[j] < a[i]) has the value true, then execute the compound statement, which is a sequence of three assignment statements. If the expression is false, then the (compound) statement is not executed and the execution continues with the next statement.
9. Assignment statements are written variable := expression. The variable may be a simple variable or an element of an array: a[i]. The type (integer or boolean) of the expression must match that of the variable. Integer expressions are composed of integer variables and constants using the operators: +, -, * , div (integer divide with truncation) and mod. Boolean expressions may be formed from relations between integer expressions: = , <> (not equal), <, >, <= (less than or equal), >= (greater than or equal). The boolean operators and, or and not may be used to form compound boolean expressions.
10. For those who know Pascal we list here the additional features that are defined in the language of the implementation kit, some of which will be necessary if you plan to write any programs using the kit.

    (a) Type declarations. Since there are no scalar, subrange or record types, this is mostly useful for array types:

        type sortarray = array[1..n] of integer;
        var a: sortarray;

    (b) Character constants and variables of type char.

    (c) Multidimensional arrays (arrays of arrays).

    (d) A parameter may be passed by reference rather than by value by prefixing the formal parameter by var.

    (e) Recursive functions and procedures.

    (f) I/O may be performed only on the standard textfiles input and output. To ensure that you do not forget this restriction, the declaration of external files in the program card has been removed. read, readln, write, writeln, eoln, eof (all without a file parameter) function as in Pascal. Only the default field widths may be used in a write, which will, however, accept a string as a field to be printed:

        writeln('the answer is ', n);
1.9 EXERCISES†
1.1 Write a two-process concurrent program to find the mean of n numbers.

1.2 Write a three-process concurrent program to multiply 3×3 matrices.

1.3 Each process of the matrix multiply program executes three multiplications and two additions for each of three rows, or altogether 15 instructions. How many execution sequences of the concurrent program may be obtained by interleaving the executions of the three processes?

† Slightly harder exercises are marked throughout the book with an asterisk (*).
1.4 Perform a similar analysis for sortprogram. You will have to make some assumptions on the number of interchanges that will be done.

1.5 Test the concurrent sortprogram of Fig. 1.3.

1.6 Test the concurrent sortprogram of Fig. 1.4, which has the merge defined as a third process. Run the program several times with exactly the same data.

1.7 Run the program in Fig. 1.8 several times. Can you explain the results?
program increment;
const m = 20;
var n: integer;
procedure incr;
var i: integer;
begin
  for i := 1 to m do n := n + 1
end;
begin (* main program *)
  n := 0;
  cobegin
    incr; incr
  coend;
  writeln('the sum is ', n)
end.

Fig. 1.8.
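The behavior that exercise 1.7 asks about can also be observed outside the implementation kit. The following Python sketch (our own illustration, not part of the book's kit) mimics Fig. 1.8 with two threads; an artificial pause between loading and storing n widens the window in which an update can be lost:

```python
import threading
import time

m = 20
n = 0  # the shared counter of Fig. 1.8

def incr():
    global n
    for _ in range(m):
        temp = n           # Load n
        time.sleep(0.001)  # widen the window between load and store
        n = temp + 1       # Store n: may overwrite the other thread's update

t1 = threading.Thread(target=incr)
t2 = threading.Thread(target=incr)
t1.start(); t2.start()
t1.join(); t2.join()
print('the sum is', n)     # rarely the "expected" 40: increments are lost
```

The sum varies from run to run and is usually well below 40. Section 2.5 explains the underlying Load/Add/Store interleaving.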
2
THE CONCURRENT PROGRAMMING ABSTRACTION
2.1 INTRODUCTION
Concurrent programming is not the study of operating systems or real-time systems, but of abstract programming problems posed under certain rules. Concurrent programming was motivated by the problems of constructing operating systems, and its examples are abstract versions of such problems. Most importantly, the rules of concurrent programming are satisfied in many systems, and thus its techniques can be used in real systems. Components of a system which are not amenable to concurrent programming techniques should be singled out for extremely careful design and implementation.

Chapter 1 gave the definition of a concurrent program. It consists of several sequential processes whose execution sequences are interleaved. The sequential programs are not totally independent; if they were, there would be nothing to study. They must communicate with each other in order to synchronize or to exchange data.
The first means of communication that we shall study is the common memory. This is appropriate for the pseudo-parallel switched computers where all processes are running on the same processor and using the same physical memory. It is also used on some truly parallel systems such as the CDC Cyber computers where, even though one CPU and ten PPs (peripheral processors) are simultaneously executing separate programs, synchronization is accomplished by having the PPs read and write the CPU's memory. In our abstraction, common memory will be represented simply by global variables accessible to all processes.
Common memory can also be used to hold access-restricted procedures. Access to these procedures is, in effect, allocated to a process. This is the way most third generation operating systems were implemented. The "system" programs can only be called by special instructions which ensure that only one process at a time is executing a system program.
With the introduction of distributed computing it is no longer valid to assume that a common central memory exists. Chapter 6 discusses concurrent programming by means of sending and receiving signals instead of reading and writing a common variable or executing a common procedure. Synchronization by message-passing has been used on several experimental systems for single-processor computers, but this approach has not been widely accepted because of the possible inefficiency of message-passing compared with simpler systems. Of course, distributed systems have no choice.
2.2 MUTUAL EXCLUSION
Mutual exclusion is one of the two most important problems in concurrent programming because it is the abstraction of many synchronization problems. We say that activity A1 of process P1 and activity A2 of process P2 must exclude each other if the execution of A1 may not overlap the execution of A2. If P1 and P2 simultaneously attempt to execute their respective activities, then we must ensure that only one of them succeeds. The losing process must block; that is, it must not proceed until the winning process completes the execution of its activity.
The most common example of the need for mutual exclusion in real systems is resource allocation. Obviously, two tapes cannot be mounted simultaneously on the same tape drive. Some provision must be made for deciding which process will be allocated a free drive, and some provision must be made to block processes which request a drive when none is free. There is an obvious solution: run only one job at a time. But this defeats one of the main aims of concurrent programming: parallel execution of several processes.
Meaningful concurrency is possible only if the processes are loosely connected. The loose connection will manifest itself by the need for short and occasional communication. The abstract mutual exclusion problem will be expressed:

remainder
pre-protocol
critical section
post-protocol

remainder will be assumed to represent some significant processing. Occasionally, i.e. after the completion of remainder, the process needs to enter a short critical section. It will execute certain sequences of instructions
called protocols before and possibly after the critical section. These protocols will ensure that the critical section is in fact executed so as to exclude all other critical sections. Of course, just as the critical section should be short relative to the main program in order to benefit from concurrency, the protocols must also be relatively short. The protocols represent the overhead paid for concurrency. Hopefully, if the critical sections and the protocols are sufficiently short, then the significant processing abstracted as remainder can be overlapped, thus justifying the design of the multiprogramming system.
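As an informal illustration (ours, not the book's), the remainder / pre-protocol / critical section / post-protocol skeleton can be transcribed into Python threads, with a library lock standing in for the protocols that Chapter 3 constructs from first principles:

```python
import threading

lock = threading.Lock()  # stands in for the pre- and post-protocols
shared = []              # a resource that requires mutual exclusion

def process(pid, rounds):
    for _ in range(rounds):
        local = pid * pid     # remainder: significant independent work
        lock.acquire()        # pre-protocol
        shared.append(local)  # critical section (kept short)
        lock.release()        # post-protocol

threads = [threading.Thread(target=process, args=(i, 100)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(shared))  # 200: with the protocols in place, no update is lost
```

Only the line inside acquire/release touches the shared resource; the remainder runs unprotected, which is what makes the concurrency worthwhile.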
There is another, more important, reason for requiring loose connection among concurrent processes, and that is to ensure reliability. We want to be assured that if there is a bug in one of the processes, then it will not propagate itself into a system "crash". It should also be possible to gracefully degrade the performance of a system if an isolated device should fail ("fail-soft"). It would be absurd to have a system crash just because one tape drive became faulty.

The abstract requirement will be that, if a process abnormally terminates outside the critical section, then no other process should be affected. (For this purpose the protocols are considered to be part of the critical section.) Since the critical section is where the communication is taking place, it is not reasonable to require the same of the critical sections. We might use the following metaphor. If a runner in a relay race fell after he has passed the baton, then the race should not be affected. It is unreasonable to hope that the race is unaffected if the fall occurred at the critical moments during the baton exchange.

This restriction is not unreasonable even in practice. Critical sections such as disk I/O will often be executed by common system routines or by compiler-supplied routines which have been written by a competent systems programmer. The probability of a software error in such a routine should be much smaller than in a run-of-the-mill program.
2.3 CORRECTNESS
What does it mean for concurrent programs to be correct? An ordinary program is correct if it halts and prints the "right" answer. In general, you will know a "right" answer if you see one. This is also true of some concurrent programs such as sortprogram.

On the other hand, the single most distinguishing feature of an operating system or real-time system is that it must never halt. The only way to halt a typical operating system is to push the start button on the computer panel. An operating system prints nothing of its own (except some non-essential logging and accounting data). Thus when studying operating systems, we
will respond to the protocol just that much faster. A synchronous system can generally mix only modules whose speeds are multiples of each other.

In the abstraction, certain assumptions will be made to avoid meaningless pathologies. Since infinite delay is indistinguishable from a halt, we will assume (globally) that, if there is at least one process ready to run, then some (unspecified) process is allowed to run within a finite time. We also assume (locally) that if a process is allowed to run in its critical section then it will complete the execution of the critical section in a finite period of time.
On the other hand, we allow ourselves to use the adversary approach in checking for possible bugs. A concurrent program suffers from deadlock if it is possible to devise a scenario for deadlock under the sole finiteness assumptions of the previous paragraph. If someone offers you a concurrent program, you can tailor your counter-scenario specifically to the given program; you are an "adversary" allowed to plot against the program.
2.5 IMPLEMENTING PRIMITIVE INSTRUCTIONS
Our solutions to the mutual exclusion problem will always "cheat" by making use of mutual exclusion provided on a lower level: the hardware level. Just as the user of a high level language need not know how a compiler works as long as he is provided with an accurate description of the syntax and semantics of the language, so we will not concern ourselves with how the hardware is implemented as long as we are supplied with an accurate description of the syntax and semantics of the architecture. Presumably the same thing happens at lower levels: the computer logic designer need not know exactly how an integrated circuit is implemented; the integrated circuit designer need only concern himself with the electronic properties of semiconductors and need not know all the details of the quantum physics that explain these properties.

In common memory systems there is an arbiter which provides for mutual exclusion in the access to an individual memory word. The word access is a generic term for read and write or, as they are usually called, Load and Store, corresponding to the assembler instructions for these actions. The arbiter ensures that in case of overlap among accesses, mutual exclusion is obtained by executing the accesses one after the other. The order of the accesses is not guaranteed to the programmer. On the other hand, the consistency of the access is ensured as described in Chapter 1.
Note that the access to a single word is an action that may not be apparent in a high level language. Suppose that n is a global variable that is initially zero and is used as a counter by several processes executing the
instruction n := n + 1. The compiler compiles such a statement into the three assembler instructions:

Load n
Add 1
Store n
Consider now the following scenario. The value of n is 6. P1 executes Load n and then P2 also executes Load n. P1 increments the value of n in its internal register to obtain 7. Similarly, P2 obtains the value 7 in its internal register. Finally, the two processes execute the Store instruction in succession and the value 7 is stored twice. Hence the final value of n is 7. That is, we have incremented the value 6 twice and have obtained 7.
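The scenario can be replayed deterministically. The following Python sketch (ours; the register dictionary and process names are only illustrative) simulates the three assembler instructions and forces exactly the interleaving described above:

```python
# Simulate the compiled expansion of n := n + 1 for two processes,
# replaying the interleaving from the text step by step.
n = 6                      # the shared memory word
reg = {'P1': 0, 'P2': 0}   # one internal register per process

def load(p):
    reg[p] = n             # Load n

def add1(p):
    reg[p] = reg[p] + 1    # Add 1

def store(p):
    global n
    n = reg[p]             # Store n

load('P1')                 # P1 reads 6
load('P2')                 # P2 also reads 6
add1('P1'); add1('P2')     # both registers now hold 7
store('P1'); store('P2')   # 7 is stored twice
print(n)  # 7: the value 6 was incremented twice yet became only 7
```

Any interleaving in which both Loads precede both Stores loses one of the two increments.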
Common memory arbiters are found both on multiprocessor systems and on single processor systems whose I/O equipment is connected for direct memory access (DMA). Normally an I/O device would transfer each data word to the CPU for the CPU to store in the memory. However, this imposes an unacceptable overhead on the CPU. Instead, the I/O device is given the address of a block of memory. It only interrupts the CPU when the transfer of the whole block is completed. There is an arbiter to ensure that only one device (or the CPU) has access to the memory at any one time.
In this case we say that DMA is being implemented by cycle stealing. The memory is assumed to be driven at its maximum access speed, say one access per microsecond. Each such access is also called a memory cycle. To implement DMA the CPU is normally allowed to compute and access memory. When a data word arrives from an I/O device, the right to access memory is usurped from the CPU and the device is allowed to "steal" a memory cycle. There is no real overhead. The memory cycle is needed anyway to store the word, and with cycle stealing the CPU need not concern itself with individual words.
The computer hardware will be trusted to function properly. We only concern ourselves with the correctness of the system software. This is not always true of course, and in practice one must be alert to hardware malfunction. One of the most spectacular bugs known to the author was caused by a hardware fault that resulted in mixing two memory addresses instead of interleaving them. The net result was a store of data in the mixed-up address, and the presence of foreign data in these memory addresses was never explained by software specialists. Fortunately this sort of thing rarely happens.
Another way of using a common memory system is to define a primitive procedure call that is guaranteed to exclude other calls of the same procedure. That is, if two processes try to call the same procedure, only one will succeed and the losing process will have to wait. As usual, it is not specified in which order simultaneous requests are granted.
In multiprogramming systems, the interrupt facility is used. A critical procedure is written as an interrupt routine to be executed when a process causes an interrupt. A hardware flag ensures mutual exclusion by inhibiting the interrupt, placing the computer in an uninterruptable state. Upon completion of the interrupt routine, the flag is reset and another process may now cause an interrupt.

Another method of implementing mutual exclusion is polling. Each process is interrogated in turn to see if it requires some service that must be done under mutual exclusion.

We shall allow ourselves the luxury of defining primitive instructions and, beyond the sketch in this section, we shall not worry about the implementation. With some experience in computer architecture and data structures it should not be too difficult to implement any of these primitives. However, the details differ widely from computer to computer. A study of the implementation kit may help. In the bibliography we give references to several descriptions of concurrent programming implementations. In addition, the serious student should study the architecture of whatever computer and operating system he is using.
2.6 CONCURRENT PROGRAMMING IN PASCAL-S
Sequential Pascal (and the subset used in this book) must be augmented by concurrent programming constructs. The concurrent processes are written as Pascal procedures and their identity as concurrent processes is established by their appearance in the cobegin ... coend statement:

cobegin p1; p2; ...; pn coend

A request for concurrent execution of several processes may appear only in the main program and may not be nested. The semantics of the cobegin ... coend statement are specified as follows:

The cobegin statement is a signal to the system that the enclosed procedures are not to be executed but are to be marked for concurrent execution. When the coend statement is reached, the execution of the main program is suspended and the concurrent processes are executed. The interleaving of the executions of these processes is not predictable and may change from one run to another. When all concurrent processes have terminated, then the main program is resumed at the statement following the coend.
An additional notational device that we make use of is the statement repeat ... forever, which is exactly equivalent in its semantic content to repeat ... until false. However, the latter is rather obscure and we prefer the more transparent notation.
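For readers who want to experiment outside the implementation kit, the semantics just specified can be approximated with threads. This Python sketch (ours, not part of Pascal-S; the helper name cobegin is our own) suspends the "main program" until every enclosed process has terminated:

```python
import threading

def cobegin(*procs):
    # Mark the procedures for concurrent execution...
    threads = [threading.Thread(target=p) for p in procs]
    for t in threads:
        t.start()
    # ...and suspend the "main program" until all have terminated.
    for t in threads:
        t.join()

results = []

def p1():
    results.append('p1 done')

def p2():
    results.append('p2 done')

cobegin(p1, p2)
# Execution resumes here only after both processes have terminated.
print(sorted(results))  # ['p1 done', 'p2 done']
```

The interleaving of p1 and p2 is up to the scheduler, but the resumption point after cobegin is deterministic.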
6. We shall extend our basic programming language with synchronization primitive instructions. As long as the syntax and semantics of these instructions are clearly defined, we do not concern ourselves with their implementation.
2.8 EXERCISES
2.1 Standing Exercise: Write formal specifications of the programs in this book. For example:

Specification for sortprogram.
Input: A sequence of 40 integers: a = (a1, ..., a40).
Output: A sequence of 40 integers: b = (b1, ..., b40).
Safety property: When the program terminates then (i) b is a permutation of a, and (ii) b is ordered, i.e. for 1 <= i < 40, b(i) <= b(i+1).
Liveness property: The program terminates.
2.2 Standing Exercise: Test the example programs in the text.
are a useful programming technique, but a system of coroutines must be designed as a single integrated process; coroutines are not a substitute for concurrent programs.
3.3 SECOND ATTEMPT

Fig. 3.3.
We try to remedy the previous solution by giving each process its own key to the critical section, so that if one is devoured by a polar bear then the other can still enter its critical section. There is now (Fig. 3.3) an igloo (global variable) identified with each process. It is worth noting that, while in the solution in Fig. 3.2 the variable turn is both read (Load) and written (Store) by both processes, the present solution may be easier to implement because each process reads but does not write the variable identified with the other process.

If P1 (say) wishes to enter its critical section, it crawls into P2's igloo periodically until it notes that c2 is equal to 1, signifying that P2 is currently not in its critical section. Having ascertained that fact, P1 may enter its critical section after duly registering its entrance by chalking a 0 on its blackboard c1. When P1 has finished, it changes the mark on c1 to 1 to notify P2 that the critical section is free.
program secondattempt;
var c1, c2: integer;
procedure p1;
begin
  repeat
    while c2 = 0 do;
    c1 := 0;
    crit1;
    c1 := 1;
    rem1
  forever
end;
procedure p2;
begin
  repeat
    while c1 = 0 do;
    c2 := 0;
    crit2;
    c2 := 1;
    rem2
  forever
end;
begin (* main program *)
  c1 := 1;
  c2 := 1;
  cobegin
    p1; p2
  coend
end.
Fig. 3.4.
This program (Fig. 3.4) does not even satisfy the safety requirem ent of
mutual exclusion. The following scena rio gives a co unter-exam ple w here the
first column describes the interleaving and the next columns record the
values of the variables.
                      c1   c2
    Initially          1    1
    P1 checks c2       1    1
    P2 checks c1       1    1
    P1 sets c1         0    1
    P2 sets c2         0    0
    P1 enters crit1    0    0
    P2 enters crit2    0    0

Since P1 and P2 are simultaneously in their critical sections, the program is
incorrect.
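This kind of counter-example can also be found mechanically. The following sketch (my illustration, not from the text) models each entry protocol of the second attempt as two atomic steps and searches every interleaving for a state in which both processes are inside their critical sections:

```python
def explore():
    """Search all interleavings of the second attempt's entry protocols.

    State is (c1, c2, pc1, pc2), where each pc is 0 (at the while test),
    1 (test passed), or 2 (inside the critical section).  Returns True if
    a state with both processes at pc = 2 is reachable.
    """
    start = (1, 1, 0, 0)
    stack, seen = [start], {start}
    while stack:
        c1, c2, pc1, pc2 = stack.pop()
        if pc1 == 2 and pc2 == 2:
            return True              # both in their critical sections at once
        succs = []
        if pc1 == 0 and c2 != 0:     # P1: while c2 = 0 do;  (test passes)
            succs.append((c1, c2, 1, pc2))
        elif pc1 == 1:               # P1: c1 := 0; enter crit1
            succs.append((0, c2, 2, pc2))
        if pc2 == 0 and c1 != 0:     # P2: while c1 = 0 do;
            succs.append((c1, c2, pc1, 1))
        elif pc2 == 1:               # P2: c2 := 0; enter crit2
            succs.append((c1, 0, pc1, 2))
        for s in succs:
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return False

print(explore())  # True: mutual exclusion can be violated
```

The search finds exactly the scenario tabulated above: both tests pass before either flag is chalked to 0.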
3.4 THIRD ATTEMPT
program thirdattempt;
var c1, c2: integer;
procedure p1;
begin
  repeat
    c1 := 0;
    while c2 = 0 do;
    crit1;
    c1 := 1;
    rem1
  forever
end;
procedure p2;
begin
  repeat
    c2 := 0;
    while c1 = 0 do;
    crit2;
    c2 := 1;
    rem2
  forever
end;
begin (* main program *)
  c1 := 1;
  c2 := 1;
  cobegin
    p1; p2
  coend
end.

Fig. 3.5.
Analyzing the failure of the second attempt, we note that, once P1 has
ascertained that P2 is not in its critical section, P1 is going to charge right into
its critical section. Thus, the instant that P1 has passed the while statement, P1
is in effect in its critical section. This contradicts our intention that c1 = 0
should indicate that P1 is in its critical section, because there may be an
arbitrarily long wait between the while statement and the assignment
statement.

The third attempt (Fig. 3.5) corrects this by advancing the assignment
statement so that c1 = 0 will indicate that P1 is in its critical section even
before it checks c2. Hence P1 is in its critical section the instant that the while
has been successfully passed.
Unfortunately, this program easily leads to system deadlock, as seen in
the following scenario:

                      c1   c2
    Initially          1    1
    P1 sets c1         0    1
    P2 sets c2         0    0
    P1 checks c2       0    0
    P2 checks c1       0    0

The continual checking of the variables can be continued indefinitely and
cannot be considered progress. Thus the program is hopelessly deadlocked.
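The same state-space view shows that this is a true deadlock. As a small sketch (mine, not from the text): starting from the state reached by the scenario above (c1 = c2 = 0, both processes at their while tests), no continuation ever lets either process pass its test:

```python
def hopelessly_deadlocked():
    """Explore all continuations of the third attempt from the state
    c1 = c2 = 0 with both processes at their busy-wait tests.
    Each pc is 0 (at the test) or 1 (in the critical section)."""
    start = (0, 0, 0, 0)             # (c1, c2, pc1, pc2)
    stack, seen = [start], {start}
    while stack:
        c1, c2, pc1, pc2 = stack.pop()
        if pc1 == 1 or pc2 == 1:
            return False             # some process got in after all
        succs = []
        if pc1 == 0 and c2 != 0:     # P1's test: while c2 = 0 do;
            succs.append((c1, c2, 1, pc2))
        if pc2 == 0 and c1 != 0:     # P2's test: while c1 = 0 do;
            succs.append((c1, c2, pc1, 1))
        for s in succs:
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return True                      # no continuation enters a critical section

print(hopelessly_deadlocked())   # True
```

Neither loop body changes c1 or c2, so the failed tests are stable and every future execution sequence remains deadlocked.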
Even though this program is unacceptable because of the deadlock, it is
instructive to prove that it satisfies the mutual exclusion property. By sym-
metry it is sufficient to show that: (P1 in crit1) implies (P2 is not in crit2).

1. (When P1 entered crit1) then (c2 was not 0).
   This follows from the structure of the program, namely the test
   on c2 by P1.
2. (c2 is not 0) implies (P2 is not in crit2).
   crit2 is bracketed between assignments to c2 which ensure that
   this statement is always true.
3. (When P1 entered crit1) then (P2 was not in crit2).
   This is a logical consequence of (1) and (2).
4. (P1 in crit1) implies (c1 is 0).
   crit1 is bracketed between assignments to c1.
5. (c1 is 0) implies (P2 does not enter crit2).
   The test will not allow P2 through.
6. (P1 in crit1) implies (P2 does not enter crit2).
   A logical consequence of (4) and (5).
7. As long as (P1 is in crit1), (P2 will never enter crit2).
   This follows from (6). Since (6) refers to an arbitrary instant of
   time, then as long as its antecedent (P1 in crit1) remains true, so
   will its consequent (P2 does not enter crit2).
8. (P1 in crit1) implies (P2 is not in crit2).
   From (3) and (7).
Note that the proof has the simple structure of a deduction in the
propositional calculus, except for the need to express and deduce time-
related properties such as "when", "as long as", etc. There is a formal logic
called temporal logic that can express these properties and can be used to
formally prove properties of concurrent programs. For example, the reason-
ing in this proof can be formalized as an induction on the time that has passed
since P1 entered crit1. We are trying to prove that mutual exclusion is never
violated: at each step P2 does not enter crit2, so that upon the conclusion of
the current instruction, mutual exclusion will still not be violated.
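In standard temporal-logic notation (my transcription, not the book's), the safety property just proved is an invariant, while the deadlock scenario above refutes the corresponding liveness property:

```latex
% Safety (holds for the third attempt): it is never the case that both
% processes are in their critical sections.
\Box\,\neg(P_1 \text{ in } \mathit{crit1} \;\land\; P_2 \text{ in } \mathit{crit2})

% Liveness (fails here): a process that wants to enter eventually does.
\Box\,(P_1 \text{ wishes to enter} \;\rightarrow\; \Diamond\, P_1 \text{ in } \mathit{crit1})
```

Here the box means "at every instant from now on" and the diamond means "at some instant from now on".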
3.5 FOURTH ATTEMPT
program fourthattempt;
var c1, c2: integer;
procedure p1;
begin
  repeat
    c1 := 0;
    while c2 = 0 do
      begin
        c1 := 1;
        (* do nothing for a few moments *)
        c1 := 0
      end;
    crit1;
    c1 := 1;
    rem1
  forever
end;
procedure p2;
begin
  repeat
    c2 := 0;
    while c1 = 0 do
      begin
        c2 := 1;
        (* do nothing for a few moments *)
        c2 := 0
      end;
    crit2;
    c2 := 1;
    rem2
  forever
end;
begin (* main program *)
  c1 := 1;
  c2 := 1;
  cobegin
    p1; p2
  coend
end.

Fig. 3.6.
In the previous solution, when P1 chalks up 0 on c1 to indicate its
intention to enter its critical section, it also turns out that it is insisting on its
right to enter the critical section. It is true that setting c1 before checking c2
prevents the violation of mutual exclusion, but if P2 is not ready to yield then
P1 should yield.

In the next attempt (Fig. 3.6) we correct this stubborn behavior by
having a process temporarily relinquish its intention to enter its critical
section, to give the other process a chance to do so. P1 enters its igloo and
chalks up a 0. If, upon checking P2's igloo, P1 finds a 0 there too, it chival-
rously returns to its igloo to erase the 0. After a few laps around the igloo it
restores the signal c1 = 0 and tries again. The comment is there simply to
remind you that, since arbitrary interleaving is permissible, the sequence of
two assignments to the same variable is not meaningless.
First note that the previous proof of mutual exclusion holds here. From
the above discussion, it should now be clear that there is such a thing as too
much chivalry. If both processes continue yielding then neither will enter the
critical section. The scenario is as follows:

                      c1   c2
    Initially          1    1
    P1 sets c1         0    1
    P2 sets c2         0    0
    P1 checks c2       0    0
    P2 checks c1       0    0
    P1 sets c1         1    0
    P2 sets c2         1    1
    P1 sets c1         0    1
    P2 sets c2         0    0

It is clear that this could be indefinitely extended and that liveness does not
hold, because neither process will ever enter its critical section. However, it is
extremely unlikely ever to occur. Nevertheless we are forced to reject this
solution. The main objection here is not so much that neither process will
ever enter the critical section (it is unlikely that perfect synchronization
continues indefinitely) but that we have no way of giving an a priori bound
on the number of iterations that the loops will execute before they are
passed. Thus we have no way of guaranteeing the performance of such a
system.
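The conspiracy is a cycle: each round of the scenario returns the shared variables to an earlier state, so the round can repeat forever. This little replay (my illustration, not from the text) runs the round three times and checks that the state recurs:

```python
def conspiracy_rounds(rounds):
    """Replay the after-you-after-you scenario of the fourth attempt.
    Each round: both processes chalk up 0, each sees the other's 0 and
    yields by restoring 1, and neither reaches its critical section."""
    c1, c2 = 1, 1
    states = []
    for _ in range(rounds):
        c1 = 0           # P1 sets c1: wants the critical section
        c2 = 0           # P2 sets c2: wants it too
        # P1 checks c2 (= 0) and P2 checks c1 (= 0): both decide to yield
        c1 = 1           # P1 yields
        c2 = 1           # P2 yields
        states.append((c1, c2))
    return states

print(conspiracy_rounds(3))   # [(1, 1), (1, 1), (1, 1)]: the state recurs
```

Because the state at the end of every round equals the initial state, perfect alternation can continue without bound, which is exactly why no a priori bound on the loop iterations exists.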
Should this bug be classified as deadlock or lockout? On the one hand,
both processes are looping on a protocol, which is certainly not useful
computation, and the situation is similar to the previous attempt. However,
we prefer to call this lockout to emphasize the following distinction. In the
previous attempt the situation is hopeless: from the instant that the program
is deadlocked, all future execution sequences remain deadlocked. In this
case, however, the slightest aberration of the scenario will free one of the
processes, and in practice this will eventually happen. The key notion here is
the conspiracy between the processes, not the hopelessness of the situation.
It is only because we wish to be able to guarantee a worst-case behavior
that we reject the current attempt.
3.6 DEKKER'S ALGORITHM
program Dekker;
var turn: integer;
    c1, c2: integer;
procedure p1;
begin
  repeat
    c1 := 0;
    while c2 = 0 do
      if turn = 2 then
        begin
          c1 := 1;
          while turn = 2 do;
          c1 := 0
        end;
    crit1;
    turn := 2;
    c1 := 1;
    rem1
  forever
end;
procedure p2;
begin
  repeat
    c2 := 0;
    while c1 = 0 do
      if turn = 1 then
        begin
          c2 := 1;
          while turn = 1 do;
          c2 := 0
        end;
    crit2;
    turn := 1;
    c2 := 1;
    rem2
  forever
end;
begin (* main program *)
  c1 := 1;
  c2 := 1;
  turn := 1;
  cobegin
    p1; p2
  coend
end.

Fig. 3.7.

Dekker's solution is an ingenious combination of the first and fourth
attempted solutions. Recall that in the first solution we explicitly passed the
right to enter the critical section between the processes. Unfortunately, the
key to the critical section could be irretrievably lost if one of the processes is
terminated. In the fourth solution we found that keeping separate keys leads
to the possibility of infinite deferment of one process to the other.

Dekker's algorithm (Fig. 3.7) is based on the previous solution but
solves the problem of lockout by explicitly passing the right to insist on
entering the critical section. Each process has a separate igloo, so it can go
on processing even if one process is terminated by a polar bear. Note that we
are here using the assumption that no process is terminated in its critical
section (including the protocol).

There is now an umpire igloo with a blackboard labelled "turn" (Fig.
3.8). If P1 chalks up a 0 on c1 and then finds that P2 has also chalked up a 0, it
goes to consult the umpire. If the umpire has a 1 written upon it, then P1
knows that it is its turn to insist, and so P1 periodically checks P2's igloo. P2 of
course notes that it is its turn to defer and chivalrously chalks up a 1 on c2,
which will eventually be noted by P1. P2 meanwhile waits for P1 to terminate
Fig. 3.8.
its critical section. Upon termination, P1 not only frees the critical section by
setting c1 to 1, but also resets turn to 2, both to free P2 from the inner loop and
to transfer the right to insist to P2.

Mutual exclusion is proved exactly as in Section 3.4, since the value of
turn has no effect on the decision to enter the critical section.
Proving liveness is somewhat of a challenge. By symmetry, it is suffi-
cient to prove that if P1 executes c1 := 0, indicating its intention to enter the
critical section, then eventually it does so. This is done in two parts. First we
prove that if P1 attempts to enter its critical section but cannot do so,
eventually the variable turn is permanently held at the value 1. But if turn is
held permanently at 1 then P1 can always enter its critical section.
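Dekker's protocol can also be exercised with real threads. The sketch below is my transliteration into Python (array indices 0 and 1 standing in for the subscripts 1 and 2), not code from the text; the critical section performs a deliberately non-atomic read-modify-write of a shared counter, and mutual exclusion guarantees that no update is lost:

```python
import threading

N = 5000
c = [1, 1]          # c[i] = 0 means process i wants the critical section
turn = [1]          # 1 or 2: who may currently insist on entering
counter = [0]       # shared counter updated inside the critical section

def process(me, other, other_turn):
    for _ in range(N):
        c[me] = 0                          # entry protocol
        while c[other] == 0:
            if turn[0] == other_turn:      # my turn to defer
                c[me] = 1
                while turn[0] == other_turn:
                    pass                   # busy wait on the umpire
                c[me] = 0
        counter[0] = counter[0] + 1        # critical section (non-atomic)
        turn[0] = other_turn               # exit: pass the right to insist
        c[me] = 1

t1 = threading.Thread(target=process, args=(0, 1, 2))
t2 = threading.Thread(target=process, args=(1, 0, 1))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter[0] == 2 * N)   # True: all 2N increments survived
```

The increment is a Load, an add, and a Store that could interleave; with the broken attempts above some increments can be lost, but under Dekker's protocol the final count is always 2N.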
3.7 A PROOF OF DEKKER'S ALGORITHM
Let us now prove the liveness of Dekker's algorithm. The algorithm is
shown in flowchart form in Fig. 3.9. The formal statement that we want to
prove is that if the program counter of process P1 is at point α2 (i.e. P1 has left
rem1 and thus expresses a wish to enter the critical section), eventually the
program counter of P1 will be at α5.

Fig. 3.9  Dekker's Algorithm. Initial values: c1 = c2 = turn = 1.
Theorem: (α2 and never α5) is false; that is, α2 implies eventually α5.

Proof

1. (α2 and never α5) and (turn held at 2) imply eventually (c1 held at 1).
   Since (never α5), P1 eventually passes α3 and thence to α4. Since
   (turn held at 2), P1 reaches
at ) is false.
3.8 CONCLUSION
Mutual exclusion of two processes is about the simplest problem in concur-
rent programming. The difficulty of obtaining a correct solution to such a
simple problem suggests that programming features more powerful than the
common-memory arbiter will be needed. In the exercises you can explore
some other solutions of the type given here.

In particular, the solutions of the mutual exclusion problem for n
processes are so difficult that they are of more or less academic interest only,
especially when compared with the trivial solution to the problem given, say,
by semaphores.
There is another defect in the common-memory arbiter, and that is the
busy wait that is used to achieve synchronization. The solutions all contain a
statement: while condition do (* nothing *). Unless you have a dedicated
computer doing the looping, this is a waste of CPU computing power. Even if
there is no CPU waste (as would be the case if the processes were I/O
controllers), there is the severe overhead associated with cycle stealing. Thus
the frequent accesses to turn in Dekker's solution can prevent useful compu-
tation from being done by other processes.
The primitives discussed in the next chapters uniformly suspend the
execution of the blocked processes. This is usually implemented by keeping
a queue of processes, i.e. a queue of small blocks of memory containing
essential information on the blocked processes. Thus the overhead is only a
small amount of memory and the small amount of computation needed to
manage the queue.
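The bookkeeping involved is tiny. As a rough sketch of the idea (my illustration, not an implementation from the text), suspending a process appends one small record to a FIFO queue, and waking a process is a constant-time dequeue; no CPU cycles are burned while waiting:

```python
from collections import deque

class WaitQueue:
    """Minimal sketch of a queue of blocked processes: a suspended
    process costs one record in the queue; waking removes the head."""
    def __init__(self):
        self.blocked = deque()

    def suspend(self, pid):
        self.blocked.append(pid)      # the scheduler now skips pid

    def wake_one(self):
        # FIFO wakeup: the longest-blocked process runs first
        return self.blocked.popleft() if self.blocked else None

q = WaitQueue()
q.suspend("P1")
q.suspend("P2")
print(q.wake_one())   # P1
```

Contrast this with the busy wait: a blocked process here consumes no processor time and generates no memory traffic until it is explicitly woken.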
A final objection to Dekker's algorithm is that it uses a common
variable which is written into by both processes. In the exercises we discuss
Lamport's algorithms, which have the advantage that each variable need only
be written by one process. Thus his algorithms are suitable for implementa-
tion on distributed systems, where the values of the variables can be trans-
mitted and received but where each variable is written into only on the
computer in which it physically resides.
3.9 EXERCISES
3.1 (Dijkstra) Fig. 3.10 is a solution to the mutual exclusion problem for n processes
that is a generalization of Dekker's solution.
(a) *Show that mutual exclusion holds.
(b) Show that deadlock does not occur.
(c) Show that lockout is possible.
program Dijkstra;
const n = ...; (* number of processes *)
var b, c: array[0..n] of boolean;
    turn: integer;
procedure process(i: integer);
var j: integer;
    ok: boolean;
begin
  repeat
    b[i] := false;
    repeat
      while turn <> i do
        begin
          c[i] := true;
          if b[turn] then turn := i
        end;
      c[i] := false;
      ok := true;
      for j := 0 to n do
        if j <> i then ok := ok and c[j]
    until ok;
    crit;
    c[i] := true;
    b[i] := true;
    turn := 0;
    rem
  forever
end;
begin (* main program *)
  for turn := 0 to n do
    begin
      b[turn] := true;
      c[turn] := true
    end;
  turn := 0;
  cobegin
    process(1);
    process(2);
    ...
    process(n)
  coend
end.

Fig. 3.10.
3.2 (Lamport) Fig. 3.11 is (what the author calls) the Dutch Beer version of the
Bakery Algorithm, restricted to two processes.
(a) Show safety and liveness. (Hint: The variables are supposed to represent
"ticket numbers". The process with the lower ticket number enters its
critical section. In case of a tie, it is arbitrarily resolved in favor of P1.)
(b) Show that the commands n1 := 1 and n2 := 1 are necessary.
(c) Extend the algorithm to n processes. (Hint: Each process will choose a
ticket number greater than the maximum of all outstanding ticket numbers.
It will then wait until all processes with lower-numbered tickets have
completed their critical sections.)
program dutchbeer;
var n1, n2: integer;
procedure p1;
begin
  repeat
    n1 := 1;
    n1 := n2 + 1;
    while (n2 <> 0) and (n2 < n1) do;
    crit1;
    n1 := 0;
    rem1
  forever
end;
procedure p2;
begin
  repeat
    n2 := 1;
    n2 := n1 + 1;
    while (n1 <> 0) and (n1 <= n2) do;
    crit2;
    n2 := 0;
    rem2
  forever
end;
begin (* main program *)
  n1 := 0;
  n2 := 0;
  cobegin
    p1; p2
  coend
end.

Fig. 3.11.
3.3 Fig. 3.12 is Lamport's Bakery Algorithm restricted to two processes.
(a) Show the safety and liveness of this solution.
(b) Generalize to n processes.
(c) Show that for n > 2 the values of the variables ni are not bounded.
(d) *Suppose we allow a read (i.e. Load) of a variable ni to return any value if it
takes place simultaneously with a write (Store) of ni by the ith process. Show
that the correctness of the algorithm is not affected. Note, however, that we
require the write to execute correctly. Similarly, all reads which do not
overlap writes to the same variable must return the correct values.
(e) Show that the correctness of the Dutch Beer version of the algorithm is not
preserved under the malfunction described in (d).
program bakery;
var n1, n2: integer;
    c1, c2: integer;
procedure p1;
begin
  repeat
    c1 := 1;
    n1 := n2 + 1;
    c1 := 0;
    while c2 <> 0 do;
    while (n2 <> 0) and (n2 < n1) do;
    crit1;
    n1 := 0;
    rem1
  forever
end;
procedure p2;
begin
  repeat
    c2 := 1;
    n2 := n1 + 1;
    c2 := 0;
    while c1 <> 0 do;
    while (n1 <> 0) and (n1 <= n2) do;
    crit2;
    n2 := 0;
    rem2
  forever
end;
begin (* main program *)
  n1 := 0; n2 := 0;
  c1 := 0; c2 := 0;
  cobegin
    p1; p2
  coend
end.

Fig. 3.12.
program attempt;
var c1, c2: integer;
procedure p1;
begin
  repeat
    rem1;
    repeat
      c1 := 1 - c2
    until c2 <> 0;
    crit1;
    c1 := 1
  forever
end;
procedure p2;
begin
  repeat
    rem2;
    repeat
      c2 := 1 - c1
    until c1 <> 0;
    crit2;
    c2 := 1
  forever
end;
begin (* main program *)
  c1 := 1;
  c2 := 1;
  cobegin
    p1; p2
  coend
end.

Fig. 3.13.
3.5 The IBM 360/370 computers have an instruction called TST (Test and Set).
There is a system global variable called c (Condition Code). Executing TST(l)
for local variable l is equivalent to an indivisible execution of the following two
assignments:

    l := c;
    c := 1.

(a) Discuss the correctness (safety, deadlock, lockout) of the solution of the
mutual exclusion problem shown in Fig. 3.14.
(b) Generalize to n processes.
(c) What would happen if the primitive TST instruction were replaced by the
two assignments?
(d) Modify the implementation kit to include the TST instruction.
program testandset;
var c: integer;
procedure p1;
var l: integer;
begin
  repeat
    rem1;
    repeat
      TST(l)
    until l = 0;
    crit1;
    c := 0
  forever
end;
procedure p2;
var l: integer;
begin
  repeat
    rem2;
    repeat
      TST(l)
    until l = 0;
    crit2;
    c := 0
  forever
end;
begin (* main program *)
  c := 0;
  cobegin
    p1; p2
  coend
end.

Fig. 3.14.
3.6 The EX instruction exchanges the contents of two memory locations. EX(a, b) is
equivalent to an indivisible execution of the following assignment statements:

    temp := a;
    a := b;
    b := temp.