Page 1: Chapter 3, Processes

1

Chapter 3, Processes

Page 2: Chapter 3, Processes

2

3.1 Process Concept

• The process is the unit of work in a system. Both user and system work is divided into individual jobs, or processes.

• As already defined, a process is a program in execution, or a program that has been given a footprint in memory and can be scheduled to run.

Page 3: Chapter 3, Processes

3

Recall what multi-processing means

• The importance of processes may not be immediately apparent because of terminology and also because of the progress of technology.

• Keep in mind that multi-processing refers to multiple physical processors.

• Also, most recent general purpose computer chips are in fact multi-core, which means that at the physical level, they are multi-processor systems on a single chip.

Page 4: Chapter 3, Processes

4

Why processes are important

• The importance of processes stems from the fact that all modern, general purpose systems are multi-tasking.

• For the purposes of clarity, in this course the main topic is multi-tasking on a single physical processor.

• The point is this: In a multi-tasking system, each individual task exists as a process.

Page 5: Chapter 3, Processes

5

Defining an O/S by means of processes

• Chapters 1 and 2 concerned trying to define an operating system.

• Given the fundamental nature of processes, another possible definition presents itself:

• The operating system is that collection of processes which manages and coordinates all of the processes on a machine, both operating system processes and user processes.

Page 6: Chapter 3, Processes

6

• From the point of view of making the system run, the fact that the operating system is able to manage itself is fundamental.

• From the point of view of getting any useful work done, the fact that the operating system manages user processes is fundamental.

Page 7: Chapter 3, Processes

7

Why we are considering multi-tasking on one processor rather than multi-processing

• One final note on the big picture before going on: Managing processes requires managing memory and secondary storage, but it will become clear soon that getting work done means scheduling processes on the CPU.

• As mentioned, we are restricting our attention to scheduling multiple processes, one after the other, on a single physical processor.

Page 8: Chapter 3, Processes

8

• In multiple core systems, some of the problems of scheduling multiple jobs concurrently on more than one processor would be handled in microcode on the hardware.

• However, the operating system for such a system would have to be “multiple-core” aware.

Page 9: Chapter 3, Processes

9

• This is a way of saying that modern operating systems are complex because they are multi-processor operating systems.

• The point is that you can’t begin to address the complexities of multi-processing until you’ve examined and come to an understanding of operating system functions in a uni-processing environment.

Page 10: Chapter 3, Processes

10

What is a Process?

• A process is a running or runnable program.
• It has the six aspects listed on the next overhead.
• In other words, a process is in a sense defined by a certain set of data values, and by certain resources which have been allocated to it.

• At various times in the life of a process, the values representing these characteristics may be stored for future reference, or the process may be in active possession of them, using them.

Page 11: Chapter 3, Processes

11

1. Text section = the program code
2. Program counter = instruction pointer = address or id of the current/next instruction
3. Register contents = current state of the machine
4. Process stack = method parameters, return addresses, local variables, etc.
5. Data section = global variables
6. Heap = dynamically allocated memory

Page 12: Chapter 3, Processes

12

The term state has two meanings

• Machine state = current contents of cpu/hardware (registers…) for a given process.

• Process state = what scheduling state the O/S has assigned to a process = ready to run, waiting, etc.

Page 13: Chapter 3, Processes

13

Process state refers to the scheduling status of the process

• Systems may vary in the exact number and names of scheduling states.

• As presented in this course, a straightforward operating system would have the five process (scheduling) states listed on the next overhead.

Page 14: Chapter 3, Processes

14

Process scheduling states

1. New
2. Running
3. Waiting
4. Ready
5. Terminated
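
Rendered as code, the states are just an enumeration. This is a hypothetical Java sketch, not anything from the book:

public enum ProcessState {
    NEW, RUNNING, WAITING, READY, TERMINATED
}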

Page 15: Chapter 3, Processes

15

Process life cycle

• A process begins in the new state and ends in the terminated state.

• In order to get from one to the other it has to pass through other states.

• It may pass through the other states more than one time, cycling through periods when it is scheduled to run and periods when it is not running.

Page 16: Chapter 3, Processes

16

• In a classic system, there are six fundamental actions which trigger state transitions; they are listed on the following overheads.

• The relationship between states and transitions is summarized in the state transition diagram which follows that list.

Page 17: Chapter 3, Processes

17

1. The operating system is responsible for bringing processes in initially.

2. It is also responsible for bringing jobs to an end, whether they completed successfully or not.

3. Interrupts can be viewed as temporarily ending the running of a given process.

Page 18: Chapter 3, Processes

18

4. Processes are scheduled to run by the operating system.

5. Processes relinquish the processor and wait when they issue a system request for I/O from secondary storage which only the O/S can satisfy.

6. The successful completion of an I/O request makes the requesting process eligible to run again.
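
As a rough sketch of how an O/S might encode these six actions as transitions (hypothetical Java, reusing the ProcessState enum sketched earlier; not the book's code):

// Hypothetical: each triggering action maps the process to its next state.
public class StateTransitions {
    public static ProcessState next(String action) {
        switch (action) {
            case "admit":      return ProcessState.READY;      // 1. O/S brings the process in
            case "terminate":  return ProcessState.TERMINATED; // 2. O/S ends the job
            case "interrupt":  return ProcessState.READY;      // 3. running process set aside
            case "dispatch":   return ProcessState.RUNNING;    // 4. scheduler picks the process
            case "ioRequest":  return ProcessState.WAITING;    // 5. process waits on I/O
            case "ioComplete": return ProcessState.READY;      // 6. eligible to run again
            default: throw new IllegalArgumentException(action);
        }
    }
}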

Page 19: Chapter 3, Processes

19

Simple State (Transition) Diagram

Page 20: Chapter 3, Processes

20

How does the operating system keep track of processes and states?

• In a sense, what the operating system does is manage processes.

• Inside the operating system software it is necessary to maintain representations of processes.

• In other words, it’s necessary to have data structures which contain the following data:
– The definition of the process—its aspects and resources
– The process’s state—what state it is in, as managed by the operating system in its scheduling role

Page 21: Chapter 3, Processes

21

What is a process control block?

• The Process Control Block (PCB) is the representation of a process in the O/S.

• In other words, it is a data structure (like an object) containing fields (instance variables) which define the process and its state.

• PCB’s don’t exist in isolation.
• They may be stored in linked collections of PCB’s, where the collection the PCB is in, and its location in that collection, implicitly define the process’s state.

Page 22: Chapter 3, Processes

22

• The PCB contains the following 7 pieces of information.

• In effect, these 7 pieces consist of technical representations of the 6 items which define a process, plus process state.

1. Current process state = new, running, waiting, ready, terminated

2. Program counter value = current/next instruction
3. CPU general purpose register contents = machine state—saved and restored upon interrupt

Page 23: Chapter 3, Processes

23

4. CPU scheduling info = process priority and pointers to scheduling queues

5. Memory management info = values of base and limit registers

6. Accounting info = job id, user id, time limit, time used, etc.

7. I/O status info = I/O devices allocated to process, open files, etc.
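
As a data structure, a PCB is just a record with these seven kinds of fields plus a link for queueing. A hypothetical Java sketch (field names invented for illustration; reuses the ProcessState enum from earlier):

// Hypothetical PCB layout; the O/S keeps one of these per process.
public class ProcessControlBlock {
    ProcessState state;               // 1. scheduling state
    long programCounter;              // 2. current/next instruction
    long[] registers;                 // 3. general purpose register contents
    int priority;                     // 4. CPU scheduling info
    ProcessControlBlock nextInQueue;  //    link to the next PCB in its queue
    long baseRegister, limitRegister; // 5. memory management info
    int jobId, userId;                // 6. accounting info
    long timeLimitMs, timeUsedMs;
    java.util.List<String> openFiles; // 7. I/O status info
}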

Page 24: Chapter 3, Processes

24

This is a graphical representation of a PCB, indicating how it might be linked with others

Page 25: Chapter 3, Processes

25

Canopic Jars

Page 26: Chapter 3, Processes

26

Threads

• In the latest edition of the book this subsection is very short

• I continue to give the longer version in these overheads

• This serves as an introduction to Chapter 4, Threads

Page 27: Chapter 3, Processes

27

• You may already have encountered the term thread in the context of Java programming.

• Threads come up in this operating systems course for two reasons:
– Threads themselves are a modern operating system concept
– This book is based on Java, which implements threads at the programming level

Page 28: Chapter 3, Processes

28

• That means it’s possible to work directly with threads in Java.

• You can learn the concept without actually working with operating system code

Page 29: Chapter 3, Processes

29

Processes and threads

• What has been referred to up to this point as a process can also be called a heavyweight thread.

• It is also possible to refer to lightweight threads.
• Lightweight threads are what is meant when using the term thread in Java.
• Not all systems necessarily support lightweight threads, but the ubiquity of Java means that threads are “everywhere” now.

Page 30: Chapter 3, Processes

30

What is a lightweight thread?

• The term (lightweight) thread means that >1 execution path can be started through the code of a process (heavyweight thread).

• Each lightweight thread will have its own data, but it will share the same code with other lightweight threads

Page 31: Chapter 3, Processes

31

• The origin of the terminology and its meaning is illustrated in this picture

Page 32: Chapter 3, Processes

32

• There are two vertical threads (the warp in weaving) and six horizontal threads (the woof or weft in weaving)

• The horizontal threads represent lines of code in a program

• The vertical threads represent two independent paths of execution through the code

• The paths of execution have come to be known as threads in computer science

Page 33: Chapter 3, Processes

33

• A concrete example: A word processor might have separate threads for character entry, spell checking, etc.

• When the user opens a document, a thread becomes active for character entry.

• When the user selects the spell checking option in the menu, a separate thread of execution, in a different part of the same program, is started.

Page 34: Chapter 3, Processes

34

• These two threads can run concurrently.
• They don’t run simultaneously, but the user enters characters so slowly that it is possible to run spell checking “at the same time”.
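
The flavor of this in Java, as a minimal sketch (not the book's code; the print loops are stand-ins for character entry and spell checking):

public class TwoThreads {
    public static void main(String[] args) {
        // Two paths of execution started through the same program.
        Thread entry = new Thread(() -> {
            for (int i = 0; i < 5; i++) System.out.println("character entry");
        });
        Thread spell = new Thread(() -> {
            for (int i = 0; i < 5; i++) System.out.println("spell check");
        });
        entry.start();   // both threads now run concurrently
        spell.start();
    }
}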

Page 35: Chapter 3, Processes

35

The relationship between process scheduling and thread scheduling

• In effect, threads are like processes in microcosm.

• This accounts for the lightweight/heavyweight thread terminology.

• They differ in the fact that processes run different program code while threads share the same program code.

Page 36: Chapter 3, Processes

36

• The operating system schedules processes so that they run concurrently.

• They do not run simultaneously.
• Each process runs for a short span of time.
• It then waits while another process runs for a short span of time.
• From the user’s (human speed) point of view, multiple processes are running “at the same time”.

Page 37: Chapter 3, Processes

37

• An operating system supports threads in a similar way.

• The implementation of the JVM on a given system depends on that system’s implementation of threads.

• Within each process, threads are run concurrently, just as the processes themselves are run concurrently.

Page 38: Chapter 3, Processes

38

• This book is oriented towards Java rather than C, and you can’t write operating system internals in Java.

• However, you can write threaded code with a familiar programming language API, rather than having to learn an operating system API.

• All of the challenges of correct scheduling exist for Java programs, and the tools for achieving this are built into Java.

Page 39: Chapter 3, Processes

39

• You can learn some of the deeper aspects of actual Java programming at the same time that you learn the concepts which they are based on, which come from operating system theory.

• This is covered in detail in the following chapter.

Page 40: Chapter 3, Processes

40

3.2 Process Scheduling

• Multi-programming (= concurrent batch jobs) objective = maximum CPU utilization—have a process running at all times

• Multi-tasking (= interactive time sharing) objective = switch between jobs quickly enough to support multiple users in real time

• Process scheduler = the part of the O/S that picks the next job to run

Page 41: Chapter 3, Processes

41

• One aspect of scheduling is system driven, not policy driven: Interrupts force a change in what job is running

• Aside from handling interrupts as they occur, it is O/S policy, the scheduling algorithm, that determines what job is scheduled

• The O/S maintains data structures, including PCB’s, which define current scheduling state

• There are privileged machine instructions which the O/S can call in order to switch the context (move one job out and another one in)

Page 42: Chapter 3, Processes

42

• Scheduling queues = typically some type of linked list data structure

• Job queue = all processes in the system—some may still be in secondary storage—may not have been given a memory footprint yet

• Ready queue = processes in main memory that are ready and waiting to execute (not waiting for I/O, etc.)

• I/O device (wait) queues = processes either in possession of or waiting for I/O device service
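
A toy sketch of processes moving between these queues (hypothetical Java; a real O/S links PCBs directly, and here pids stand in for PCBs):

import java.util.ArrayDeque;
import java.util.Deque;

public class SchedulingQueues {
    public static void main(String[] args) {
        Deque<Integer> readyQueue = new ArrayDeque<>();
        Deque<Integer> diskWaitQueue = new ArrayDeque<>();

        readyQueue.add(101);                      // process 101 is ready to run
        int running = readyQueue.remove();        // dispatched: given the CPU
        diskWaitQueue.add(running);               // it requests I/O and waits
        readyQueue.add(diskWaitQueue.remove());   // I/O completes: ready again
        System.out.println("ready queue: " + readyQueue);
    }
}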

Page 43: Chapter 3, Processes

43

Queuing Diagram of Process Scheduling

Page 44: Chapter 3, Processes

44

Diagram key

• Rectangles represent queues
• Circles represent resources
• Ovals represent events external to the process
• Events internal to the process which trigger a transition are simply indicated by the queue that the process ends up in

• Upon termination the O/S removes a process’s PCB from all queues and deallocates all resources held

Page 45: Chapter 3, Processes

45

General Structure of Individual O/S Queues

Page 46: Chapter 3, Processes

46

Schedulers

• The term scheduler refers to a part of the O/S software

• In a monolithic system it may be implemented as a module or routine.

• In a non-monolithic system, a scheduler may run as a separate process.

Page 47: Chapter 3, Processes

47

Long term scheduler—this is the scheduler you usually think of second, not first, although it acts first

• Picks jobs from secondary storage to enter the CPU ready queue
• Controls degree of multiprogramming (total # of jobs in system)
• Responsible for stability—number of jobs entering should = number of jobs finishing
• Responsible for job mix, CPU bound vs. I/O bound
• Runs infrequently; can take some time to choose well

Page 48: Chapter 3, Processes

48

Short term scheduler, a.k.a. the CPU scheduler—the scheduler you usually think of first

• This module implements the algorithm for picking processes from the ready queue to give the CPU to
• This is the heart of interactive multi-tasking
• This runs relatively frequently
• It has to be fast so you don’t waste CPU time on switching overhead

Page 49: Chapter 3, Processes

49

Medium term scheduler—the one you usually think of last

• Allows jobs to be swapped out to secondary storage if multi-programming level is too high

• Not all systems have to have long or medium term schedulers

• Simple Unix just had a short term scheduler.
• The multi-programming level was determined by the number of attached terminals.

Page 50: Chapter 3, Processes

50

The relationship between the short, medium, and long term schedulers

Page 51: Chapter 3, Processes

51

Context Switch—Switching CPU from Process to Process—The Short Term Scheduler at Work

Page 52: Chapter 3, Processes

52

Context Switching is the Heart of Short Term Scheduling

• Context switching has to be fast.
• It is pure overhead cost.
• In simple terms, it is supported by machine instructions which load and save all register values for a process at one time.

• It frequently has hardware support—such as multiple physical registers on the chip—so that a context switch means switching between register sets, not reading and writing memory

Page 53: Chapter 3, Processes

53

3.3 Operations on Processes

• Process creation
• General model: A given process, a parent, can spawn a child process by means of a system call.
• This leads to a tree of related processes.
• Since the operating system has the ability to create processes, it could in theory spawn children of processes externally as needed, without a request from the parent.

Page 54: Chapter 3, Processes

54

Resource allocation among parent and child processes

• The O/S may allocate children their own resources (memory, etc.)

• The parent may partition its resources among its children

• The parent may share its resources with its children

• Parents may give other things to their children.
• As one example, they can pass parameters for open files.

Page 55: Chapter 3, Processes

55

• Two execution models for parents and children
– The parent executes concurrently with its children
– The parent waits for some or all of its children to terminate before resuming

Page 56: Chapter 3, Processes

56

• Two address space models
– The child is a duplicate of the parent. It has the same program and data
– The child process has a new program loaded into it

Page 57: Chapter 3, Processes

57

• The following 10 points outline an example of C code in Unix in which a parent process spawns a child.
• 1. In Unix, processes are identified by integer pid’s.
• 2. A system call to the Unix fork() command from within a parent process creates a child process.
• 3. The child is a copy of the parent process.
• 4. Because the child is a copy of the parent, the code for both processes contains the call to fork().

Page 58: Chapter 3, Processes

58

• 5. Execution of the parent resumes at the return of this call.
• 6. Execution of the child begins at the return of this call.
• 7. The fork() call returns a pid.
• 8. The child receives 0; the parent receives the pid of the child.
• 9. The code contains an if statement based on the returned pid so both the parent and child know who they are and can take different execution paths.

• 10. This is not cosmically important, but it’s worth noting that unlike in object orientation, the parent knows the child and not vice-versa; the child only knows that it’s a child.

Page 59: Chapter 3, Processes

59

• C and Unix example continued
– In Unix it’s possible to issue an exec() type system call which has the effect of wiping out the current program and replacing it with another

– It’s also possible for the parent process to issue a wait() command which has the effect of suspending the parent’s execution until the most recently spawned child completes

– Note that the return value of a wait() call is the pid of the exiting child

Page 60: Chapter 3, Processes

60

• C and Unix example, preview of code specifics
– The execlp() call takes three parameters: a path, a command, and a third parameter which can be NULL
– The wait() command takes a parameter which can be NULL

Page 61: Chapter 3, Processes

61

• See code on next overhead.
– It is a more or less faithful copy of the book's C program illustrating the forking of children in Unix.
– It compiles and runs on the department's Unix machine, math.uaa.alaska.edu. (The book's version compiles with warnings about the use of the exit() call; including <stdlib.h>, as in the version below, should remove that warning.)

Page 62: Chapter 3, Processes

62

#include <stdio.h>
#include <stdlib.h>     /* exit(); added to avoid the book version's warning */
#include <unistd.h>     /* fork(), execlp() */
#include <sys/types.h>  /* pid_t */
#include <sys/wait.h>   /* wait() */

int main(int argc, char *argv[])
{
    pid_t pid;

    /* fork another process */
    pid = fork();

    if (pid < 0) {
        /* error */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) {
        /* child */
        execlp("/bin/ls", "ls", NULL);
        printf("child");    /* reached only if execlp() fails */
    }
    else {
        /* parent waits for child */
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }
}

Page 63: Chapter 3, Processes

63

• The program given on the following overheads is a modification of the first
– In it the parent doesn’t wait for the child to complete
– The parent and the child process both contain loops
– This makes it possible to see the switching back and forth between concurrent processes under multitasking

Page 64: Chapter 3, Processes

64

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    pid_t pid;
    int i = 0;
    int j = 0;

    /* fork another process */
    pid = fork();

    if (pid < 0) {
        /* error */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }

Page 65: Chapter 3, Processes

65

    else if (pid == 0) {
        /* child */
        /* execlp("/bin/ls", "ls", NULL); -- replaced by a loop */
        while (i < 100000) {
            printf("child\n");
            i = i + 1;
        }
    }
    else {
        /* parent no longer waits for the child */
        /* wait(NULL); */
        while (j < 100000) {
            printf("parent\n");
            j = j + 1;
        }
        /* printf("Child Complete"); */
        /* exit(0); */
    }
}

Page 66: Chapter 3, Processes

66

Process Creation in Windows and Java

• The book includes code showing how a process is created in the Windows API

• The details aren’t important.
• All that’s significant is that it is possible, as you would expect.

Page 67: Chapter 3, Processes

67

• The book also discusses processes and Java.
• Each instance of the JVM will be a process on whatever system it is running on.
• Within an instance of the JVM, multiple threads can be created, but not multiple processes.

Page 68: Chapter 3, Processes

68

• The book pursues this further by showing code whereby a Java program can initiate a host system process external to the JVM

• This is an advanced topic and the details are not important

• Notice how this is conceptually linked with a native method
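
For orientation, the standard java.lang.ProcessBuilder API is one way to do this; a minimal sketch (not the book's code), assuming a Unix host with ls on the path:

import java.io.IOException;

public class LaunchExternal {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("ls", "-l");
        pb.inheritIO();                  // child shares this JVM's console
        Process child = pb.start();      // a host process outside the JVM
        int exitValue = child.waitFor(); // analogous to a Unix parent's wait()
        System.out.println("Child complete, exit value " + exitValue);
    }
}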

Page 69: Chapter 3, Processes

69

• Native methods are code written in C, for example, which can be embedded in a Java program

• External processes can consist of “anything” at all

• Both techniques allow Java to trigger actions (e.g., direct memory access) which are disallowed by the design of Java alone

Page 70: Chapter 3, Processes

70

Explicit process termination (self-termination)

• Explicit termination comes from an exit() call in a process

• An exit() call in a child can be set up to return a signal to a parent that called wait()

• This return value can signal successful termination, error termination, etc.

• The vanilla option is to return the child’s pid.
• All resources are deallocated at termination: memory, open files, buffers, etc.

Page 71: Chapter 3, Processes

71

Process abortion (unwilling termination)

• Parent processes can abort children.
• This is why it’s useful to have the child pid returned to the parent by the fork() call.
• Looking up the Unix command kill() in the online documentation would be an entry point into this topic.
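
The Java analogue for an external child is Process.destroy(); a minimal sketch, assuming a Unix host with a sleep command:

public class AbortChild {
    public static void main(String[] args) throws Exception {
        // Spawn a long-running child, then abort it from the parent.
        Process child = new ProcessBuilder("sleep", "60").start();
        Thread.sleep(1000);    // let it run briefly
        child.destroy();       // on Unix, sends the child a termination signal
        child.waitFor();
        System.out.println("child exit value: " + child.exitValue());
    }
}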

Page 72: Chapter 3, Processes

72

Reasons for abortion

• The child’s task is no longer needed
• The child has exceeded some resource (kill the teenagers)
• The parent is exiting
• In some systems, the child can’t exist without the parent (Ur of the Chaldees)

Page 73: Chapter 3, Processes

73

• In Unix, children can continue to exist after their parent is gone

• In this case, the child is given the sys init process as its parent

Page 74: Chapter 3, Processes

74

3.4 Inter-process Communication

• Recall that the three elements of a micro-kernel are memory management, process scheduling, and inter-process communication.

• The previous section covered the creation of processes—not their scheduling

• This section provides an overview of inter-process communication as an aspect of what processes are and how they may interact with each other.

Page 75: Chapter 3, Processes

75

Independent processes

• Independent processes are the simpler case
• They in no way affect each other
• They do not share any data, either temporary or persistent
• In effect, these are processes that do not communicate with each other

Page 76: Chapter 3, Processes

76

Cooperating processes

• Cooperating processes are the more interesting case.

• They affect each other
• They may pass information back and forth
• They may share a common message space
• In theory it may be possible to get more useful work done if multiple processes are working cooperatively with each other.

Page 77: Chapter 3, Processes

77

Why support process cooperation?

• This allows >1 process to share a resource
• This supports divide and conquer problem solutions
• Multi-tasking can support >1 process working on different parts of the same problem
• This leads to modularity in the design of problem solutions
• This can lead to performance/user convenience benefits

Page 78: Chapter 3, Processes

78

Producers and consumers—an introduction to inter-process communication

• Cooperation between processes requires some method of communication

• The example to be given is known as a producer-consumer example

• One process passes items to another through a shared buffer

Page 79: Chapter 3, Processes

79

Synchronization

• Threaded code was mentioned earlier.
• It is a topic where concurrency control becomes important to a correct implementation.
• In that context, you can think of the code as a shared resource—and the need is for correct control of access to the shared code by different execution threads.

• Concurrency control is also referred to as synchronization.
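
As a preview of what is covered later, Java's most basic synchronization tool is the synchronized keyword, which lets at most one thread execute a marked method at a time. A minimal sketch (not the book's code):

public class SyncPreview {
    private int count = 0;

    // synchronized: at most one thread inside at a time
    public synchronized void increment() { count++; }

    public static void main(String[] args) throws InterruptedException {
        SyncPreview s = new SyncPreview();
        Runnable work = () -> { for (int i = 0; i < 100000; i++) s.increment(); };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(s.count);   // reliably 200000 only because of synchronized
    }
}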

Page 80: Chapter 3, Processes

80

Shared buffers for IPC

• Done correctly, inter-process communication also requires synchronization

• In this context, different processes have access to a shared buffer

• They can both read from and write to the buffer concurrently.

• The buffer is the shared resource.
• The need is for correct control of access to the shared buffer by different processes.

Page 81: Chapter 3, Processes

81

• How to correctly do synchronization for both threads and shared buffers will be discussed later.

• The following illustrations simply give an introductory explanation of what is involved with shared buffers.

Page 82: Chapter 3, Processes

82

Simple aspects of synchronization based on buffer parameters

• Unbounded buffer
– The producer doesn’t have to wait to enter an item
– The consumer may have to wait for an item to appear
• From the point of view of synchronization, the question is, how do you enforce waiting on different processes?

Page 83: Chapter 3, Processes

83

• Bounded buffer
– The producer may have to wait to enter an item
– The consumer may have to wait for an item to appear
• Once again, the question is, how do you enforce waiting?
• The only difference in this scenario is who waits under what condition.

Page 84: Chapter 3, Processes

84

Implementation of shared buffers

• At the system level, an operating system may implement shared buffers as shared memory.

• The O/S is responsible for memory management, namely keeping the allocation of memory separate for different processes.

• An additional layer of complexity in the O/S would allow for >1 process to have access to the same memory.

Page 85: Chapter 3, Processes

85

• Depending on the structure of the programming language, this can also be done in application code

• The example to come does this with a shared reference to a common buffer object

• Note that, strictly speaking, since it will be illustrated with Java, it is not shared memory access

• But it is a faithful scenario of shared access in a high level language that is accessible to a non-system programmer

Page 86: Chapter 3, Processes

86

• Remember that in order to be functional, the code would require synchronization

• Although written as Java code, the example is incomplete because it doesn’t have synchronization

• The whole example of shared access will be reviewed again later when the topic at hand is synchronization.

Page 87: Chapter 3, Processes

87

/**
 * An interface for buffers.
 *
 * *** Note that at various places the authors’ code reveals them to be
 * recovering C programmers. The methods in an interface specification
 * do not have to be declared public or abstract.
 */
public interface Buffer
{
    /**
     * Insert an item into the buffer.
     * Note this may be either a blocking or non-blocking operation.
     */
    public abstract void insert(Object item);

    /**
     * Remove an item from the buffer.
     * Note this may be either a blocking or non-blocking operation.
     */
    public abstract Object remove();
}

Page 88: Chapter 3, Processes

88

/**
 * This program implements the bounded buffer using shared memory.
 * Note that this solution is NOT thread-safe. It will be used
 * to illustrate thread safety using Java synchronization in Chapter 7.
 *
 * *** Comment on the comment: Note that it is not literally true that
 * in Java code you are implementing shared memory.
 */
public class BoundedBuffer implements Buffer
{
    private static final int BUFFER_SIZE = 3;

    /**
     * *** volatile does not appear in the printed text. A discussion of
     * volatile is in Chapter 7.
     */
    private volatile int count;

    private int in;           // points to the next free position in the buffer
    private int out;          // points to the next full position in the buffer
    private Object[] buffer;

    public BoundedBuffer()
    {
        // buffer is initially empty
        count = 0;
        in = 0;
        out = 0;

        buffer = new Object[BUFFER_SIZE];
    }

Page 89: Chapter 3, Processes

89

    // producer calls this method
    public void insert(Object item) {
        while (count == BUFFER_SIZE)
            ;   // do nothing: busy wait until there is room

        // add an item to the buffer
        ++count;
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;

        if (count == BUFFER_SIZE)
            System.out.println("Producer Entered " + item + " Buffer FULL");
        else
            System.out.println("Producer Entered " + item + " Buffer Size = " + count);
    }

Page 90: Chapter 3, Processes

90

    // consumer calls this method
    public Object remove() {
        Object item;

        while (count == 0)
            ;   // do nothing: busy wait until an item appears

        // remove an item from the buffer
        --count;
        item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;

        if (count == 0)
            System.out.println("Consumer Consumed " + item + " Buffer EMPTY");
        else
            System.out.println("Consumer Consumed " + item + " Buffer Size = " + count);

        return item;
    }
}
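
For orientation only, a hypothetical driver showing how the buffer would be shared once threads exist; as noted above, without synchronization correct behavior is not guaranteed:

public class BufferDriver {
    public static void main(String[] args) {
        Buffer buffer = new BoundedBuffer();   // one buffer shared by both threads
        new Thread(() -> { for (int i = 0; i < 10; i++) buffer.insert(i); }).start();
        new Thread(() -> { for (int i = 0; i < 10; i++) buffer.remove(); }).start();
    }
}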

Page 91: Chapter 3, Processes

91

Message passing is an alternative IPC implementation choice

• Message-passing systems
• An O/S may support IPC without shared access to common memory, without a buffer
• That means that the O/S implements send() and receive() type system calls
• Fixed or variable length messages may be allowed
• You may recall that in various places, the term message passing was used more or less synonymously with inter-process communication

Page 92: Chapter 3, Processes

92

• In the following presentation you will find that most of the different theoretical possibilities are covered by bulleted lists about message passing.

• In the end, message passing may involve a mailbox construct, and in this way it may essentially subsume the shared memory resource idea.

• The mailbox, although typically managed by name rather than memory address, is the shared location where messages are passed.

Page 93: Chapter 3, Processes

93

• Message passing functionality is based on a “communication link” abstraction

• The abstraction includes these aspects
– Direct or indirect communication
– Synchronous or asynchronous communication
– Automatic or explicit buffering

• Each of the sub-points above will be addressed in the following sections

Page 94: Chapter 3, Processes

94

Direct or indirect communication

• The concept of naming is the basis for either direct or indirect communication

• Either processes are known by name or id or they are not

• In direct communication, names are used

Page 95: Chapter 3, Processes

95

Direct communication

• Symmetric addressing: both the sender and receiver know the other’s name

• Form of calls for sender Q and receiver P
• Q issues: send(P, msg)
• P issues: receive(Q, msg)

Page 96: Chapter 3, Processes

96

Properties of symmetric, direct communication

• Each member of a pair of processes needs to know the other’s name

• Each communication link connects only two processes

• Each pair of processes has only one link between them

Page 97: Chapter 3, Processes

97

• Asymmetric addressing: It’s possible for just the recipient to be named

• Form of calls for receiver P
• Sender issues: send(P, msg)
• P issues: receive(sendername, msg)
• When a process issues a receive() call, the system supplies the name of the sending process
• The disadvantage of direct communication: If the names/id’s of P and Q are hardcoded, any changes in name will require changes in code

Page 98: Chapter 3, Processes

98

Indirect communication

• Fundamental construct: a uniquely named mailbox or port

• Form of calls for mailbox A
• Sender issues: send(A, msg)
• Receiver issues: receive(A, msg)

Page 99: Chapter 3, Processes

99

• Communication link properties under this scheme
– A link is established by a shared mailbox
– Each mailbox may be available for >2 processes
– Each pair of processes may share >2 mailboxes

Page 100: Chapter 3, Processes

100

• Consider the following scenario
– P1 sends a msg to mailbox A
– P2 and P3 both call receive()
• Design choices
– Restrict each mailbox to 2 processes
– Allow only one process at a time to execute receive() (synchronization)
– Implement an algorithm for choosing between P2 and P3 to receive

Page 101: Chapter 3, Processes

101

Ownership issues

• A mailbox can be owned by the O/S.
• Then the O/S has to support mailbox creation and deletion, and send() and receive() calls.

Page 102: Chapter 3, Processes

102

• The system can support the creation of mailboxes owned by a process

• If a process owns a mailbox, the process receives through that mailbox

• Other processes can only send to that mailbox.
• This supports a many-to-one communication link.

Page 103: Chapter 3, Processes

103

Implementation issues

• A mailbox is essentially a form of shared memory
• Whether created by O/S or user process, the mailbox initially is accessible only to the creator
• Giving access to other processes is based on system calls
• I.e., if the system is the creator, it grants access
• If a user process is the creator, it can only grant access by requesting the system to do so

Page 104: Chapter 3, Processes

104

Synchronous or asynchronous communication

• Although ultimately related to the underlying problem of concurrency control, first consider the problem of synchronizing communication generically

• Do not worry for the moment about what “correct” synchronization would be in the technical sense

• Synchronization of communicating processes can be described by whether send() or receive() are blocking or non-blocking operations

Page 105: Chapter 3, Processes

105

Blocking and non-blocking receive

• Blocking receive: A process that issues a receive() waits until a message becomes available for it to receive

• Non-blocking receive: When there is no message in the mailbox a receive() call returns null.

Page 106: Chapter 3, Processes

106

Blocking and non-blocking send

• This is actually a little messier than receive.
• Blocking send may mean two different things:
– A process blocks until the message it is currently trying to send is received
– A process blocks because the buffer/mailbox is full and it has to wait to send
• Then non-blocking send may mean two different things:
– A process is allowed to continue even if its message hasn’t been received
– A process isn’t blocked because the buffer/mailbox isn’t full and there is room for the message to be sent

Page 107: Chapter 3, Processes

107

• An implementation may mix and match blocking and non-blocking send() and receive()

• If both send and receive block, this gives a recognizable, named case, a rendezvous

• Neither sender nor receiver can proceed further until a message is successfully passed between them.
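
Java's standard library provides exactly this rendezvous behavior in java.util.concurrent.SynchronousQueue, where put() and take() each block until the other side arrives; a minimal sketch:

import java.util.concurrent.SynchronousQueue;

public class Rendezvous {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> channel = new SynchronousQueue<>();
        new Thread(() -> {
            try {
                channel.put("hello");       // blocks until a receiver takes it
            } catch (InterruptedException e) { }
        }).start();
        System.out.println(channel.take()); // blocks until the sender puts
    }
}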

Page 108: Chapter 3, Processes

108

Message passing and queues

• In direct communication, the O/S internally manages a temporary queue of messages

• In indirect communication, the mailbox is a queue-like structure

Page 109: Chapter 3, Processes

109

• There are 3 implementation options that affect whether a message passing protocol is blocking or non-blocking

1. Zero capacity queue (a.k.a. no buffering):
– The sender has to block until a receiver has issued a receive()

Page 110: Chapter 3, Processes

110

2. Bounded capacity queue:
– If the queue is full, the sender has to block until a message has been received
3. Unbounded capacity queue:
– The sender never blocks
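
These three options map directly onto standard Java queue classes; the following sketch is for orientation (library facts, not the book's code):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class QueueCapacities {
    public static void main(String[] args) {
        // 1. Zero capacity: put() blocks until a receiver is ready.
        SynchronousQueue<String> zero = new SynchronousQueue<>();
        // 2. Bounded capacity: put() blocks only when the queue is full.
        ArrayBlockingQueue<String> bounded = new ArrayBlockingQueue<>(10);
        // 3. Unbounded capacity: put() never blocks.
        LinkedBlockingQueue<String> unbounded = new LinkedBlockingQueue<>();
    }
}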

Page 111: Chapter 3, Processes

111

• At the beginning of the subsection the book says it will cover automatic and explicit buffering.

• The reality is that in the body of the subsection it covers the no buffering and automatic buffering cases.

• A zero capacity queue is the no buffering case.
• The other implementation options imply some form of automatic buffering.

Page 112: Chapter 3, Processes

112

The producer-consumer example

• Like the previous example, this is given in Java code, but the code shown here is not actually complete

• In order for it to work there would have to be two threads of execution, one a sender and one a receiver

• These two threads would both have access to the mailbox, or message queue

• The syntax for threads will be explained later.
• In order to be correct, the code with threads would have to be synchronized.

Page 113: Chapter 3, Processes

113

• The example illustrates
– An unbounded queue (since the Vector class supports adding an arbitrary number of elements)
– Non-blocking send and receive

Page 114: Chapter 3, Processes

114

/**
 * An interface for a message passing scheme.
 */
public interface Channel
{
    /**
     * Send a message to the channel.
     * It is possible that this method may or may not block.
     */
    public abstract void send(Object message);

    /**
     * Receive a message from the channel.
     * It is possible that this method may or may not block.
     */
    public abstract Object receive();
}

Page 115: Chapter 3, Processes

115

/**
 * This program implements the bounded buffer using message passing.
 * Note that this solution is NOT thread-safe. A thread-safe solution
 * can be developed using Java synchronization, which is discussed in Chapter 6.
 */

import java.util.Vector;

public class MessageQueue implements Channel
{
    private Vector queue;

    public MessageQueue() {
        queue = new Vector();
    }

    /*
     * This implements a non-blocking send.
     */
    public void send(Object item) {
        queue.addElement(item);
    }

    /*
     * This implements a non-blocking receive.
     */
    public Object receive() {
        if (queue.size() == 0)
            return null;
        else
            return queue.remove(0);
    }
}

Page 116: Chapter 3, Processes

116

/**
 * This is the producer thread for the bounded buffer problem.
 */

import java.util.*;

class Producer implements Runnable
{
    public Producer(Channel m)
    {
        mbox = m;
    }

    public void run()
    {
        Date message;

        while (true) {
            SleepUtilities.nap();   // helper from the book's code (not shown here)
            message = new Date();
            System.out.println("Producer produced " + message);
            // produce an item & enter it into the buffer
            mbox.send(message);
        }
    }

    private Channel mbox;
}

Page 117: Chapter 3, Processes

117

/**
 * This is the consumer thread for the bounded buffer problem.
 */

import java.util.*;

class Consumer implements Runnable
{
    public Consumer(Channel m) {
        mbox = m;
    }

    public void run() {
        Date message;

        while (true)
        {
            SleepUtilities.nap();   // helper from the book's code (not shown here)

            // consume an item from the buffer
            System.out.println("Consumer wants to consume.");
            message = (Date)mbox.receive();
            if (message != null)
                System.out.println("Consumer consumed " + message);
        }
    }

    private Channel mbox;
}
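
Wiring the two together would look something like this hypothetical driver (it assumes the book's SleepUtilities helper is on the classpath, and, as noted, a correct version needs synchronization):

public class Factory {
    public static void main(String[] args) {
        Channel mailBox = new MessageQueue();      // shared mailbox
        new Thread(new Producer(mailBox)).start(); // sender
        new Thread(new Consumer(mailBox)).start(); // receiver
    }
}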

Page 118: Chapter 3, Processes

118

3.5 Examples of IPC Systems

• Skip

Page 119: Chapter 3, Processes

119

3.6 Communication in Client-Server Systems

• Skip

Page 120: Chapter 3, Processes

120

3.7 Summary

• …

Page 121: Chapter 3, Processes

121

The End