
Operating Systems



CHAPTER 1

INTRODUCTION

An operating system is a program that acts as an intermediary between a user and the computer hardware. The different levels in a computer system are shown below (Figure 1.1).

Figure 1.1: Levels in a computer system

An operating system provides an environment, or interface, through which a user can execute programs without knowing the complex details of the hardware that makes up a computer system. An operating system has two primary objectives, largely independent of each other:

1. To make the computer system convenient to use by providing a virtual machine, or extended machine, to the user.

2. To use the computer hardware in an efficient manner.

Some authors refer to an operating system as a control program, or as a program that works like a government.

(In Figure 1.1, user programs sit above the operating system, which sits above the hardware; the operating system interface separates user programs from the operating system, and the hardware interface separates the operating system from the hardware.)


1.1 OPERATING SYSTEM AS A RESOURCE MANAGER

A computer system consists of hardware, the operating system, application programs and users (Figure 1.2).

Hardware includes the central processing unit (CPU), memory and input/output (I/O) devices. These provide the basic computing resources. Application programs such as compilers, database management systems and editors allow users to use the hardware to solve their problems. The operating system controls and co-ordinates the use of the hardware among the various application programs of the various users. It provides an environment in which software and data make use of the hardware in the computer system.

Figure 1.2: Components of a computer system (users 1 to n at the top, then application programs, the operating system, and hardware at the bottom)

The operating system is a resource allocator. Many hardware and software resources are needed to solve a problem: CPU time, memory space, file storage, I/O devices and so on. The operating system, as a manager of these resources, allocates them to user programs as required by their tasks. Since conflicts are always possible when more than one user requests resources, the operating system resolves such conflicts for efficient operation of the computer system.

The operating system, as a control program, controls the execution of user programs to avoid errors and to improve use of the computer system.

A computer system exists to execute user programs, thereby making work easier for the user. But the bare hardware alone is not easy to use, so application programs are developed. These require certain common operations, such as I/O device handling. The


common functions of controlling and allocating resources are present in one piece of software: the operating system.

The primary goal of an operating system, convenience for the user, dominates in small computers, whereas the secondary goal, efficient operation, is important in large shared multi-user systems.

1.2 STORAGE STRUCTURE

Programs to be executed need to reside in the main memory. This is the memory that the CPU accesses directly. The main memory can be visualized as an array of bytes/words, each having its own address. Load/store instructions move a word from the main memory to a CPU register and vice versa.

The von Neumann architecture brought in the concept of a stored program. The following (Figure 1.3) is the sequence of steps in one instruction execution cycle:

Get address of next instruction

Fetch instruction from main memory

Decode instruction

Get operand address

Fetch operands from main memory

Perform operation specified by instruction


Figure 1.3: Instruction execution cycle

Program and data must be in the main memory for execution. Ideally they would reside there permanently, which is not possible because main memory

- is not big enough to hold all needed programs and data permanently;

- is volatile (contents are lost when power is turned off);

- is very costly.

Secondary storage is the answer to the problems listed above. The main characteristics of secondary storage include

- large storage space;

- non-volatile (permanent) storage;

- low cost.

Thus program and data need to be loaded from secondary storage into main memory during execution. Common secondary storage devices include disks and tapes.
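As a concrete (and deliberately tiny) illustration of the stored-program idea, the fetch-decode-execute steps of Figure 1.3 can be sketched as a toy interpreter loop. The opcodes, accumulator machine and memory layout here are invented for illustration and do not correspond to any real instruction set:

```python
# Toy stored-program machine: memory holds both instructions and data,
# and the CPU repeatedly fetches, decodes and executes.
def run(memory):
    pc = 0                           # address of next instruction
    acc = 0                          # single accumulator register
    while True:
        op, addr = memory[pc]        # fetch instruction from main memory
        pc += 1                      # get address of next instruction
        if op == "LOAD":             # decode, fetch operand, execute
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            return memory

# Program: memory[6] = memory[4] + memory[5]
mem = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", 0),
       4: 10, 5: 32, 6: 0}
result = run(mem)
```

Note that instructions and data share one memory, which is exactly what "stored program" means.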

1.3 STORAGE HIERARCHY

The various storage devices differ in speed, cost, size and volatility (permanence of storage). They can be organized in a hierarchy. Some of the levels of this hierarchy are shown below (Figure 1.4).

(Figure 1.4, top to bottom: CPU registers, cache, main memory, secondary storage (disk), secondary storage (tape). Devices at the top are expensive, fast, limited in size and volatile; devices at the bottom are cheap, slow, larger and non-volatile.)


Figure 1.4: Storage device hierarchy

As one moves up the hierarchy, the devices become more expensive but faster; as one moves down, the cost per bit of storage decreases but access time increases (the devices are slower). There is usually a tradeoff in the amount of storage provided at each level of the hierarchy. Of the various storage devices in the hierarchy, those above the disks are volatile and the rest are non-volatile.
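The speed/cost tradeoff between two adjacent levels is often summarized as an effective access time. The following is a minimal sketch using purely hypothetical latency figures:

```python
# Effective access time across two adjacent hierarchy levels:
# the fast level is checked first; on a miss the slow level is
# also accessed. Latency figures are hypothetical, for illustration.
def effective_access_ns(hit_rate, fast_ns, slow_ns):
    return hit_rate * fast_ns + (1 - hit_rate) * (fast_ns + slow_ns)

# With a 95% hit rate, a 1 ns cache hides most of a 100 ns memory:
eat = effective_access_ns(0.95, 1, 100)
```

Even a modest hit rate in the fast level keeps the effective access time close to the fast level's latency, which is why a small amount of expensive memory goes a long way.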

Thus the design of a computer system balances, or makes a tradeoff among, all the above factors, namely speed, cost and volatility. It provides as much expensive volatile memory as necessary and as much inexpensive non-volatile memory as possible.

1.4 INTERRUPTS

Virtually all computers support a mechanism through which other modules (I/O, memory) can interrupt the normal operation of the processor. The table below shows the common classes of interrupts.

Program: generated by an exception during instruction execution, such as a divide-by-zero condition, a buffer overflow, or an attempt to reference memory outside a user's allowed space.

Timer: generated by a timer/clock within the processor. Whenever the time slice of a process expires, the operating system sends an interrupt signal to the CPU to preempt the current job.

I/O: generated by an I/O controller. For example, if the CPU sends a read/write signal to an I/O module and the module is ready at that time, the module sends an interrupt signal to the CPU to report its ready status.

Hardware failure: generated by a failure such as a power failure or a memory parity error.
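The classes above can be pictured as a dispatch table mapping each interrupt class to its service routine. The handler actions below are placeholders invented for illustration, not what any real kernel does:

```python
# Interrupt dispatch: the CPU transfers control to the service routine
# registered for the interrupt class (actions are placeholder strings).
handlers = {
    "program":  lambda: "terminate offending process",
    "timer":    lambda: "preempt current process",
    "io":       lambda: "mark I/O module ready",
    "hardware": lambda: "initiate failure recovery",
}

def service(interrupt_class):
    return handlers[interrupt_class]()   # jump to the service routine

action = service("timer")
```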

1.5 I/O COMMUNICATION TECHNIQUES

Three techniques are possible for I/O operations:

Programmed I/O

Interrupt driven I/O

Direct memory access


Programmed I/O: In programmed I/O the processor has to wait for the concerned I/O module to be ready for either transmission or reception of more data. While waiting, the processor repeatedly interrogates the status of the I/O module; as a result, the performance of the entire system is severely degraded.

Interrupt-driven I/O: This is more efficient than simple programmed I/O, but it still requires the active intervention of the processor to transfer data between memory and the I/O module.

Direct memory access (DMA): A DMA module transfers a block of data directly between memory and the I/O module without processor intervention; the processor is involved only at the start and end of the transfer.
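The busy-wait at the heart of programmed I/O can be sketched as follows. The device model and its tick counter are invented for illustration; the point is only how many status polls the CPU wastes before the module becomes ready:

```python
# Programmed I/O: the CPU busy-waits, polling the device's status
# register until the module is ready. The polls are wasted CPU cycles.
class Device:
    def __init__(self, ready_after):
        self.ticks = 0
        self.ready_after = ready_after
    def status(self):               # reading the status register
        self.ticks += 1
        return self.ticks >= self.ready_after

def programmed_read(device):
    polls = 0
    while not device.status():      # CPU does nothing useful here
        polls += 1
    return polls

wasted_polls = programmed_read(Device(ready_after=1000))
```

Interrupt-driven I/O removes this loop: the CPU does other work and the device announces its readiness with an interrupt instead.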

1.6 SYSTEM CALLS

The more fundamental level of services is handled through the use of system calls. System calls provide the interface between a running program and the operating system. A system call is the means through which a process may request a specific kernel service. There are hundreds of system calls, which can be broadly classified into six categories:

1. File system
2. Process
3. Scheduling
4. Interprocess communication
5. Socket (networking)
6. Miscellaneous
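As a concrete example of the file-system category, Python's `os` module exposes thin wrappers over the corresponding system calls (`open`, `write`, `lseek`, `read`, `close` in POSIX terms). The file name used here is arbitrary:

```python
import os
import tempfile

# Each os.* call below is a thin wrapper over a kernel system call.
path = os.path.join(tempfile.gettempdir(), "os_chapter_demo.txt")
fd = os.open(path, os.O_CREAT | os.O_RDWR | os.O_TRUNC)  # open(2)
os.write(fd, b"hello kernel")                            # write(2)
os.lseek(fd, 0, os.SEEK_SET)                             # lseek(2)
data = os.read(fd, 100)                                  # read(2)
os.close(fd)                                             # close(2)
os.remove(path)                                          # unlink(2)
```

The higher-level `open()` built-in ultimately funnels through the same kernel interface; the descriptor-based calls just make the boundary visible.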

1.7 BOOTSTRAP LOADER

Booting (also known as "booting up") is a bootstrapping process that starts the operating system whenever the user turns on the power to a computer system. The bootstrap loader is a small program that typically loads the main operating system for the computer. A CPU can only execute program code found in read-only memory (ROM) or random access memory (RAM). Modern operating system and application program code and data are stored on secondary/auxiliary memory, such as hard disk drives, CDs, DVDs, USB flash drives and floppy disks. When a computer is first powered on, it does not contain


an operating system in ROM or RAM. The computer must initially execute a small program stored in ROM, along with the bare minimum of data needed to access the auxiliary memory from which the operating system programs and data are loaded into RAM.

The small program that starts this sequence of loading into RAM is known as a bootstrap loader, bootstrap or boot loader. This small boot loader program's only job is to load other data and programs related to the operating system, which are in turn executed from RAM. Often, multi-stage boot loaders are used, in which several programs of increasing complexity load one after the other in a process of chain loading.
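Chain loading can be sketched as successive stages, each one loading the next. The stage names below are illustrative and do not correspond to any particular firmware:

```python
# Multi-stage booting: each stage knows just enough to load the next,
# more capable program, until the operating system itself is running.
def chain_boot(stages):
    log = []
    for stage in stages:
        log.append(f"{stage} loaded")   # stage runs, loads its successor
    log.append("operating system running")
    return log

boot_log = chain_boot(["ROM bootstrap", "first-stage boot loader", "kernel"])
```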

1.8 EXERCISE

1. Consider the statements given below:
S1: The OS is designed to maximize utilization of the resources.
S2: The control program manages all the system programs.
Which of the above statements is/are true?
(a) S1 is true, S2 is false (b) S2 is true, S1 is false
(c) Both S1 and S2 are true (d) Both S1 and S2 are false

2. While moving down the memory hierarchy from registers to hard disk, which of the following statements is/are false?
(a) Memory size increases (b) cost per unit decreases
(c) access time increases (d) none of the above

3. RAM belongs to the category of
(a) Volatile memory (b) permanent memory
(c) associative memory (d) none of the above

4. Which one of the following is not a function of operating systems?
(a) Process management (b) device manipulation
(c) Network management (d) power management

5. System calls cannot be grouped into
(a) Process control (b) file management
(c) communications (d) power management

6. The bootstrap loader is always stored in
(a) Cache (b) ROM
(c) RAM (d) disk


7. In a resident-OS computer, which of the following must reside in the main memory under all situations? [GATE – 98, 1 Mark]
(a) Assembler (b) Linker
(c) Loader (d) Compiler

8. Which of the following is an example of a spooled device? [GATE – 98, 1 Mark]
(a) The terminal used to enter the input data for the C program being executed
(b) An output device used to print the output of a number of jobs
(c) The secondary memory device used in a virtual storage system
(d) The swapping area on a disk used by the swapper

9. Listed below are some operating system abstractions (in the left column) and hardware components (in the right column). [GATE – 99, 1 Mark]
A. Thread                 1. Interrupt
B. Virtual address space  2. Memory
C. File system            3. CPU
D. Signal                 4. Disk
(a) (A) – 2 (B) – 4 (C) – 3 (D) – 1 (b) (A) – 1 (B) – 2 (C) – 3 (D) – 4
(c) (A) – 3 (B) – 2 (C) – 4 (D) – 1 (d) (A) – 4 (B) – 1 (C) – 2 (D) – 3

10. System calls are usually invoked by using [GATE – 99, 1 Mark]
(a) A software interrupt (b) polling
(c) an indirect jump (d) a privileged instruction

11. A multi-user, multi-processing operating system cannot be implemented on hardware that does not support [GATE – 99, 2 Marks]
(a) Address translation
(b) DMA for disk transfer
(c) At least two modes of CPU execution (privileged and non-privileged)
(d) Demand paging

12. A processor needs a software interrupt to [GATE – 2001, 1 Mark]
(a) Test the interrupt system of the processor
(b) Implement co-routines
(c) Obtain system services which need execution of privileged instructions
(d) Return from subroutine

13. A CPU has two modes: privileged and non-privileged. In order to change the mode from privileged to non-privileged [GATE – 2001, 1 Mark]
(a) A hardware interrupt is needed
(b) A software interrupt is needed
(c) A privileged instruction (which does not generate an interrupt) is needed
(d) A non-privileged instruction (which does not generate an interrupt) is needed

14. Which of the following requires a device driver? [GATE – 2001, 1 Mark]
(a) Register
(b) Cache


(c) Main memory
(d) Disk

15. Which of the following does not interrupt a running process? [GATE – 2001, 2 Marks]
(a) A device
(b) A timer
(c) Scheduler process
(d) Power failure

16. Which combination of the following features will suffice to characterize an OS as a multi-programmed OS? [GATE – 2002, 2 Marks]
(A) More than one program may be loaded into main memory at the same time for execution
(B) If a program waits for certain events such as I/O, another program is immediately scheduled for execution
(C) If the execution of a program terminates, another program is immediately scheduled for execution
(a) A (b) A & B (c) A & C (d) A, B & C

17. A CPU generally handles an interrupt by executing an interrupt service routine [GATE – 2009, 1 Mark]
(a) As soon as an interrupt is raised
(b) By checking the interrupt register at the end of the fetch cycle
(c) By checking the interrupt register after finishing the execution of the current instruction
(d) By checking the interrupt register at fixed time intervals


CHAPTER 2

HISTORY OF OPERATING SYSTEMS

Present-day operating systems were not developed overnight. Like any other system, operating systems have evolved over a period of time, starting from very primitive systems to the present-day complex and versatile ones. A brief description of the evolution of operating systems follows.

2.1 SIMPLE BATCH SYSTEMS

Computers in the early days of their inception were very bulky, large machines usually run from a console. I/O devices consisted of card readers, tape drives and line printers. Direct user interaction with the system did not exist. Users prepared a job consisting of programs, data and control information. The job was submitted to an operator who would execute it on the computer system. The output appeared after minutes, hours or sometimes days. The user collected the output from the operator; it also included a memory dump. The operating system was very simple and its major task was to transfer control from one job to another. The operating system was resident in memory (Figure 2.1).

Figure 2.1: Memory layout for a simple batch system (the operating system occupies one part of memory, the user area the rest)

To speed up processing, jobs with the same needs were batched together and executed as a group. For example, all FORTRAN jobs were batched together for execution; all COBOL jobs


were batched together for execution; and so on. Thus batch operating systems came into existence.

Even though processing speed increased to a large extent because of batch processing, the CPU was often idle. This is because of the disparity between the operating speeds of electronic devices like the CPU and mechanical I/O devices: the CPU operates in the microsecond/nanosecond range, whereas I/O devices work in the second/minute range. With improvements in technology I/O devices became faster, but CPU speeds increased even faster, so the disparity in operating speeds only widened.

2.2 SPOOLING

The introduction of disks brought in the concept of spooling. Instead of reading from a slow input device like a card reader into the computer memory and then processing the job, the input is first read onto the disk. When the job is processed or executed, the input is read directly from the disk. Similarly, when a job is executed for printing, the output is written into a buffer on the disk and actually printed later. This form of processing is known as spooling, an acronym for Simultaneous Peripheral Operation On-Line. Spooling uses the disk as a large buffer to read ahead as far as possible on input devices and to store output until the output devices are able to accept it.

Spooling overlaps the I/O of one job with the computation of other jobs. For example, the spooler may be reading the input of one job while printing the output of another and executing a third job (Figure 2.2).

Figure 2.2: Spooling (the card reader and printer exchange data with the disk, which in turn feeds the CPU)


Spooling increases the performance of the system by allowing both the faster CPU and the slower I/O devices to work at higher operating rates.
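The role of the disk as a buffer can be sketched with two queues. The job names and the `deque`-based model are purely illustrative:

```python
from collections import deque

# Spooling: the disk acts as a buffer between slow devices and the CPU.
# Input is read ahead onto the input spool; output waits on the output
# spool until the printer is free.
input_spool, output_spool = deque(), deque()

def card_reader(jobs):              # slow input device fills the disk buffer
    input_spool.extend(jobs)

def cpu_step():                     # CPU takes work from disk, not the reader
    job = input_spool.popleft()
    output_spool.append(f"{job}:done")

card_reader(["job1", "job2"])
cpu_step()
cpu_step()
printed = list(output_spool)        # printer drains the output spool later
```

Because the CPU reads from the disk rather than the card reader, it is never stalled waiting on the mechanical device.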

2.3 MULTIPROGRAMMED BATCH SYSTEMS

The concept of spooling introduced an important data structure called the job pool. Spooling creates a number of jobs on the disk which are ready to be executed/processed. The operating system now has to choose from the job pool a job that is to be executed next. This increases CPU utilization. Since the disk is a direct-access device, jobs in the job pool may be scheduled for execution in any order, not necessarily in sequential order.

Job scheduling brings in the ability of multiprogramming. A single user cannot keep both the CPU and the I/O devices busy. Multiprogramming increases CPU utilization by organizing jobs in such a manner that the CPU always has a job to execute.

The idea of multiprogramming can be described as follows. A job pool on the disk consists of a number of jobs that are ready to be executed (Figure 2.3). A subset of these jobs resides in memory during execution. The operating system picks and executes one of the jobs in memory. When the job in execution needs an I/O operation to complete, the CPU would be idle; instead of waiting for the job to complete the I/O, the CPU switches to another job in memory. When the previous job has completed its I/O, it rejoins the subset of jobs waiting for the CPU. As long as there are jobs in memory waiting for the CPU, the CPU is never idle. Choosing one out of several ready jobs in memory for execution by the CPU is called CPU scheduling.

Figure 2.3: Memory layout for a multiprogrammed system (the operating system plus several jobs, Job 1 to Job 4, each in its own user area)
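The switch-on-I/O idea can be sketched as a naive interleaving of jobs, where each job is a list of CPU and I/O bursts and the scheduler always moves on to another ready job. This is a simplification invented for illustration; a real system overlaps the I/O burst with other jobs' computation rather than serializing events as this timeline does:

```python
# Each job is a list of bursts ("cpu" or "io"). When one job's burst
# ends, the CPU immediately picks another ready job instead of idling.
def run_multiprogrammed(jobs):
    timeline = []
    ready = list(range(len(jobs)))
    while ready:
        j = ready.pop(0)             # pick the next ready job
        burst = jobs[j].pop(0)
        timeline.append((j, burst))
        if jobs[j]:                  # job not finished yet:
            ready.append(j)          # it rejoins the ready set
    return timeline

tl = run_multiprogrammed([["cpu", "io", "cpu"], ["cpu", "cpu"]])
```

The interleaved timeline shows job 1 getting the CPU precisely while job 0 is off doing I/O.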


2.4 TIME SHARING SYSTEMS

Multiprogrammed batch systems are suited for executing large jobs that need very little or no user interaction. Interactive jobs, on the other hand, need on-line communication between the user and the system. Time-sharing, or multitasking, is a logical extension of multiprogramming: it provides interactive use of the system.

CPU scheduling and multiprogramming provide each user one time slice (slot) in a time-shared system. A program in execution is referred to as a process. A process executes for one time slice at a time; it may need more than one time slice to complete. During a time slice a process may finish execution or go for I/O. The I/O in this case is usually interactive, like a user response from the keyboard or a display on the monitor. The CPU switches between processes at the end of a time slice. The switching between processes is so fast that the user gets the illusion that the CPU is executing only that user's process.
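Time slicing can be sketched as a minimal round-robin loop; the burst times and the quantum below are hypothetical:

```python
from collections import deque

# Round-robin time sharing: each process runs for one quantum, then
# goes to the back of the ready queue if it still needs CPU time.
def time_share(burst_times, quantum):
    queue = deque(enumerate(burst_times))   # (pid, remaining time)
    order = []
    while queue:
        pid, left = queue.popleft()
        order.append(pid)                   # process runs for one slice
        left -= quantum
        if left > 0:
            queue.append((pid, left))       # needs more slices; requeue
    return order

order = time_share([3, 5, 2], quantum=2)
```

Every process appears early in the execution order, which is exactly the interactivity time sharing is after.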

Multiprogramming and time-sharing are the main concepts in all modern operating systems.

2.5 SUMMARY

This chapter has traced the evolution of operating systems, from simple batch systems to the latest multiprogrammed, time-sharing systems. We have also studied the concept of spooling, where the disk acts as a buffer and queues up jobs for the CPU to execute; this increases system performance and allows a faster device like the CPU to work with slower I/O devices. Multiprogramming allows many ready jobs to be present in memory simultaneously for execution, which helps increase CPU utilization.

2.6 EXERCISE

1. A system with multiple CPUs is called
(a) Time sharing systems (b) desktop systems
(c) Client-server systems (d) parallel processing systems


2. Time sharing provides
(a) Disk management (b) file system management
(c) concurrent execution (d) all of the above

3. When more than one process runs concurrently on a system, it is known as a
(a) Batched system (b) real-time system
(c) multiprogramming system (d) multiprocessing system

4. Which of the following is a spooled device?
(a) Printer (b) hard disk
(c) mouse (d) scanner


CHAPTER 3

PROCESS MANAGEMENT

The operating system is a CPU manager, since it manages one of the system resources: CPU computing time. The CPU is a very high-speed resource that must be used efficiently and to the maximum. Users submit jobs to the system for execution. These jobs run on the system as processes, which need to be managed and are therefore scheduled.

3.1 WHAT IS A PROCESS?

A program in execution is a process. A process is executed sequentially, one instruction at a time. A program is a passive entity, for example, a file on the disk. A process, on the other hand, is an active entity. In addition to the program code, it includes the value of the program counter, the contents of the CPU registers, the global variables in the data section and the contents of the stack that is used for subroutine calls.

3.2 PROCESS STATE

A process, being an active entity, changes state as execution proceeds. A process can be in any one of the following states:

New: Process being created.

Running: Instructions being executed.

Waiting: Process waiting for an event to occur.

Ready: Process waiting for CPU.

Terminated: Process has finished execution.

A state diagram (Figure 3.1) is used to represent the states and the events that trigger the change of state of a process in execution.
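The legal transitions of Figure 3.1 can be captured in a small table; an illegal move simply has no triggering event:

```python
# Legal process state transitions and the events that trigger them.
TRANSITIONS = {
    ("new", "ready"):          "admitted",
    ("ready", "running"):      "dispatch",
    ("running", "ready"):      "interrupt (time slice expired)",
    ("running", "waiting"):    "I/O request",
    ("waiting", "ready"):      "I/O complete",
    ("running", "terminated"): "exit",
}

def trigger(src, dst):
    return TRANSITIONS.get((src, dst))   # None if the transition is illegal

event = trigger("running", "waiting")
```

Note that a waiting process can only become ready, never running directly: it must be dispatched again like any other ready process.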


3.3 PROCESS CONTROL BLOCK

Every process has a number, and a process control block (PCB) represents a process in an operating system. The PCB contains information that makes the process an active entity: the process number, its state, the value of the program counter, the contents of the CPU registers and much more, as shown below (Figure 3.2). The PCB serves as a repository of information about a process and varies from process to process.

Figure 3.1: Process state diagram (states: new, ready, running, waiting, terminated; transitions: admitted, dispatch, interrupt, I/O request, I/O complete, exit)

Figure 3.2: Process control block

The process state gives the current state of the process: new, ready, running, waiting or terminated.

(Figure 3.2 fields, top to bottom: pointer, process state, process number, program counter, CPU registers, memory management information, I/O information, …)


The program counter contains the address of the next instruction to be executed.

The contents of the CPU registers, which include the accumulator, general-purpose registers, index register, stack pointer and others, are required when the CPU switches from one process to another for a successful continuation.

CPU scheduling information consists of parameters for scheduling the various processes on the CPU.

Memory management information depends on the memory system used by the operating system. It includes information pertaining to base registers, limit registers, page tables, segment tables and other related details.

The process number and the amount of CPU time used by the process are stored for accounting purposes, in case users are to be billed for the CPU time used.

I/O information gives details about the I/O devices allotted to the process, the number of files that are open and in use by the process, and so on.
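The fields described above can be collected into a sketch of a PCB. Real kernels keep far more state (Linux's `task_struct`, for instance), so this is only an outline:

```python
from dataclasses import dataclass, field

# A minimal process control block holding the fields described above.
@dataclass
class PCB:
    pid: int
    state: str = "new"
    program_counter: int = 0
    cpu_registers: dict = field(default_factory=dict)
    memory_info: dict = field(default_factory=dict)   # base/limit, tables
    open_files: list = field(default_factory=list)    # I/O information
    cpu_time_used: int = 0                            # for accounting

pcb = PCB(pid=7)
pcb.state = "ready"          # admitted into the ready queue
```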

3.4 PROCESS SCHEDULING

The main objective of multiprogramming is to see that some process is always running so as to maximize CPU utilization, whereas in the case of time sharing the CPU is switched between processes frequently so that users can interact with the system while their programs are executing. In a uniprocessor system there is always a single process running, while the other processes wait until they are scheduled and get the CPU.

As a process enters the system, it joins a job queue, which is a list of all processes in the system. Some of these processes are in the ready state, waiting for the CPU; these are kept in a ready queue. The ready queue is simply a list of PCBs, implemented as a linked list with each PCB pointing to the next.

There are also processes that are waiting for some I/O operation, like reading from a file on the disk or writing to a printer. Such processes are kept in device queues; each device has its own queue.

A new process first joins the ready queue. When it gets scheduled, the CPU executes the process until


1. an I/O occurs, and the process gives up the CPU to join a device queue, rejoining the ready queue after being serviced for the I/O; or

2. it gives up the CPU on expiry of its time slice and rejoins the ready queue.

Every process stays in this cycle until it terminates (Figure 3.3). Once a process terminates, its entries in all the queues are deleted, and the PCB and the resources allocated to it are released.

Figure 3.3: Queueing diagram of process scheduling (processes move from the ready queue to the CPU, and back to it either via the time-slice-expired path or through an I/O request, an I/O queue and the I/O operation)
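The queueing cycle can be sketched with a ready queue and one device queue; the process identifiers are arbitrary:

```python
from collections import deque

# A ready queue and one device queue; a process cycles between them
# (Figure 3.3 in miniature).
ready, disk_queue = deque(), deque()

def schedule():
    return ready.popleft()              # scheduler picks the next process

ready.extend([1, 2, 3])                 # three new processes join ready
running = schedule()                    # process 1 gets the CPU
disk_queue.append(running)              # it issues an I/O request
ready.append(disk_queue.popleft())      # I/O done: rejoin the ready queue
```

After the I/O completes, process 1 sits at the back of the ready queue behind processes 2 and 3, exactly as the cycle requires.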

3.5 SCHEDULERS

At any given instant of time, a process is in one of the new, ready, running, waiting or terminated states, and it moves from one state to another as long as it is active. The operating system scheduler schedules processes from the ready queue for execution by the CPU, selecting one of the many processes in the ready queue based on certain criteria.

Schedulers could be any one of the following:

Long-term scheduler

Short-term scheduler

Medium-term scheduler

Many jobs could be ready for execution at the same time, and more than one of these could be spooled onto the disk. The long-term scheduler, or job scheduler as it is called, picks and loads processes into memory from among the set of ready jobs. The short-term scheduler, or CPU scheduler, selects a process from among the ready processes to execute on the CPU.



The long-term scheduler and the short-term scheduler differ in the frequency of their execution. The short-term scheduler has to select a new process for CPU execution quite often, since processes execute for short intervals before waiting for I/O requests. Hence the short-term scheduler must be very fast, or else the CPU will spend its time only on scheduling work.

The long-term scheduler executes less frequently, since new processes are not created at the same pace at which processes need to be executed. The number of processes present in the ready queue determines the degree of multiprogramming, so the long-term scheduler determines this degree. If the long-term scheduler makes a good selection of new jobs to join the ready queue, then the average rate of process creation almost equals the average rate of processes leaving the system. The long-term scheduler is thus invoked only when processes leave the system, to bring in a new process and so maintain the degree of multiprogramming.

Choosing one job from the set of ready jobs to join the ready queue needs careful selection. Most processes can be classified as either I/O bound or CPU bound, depending on the time spent on I/O versus the time spent on CPU execution. I/O-bound processes spend more time executing I/O, whereas CPU-bound processes require more CPU time. A good selection of jobs by the long-term scheduler gives a good mix of both CPU-bound and I/O-bound processes; in that case both the CPU and the I/O devices are kept busy. If not, either the CPU is busy and the I/O devices are idle, or the I/O devices are busy and the CPU is idle. Time-sharing systems usually do not require the services of a long-term scheduler, since every ready process gets one time slice of CPU time at a time, in rotation.

Sometimes processes keep switching between the ready, running and waiting states, with termination taking a long time. One reason for this could be an increased degree of multiprogramming, meaning more ready processes than the system can handle. The medium-term scheduler handles such a situation: when system throughput falls below a threshold, some of the ready processes are swapped out of memory to reduce the degree of multiprogramming. Some time later these swapped processes are reintroduced into memory to join the ready queue.


3.6 CONTEXT SWITCH

CPU switching from one process to another requires saving the state of the current process and loading the saved state of the next process. This is known as a context switch (Figure 3.4). The time required for a context switch is pure overhead, since the system does no useful work during the switch. Context switch time varies from machine to machine depending on the speed, the amount of saving and loading to be done, hardware support and so on.

Figure 3.4: CPU switch from process to process (process P0 executes until an interrupt or system call; the operating system saves P0's status into PCB0 and reloads P1's status from PCB1; P1 then executes until the next interrupt, when its status is saved into PCB1 and P0's status is reloaded from PCB0)
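The save/reload steps of Figure 3.4 can be sketched directly. The PCB here records just a program counter and registers, which is far less than a real PCB holds:

```python
# A context switch saves the running process's CPU state into its PCB
# and reloads the saved state of the next process.
def context_switch(current_pcb, next_pcb, cpu):
    current_pcb["pc"] = cpu["pc"]            # save status into PCB0
    current_pcb["regs"] = dict(cpu["regs"])
    cpu["pc"] = next_pcb["pc"]               # reload status from PCB1
    cpu["regs"] = dict(next_pcb["regs"])

cpu = {"pc": 120, "regs": {"acc": 7}}        # P0 currently running
p0 = {"pc": 0, "regs": {}}
p1 = {"pc": 300, "regs": {"acc": 99}}
context_switch(p0, p1, cpu)                  # P1 now runs where it left off
```

Every line of this function is overhead: no user instruction executes while the switch is in progress, which is why fast context switching matters.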


3.7 ADVANCED CONCEPTS

Multitasking and multithreading are two advanced concepts in process management. A short description of each of them is given below.

3.7.1 MULTITASKING

A multi-user operating system supports multiple processes at the same time. Similarly, a multitasking operating system supports processes each of which can consist of multiple tasks. Multiple tasks run concurrently within a process. A task can thus be defined as an asynchronous path of code within a process. Let us analyze the following illustration.

Illustration: a program reads records from a tape, processes them and writes each of them onto the disk. A simple algorithm for this could be as follows:

Begin
    While not end-of-file
    Begin
        Read a record
        Process the record
        Write the record
    End
End

Let round-robin scheduling with a time slice of 'n' units be used. When the above process is scheduled, 'n' units of CPU time are allocated to it. But because of the Read, a system call is generated, and the process uses only a fraction of its time slice. When the process is ready after the I/O and gets its next time slice, it again almost immediately goes for an I/O because of the Write. This cycle continues until all the records on the tape have been read, processed and written onto the disk. Being an I/O-bound process, it spends most of its time executing I/O.

Multitasking provides a solution to the problem described above. The entire process of reading, processing and writing is divided into two tasks. The advantage here is that the two


tasks can run concurrently when synchronized. The executable code for the two tasks is maintained separately, as shown below:

Begin
    While not end-of-file
    Begin
        Task-0-begin
            Read a record
            Process the record
        Task-0-end
        Task-1-begin
            Write a record
        Task-1-end
    End
End

A process in execution has a process control block (PCB) associated with it, created by the operating system. Similarly, each task created needs a task control block (TCB). The TCB saves register information at the time of switching between tasks within a process.

Similar to processes, tasks exist in different states such as ready, blocked or running; they can also have priorities. TCBs are linked in queues and scheduled for execution.

When a process with multiple tasks is scheduled by the operating system and the process gets its time slice for execution:

- A task of the process with the highest priority is scheduled.
- When the time slice expires, the entire process, along with all its tasks, goes into the ready state.
- If the allotted time slice is not over but the current task is either over or blocked, then the operating system searches for and executes the next task of the process. Thus a process does not get blocked as long as at least one of its tasks can still execute within the allocated time slice.

Different tasks need to communicate with each other. Hence inter-task communication and synchronization are necessary.


An operating system with multitasking is more complex than one that does not support multitasking. CPU utilization improves with multitasking. For example, when a task is blocked, another task of the same process can be scheduled instead of blocking the entire process, as is the case without multitasking. This also reduces the overhead of a context switch at the process level. A context switch at the task level does occur, but it has less overhead than a context switch at the process level. The more I/O and processing involved within a process, the greater the advantage of dividing the process into tasks.
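The two-task split above can be sketched with Python threads and a shared queue. This is an illustrative sketch, not from the text: `run_pipeline`, the record list and the `upper()` "processing" step are hypothetical stand-ins for the tape read, record processing and disk write.

```python
import threading
import queue

def run_pipeline(records):
    """Task 0 reads and processes records; task 1 writes them.
    The tasks overlap: while task 1 is busy "writing" a record,
    task 0 can already be "reading" the next one."""
    buffer = queue.Queue()          # synchronizes the two tasks
    written = []

    def task0():                    # read + process
        for rec in records:         # stands in for "read a record from tape"
            buffer.put(rec.upper()) # stands in for "process the record"
        buffer.put(None)            # end-of-file marker

    def task1():                    # write
        while True:
            rec = buffer.get()
            if rec is None:
                break
            written.append(rec)     # stands in for "write the record to disk"

    t0 = threading.Thread(target=task0)
    t1 = threading.Thread(target=task1)
    t0.start(); t1.start()
    t0.join(); t1.join()
    return written

print(run_pipeline(["r1", "r2", "r3"]))  # ['R1', 'R2', 'R3']
```

The queue plays the role of the inter-task synchronization the text calls for: task 1 blocks only on the buffer, not on the whole process.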

3.7.2 MULTITHREADING

A thread is similar to a task, and multithreading is a concept similar to multitasking. A thread, like a process, has a program counter, a stack and an area for saving register contents. The threads of a process share the same address space, that is, different threads can read and write the same memory locations. Hence inter-thread communication is easy. Threads can be created dynamically, their priorities changed, and so on.

Another concept is that of a session. A session, like a process, can give rise to child sessions. Thus there exists a hierarchy consisting of sessions, each session having multiple processes and each process having multiple threads, as shown below (Figure 11.1).

[Figure: a hierarchy of sessions (session 0 with child sessions 1, 2 and 3), each session containing processes and each process containing threads]

Figure 11.1: Process hierarchy

The scheduling algorithm in such a case is more complex. Multiple sessions are used to simulate multi-window environments on dumb terminals. Each session has its own virtual terminal with a keyboard, screen, etc. Each session can be used to run different processes, for example one for a foreground process (interactive editing), one for a background process (sorting), one for another background process (compilation) and so on. Use of a hot key (a combination of keys) allows users to cycle through the various sessions.

3.8 SUMMARY

This chapter has introduced the concept of a process. A process is a program in execution. It goes through various states during its life cycle. Information required for the running of a process is maintained in a process control block. As explained in the previous chapter, multiprogramming keeps more than one ready process in memory. This calls for scheduling these processes for efficient use of the CPU. A proper mix of CPU and I/O bound jobs (long-term scheduler) and proper scheduling of these processes (short-term scheduler) increase CPU utilization. We have also learnt two new concepts: multitasking and multithreading.

3.9 EXERCISE

1. Whenever an interrupt occurs, the following activity takes place:
(a) Process switching (b) Context switching
(c) Both (a) & (b) (d) None of the above

2. Which of the following instructions should be executed in privileged mode?
(a) Set value of timer (b) Clear memory
(c) Disable interrupts (d) All of the above

3. The contents of the program counter register represent:
(a) The address of the current instruction to execute
(b) The address of the previous instruction to execute
(c) The address of the next instruction to execute
(d) The address of the last instruction to execute

4. Which of the following actions is/are typically not performed by the operating system when switching context from process A to process B? [GATE – 99, 2 Marks]
(a) Saving current register values for process A and restoring saved register values for process B
(b) Changing address translation tables
(c) Swapping out the memory image of process A to the disk
(d) Invalidating the translation look-aside buffer

5. Which of the following need not necessarily be saved on a context switch between processes? [GATE – 2000, 1 Mark]
(a) General purpose registers
(b) Translation look-aside buffer
(c) Program counter
(d) All of the above

6. Consider the following statements with respect to user-level threads and kernel-supported threads: [GATE – 2004, 1 Mark]
(a) Context switch is faster with kernel-supported threads
(b) For user-level threads, a system call can block the entire process
(c) Kernel-supported threads can be scheduled independently
(d) User-level threads are transparent to the kernel

7. A process executes the following code:
for (i = 0; i < n; i++) fork();
The total number of child processes created is [GATE 2008 – 2 Marks]
(a) n (b) 2^n – 1 (c) 2^n (d) 2^(n–1) – 1

8. In the following process state transition diagram for a uniprocessor system, assume that there are always some processes in the ready state:

[Figure: process state transition diagram]

Now consider the following statements:
I. If a process makes a transition D, it would result in another process making transition A immediately.
II. A process P2 in blocked state can make transition E while another process P1 is in running state.
III. The OS uses preemptive scheduling.
IV. The OS uses non-preemptive scheduling.

Which of the above statements are TRUE? [GATE 2009 – 2 Marks]

(A) I and II (B) I and III (C) II and III (D) II and IV

9. A signal is a virtual interrupt which is created by which of the following?
(a) Hardware (b) OS
(c) PCB (d) TLB

10. The Process Control Blocks (PCBs) of all running processes reside in which of the following?
(a) RAM (b) Hard disk
(c) Cache (d) None of the above

11. If a system contains n processors and n processes, then what will be the maximum and minimum number of processes in the running state, respectively?
(a) n, n (b) n, 0
(c) n², 0 (d) n², n²

12. If a process on the system issues an I/O request, then the process will be placed in which of the following?
(a) Ready queue (b) Running state
(c) Ready queue (d) I/O queue

13. Match List I with List II and select the correct answer using the codes given below the lists:

List I                 List II
A. Run -> Ready        1. Not possible
B. Run -> Blocked      2. When a process terminates itself
C. Blocked -> Run      3. When a process's time quantum expires
D. Run -> Terminated   4. When a process issues an input/output request

Codes:
    A  B  C  D
(a) 1  4  3  2
(b) 2  1  3  4
(c) 3  4  1  2
(d) 1  4  3  2


CHAPTER 4

CPU SCHEDULING

The CPU scheduler selects a process from among the ready processes to execute on the CPU. CPU scheduling is the basis for multiprogrammed operating systems. CPU utilization increases by switching the CPU among ready processes instead of waiting for each process to terminate before executing the next.

The idea of multiprogramming can be described as follows: a process is executed by the CPU until it completes or goes for an I/O. In simple systems with no multiprogramming, the CPU is idle till the process completes the I/O and restarts execution. With multiprogramming, many ready processes are maintained in memory. So when the CPU becomes idle as in the case above, the operating system switches to execute another process each time the current process goes into a wait for I/O.

4.1 CPU – I/O BURST CYCLE

Process execution consists of alternating CPU execution and I/O wait. A cycle of these two events repeats till the process completes execution (Figure 4.1). Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst and so on. Eventually a CPU burst will terminate the execution. An I/O bound job will have short CPU bursts and a CPU bound job will have long CPU bursts.


:
Load memory
Add to memory          } CPU burst
Read from file
  Wait for I/O         } I/O burst
Load memory
Make increment         } CPU burst
Write into file
  Wait for I/O         } I/O burst
Load memory
Add to memory          } CPU burst
Read from file
  Wait for I/O         } I/O burst
:

Figure 4.1: CPU and I/O bursts

4.2 PREEMPTIVE / NONPREEMPTIVE SCHEDULING

The CPU scheduler has to take a decision when a process:
1. switches from running state to waiting state (an I/O request);
2. switches from running state to ready state (expiry of a time slice);
3. switches from waiting state to ready state (completion of an I/O);
4. terminates.

Scheduling under condition (1) or (4) is said to be nonpreemptive. In nonpreemptive scheduling, a process once allotted the CPU keeps executing until the CPU is released either by a switch to a waiting state or by termination. Preemptive scheduling occurs under condition (2) or (3). In preemptive scheduling, an executing process is stopped and returned to the ready queue to make the CPU available for another ready process.


4.3 SCHEDULING CRITERIA

Many algorithms exist for CPU scheduling, and various criteria have been suggested for comparing them. Common criteria include:

1. CPU utilization: This may ideally range from 0% to 100%. In real systems it ranges from about 40% for a lightly loaded system to 90% for a heavily loaded system.

2. Throughput: The number of processes completed per unit time is the throughput. For long processes, throughput may be of the order of one process per hour, whereas for short processes it may be 10 or 12 processes per second.

3. Turnaround time: The interval between submission and completion of a process is called the turnaround time. It includes both execution time and waiting time: turnaround time = waiting time + execution time.

4. Waiting time: The sum of all the periods a process spends waiting in the ready queue for allocation of the CPU is called the waiting time.

5. Response time: In an interactive process, the user may start using some output while the process continues to generate new results. Instead of the turnaround time, which gives the difference between the time of submission and the time of completion, the response time is sometimes used. Response time is the difference between the time of submission and the time the first response is produced.

Desirable features are maximum CPU utilization and throughput, and minimum turnaround time, waiting time and response time.

4.4 SCHEDULING ALGORITHMS

Scheduling algorithms differ in the manner in which the CPU selects a process from the ready queue for execution.

4.4.1 FIRST COME FIRST SERVE (FCFS) SCHEDULING ALGORITHM

This is one of the simplest algorithms. A process that requests the CPU first is allocated the CPU first; hence the name first come first serve. The FCFS algorithm is implemented by using a first-in-first-out (FIFO) queue structure for the ready queue. This queue has a head and a tail. When a process joins the ready queue, its PCB is linked to the tail of the FIFO queue. When the CPU is idle, the process at the head of the FIFO queue is allocated the CPU and deleted from the queue.

Even though the algorithm is simple, the average waiting time is often quite long and varies substantially if the CPU burst times vary greatly, as seen in the following example. Consider a set of three processes P1, P2 and P3 arriving at time instant 0 and having CPU burst times as shown below:

Process    Burst time (msecs)
P1         24
P2         3
P3         3

The Gantt chart below shows the result:

|   P1   | P2 | P3 |
0        24   27   30

Average waiting time and average turnaround time are calculated as follows:
The waiting time for process P1 = 0 msecs
                            P2 = 24 msecs
                            P3 = 27 msecs
Average waiting time = (0 + 24 + 27) / 3 = 51 / 3 = 17 msecs.
P1 completes at the end of 24 msecs, P2 at the end of 27 msecs and P3 at the end of 30 msecs.
Average turnaround time = (24 + 27 + 30) / 3 = 81 / 3 = 27 msecs.
If the processes arrive in the order P2, P3 and P1, then the result will be as follows:

| P2 | P3 |   P1   |
0    3    6        30


Average waiting time = (0 + 3 + 6) / 3 = 9 / 3 = 3 msecs.
Average turnaround time = (3 + 6 + 30) / 3 = 39 / 3 = 13 msecs.
Thus if processes with smaller CPU burst times arrive earlier, the average waiting and average turnaround times are smaller.

The algorithm also suffers from what is known as the convoy effect. Consider the following scenario. Let there be a mix of one CPU bound process and many I/O bound processes in the ready queue. The CPU bound process gets the CPU and executes (a long CPU burst). In the meanwhile, the I/O bound processes finish their I/O and wait for the CPU, leaving the I/O devices idle. The CPU bound process releases the CPU when it goes for an I/O. The I/O bound processes have short CPU bursts, so they execute and go for I/O quickly, and the CPU is idle till the CPU bound process finishes its I/O and gets hold of the CPU again. The above cycle repeats. This is called the convoy effect: small processes wait for one big process to release the CPU. Since the algorithm is nonpreemptive in nature, it is not suited for time sharing systems.
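The FCFS arithmetic in the example above can be checked with a short simulation. This is a sketch, not from the text, and it assumes all processes arrive at time 0 (so each turnaround time equals its completion time):

```python
def fcfs(bursts):
    """Simulate first-come-first-serve scheduling.
    bursts: CPU burst times in arrival order, all arriving at time 0.
    Returns (average waiting time, average turnaround time)."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)        # time spent waiting behind earlier processes
        clock += burst
        turnaround.append(clock)     # completion time = turnaround (arrival at 0)
    n = len(bursts)
    return sum(waiting) / n, sum(turnaround) / n

# The chapter's example: P1 = 24, P2 = 3, P3 = 3
print(fcfs([24, 3, 3]))   # (17.0, 27.0)
print(fcfs([3, 3, 24]))   # (3.0, 13.0)
```

Running the shorter bursts first reproduces the drop from 17 to 3 msecs average waiting time shown in the text.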

4.4.2 SHORTEST JOB FIRST (SJF) SCHEDULING ALGORITHM

Another approach to CPU scheduling is the shortest job first algorithm. In this algorithm, the length of the next CPU burst is considered. When the CPU is available, it is assigned to the process that has the smallest next CPU burst; hence the name shortest job first. In case of a tie, FCFS scheduling is used to break it. As an example, consider the following set of processes P1, P2, P3, P4 and their CPU burst times:

Process    Burst time (msecs)
P1         6
P2         8
P3         7
P4         3


Using the SJF algorithm, the processes would be scheduled as shown below:

| P4 |  P1  |  P3  |  P2  |
0    3      9      16     24

Average waiting time = (0 + 3 + 9 + 16) / 4 = 28 / 4 = 7 msecs.
Average turnaround time = (3 + 9 + 16 + 24) / 4 = 52 / 4 = 13 msecs.

If the above processes were scheduled using the FCFS algorithm, then:
Average waiting time = (0 + 6 + 14 + 21) / 4 = 41 / 4 = 10.25 msecs.
Average turnaround time = (6 + 14 + 21 + 24) / 4 = 65 / 4 = 16.25 msecs.

The SJF algorithm produces an optimal schedule. For a given set of processes, the algorithm gives the minimum average waiting and turnaround times. This is because shorter processes are scheduled earlier than longer ones, and hence the waiting time of a shorter process decreases more than it increases the waiting time of the longer processes.
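When all processes arrive at time 0, nonpreemptive SJF is simply FCFS applied to the bursts in ascending order, which a few lines of Python can verify. This sketch is not from the text; it assumes simultaneous arrival at time 0:

```python
def sjf(bursts):
    """Nonpreemptive shortest-job-first with all processes arriving at 0:
    equivalent to running the bursts in ascending order of length.
    Returns (average waiting time, average turnaround time)."""
    waiting, turnaround, clock = [], [], 0
    for burst in sorted(bursts):     # shortest next CPU burst goes first
        waiting.append(clock)
        clock += burst
        turnaround.append(clock)
    n = len(bursts)
    return sum(waiting) / n, sum(turnaround) / n

# The chapter's example: P1 = 6, P2 = 8, P3 = 7, P4 = 3
print(sjf([6, 8, 7, 3]))   # (7.0, 13.0)
```

This reproduces the 7 and 13 msec averages of the worked example, versus 10.25 and 16.25 msecs under FCFS order.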

The main disadvantage of the SJF algorithm lies in knowing the length of the next CPU burst. In the case of long-term or job scheduling in a batch system, the time required to complete a job as given by the user can be used for scheduling. The SJF algorithm is therefore applicable to long-term scheduling.

The algorithm cannot be implemented exactly for short-term CPU scheduling, as there is no way to accurately know in advance the length of the next CPU burst. Only an approximation of the length can be used to implement the algorithm.

But the SJF scheduling algorithm is provably optimal and thus serves as a benchmark against which to compare other CPU scheduling algorithms.

The SJF algorithm can be either preemptive or nonpreemptive. If a new process joins the ready queue with a next CPU burst shorter than what remains of the currently executing process, then under preemptive SJF the CPU is allocated to the new process. Under nonpreemptive scheduling, the currently executing process is not preempted, and the new process gets the next chance, it being the process with the shortest next CPU burst.

Given below are the arrival and burst times of four processes P1, P2, P3 and P4:

Process    Arrival time (msecs)    Burst time (msecs)
P1         0                       8
P2         1                       4
P3         2                       9
P4         3                       5

If preemptive SJF scheduling is used, the following Gantt chart shows the result:

| P1 |  P2  |  P4  |   P1   |   P3   |
0    1      5      10       17       26

Average waiting time = ((10 – 1) + 0 + (17 – 2) + (5 – 3)) / 4 = 26 / 4 = 6.5 msecs.
If nonpreemptive SJF scheduling is used, the result is as follows:

|   P1   |  P2  |  P4  |   P3   |
0        8      12     17       26

Average waiting time = (0 + (8 – 1) + (12 – 3) + (17 – 2)) / 4 = 31 / 4 = 7.75 msecs.
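Preemptive SJF, also called shortest remaining time first, can be simulated one time unit at a time. This is a sketch, not from the text; `srtf` is a hypothetical helper that breaks ties by list order:

```python
def srtf(procs):
    """Preemptive SJF (shortest remaining time first).
    procs: list of (arrival, burst) tuples. Returns average waiting time."""
    n = len(procs)
    remaining = [b for _, b in procs]
    finish = [0] * n
    clock, done = 0, 0
    while done < n:
        # processes that have arrived and still need CPU time
        ready = [i for i in range(n) if procs[i][0] <= clock and remaining[i] > 0]
        if not ready:
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining burst
        remaining[i] -= 1                           # run it for one time unit
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
            done += 1
    # waiting time = turnaround - burst = (finish - arrival) - burst
    return sum(finish[i] - procs[i][0] - procs[i][1] for i in range(n)) / n

# The chapter's example: (arrival, burst) for P1..P4
print(srtf([(0, 8), (1, 4), (2, 9), (3, 5)]))   # 6.5
```

The result matches the 6.5 msec average computed from the Gantt chart above.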

SJF has been modified for short-term scheduling use in an interactive environment. The algorithm bases its decision on the expected processor burst time. The expected burst time can be computed as a simple average of the process's previous burst times or as an exponential average. The exponential average for a burst can be computed using the equation

e_(n+1) = α t_n + (1 – α) e_n

where e_n represents the nth expected burst,
t_n represents the length of the burst at time n, and


α is a constant that controls the relative weight given to the most recent burst (if α is zero, the most recent burst is ignored; with α equal to 1, the most recent burst is the only consideration).

Given an operating system that uses the shortest-job-first algorithm for short-term scheduling and exponential averaging with α = 0.5, what will be the next expected burst time for a process with burst times of 5, 8, 3 and 5, and an initial value e_1 of 10?

Solution: Solving for successive values of e:
e_1 = 10
e_2 = 0.5 × 5 + 0.5 × 10 = 7.5
e_3 = 0.5 × 8 + 0.5 × 7.5 = 7.75
e_4 = 0.5 × 3 + 0.5 × 7.75 = 5.375
e_5 = 0.5 × 5 + 0.5 × 5.375 = 5.1875
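The recurrence is a one-line loop in code. This sketch is not from the text; `expected_bursts` is a hypothetical helper that returns the whole sequence of estimates:

```python
def expected_bursts(alpha, bursts, e1):
    """Exponential averaging of CPU burst lengths:
    e[n+1] = alpha * t[n] + (1 - alpha) * e[n]."""
    estimates = [e1]
    for t in bursts:
        estimates.append(alpha * t + (1 - alpha) * estimates[-1])
    return estimates

# alpha = 0.5, observed bursts 5, 8, 3, 5, initial estimate 10
print(expected_bursts(0.5, [5, 8, 3, 5], 10))
# [10, 7.5, 7.75, 5.375, 5.1875]
```

With α = 0.5, each new estimate is the midpoint of the last observed burst and the previous estimate, which is why old history fades quickly.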

4.4.3 PRIORITY SCHEDULING

Each process can be associated with a priority, and the CPU is allocated to the process having the highest priority; hence the name priority scheduling. Equal priority processes are scheduled according to the FCFS algorithm.

The SJF algorithm is a particular case of the general priority algorithm in which the priority is the inverse of the next CPU burst time: the larger the next CPU burst, the lower the priority, and vice versa. In the following example, we will assume that lower numbers represent higher priority.

Process    Priority    Burst time (msecs)
P1         3           10
P2         1           1
P3         3           2
P4         4           1
P5         2           5

Using priority scheduling, the processes are scheduled as shown below:

| P2 |  P5  |    P1    | P3 | P4 |
0    1      6          16   18   19

Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 41 / 5 = 8.2 msecs.

Priorities can be defined either internally or externally. Internal definition of priority is based on some measurable factors like time limits, memory requirements, number of open files, ratio of average CPU burst to average I/O burst and so on. External priorities are defined by criteria such as the importance of the user depending on the user's department and other influencing factors.

Priority based algorithms can be either preemptive or nonpreemptive. In preemptive scheduling, if a new process joins the ready queue with a priority higher than that of the executing process, then the current process is preempted and the CPU allocated to the new process. In the nonpreemptive case, the new process, having the highest priority among the ready processes, is allocated the CPU only after the current process gives up the CPU.

Starvation, or indefinite blocking, is one of the major disadvantages of priority scheduling. Every process is associated with a priority. In a heavily loaded system, low priority processes in the ready queue are starved, that is, they never get a chance to execute, because there is always a higher priority process ahead of them in the ready queue.

A solution to starvation is aging. Aging is a technique whereby the priority of a process waiting in the ready queue is gradually increased. Eventually even the lowest priority process ages enough to attain the highest priority, at which time it gets a chance to execute on the CPU.
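The basic (nonpreemptive, no-aging) form of priority scheduling from the worked example above can be sketched as follows. This is not from the text; `priority_schedule` is a hypothetical helper, and it relies on a stable sort so that equal-priority processes keep their FCFS (list) order:

```python
def priority_schedule(procs):
    """Nonpreemptive priority scheduling, all processes arriving at time 0.
    procs: list of (name, priority, burst); a lower number means higher priority.
    Returns (execution order, average waiting time)."""
    order, waiting, clock = [], [], 0
    # Python's sort is stable, so equal priorities fall back to FCFS order
    for name, _, burst in sorted(procs, key=lambda p: p[1]):
        order.append(name)
        waiting.append(clock)
        clock += burst
    return order, sum(waiting) / len(procs)

procs = [("P1", 3, 10), ("P2", 1, 1), ("P3", 3, 2), ("P4", 4, 1), ("P5", 2, 5)]
print(priority_schedule(procs))   # (['P2', 'P5', 'P1', 'P3', 'P4'], 8.2)
```

Note that P1 runs before P3 despite the equal priority 3, because it comes first in the (arrival) list, matching the chapter's 8.2 msec result.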


4.4.4 ROUND-ROBIN (RR) SCHEDULING ALGORITHM

The round-robin CPU scheduling algorithm is basically a preemptive scheduling algorithm designed for time-sharing systems. One unit of time is called a time slice, and its duration may range between 10 msecs and about 100 msecs. The CPU scheduler allocates to each process in the ready queue one time slice at a time in a round-robin fashion; hence the name round-robin.

The ready queue in this case is a FIFO queue or a circular queue, with new processes joining at the tail. The CPU scheduler picks the process at the head of the queue, which gets to execute on the CPU at the start of the current time slice and is deleted from the ready queue. The process allocated the CPU may have a current CPU burst equal to, smaller than or greater than the time slice. In the first two cases, the current process releases the CPU on its own, and the next process in the ready queue is allocated the CPU for the next time slice. In the third case, the current process is preempted, stops executing and rejoins the ready queue at the tail, thereby making way for the next process.

Consider the same example explained under the FCFS algorithm:

Process    Burst time (msecs)
P1         24
P2         3
P3         3

Let the duration of a time slice be 4 msecs, which is to say the CPU switches between processes every 4 msecs in a round-robin fashion. The Gantt chart below shows the scheduling of processes:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Average waiting time = (4 + 7 + (10 – 4)) / 3 = 17 / 3 = 5.66 msecs.

If there are 5 processes in the ready queue, that is n = 5, and one time slice is defined to be 20 msecs, that is q = 20, then each process will get 20 msecs, or one time slice, every 100 msecs. Each process will never wait for more than (n – 1) × q time units.
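The round-robin example above can also be checked in code. This is a sketch, not from the text; it assumes all processes arrive at time 0 in queue order:

```python
from collections import deque

def round_robin(bursts, q):
    """Round-robin simulation, all processes arriving at time 0.
    bursts: burst time per process, in queue order; q: time slice.
    Returns the average waiting time."""
    n = len(bursts)
    remaining = list(bursts)
    last_ran = [0] * n            # time at which each process last left the CPU
    waiting = [0] * n
    ready = deque(range(n))
    clock = 0
    while ready:
        i = ready.popleft()
        waiting[i] += clock - last_ran[i]   # time just spent in the ready queue
        run = min(q, remaining[i])
        clock += run
        remaining[i] -= run
        last_ran[i] = clock
        if remaining[i] > 0:
            ready.append(i)                 # preempted: rejoin at the tail
    return sum(waiting) / n

# The chapter's example: P1 = 24, P2 = 3, P3 = 3 with q = 4
print(round_robin([24, 3, 3], 4))   # 17/3, i.e. about 5.67 msecs
```

With q = 30 the same call degenerates to FCFS and returns the 17 msec average of the FCFS example, illustrating the "indefinitely large time slice" remark below.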

The performance of the RR algorithm is very much dependent on the length of the time slice. If the duration of the time slice is indefinitely large, then the RR algorithm is the same as the FCFS algorithm. If the time slice is too small, then the performance of the algorithm deteriorates because of the effect of frequent context switching. Below is a comparison of time slices of varying duration and the context switches they generate for a single process of 10 time units:

Time slice    Context switches
12            0
6             1
1             9

The above example shows that the time slice should be large with respect to the context switch time; otherwise, if RR scheduling is used, the CPU will spend more time in context switching.

4.4.5 MULTILEVEL QUEUE SCHEDULING

Processes can be classified into groups, for example interactive processes, system processes, batch processes, student processes and so on. Processes belonging to a group have a specified priority. This algorithm partitions the ready queue into as many separate queues as there are groups; hence the name multilevel queue. Based on certain properties, a process is assigned to one of the ready queues. Each queue can have its own scheduling algorithm, like FCFS or RR.


For example, the queue for interactive processes could be scheduled using the RR algorithm, whereas the queue for batch processes may use the FCFS algorithm. An illustration of multilevel queues is shown below (Figure 4.2).

Highest priority:   SYSTEM PROCESSES
                    INTERACTIVE PROCESSES
                    BATCH PROCESSES
Lowest priority:    STUDENT PROCESSES

Figure 4.2: Multilevel queues

The queues themselves have priorities. Each queue has absolute priority over lower priority queues, that is, a process in a queue with lower priority will not be executed until all processes in queues with higher priority have finished executing. If a process in a lower priority queue is executing (the higher priority queues being empty) and a process joins a higher priority queue, then the executing process is preempted to make way for the process in the higher priority queue.

This absolute priority of the queues may lead to starvation. To overcome this problem, time slices may be assigned to the queues, wherein each queue gets some amount of CPU time. The duration of the time slices may differ from queue to queue depending on the priority of the queue.

4.4.6 MULTILEVEL FEEDBACK QUEUE SCHEDULING

In the multilevel queue scheduling algorithm described above, processes once assigned to queues do not move between queues. Multilevel feedback queues allow a process to move between queues. The idea is to separate out processes with different CPU burst lengths. All processes could initially join the highest priority queue. Processes requiring longer CPU bursts are pushed to lower priority queues, while I/O bound and interactive processes remain in the higher priority queues. Aging could be used to move processes from lower priority queues to higher priority queues to avoid starvation. An illustration of multilevel feedback queues is shown below (Figure 4.3).

Highest priority:   Queue 0 (RR scheduling, q = 8)
                    Queue 1 (RR scheduling, q = 16)
Lowest priority:    Queue 2 (FCFS scheduling)

Figure 4.3: Multilevel feedback queues

A process entering the ready queue joins queue 0. The RR scheduling algorithm with q = 8 is used to schedule processes in queue 0. If the CPU burst of a process exceeds 8 msecs, then the process is preempted, deleted from queue 0 and joins the tail of queue 1. When queue 0 becomes empty, the processes in queue 1 are scheduled, again using the RR scheduling algorithm, but with q = 16. This gives processes a longer time with the CPU. If a process has a still longer CPU burst, then on being preempted it joins queue 2. Hence the highest priority processes (those having small CPU bursts, that is, I/O bound processes) remain in queue 0, and the lowest priority processes (those having long CPU bursts) eventually sink down to queue 2. The lowest priority queue could be scheduled using the FCFS algorithm to allow processes to complete execution.

A multilevel feedback scheduler has to consider parameters such as the number of queues, the scheduling algorithm for each queue, the criteria for upgrading a process to a higher priority queue, the criteria for downgrading a process to a lower priority queue, and also the queue a process initially enters.
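The queue-hopping behaviour described above can be sketched as follows. This is a simplified illustration, not from the text: `mlfq` is a hypothetical helper, it assumes all processes arrive at time 0, and it ignores the preemption of lower queues by newly arriving processes.

```python
from collections import deque

def mlfq(bursts, slices=(8, 16)):
    """Multilevel feedback queue sketch: RR with q=8 in queue 0,
    RR with q=16 in queue 1, FCFS (run to completion) in queue 2.
    Returns the sequence of (process, queue level) CPU allocations."""
    queues = [deque(), deque(), deque()]
    remaining = list(bursts)
    for i in range(len(bursts)):
        queues[0].append(i)               # every process enters queue 0
    trace = []
    while any(queues):
        level = next(l for l in range(3) if queues[l])  # highest nonempty queue
        i = queues[level].popleft()
        trace.append((i, level))
        q = slices[level] if level < 2 else remaining[i]  # queue 2 runs to completion
        run = min(q, remaining[i])
        remaining[i] -= run
        if remaining[i] > 0:
            queues[level + 1].append(i)   # used its full slice: demote one level
    return trace

# Hypothetical bursts of 4, 20 and 30 time units
print(mlfq([4, 20, 30]))
# [(0, 0), (1, 0), (2, 0), (1, 1), (2, 1), (2, 2)]
```

The short burst (process 0) finishes in queue 0, while the long bursts sink through queue 1, and only the longest reaches the FCFS queue, which is exactly the separation the algorithm aims for.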


4.5 SUMMARY

In this chapter we have discussed CPU scheduling. The long-term scheduler provides a proper mix of CPU and I/O bound jobs for execution, while the short-term scheduler schedules these processes for execution. Scheduling can be either preemptive or non-preemptive. If preemptive, an executing process can be stopped and returned to the ready state to make the CPU available for another ready process. If non-preemptive scheduling is used, a process once allotted the CPU keeps executing until it either goes into a wait state because of an I/O or completes execution. Several scheduling algorithms were studied.

4.6 EXERCISE

1. Consider n processes sharing the CPU in a round-robin fashion. Assuming that each process switch takes s seconds, what must be the quantum size q such that the overhead resulting from process switching is minimized, but at the same time each process is guaranteed to get its turn at the CPU at least every t seconds?
(a) q ≤ (t – ns) / (n – 1)
(b) q ≥ (t – ns) / (n – 1)
(c) q ≤ (t – ns) / (n + 1)
(d) q ≥ (t – ns) / (n – 1)

2. Consider a set of n tasks with known runtimes r1, r2, ..., rn to be run on a uniprocessor machine. Which of the following processor scheduling algorithms will result in the maximum throughput?
(a) Round Robin (b) Shortest Job First
(c) Highest Response Ratio Next (d) FCFS

3. Which of the following algorithms is non-preemptive?
(a) Round Robin
(b) FCFS
(c) Multilevel Queue Scheduling
(d) Multilevel Feedback Queue Scheduling


4. A uniprocessor computer system has only two processes, both of which alternate 10 ms CPU bursts with 90 ms I/O bursts. Both processes were created at nearly the same time. The I/O of both processes can proceed in parallel. Which of the following scheduling strategies will result in the least CPU utilization (over a long period of time) for this system?
(a) FCFS scheduling
(b) Shortest Remaining Time First scheduling
(c) Static priority scheduling with different priorities for the two processes
(d) Round robin scheduling with a time quantum of 5 ms

5. Consider the following set of processes, with the arrival times and the CPU burst times given in milliseconds:

Process    Arrival time    Burst time
P1         0               5
P2         1               3
P3         2               3
P4         4               1

What is the average turnaround time for these processes with the preemptive shortest remaining processing time first (SRPT) algorithm?
(a) 5.50 (b) 5.75 (c) 6.60 (d) 6.25

6. Consider three CPU intensive processes, which require 10, 20 and 30 time units and arrive at times 0, 2, and 6, respectively. How many context switches are needed if the operating system implements a shortest remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end. [GATE 2006 – 1 Mark]
(a) 1 (b) 2 (c) 3 (d) 4

7. Consider three processes (process id 0, 1, 2, respectively) with compute time bursts 2, 4, and 8 time units. All processes arrive at time zero. Consider the longest remaining time first (LRTF) scheduling algorithm. In LRTF, ties are broken by giving priority to the process with the lowest process id. The average turnaround time is [GATE 2006 – 2 Marks]
(a) 13 units (b) 14 units (c) 15 units (d) 16 units

8. Consider three processes, all arriving at time zero, with total execution times of 10, 20, and 30 units, respectively. Each process spends the first 20% of its execution time doing I/O, the next 70% doing computation and the last 10% doing I/O again. The operating system uses a shortest remaining compute time first scheduling algorithm and schedules a new process either when the running process gets blocked on I/O or when the running process finishes its compute burst. Assume that all I/O operations can be overlapped as much as possible. For what percentage of time does the CPU remain idle? [GATE 2006 – 2 Marks]
(a) 0% (b) 10.6% (c) 30.0% (d) 89.4%

9. Group 1 contains some CPU scheduling algorithms and Group 2 contains some applications. Match entries in Group 1 to entries in Group 2. [GATE – 2007, 1 Mark]

Group 1                          Group 2
(P) Gang Scheduling              (1) Guaranteed Scheduling
(Q) Rate Monotonic Scheduling    (2) Real-time Scheduling
(R) Fair Share Scheduling        (3) Thread Scheduling

(A) P – 3, Q – 2, R – 1 (B) P – 1, Q – 2, R – 3
(C) P – 2, Q – 3, R – 1 (D) P – 1, Q – 3, R – 2

10. An operating system uses the shortest remaining time first (SRTF) process scheduling algorithm. Consider the arrival times and execution times for the following processes:

Process    Execution time    Arrival time
P1         20                0
P2         25                15
P3         10                30
P4         15                45

What is the waiting time for process P2? [GATE 2007 – 2 Marks]
(a) 5 (b) 15 (c) 40 (d) 55
