
Operating Systems - Introduction, Processes and CPU Scheduling



Operating Systems Unit-1

NR10 Introduction, Process, CPU Scheduling

Mukesh Chinta

Asst Prof, CSE, VRSEC

OPERATING SYSTEMS UNIT-1

INTRODUCTION: WHAT IS AN OPERATING SYSTEM, MAINFRAME SYSTEMS, DESKTOP SYSTEMS,

MULTIPROCESSOR SYSTEMS, DISTRIBUTED SYSTEMS, CLUSTERED SYSTEMS, REAL TIME SYSTEMS,

COMPUTER SYSTEM.

PROCESS: CONCEPT, PROCESS SCHEDULING, OPERATION ON PROCESSES, CO-OPERATING PROCESSES,

INTER-PROCESS COMMUNICATION

CPU SCHEDULING: CONCEPTS, SCHEDULING CRITERIA, SCHEDULING ALGORITHMS, MULTIPLE-PROCESSOR

SCHEDULING, REAL TIME SCHEDULING.

Introduction to Operating Systems

An operating system is a program that manages the computer hardware. It also provides a basis

for application programs and acts as an intermediary between a user of a computer and the

computer hardware.

A computer system can be divided roughly into four components – the hardware, the

operating system, the application programs and the users. The hardware consisting of CPU,

memory and I/O devices provides the basic computing resources for the system. The

application programs define the ways in which these resources are used to solve users’

computing problems.

Abstract view of the components of a computer system

The operating system controls and co-ordinates the use of hardware among the various

application programs for the various users.


User view of an Operating System: The user’s view of the computer varies according to the

interface being used. While designing a PC for one user, the goal is to maximize the work

that the user is performing. Here OS is designed mostly for ease of use. In another case the

user sits at a terminal connected to a mainframe or a minicomputer. Other users can access

the same computer through other terminals. OS here is designed to maximize resource

utilization to assure that all available CPU time, memory and I/O are used efficiently. In other

cases, users sit at workstations connected to networks of other workstations and servers. These

users have dedicated resources but they also share resources such as networking and

servers. Here OS is designed to compromise between individual usability and resource

utilization.

System view of an Operating System: From the computer’s point of view, the OS can be viewed as a resource allocator, wherein the resources are CPU time, memory space, file-storage space, I/O

devices etc. OS must decide how to allocate these resources to specific programs and users so

that it can operate the computer system efficiently. OS is also a control program. A control

program manages the execution of user programs to prevent errors and improper use of

computer. It is concerned with the operation and control of I/O devices.

Another service provided by the OS is that of resource manager. Resource management

includes multiplexing resources in two ways - "in time" and "in space". (i)When a resource is

time multiplexed, different programs or different users get their turn to use that resource. Ex:

Printer. (ii)When a resource is space multiplexed instead of taking turns, the resource is shared

among them, i.e. each one gets a part of the resource. Ex: Sharing main memory, hard disk etc.

USER SERVICES PROVIDED BY AN OPERATING SYSTEM

• Program creation - Editor and debugger utility programs

• Program execution - Loading instructions, initializing I/O devices

• Access to I/O devices - Making it a simple R/W command

• Controlled access to files - Media technology, file format, controlling access

• Error detection and response - internal and external hardware errors {memory error,

device failure}, software errors {arithmetic overflow, attempts to access forbidden memory locations}

OBJECTIVES OF AN OPERATING SYSTEM

Convenience - An operating system makes a computer more convenient to use.

Efficiency - An operating system allows the computer system resources to be used in an

efficient manner.

Ability to Evolve - Should permit effective development, testing, and introduction of

new system features and functions without interfering with service.


Mainframe Systems

These were the first computers used to tackle many commercial and scientific applications.

They have evolved from simple batch systems to time-shared systems.

Batch systems

These are the early computers that are physically enormous machines running from a console.

The common input devices are card readers and tape drives, and some of the common output

devices are Line printers, Tape drives, Card Punches. The user did not interact directly with the

computer systems.

A typical job cycle: bring cards to the 1401; read cards onto an input tape; put the input tape on the 7094, which performs the computation and writes results to an output tape; put the output tape on the 1401, which prints the output.

The operating system in these early computers was fairly simple. Its major task was to

transfer control automatically from one job to the next. The operating system was always

resident in memory. To speed up processing, operators batched together jobs with similar

needs and ran them through the computer as a group. Thus, the programmers would leave

their programs with the operator. The operator would sort programs into batches with similar

requirements and, as the computer became available, would run each batch. The output from

each job would be sent back to the appropriate programmer. The introduction of disk

technology allowed the operating system to keep all jobs on a disk, rather than in a serial card

reader. With direct access to several jobs, the operating system could perform job scheduling,

to use resources and perform tasks efficiently.

Multiprogrammed systems

One of the most important aspects of an OS is its ability to multiprogram. Multiprogramming increases CPU utilization by organizing jobs (code and data) so that the CPU always has one to execute. The OS keeps several jobs in memory. This set of jobs can be a subset of the jobs kept in

the job pool which contains all jobs that enter the system. OS picks and begins to execute one

of the jobs in memory. The job may have to wait for some task, such as I/O operation to


complete. In a non-multiprogrammed system, the CPU would sit idle. But here, the OS simply

switches to and executes another job. When that job needs to

wait, the CPU is switched to another job, and so on. As long as at least one job needs to execute, the CPU is never idle.

Multiprogramming is the first instance where the operating

system must make decisions for the users. Multiprogrammed

operating systems are therefore fairly sophisticated. All the jobs

that enter the system are kept in the job pool. This pool consists

of all processes residing on disk awaiting allocation of main

memory. If several jobs are ready to be brought into memory,

and if there is not enough room for all of them, then the

system must choose among them. Making this decision is job

scheduling. If several jobs are ready to run at the same time, the

system must choose among them. Making this decision is CPU scheduling. Multi programmed

systems provide an environment in which the various system resources are utilized effectively

but they do not provide for user interaction with the computer system.

Time- Sharing systems

Time-sharing, or multitasking, is a logical extension of multiprogramming. In time-sharing

systems, CPU executes multiple jobs by switching among them but the switches occur so

frequently that the users can interact with each program while it is running. Time-sharing

requires an interactive computer system which provides direct communication between the

user and the system. A time shared operating system allows many users to share the computer

simultaneously. It uses CPU scheduling and multi programming to provide each user with

a small portion of a time shared computer. A program loaded into memory and executing is

called a process. Time sharing and multi programming require several jobs to be kept

simultaneously in memory. Since main memory is too small to accommodate all jobs, the jobs

are kept initially on the disk in the job pool.

Time-sharing operating systems are even more complex than multiprogrammed

operating systems. In Time sharing systems, to obtain a reasonable response time, jobs may

have to be swapped in and out of main memory to the disk that now serves as a backing store

for main memory. A common method for achieving this goal is virtual memory, which is a

technique that allows the execution of a job that may not be completely in memory. The main

advantage of the virtual-memory scheme is that programs can be larger than physical memory.

Further, it abstracts main memory into a large, uniform array of storage, separating logical


memory as viewed by the user from physical memory. This arrangement frees programmers

from concern over memory-storage limitations.

Desktop Systems

When personal computers (PCs) appeared in the 1970s, the CPUs in these PCs lacked, for the first decade, the features needed to protect an operating system from user programs. PC operating systems were

neither multiuser nor multitasking. However as time passed, the goals of these operating

systems shifted more towards maximizing user convenience and responsiveness. The PCs

started running Microsoft Windows and Apple Macintosh. Linux, a UNIX-like operating system

available for PCs, has also become popular recently.

File protection was not a priority for these systems at first, but as these computers are

now often tied into other computers over local-area networks or other Internet connections

enabling other computers and users to access the files on a PC, file protection again becomes a

necessary feature of the operating system. The lack of such protection has made it easy for

malicious programs to destroy data on systems such as MS-DOS and the Macintosh operating

system. These programs may be self-replicating, and may spread rapidly via worm or virus

mechanisms and disrupt entire companies or even worldwide networks. Security mechanisms

capable of countering these attacks are to be implemented.

Multiprocessor Systems

Systems that have more than one processor in close communication, sharing the computer bus,

the clock, and sometimes memory and peripheral devices are called multiprocessor systems (also known as parallel systems or tightly coupled systems). There are three main advantages

with these multiprocessor systems.

1. Increased throughput: By increasing the number of processors, we expect to get more work done in less time

2. Economy of scale: Multiprocessor systems save money because the processors share resources such as peripherals, mass storage, and power supplies

3. Increased reliability: If functions can be distributed among several processors, then the

failure of one processor will not halt the system but will only slow it down

The ability to continue providing service proportional to the level of surviving hardware is called

graceful degradation. Systems designed for graceful degradation are also called fault tolerant.



The most common multiple-processor systems now use symmetric multiprocessing (SMP), in

which each processor runs an identical copy of the operating system, and these copies

communicate with one another as needed. Here, all the processors are peers. Some systems

use asymmetric multiprocessing (ASMP), in which each processor is assigned a specific task. A

master processor controls the system; the other processors either look to the master for

instructions or have predefined tasks. This scheme defines a master-slave relationship. The

master processor schedules and allocates work to the slave processors. The difference between symmetric and asymmetric multiprocessing may result from either hardware or software.

Distributed Systems

Distributed systems depend on networking for their functionality. By being able to

communicate, distributed systems are able to share computational tasks, and provide a rich set

of features to users. Also called loosely coupled systems – each processor has its own local

memory; processors communicate with one another through various communications lines,

such as high-speed buses or telephone lines. These systems have many advantages such as

resource sharing, computation speedup (load sharing), reliability and communications.

Distributed systems require networking infrastructure such as Local Area Networks (LAN) or

Wide Area Networks (WAN). They may be client-server or peer-to-peer systems.

Client-Server Systems

As PCs have become very powerful, faster and cheaper, the design interest has shifted from a

centralized system to that of a client-server system. Centralized systems today act as server

systems to satisfy requests generated by client systems. The general structure of a client-server

system is depicted below:

Server systems can be broadly categorized as compute servers and file servers.

Compute-server systems provide an interface to which clients can send requests to perform

an action, in response to which they execute the action and send back results to the client.

File-server systems provide a file-system interface where clients can create, update, read,

and delete files.


Peer-to-Peer Systems

The growth of computer networks, especially the Internet and World Wide Web (WWW), has

had a profound influence on the recent development of operating systems. When PCs were

introduced in the 1970s, they were designed for "personal" use and were generally considered

standalone computers. With the beginning of widespread public use of the Internet in the

1980s for electronic mail, ftp, and others, many PCs became connected to computer networks.

With the introduction of the Web in the mid-1990s, network connectivity became an essential

component of a computer system.

Virtually all modern PCs and workstations are capable of running a web browser for

accessing hypertext documents on the Web. Operating systems (such as Windows, OS/2, Mac

OS, and UNIX) now also include the system software (such as TCP/IP and PPP) that enables a

computer to access the Internet via a local-area network or telephone connection. Several

include the web browser itself, as well as electronic mail, remote login, and file-transfer clients

and servers.

Clustered Systems

Like parallel systems, clustered systems gather together multiple CPUs to accomplish

computational work. Clustered systems differ from parallel systems, however, in that they are

composed of two or more individual systems coupled together. They are used to provide high

availability. A layer of cluster software runs on the cluster nodes. Each node can monitor one or

more of the others (over the LAN). If the monitored machine fails, the monitoring machine

takes ownership of its storage and restarts the applications running on the failed machine. It

has two types:

Asymmetric Cluster: One machine is in hot-standby mode while the other is running the application. The hot-standby host does nothing but monitor the active server. If that server fails, the hot-standby machine becomes the active server.

Symmetric Cluster: Two or more hosts are running applications, and they are monitoring

each other. This mode is more efficient, as it uses all of the available hardware.

Cluster technology is rapidly changing. Cluster directions include global clusters, in which the

machines could be anywhere in the world. Clustered-system use and features should expand greatly as storage-area networks (SANs) become prevalent.

Real-Time Systems

A real-time system is used when rigid time requirements have been placed on the operation of

a processor or the flow of data; thus, it is often used as a control device in a dedicated


application. A real-time system has well-defined, fixed time constraints. Processing must be

done within the defined constraints, or the system will fail.

Real-Time systems are of two types. A hard real-time system guarantees that critical tasks be

completed on time. Due to the stringent time constraints, hard real-time systems conflict with

the operation of time-sharing systems, and the two cannot be mixed. A less restrictive type of

real-time system is a soft real-time system, where a critical real-time task gets priority over

other tasks, and retains that priority until it completes. Though soft real-time systems are an achievable goal, they have more limited utility than hard real-time systems and are therefore risky to use in industrial control and robotics. They are useful in several areas, including multimedia, virtual reality, and advanced scientific projects such as undersea exploration and

planetary rovers.

Handheld Systems

Handheld systems include Personal Digital Assistants (PDAs), such as Palm and Pocket PC devices, and cellular telephones. The main challenges for this type of system are limited memory, a slower processor, and a small display screen. One approach for displaying the

content in webpage is web clipping, where only a small subset of a web page is delivered and

displayed on the handheld device.

Computer System

A common computer system consists of a CPU and a number of device controllers that are connected through a common bus that provides access to shared memory. Each device controller is in charge of a specific type of device. The CPU and the device controllers can execute concurrently, competing for memory cycles. To ensure orderly access to the shared memory, a memory controller is provided whose function is to synchronize access to the memory.


A bootstrap program is stored in read-only memory (ROM) within the computer hardware and

it initializes all aspects of the system, from CPU registers to device controllers to memory

contents. In order to load the operating system, the bootstrap program must locate and load

into memory the operating-system kernel. The OS then starts executing the first process, such

as “init”, and waits for some event to occur.

The occurrence of an event is usually signaled by an interrupt from either the hardware

or the software. Hardware may trigger an interrupt at any time by sending a signal to the CPU,

usually by way of the system bus. Software may trigger an interrupt by executing a special

operation called a system call (also called a monitor call). Events are almost always signaled by

the occurrence of an interrupt or a trap. A trap (or an exception) is a software-generated

interrupt caused either by an error (for example, division by zero or invalid memory access) or

by a specific request from a user program that an operating-system service be performed. For

each type of interrupt, separate segments of code in the operating system determine what

action should be taken. An interrupt service routine is provided that is responsible for dealing

with the interrupt. Interrupts are an important part of computer architecture. Each computer

design has its own interrupt mechanism, but several functions are common. The interrupt must

transfer control to the appropriate interrupt service routine. The interrupt vector contains the addresses of the service routines. Incoming interrupts are disabled while another interrupt is being processed, to prevent a lost interrupt.
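The vectoring idea can be sketched in C. The code below is only an illustration (the IRQ numbers and service routines are hypothetical, not those of any real machine): the interrupt vector is an array whose entries hold the addresses of the service routines, and dispatching an interrupt is an indexed jump through that array.

#include <stdio.h>

#define NUM_INTERRUPTS 8

typedef void (*isr_t)(void);            /* an interrupt service routine */

static void timer_isr(void)   { printf("timer interrupt serviced\n"); }
static void disk_isr(void)    { printf("disk interrupt serviced\n"); }
static void default_isr(void) { printf("unhandled interrupt\n"); }

static isr_t interrupt_vector[NUM_INTERRUPTS];

static void dispatch(int irq)           /* hardware performs this jump */
{
    if (irq >= 0 && irq < NUM_INTERRUPTS)
        interrupt_vector[irq]();        /* transfer control to the ISR */
}

int main(void)
{
    for (int i = 0; i < NUM_INTERRUPTS; i++)
        interrupt_vector[i] = default_isr;
    interrupt_vector[0] = timer_isr;    /* hypothetical IRQ assignments */
    interrupt_vector[1] = disk_isr;

    dispatch(0);
    dispatch(1);
    dispatch(5);                        /* falls to the default routine */
    return 0;
}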

I/O Structure

A device controller maintains some local buffer storage and a set of special-purpose registers.

The device controller is responsible for moving the data between the peripheral devices that it

controls and its local buffer storage. The size of the local buffer within a device controller varies

from one controller to another.

I/O Interrupts

Once the I/O is started, two kinds of I/O methods exist. In synchronous I/O, control returns to

the user program only upon I/O completion. A special wait instruction idles the CPU until the next interrupt; some machines instead use a tight wait loop (Loop: jmp Loop). Since no simultaneous I/O processing is possible, at most one I/O request is outstanding at a time. Another possibility

called asynchronous I/O returns control to the user program without waiting for the I/O to

complete. The I/O then can continue while other system operations occur.

A system call is needed to allow the user program to wait for I/O completion.
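As an illustration, POSIX systems expose asynchronous I/O through the aio interface. The sketch below assumes a POSIX/Linux system (older glibc needs -lrt at link time, and /etc/hostname is just an arbitrary readable file): it starts a read, is free to do other work, and then uses aio_suspend as the "wait for completion" system call.

#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    struct aiocb cb;

    int fd = open("/etc/hostname", O_RDONLY);  /* any readable file */
    if (fd < 0) { perror("open"); return 1; }

    memset(&cb, 0, sizeof cb);                 /* describe the request */
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;

    if (aio_read(&cb) != 0) { perror("aio_read"); return 1; }

    /* ... the program could do other useful work here ... */

    const struct aiocb *list[1] = { &cb };
    aio_suspend(list, 1, NULL);                /* wait for I/O completion */

    printf("read %zd bytes asynchronously\n", aio_return(&cb));
    close(fd);
    return 0;
}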


The operating system uses a table containing an entry for each I/O device called the device-

status table, in which each entry indicates the device’s type, address, and state, as shown. This table is used to keep track of many I/O requests at the same time.

Device-status table

Since it is possible for other processes to issue requests to the same device, the operating

system will also maintain a wait queue-a list of waiting requests-for each I/O device. An I/O

device interrupts when it needs service. When an interrupt occurs, the operating system first

determines which I/O device caused the interrupt. It then indexes into the I/O device table to

determine the status of that device, and modifies the table entry to reflect the occurrence of

the interrupt. For most devices, an interrupt signals completion of an I/O request. If there are

additional requests waiting in the queue for this device, the operating system starts processing

the next request.
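The device-status table can be pictured as an array of records, one per device, each holding the device's type, address, state, and a pointer to its queue of waiting requests. The C sketch below is purely illustrative; the field names and sample devices are invented for the example.

#include <stdio.h>

enum dev_state { IDLE, BUSY };

struct io_request {                     /* one waiting I/O request */
    int                pid;             /* the requesting process  */
    struct io_request *next;
};

struct device_entry {
    const char        *type;            /* e.g. "disk", "printer"  */
    unsigned           address;         /* device address          */
    enum dev_state     state;
    struct io_request *wait_queue;      /* list of waiting requests */
};

static struct device_entry device_table[] = {
    { "card reader", 0x01, IDLE, NULL },
    { "disk",        0x02, BUSY, NULL },    /* would point at its queue */
    { "printer",     0x03, IDLE, NULL },
};

int main(void)
{
    for (size_t i = 0; i < sizeof device_table / sizeof device_table[0]; i++)
        printf("%-12s addr=0x%02x state=%s\n",
               device_table[i].type, device_table[i].address,
               device_table[i].state == BUSY ? "busy" : "idle");
    return 0;
}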


The main advantage of asynchronous I/O is increased system efficiency. While I/O is taking place, the system CPU can be used for processing or starting I/O to other devices. Because I/O can be slow compared to processor speed, the system makes efficient use of its facilities.

DMA Structure

Direct Memory Access is used for high speed I/O devices that are able to transmit information

at close to memory speeds. Device controller transfers blocks of data from buffer storage

directly to main memory without CPU intervention. Only one interrupt is generated per block,

rather than one per byte (or word).

Storage Structure

Computer programs must be in main memory (also called random-access memory or RAM) to

be executed. Main memory is implemented in a semiconductor technology called dynamic

random-access memory (DRAM), which forms an array of memory words. A typical instruction-

execution cycle, as executed on a system with a von Neumann architecture, will first fetch an

instruction from memory and will store that instruction in the instruction register. The

instruction is then decoded and may cause operands to be fetched from memory and stored in

some internal register. After the instruction has been executed on the operands, the result

may be stored back in memory.

Since main memory is too small and is volatile, most computer systems provide

secondary storage as an extension. The main idea is to store large quantities of data

permanently. The most common secondary-storage device is a magnetic disk, which provides

storage of both programs and data. Most programs (web browsers, compilers, word processors,

spreadsheets, and so on) are stored on a disk until they are loaded into memory. Many

programs then use the disk as both a source and a destination of the information for their

processing. Hence, the proper management of disk storage is of central importance to a

computer system. Main memory and the registers built into the processor itself are the only

storage that the CPU can access directly. To allow more convenient access to I/O devices, many

computer architectures provide memory-mapped I/O. In this case, ranges of memory

addresses are set aside and mapped to the device registers. Reads and writes to these memory

addresses causes the data to be transferred to and from the device registers.

A disk drive is attached to a computer by a set of wires called an I/O bus. The data transfers on a bus are carried out by special electronic processors called controllers. The host controller is the controller at the computer end of the bus. A disk controller is built into each disk drive. To perform a disk I/O operation, the computer places a command into the host


controller, typically using memory mapped I/O ports. The host controller then sends the

command via messages to the disk controller, and the disk controller operates the disk-drive

hardware to carry out the command. Disk controllers usually have a built-in cache. Data

transfer at the disk drive happens between the cache and the disk surface, and data transfer to

the host, at fast electronic speeds, occurs between the cache and the host controller.

Storage Hierarchy: The wide variety of storage systems in a computer system can be organized

in a hierarchy according to speed and cost. The higher levels are expensive, but they are fast. As

we move down the hierarchy, the cost per bit generally decreases, whereas the access time

generally increases.

Storage-device hierarchy

The top three levels of memory in above figure may be constructed using semiconductor

memory. In the hierarchy shown above, the storage systems above the electronic disk are

volatile, whereas those below are nonvolatile. An electronic disk can be designed to be either

volatile or nonvolatile. Volatile storage loses its contents when the power to the device is

removed.


Operating System Structures

Such a large and complex operating system can be created by partitioning it into smaller pieces, where each piece is a well-delineated portion of the system, with carefully defined inputs, outputs, and functions. The most common system components are Process Management, Main-Memory Management, I/O-System Management, File Management, Secondary-Storage Management, Networking, Protection System, and Command-Interpreter System.

Process Management

A process is a program in execution. A process needs certain resources, including CPU time,

memory, files, and I/O devices, to accomplish its task. These resources are either given to the

process when it is created, or allocated to it when it is running. Here program is a passive

entity, such as the contents of a file stored on disk, whereas process is an active entity, with a

program counter specifying the next instruction to execute. The execution of a process must be

sequential. The operating system is responsible for the following activities in connection with

process management:

Creating and deleting both user and system processes.

Suspending and resuming processes.

Providing mechanisms for process synchronization.

Providing mechanisms for process communication.

Providing mechanisms for deadlock handling.

Main Memory Management

Main memory is a large array of words or bytes each with its own address. Main memory is a

repository of quickly accessible data shared by the CPU and I/O devices. Many different

memory management schemes are available and the effectiveness of the different algorithms

depends on the particular situation. The operating system is responsible for the following

activities in connection with memory management:

Keeping track of which parts of memory are currently being used and by whom.

Deciding which processes are to be loaded into memory when memory space

becomes available.

Allocating and deallocating memory space as needed.


File Management

File management is one of the most visible components of operating systems. Computers can

store information on several different types of physical media each having its own

characteristics and physical organization. A file is a collection of related information defined by

its creator. It represents programs and data. The OS implements the abstract concept of a file

by managing mass storage media, such as disks and tapes, and the devices that control them.

The operating system is responsible for the following activities in connection with file

management:

Creating and deleting files

Creating and deleting directories

Supporting primitives for manipulating files and directories

Mapping files onto secondary storage

Backing up files on stable (nonvolatile) storage media.

I/O-System Management

One of the purposes of an operating system is to hide the peculiarities of specific hardware

devices from the user. The I/O subsystem consists of:

A memory-management component that includes buffering, caching, and spooling
A general device-driver interface
Drivers for specific hardware devices

Secondary storage management

The main purpose of a computer system is to execute programs. These programs, with the data

they access, must be in main memory, or primary storage, during execution. Because main

memory is too small and volatile, the computer system must provide secondary storage to back

up main memory. The operating system is responsible for the following activities in connection

with disk management:

Free-space management
Storage allocation
Disk scheduling

Networking

A distributed system is a collection of processors that do not share memory, peripheral devices,

or a clock. Each processor has its own local memory and clock, and the processors

communicate with one another through various communication lines, such as high – speed

buses or networks. The processors in the system are connected through a communication

network, which can be configured in a number of different ways. A distributed system collects


physically separate, possibly heterogeneous systems into a single coherent system, providing

the user with access to the various resources that the system maintains. Access to a shared

resource allows computation speedup, increased functionality, increased data availability, and

enhanced reliability. Operating systems usually generalize network access as a form of file

access, with the details of networking being contained in the network interface's device driver.

Protection System

If a computer system has multiple users and allows the concurrent execution of multiple

processes, then the various processes must be protected from one another’s activities.

Protection is any mechanism for controlling the access of programs, processes, or users to the

resources defined by a computer system. Protection can improve reliability by detecting latent

errors at the interfaces between component subsystems. Early detection of interface errors can

often prevent contamination of a healthy subsystem by another subsystem that is

malfunctioning.

Command-Interpreter System

One of the most important systems programs for an operating system is the command

interpreter, which is the interface between the user and the operating system. Some of the

operating systems include the command interpreter in the kernel. When a new job is started in

a batch system, or when a user logs on to a time-shared system, a program that reads and interprets control statements is executed automatically. This program is sometimes called the control-card interpreter or the command-line interpreter, and is often known as the shell.

Its function is simple: To get the next command statement and execute it.

The command statements themselves deal with process creation and management, I/O

handling, secondary-storage management, main-memory management, file-system access,

protection, and networking.

Operating System services

Operating system provides an environment for the execution of programs. It provides certain

services to programs and to the users of those programs. Some of the common services

provided by the operating systems are:

Services that provide user-interfaces to OS

Program execution - load program into memory and run it

I/O Operations - since users cannot execute I/O operations directly

File System Manipulation - read, write, create, delete files

Communications - interprocess and intersystem

Error Detection - in hardware, I/O devices, user programs


Services for providing efficient system operation

Resource Allocation - for simultaneously executing jobs

Accounting - for account billing and usage statistics

Protection - ensure access to system resources is controlled

System Calls

System calls provide the interface between a running program and the operating system. They

are generally available as assembly-language instructions. The languages defined to replace

assembly language for systems programming allow system calls to be made directly (e.g., C,

C++).

Three general methods are used to pass parameters between a running program and the

operating system:

Pass parameters in registers

Store the parameters in a table in memory, and the table address is passed as a

parameter in a register

Push (store) the parameters onto the stack from the program, and pop them off the stack in the operating system
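On Linux, for example, the register method is visible through the C library's generic syscall() wrapper, which places the call number and its parameters into registers before trapping into the kernel. A minimal sketch (Linux-specific; equivalent to write(1, msg, len)):

#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello from a raw system call\n";
    /* SYS_write's parameters (fd, buffer, count) travel in registers */
    long n = syscall(SYS_write, 1, msg, sizeof msg - 1);
    return n < 0;
}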

Different system calls are

Process control

end, abort

load, execute

create process, terminate process

get process attributes, set process attributes

wait for time

wait event, signal event

allocate and free memory


File management

create file, delete file

open, close

read, write, reposition

get file attributes, set file attributes

Device management

request device, release device

read, write, reposition

get device attributes, set device attributes

logically attach or detach devices

Information maintenance

get time or date, set time or date

get system data, set system data

get process, file, or device attributes

set process, file, or device attributes

Communications

There are two common models for communication

1. Message passing model

2. Shared memory model

In the message-passing model, information is exchanged through an interprocess-communication

facility provided by the OS. Before communication can take place, a connection must be opened.

The name of the other communicator must be known, be it another process on the same CPU or a process on another computer connected by a communications network.

In shared memory, processes use map-memory system calls to gain access to regions of memory owned by other processes. They may then exchange information by reading and writing

data in the shared area. Message passing is useful when smaller amounts of data need to be exchanged, because no conflicts need be avoided. It is also easier to implement than is

shared memory for inter-computer communication. Shared memory allows maximum speed

and convenience of communication, as it can be done at memory speeds when within a

computer.

create, delete communication connection

send, receive messages

transfer status information

attach or detach remote devices


System Programs

System programs provide a convenient environment for program development and execution.

These can be divided into following categories:

1. File Management: These programs create, delete, copy, rename, print, dump, list and

manipulate files and directories.

2. Status Information: It deals with the amount of memory or disk space, numbers of

users, date, time or similar status information.

3. File Modification: Before modifying a file, we should check whether the file exists. If it exists, we can modify it; if not, an error occurs.

4. Programming-Language Support: Compilers, assemblers and interpreters for common

programming languages are often provided by the Operating System

5. Program loading and execution: Once the program is assembled or compiled, it must be

loaded into memory to be executed. The system may provide loaders, linkage editors

and overlay editors.

6. Communications: These programs provide the mechanism for creating virtual

connections among processes, users, and different computer systems. They allow users

to send messages to one another's screens, to browse web pages, to send electronic-

mail messages, to log in remotely, or to transfer files from one machine to another.

Some programs such as web browsers, word processors and text formatters, spreadsheets,

database systems, compilers, plotting and statistical-analysis packages, and games which come

along with the OS and are used for common operations are called system utilities or

application programs. The most important system program for an operating system is the

command interpreter, whose main function is to get and execute the next user-specified

command.

System Structures

The operating systems can be organized as follows:

Simple: Only one or two levels of code

Layered: Lower levels independent of upper levels

Microkernel: OS built from many user-level processes

Modular: Core kernel with dynamically loadable modules

MS-DOS is an example of a system which was originally designed to be simple, small and

limited system. It was written to provide the most functionality in the least space, so it was not

divided into modules carefully.


MS-DOS layer architecture (left) and UNIX system structure (right)

UNIX is another system that was initially limited by hardware functionality. It consists of two

separable parts: the kernel and the system programs. The kernel is further separated into a

series of interfaces and device drivers, which were added and expanded over the years as UNIX

evolved.

Layered Approach

Modularization of an Operating system can be done via the layered approach in which the OS is

broken up into a number of layers (levels), each built on top of lower layers. The bottom layer

(layer 0) is the hardware; the highest (layer N) is the user interface. The main advantage of the

layered approach is modularity. The layers are selected such that each uses functions (or

operations) and services of only lower-level layers. This approach simplifies debugging and

system verification.

Each layer is implemented with only those operations provided by lower level layers. A

layer does not need to know how these operations are implemented; it needs to know only

what these operations do.


OS/2 layer structure (left) and OS layers (right)

The major difficulty with the layered approach involves the careful definition of the layers,

because a layer can use only those layers below it.

Microkernels

An operating system called Mach, developed at Carnegie Mellon University, modularized the kernel using the microkernel approach. This method structures the operating

system by removing all nonessential components from the kernel, and implementing them as

system- and user-level programs. The result is a smaller kernel. The main function of the

microkernel is to provide a communication facility between the client program and the various

services that are also running in user space. Communication is provided by message passing.

Benefits:

Easier to extend a microkernel

Easier to port OS to new architectures

More reliable (less code is running in kernel mode)

Fault Isolation (parts of kernel protected from other parts)

More secure

Virtual Machines

A virtual machine takes the layered approach to its logical conclusion. It treats hardware and

the operating system kernel as though they were all hardware. A virtual machine provides an

interface identical to the underlying bare hardware. The operating system creates the illusion of

multiple processes, each executing on its own processor with its own (virtual) memory.


The resources of the physical computer are shared to create the virtual machines.

• CPU scheduling can create the appearance that users have their own processor

• Spooling and a file system can provide virtual card readers and virtual line printers

• A normal user time-sharing terminal serves as the virtual machine operator’s console

The virtual-machine concept provides complete protection of system resources since

each virtual machine is isolated from all other virtual machines. This isolation, however,

permits no direct sharing of resources. A virtual-machine system is a perfect vehicle for

operating-systems research and development. System development is done on the virtual

machine, instead of on a physical machine and so does not disrupt normal system operation.

The virtual machine concept is difficult to implement due to the effort required to provide an

exact duplicate of the underlying machine.

System Goals

User goals – the operating system should be convenient to use, easy to learn, reliable, safe, and fast

System goals – the operating system should be easy to design, implement, and maintain, as well as flexible, reliable, error-free, and efficient


PROCESS

A process is a program in execution. A process is a name given to

a program instance that has been loaded into memory and

managed by the operating system. Process address space is

generally organized into code, data (static/global), heap, and

stack segments. Every process in execution works with Registers

(PC, Working Registers) and Call stack (Stack Pointer Reference).

Program itself is not a process, a program is a passive entity

such as a file containing a list of instructions stored on disk

(called an executable file) whereas process is an active entity

with a program counter specifying the next instruction to execute and a set of associated

resources. A program becomes a process when an executable file is loaded into memory.
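The segments of a process's address space can be observed from an ordinary C program by printing the addresses of a function, a global variable, a heap allocation, and a local variable. A small sketch follows (the addresses printed are system-dependent, and casting a function pointer to void * is a common illustrative liberty):

#include <stdio.h>
#include <stdlib.h>

int global_var = 42;                           /* data (static/global) segment */

int main(void)
{
    int  local_var = 7;                        /* stack segment */
    int *heap_var  = malloc(sizeof *heap_var); /* heap segment  */

    printf("code  : %p (address of main)\n",     (void *)main);
    printf("data  : %p (address of a global)\n", (void *)&global_var);
    printf("heap  : %p (malloc'd block)\n",      (void *)heap_var);
    printf("stack : %p (address of a local)\n",  (void *)&local_var);

    free(heap_var);
    return 0;
}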

Process State

As a process executes, it changes state. The state of a process is defined in part by the current

activity of that process. As a process executes over time, it can be in any of the following states:

State – Description
New – The process is being created
Running – Instructions are being executed on the processor
Waiting/Blocked – The process is waiting for some event to occur
Ready – The process is waiting to be assigned to the processor
Terminated – The process has finished execution

Process State Transition Diagram


The state names vary across different Operating systems. In uni-processor system, only one

process can be running on any processor at any instant, although many processes may be ready

and waiting.

Process Control Block

OS maintains a data structure for each process called the Process Control Block or task control

block (PCB). It contains many pieces of information associated with a specific process as shown

below and includes the following:

Process State: The state may be new, ready,

running, waiting, halted etc.

Pointer: It is used for maintaining the scheduling

list. It points to the next PCB in the ready queue.

Program Counter: The counter indicates the

address of the next instruction to be executed for this

process.

CPU registers: The registers vary in number and

type depending on the computer architecture such as

accumulators, index registers, stack pointers etc.

CPU scheduling information: This information includes a process priority, pointers to

scheduling queues, and other scheduling parameters.

Memory management information: This includes such information as the value of base

and limit registers, page tables, segment tables etc.

Accounting information: This information includes the amount of CPU and real time

used, time limits, account numbers, job or process numbers etc.

I/O status information: This information includes the list of I/O devices allocated to this

process etc.
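A PCB can be pictured as a C structure bundling these fields together. The sketch below is only illustrative; the field names and sizes are invented, and a real kernel's PCB (such as Linux's task_struct) is far richer.

#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;               /* process identifier             */
    enum proc_state state;             /* new, ready, running, ...       */
    struct pcb     *next;              /* next PCB in a scheduling queue */
    unsigned long   program_counter;   /* address of next instruction    */
    unsigned long   registers[16];     /* saved CPU registers            */
    int             priority;          /* CPU-scheduling information     */
    unsigned long   base, limit;       /* memory-management information  */
    unsigned long   cpu_time_used;     /* accounting information         */
    int             open_devices[8];   /* I/O status information         */
};

int main(void)
{
    struct pcb p = { .pid = 1, .state = NEW };
    printf("created PCB for pid %d\n", p.pid);
    return 0;
}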

A process is a program that performs a single thread of execution. A single thread of control

allows the process to perform only one task at one time. The modern OS extended the process

concept to allow a process to have multiple threads of execution.

Process Scheduling

The objective of multiprogramming is to have some process running at all times to maximize

CPU utilization. The objective of time sharing is to switch the CPU among processes so

frequently that users can interact with each program while it is running. To meet these

objectives, the process scheduler selects an available process for program execution on the

CPU.


Scheduling Queues

As processes enter the system, they are put inside the job queue, which consists of all

processes in the system. The processes that are residing in main memory and are ready and

waiting to execute are kept on a list called the ready queue. This queue is stored as a linked list.

A ready queue header contains pointers to the first and final PCB’s in the list. Each PCB includes

a pointer field that points to the next PCB in the ready queue.
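Reusing the struct pcb sketched earlier, the ready queue can be modeled as a header holding pointers to the first and final PCBs, with enqueue and dequeue working through each PCB's next field. This is a sketch of the data structure described above, not any real kernel's implementation.

#include <stddef.h>

struct ready_queue {
    struct pcb *head;                  /* first PCB in the list */
    struct pcb *tail;                  /* final PCB in the list */
};

/* append a PCB at the tail of the ready queue */
void enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;                   /* queue was empty */
    q->tail = p;
}

/* remove and return the PCB at the head (NULL if the queue is empty) */
struct pcb *dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (q->head == NULL)
            q->tail = NULL;
    }
    return p;
}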

When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O

request. The list of processes waiting for a particular I/O device is called a device queue. Each

device has its own device queue. A common representation for process scheduling is queuing

diagram.


Each rectangular box represents a queue. Two types of queues are present: ready queue and a

set of device queues. Circles represent the resources that serve the queues and the arrows

indicate the flow of processes in the system. A new process is initially put in the ready queue. It

waits there until it is selected for execution or is dispatched. Once the process is allocated the

CPU and is executing, one of the following events might occur –

a) The process could issue an I/O request and then be placed in the I/O queue.

b) The process could create a new subprocess and wait for the subprocess’s termination.

c) The process could be removed forcibly from the CPU as a result of an interrupt

and be put back in the ready queue.

A process continues this cycle until it terminates, at which time it is removed from all queues

and has its PCB and resources deallocated.

Schedulers

A process migrates among the various scheduling queues throughout its lifetime. For scheduling purposes, the OS must select processes from these queues, and this selection is carried out by a scheduler. The long-term scheduler, or job scheduler, selects processes from the job pool on disk and loads

them into memory for execution. The short term scheduler or CPU scheduler selects from

among the processes that are ready to execute and allocates the CPU to one of them. The

distinction between these two lies in the frequency of execution. The short term scheduler

must select a new process for the CPU frequently. The long-term scheduler controls the degree of multiprogramming (the number of processes in memory).

Most processes can be described as either I/O bound or CPU bound. An I/O bound process is

one that spends more of its time doing I/O than it spends doing computations. A CPU bound

process generates I/O requests infrequently, using more of its time doing computations. The long-

term scheduler must select a good mix of I/O bound and CPU bound processes. A system with

best performance will have a combination of CPU bound and I/O bound processes.


Some operating systems such as time sharing systems may introduce additional/intermediate

level of scheduling. The idea behind the medium-term scheduler is that it can sometimes be advantageous to remove processes from memory and thus reduce the degree of multiprogramming. Later, the process can be reintroduced into memory and its execution can be

continued where it left off. This scheme is called swapping. The process is swapped out and is

later swapped in by the medium term scheduler.

Context Switch: Switching the CPU to another process requires performing a state save of the

current process and a state restore of a different process. This task is known as context switch.

The context of a process is represented in the PCB of a process. Kernel saves the context of the

old process in its PCB and loads the saved context of the new process scheduled to run. Context

switch time is pure overhead. Its speed varies from machine to machine depending on the

memory speed, the number of registers that must be copied and the existence of special

instructions. Context switch times are highly dependent on hardware support.
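Conceptually, a context switch is a state save into the old process's PCB followed by a state restore from the new one. The fragment below reuses the struct pcb sketch from earlier and is conceptual only; a real context switch is performed in assembly inside the kernel.

#include <string.h>

void context_switch(struct pcb *old, struct pcb *next,
                    unsigned long cpu_regs[16])
{
    memcpy(old->registers, cpu_regs, sizeof old->registers);   /* state save    */
    old->state = READY;
    memcpy(cpu_regs, next->registers, sizeof old->registers);  /* state restore */
    next->state = RUNNING;
}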

Operations on Processes

The processes in the system can execute concurrently, and they must be created and deleted

dynamically. Thus, the operating system must provide a mechanism (or facility) for process

creation and termination.

Process Creation

A process can create new processes during its execution. Process which creates another

process is called a parent process; the created process is called a child process resulting in a

tree of processes. Generally a process is identified and managed by a process identifier (pid).

When a process creates a subprocess, that subprocess can obtain resources from the OS

directly or it may be constrained to a subset of the parent process resources. Restricting a child

process to a subset of the parent's resources prevents any process from overloading the system

by creating too many subprocesses.

When a process creates a new process, two possibilities exist in terms of execution:

1. The parent continues to execute concurrently with its children.

2. The parent waits until some or all of its children have terminated.


There are also two possibilities in terms of the address space of the new process:

1. The child process is a duplicate of the parent process.

2. The child process has a program loaded into it.

When one process creates a new process, the identity of the newly created process is passed to

the parent so that a parent knows the identities of all its children.

In UNIX, a new process is created by the fork system call. The new process consists of a copy of

the address space of the original process. This mechanism allows the parent process to

communicate easily with its child process. Both processes (the parent and the child) continue

execution at the instruction after the fork system call, with one difference: The return code for

the fork system call is zero for the new (child) process, whereas the (nonzero) process identifier

of the child is returned to the parent.

C Program forking separate process
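A minimal version of such a program is sketched below, following the standard UNIX fork/wait pattern; loading /bin/ls in the child via execlp is an arbitrary choice for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                     /* create a new process */

    if (pid < 0) {                          /* error occurred */
        fprintf(stderr, "Fork Failed\n");
        exit(1);
    } else if (pid == 0) {                  /* child: fork returned 0 */
        execlp("/bin/ls", "ls", (char *)NULL);  /* overlay child with ls */
    } else {                                /* parent: fork returned child's pid */
        wait(NULL);                         /* wait for the child to complete */
        printf("Child Complete\n");
    }
    return 0;
}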

Process Termination

A process terminates when it finishes executing its final statement and asks the operating

system to delete it by using the exit system call. The child process may return data (output) to

the parent process. All the resources of the process-including physical and virtual memory,

open files, and I/0 buffers-are deallocated by the operating system.


A parent may terminate the execution of one of its children for a variety of reasons, such as:

The child has exceeded its usage of some of the resources that it has been allocated.

The task assigned to the child is no longer required.

The parent is exiting, and the operating system does not allow a child to continue if its

parent terminates. On such systems, if a process terminates (either normally or

abnormally), then all its children must also be terminated. This phenomenon, referred

to as cascading termination, is normally initiated by the operating system.
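As a hedged illustration, a parent on a UNIX-like system can terminate one of its children with the kill system call (the pause loop below merely stands in for the child's real work):

#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {              /* child: loop forever, pretending to work */
        for (;;)
            pause();
    }
    sleep(1);                    /* parent: let the child get started */
    kill(pid, SIGTERM);          /* terminate the child */
    wait(NULL);                  /* reap the child so it is not left a zombie */
    return 0;
}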

Interprocess Communication

A process executing in the operating system can be either an independent or a cooperating

process. It is called an independent process if it cannot affect or be affected by the other

processes executing in the system. A process is cooperating if it can affect or be affected by the

other processes executing in the system. So, any process that shares data with other processes

is a cooperating process. Many reasons exist for providing an environment for cooperating

processes:

Information sharing: Since several users may be interested in the same piece of

information, we must provide an environment to allow concurrent access to these types of

resources.

Computation speedup: If we want a particular task to run faster, we must break it into

subtasks, each of which will be executing in parallel with the others. Such a speedup can be

achieved only if the computer has multiple processing elements (such as CPUs or I/O

channels).

Modularity: We may want to construct the system in a modular fashion, dividing the system

functions into separate processes or threads.

Convenience: Even an individual user may have many tasks on which to work at one time.

For instance, a user may be editing, printing, and compiling in parallel.

For implementing cooperating processes, the OS provides the means for cooperating processes

to communicate with each other via an interprocess communication (IPC) facility. IPC provides

a mechanism to allow processes to communicate and to synchronize their actions without

sharing the same address space. IPC is particularly useful in a distributed environment where

the communicating processes may reside on different computers connected with a network. An

example is a chat program used on the World Wide Web. There are two fundamental modes of IPC:
(1) Shared memory
(2) Message passing


[Figure: communication models - (a) message passing, (b) shared memory]

In the shared memory model, a region of memory that is shared by cooperating processes is

established. Processes can then exchange information by reading and writing data to the

shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes. The shared-memory model usually has to deal with the problems of mutual exclusion and the critical-section problem.
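A minimal POSIX shared-memory sketch (the object name "/demo_shm" and the message are illustrative assumptions; a reader process would shm_open and mmap the same name; some systems need -lrt at link time):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shm";   /* illustrative object name */
    const size_t SIZE = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);  /* create the object */
    ftruncate(fd, SIZE);                              /* set its size */

    /* map the shared region into this process's address space */
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    sprintf(ptr, "hello through shared memory");      /* write into the region */
    /* a reader process would map the same name and simply read from ptr */

    munmap(ptr, SIZE);
    close(fd);
    shm_unlink(name);                                 /* remove when finished */
    return 0;
}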

Message-Passing systems

The function of a message system is to allow processes to communicate with one another

without the need to resort to shared data. If processes P and Q want to communicate, they

must send messages to and receive messages from each other; a communication link must exist

between them. Several methods for logically implementing a link and the send/receive

operations exist and are listed below:

o Direct or indirect communication

o Symmetric or asymmetric communication

o Automatic or explicit buffering

o Send by copy or send by reference

o Fixed-sized or variable-sized messages

Naming

Direct communication

Under direct communication, each process must explicitly name the recipient or sender of the communication. The send and receive primitives are defined as:

send (P, message) – send a message to process P

receive(Q, message) – receive a message from process Q

Properties of communication link in this scheme are:


Links are established automatically

A link is associated with exactly one pair of communicating processes

Between each pair there exists exactly one link

The link may be unidirectional, but is usually bi-directional

The above scheme exhibits symmetry as both the sender and receiver must name the other for

communication. A slight variation of the above scheme exhibits asymmetry in addressing. Here,

only the sender names the recipient but, the recipient need not name the sender. The send and

receive primitives for this scheme are:

send(P, message) – send a message to process P

receive(id, message) – receive a message from any process; the variable id is set to the name of the process with which communication has taken place.

The disadvantage in both symmetric and asymmetric schemes is the limited modularity of the

resulting process definitions.

Indirect Communication

With indirect communication, the messages are sent to and received from mailboxes, or ports.

A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification. In this

scheme, a process can communicate with some other process via a number of different

mailboxes. Two processes can communicate only if they share a mailbox. The send and receive

primitives are defined as follows:

send(A, message) – send a message to mailbox A.
receive(A, message) – receive a message from mailbox A.

A communication link in this scheme has the following properties

Link established only if processes share a common mailbox

A link may be associated with many processes

Each pair of processes may share several communication links, each communication link

corresponding to one mailbox

Link may be unidirectional or bi-directional

A mailbox may be owned either by a process or by the operating system. If the mailbox is

owned by a process (that is, the mailbox is part of the address space of the process), then we

distinguish between the owner (who can only receive messages through this mailbox) and the

user (who can only send messages to the mailbox). Since each mailbox has a unique owner,

there can be no confusion about who should receive a message sent to this mailbox. When a

process that owns a mailbox terminates, the mailbox disappears. Any process that

subsequently sends a message to this mailbox must be notified that the mailbox no longer

exists. Also, a mailbox owned by the operating system is independent and is not attached to


any particular process and must provide a mechanism that allows a process to create a new

mailbox, send and receive messages through the mailbox and delete a mailbox.
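POSIX message queues behave much like such operating-system-owned mailboxes. The sketch below sends and receives through one (the name "/demo_mq" and the sizes are assumptions; some systems need -lrt at link time):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr;
    attr.mq_flags = 0;
    attr.mq_maxmsg = 10;          /* bounded capacity: at most 10 messages */
    attr.mq_msgsize = 128;
    attr.mq_curmsgs = 0;

    /* create (or open) the mailbox; any process that knows the name can use it */
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0666, &attr);

    mq_send(mq, "ping", 5, 0);                 /* send(A, message) */

    char buf[128];
    mq_receive(mq, buf, sizeof buf, NULL);     /* blocking receive(A, message) */
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");                     /* delete the mailbox */
    return 0;
}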

Synchronization

Communication between processes takes place by calls to send and receive primitives. Message

passing may be either blocking or nonblocking-also known as synchronous and asynchronous.

Blocking send: The sending process is blocked until the message is received by the

receiving process or by the mailbox.

Nonblocking send: The sending process sends the message and resumes operation.

Blocking receive: The receiver blocks until a message is available.

Nonblocking receive: The receiver retrieves either a valid message or a null.

Different combinations of send and receive are possible. When both the send and receive are

blocking, we have a rendezvous between the sender and the receiver.

Buffering

Whether the communication is direct or indirect, messages exchanged by communicating

processes reside in a temporary queue. Basically, such a queue can be implemented in three

ways:

Zero capacity: The queue has maximum length 0; thus, the link cannot have any messages

waiting in it. In this case, the sender must block until the recipient receives the message.

Bounded capacity: The queue has finite length n; thus, at most n messages can reside in it.

If the queue is not full when a new message is sent, the latter is placed in the queue (either

the message is copied or a pointer to the message is kept), and the sender can continue

execution without waiting. The link has a finite capacity, however. If the link is full, the

sender must block until space is available in the queue.

Unbounded capacity: The queue has potentially infinite length; thus, any number of

messages can wait in it. The sender never blocks.

The zero-capacity case is sometimes referred to as a message system with no buffering; the

other cases are referred to as automatic buffering.
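As a rough sketch of bounded capacity, a counting semaphore can make send block only when the queue is full (the capacity of 8 and the omitted queue operations are assumptions):

#include <semaphore.h>

#define CAPACITY 8                  /* assumed queue capacity */

static sem_t slots;                 /* free slots remaining in the link */
static sem_t items;                 /* messages currently queued */

void link_init(void)
{
    sem_init(&slots, 0, CAPACITY);  /* bounded capacity: n free slots */
    sem_init(&items, 0, 0);         /* no messages waiting yet */
}

void send_msg(void)
{
    sem_wait(&slots);               /* blocks only when the queue is full */
    /* ... enqueue the message here ... */
    sem_post(&items);
}

void receive_msg(void)
{
    sem_wait(&items);               /* blocking receive: wait for a message */
    /* ... dequeue the message here ... */
    sem_post(&slots);
}

With CAPACITY set to 0 this degenerates to the zero-capacity (no buffering) case, and removing the slots semaphore altogether models the unbounded case in which the sender never blocks.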


Threads

A thread (or lightweight process) is a basic unit of CPU utilization; it consists of a program counter, a register set and a stack space. A thread shares its code section, data section and OS resources (such as open files and signals) with its peer threads. A heavyweight process is a task with a single thread.

Benefits of threads:
Responsiveness
Resource sharing
Economy
Utilization of multiprocessor (MP) architectures

Types of Threads
Kernel-supported threads (Mach and OS/2)
User-level threads
A hybrid approach implements both user-level and kernel-supported threads (Solaris 2).

Multithreading Models: Many-to-One, One-to-One, Many-to-Many
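A minimal POSIX-threads sketch showing one process running two flows of control (compile with -pthread; the worker function and its argument are illustrative):

#include <pthread.h>
#include <stdio.h>

/* the worker shares globals and open files with its peer threads,
   but has its own program counter, registers and stack */
static void *worker(void *arg)
{
    printf("hello from thread %d\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int id = 1;

    pthread_create(&tid, NULL, worker, &id);  /* start a second flow of control */
    pthread_join(tid, NULL);                  /* wait for it to finish */
    return 0;
}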


CPU Scheduling

Scheduling is a fundamental operating-system function. Process execution consists of a cycle of CPU execution and I/O wait, where processes alternate between a CPU burst and an I/O burst.

A CPU scheduler selects from among the processes in memory that are ready to execute, and

allocates the CPU to one of them. CPU scheduling decisions may take place when a process:

1. Switches from running state to waiting state {I/O request or waiting for a child process}

2. Switches from running state to ready state {when an interrupt occurs}

3. Switches from waiting state to ready {completion of I/O}

4. Terminates

Scheduling that takes place only under circumstances 1 and 4 is called nonpreemptive scheduling; otherwise, it is called preemptive. In nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. In preemptive scheduling, a process may have the CPU taken away before its current CPU burst completes.


Dispatcher is the module that gives control of the CPU to the process selected by the short-

term scheduler; this involves:

switching context

switching to user mode

jumping to the proper location in the user program to restart that program

Dispatch latency is the time taken by the dispatcher to stop one process and start another

running.

Scheduling Criteria

Different CPU-scheduling algorithms have different properties and may favor one class of

processes over another. Different criteria are taken into account for comparison of CPU-

scheduling algorithms.

CPU utilization – keep the CPU as busy as possible

Throughput – One measure of work is the number of processes completed per time

unit, called throughput.

Turnaround time – The interval from the time of submission of a process to the time of

completion is the turnaround time. Turnaround time is the sum of the periods spent

waiting to get into memory, waiting in the ready queue, executing on the CPU, and

doing I/O.

Waiting time – Waiting time is the sum of the periods spent waiting in the ready queue.

Response time – amount of time it takes from when a request was submitted until the

first response is produced, not output (for time-sharing environment)

The criteria for optimization of a CPU scheduling algorithm are Max CPU utilization, Max

throughput, Min turnaround time, Min waiting time, Min response time.

Scheduling Algorithms

First-Come, First-Served Scheduling

The simplest and a nonpreemptive CPU-scheduling algorithm is the first-come, first-served

(FCFS) scheduling algorithm. Here, the process that requests the CPU first is allocated the CPU

first. The implementation is easily managed by a FIFO queue. When CPU is free, it is allocated to

the process at the head of the queue. The average waiting time under the FCFS policy, however, is often quite long. Take an example of three processes with their CPU-burst times given in milliseconds:


Process Burst Time

P1 24

P2 3

P3 3

If the processes arrive in the order P1, P2, P3, then the resultant Gantt chart is:

| P1             | P2 | P3 |
0                24   27   30

The waiting time is 0 milliseconds for P1, 24 milliseconds for P2 and 27 milliseconds for P3, so the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. But if the processes arrive in the order P2, P3, P1, then the Gantt chart is:

| P2 | P3 | P1             |
0    3    6                30

Now the average waiting time is (6 + 0 + 3)/3 = 3 milliseconds, which is a substantial reduction.

If there is one CPU-bound process and many I/O-bound processes, and the CPU-bound process gets hold of the CPU, all the other processes must wait for the big process to get off the CPU. This is called the convoy effect; it results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.
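The FCFS waiting times above can be checked with a few lines of C (the burst values are hard-coded from the example):

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};          /* P1, P2, P3 in arrival order */
    int n = 3, wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total += wait;
        wait += burst[i];              /* every later process also waits for this burst */
    }
    printf("average waiting time = %.2f ms\n", (double)total / n);
    return 0;
}

Reordering the burst array to {3, 3, 24} reproduces the 3-millisecond average of the second arrival order.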

Shortest-Job-First Scheduling

In this approach, we associate with each process the length of its next CPU burst and use these lengths to schedule the processes: when the CPU is available, it is assigned to the process that has the smallest next CPU burst.

Consider the example,

Process Arrival Time Burst Time

P1 0.0 6

P2 0.0 8

P3 0.0 7

P4 0.0 3

Using SJF scheduling, we schedule these processes according to the following Gantt chart:

| P4 | P1   | P3    | P2     |
0    3      9       16       24


Thus, the average waiting time is (3 + 16 + 9 + 0) / 4 = 7 milliseconds

SJF is optimal, as it gives the minimum average waiting time for a given set of processes. The real problem with SJF is finding out the length of the next CPU burst. For long-term scheduling, the process time limit specified by the user at the time of submission can be taken into account, so SJF scheduling is widely used in long-term scheduling. Although SJF is optimal, it cannot be used at the level of short-term scheduling, because the length of the next CPU burst cannot be known in advance; the only way is to estimate its value. The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts:

τn+1 = α * tn + (1 - α) * τn

where:

tn is the actual length of the nth CPU burst,
α is a constant controlling how heavily the estimate weights the most recent burst, 0 ≤ α ≤ 1, and
τn is the predicted value for the nth CPU burst, so τn+1 is the prediction for the next burst.
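As a numeric sketch of this update rule (α = 0.5, the initial estimate of 10, and the burst values are assumed figures):

#include <stdio.h>

int main(void)
{
    double alpha = 0.5;                       /* assumed weighting factor */
    double tau = 10.0;                        /* assumed initial estimate */
    double t[] = {6, 4, 6, 4, 13, 13, 13};    /* assumed measured bursts */

    for (int n = 0; n < 7; n++) {
        tau = alpha * t[n] + (1 - alpha) * tau;   /* the update rule above */
        printf("after burst %d (t = %2.0f): next estimate = %.2f\n",
               n + 1, t[n], tau);
    }
    return 0;
}

With α = 0.5 the estimate tracks the recent history while old bursts decay geometrically; α = 0 would ignore new measurements entirely, and α = 1 would trust only the most recent burst.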

The formula defines an exponential average. The SJF algorithm may be either preemptive or

nonpreemptive. The choice arises when a new process arrives at the ready queue while a

previous process is executing. Preemptive SJF scheduling is sometimes called shortest-

remaining-time-first scheduling (SRTF). Consider the following example:

Process Arrival Time Burst Time

P1 0 8

P2 1 4

P3 2 9

P4 3 5

If the processes arrive at the ready queue at the times shown and need the indicated burst

times, then the resulting preemptive SJF schedule is as depicted in the following Gantt chart:

| P1 | P2   | P4    | P1     | P3     |
0    1      5       10       17       26

Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at

time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required by


process P2 (4 milliseconds), so process P1 is preempted, and process P2 is scheduled. The

average waiting time for this example is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5

milliseconds. A nonpreemptive SJF scheduling would result in an average waiting time of 7.75

milliseconds.

[Figure: a further worked example comparing SJF and SRTF schedules]

If all the jobs are of the same length, then SRTF behaves the same as FCFS (i.e., FCFS is the best one can do when all jobs have the same length).

Priority Scheduling

Here, a priority number (an integer) is associated with each process. The CPU is allocated to the process with the highest priority (by the convention used here, the smallest integer denotes the highest priority). Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm in which the priority (p) is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority, and vice versa. Consider the following example:

Process Burst Time Priority

P1 10 3

P2 1 1

P3 2 4

P4 1 5

P5 5 2

Using priority scheduling, we would schedule these processes according to the following Gantt chart:

| P2 | P5   | P1      | P3 | P4 |
0    1      6         16   18   19


The average waiting time is 8.2 milliseconds. Priority scheduling can be either preemptive or

nonpreemptive. Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity or quantities, such as time limits, memory requirements, or the number of open files, to compute the priority of a process. External priorities are set by criteria outside the operating system, such as the importance of the process, its type, or the department running the process; they may also be set by the user or by some default mechanism. A major problem with priority-scheduling

algorithms is indefinite blocking (or starvation). A process that is ready to run but lacks the CPU can be considered blocked while waiting for the CPU, and a low-priority process may never get to run. The solution is to build aging into the priority: aging is a technique of gradually increasing the priority of processes that wait in the system for a long time, as sketched below.
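A hedged sketch of aging (the step of one priority unit per 15 minutes of waiting mirrors a common textbook example; the struct fields are assumptions):

#include <stddef.h>

struct proc {
    int priority;          /* assumed field: smaller number = higher priority */
    int wait_minutes;      /* assumed field: time spent in the ready queue */
};

/* Called periodically: boost any process that has waited too long. */
void age_ready_queue(struct proc *ready[], size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (ready[i]->wait_minutes >= 15 && ready[i]->priority > 0) {
            ready[i]->priority--;        /* one step toward higher priority */
            ready[i]->wait_minutes = 0;  /* restart the aging clock */
        }
    }
}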

Round-Robin Scheduling

The round-robin (RR) scheduling is similar to FCFS but with preemption added to switch the

processes. Each process gets a small unit of CPU time called the time quantum, which is usually 10-100 milliseconds. When this time has elapsed and the process has not finished its execution, the timer raises an interrupt to the operating system; a context switch is executed, and the process is put at the tail of the ready queue. The CPU scheduler then selects the next process in the ready queue.

If there are n processes in the ready queue and the time quantum is q, then each

process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits

more than (n-1)q time units.

Consider the following example of three processes (the same burst times as in the FCFS example), where the time quantum is 4 milliseconds:

Process Burst Time
P1 24
P2 3
P3 3

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Process P1 gets the first 4 milliseconds and then its preempted after the first time quantum and

the CPU is given to the next process. Since process P2 does not need 4 milliseconds, it quits

before its time quantum expires. The CPU is then given to the next process, process P3. Once


each process has received 1 time quantum, the CPU is returned to process P1 for an additional

time quantum. The average waiting time is 17/3 = 5.66 milliseconds.

Typically, RR has a higher average turnaround time than SJF, but better response time. The performance of the RR algorithm depends heavily on the size of the time quantum. At one extreme, if the time quantum is very large (effectively infinite), the RR policy is the same as the FCFS policy. If the time quantum is very small (say, 1 microsecond), the RR approach is called processor sharing, and appears to the users as though each of n processes has its own processor running at 1/n the speed of the real processor.

The performance of the RR algorithm is also affected by the context-switch time involved: the number of context switches increases as the time quantum gets smaller.

So, we want the time quantum to be large with respect to the context-switch time: if the context-switch time is approximately 10 percent of the time quantum, then about 10 percent of the CPU time will be spent on context switches. Turnaround time also depends on the size of the time quantum: if context-switch time is added in, the average turnaround time increases for a smaller time quantum, since more context switches will be required. On the other hand, if the time quantum is too large, RR scheduling degenerates to the FCFS policy.
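A small round-robin simulation in C (all processes assumed to arrive at time 0 with zero context-switch cost; burst values taken from the example that follows):

#include <stdio.h>

int main(void)
{
    int burst[] = {20, 12, 8, 16, 4};   /* the five-process example below */
    int n = 5, q = 4;
    int rem[5], finish[5], t = 0, left = n;

    for (int i = 0; i < n; i++)
        rem[i] = burst[i];

    while (left > 0) {                  /* sweep the ready queue round-robin */
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0)
                continue;               /* already finished */
            int run = rem[i] < q ? rem[i] : q;   /* one quantum, or less */
            t += run;
            rem[i] -= run;
            if (rem[i] == 0) {
                finish[i] = t;
                left--;
            }
        }
    }

    double total = 0;
    for (int i = 0; i < n; i++) {
        int wait = finish[i] - burst[i];   /* arrival at 0: wait = finish - burst */
        printf("P%d waits %d ms\n", i + 1, wait);
        total += wait;
    }
    printf("average waiting time = %.1f ms\n", total / n);
    return 0;
}

Because every process arrives at time 0, sweeping the array in a fixed order while skipping finished processes is equivalent to a true FIFO ready queue with tail reinsertion.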

Another example:

Process Burst Time

P1 20

P2 12

P3 8

P4 16

P5 4

Assume a time quantum of 4 milliseconds and a context-switch time of 0.


The resulting Gantt chart is:

| P1 | P2 | P3 | P4 | P5 | P1 | P2 | P3 | P4 | P1 | P2 | P4 | P1 | P4 | P1 |
0    4    8    12   16   20   24   28   32   36   40   44   48   52   56   60

Each waiting time is the completion time minus the burst time:
P1: 60 - 20 = 40, P2: 44 - 12 = 32, P3: 32 - 8 = 24, P4: 56 - 16 = 40, P5: 20 - 4 = 16

The average waiting time is (40 + 32 + 24 + 40 + 16)/5 = 30.4 milliseconds.

[Figure: a comparison of turnaround times under different time quanta for the above example]


Multilevel Queue scheduling

Usually, processes can be divided into foreground (or interactive) processes and background (or

batch) processes. A multilevel queue-scheduling algorithm partitions the ready queue into

several separate queues. The processes are permanently assigned to one queue, generally

based on some property of the process, such as memory size, process priority, or process type.

Each queue has its own scheduling algorithm.

Separate queues might be used for foreground and background processes, where the

foreground queue might be scheduled by an RR algorithm, while the background queue is

scheduled by an FCFS algorithm. In addition, some form of scheduling must be done between the queues. One option is fixed-priority scheduling, in which each queue has absolute priority over lower-priority queues. Another is time slicing among the queues, where each queue gets some portion of the CPU time that it then schedules among its own processes; for example, 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

Multilevel Feedback Queue Scheduling

In a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on

entry to the system. This has the advantage of low scheduling overhead, but the disadvantage

of being inflexible.

Multilevel feedback queue scheduling, however, allows a process to move between

queues. The idea is to separate processes with different CPU-burst characteristics. If a process

uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-

bound and interactive processes in the higher-priority queues. Similarly, a process that waits

too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging

prevents starvation.

[Figure: multilevel feedback queues - queue 0 (quantum 8 ms), queue 1 (quantum 16 ms), queue 2 (FCFS)]

In the above example, the scheduler first executes all processes in queue 0. Only when queue 0

is empty will it execute processes in queue 1. Similarly, processes in queue 2 will be executed

only if queues 0 and 1 are empty. A process entering the ready queue is put in queue 0. A

process in queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this

time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1

is given a quantum of 16 milliseconds. If it does not complete, it is preempted and is put into

queue 2. Processes in queue 2 are run on an FCFS basis, only when queues 0 and 1 are empty.

This scheduling algorithm gives highest priority to any process with a CPU burst of 8

milliseconds or less. Such a process will quickly get the CPU, finish its CPU burst, and go off to

its next I/O burst. Processes that need more than 8, but less than 24, milliseconds are also

served quickly, although with lower priority than shorter processes. Long processes

automatically sink to queue 2 and are served in FCFS order with any CPU cycles left over from

queues 0 and 1.

A multilevel feedback queue scheduler is defined by the following parameters:

The number of queues

The scheduling algorithm for each queue

The method used to determine when to upgrade a process to a higher-priority queue

The method used to determine when to demote a process to a lower-priority queue

The method used to determine which queue a process will enter when that process

needs service

The definition of a multilevel feedback queue scheduler makes it the most general CPU-

scheduling algorithm. Although a multilevel feedback queue is the most general scheme, it is

also the most complex.


Multiple-Processor Scheduling

CPU scheduling becomes more complex when multiple CPUs are available. One solution is to

have one common ready queue accessed by each CPU. One approach is self-scheduling: each processor examines the common ready queue and selects a process to execute. A problem arises, however, when two processors select the same process, so access to the common queue must be coordinated. Some systems instead implement asymmetric multiprocessing, in which all scheduling decisions, I/O processing, and other system activities are handled by one single processor, the master server; the other processors execute only user code. This is far simpler than symmetric multiprocessing, because only one processor accesses the system data structures, alleviating the need for data sharing. Still, it is not as efficient.

Real-Time Scheduling

Real-time computing is divided into two types. Hard real-time systems are required to

complete a critical task within a guaranteed amount of time. Generally, a process is submitted

along with a statement of the amount of time in which it needs to complete or perform I/O.

The scheduler then either admits the process, guaranteeing that the process will complete on

time, or rejects the request as impossible. This is known as resource reservation. Hard real-

time systems are composed of special-purpose software running on hardware dedicated to

their critical process, and lack the full functionality of modern computers and operating

systems.

Soft real-time computing is less restrictive. It requires only that critical processes receive priority over less critical ones. Implementing soft real-time functionality requires careful design of the scheduler and of related aspects of the operating system.

Types of real-time Schedulers

Periodic Schedulers - Fixed Arrival Rate

E.g. Rate monotonic (RM). Tasks are periodic. Policy is shortest-period-first, so it

always runs the ready task with shortest period.

Aperiodic Schedulers - Variable Arrival Rate

E.g. earliest-deadline-first (EDF). This algorithm always schedules the ready task with the closest deadline first (see the sketch below).
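A hedged sketch of the EDF selection step (the task representation is an assumption):

#include <stddef.h>

struct task {
    long deadline;    /* assumed field: absolute deadline (e.g. clock ticks) */
    int  ready;       /* assumed field: nonzero if the task can run now */
};

/* Return the index of the ready task with the earliest deadline, or -1. */
int edf_pick(const struct task tasks[], size_t n)
{
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (tasks[i].ready &&
            (best < 0 || tasks[i].deadline < tasks[best].deadline))
            best = (int)i;
    }
    return best;
}

A rate-monotonic scheduler would look the same, except that the comparison would be on the tasks' fixed periods rather than on their dynamic deadlines.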

[Table: comparison of the different scheduling algorithms]