
OPERATING SYSTEM LECTURE
Prepared by: jaypee cornejo

CHAPTER 1: INTRODUCTION

DEFINITION OF OPERATING SYSTEM
The software responsible for controlling the allocation and usage of hardware resources such as memory, central processing unit (CPU) time, disk space, and peripheral devices. It is the foundation on which applications such as word processing and spreadsheet programs are built.

HISTORY OF OPERATING SYSTEMS

HIGHLIGHTS:
Charles Babbage (1792-1871) – English mathematician who designed the first true digital computer.
Ada Lovelace – Daughter of the British poet Lord Byron who developed software for the analytical engine; known as the world's first programmer.

1. THE FIRST GENERATION (1945-1955) VACUUM TUBES and PLUGBOARDS

1940 – The calculating engine was developed.

Mode of Operation
The programmer would sign up for a block of time on the signup sheet on the wall, then come down to the machine room, insert his or her plugboard into the computer, and spend the next few hours hoping that none of the 20,000 or so vacuum tubes would burn out during the run.

Early 1950s – The punched card was introduced.

2. THE SECOND GENERATION (1955-1965) TRANSISTORS and BATCH SYSTEMS

Mid 1950s – The transistor was introduced.
Mainframes – machines locked away in specially air-conditioned computer rooms, with staffs of professional operators to run them.

Mode of Operation
Batch System – a system that processes data in discrete groups of previously scheduled operations rather than interactively or in real time.

3. THE THIRD GENERATION (1965-1980) ICs and MULTIPROGRAMMING

Mode of Operation
Multiprogramming

4. THE FOURTH GENERATION (1980-Present) PERSONAL COMPUTERS

OPERATING SYSTEM FUNCTIONS:
1. The Operating System as an Extended Machine
- The operating system presents to the user the equivalent of an extended machine that is easier to program than the underlying hardware.
2. The Operating System as a Resource Manager
- The primary task of the operating system is to keep track of who is using which resource, to grant resource requests, to account for usage, and to mediate conflicting requests from different programs and users.

Resource management includes multiplexing (sharing) resources in two ways: in time and in space.


Kinds of Multiplexing
Time Multiplexing – different programs or users take turns using the resource.
Example: with only one CPU and multiple programs that want to run on it, the programs take turns.
Space Multiplexing – instead of the customers taking turns, each one gets part of the resource.
Example: holding several programs in memory at once rather than giving one of them all of it.

TYPES OF OPERATING SYSTEMS
1. MAINFRAME OPERATING SYSTEMS
Three kinds of services offered by operating systems for mainframes:
- Batch System – one that processes routine jobs without any interactive user present.
- Transaction Processing Systems – handle large numbers of small requests.
- Timesharing Systems – allow multiple remote users to run jobs on the computer at once, such as querying a big database.

2. SERVER OPERATING SYSTEMS

- They run on servers, which are very large personal computers, workstations, or even mainframes.
- They can provide print service, file service, or web service.

3. MULTIPROCESSOR OPERATING SYSTEMS

- Connection of multiple CPUs into a single system to get major-league computing power.
- Special operating systems for multiprocessors with special features for communication and connectivity.

4. PERSONAL COMPUTER OPERATING SYSTEMS
- Their job is to provide a good interface to a single user.

5. REAL-TIME OPERATING SYSTEMS
- These systems are characterized by time as a key parameter.
Kinds of Real-Time Systems
1. Hard real-time system – the action must occur at a certain moment (or within a range).
2. Soft real-time system – missing an occasional deadline is acceptable.

6. EMBEDDED OPERATING SYSTEMS
PDA (Personal Digital Assistant) or Palmtop Computer – a small computer that fits in a shirt pocket and performs a small number of functions such as an electronic address book and memo pad.

7. SMART CARD OPERATING SYSTEMS
- The smallest operating systems run on smart cards, which are credit card-sized devices containing a CPU chip.

OPERATING SYSTEM CONCEPTS
1. Processes – a key concept in all operating systems; a process is a program in execution.
2. Memory Management – manages the memory used to hold executing programs.
3. File System – another key concept supported by virtually all operating systems.


CHAPTER 2: PROCESSES

PROCESS
- a program in execution
- an asynchronous activity
- the ‘animated’ spirit of a procedure
- the entity to which processors are assigned

Operations on Processes
1. Process Creation
Four fundamental events that cause processes to be created:
1. System initialization
2. Execution of a process-creation system call by a running process
3. A user request to create a new process
4. Initiation of a batch job

2. Process Termination
A process will terminate due to one of the following conditions:
1. Normal exit (voluntary)
2. Error exit (voluntary)
3. Fatal error (involuntary)
4. Killed by another process (involuntary)
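To make these events concrete, here is a minimal sketch in Python (not part of the handout) that creates a child process with os.fork() on a Unix-like system and waits for it; the child performs a normal (voluntary) exit, and the parent also checks for the killed-by-another-process case.

import os
import sys

# Minimal sketch of process creation and termination on a Unix-like system.
# Assumes os.fork() is available (POSIX); illustrative only.

pid = os.fork()              # process-creation system call issued by a running process

if pid == 0:
    # Child process: do a little work, then terminate with a normal exit.
    print(f"child  pid={os.getpid()} created by parent {os.getppid()}")
    sys.exit(0)              # normal exit (voluntary)
else:
    # Parent process: wait for the child and inspect how it terminated.
    _, status = os.waitpid(pid, 0)
    if os.WIFEXITED(status):
        print(f"parent pid={os.getpid()}: child exited with code {os.WEXITSTATUS(status)}")
    elif os.WIFSIGNALED(status):
        print(f"parent: child was killed by signal {os.WTERMSIG(status)}")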

Process Hierarchies
In some systems, when a process creates another process, the parent process and child process continue to be associated in certain ways. The child process can itself create more processes, forming a process hierarchy.

Windows does not have any concept of a process hierarchy. All processes are equal. The only place where there is something like a process hierarchy is that when a process is created, the parent is given a special token (called a handle) that it can use to control the child.

Implementation of Processes
To implement the process model, the operating system maintains a table (an array of structures), called the process table, with one entry per process. This entry contains information about the process's state, its program counter, stack pointer, memory allocation, the status of its open files, its accounting and scheduling information, and everything else about the process that must be saved when the process is switched from the running to the ready or blocked state so that it can be restarted later as if it had never been stopped.

Process Control Block – a data structure containing information that allows the OS to locate all the key information about a process including its current state, identification, priority, memory, resources, register values, etc.


Some of the fields of a typical process table entry are shown below, grouped by area:

Process management: Registers, Program counter, Program status word, Stack pointer, Process state, Priority, Scheduling parameters, Process ID, Parent process, Process group, Signals, Time when process started, CPU time used, Children's CPU time, Time of next alarm

Memory management: Pointer to text segment, Pointer to data segment, Pointer to stack segment

File management: Root directory, Working directory, File descriptors, User ID, Group ID
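As an illustration only (not from the handout), the same grouping can be written as a process-control-block data structure; the field names and types below are assumptions chosen to mirror the table above.

from dataclasses import dataclass, field
from typing import List

# Illustrative process control block (PCB) mirroring the process-table fields above.
# Field names and types are assumptions for the sketch, not a real OS's layout.

@dataclass
class PCB:
    # Process management
    pid: int
    parent_pid: int
    state: str                      # e.g. "running", "ready", "blocked"
    priority: int
    program_counter: int
    stack_pointer: int
    registers: List[int] = field(default_factory=list)
    cpu_time_used: float = 0.0
    # Memory management
    text_segment: int = 0           # base address of the text segment
    data_segment: int = 0
    stack_segment: int = 0
    # File management
    root_directory: str = "/"
    working_directory: str = "/"
    open_files: List[int] = field(default_factory=list)   # file descriptors
    uid: int = 0
    gid: int = 0

# The process table is simply a collection of such entries, one per process.
process_table = {0: PCB(pid=0, parent_pid=0, state="running", priority=0,
                        program_counter=0, stack_pointer=0)}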

THREADS
The other concept a process has is a thread of execution, usually shortened to just thread. The thread has a program counter that keeps track of which instruction to execute next. It has registers, which hold its current working variables. It has a stack, which contains the execution history, with one frame for each procedure called but not yet returned from.

PROCESS SCHEDULING
Scheduler – decides which job to run first.
Scheduling Algorithm – the algorithm used by the scheduler.

Types of Scheduling Algorithms
1. Non-preemptive – the CPU cannot be taken away from a process (no interrupts).
2. Preemptive – the CPU can be taken away from a process (interrupts are allowed).

Criteria for a Good Scheduling Algorithm
1. Throughput – the number of jobs per hour that the system completes.
2. Turnaround Time – the statistically average time from the moment a batch job is submitted until the moment it is completed; the interval from the time of submission to the time of completion. It measures how long the average user has to wait for the output.
3. Response Time – the time between issuing a command and getting the result; the time from the submission of the request until the first response is produced.

Priorities – assigned by the system automatically or may be assigned externally.

Types of Priorities
1. Static Priorities – remain the same throughout the duration of a process.
2. Dynamic Priorities – change in response to changing system conditions.


Scheduling in Batch Systems
1. First Come First Serve (FCFS) – a non-preemptive scheduling algorithm in which the processes are assigned the CPU in the order they request it.
2. Shortest Job First (SJF) – a non-preemptive discipline used primarily for scheduling batch jobs. It minimizes the average waiting time of jobs, but long jobs can experience lengthy waits (a sketch follows this list).
3. Shortest Remaining Time First (SRTF) – the preemptive counterpart of SJF, in which a running process may be preempted by a new process with a shorter estimated time. It always chooses the process whose remaining time is the shortest.
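A minimal non-preemptive SJF sketch in Python (an illustration, not part of the handout): among the jobs that have already arrived, it always dispatches the one with the shortest burst time.

# Non-preemptive Shortest Job First: illustrative sketch (not from the handout).
# Each job is (name, arrival_time, burst_time).

def sjf(jobs):
    remaining = sorted(jobs, key=lambda j: j[1])   # order by arrival time
    time, schedule = 0, []
    while remaining:
        ready = [j for j in remaining if j[1] <= time]
        if not ready:                              # CPU idle until the next arrival
            time = min(j[1] for j in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda j: j[2])   # shortest burst first
        remaining.remove((name, arrival, burst))
        schedule.append((name, time, time + burst))             # (job, start, finish)
        time += burst
    return schedule

print(sjf([("P1", 0, 6), ("P2", 0, 8), ("P3", 0, 7), ("P4", 0, 3)]))
# -> P4 runs first (burst 3), then P1, P3, P2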

Scheduling in Interactive Systems
1. Round-Robin Scheduling – processes are dispatched first-in, first-out, but they are given the CPU only for a limited amount of time called a time slice or quantum (a sketch follows this list).
2. Priority Scheduling – each process is assigned a priority, and the process with the highest priority is allowed to run.
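A minimal round-robin sketch in Python (illustrative only, with an assumed quantum of 4 time units): each runnable process gets at most one quantum before going to the back of the queue.

from collections import deque

# Round-robin scheduling sketch (not from the handout).
# processes: list of (name, burst_time); quantum in the same time unit.

def round_robin(processes, quantum=4):
    queue = deque(processes)              # ready queue, FIFO order
    time, timeline = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)     # run for one quantum or until done
        timeline.append((name, time, time + run))
        time += run
        if remaining > run:               # not finished: go to the back of the queue
            queue.append((name, remaining - run))
    return timeline

for name, start, end in round_robin([("P1", 10), ("P2", 5), ("P3", 8)]):
    print(f"{name}: {start}-{end}")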

Example of First Come First Serve Scheduling:

Process   Burst Time (ms)
P1        24
P2        3
P3        3

Gantt chart (processes run in order of arrival):
| P1 | P2 | P3 |
0    24   27   30

CPU utilization is 100%.

Turnaround times:
TA(P1) = 24 - 0 = 24 ms
TA(P2) = 27 - 0 = 27 ms
TA(P3) = 30 - 0 = 30 ms
Average turnaround time = 81/3 = 27 ms

Waiting times:
WT(P1) = 0, WT(P2) = 24, WT(P3) = 27
Average waiting time = 51/3 = 17 ms
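The averages above can be checked with a small Python sketch (not from the handout); it assumes all three processes arrive at time 0, as in the example.

# First Come First Serve: reproduce the worked example above.
# All processes are assumed to arrive at time 0.

bursts = [("P1", 24), ("P2", 3), ("P3", 3)]

time = 0
turnaround, waiting = {}, {}
for name, burst in bursts:            # processes run in arrival order
    waiting[name] = time              # time spent waiting before first run
    time += burst
    turnaround[name] = time           # completion time - arrival time (arrival = 0)

print("average turnaround:", sum(turnaround.values()) / len(bursts))   # 27.0 ms
print("average waiting   :", sum(waiting.values()) / len(bursts))      # 17.0 ms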

Example no. 2:

Process   Arrival   Burst Time
P1        1         5
P2        0         10
P3        2         7
P4        3         6


DEADLOCK CHARACTERIZATION

Necessary Conditions
A deadlock situation can arise if the following conditions hold simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a nonsharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2. Hold and wait: There must exist a process that is holding at least one resource and is waiting to acquire additional resources that are currently being held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after the process has completed its task.
4. Circular wait: There must exist a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

DEADLOCK MODELING

Resource-allocation graph
System resource-allocation graph – a directed graph that consists of two kinds of nodes: processes, shown as circles, and resources, shown as squares.

[Figure: three example graphs, (a), (b), and (c), described below.]

(a) Holding a resource: resource R is currently assigned to process A.
(b) Requesting a resource: process B is waiting for resource S.
(c) Deadlock: process C is waiting for resource T, which is currently held by process D. Process D is not about to release resource T because it is waiting for resource U, held by C.


Example:

The three processes run the following scripts:

Process A      Process B      Process C
Request R      Request S      Request T
Request S      Request T      Request R
Release R      Release S      Release T
Release S      Release T      Release R

If the requests are granted in this order, the system deadlocks:
1. A requests R
2. B requests S
3. C requests T
4. A requests S
5. B requests T
6. C requests R
Deadlock – A holds R and waits for S, B holds S and waits for T, and C holds T and waits for R.
[Figure: resource-allocation graph snapshots (1)-(6) for this sequence.]

Deadlock Prevention
If the requests are granted in a different order instead, no deadlock occurs:
1. A requests R
2. C requests T
3. A requests S
4. C requests R
5. A releases R
6. A releases S
No deadlock.
[Figure: resource-allocation graph snapshots (1)-(6) for this sequence.]


DEADLOCK DETECTION AND RECOVERY

Deadlock Detection with One Resource of Each Type

Case: Only one resource of each type exists. Such a system might have one scanner, one CD recorder, one plotter, and one tape drive.

Situation:
Consider a system with seven processes, A through G, and six resources, R through W. Which resources are currently owned and which ones are currently being requested is as follows:
1. Process A holds R and wants S.
2. Process B holds nothing but wants T.
3. Process C holds nothing but wants S.
4. Process D holds U and wants S and T.
5. Process E holds T and wants V.
6. Process F holds W and wants S.
7. Process G holds V and wants U.
Show this in a resource-allocation graph.
Question: Is this system deadlocked, and if so, which processes are involved?

Answer:
[Figure: resource-allocation graph built from the seven statements above.]
- Processes involved in the deadlock: the graph contains the cycle D → T → E → V → G → U → D, so processes D, E, and G are deadlocked.

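As a cross-check (illustrative only, not part of the handout), a short Python sketch can build the directed resource-allocation graph from the seven statements above and search for a cycle; edges go from a resource to the process holding it and from a process to each resource it wants.

# Deadlock detection with one resource of each type: look for a cycle in the
# resource-allocation graph. Edge resource->process means "held by";
# edge process->resource means "wanted by".

edges = {
    "R": ["A"], "A": ["S"],          # A holds R, wants S
    "B": ["T"],                      # B wants T
    "C": ["S"],                      # C wants S
    "U": ["D"], "D": ["S", "T"],     # D holds U, wants S and T
    "T": ["E"], "E": ["V"],          # E holds T, wants V
    "W": ["F"], "F": ["S"],          # F holds W, wants S
    "V": ["G"], "G": ["U"],          # G holds V, wants U
}

def find_cycle(graph):
    """Depth-first search; returns the first cycle found as a list of nodes."""
    def dfs(node, path, visiting):
        for nxt in graph.get(node, []):
            if nxt in visiting:                    # back edge -> cycle
                return path[path.index(nxt):] + [nxt]
            cycle = dfs(nxt, path + [nxt], visiting | {nxt})
            if cycle:
                return cycle
        return None
    for start in graph:
        cycle = dfs(start, [start], {start})
        if cycle:
            return cycle
    return None

print(find_cycle(edges))
# -> ['T', 'E', 'V', 'G', 'U', 'D', 'T']: the cycle contains processes D, E, and G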

CHAPTER 3: MEMORY MANAGEMENT

Address Binding
Usually, a program resides on a disk as a binary executable file. The program must be brought into memory and placed within a process for it to be executed. Depending on the memory management in use, the process may be moved between disk and memory during its execution.

The collection of processes on the disk that are waiting to be brought into memory for execution forms the input queue.

The normal procedure is:
1. Select one of the processes in the input queue and load that process into memory.
2. The process is executed; it accesses instructions and data from memory.
3. The process terminates, and its memory space is declared available.

Most systems allow a user process to reside in any part of the physical memory. Thus, although the address space of the computer starts at 00000, the first address of the user process does not need to be 00000. This arrangement affects the addresses that the user program can use. In most cases, a user program can go through several steps (some of which may be optional) before being executed (Figure 1). Addresses may be represented in different ways during these steps. Addresses in the source program are generally symbolic (such as COUNT). A compiler will typically bind these symbolic addresses to relocatable addresses (such as “14 bytes from the beginning of this module”). The linkage editor or loader will in turn bind these relocatable addresses to absolute addresses (such as 74014). Each binding is a mapping from one address space to another.

Classically, the binding of instructions and data to memory addresses can be done at any step along the way:

Compile time: If it is known at compile time where the process will reside in memory, then absolute code can be generated. For example, if it is known a priori that a user process resides starting at location R, then the generated compiler code will start at that location and extend up from there. If, at some later time, the starting location changes, then it will be necessary to recompile this code. The MS-DOS .COM-format programs are absolute code bound at compile time.

Load time: If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. In this case, final binding is delayed until load time. If the starting address changes, we need only to reload the user code to incorporate this changed value.

Execution time: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. Special hardware must be available for this scheme to work.

Dynamic Loading
To obtain better memory space utilization, we can use dynamic loading. With dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and is executed. When a routine needs to call another routine, the calling routine first checks to see whether the other routine has been loaded. If it has not been, the relocatable linking loader is called to load the desired routine into memory and to update the program's address tables to reflect this change. Then, control is passed to the newly loaded routine.

The advantage of dynamic loading is that an unused routine is never loaded. This scheme is particularly useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines. In this case, although the total program size may be large, the portion that is actually used (and hence actually loaded) may be much smaller.
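A loose user-level analogy in Python (an illustration, not the loader mechanism the text describes): importlib can defer loading a module until the first call that needs it, so an unused routine is never brought into memory.

import importlib
import sys

# Load the statistics module only when its routine is first needed,
# analogous to loading a routine on its first call.

_stats = None

def median(values):
    global _stats
    if _stats is None:                       # "has the routine been loaded yet?"
        _stats = importlib.import_module("statistics")
    return _stats.median(values)

print("statistics" in sys.modules)           # usually False: not loaded yet
print(median([3, 1, 2]))                     # 2: module loaded on first use
print("statistics" in sys.modules)           # True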

[Figure 1. Multi-step processing of a user program: source program → compiler or assembler → object module → (together with other object modules) linkage editor → load module → (together with the system library and any dynamically loaded system library) loader → in-memory binary memory image. The first step happens at compile time, the next at load time, and the last at execution time (run time).]



Dynamic Linking
Notice that Figure 1 also shows dynamically linked libraries. Most operating systems support only static linking, in which system language libraries are treated like any other object module and are combined by the loader into the binary program image. The concept of dynamic linking is similar to that of dynamic loading. Rather than loading being postponed until execution time, linking is postponed. This feature is usually used with system libraries, such as language subroutine libraries. Without this facility, all programs on a system need to have a copy of their language library (or at least the routines referenced by the program) included in the executable image. This requirement wastes both disk space and main memory. With dynamic linking, a stub is included in the image for each library-routine reference. This stub is a small piece of code that indicates how to locate the appropriate memory-resident library routine, or how to load the library if the routine is not already present.

When this stub is executed, it checks to see whether the needed routine is already in memory. If the routine is not in memory, the program loads it into memory. Either way, the stub replaces itself with the address of the routine and executes the routine. Thus, the next time that code segment is reached, the library routine is executed directly, incurring no cost for dynamic linking. Under this scheme, all processes that use a language library execute only one copy of the library code.
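For a concrete feel (an illustration only, assuming a typical Linux system where the shared C math library libm can be found), Python's ctypes can bind to a routine in a shared system library at run time rather than having a private copy built into the program.

import ctypes
import ctypes.util

# Bind to the shared C math library at run time (dynamic linking/loading),
# instead of compiling a private copy of the routine into the program.
# Assumes a Unix-like system where ctypes.util.find_library can locate libm.

libm_path = ctypes.util.find_library("m")        # e.g. "libm.so.6" on Linux
libm = ctypes.CDLL(libm_path)

libm.cos.restype = ctypes.c_double               # declare the routine's signature
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))                             # 1.0, executed from the shared library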

Overlays

The entire program and data of a process must be in physical memory for the process to execute. The size of a process is limited to the size of physical memory. So that a process can be larger than the amount of memory allocated to it, a technique called overlays is sometimes used.

As an example, consider a two-pass assembler. During pass 1, it constructs a symbol table; then during pass 2, it generates machine-language code.

Assume that the sizes of these components are as follows (K stands for “kilobyte,” which is 1024 bytes):

Pass 1             70K
Pass 2             80K
Symbol table       20K
Common routines    30K
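A quick check of the arithmetic, assuming the 10K overlay driver shown in the overlay figure later in this handout: loading everything at once would require 70 + 80 + 20 + 30 = 200K, whereas overlay A (pass 1, symbol table, common routines, driver) needs 70 + 20 + 30 + 10 = 130K and overlay B (pass 2, symbol table, common routines, driver) needs 80 + 20 + 30 + 10 = 140K, so the assembler can run in 140K of memory instead of 200K.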


Logical versus Physical Address Space
Logical Address – an address generated by the CPU. A logical address is sometimes called a virtual address in the execution-time address-binding scheme.
Physical Address – an address seen by the memory unit; it is loaded into the memory-address register of the memory.
Memory-Management Unit (MMU) – a hardware device that does the run-time mapping from virtual to physical addresses.

[Figure: Overlays for a two-pass assembler - a 10K overlay driver stays in memory together with the 20K symbol table and 30K common routines; pass 1 (70K) and pass 2 (80K) are brought in as overlays.]

Relocation Register – the base register used by the memory-management unit.
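A minimal sketch of this run-time mapping (illustrative only): the MMU adds the relocation register to every logical address, as in the dynamic-relocation figure below, where logical address 346 plus relocation value 14000 gives physical address 14346. The limit check is an assumption added for completeness.

# Run-time mapping of logical to physical addresses with a relocation register.
# The 14000/346 values mirror the dynamic-relocation figure; the limit check
# is an added assumption for this sketch.

RELOCATION_REGISTER = 14000     # base: where the process starts in physical memory
LIMIT_REGISTER = 64 * 1024      # size of the process's logical address space

def mmu_translate(logical_address):
    if not (0 <= logical_address < LIMIT_REGISTER):
        raise MemoryError(f"trap: logical address {logical_address} out of range")
    return logical_address + RELOCATION_REGISTER

print(mmu_translate(346))       # 14346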

TWO GENERAL APPROACHES TO MEMORY MANAGEMENT

I. SWAPPING
Consists of bringing each process in its entirety, running it for a while, then putting it back on the disk.
II. OVERLAYS (VIRTUAL MEMORY)
Allows programs to run even when they are only partially in main memory.

SWAPPING

[Figure: snapshots (A)-(G) showing how memory allocation changes as processes come into memory and leave it. The shaded regions are unused memory.]

THE OPERATION OF A SWAPPING SYSTEM
Initially only process A is in memory. Then processes B and C are created or swapped in from disk. In (D), A is swapped out to disk; then process D comes in and B goes out. Finally A comes in again. Since A is now at a different location, addresses contained in it must be relocated, either by software when it is swapped in or (more likely) by hardware during program execution.

[Figure: Dynamic relocation using a relocation register - the CPU issues logical address 346, the MMU adds the relocation register value 14000, and physical address 14346 goes to memory.]

[Figure: the seven swapping snapshots (A)-(G), each with the operating system at the bottom of memory and processes A, B, C, and D coming and going as described above.]


CONTIGUOUS ALLOCATION
The main memory must accommodate both the operating system and the various user processes. The memory is usually divided into two partitions: one for the resident operating system and one for the user processes.

1. SINGLE-PARTITION ALLOCATION

[Figure: memory layout showing the region reserved for the OS, the memory actually allocated to the process, and the allocated but unused (wasted) remainder.]

Example:

OS size = 32K, memory size = 640K (so 608K is available for a user process).

Timeline:
Before 9:00 – only the OS is in memory.
@ 9:00 – P1 (200K) arrives and starts.
@ 9:10 – P2 (250K) arrives and waits.
@ 9:20 – P1 terminates; P2 starts.
@ 9:30 – P3 (300K) arrives and waits.

@ 9:45 – P2 terminates; P3 starts.
@ 10:15 – P3 terminates.

[Memory maps at each point (the OS occupies 0-32K in every snapshot): with no user process, 608K is available; while P1 (200K) runs it occupies 32K-232K, leaving 408K wasted; while P2 (250K) runs it occupies 32K-282K, leaving 358K wasted; while P3 (300K) runs it occupies 32K-332K, leaving 308K wasted.]

2. DYNAMIC-PARTITION ALLOCATION
The most common strategies used to select a free hole from the set of available holes (a sketch follows this list):
1. First-fit – allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough.
2. Best-fit – allocate the smallest hole that is big enough. We must search the entire list, unless the list is kept ordered by size. This strategy produces the smallest left-over hole.
3. Worst-fit – allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. This strategy produces the largest left-over hole, which may be more useful than the smaller left-over hole from a best-fit approach.
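A small Python sketch of the three strategies (illustrative only; holes are given as hypothetical (start, size) pairs and the chosen hole's start address is returned):

# First-fit, best-fit, and worst-fit hole selection (illustrative sketch).
# holes: list of (start_address, size) free blocks; request: size needed.

def first_fit(holes, request):
    for start, size in holes:                 # stop at the first hole big enough
        if size >= request:
            return start
    return None

def best_fit(holes, request):
    fits = [(size, start) for start, size in holes if size >= request]
    return min(fits)[1] if fits else None     # smallest hole that is big enough

def worst_fit(holes, request):
    fits = [(size, start) for start, size in holes if size >= request]
    return max(fits)[1] if fits else None     # largest hole

holes = [(100, 50), (300, 200), (600, 120)]   # three free holes
print(first_fit(holes, 110))   # 300  (first hole with size >= 110)
print(best_fit(holes, 110))    # 600  (120 is the smallest adequate hole)
print(worst_fit(holes, 110))   # 300  (200 is the largest hole)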

OVERLAYS (VIRTUAL MEMORY)
A technique that allows the execution of processes that may not be completely in memory. The main visible advantage of this scheme is that programs can be larger than physical memory.

The solution for programs that may be too big to fit in the available memory was to split the programs into pieces.

Programmer – does the splitting of the program into pieces.
Paging – a technique used by most virtual memory systems.
Virtual Addresses – program-generated addresses that form the virtual address space.

Example:
A computer can generate 16-bit addresses, from 0 up to 64K (virtual addresses). This computer, however, has only 32KB of physical memory. A complete copy of a program's core image, up to 64KB, must be present on the disk, however, so that pieces can be brought in as needed.

The virtual address space is divided up into units called pages. The corresponding units in the physical memory are called page frames.
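A minimal sketch of the page-to-frame mapping for this example (illustrative only): 4KB pages are assumed, so the 64KB virtual address space has 16 pages and the 32KB physical memory has 8 page frames; the page-table contents below are made up for the sketch.

# Virtual-to-physical address translation with paging (illustrative sketch).
# Assumes 4 KB pages: 64 KB virtual space -> 16 pages, 32 KB physical -> 8 frames.

PAGE_SIZE = 4096

# Hypothetical page table: page number -> frame number (None = not in memory).
page_table = {0: 2, 1: 1, 2: 6, 3: 0, 4: 4, 5: 3, 6: None, 7: None}

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        raise MemoryError(f"page fault: page {page} is not in a page frame")
    return frame * PAGE_SIZE + offset

print(translate(0))        # page 0 -> frame 2: physical address 8192
print(translate(8200))     # page 2, offset 8 -> frame 6: 24576 + 8 = 24584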
