COT 4600 Operating Systems Spring 2011
Dan C. Marinescu
Office: HEC 304. Office hours: Tu-Th 5:00-6:00 PM
Lecture 21
Lecture 21 - Thursday April 7, 2011
Last time: Scheduling
Today: The scheduler; Multi-level memories
Next time: Memory characterization; Multilevel memory management using virtual memory; Adding multi-level memory management to virtual memory; Page replacement algorithms
2
Lecture 20
The scheduler
The system component that manages the allocation of the processor/core.
It runs inside the processor thread and implements the scheduling policies.
Other functions:
Estimates the length of the CPU burst
Manages multiple queues of threads
CPU burst
CPU burst: the time during which a thread/process executes on the CPU before it blocks (e.g., for I/O) or is preempted.
Estimating the length of next CPU burst
Estimated from the lengths of previous CPU bursts, using exponential averaging:
1. t_n = actual length of the n-th CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_{n+1} = α t_n + (1 − α) τ_n
Exponential averaging
α = 0: τ_{n+1} = τ_n (recent history does not count)
α = 1: τ_{n+1} = t_n (only the actual last CPU burst counts)
If we expand the formula, we get:
τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0
Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
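The recurrence above can be sketched in a few lines of Python. The α value, initial estimate τ_0, and the sample burst lengths are illustrative, not values from the lecture:

```python
def predict_next_burst(bursts, alpha=0.5, tau0=10.0):
    """Predict the next CPU burst by exponential averaging:
    tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
    starting from the initial estimate tau0."""
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With alpha = 0.5 and tau0 = 10, bursts 6, 4, 6, 4 give predictions
# 8, 6, 6, 5 in turn.
print(predict_next_burst([6, 4, 6, 4]))  # 5.0
```

Note the two extreme cases from the slide: with alpha=0.0 the function always returns tau0 (history ignored), and with alpha=1.0 it returns the last burst length.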
Predicting the length of the next CPU burst
Multilevel queue
Ready queue is partitioned into separate queues, each with its own scheduling algorithm:
foreground (interactive): RR
background (batch): FCFS
Scheduling between the queues:
Fixed priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS.
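The 80%/20% time-slice split can be sketched as a per-tick dispatch rule; the function name and the five-tick granularity are illustrative assumptions, not part of the lecture:

```python
def pick_queue(tick):
    """Give 4 of every 5 time slices to the foreground (RR) queue and
    1 of every 5 to the background (FCFS) queue: an 80/20 split."""
    return "foreground" if tick % 5 != 4 else "background"

# Over 100 ticks, foreground gets 80 slices and background gets 20.
picks = [pick_queue(t) for t in range(100)]
```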
Multilevel queue scheduling
Multilevel feedback queue
A process can move between the various queues; aging can be implemented this way.
A multilevel-feedback-queue scheduler is characterized by:
the number of queues
the scheduling algorithm for each queue
the strategy for when to upgrade/demote a process
the strategy to decide which queue a process will enter when it needs service
Example of a multilevel feedback queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR with time quantum 16 milliseconds
Q2 – FCFS
Scheduling:
A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
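How a single job migrates through these three queues can be traced with a minimal sketch (it ignores competition from other jobs; the function and queue labels are illustrative):

```python
def mfq_trace(burst_ms):
    """Trace (queue, ms run) for one job under the three-queue example:
    Q0: RR, 8 ms quantum; Q1: RR, 16 ms quantum; Q2: FCFS (runs to completion)."""
    trace, remaining = [], burst_ms
    for queue, quantum in (("Q0", 8), ("Q1", 16)):
        ran = min(remaining, quantum)
        trace.append((queue, ran))
        remaining -= ran
        if remaining == 0:      # job finished within this queue's quantum
            return trace
    trace.append(("Q2", remaining))  # whatever is left runs FCFS in Q2
    return trace

# A 5 ms job finishes in Q0; a 40 ms job uses 8 ms in Q0, 16 ms in Q1,
# and the remaining 16 ms in Q2.
print(mfq_trace(40))
```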
Multilevel feedback queues
Unix scheduler
The higher the number quantifying the priority, the lower the actual process priority.
Priority = (recent CPU usage)/2 + base
Recent CPU usage: how often the process has used the CPU since the last time priorities were calculated.
Does this strategy raise or lower the priority of CPU-bound processes?
Example:
base = 60
Recent CPU usage: P1 = 40, P2 = 18, P3 = 10
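Plugging the example numbers into the formula answers the question above. This is a sketch; the use of integer division is an assumption about how the computation is done:

```python
def unix_priority(recent_cpu_usage, base=60):
    # Priority = (recent CPU usage)/2 + base.
    # Remember: a larger number means a *lower* actual priority.
    return recent_cpu_usage // 2 + base

priorities = {name: unix_priority(usage)
              for name, usage in {"P1": 40, "P2": 18, "P3": 10}.items()}
# P1 -> 80, P2 -> 69, P3 -> 65: the heaviest CPU user (P1) ends up with
# the largest number, i.e. the lowest actual priority, so the strategy
# lowers the priority of CPU-bound processes.
```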
Comparison of scheduling algorithms

Throughput:
Round Robin: may be low if quantum is too small
FIFO: not emphasized
MFQ (Multi-Level Feedback Queue): may be low if quantum is too small
SJF (Shortest Job First): high
SRJN (Shortest Remaining Job Next): high

Response time:
Round Robin: shortest average response time if quantum chosen correctly
FIFO: may be poor
MFQ: good for I/O-bound but poor for CPU-bound processes
SJF: good for short processes but may be poor for longer processes
SRJN: good for short processes but may be poor for longer processes
I/O-bound processes:
Round Robin: no distinction between CPU-bound and I/O-bound
FIFO: no distinction between CPU-bound and I/O-bound
MFQ: get a high priority if CPU-bound processes are present
SJF: no distinction between CPU-bound and I/O-bound
SRJN: no distinction between CPU-bound and I/O-bound

Infinite postponement:
Round Robin: does not occur
FIFO: does not occur
MFQ: may occur for CPU-bound processes
SJF: may occur for processes with long estimated running times
SRJN: may occur for processes with long estimated running times
Overhead:
Round Robin: low
FIFO: the lowest
MFQ: can be high (complex data structures and processing routines)
SJF: can be high (routine to find the shortest job at each reschedule)
SRJN: can be high (routine to find the minimum remaining time at each reschedule)

CPU-bound processes:
Round Robin: no distinction between CPU-bound and I/O-bound
FIFO: no distinction between CPU-bound and I/O-bound
MFQ: get a low priority if I/O-bound processes are present
SJF: no distinction between CPU-bound and I/O-bound
SRJN: no distinction between CPU-bound and I/O-bound
Memory virtualization
A process runs in its own address space; multiple threads may share an address space.
Process/thread management and memory management are important functions of an operating system.
Virtual memory:
Allows programs to run on systems with different sizes of real memory.
It may lead to a performance penalty.
A virtual address space has a fixed size determined by the number of bits in an address; e.g., if n = 32 then size = 2^32 ≈ 4 GB.
Swap area: image on a disk of the virtual memory of a process.
Page: a group of consecutive virtual addresses, e.g., of size 4 KB, brought from the swap area to the main memory at once. Not all pages of a process are in real memory at the same time.
Caching:
Blocks of virtual memory addresses are brought into faster memory. Leads to performance improvement.
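The grouping of consecutive addresses into pages can be illustrated by splitting a virtual address into a page number and an offset, assuming the 4 KB page size mentioned above (the names are illustrative):

```python
PAGE_SIZE = 4096  # 4 KB pages, as in the slide

def split_address(vaddr):
    """Split a virtual address into (page number, offset within the page)."""
    return vaddr // PAGE_SIZE, vaddr % PAGE_SIZE

# Address 0x12345 (74565) falls in page 18 at offset 837,
# since 18 * 4096 = 73728 and 74565 - 73728 = 837.
page, offset = split_address(0x12345)
```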
Locality of reference
Locality of reference: when a memory location is referenced, the next references are likely to be in close proximity.
Programs include sequential code.
Data structures often include data that are processed together, e.g., an array.
Spatial and temporal locality of reference.
Thus it makes sense to group a set of consecutive addresses into units and transfer such units at once between a slower but larger storage and a faster but smaller one. The size of the units differs:
Virtual memory page size: 2 – 4 KB
Cache: 16 – 256 words
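The payoff of locality can be made concrete by counting how many fixed-size units a stream of references touches; sequential references hit few units, scattered references hit many. The unit size and address streams below are illustrative:

```python
def units_touched(addresses, unit_size=4):
    """Count how many distinct fixed-size units the given addresses fall into."""
    return len({a // unit_size for a in addresses})

# Eight sequential addresses fit in 2 units; four scattered addresses
# touch 4 units, so sequential access needs far fewer transfers.
sequential = units_touched(range(8))        # 2
scattered = units_touched([0, 40, 80, 120]) # 4
```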
Virtual memory: several strategies
Paging
Segmentation
Paging + segmentation
At the time a process/thread is created, the system creates a page table for it; an entry in the page table contains:
The location in the swap area of the page
The address in main memory where the page resides, if the page has been brought in from the disk
Other information, e.g., a dirty bit
Page fault: a process/thread references an address in a page which is not in main memory.
On-demand paging: a page is brought into main memory from the swap area on the disk when the process/thread references an address in that page.
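The page-table entries and demand paging described above can be sketched as follows; the class, field names, and the stand-in "swap area" are all illustrative, not an actual OS data structure:

```python
class PageTable:
    """Minimal demand-paging sketch: pages are loaded from the swap
    area only when first referenced (i.e., on a page fault)."""

    def __init__(self, load_from_swap):
        self.entries = {}              # page number -> {"frame": int, "dirty": bool}
        self.faults = 0
        self.load_from_swap = load_from_swap

    def access(self, page):
        """Return the frame holding `page`, loading it on a page fault."""
        if page not in self.entries:   # page fault: page not in main memory
            self.faults += 1
            frame = self.load_from_swap(page)
            self.entries[page] = {"frame": frame, "dirty": False}
        return self.entries[page]["frame"]

# A stand-in "swap area" that hands out free frames sequentially.
next_frame = iter(range(100))
pt = PageTable(lambda page: next(next_frame))
frames = [pt.access(p) for p in [0, 1, 0, 2, 1]]  # faults only on 0, 1, 2
```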
Dynamic address translation