7/31/2019 Tutorial Week09
CSC468/2204 TUTORIAL SOLUTIONS WEEK 9
by Tristan Miller -- Updated 7 November 2000
Solutions to exercises from the textbook (Silberschatz et al., 2000):
6.1 A CPU scheduling algorithm determines an order for the execution of its
scheduled processes. Given n processes to be scheduled on one processor, how many
possible different schedules are there? Give a formula in terms of n.
For 1 process, there is only one possible ordering: (1). For 2 processes, there are
2 possible orderings: (1, 2), (2, 1). For 3 processes, there are 6 possible
orderings: (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1). For 4
processes, there are 24 possible orderings. In general, for n processes, there are
n! possible orderings.
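The count can be checked by brute force. Here is a minimal Python sketch (not part of the original solution) that enumerates all orderings for small n and compares against n!:

```python
from itertools import permutations
from math import factorial

# Enumerate every possible schedule for small n and check the n! formula.
for n in range(1, 5):
    schedules = list(permutations(range(1, n + 1)))
    assert len(schedules) == factorial(n)
    print(f"n = {n}: {len(schedules)} possible schedules")
```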
6.7 Consider the following preemptive priority-scheduling algorithm that is based
on dynamically changing priorities. Larger priority numbers imply higher priority.
When a process is waiting for the CPU (in the ready queue, but not running), its
priority changes at a rate α; when it is running, its priority changes at a rate β.
All processes are given a priority of 0 when they enter the ready queue. The
parameters α and β can be set to give many different scheduling algorithms.
a. What is the algorithm that results from β > α > 0?
b. What is the algorithm that results from α < β < 0?
a. Since processes start out with priority 0, any processes already in the
system (either running or waiting) are favoured. Therefore, new
processes go to the back of the queue. Running processes' priorities
increase at a greater rate than those of waiting processes, so they will
never be preempted. Thus, we have FCFS.
b. This time, process priority decreases the longer a process has been in
the system (either running or waiting). New processes start out at 0,
and thus have a higher priority than processes that have already been
in the system. The scheduler will preempt in favour of newer
processes. In the absence of new arrivals, the scheduler will pick from
the ready queue the process that has been waiting the shortest time,
since priority decreases faster while waiting than while running. We
therefore have a LIFO (last in, first out) scheduling algorithm.
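Both behaviours can be checked with a small discrete-time simulation. This is an illustrative sketch, not the textbook's code; the tick-based model, the process names, the burst lengths, and the tie-breaking rule are all assumptions:

```python
def simulate(arrivals, burst, alpha, beta, ticks=100):
    """Discrete-time sketch of the dynamic-priority scheduler.
    alpha: priority change rate while waiting; beta: while running.
    Larger priority numbers imply higher priority.
    Returns the order in which processes finish."""
    procs = {pid: {"arrive": t, "left": burst, "prio": 0.0}
             for pid, t in arrivals.items()}
    finished = []
    for t in range(ticks):
        ready = [p for p, s in procs.items()
                 if s["arrive"] <= t and s["left"] > 0]
        if not ready:
            continue
        # Highest priority runs; ties broken in favour of the earlier arrival.
        running = max(ready, key=lambda p: (procs[p]["prio"],
                                            -procs[p]["arrive"]))
        for p in ready:
            procs[p]["prio"] += beta if p == running else alpha
        procs[running]["left"] -= 1
        if procs[running]["left"] == 0:
            finished.append(running)
    return finished

arrivals = {"P1": 0, "P2": 1, "P3": 2}
print(simulate(arrivals, burst=3, alpha=1, beta=2))    # β > α > 0
print(simulate(arrivals, burst=3, alpha=-2, beta=-1))  # α < β < 0
```

With β > α > 0 the running process's priority pulls away from the waiters, so it is never preempted and processes finish in arrival order (FCFS); with α < β < 0 every new arrival starts at 0 and immediately outranks the negative priorities already in the system (LIFO).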
10.2 Assume that you have a page-reference string for a process with m frames
(initially all empty). The page-reference string has length p; n distinct page
numbers occur in it. Answer these questions for any page-replacement algorithm:
a. What is a lower bound on the number of page faults?
b. What is an upper bound on the number of page faults?
a. There are n distinct pages occurring in the reference string, each of which
must be loaded into memory at some point. Therefore the lower bound on the
number of page faults is n, irrespective of the memory size m.
b. The upper bound depends on the memory size, m. If m ≥ n, then the upper
bound will be n irrespective of p, since all the pages referenced in the string
can be loaded into memory simultaneously. If m = 1, then the upper bound is
1 when n = 1, and p when n > 1. Similarly, for other values of m:
p  n   Upper bound on page faults
       m=1  m=2  m=3  m=n
1  1    1    1    1    1
2  1    1    1    1    1
2  2    2    2    2    2
3  1    1    1    1    1
3  2    3    2    2    2
3  3    3    3    3    3
4  1    1    1    1    1
4  2    4    2    2    2
4  3    4    4    3    3
4  4    4    4    4    4
5  1    1    1    1    1
5  2    5    2    2    2
5  3    5    5    3    3
5  4    5    5    5    4
5  5    5    5    5    5
Thus it can be seen that the upper bound is n when m ≥ n, and p when m < n. If
you have trouble seeing this, it may help to sketch a few page tables by hand
for successive small values of m.
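The bounds can also be confirmed empirically. The sketch below is an illustration only: it assumes FIFO replacement and a made-up random reference string, but any demand-paging algorithm obeys the same bounds:

```python
import random

def count_faults(refs, m):
    """Count page faults under FIFO replacement with m frames."""
    frames, faults = [], 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == m:
                frames.pop(0)          # evict the oldest resident page
            frames.append(page)
    return faults

random.seed(1)
p, n = 12, 4
refs = [random.randrange(n) for _ in range(p)]
d = len(set(refs))                     # distinct pages that actually occur
for m in range(1, n + 1):
    f = count_faults(refs, m)
    assert d <= f <= (d if m >= d else p)   # bounds from parts a and b
    print(f"m = {m}: {f} faults")
```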
10.9 Consider a demand-paging system with the following time-measured
utilizations:
CPU utilization: 20%
Paging disk: 97.7%
Other I/O devices: 5%
For each of the following, say whether it will (or is likely to) improve CPU
utilization. Explain your answers.
a. install a faster CPU
b. install a bigger paging disk
c. increase the degree of multiprogramming
d. decrease the degree of multiprogramming
e. install more main memory
f. install a faster hard disk, or multiple controllers with multiple hard disks
g. add prepaging to the page-fetch algorithms
h. increase the page size
a. This will probably not increase CPU utilization; on the contrary, it will
probably decrease it. The given data suggest that most of the time the CPU is
idling, waiting for the paging disk to complete its task. Since the CPU is
dependent on the paging disk, installing a faster CPU will only cause
processes to execute more quickly and place a greater demand on the paging
disk.
b. This will probably not increase CPU utilization. A frequently-accessed
paging disk, as in this example, does not imply that the disk is too small; it
implies that memory required for the number of running processes is
inadequate. A larger disk might even decrease CPU utilization, since more
time will be spent performing seeks to the hard drive.
c. Increasing the degree of multiprogramming can raise CPU utilization in
general, but it will not do so here. A frequently-accessed paging disk implies
that there is not enough physical memory for the number of running processes, so
by adding more processes, we are only decreasing the amount of memory
available to each process and thus increasing the paging disk usage. This
means more CPU time spent waiting for the paging disk and less time spent
performing useful work.
d. This will increase CPU utilization. The paging disk being saturated with
requests implies that RAM is inadequate for the number of processes currently
running. Thus, if we were to decrease the number of processes running, more
physical memory would be available to each process, so each process would be
less likely to page fault. Fewer page faults mean fewer paging disk accesses,
which means less time spent by the CPU waiting for disk I/O.
e. This will definitely increase CPU utilization. Since more memory
is available to each process, more pages for each process can be in RAM
simultaneously, and there is a lower probability that any given process will
page fault. As stated before, fewer page faults mean fewer paging disk
accesses, which means less time spent by the CPU waiting for the paging disk.
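Parts d and e rest on the same claim: more frames per process means fewer faults. A rough LRU simulation (the workload, total frame count, and process counts are hypothetical) illustrates it; because LRU is a stack algorithm, its fault count never increases as frames are added:

```python
import random

def lru_faults(refs, nframes):
    """Count page faults under LRU replacement with nframes frames."""
    mem, faults = [], 0
    for page in refs:
        if page in mem:
            mem.remove(page)           # refresh: move to most-recent position
        else:
            faults += 1
            if len(mem) == nframes:
                mem.pop(0)             # least recently used sits at the front
        mem.append(page)
    return faults

random.seed(2)
TOTAL_FRAMES = 24                      # fixed physical memory, shared equally
workload = [random.randrange(8) for _ in range(200)]  # one process's references
for nprocs in (8, 4, 2):
    per_proc = TOTAL_FRAMES // nprocs
    print(f"{nprocs} processes ({per_proc} frames each): "
          f"{lru_faults(workload, per_proc)} faults per process")
```

Halving the number of processes doubles each process's share of the frames, and the per-process fault count drops accordingly.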
f. This will improve CPU utilization, though probably not as much as the
previous two options. Increasing the paging disk speed will result in less time
spent by the CPU waiting for the paging disk to complete its tasks, so it can
spend more time executing processes.
g. Depending on the processes, this may improve CPU utilization. If a large
proportion of the page faults is being caused by a process that pages in several
frequently-used pages during every one of its time slices, then adding a
prepaging algorithm will cut down on some disk latency time by reading in
most or all of the necessary pages at once, rather than waiting for the process
to page fault for each page. Fewer page faults mean less idle CPU time.
h. Again, this depends on the processes, but it may improve CPU utilization. If
processes are continually page-faulting on logically adjacent pages, then
increasing the page size will result in fewer page faults, and therefore less disk
access time and less CPU idling. However, if this is not the case, then
increasing the page size will only worsen the problem, since more time will be
spent by the paging disk reading in data or code that will not be used anyway.
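A toy simulation (illustrative only; the memory size and address traces are made up) shows both outcomes: larger pages help a sequential trace, but hurt a scattered one once the fixed amount of RAM holds too few frames:

```python
import random

def fifo_faults(addrs, page_size, mem_bytes):
    """FIFO page-fault count for a byte-address trace under a fixed RAM budget."""
    nframes = max(1, mem_bytes // page_size)
    frames, faults = [], 0
    for a in addrs:
        page = a // page_size
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict the oldest resident page
            frames.append(page)
    return faults

random.seed(0)
MEM = 4096                             # fixed physical memory in bytes
sequential = list(range(0, 16384, 4))  # logically adjacent word accesses
scattered = [random.choice(range(0, 65536, 4096)) for _ in range(2000)]
for size in (256, 1024, 4096):
    print(f"page size {size:4}: sequential {fifo_faults(sequential, size, MEM):3} "
          f"faults, scattered {fifo_faults(scattered, size, MEM):4} faults")
```

With a sequential trace the fault count is simply one per page, so it falls as the page size grows; the scattered trace touches the same sixteen distant locations, which all stay resident with small pages but thrash once larger pages leave only a few frames.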