CS 423 – Operating Systems Design
Lecture 14 – Page Replacement Policies
Klara Nahrstedt, Fall 2011
Based on slides by YY Zhou and Andrew S. Tanenbaum


Fetch Strategies
When should a page be brought into primary (main) memory from secondary (disk) storage?
Placement Strategies
When a page is brought into primary storage, where is it to be put?
Replacement Strategies
Which page now in primary storage should be removed when some other page is to be brought in and there is not enough room?
Review: Page Cleaning
Paging daemon (background process): if there are too few free pages, the paging daemon begins selecting pages to evict using a page replacement algorithm.
1. A page fault occurs and there is no free page.
2. Find a free page frame:
   1. If there is a free page frame, use it.
   2. Otherwise, select a page frame using the page replacement algorithm.
3. Write the selected page to the disk and update any necessary page table information.
4. Restart the user process.
(It is necessary to be careful of synchronization problems. For example, page faults may occur for pages being paged out.)
Dirty pages require writing; clean pages don't.
Hardware has a dirty bit for each page frame indicating whether the page has been updated or not.
Where do you write? To "swap space".
Goal: kick out the page that's least useful.
Problem: how do you determine locality?
Heuristic: temporal locality exists, so kick out pages that aren't likely to be used again.
Eviction – throwing something out of memory (this does not mean there will be no page fault).
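As a rough illustration of the steps above, here is a minimal sketch of a fault handler in Python. The Page class, the select_victim policy hook, and the swap dictionary are invented for illustration; they are not the actual kernel data structures.

class Page:
    def __init__(self, number):
        self.number = number
        self.referenced = False   # R bit, used by the replacement policies later in the lecture
        self.dirty = False        # D bit: set when the page has been written

def handle_page_fault(page, free_frames, frame_table, select_victim, swap):
    """frame_table maps frame number -> Page; select_victim is the replacement policy."""
    # 2. Find a free page frame; if none, let the replacement algorithm pick a victim.
    if free_frames:
        frame = free_frames.pop()
    else:
        frame = select_victim(frame_table)
        victim = frame_table.pop(frame)
        # 3. Dirty pages must be written to swap space; clean pages can simply be dropped.
        if victim.dirty:
            swap[victim.number] = victim
    # Update page table information for the newly resident page.
    frame_table[frame] = page
    # 4. The kernel would now restart the faulting user instruction.
    return frame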
Description (Belady’s algorithm):
Assume that each page can be labeled with the number of instructions that will be executed before that page is first referenced, i.e., we would know the future reference string for a program.
Then the optimal page replacement algorithm would choose the page with the highest label to be removed from memory.
This algorithm provides a basis for comparison with other schemes.
Impractical because it needs future references
If future references are known, we should not use demand paging; we should use prepaging to allow paging to be overlapped with computation.
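Because OPT needs the full future reference string, it can only be simulated offline. Here is a small Python sketch of such a simulation; the reference string is just a common textbook example, not one from this lecture.

def opt_faults(refs, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use lies farthest in the future
        # (or that is never used again).
        def next_use(p):
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

print(opt_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))  # 9 faults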
Optimal Example: 12 references (reference-string table not reproduced).
LRU: how do we track "recency"?
Keep a timestamp of last use per page and, on eviction, search for the smallest time.
Or keep a stack of pages and push the referenced page on top of the stack (for every memory access).
Both approaches require large processing overhead, more space, and HW support.
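The stack variant is easy to illustrate in software; in Python an OrderedDict can play the role of the stack. This is only a simulation of exact LRU, not how hardware support would implement it, and the per-access bookkeeping is exactly the overhead noted above.

from collections import OrderedDict

class LRU:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.frames = OrderedDict()   # keys kept in recency order, LRU at the front
        self.faults = 0

    def access(self, page):
        if page in self.frames:
            self.frames.move_to_end(page)        # "push it on top of stack"
        else:
            self.faults += 1
            if len(self.frames) == self.num_frames:
                self.frames.popitem(last=False)  # evict the least recently used page
            self.frames[page] = True

lru = LRU(3)
for p in [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]:
    lru.access(p)
print(lru.faults)  # 9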
NRU: An LRU Approximation
NRU: evict a page that is NOT recently used.
NRU implementation: simpler than LRU
Two status bits per page:
R: Referenced (set when the page is read or written)
D: Dirty (set only when the page is written)
Periodically, the R bit is cleared to track recency.
A page in the lowest non-empty class is replaced:
1. (0,0) neither referenced nor dirtied
2. (0,1) not referenced (recently) but dirtied
3. (1,0) referenced but clean
4. (1,1) referenced and dirtied
If there is a tie within a class, use random or FIFO.
Any impact on the system reliability? (e.g., bdflush/pdflush in Linux)
Still requires large processing overhead and HW support.
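A sketch of the NRU selection step, reusing the illustrative Page objects from the earlier fault-handler sketch (the attribute names referenced/dirty are my own, not hardware register names). Class numbers 0-3 here correspond to classes 1-4 on the slide.

import random

def nru_select(pages):
    classes = {0: [], 1: [], 2: [], 3: []}
    for page in pages:
        cls = 2 * int(page.referenced) + int(page.dirty)   # (R,D) -> class 0..3
        classes[cls].append(page)
    for cls in range(4):
        if classes[cls]:
            return random.choice(classes[cls])             # ties broken at random

def clear_reference_bits(pages):
    # Run periodically (e.g., on a clock tick) so R reflects recent use only.
    for page in pages:
        page.referenced = False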
Second Chance
Pages are kept in FIFO order using a circular list.
Choose “victim” to evict
Select head of FIFO
If page has R bit set, reset bit and select next page in FIFO list.
Keep processing until you reach a page with a zero reference bit, and page that one out.
Does this algorithm always terminate?
(Example: the FIFO holds A* B C* D*, where * marks a set R bit; the head page A has its R bit cleared and is treated like a newly loaded page.)
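The answer to the termination question is yes: every time the head page is skipped its R bit is cleared, so in the worst case (all bits set) one full sweep degenerates into plain FIFO. A minimal sketch, assuming a deque of the illustrative Page objects introduced earlier:

from collections import deque

def second_chance_select(fifo):
    """fifo is a deque of Page objects in load order (oldest at the left)."""
    while True:
        page = fifo.popleft()
        if page.referenced:
            page.referenced = False   # give it a second chance
            fifo.append(page)         # back of the list, like a newly loaded page
        else:
            return page               # oldest page not referenced recently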
Clock
The clock replacement algorithm addresses the inefficiency of second chance (i.e., constantly moving pages around on its list).
When a page fault occurs, the page pointed by the clock hand is inspected.
If R = 0, the page is evicted
If R = 1, it’s cleared and the hand is advanced.
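A sketch of one victim selection with the clock hand; frames is a plain list of the illustrative Page objects, and only the hand index moves, which is exactly what avoids the list reshuffling of second chance.

def clock_select(frames, hand):
    """frames is a fixed list of Page objects; hand is the current index.
    Returns (victim_index, new_hand)."""
    while True:
        page = frames[hand]
        if page.referenced:
            page.referenced = False                   # clear R and advance the hand
            hand = (hand + 1) % len(frames)
        else:
            return hand, (hand + 1) % len(frames)     # evict the page under the hand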
Page Fault Rate Curve
As the number of page frames allocated to a process's virtual memory space decreases, the page fault rate increases.
The working set model assumes locality.
The principle of locality states that a program clusters its access to data and text temporally.
As the number of page frames increases above some threshold, the page fault rate will drop dramatically.
Thrashing
Thrashing occurs when the page frames available are not large enough to contain the locality of the process (or working set).
The processes start faulting heavily.
Pages that are read in are used and immediately paged out.
Thrashing and CPU Utilization
As the page fault rate goes up, processes get suspended on page out queues for the disk.
The system may try to optimize performance by starting new jobs.
Starting new jobs will reduce the number of page frames available to each process, increasing the page fault requests.
System throughput plunges.
As long as a process's working set fits in memory, there will not be many page faults.
PFF (Page fault frequency) algorithm
If the page fault rate increases beyond the assumed knee of the curve, then increase the number of page frames available to the process.
If the page fault rate decreases below the assumed knee of the curve, then decrease the number of page frames available to the process.
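A sketch of the PFF control decision in Python; the threshold values and names are made up for illustration, and a real system would measure the fault rate over a sliding window of time.

UPPER = 0.05   # faults per reference above the assumed knee: give the process more frames
LOWER = 0.01   # faults per reference below the knee: frames can be taken away

def adjust_allocation(frames_allocated, faults, references):
    rate = faults / max(references, 1)
    if rate > UPPER:
        return frames_allocated + 1
    if rate < LOWER and frames_allocated > 1:
        return frames_allocated - 1
    return frames_allocated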
Try to keep track of each process’ working set
If # free physical pages > working set of some suspended process, activate the process and map in all its working set
If working set size of some process increases and no page frame is free, suspend the process and release all its pages
(Definition) Working set is the set of pages used in the k most recent memory references
(Approximation) set of pages referenced during the past t seconds of virtual CPU time
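Both definitions are easy to state in code. A small sketch, where k follows the slide's k and tau stands for the slide's t seconds of virtual CPU time:

def working_set(reference_string, k):
    # Distinct pages touched in the k most recent memory references.
    return set(reference_string[-k:])

def working_set_by_time(accesses, now, tau):
    """accesses: list of (virtual_time, page); keep pages used in (now - tau, now]."""
    return {page for t, page in accesses if now - tau < t <= now}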
WS Page Replacement Example
Scan memory approximately every x references, remove pages that have not been referenced, and reset the reference bits.
WSClock Algorithm
Circular list as in the clock algorithm.
Initially, list is empty.
As pages are loaded into memory, pages are added to the list
Each entry contains time of last use as well as reference and dirty bits.
Algorithm
At each page fault, the page pointed to by the clock hand is examined.
If R-bit = 1, keep and advance to the next page.
If R-bit is 0, check if it is in working set window (i.e., if current time - time of last use is less than the working set window size time).
If page is not in the working set and the page is clean, claim it (no disk I/O).
If the page is not in the working set and the page is dirty, request a write to disk.
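A sketch of one WSClock sweep following the four rules above; tau is the working-set window, write_dirty_page stands in for scheduling an asynchronous disk write, and what to do when a full sweep finds no clean old page is left out (the names are illustrative).

def wsclock_step(entries, hand, current_time, tau, write_dirty_page):
    """entries: circular list of (page, last_use) tuples; returns (victim_index or None, new_hand)."""
    for _ in range(len(entries)):
        page, last_use = entries[hand]
        if page.referenced:
            page.referenced = False                   # recently used: keep and move on
        elif current_time - last_use < tau:
            pass                                      # inside the working-set window: keep
        elif page.dirty:
            write_dirty_page(page)                    # old but dirty: schedule the write
        else:
            return hand, (hand + 1) % len(entries)    # old and clean: claim it, no disk I/O
        hand = (hand + 1) % len(entries)
    return None, hand                                 # no clean old page found in one sweep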
Local page replacement: allocate every process a fixed fraction of the memory (fairness).
Global page replacement: choose replacement frames from the whole memory, so processes share frames dynamically (efficient).
In general, global algorithms work better as the working set size varies.
Allocate an equal number of frames per job.
But jobs use memory unequally, high-priority jobs have the same number of pages as low-priority jobs, and the degree of multiprogramming might vary.
Challenge: how do you determine job size – by run command parameters or dynamically?
PFF (Page fault frequency) algorithm: measure the page fault rate of each process to increase or decrease that process's page allocation.
Conclusion
The Principle of Optimality
Replace the page whose next use is farthest in the future
Not implementable but useful for benchmarking
FIFO - First in First Out
Might throw out important pages
LRU - Least Recently Used
NRU - Not Recently Used
Second Chance
Clock
Realistic