Lecture 4: Memory Management II
MITM 205 Advanced Operating System Concepts
Mr. ALVIN R. MALICDEM, Professor




Virtual Memory
- Separation of user logical memory from physical memory.
- Only part of the program needs to be in memory for execution.
- Logical address space can therefore be much larger than physical address space.
- Allows address spaces to be shared by several processes.
- Allows for more efficient process creation.


Virtual Memory
- Virtual memory can be implemented via:
  - Demand paging
  - Demand segmentation


The Concept of Paging


Demand Paging
- Allows pages that are actively referenced to be loaded into memory; the remaining pages stay on disk.
- Bring a page into memory only when it is needed.
- Page is needed → reference it:
  - invalid reference → abort
  - not in memory → bring the page into memory


Demand Paging
- Without it, we assume that a process must load its entire address space before running (e.g., 0x0 to 0xffffffff), so logical memory must be smaller than physical memory.
- Observation: 90% of execution time is spent in 10% of the code, so loading the whole address space is a big waste.
- Demand paging (virtual memory) resolves this problem.

Advantages of Demand Paging
- Truly decouples physical memory and logical memory.
- Provides the illusion of infinite physical memory.
- Each program takes less physical memory.
- Less I/O when swapping processes.
- Faster response.
- More users.


Demand Paging: How Does It Work?
- The process address space (image) is always kept in swap space (on disk).
- A page table entry now has three states:
  - Valid, holding the page's physical address.
  - Invalid, holding the page's address in swap space (a valid page that is not currently in memory).
  - Invalid (a truly invalid page).
- When a memory access is attempted:
  - Valid → use the physical address; normal access.
  - Invalid with a swap-space address → page-fault interrupt, which brings the page in.
  - Invalid → error.
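A minimal sketch of how such a three-state software page table entry might be represented; the names (pte_state, frame_number, swap_slot) are illustrative, not taken from any real kernel.

```c
#include <stdint.h>

/* Illustrative three-state page table entry (hypothetical layout). */
enum pte_state {
    PTE_VALID,     /* page is resident: frame_number holds its physical frame */
    PTE_IN_SWAP,   /* valid page, but currently stored at swap_slot on disk   */
    PTE_INVALID    /* truly invalid: not part of the address space            */
};

struct pte {
    enum pte_state state;
    uint32_t frame_number;  /* used only when state == PTE_VALID   */
    uint32_t swap_slot;     /* used only when state == PTE_IN_SWAP */
};
```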


Page Fault
- An interrupt (or exception) raised by the hardware and delivered to the software when a program accesses a page that is mapped in its address space but not loaded in physical memory.
- The first reference to such a page traps to the operating system: a page fault.


Address Translation


Handling a Page Fault
1. The operating system looks at another table to decide:
   - Invalid reference → abort.
   - Just not in memory → continue below.
2. Get an empty frame.
3. Swap the page into the frame.
4. Reset the tables.
5. Set the valid bit to v.
6. Restart the instruction that caused the page fault.
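As a rough sketch, these six steps might look like the C below. It reuses the three-state pte from the earlier sketch, and every helper function here is a hypothetical placeholder for real OS machinery.

```c
#include <stdint.h>

/* Hypothetical helpers standing in for real OS machinery. */
void     abort_process(void);
uint32_t get_free_frame(void);
void     read_page_from_swap(uint32_t swap_slot, uint32_t frame);
void     restart_faulting_instruction(void);

/* Sketch of a page-fault handler following the six steps above. */
void handle_page_fault(struct pte *page_table, uint32_t vpn) {
    struct pte *e = &page_table[vpn];

    /* 1. Decide: invalid reference, or just not in memory? */
    if (e->state == PTE_INVALID) {
        abort_process();                       /* invalid reference -> abort */
        return;
    }

    uint32_t frame = get_free_frame();         /* 2. get an empty frame      */
    read_page_from_swap(e->swap_slot, frame);  /* 3. swap the page in        */

    e->frame_number = frame;                   /* 4. reset the tables        */
    e->state = PTE_VALID;                      /* 5. mark the entry valid    */

    restart_faulting_instruction();            /* 6. restart the instruction */
}
```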


Steps in handling a page fault


The Page Table
A page table entry contains:
- A valid bit (also called the present bit), indicating whether the page is assigned to a frame or not.
- A modified or dirty bit, indicating whether the page has been modified.
- A used bit, indicating whether the page has been accessed recently.
- Access permissions, indicating whether the page is read-only or read-write.
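One plausible way to pack these fields into a single word, shown only to make the bit layout concrete; real page table entry layouts are architecture-specific.

```c
#include <stdint.h>

/* Illustrative packed page table entry; field widths are arbitrary. */
struct page_table_entry {
    uint32_t frame_number : 20;  /* physical frame this page maps to       */
    uint32_t valid        : 1;   /* present bit: page assigned to a frame  */
    uint32_t dirty        : 1;   /* modified bit: written since it was loaded */
    uint32_t used         : 1;   /* reference bit: accessed recently       */
    uint32_t writable     : 1;   /* access permission: 0 = read-only       */
    uint32_t reserved     : 8;   /* padding / unused                       */
};
```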


A typical page table entry


Translation Lookaside Buffer (TLB)
- A special cache that keeps information about recently used mappings.
- Based on the assumption of locality: if we have used the mapping of page X recently, there is a high probability that we will use this mapping many more times.
- Access to pages with a cached mapping can be done immediately.
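A toy software model of a TLB lookup, assuming a small fully associative array; a real TLB is a hardware structure, so this only illustrates the idea.

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 16   /* assumed size of a small, fully associative TLB */

struct tlb_entry {
    bool     valid;
    uint32_t vpn;    /* virtual page number   */
    uint32_t pfn;    /* physical frame number */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Return true on a TLB hit and fill *pfn; on a miss the page table
 * must be walked and the translation inserted into the TLB. */
bool tlb_lookup(uint32_t vpn, uint32_t *pfn) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pfn = tlb[i].pfn;   /* hit: translation available immediately */
            return true;
        }
    }
    return false;                /* miss: fall back to the page table walk */
}
```

On a miss, the walked translation would typically be inserted into the array, evicting some existing entry.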


Translation Lookaside Buffer (TLB): a TLB to speed up paging


Page Replacement
- The main operating system activity with regard to paging is deciding which pages to evict in order to make space for new pages read from disk.
- Page replacement: find some page in memory that is not really in use and swap it out.
- Algorithm performance: we want an algorithm that results in the minimum number of page faults.

Page Replacement
- Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement.
- Use the modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written back to disk.
- Page replacement completes the separation between logical memory and physical memory: a large virtual memory can be provided on a smaller physical memory.


The Need for Page Replacement

Basic Page Replacement
1. Find the location of the desired page on disk.
2. Find a free frame:
   - If there is a free frame, use it.
   - If there is no free frame, use a page replacement algorithm to select a victim frame (a sketch of this step follows below).
3. Bring the desired page into the (newly) free frame; update the page and frame tables.
4. Restart the process.
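Step 2 might be sketched as follows; free_list_pop, choose_victim, and the other helpers are hypothetical stand-ins for real memory-management code.

```c
#include <stdint.h>

/* Hypothetical helpers standing in for real memory-management code. */
int      free_list_pop(uint32_t *frame);       /* 0 on success, -1 if empty */
uint32_t choose_victim(void);                  /* page replacement policy   */
int      frame_is_dirty(uint32_t frame);
void     write_page_to_swap(uint32_t frame);
void     invalidate_pte_for_frame(uint32_t frame);

/* Step 2 of basic page replacement: return a usable frame. */
uint32_t get_free_frame(void) {
    uint32_t frame;
    if (free_list_pop(&frame) == 0)
        return frame;                     /* a free frame exists: use it    */

    frame = choose_victim();              /* no free frame: pick a victim   */
    if (frame_is_dirty(frame))
        write_page_to_swap(frame);        /* only modified pages hit disk   */
    invalidate_pte_for_frame(frame);      /* victim's PTE now points to swap */
    return frame;
}
```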


Page Replacement


Page Replacement Policies
Random replacement: replace a random page
  + Easy to implement in hardware (e.g., TLB)
  - May toss out useful pages
First in, first out (FIFO): toss out the oldest page
  + Fair for all pages
  - May toss out pages that are heavily used
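A FIFO victim chooser can be little more than a circular pointer over the frames, as in this sketch; NUM_FRAMES and the assumption that frames are initially filled in order are illustrative.

```c
#include <stdint.h>

#define NUM_FRAMES 64   /* assumed number of physical frames */

/* FIFO replacement: evict frames in the order they were filled. */
static uint32_t next_victim = 0;

uint32_t choose_victim_fifo(void) {
    uint32_t victim = next_victim;
    next_victim = (next_victim + 1) % NUM_FRAMES;  /* oldest frame goes next */
    return victim;
}
```

Because the pointer ignores how often a frame is referenced, heavily used pages are evicted just as readily as idle ones, which is exactly the drawback noted above.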


Page Replacement Policies
Least recently used (LRU): replace the page that has not been used for the longest time
  + Good if past use predicts future use
  - Tricky to implement efficiently
Least frequently used (LFU): replace the page that is used least often (tracks a usage count per page)
  + Good if past use predicts future use
  - Difficult to replace pages with high counts
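One simple (though inefficient) way to realize LRU in software is to timestamp every reference and evict the smallest timestamp, as sketched below; a real system would approximate this with hardware reference bits.

```c
#include <stdint.h>

#define NUM_FRAMES 64                    /* assumed number of physical frames */

static uint64_t last_used[NUM_FRAMES];   /* logical time of last reference    */
static uint64_t clock_ticks = 0;         /* incremented on every access       */

/* Record a reference to a frame (called on every access in this model). */
void touch_frame(uint32_t frame) {
    last_used[frame] = ++clock_ticks;
}

/* LRU replacement: evict the frame that has gone unused the longest. */
uint32_t choose_victim_lru(void) {
    uint32_t victim = 0;
    for (uint32_t f = 1; f < NUM_FRAMES; f++)
        if (last_used[f] < last_used[victim])
            victim = f;
    return victim;
}
```

The per-access bookkeeping and the linear scan over all frames are precisely why the slide calls LRU tricky to implement efficiently.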

Example: see examples.


Thrashing
- If a process does not have “enough” pages, the page-fault rate is very high. This leads to:
  - low CPU utilization
  - the operating system thinks it needs to increase the degree of multiprogramming
  - another process is added to the system
- Thrashing: a process is busy swapping pages in and out.


Thrashing
- All algorithms may suffer from thrashing.
- Occurs when memory is overcommitted: pages that are still needed are tossed out.
- Example:
  - A process needs 50 memory pages.
  - The machine has only 40 memory pages.
  - Pages must constantly be moved between memory and disk.

Thrashing Avoidance
- Programs should minimize their maximum memory requirement at any given time; e.g., a matrix multiplication can be broken into sub-matrix multiplications (see the sketch below).
- The OS figures out the memory needed for each process and runs only the computations that can fit in RAM.
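As an illustration of breaking a matrix multiplication into sub-matrix multiplications, the classic blocked (tiled) loop nest below touches only one BLOCK x BLOCK tile of each matrix at a time, shrinking the working set; the sizes are arbitrary and N is assumed to be a multiple of BLOCK.

```c
#define N     1024   /* matrix dimension, chosen arbitrarily              */
#define BLOCK 64     /* tile size: three BLOCK x BLOCK tiles are "hot" at once */

/* Blocked matrix multiply: C += A * B, one tile at a time. */
void matmul_blocked(const double A[N][N], const double B[N][N], double C[N][N]) {
    for (int ii = 0; ii < N; ii += BLOCK)
        for (int jj = 0; jj < N; jj += BLOCK)
            for (int kk = 0; kk < N; kk += BLOCK)
                /* multiply the (ii,kk) tile of A by the (kk,jj) tile of B */
                for (int i = ii; i < ii + BLOCK; i++)
                    for (int j = jj; j < jj + BLOCK; j++)
                        for (int k = kk; k < kk + BLOCK; k++)
                            C[i][j] += A[i][k] * B[k][j];
}
```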


Working Set
- The most influential concept with regard to paging.
- A dynamically changing set of pages: at any given moment it includes the pages accessed in the last Δ instructions; Δ is called the window size and is a parameter of the definition.
- If the window size is too small, the working set changes all the time.
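To make the definition concrete, here is a sketch that counts the distinct pages referenced in the last Δ entries of a page-reference trace; the trace format, MAX_PAGES, and the parameter names are all assumptions for illustration.

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_PAGES 256   /* assumed number of distinct page numbers in the trace */

/* Working-set size at position t of the trace with window delta:
 * the number of distinct pages among the last delta references. */
size_t working_set_size(const int trace[], size_t t, size_t delta) {
    bool in_set[MAX_PAGES] = { false };
    size_t count = 0;
    size_t start = (t + 1 >= delta) ? t + 1 - delta : 0;
    for (size_t i = start; i <= t; i++) {
        if (!in_set[trace[i]]) {   /* page numbers assumed in [0, MAX_PAGES) */
            in_set[trace[i]] = true;
            count++;
        }
    }
    return count;
}
```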


Working Set
- Δ ≡ working-set window ≡ a fixed number of page references. Example: 10,000 instructions.
- WSSi (working set size of process Pi) = total number of pages referenced in the most recent Δ (varies in time):
  - if Δ is too small, it will not encompass the entire locality
  - if Δ is too large, it will encompass several localities
  - if Δ = ∞, it will encompass the entire program
- D = Σ WSSi ≡ total demand for frames; if D > m (the available frames) → thrashing.
- Policy: if D > m, then suspend one of the processes.
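The policy in the last bullet could be expressed as the check below, where m and the per-process working-set sizes are whatever the OS currently measures; suspend_process is a hypothetical hook into the scheduler.

```c
#include <stddef.h>

void suspend_process(void);   /* hypothetical: pick a process and swap it out */

/* Suspend-on-overcommit policy: if total demand D exceeds the number of
 * available frames m, suspend one process to stop thrashing. */
void check_thrashing(const size_t wss[], size_t nprocs, size_t m) {
    size_t D = 0;
    for (size_t i = 0; i < nprocs; i++)
        D += wss[i];                 /* D = sum of WSSi over all processes */

    if (D > m)
        suspend_process();
}
```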


Working Set Model


Keeping Track of the Working Set
- Approximate with an interval timer + a reference bit.
- Example: Δ = 10,000
  - Timer interrupts after every 5,000 time units.
  - Keep 2 bits in memory for each page.
  - Whenever the timer interrupts, copy the reference bits and then set them all to 0.
  - If one of the bits in memory = 1 → the page is in the working set.
- Why is this not completely accurate?
- Improvement: 10 bits and an interrupt every 1,000 time units.
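A sketch of this interval-timer approximation: on each timer interrupt the hardware reference bits are shifted into a small per-page history and then cleared, and any page with a nonzero history counts as part of the working set. The slide's 2-bit (or 10-bit) history is modeled here with an 8-bit byte, and the hardware accessors are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_PAGES 1024   /* assumed number of pages being tracked */

static uint8_t history[NUM_PAGES];   /* in-memory bits kept per page */

/* Hypothetical accessors for the per-page hardware reference bit. */
bool read_reference_bit(uint32_t page);
void clear_reference_bit(uint32_t page);

/* Called on every timer interrupt: copy and clear the reference bits.
 * More history bits and shorter intervals improve the approximation. */
void timer_interrupt(void) {
    for (uint32_t p = 0; p < NUM_PAGES; p++) {
        history[p] = (uint8_t)((history[p] << 1) | read_reference_bit(p));
        clear_reference_bit(p);
    }
}

/* A page is considered in the working set if any recent history bit is set. */
bool in_working_set(uint32_t page) {
    return history[page] != 0;
}
```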

The Working Set Algorithm


Summary
- Virtual memory is one of the main abstractions supported by the operating system.
- It is up to the operating system to implement this abstraction using a combination of limited, fragmented primary memory and swap space on a slow disk.


Summary
- Memory management includes many themes related to resource management:
  - allocation of contiguous resources: algorithms include First Fit, Best Fit, and Next Fit
  - breaking the resources into small fixed-size pieces rather than trying to allocate contiguous stretches, based on a level of indirection that provides an associative mapping to the various pieces
  - the most popular page replacement algorithm is LRU


Summary
- Workload issues:
  - Workloads display a high degree of locality; without locality, the use of paging to disk to implement virtual memory would simply not work.
  - Locality allows the cost of expensive operations, such as access to disk, to be amortized across multiple memory accesses, and it ensures that these costly operations will be rare.


Summary
- Hardware support:
  - Memory management is probably where hardware support for the operating system is most prominent.
  - Hardware address translation as part of each access is a prerequisite for paging.
  - Support for used and modified bits for each page.


END OF LECTURE
MITM 205 Advanced Operating System Concepts
