
Memory Management


Page 1: Memory Management

1

Memory Management

Chapter 4

Basic memory management Swapping Virtual memory Page replacement algorithms

Page 2: Memory Management

2

Memory Management

• Ideally programmers want memory that is
  – large
  – fast
  – non-volatile

• Memory hierarchy
  – a small amount of fast, expensive memory – cache
  – some medium-speed, medium-price main memory
  – gigabytes of slow, cheap disk storage

• Memory manager handles the memory hierarchy

Page 3: Memory Management

3

Basic Memory Management: Monoprogramming without Swapping or Paging

Three simple ways of organizing memory with an operating system and one user process

Page 4: Memory Management

4

Multiprogramming with Fixed Partitions

• Fixed memory partitions
  – separate input queues for each partition
  – single input queue

• Disadvantage of separate queues: the queue for a small partition may be full while the queue for a large partition is empty

• Disadvantage of a single queue: space is wasted, and if the largest fitting process is always selected, small processes are deprived (see the sketch below)
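To make the single-queue trade-off concrete, here is a small sketch (illustrative job sizes and names, not taken from the slides) of the two selection policies: picking the first job that fits can leave most of a large partition unused, while always picking the largest fitting job keeps small jobs waiting.

```python
from collections import namedtuple

Job = namedtuple("Job", ["name", "size"])   # size in KB

def pick_first_fit(queue, partition_size):
    """First queued job that fits; a small job may occupy a large partition."""
    return next((j for j in queue if j.size <= partition_size), None)

def pick_largest_fit(queue, partition_size):
    """Largest queued job that fits; small jobs can be deprived indefinitely."""
    fitting = [j for j in queue if j.size <= partition_size]
    return max(fitting, key=lambda j: j.size, default=None)

queue = [Job("A", 50), Job("B", 180), Job("C", 30)]
print(pick_first_fit(queue, 200))    # Job A: 150 KB of the partition is wasted
print(pick_largest_fit(queue, 200))  # Job B: jobs A and C keep waiting
```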

Page 5: Memory Management

5

Relocation and Protection

• Relocation
  – Cannot be sure where a program will be loaded in memory
  – Suppose the 1st instruction of a program jumps to absolute address 200 within the executable file, and the program is loaded into partition 1 (previous slide) starting at 100K. The jump should actually go to 100K + 200.
  – This problem is called relocation

• Relocation solution
  – As the program is loaded into memory, modify its instructions accordingly
  – Address locations are added to the base address of the partition to map to the physical address

• Protection
  – Must keep a program out of other processes’ partitions
  – Relocation during loading may cause a protection problem

• Protection solution
  – Any address location larger than the limit value is an error
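A minimal sketch of the base-and-limit idea described above (the class and function names are assumptions for illustration, not the slides' notation): each program-relative address is checked against the partition's limit and then added to its base.

```python
class Partition:
    def __init__(self, base, limit):
        self.base = base      # physical start address of the partition
        self.limit = limit    # size of the partition in bytes

def translate(partition, virtual_address):
    """Relocate a program-relative address and enforce protection."""
    if virtual_address >= partition.limit:
        raise MemoryError("protection fault: address beyond partition limit")
    return partition.base + virtual_address

p1 = Partition(base=100 * 1024, limit=100 * 1024)   # partition 1 at 100K
print(translate(p1, 200))   # 102600: the jump to address 200 lands at 100K + 200
```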

Page 6: Memory Management

6

Swapping and Virtual Memory

• Sometimes there is not enough main memory to hold all the currently active processes.

• So excess processes must be kept on disk and brought in to run dynamically

• Two approaches
  – Swapping: bring in each process entirely, run it for a while, then put it back on disk
  – Virtual memory: bring in each process partially, not entirely

Page 7: Memory Management

7

Swapping (1)

• Memory allocation changes as processes come into and leave memory

• No fixed partitioning of memory as before: the number, location, and size of partitions vary dynamically

• Memory utilization is high, but allocation and de-allocation of memory become more complicated

Page 8: Memory Management

8

Swapping (2)

• Allocating space for a growing data segment
• Allocating space for a growing stack & data segment

Page 9: Memory Management

9

Memory Management for Dynamic Allocation (3)

• When memory is assigned dynamically, the OS must manage it. Two ways:
  – with bitmaps
  – with linked lists

Page 10: Memory Management

10

Memory Management with Bitmaps and Linked Lists (4)

• Bitmap
  – 1 = allocated block
  – 0 = free block

• Linked list
  – P = process allocation
  – H = hole
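The sketch below (illustrative layout, with an assumed 4 KB allocation unit) shows the same memory tracked both ways: a bitmap with one bit per allocation unit, and a list of process/hole segments with start and length.

```python
UNIT_KB = 4  # assumed allocation unit size in KB

# Bitmap: 1 = allocated unit, 0 = free unit
bitmap = [1, 1, 1, 0, 0, 1, 0, 0]

def find_free_run(bitmap, units_needed):
    """Index of the first run of free units long enough, or -1 if none."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == units_needed:
                return run_start
        else:
            run_len = 0
    return -1

# Linked list view of the same memory: (kind, start_unit, length_in_units)
segments = [("P", 0, 3), ("H", 3, 2), ("P", 5, 1), ("H", 6, 2)]

print(find_free_run(bitmap, 2))                            # 3
print([s for s in segments if s[0] == "H" and s[2] >= 2])  # holes big enough
```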

Page 11: Memory Management

11

Virtual Memory (1)

• The basic idea behind virtual memory is that the combined size of a program may exceed the amount of physical memory available for it.

• The OS keeps the part of the program currently in use in main memory, and the rest on the disk.

• For example, a 16MB program can run on a 4MB memory.

• Virtual memory also allows many programs in memory at once.

• Most virtual memory systems use a technique called paging

Page 12: Memory Management

12

Paging (2)

The relation between virtual addresses and physical memory addresses is given by the page table
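A small sketch of that mapping, assuming 4 KB pages and a made-up page table (none of the numbers come from the slides): the virtual address is split into a page number and an offset, the page number is translated to a frame number, and the offset is kept unchanged.

```python
PAGE_SIZE = 4096   # assumed 4 KB pages

# page_table[virtual_page] = physical frame number, or None if not in memory
page_table = {0: 2, 1: 1, 2: None, 3: 5}

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        raise LookupError("page fault: page %d is not in memory" % page)
    return frame * PAGE_SIZE + offset

print(hex(translate(4100)))   # page 1, offset 4 -> frame 1 -> 0x1004
```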

Page 13: Memory Management

13

TLBs – Translation Lookaside Buffers (3)

A TLB to speed up paging
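An illustrative sketch of the idea (the TLB size and its LRU eviction are assumptions, not details from the slides): recently used translations are answered directly from the small TLB, and only a miss walks the page table.

```python
from collections import OrderedDict

TLB_SIZE = 4
tlb = OrderedDict()          # virtual page -> physical frame, LRU-ordered

def lookup(page, page_table):
    if page in tlb:                      # TLB hit: no page table access needed
        tlb.move_to_end(page)
        return tlb[page]
    frame = page_table[page]             # TLB miss: walk the page table
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)          # evict the least recently used entry
    tlb[page] = frame
    return frame

print(lookup(3, {3: 7}))                 # miss, then cached
print(lookup(3, {3: 7}))                 # hit
```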

Page 14: Memory Management

14

Page Replacement Algorithms

• Page fault forces a choice
  – which page must be removed
  – to make room for the incoming page

• A modified page must first be saved
  – an unmodified page is just overwritten

• Better not to choose an often-used page
  – it will probably need to be brought back in soon

Page 15: Memory Management

15

Optimal Page Replacement Algorithm

• Replace the page needed at the farthest point in the future
  – optimal but unrealizable

• Estimate by …
  – logging page use on previous runs of the process
  – although this is impractical
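Because the optimal algorithm needs the whole future reference string, it can only be simulated after the fact; the sketch below (a textbook-style reference string, not data from the slides) evicts the resident page whose next use lies farthest in the future.

```python
def opt_faults(refs, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue

        def next_use(p):
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")

        # evict the page not needed for the longest time (or never again)
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

print(opt_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))  # 7 faults
```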

Page 16: Memory Management

16

Not Recently Used Page Replacement Algorithm

• Each page has a Referenced bit and a Modified bit
  – the bits are set when the page is referenced or modified

• Pages are classified
  1. not referenced, not modified
  2. not referenced, modified
  3. referenced, not modified
  4. referenced, modified

• NRU removes a page at random
  – from the lowest numbered non-empty class
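A sketch of the selection step only (the resident pages and their R/M bits are made up for illustration), using the class numbering 1–4 from the slide:

```python
import random

def nru_victim(pages):
    """pages: dict mapping page number -> (referenced, modified) bits."""
    def page_class(bits):
        return {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4}[bits]
    lowest = min(page_class(bits) for bits in pages.values())
    candidates = [p for p, bits in pages.items() if page_class(bits) == lowest]
    return random.choice(candidates)       # random page from the lowest class

resident = {10: (1, 1), 11: (0, 1), 12: (1, 0), 13: (0, 1)}
print(nru_victim(resident))   # 11 or 13: class 2 (not referenced, modified)
```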

Page 17: Memory Management

17

FIFO Page Replacement Algorithm

• Maintain a linked list of all pages
  – in the order they came into memory

• The page at the beginning of the list is replaced

• Disadvantage
  – a heavily used page may be the victim
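A minimal FIFO simulation (the reference string is a standard illustration, not from the slides): pages are evicted strictly in arrival order, no matter how heavily they are being used.

```python
from collections import deque

def fifo_faults(refs, num_frames):
    queue, faults = deque(), 0
    for page in refs:
        if page in queue:
            continue                      # hit: FIFO order is not changed
        faults += 1
        if len(queue) == num_frames:
            queue.popleft()               # evict the oldest page
        queue.append(page)
    return faults

print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9 faults
```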

Page 18: Memory Management

18

Second Chance Page Replacement Algorithm

• Operation of second chance
  – pages sorted in FIFO order
  – page list when a fault occurs at time 20 and page A has its R bit set (numbers above pages are loading times)
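A hedged sketch of the eviction step (the page list is illustrative, not the figure's data): the scan starts at the head of the FIFO list, and a page with its R bit set is given a second chance by clearing the bit and moving it to the tail.

```python
from collections import deque

def second_chance_victim(pages):
    """pages: deque of [page, r_bit] in FIFO order, oldest first.
    Returns the evicted page and leaves the deque updated."""
    while True:
        page, r_bit = pages.popleft()
        if r_bit:
            pages.append([page, 0])       # second chance: clear R, move to tail
        else:
            return page

queue = deque([["A", 1], ["B", 0], ["C", 1]])   # A is oldest but has R set
print(second_chance_victim(queue))               # B is evicted
```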

Page 19: Memory Management

19

The Clock Page Replacement Algorithm
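Clock is second chance with the resident pages kept in a circular list and a moving hand instead of a reordered FIFO queue; the sketch below (illustrative data, not from the slide's figure) shows one victim selection.

```python
def clock_victim(frames, r_bits, hand):
    """frames: resident pages; r_bits: parallel list of R bits; hand: current
    position. Returns (victim_index, new_hand_position)."""
    while True:
        if r_bits[hand] == 0:
            victim = hand
            return victim, (hand + 1) % len(frames)
        r_bits[hand] = 0                  # give the page a second chance
        hand = (hand + 1) % len(frames)

frames = ["A", "B", "C", "D"]
r_bits = [1, 1, 0, 1]
print(clock_victim(frames, r_bits, 0))    # (2, 3): page C is replaced
```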

Page 20: Memory Management

20

Least Recently Used (LRU)

• Assume pages used recently will be used again soon
  – throw out the page that has been unused the longest

• Must keep a linked list of pages
  – most recently used at the front, least recently used at the rear
  – update this list on every memory reference!

• Alternatively, keep a counter in each page table entry
  – choose the page with the lowest counter value
  – periodically zero the counters
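A small LRU simulation (illustrative): an ordered dictionary stands in for the linked list described above, with the most recently used page at the back and the victim taken from the front.

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)      # touched: move to the MRU end
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)    # evict the LRU page at the front
        frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10 faults
```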

Page 21: Memory Management

21

Page Size (1)

Small page size

• Advantages
  – less internal fragmentation
  – less unused program in memory

• Disadvantages
  – programs need many pages, hence larger page tables

Page 22: Memory Management

22

Page Size (2)

• Overhead due to page table and internal fragmentation

• Where
  – s = average process size in bytes
  – p = page size in bytes
  – e = size of a page table entry in bytes

overhead = s·e/p + p/2

  – s·e/p = page table space
  – p/2 = internal fragmentation (on average, half of a process's last page is wasted)

Overhead is minimized when p = √(2·s·e)
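Plugging in assumed numbers (s = 1 MB average process size and e = 8 bytes per page table entry, neither of which comes from the slides), the formula gives an optimal page size of 4 KB:

```python
from math import sqrt

def overhead(s, e, p):
    return s * e / p + p / 2     # page table space + internal fragmentation

s, e = 1 * 1024 * 1024, 8
print(sqrt(2 * s * e))           # 4096.0 -> optimal page size in bytes
print(overhead(s, e, 4096))      # 4096.0 (2 KB page table + 2 KB fragmentation)
```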

Page 23: Memory Management

23

Page Fault Handling (1)

1. Hardware traps to the kernel
2. OS determines which virtual page is needed
3. Registers are saved
4. OS checks validity of the address, seeks a page frame
5. If the selected frame is dirty, write it to disk

Page 24: Memory Management

24

Page Fault Handling (2)

6. OS brings the new page into memory from the disk
7. Page tables are updated
8. Faulting instruction is backed up to the state it had when it began
9. Faulting process is scheduled
10. Registers are restored
11. Program continues
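The eleven steps can be read as one routine; the sketch below strings them together with assumed helper names (mmu, page_table, and frame are illustrative stand-ins, not a real OS API).

```python
def handle_page_fault(faulting_address, page_table, mmu):
    page = mmu.virtual_page(faulting_address)   # 2. which virtual page?
    mmu.save_registers()                        # 3. save registers
    if not page_table.is_valid(page):           # 4. validity check
        raise MemoryError("invalid address")
    frame = page_table.pick_victim_frame()      # 4. seek a page frame
    if frame.dirty:                             # 5. dirty frame: write to disk
        frame.write_back()
    frame.load_from_disk(page)                  # 6. bring the new page in
    page_table.map(page, frame)                 # 7. update the page tables
    mmu.back_up_instruction()                   # 8. back up faulting instruction
    mmu.restore_registers()                     # 10. restore registers
    # 9. / 11. the faulting process is scheduled again and continues
```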