
Page 1: Memory Management By: Omar A. Cruz Salgado 802-00-1712 ICOM 5007 Sec. 121

Memory Management

By: Omar A. Cruz Salgado 802-00-1712

ICOM 5007 Sec. 121

Page 2:

Memory Management Introduction

Ideally… What every programmer would like is an infinitely large, infinitely fast memory that is also nonvolatile.

• Nonvolatile = does not lose its contents when the electric power fails.

– Unfortunately, technology does not provide such memory.

• Memory Hierarchy:

1. Fast, expensive, volatile cache memory
2. Medium-priced, tens of megabytes of volatile main memory (RAM)
3. Tens or hundreds of gigabytes of slow, cheap, nonvolatile disk storage

Page 3:

Memory Management Introduction

– In order to manage this hierarchy, the operating system has a Memory Manager.

• What is its job?

– To keep track of which parts of memory are in use and which parts are not, to allocate memory to processes when they need it and deallocate it when they are done, and to manage swapping between main memory and disk when main memory is too small to hold all the processes.

Page 4:

Basic Memory Management

– Memory management systems can be divided into two groups:

• Those that move processes back and forth between main memory and disk during execution (swapping and paging).

• Those that do not.

– We must keep in mind that swapping and paging are largely artifacts caused by the lack of sufficient main memory to hold all the programs at once.

Page 5:

Monoprograming without Swapping or Paging

– The simplest memory management scheme is to run one program at a time, sharing the memory between that program and the operating system.

– Three variations (memory layouts from address 0 up to 0xFFF…):

1. Operating system in RAM at the bottom of memory, user program above it. First used in mainframes and minicomputers.
2. Operating system in ROM at the top of memory, user program in RAM below it. Used in some palmtops and embedded systems.
3. Device drivers in ROM at the top, operating system in RAM at the bottom, user program in between. Used in early personal computers.

Page 6:

Monoprograming without Swapping or Paging

– Only one process at a time can be running. As soon as the user types a command, the operating system copies the requested program from disk and executes it. When the process finishes, the operating system displays a prompt character and waits for a new command. When it receives the command, it loads a new program into memory, overwriting the first one.

Page 7:

Multiprogramming With Fixed Partitions

– Most modern systems allow multiple processes to run at the same time. This means that when a process is blocked waiting for I/O to finish, another one can use the CPU.

– To achieve multiprogramming, memory must be divided up into n (possibly unequal) partitions. This can be done manually when the system is started up.

– When a job arrives, it can be put in an input queue.

Page 8:

Multiprogramming With Fixed Partitions

• A first scheme is to put each incoming job in the input queue of the smallest partition large enough to hold it. Any space in the partition not used by a job is lost.

• The disadvantage of sorting jobs into separate queues becomes apparent when the queue for a large partition is empty but the queue for a small partition is full.

(Figure: multiple input queues, one per partition. Memory layout from the bottom up: OS at 0–100K, Partition 1 at 100–200K, Partition 2 at 200–400K, Partition 3 at 400–700K, Partition 4 at 700–800K.)

Page 9:

Multiprogramming With Fixed Partitions

• An alternative organization is to maintain a single queue.

• Whenever a partition becomes free the job closest to the front of the queue that fits in it could be loaded into the empty partition and run.

(Figure: a single input queue feeding the same partition layout: OS at 0–100K, Partition 1 at 100–200K, Partition 2 at 200–400K, Partition 3 at 400–700K, Partition 4 at 700–800K.)

Page 10:

Modeling Multiprogramming

– A way to model multiprogramming is to look at CPU usage from a probabilistic viewpoint. Suppose that a process spends a fraction p of its time waiting for I/O to complete. With n processes in memory at once, the probability that all n processes are waiting for I/O is p^n.

CPU utilization = 1 - p^n

The number n is called the degree of multiprogramming.
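The utilization formula above can be evaluated for a few values to see the effect of the degree of multiprogramming (a minimal sketch; the function name and sample values are illustrative):

```python
# CPU utilization from the probabilistic model above: with n processes,
# each waiting for I/O a fraction p of the time, the CPU is busy
# whenever at least one process is not waiting.
def cpu_utilization(p, n):
    return 1 - p ** n

# With an 80% I/O-wait fraction, utilization climbs as n grows:
for n in (1, 4, 10):
    print(n, round(cpu_utilization(0.8, n), 3))
```

This shows why keeping several processes in memory pays off even when each one is I/O bound most of the time.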

Page 11:

Relocation and Protection

– Multiprogramming introduces two essential problems that must be solved:

• Relocation

• Protection

– When a program is linked, the linker must know at what address the program will begin in memory.

Example:

Suppose the first instruction is a call to a procedure at address 100. If the program is loaded into Partition 1 (starting at 100K), the instruction will jump to absolute address 100, which is inside the operating system's memory space. What is needed is a call to 100K + 100. This problem is called relocation.

(Figure: the same partition layout as before: OS at 0–100K, partitions from 100K up to 800K.)

Page 12:

Relocation and Protection

– Solution:

• Modify the instructions as the program is loaded into memory. Programs loaded into Partition 1 have 100K added to each address. This alone does not solve protection, because a program can always construct a new instruction and jump to it.

• IBM provided a solution for protection: divide memory into 2-KB blocks and assign a 4-bit protection code to each block.

– PSW = Program Status Word <= contained a 4-bit key

• An alternative solution is base and limit hardware: when a process is scheduled, the base register is loaded with the address of the start of the partition, and the limit register is loaded with the length of the partition. The disadvantage of this scheme is that it must perform an addition and a comparison on every memory reference.
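The base-and-limit scheme can be sketched in a few lines (hypothetical values; in real hardware the MMU performs this check on every reference):

```python
# Base-and-limit translation: the hardware compares the virtual address
# against the limit register (protection) and adds the base register
# (relocation) on every memory reference.
def translate(virtual_addr, base, limit):
    if virtual_addr >= limit:      # the comparison: protection check
        raise MemoryError("protection fault: address outside partition")
    return base + virtual_addr     # the addition: relocation

# A process in Partition 1 (base 100K, limit 100K) calling address 100:
K = 1024
print(translate(100, 100 * K, 100 * K))  # -> 102500, i.e. 100K + 100
```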

Page 13:

SWAPPING

– Swapping consists of bringing in each process in its entirety, running it for a while, then putting it back on the disk.

(Figure: memory allocation changes over time as processes A, B, C, and D are swapped in and out; the holes left behind are unused memory.)

Since A comes back into a different location, the addresses contained in it must be relocated, either by software when it is swapped in or by hardware during program execution.

Page 14:

SWAPPING

– The main difference between fixed partitions and variable partitions is that the number, location, and size of the partitions vary dynamically in the latter as processes come and go.

– This method gives the flexibility to have partitions that are neither too small nor too big for a process, which improves memory utilization.

– But it also complicates allocating and deallocating memory, as well as keeping track of it.

Page 15:

SWAPPING

– Memory compaction:

• When swapping creates multiple holes in memory, it is possible to combine them all into one big block by moving all the processes downward as far as possible.

• This is usually not performed because it takes a lot of CPU time.

– A point worth making concerns how much memory should be allocated for a process when it is created or swapped in. If processes are created with a fixed size that never changes, then allocation is simple: the OS allocates exactly what is needed, no more and no less.

Page 16:

SWAPPING

– Processes’ data segments can grow, for example by dynamically allocating memory from the heap.

– If a hole in memory is adjacent to the process's memory region, then the process is allowed to grow into it.

– If a process is adjacent to another process, then the growing process has to be moved to a memory space large enough for it, or one or more processes will have to be swapped out to create a large enough area.

– If a process cannot grow in memory and the swap area on the disk is full, then the process must be killed.

Page 17:

SWAPPING

Page 18:

Memory Management with Bitmaps

– With bitmaps, memory is divided up into allocation units. Corresponding to each allocation unit is a bit in the bitmap, which is 0 if the unit is free and 1 if it is occupied (or vice versa).

• The size of the allocation unit is an important design issue. The smaller the allocation unit, the larger the bitmap. But appreciable memory will be wasted if the units are too large.

• We have to remember that the bitmap will also be in memory, which will limit our space for data.

• But the main problem with bitmaps is that when a k-unit process is brought into memory, the memory manager must search the bitmap for a run of k consecutive 0 bits, which can be a slow operation.
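The slow search described above can be sketched as follows (a minimal illustration; the names are made up):

```python
# Bitmap allocator: scan for k consecutive 0 bits (free allocation units),
# then mark them occupied. Returns the starting unit, or None if no run
# of k free units exists.
def allocate(bitmap, k):
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == k:                       # found k consecutive free units
            start = i - k + 1
            bitmap[start:i + 1] = [1] * k  # mark the units as occupied
            return start
    return None                            # no hole big enough

mem = [1, 1, 0, 0, 0, 1, 0, 0]
print(allocate(mem, 3))  # -> 2
```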

Page 19:

Memory Management with Linked Lists

– Another way of keeping track of memory is to maintain a linked list of allocated and free memory segments, where a segment is either a process or a hole between two processes.

Each list entry records the segment type (P = process, H = hole), its start address, and its length, for example:

P 0 5 | H 5 3 | P 8 6 | … | H 18 2 | P 20 6 | P 26 3

(The entry "H 18 2" means a hole that starts at address 18 and has length 2.)

– This scheme has the advantage that when a process terminates or is swapped out, updating the list is straightforward.

Page 20:

Memory Management with Linked Lists

– Several algorithms can be used to allocate memory for a newly created process using the linked list approach.

• First fit = The memory manager scans along the list of segments until it finds a hole that is big enough. The hole is then broken into two pieces, one for the process and the other for the unused space.

• Next fit = Works the same way as first fit, but it keeps track of where it is whenever it finds a suitable hole. The next time it looks for a hole it starts from the place where it left off instead of from the beginning.

• Best fit = Searches the entire list and takes the smallest hole that is adequate.

• Worst fit = Takes the largest available hole, so that the hole broken off will be big enough to be useful.

• Quick fit = Maintains separate lists for some of the more common sizes requested.
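Using the (type, start, length) segment list from the previous slide, first fit can be sketched like this (illustrative code, not any particular OS's implementation):

```python
# First fit over a list of (kind, start, length) segments, where 'H' marks
# a hole and 'P' a process. Splitting a hole leaves the unused remainder
# behind as a smaller hole, as described above.
def first_fit(segments, size):
    for i, (kind, start, length) in enumerate(segments):
        if kind == 'H' and length >= size:
            allocated = ('P', start, size)
            if length > size:   # split: process piece + leftover hole
                segments[i:i + 1] = [allocated,
                                     ('H', start + size, length - size)]
            else:               # exact fit: the hole disappears entirely
                segments[i] = allocated
            return start
    return None                 # no hole big enough

segs = [('P', 0, 5), ('H', 5, 3), ('P', 8, 6), ('H', 14, 4)]
print(first_fit(segs, 2))  # -> 5; the hole at 5 shrinks to ('H', 7, 1)
```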

Page 21:

Virtual Memory

– The basic idea behind virtual memory is that the combined size of the program, data, and stack may exceed the amount of physical memory available for it. The OS keeps those parts of the program currently in use in main memory, and the rest on disk.

Page 22:

Paging

– Virtual addresses are addresses that are program generated. Together they form the virtual address space. When virtual memory is used, a virtual address does not go directly to the memory bus; instead it goes to the Memory Management Unit (MMU), which maps the virtual address onto a physical one.

Page 23:

Paging

– In this example, we have a computer that can generate 16-bit addresses, from 0 to 64K; these are virtual addresses. This computer, however, has only 32 KB of physical memory. A complete copy of the program's core image must be kept on disk.

– The virtual address space is divided into units called pages.

– The corresponding units in the physical memory are called page frames.

– Pages and page frames are always the same size.

Page 24:

Paging

– When a program tries to access address 0, virtual address 0 is sent to the MMU. The MMU sees that this virtual address is in page 0, which according to its mapping is in page frame 2.

– It will change:
• MOV REG,0

– to the address:
• MOV REG,8192

– Every page that is not mapped has an X in it. In actual hardware, a present/absent bit keeps track of which pages are physically present in memory.

Page 25:

Paging

– If the program tries to access an address that is in a virtual page with no mapping, the CPU traps to the OS. This is called a page fault.

– The OS takes a little-used page frame and copies its contents back to disk.

– It then fetches the page just referenced into that page frame, changes the mapping, and restarts the trapped instruction.

– When an incoming address is delivered to the MMU, it comes as a 16-bit virtual address that is split into a 4-bit page number and a 12-bit offset.
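The address split and lookup described above can be sketched as follows, reusing the earlier example in which virtual page 0 sits in page frame 2 (the page table contents here are illustrative):

```python
# 16-bit virtual address = 4-bit page number + 12-bit offset (4-KB pages).
PAGE_SIZE = 4096
page_table = {0: 2, 1: 1, 2: 6}      # virtual page -> page frame (others absent)

def mmu(vaddr):
    page = vaddr >> 12               # top 4 bits: virtual page number
    offset = vaddr & 0xFFF           # low 12 bits: offset within the page
    if page not in page_table:       # present/absent check fails:
        raise LookupError("page fault")   # trap to the OS
    return page_table[page] * PAGE_SIZE + offset

print(mmu(0))  # MOV REG,0 -> physical address 8192
```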

Page 26:

Page Tables

– The virtual page number is used as an index into the page table to find the entry for that virtual page.

– The purpose of the page table is to map virtual pages onto page frames.

– Two major issues must be faced:

• The page table can be extremely large.

– Modern computers use virtual addresses of at least 32 bits. With a 4-KB page size, a 32-bit address space has 1 million pages. We must remember that each process has its own page table.

• The mapping must be fast.

– The virtual-to-physical mapping must be done on every memory reference.
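The "1 million pages" figure follows from simple arithmetic, assuming the common 4-KB page size (the slide does not fix a page size):

```python
# A 32-bit virtual address space divided into 4-KB pages:
address_space = 2 ** 32          # 4 GB of virtual addresses
page_size = 2 ** 12              # 4 KB per page (12-bit offset)
pages = address_space // page_size
print(pages)                     # -> 1048576, about 1 million entries per process
```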

Page 27:

Multilevel Page Tables

– The secret of this method is to avoid keeping all the page tables in memory all the time. Those that are not needed should not be kept around.

Page 28:

Multilevel Page Tables

– Structure of a page table entry:

• The exact layout of an entry is highly machine dependent, but the information present is roughly the same from machine to machine.

• Page frame number = the value to be located.

• Present/absent bit = if this bit is 1, the entry is valid and can be used. If it is 0, the virtual page to which the entry belongs is not in memory; accessing it causes a page fault.

• Protection bits = tell what kinds of access are permitted.

• Modified and referenced bits = keep track of page usage. When a page is written to, the hardware automatically sets the modified bit.

• Caching disabled = important for pages that map onto device registers rather than memory.

Page 29:

TLBs – Translation Lookaside Buffers

– Computers come equipped with a small hardware device for mapping virtual addresses to physical addresses without going through the page table.

– This device is called a TLB (Translation Lookaside Buffer) or sometimes an associative memory.