
CPSC 410--Richard Furuta 2/24/99 1

Silberschatz and Galvin, Chapter 8

Memory Management

CPSC 410--Richard Furuta 2/24/99 2

Memory Management

• Goal: permit different processes to share memory--effectively keep several in memory at the same time

• Eventual meta-goal: users develop programs in what appears to be their own infinitely-large address space (i.e., starting at address 0 and extending without limit)


CPSC 410--Richard Furuta 2/24/99 3

Memory Management

• Initially we assume that the entire program must be in physical memory before it can be executed.

– How can we avoid including unnecessary routines? Programs consist of modules written by several people who may not [be able to] communicate their intentions to one another.

• Reality:

– primary memory has faster access time but limited size; secondary memory is slower but much cheaper.

– program and data must be in primary memory to be referenced by the CPU directly

CPSC 410--Richard Furuta 2/24/99 4

Multistep Processing of User Program

Figure: a source program is translated by the compiler or assembler into an object module; the linkage editor combines it with other object modules into a load module; the loader, together with system libraries, produces the memory image. The first step happens at compile time; linking and loading happen at load time.


CPSC 410--Richard Furuta 2/24/99 5

Multistep Processing of User Program

Figure (continued): at run time (execution time), the memory image executes, bringing in dynamic libraries as needed, and produces output.

CPSC 410--Richard Furuta 2/24/99 6

Multistep Processing of User Program

• Binding: associate location with object in program.

• For example, changing addresses in a user's program from logical addresses to real ones

• More abstractly, mapping one address space to another

• Many things can be bound in programming languages; we are concentrating on memory addresses here.


CPSC 410--Richard Furuta 2/24/99 7

Binding

• Typically

– compiler binds symbolic names (e.g., variable names) to relocatable addresses (i.e., relative to the start of the module)

– linkage editor may further modify relocatable addresses (e.g., relative to a larger unit than a single module)

– loader binds relocatable addresses to absolute addresses

• Actually, address binding can be done at any point in a design

CPSC 410--Richard Furuta 2/24/99 8

When should binding occur?

• binding at compile time

– generates absolute code. Must know at compile time where the process (or object) will reside in memory. Example: *0 in C. Limits complexity of system.

• binding at load time

– converts compiler's relocatable addresses into absolute addresses at load time. The most common case. The program cannot be moved during execution.

• binding at run time

– process can be moved during its execution from one memory segment to another. Requires hardware assistance (discussed later). Run-time overhead results from movement of process.


CPSC 410--Richard Furuta 2/24/99 9

When should loading occur?

• Recall that loading moves objects into memory

• Load before execution

– load all routines before runtime starts

– straightforward scheme

• Load during execution--Dynamic loading

– loads routines on first use

– note that unused routines (ones that are not invoked) are not loaded

– Implement as follows: on call to routine, check if the routine is in memory. If not, load it.
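A minimal sketch of that check in C (the routine table and the load_from_disk() hook are hypothetical; a real loader would also relocate the routine it brings in):

    /* Dynamic-loading sketch: a routine is brought into memory only on its
       first call.  The routine table and the loader hook are hypothetical. */
    #include <stdio.h>
    #include <stddef.h>

    typedef void (*routine_t)(void);

    static void report_stats(void) { puts("report_stats running"); }

    /* stand-in for "fetch the relocatable routine from disk and relocate it" */
    static routine_t load_from_disk(const char *name)
    {
        printf("loading %s\n", name);
        return report_stats;
    }

    struct routine_entry {
        const char *name;
        routine_t   code;                    /* NULL until the routine is loaded */
    };

    static void call_routine(struct routine_entry *r)
    {
        if (r->code == NULL)                 /* not in memory yet? */
            r->code = load_from_disk(r->name);
        r->code();                           /* invoke it */
    }

    int main(void)
    {
        struct routine_entry stats = { "report_stats", NULL };
        call_routine(&stats);                /* first call: loads, then runs */
        call_routine(&stats);                /* later calls: already resident */
        return 0;
    }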

CPSC 410--Richard Furuta 2/24/99 10

When should linking occur?

• Recall that linking resolves references among objects.

• Standard implementation: link before execution (hence all references to library routines have been resolved before execution begins). Called static linking.

• Link during execution: dynamic linking

– memory-resident library routines

– every process uses the same copy of the library routines

– hence linking is deferred to execution time, but loading is not necessarily deferred


CPSC 410--Richard Furuta 2/24/99 11

Dynamic Linking

• Implementation of dynamic linking

– library routines are not present in executable image. Instead stubs are present.

– stub: small piece of code that indicates how to locate the appropriate memory-resident library routine (or how to load it if it is not already memory-resident)

– first time that a routine is invoked, stub locates (and possibly loads) routine and then replaces itself with the address of the memory-resident library routine
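A rough C illustration of the stub idea (names and the resolver are made up; real dynamic linkers patch a linkage-table entry rather than a C function pointer):

    /* Dynamic-linking stub sketch: the first call goes through a stub that
       resolves the real routine and patches the pointer, so later calls jump
       straight to the library code.  Names and the resolver are made up. */
    #include <stdio.h>

    static void real_hello(void) { puts("hello from the shared library"); }

    /* stand-in for the dynamic linker's locate-and-load step */
    static void (*resolve_routine(const char *name))(void)
    {
        (void)name;                  /* a real resolver would search the library */
        return real_hello;
    }

    static void hello_stub(void);                /* forward declaration */
    static void (*hello)(void) = hello_stub;     /* "linkage table" entry */

    static void hello_stub(void)
    {
        hello = resolve_routine("hello");        /* locate (and possibly load) */
        hello();                                 /* finish this first call */
    }

    int main(void)
    {
        hello();    /* first call: through the stub */
        hello();    /* later calls: direct */
        return 0;
    }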

CPSC 410--Richard Furuta 2/24/99 12

Dynamic Linking

• Also known as shared libraries

• Savings of overall memory (one copy of library routine) and of disk space (library routines are not in executable images).

• Expense: first use is more expensive

• Problem: incompatible versions

– Can retain version number to distinguish incompatible versions of library. Alternative is to require upward compatibility in library routines.

– If there are different versions, then you can have multiple versions of routine in memory at same time, counteracting a bit of the memory savings.

• Example: SUN OS's shared libraries


CPSC 410--Richard Furuta 2/24/99 13

Overlays

• So far, the entire program and data of process must be in physical memory during execution.

• Ad hoc mechanism for permitting process to be larger than the amount of memory allocated to it: overlays

• In effect keeps only those instructions and data in memory that are in current use

• Needed instructions and data replace those no longer in use

CPSC 410--Richard Furuta 2/24/99 14

Overlays Example

Figure: common routines, common data, and the overlay driver remain resident; Main Routine A and Main Routine B share a single overlay area, occupying it one at a time.


CPSC 410--Richard Furuta 2/24/99 15

Overlays

• Overlays do not require special hardware support--can be managed by programmer

• Programmer must structure program appropriately, which may be a difficulty

• Very common solution in early days of computers. Now, probably dynamic loading and binding are more flexible

• Example: Fortran common

CPSC 410--Richard Furuta 2/24/99 16

Logical versus Physical Address Space

• logical address: generated by the CPU (logical address space)

• physical address: loaded into the memory address register of the memory (physical address space)

• compile-time and load-time address binding: logical and physical addresses are the same

• execution-time address binding: logical and physical addresses may differ

– in this case, logical address referred to as virtual address


CPSC 410--Richard Furuta 2/24/99 17

Mapping from Virtual to Physical Addresses

• Run-time mapping from virtual to physical address handled by the Memory Management Unit (MMU), a hardware device

• Simple MMU scheme

– relocation register containing start position of process in memory

– value in relocation register is added to every address generated by a user process when it is sent to memory

CPSC 410--Richard Furuta 2/24/99 18

Dynamic (binding) relocation using a relocation register

Figure: the CPU generates logical address 346; the MMU adds the relocation register value 14000 to it, producing physical address 14346, which is sent to memory.


CPSC 410--Richard Furuta 2/24/99 19

Logical Address Space versus Physical Address Space

• User programs only see the logical address space, in range 0 to max

• Physical memory operates in the physical address space, addresses in the range R+0 to R+max

• This distinction between logical and physical address spaces is a key one for memory management schemes.

CPSC 410--Richard Furuta 2/24/99 20

Swapping

• What: temporarily move an inactive process to backing store (e.g., fast disk). At some later time, return it to main memory for continued execution.

• Why: permit other processes to use memory resources (hence each process can be bigger)

• Who: decision of what process to swap is made by the medium-term scheduler


CPSC 410--Richard Furuta 2/24/99 21

Schematic view of Swapping

CPSC 410--Richard Furuta 2/24/99 22

Swapping

• Some possibilities of when to swap

– if you have 3 processes, start to swap process one out when its quantum expires while process two is executing. The goal is to have the third process in place when two's quantum expires (i.e., overlap computation with disk I/O)

– context switch time is very high if you can't achieve this

• Another option: roll out a lower priority process in favor of a higher priority process. Roll in the lower priority process when the higher priority one finishes


CPSC 410--Richard Furuta 2/24/99 23

Swapping

• If you have static address binding (i.e., compile- or load-time binding) you have to swap the process back into the same memory space. Why?

• If you have execution-time address binding, then you can swap the process back into a different memory space.

• Disk is slow and the transfer time needed is proportional to the size of the process, so it is useful if processes can specify the parts of allocated memory that are unused to avoid having to transfer them.

CPSC 410--Richard Furuta 2/24/99 24

Swapping

• Process cannot be swapped until completely idle. Example of a problem: overlapped DMA input/output. (This requires that you have buffer space allocated in memory when the I/O request comes back)

• Note that in general swapping in this form (i.e., with this large-sized granularity) is not very common now.


CPSC 410--Richard Furuta 2/24/99 25

Contiguous Allocation

• Divide memory into partitions. Initially consider two partitions--one for the resident operating system and one for a user process.

• Where should the operating system go--low memory or high memory?

• Frequently put the operating system in low memory because this is where the interrupt vector is located. Also this permits the user partition to be expanded without running into the operating system (a factor when we have more than one partition or if we run the same binaries on different system configurations).

CPSC 410--Richard Furuta 2/24/99 26

Memory Partitions

Figure: memory partitions--the resident operating system occupies low memory, starting at address 0; user processes (program and data) occupy the remainder, up to high memory.


CPSC 410--Richard Furuta 2/24/99 27

Single Partition Allocation

• Initial location of the user's process in memory is not 0

• The relocation register (base register) points to the first location in the user's partition. User's logical addresses are adjusted by the hardware to produce the physical address. (Address binding delayed until execution time.)

• Relocation register value is static during program execution, hence all of the OS must be present (it might be used). Otherwise we have to relocate user code/data "on the fly"! In other words we cannot have transient OS code.

CPSC 410--Richard Furuta 2/24/99 28

Single Partition Allocation

• How about memory references passed from the user process to the OS (for example, blocks of memory passed as an argument to an I/O routine)?

• The address must be translated from the user's logical address space to the physical address space. Other arguments don't get translated (e.g., counts).

• Hence OS software has to handle these translations.


CPSC 410--Richard Furuta 2/24/99 29

Limit Register

• How do we protect the OS from accidental or intentional interference from user processes?

• Add a limit register to the address mapping scheme

CPSC 410--Richard Furuta 2/24/99 30

Limit Register

Figure: the CPU's logical address is compared with the limit register. If the address is less than the limit, the relocation register is added to it to form the physical address sent to memory; if not, the hardware traps with an addressing error.
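The figure's check can be modeled in a few lines of C (a software sketch only; the limit value here is an assumption, while 14000 and 346 match the earlier relocation-register figure):

    /* Software model of the limit + relocation check the MMU performs on
       every logical address. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct mmu {
        uint32_t limit;        /* size of the user's partition */
        uint32_t relocation;   /* physical start of the partition */
    };

    uint32_t translate(const struct mmu *m, uint32_t logical)
    {
        if (logical >= m->limit) {                 /* outside the partition? */
            fprintf(stderr, "trap: addressing error (%u)\n", logical);
            exit(EXIT_FAILURE);                    /* model of the hardware trap */
        }
        return m->relocation + logical;            /* physical address */
    }

    int main(void)
    {
        struct mmu m = { .limit = 4000, .relocation = 14000 };
        printf("%u\n", translate(&m, 346));        /* prints 14346 */
        return 0;
    }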


CPSC 410--Richard Furuta 2/24/99 31

Multiple-Partition Allocation

• Goal: allocate memory to multiple processes (which permits rapid switches, for example)

• Simple scheme: fixed-size partition

– memory divided into several partitions of fixed size

– each partition holds one process

– partition becomes free when process terminates; another process picked from the ready queue gets the free partition

– number of partitions bounds the degree of multiprogramming

– originally used in the IBM OS/360 operating system (MFT)

– No longer used

CPSC 410--Richard Furuta 2/24/99 32

Multiple-Partition Allocation: Dynamic Partition

• Memory is partitioned dynamically

– Hole: block of available memory

– Holes of various sizes are scattered throughout memory

• Process still must occupy contiguous memory

• OS keeps a table listing which parts of memory are available

– Allocated partitions

– Free partitions (holes)

• When a process arrives, the OS searches for a part of memory that is large enough to hold the process. Allocates only the amount of memory needed.


CPSC 410--Richard Furuta 2/24/99 33

Multiple-Partition Allocation: Dynamic Partition

Figure: memory of 2000K with the operating system resident in 0-100K. First p1 (500K) is allocated at 100K-600K; then p2 (800K) is allocated at 600K-1400K, leaving 1400K-2000K free.

CPSC 410--Richard Furuta 2/24/99 34

Multiple-Partition Allocation: Dynamic Partition

Figure (continued): next, p3 (400K) is allocated at 1400K-1800K, leaving only 200K free. p4 (600K) can't be allocated yet. When p2 is done, its partition (600K-1400K) is freed, and p4 gets allocated at 600K-1200K.


CPSC 410--Richard Furuta 2/24/99 35

Multiple-Partition Allocation: Dynamic Partition

Figure (continued): the resulting memory map over addresses 0-2000K--operating system in 0-100K, p1 in 100K-600K, p4 in 600K-1200K, 200K free at 1200K-1400K, p3 in 1400K-1800K, and 200K free at 1800K-2000K.

p5 requests 300K but can't obtain it since there is no large enough contiguous block free. Note that there is 400K free in the system though...

CPSC 410--Richard Furuta 2/24/99 36

Multiple-Partition Allocation: Dynamic Partition

• This is an example of external fragmentation--sufficient amount of free memory to satisfy the request but not in a contiguous block.

• We used a first fit algorithm this time to decide where to allocate space--what are some strategies for finding a free hole to fill?


CPSC 410--Richard Furuta 2/24/99 37

Multiple-Partition Allocation: Dynamic Partition

• first fit algorithm: allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or where the previous first-fit search ended. We quit when we find a free hole that is large enough.

• best fit: allocate the smallest hole that is big enough. Must search the entire list to find it if you don't keep the free list ordered by size.

• worst fit: allocate the largest hole. Again may need to search the entire free list if not ordered. Produces the largest leftover hole, which may be less likely to create external fragmentation.
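A first-fit sketch over a linked free list, with sizes taken to be in KB (best fit and worst fit differ only in scanning the whole list, or keeping it ordered, to pick the smallest or largest adequate hole):

    /* First-fit sketch: walk the free list and take the first hole that is
       big enough, shrinking it in place.  Purely illustrative. */
    #include <stddef.h>

    struct hole {
        size_t start, size;          /* start address and length of the hole */
        struct hole *next;
    };

    /* Returns the allocated start address, or (size_t)-1 if no hole fits
       (the external-fragmentation case). */
    size_t first_fit(struct hole *free_list, size_t request)
    {
        for (struct hole *h = free_list; h != NULL; h = h->next) {
            if (h->size >= request) {          /* first hole big enough */
                size_t addr = h->start;
                h->start += request;           /* shrink the hole; a real allocator */
                h->size  -= request;           /* would unlink holes that reach zero */
                return addr;
            }
        }
        return (size_t)-1;
    }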

CPSC 410--Richard Furuta 2/24/99 38

Multiple-Partition Allocation: Dynamic Partition

• Simulation shows that first-fit and best-fit are better than worst-fit for time and storage use.

• First-fit is faster than best-fit

• First-fit and best-fit are similar in storage use.

• 50% rule--up to 1/3 of memory is lost to external fragmentation in first-fit (N allocated, 1/2 N lost)


CPSC 410--Richard Furuta 2/24/99 39

Multiple-Partition Allocation: Dynamic Partition

• General comments:

– memory protection is necessary to prevent state interactions. This is effected by the limit register.

– base registers are required to point to the current partition

• In general, blocks are allocated in some quantum (e.g., power of 2). No point in leaving space free if you can't address it or if it is too small to be of any use at all. Also there is an expense in keeping track of free space (free list; traversing list; etc.).

• This results in lost space--allocated but not required by the process

• Internal fragmentation: difference between required memory and allocated memory.

• Internal fragmentation also results from estimation error and management overhead.

CPSC 410--Richard Furuta 2/24/99 40

External Fragmentation

• External fragmentation can be controlled with compaction.

– requires dynamic address binding (have to move pieces around)

– can be quite expensive in time

– some schemes try to control expense by only doing certain kinds of coalescing--e.g., on power of 2 boundary. (Topic of a data structures class.)

– OS approach can also be to roll out/roll in all processes, returning processes to new addresses--no additional code required!


CPSC 410--Richard Furuta 2/24/99 41

Paging

• Permit a process' memory to be non-contiguous

• Allocate physical memory in fixed-size, relatively small pieces, called frames

– allocate frames to process as needed

– avoids external fragmentation

– causes increased internal fragmentation but attempts to minimize it through the small-sized frames. (Question: what is the characterization of the amount of internal fragmentation?)

CPSC 410--Richard Furuta 2/24/99 42

Paging

• Implementation:

– Divide (user) logical memory into pages. The page is the same size as the frame.

– Dynamically map between pages and frames

– Hardware assistance is required to do this mapping


CPSC 410--Richard Furuta 2/24/99 43

Paging: Hardware Assistance

Figure: the CPU generates a logical address (p, d). The page number p indexes the page table, which gives frame number f; the physical address presented to physical memory is (f, d).

CPSC 410--Richard Furuta 2/24/99 44

Paging: Implementation

• Frames (and hence pages) are of fixed, small size. Generally a power of 2 in size (why?).

• logical address of form (p, d)

– p, page number

– d, offset within that page

• physical address of form (f, d)

– f, a frame number

– d, offset within that frame

• Hardware maps from p to f; d copied across directly

• Q: how are f and d combined to get physical address?
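One answer to the question, sketched in C under the assumption of 4K pages (12 offset bits): the hardware effectively concatenates f and d.

    /* Shift the frame number left by the offset width and OR in the offset. */
    #include <stdint.h>

    #define OFFSET_BITS 12u                        /* 4K pages (assumed) */
    #define PAGE_MASK   ((1u << OFFSET_BITS) - 1u)

    uint32_t page_number(uint32_t logical) { return logical >> OFFSET_BITS; } /* p */
    uint32_t page_offset(uint32_t logical) { return logical & PAGE_MASK;    } /* d */

    uint32_t physical_address(uint32_t frame, uint32_t offset)
    {
        return (frame << OFFSET_BITS) | offset;    /* (f, d) -> physical address */
    }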


CPSC 410--Richard Furuta 2/24/99 45

Paging Example

CPSC 410--Richard Furuta 2/24/99 46

Frame Table

• How does the Operating System know what frames are in use in physical memory?

• Frame table

– One entry per physical page frame

– Indicates whether frame is free or allocated

– If allocated, indicates to which page of which process or processes
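One possible frame-table layout in C (field names and the table size are assumptions; a real OS would typically also keep a free-frame list rather than scanning):

    /* Frame table: an entry per physical frame recording whether it is
       allocated and, if so, which process and page it holds. */
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_FRAMES 1024                  /* assumed size of physical memory */

    struct frame_entry {
        bool     allocated;                  /* false (the default) means free */
        uint32_t pid, page;                  /* owner, if allocated */
    };

    static struct frame_entry frame_table[NUM_FRAMES];

    /* Give (pid, page) a free frame; returns -1 if physical memory is full. */
    int allocate_frame(uint32_t pid, uint32_t page)
    {
        for (int f = 0; f < NUM_FRAMES; f++) {
            if (!frame_table[f].allocated) {
                frame_table[f] = (struct frame_entry){ true, pid, page };
                return f;
            }
        }
        return -1;
    }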


CPSC 410--Richard Furuta 2/24/99 47

Paging: Fragmentation

• No external fragmentation since all frames/pages are the same size. Any page can be mapped to any frame.

• Internal fragmentation especially on last page of process. Average of 50% on last page.

• Smaller frames create less fragmentation but increase number of frames and size of page table.

CPSC 410--Richard Furuta 2/24/99 48

Paging

• We still require that the entire process fit into memory

• Each process has its own page table

• This means that a process cannot address outside of its own address space

• Sharing frames (reentrant code) is possible

– reentrant code--pure code, i.e., non-self-modifying code that never changes during execution.

– code frames can be shared among all processes

– data separated out with one copy per process

– processes' page tables for code can be pointed to the shared frame


CPSC 410--Richard Furuta 2/24/99 49

Paging

• Hardware support required (too slow to search in software)

– simplest: put page table into dedicated high-speed registers

• must keep page table reasonably small because of expense of registers (e.g., 256 entries)

• but this severely limits the potential size of the process and increases expense of context switch!

– Alternative: keep page table in memory with a page-table base register (PTBR) pointing to it

• reduces context-switch time

• doubles number of memory accesses

CPSC 410--Richard Furuta 2/24/99 50

Paging

• Hardware support--continued

– Associative registers, also called translation look-aside buffers (TLBs)

• Associative registers: key and value

• Keys are compared simultaneously and a corresponding value is returned

• Fast but expensive. Size limited to between 8 and 2048 entries, for example.

• Use some strategy to decide which page table entries to put into associative memory.


CPSC 410--Richard Furuta 2/24/99 51

Associative Registers

• Associative registers--parallel search

• Address translation (A', A'')

– If A' is in an associative register, get frame #

– Otherwise get frame # from page table in memory

CPSC 410--Richard Furuta 2/24/99 52

Paging

• Hardware support (continued)

– TLBs (continued)

• on memory reference first check TLB. If match (hit) then use the value found

• If no hit, then need to go to memory.

• hit ratio: a measure of how effective the TLB is.

• What's a strategy for loading the TLB? Perhaps cache entries on first access. Replace the least recently used entry or use some form of rotation.
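A software model of that lookup path (the TLB size, the page-table size, and the round-robin replacement are illustrative assumptions; real hardware compares all TLB keys in parallel):

    /* Check the TLB first; on a miss, consult the in-memory page table and
       cache the result, replacing entries in simple rotation. */
    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_SIZE 16

    struct tlb_entry { bool valid; uint32_t page, frame; };

    static struct tlb_entry tlb[TLB_SIZE];
    static uint32_t page_table[1024];        /* in-memory page table (sketch-sized) */
    static unsigned next_victim;             /* rotation pointer for replacement */

    uint32_t translate_page(uint32_t page)
    {
        for (int i = 0; i < TLB_SIZE; i++)   /* hardware does this in parallel */
            if (tlb[i].valid && tlb[i].page == page)
                return tlb[i].frame;         /* TLB hit */

        uint32_t frame = page_table[page];   /* miss: extra access to the page table */
        tlb[next_victim] = (struct tlb_entry){ true, page, frame };
        next_victim = (next_victim + 1) % TLB_SIZE;    /* simple rotation */
        return frame;
    }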


CPSC 410--Richard Furuta 2/24/99 53

Effective Access Time

• Associative lookup = e time units

• Assume that a memory cycle is 1 time unit

• Hit ratio = a

– Percentage of times that a page number is found in the associative registers; ratio related to number of associative registers

• Effective Access Time (EAT):
EAT = (1 + e)a + (2 + e)(1 - a) = 2 + e - a
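Worked example with assumed numbers: if the associative lookup costs e = 0.2 of a memory cycle and the hit ratio is a = 0.8, then EAT = 0.8(1 + 0.2) + 0.2(2 + 0.2) = 2 + 0.2 - 0.8 = 1.4 memory cycles--a 40% slowdown relative to a single unmapped memory access.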

CPSC 410--Richard Furuta 2/24/99 54

Memory protection

• Protection bits associated with frames; kept in page table

– Read, write, read-only

– Illegal operations result in hardware trap

• Valid/invalid bit

– Illegal addresses, outside of process' address space

– Alternately: page-table length register (PTLR)


CPSC 410--Richard Furuta 2/24/99 55

Paging

• Note that the OS also needs to keep track of which frames are being used--too expensive to search page tables to find a free frame

• Note that in these cases the logical address space is less than or equal to the physical address space in size. Later we will see the opposite case

• User's view is one contiguous space. Physical view is the user's program scattered throughout physical memory.

CPSC 410--Richard Furuta 2/24/99 56

Multilevel Paging

• Support a very large address space (say 2^32 to 2^64) by using a two-level paging scheme.

• Here the page table itself is also paged.

• Otherwise the page table will become very large!

• Additional levels can also be introduced if necessary

• With caching, can decrease the requirement for additional memory references


CPSC 410--Richard Furuta 2/24/99 57

Two-level Page table scheme

CPSC 410--Richard Furuta 2/24/99 58

Two-level paging example

32-bit machine with a 4K page size: 20-bit page number and 12-bit page offset
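In the textbook's example the 20-bit page number is itself split into a 10-bit outer index and a 10-bit inner index. A small C sketch of that split:

    /* Split a 32-bit logical address for two-level paging:
       10-bit outer index, 10-bit inner index, 12-bit offset. */
    #include <stdint.h>

    struct two_level_addr { uint32_t p1, p2, d; };

    struct two_level_addr split(uint32_t logical)
    {
        struct two_level_addr a;
        a.p1 = (logical >> 22) & 0x3FFu;     /* outer page-table index (10 bits) */
        a.p2 = (logical >> 12) & 0x3FFu;     /* inner page-table index (10 bits) */
        a.d  =  logical        & 0xFFFu;     /* offset within the 4K page (12 bits) */
        return a;
    }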


CPSC 410--Richard Furuta 2/24/99 59

Inverted Page Table

• One entry for each real page of memory

• Entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page

• Decreases memory needed to store each page table, but increases time needed to search the table when a page reference occurs

• Use hash table to limit the search to one (or at most a few) page table entries
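A hash-chained inverted page table might look roughly like this in C (sizes, field names, and the hash function are assumptions; hash_anchor[] is assumed to be initialized to -1, and the frame number is simply the index of the matching entry):

    /* Inverted page table: one entry per physical frame, with frames that
       hash to the same bucket chained together. */
    #include <stdint.h>

    #define NUM_FRAMES   1024
    #define HASH_BUCKETS 256

    struct ipt_entry {
        uint32_t pid, page;      /* virtual page currently held in this frame */
        int      next;           /* next frame in the same hash chain, or -1 */
    };

    static struct ipt_entry ipt[NUM_FRAMES];
    static int hash_anchor[HASH_BUCKETS];    /* bucket -> first frame on chain */

    static unsigned bucket(uint32_t pid, uint32_t page)
    {
        return (pid * 31u + page) % HASH_BUCKETS;
    }

    /* Returns the frame holding (pid, page), or -1 if the page is not resident. */
    int ipt_lookup(uint32_t pid, uint32_t page)
    {
        for (int f = hash_anchor[bucket(pid, page)]; f != -1; f = ipt[f].next)
            if (ipt[f].pid == pid && ipt[f].page == page)
                return f;
        return -1;
    }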

CPSC 410--Richard Furuta 2/24/99 60

Inverted page table architecture


CPSC 410--Richard Furuta 2/24/99 61

Shared pages

• Shared code

– One copy of read-only (reentrant) code shared among processes (i.e., text editors, compilers, window systems)

– Shared code must appear in same location in the logical address space of all processes

• Private code and data

– Each process keeps a separate copy of the code and data

– The pages for the private code and data can appear anywhere in the logical address space

CPSC 410--Richard Furuta 2/24/99 62

Shared pages example


CPSC 410--Richard Furuta 2/24/99 63

Segmentation

User's View of Memory

Figure: the user views memory as a collection of segments--main program, subroutines, stack, and data.

CPSC 410--Richard Furuta 2/24/99 64

Segmentation


CPSC 410--Richard Furuta 2/24/99 65

Segmentation

• Separating logical memory into different portions (segments) for different purposes

• Permits user's partitioning of space to match the logical view

• Each segment has a name (e.g., a unique number) and a length.

• Logical address (from user's point of view) is a segment number and an offset

• Physical address found by looking into a segment table

CPSC 410--Richard Furuta 2/24/99 66

Segmentation: Segment Table

• Segment table contains

– limit

• logical addresses must fall within the range 0..limit

• error if it doesn't (out of range)

• provides memory access protection

– base

• added to offset to find physical address


CPSC 410--Richard Furuta 2/24/99 67

Segmentation example

CPSC 410--Richard Furuta 2/24/99 68

Segmentation Hardware

Figure: the CPU generates a logical address (s, d). The segment number s indexes the segment table, yielding <limit, base>. If d < limit, the physical address base + d is sent to physical memory; if not, the hardware traps with an addressing error.
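The same translation in C form (the segment-table contents here are placeholders; the limit check mirrors the figure):

    /* Index the segment table with s, check d against the limit, add the base. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct segment { uint32_t limit, base; };

    static struct segment seg_table[] = {
        { 1000, 1400 },          /* segment 0: limit 1000, base 1400 (assumed) */
        {  400, 6300 },          /* segment 1 */
    };

    uint32_t seg_translate(uint32_t s, uint32_t d)
    {
        if (d >= seg_table[s].limit) {            /* offset out of range? */
            fprintf(stderr, "trap: addressing error\n");
            exit(EXIT_FAILURE);
        }
        return seg_table[s].base + d;             /* physical address */
    }

    int main(void)
    {
        printf("%u\n", seg_translate(1, 53));     /* prints 6353 */
        return 0;
    }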


CPSC 410--Richard Furuta 2/24/99 69

Segmentation

• Permits implementation of protections appropriate to segment's role

– read-only for code

– read/write for data

• But once again, since segments are variable size, we have the problem of external fragmentation.

CPSC 410--Richard Furuta 2/24/99 70

Paged Segmentation

• Paging + Segmentation

• Aim: take the advantages of both paging and segmentation systems

• Implementation: treat each segment independently. Each segment has a page table.

• Logical address now consists of three parts

– segment number

– page number

– offset


CPSC 410--Richard Furuta 2/24/99 71

Paged Segmentation: Implementation

• Segment table has base address for page table

• Look up <page number, offset> in page table

• Get physical address in return

• See examples in text for implementations in MULTICS and the Intel 386
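A compact C sketch of the two-step lookup (structure and field names are assumptions; the length check is noted but omitted for brevity):

    /* Paged segmentation: the segment table yields the page table for
       segment s; that page table maps page p to a frame. */
    #include <stdint.h>

    #define OFFSET_BITS 12u

    struct segment_entry {
        uint32_t  length;        /* segment length, for the limit check */
        uint32_t *page_table;    /* base of this segment's page table */
    };

    uint32_t ps_translate(const struct segment_entry *seg_table,
                          uint32_t s, uint32_t p, uint32_t d)
    {
        /* a full implementation would trap if (p << OFFSET_BITS) + d
           exceeded seg_table[s].length */
        uint32_t frame = seg_table[s].page_table[p];
        return (frame << OFFSET_BITS) | d;        /* frame + offset */
    }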

CPSC 410--Richard Furuta 2/24/99 72

Considerations in comparing memory-management strategies

• Hardware support

• Performance

• Fragmentation

• Relocation (when can we relocate)

• Swapping (and the cost)

• Sharing

• Protection