Windows memory manager internals


1. Windows Memory/Cache Manager Internals (Sisimon Soman)

2. Locality Theory

If page/cluster n is accessed, there is a high probability that blocks near n will be accessed soon. All memory-based computing systems work on this principle. Windows has registry keys to configure how many blocks/pages to pre-fetch. Application-specific memory managers (databases, multimedia workloads) do application-aware pre-fetching.

3. Virtual Memory Manager (VMM)

Applications feel that memory is infinite; the magic is done by the VMM. Multiple applications run concurrently without interfering with each other's data, and each application feels that the entire resource is its own. The VMM protects OS memory from applications. Advanced applications may need to share memory, and the VMM provides an easy memory-sharing mechanism.

4. VMM Continued

The VMM reserves a certain amount of memory for the kernel. On a 32-bit box, 2 GB is for the kernel and 2 GB for user applications. A specific area of kernel memory, called hyperspace, is reserved for process-specific data such as the PDE and PTEs.

5. Segmentation and Paging

The x86 processor supports segmentation and paging. Paging can be enabled or disabled, but segmentation is enabled by default. Windows uses paging. Since segmentation cannot be disabled, Windows treats the entire memory as one segment (also called a flat segment).

6. Paging

The entire physical memory is divided into equal-size pages (4 KB on x86 platforms). These are called page frames, and the list describing them is the page frame database (PF DB). On x86, a PF DB entry contains a 20-bit physical page number (the remaining 12 bits address an offset within the page). The PF DB also contains flags such as "read/write underway" and "shared page".

7. VMM Continued

The upper 2 GB of kernel space is common to all processes. What does that mean? Half of the PDE is common to all processes! Experiment: examine the PDEs of two processes and verify that half of each is identical.

8. Physical-to-Virtual Address Translation

Address translation works in both directions: when a page frame is written to the page file, the VMM must update the corresponding PDE/PTE to record that the page is now on disk. Translation is done by the processor's Memory Management Unit (MMU), with help from the VMM: the VMM keeps the PDE/PTE information and passes it to the MMU during a process context switch.
The MMU translates virtual addresses to physical addresses.

9. Translation Lookaside Buffer (TLB)

Address translation is a costly operation, and it would happen every time virtual memory is touched. The TLB keeps a list of the most frequent address translations, tagged by process ID. The TLB is a generic OS concept; the implementation is architecture dependent. Before doing an address translation, the MMU searches the TLB for the page frame.

10. Address Translation

In an x86 32-bit address, the 10 most significant bits select an entry in the PDE; thus a process's page directory holds 1024 entries. The next 10 bits select the page frame's starting address in the PTE; thus each page table holds 1024 entries. The remaining 12 bits address a location within the page frame; thus the page size is 4 KB.

11. What Is a Zero Page?

Page frames are not tied to a specific application. If App1 writes sensitive data to page frame 1, and later the VMM pushes the page to the page file and attaches page frame 1 to App2, App2 could read the sensitive information. That is a big security flaw, so the VMM keeps a zero page list. Cleaning a page while freeing memory would be a performance problem; instead, the VMM has a dedicated thread that activates when the system is in a low-memory situation, picks page frames from the free list, cleans them, and pushes them to the zero page list. The VMM allocates memory from the zero page list.

12. Arbitrary Thread Context

The top layer of the driver stack gets the request (IRP) in the context of the requesting process. A middle or lower layer driver MAY get the request in any thread context (for example, at I/O completion), that is, in whatever thread happens to be running. The address in the IRP is valid only for the PDE/PTE of the original process context.

13. Arbitrary Thread Context Continued

How to solve the issue? Note that half of the PDE (the kernel area) is common to all processes. If the buffer is somehow mapped into kernel memory (the upper half of the PDE), it becomes accessible from every process.

14. Mapping a Buffer to Kernel Space

One option is to allocate kernel pool from the calling process context and copy the user buffer into that kernel space. The Memory Descriptor List (MDL) is the most commonly used mechanism to keep data accessible in kernel space.

15. Memory Descriptor List (MDL)
    //
    // I/O system definitions.
    //
    // Define a Memory Descriptor List (MDL)
    //
    // An MDL describes pages in a virtual buffer in terms of physical pages. The
    // pages associated with the buffer are described in an array that is allocated
    // just after the MDL header structure itself.
    //
    // One simply calculates the base of the array by adding one to the base
    // MDL pointer:
    //
    //     Pages = (PPFN_NUMBER) (Mdl + 1);
    //
    // Notice that while in the context of the subject thread, the base virtual
    // address of a buffer mapped by an MDL may be referenced using the following:
    //
    //     Mdl->StartVa | Mdl->ByteOffset
    //
    typedef struct _MDL {
        struct _MDL *Next;
        CSHORT Size;
        CSHORT MdlFlags;
        struct _EPROCESS *Process;
        PVOID MappedSystemVa;
        PVOID StartVa;
        ULONG ByteCount;
        ULONG ByteOffset;
    } MDL, *PMDL;

16. MDL Continued

    #define MmGetSystemAddressForMdlSafe(MDL, PRIORITY)                   \
        (((MDL)->MdlFlags & (MDL_MAPPED_TO_SYSTEM_VA |                    \
                             MDL_SOURCE_IS_NONPAGED_POOL)) ?              \
            ((MDL)->MappedSystemVa) :                                     \
            (MmMapLockedPagesSpecifyCache((MDL), KernelMode, MmCached,    \
                                          NULL, FALSE, (PRIORITY))))

    #define MmGetMdlVirtualAddress(Mdl)                                   \
        ((PVOID) ((PCHAR) ((Mdl)->StartVa) + (Mdl)->ByteOffset))

17. Standby List

To reclaim pages from a process, the VMM first moves them to the standby list and keeps them there for a predefined number of ticks. If the process references a page during that time, the VMM removes it from the standby list and gives it back to the process. After the timeout expires, the VMM frees the pages from the standby list. Pages on the standby list are not free, but they do not belong to a process either. The VMM keeps minimum and maximum values for the free and standby page counts; when a count goes outside those limits, the appropriate events are signaled and the lists are adjusted.

18. Miscellaneous VMM Terms

ZwAllocateVirtualMemory allocates process-specific memory in the lower 2 GB. Other terms: paged pool, non-paged pool, copy-on-write (COW).

19. Part 2: Cache Manager

20. Cache Manager Concepts

If disk heads ran at the speed of supersonic jets, the Cache Manager would not be required.
Disk access is the main bottleneck that limits system performance: CPUs and memory keep getting faster, but the disk is still in the stone age. Caching is a common concept in operating systems; the Unix flavor is called the buffer cache.

21. What the Cache Manager Does

It keeps a system-wide cache of data from frequently used secondary-storage blocks. It facilitates read-ahead and write-back to improve overall system performance. With write-back, the Cache Manager combines multiple write requests and issues a single write request to improve performance; there is a risk associated with write-back (buffered data can be lost if the system crashes before it is flushed).

22. How the Cache Manager Works

The Cache Manager implements caching using memory mapping; the concept is similar to an application using a memory-mapped file. In CreateFile(dwFlagsAndAttributes, ...), passing FILE_FLAG_NO_BUFFERING in dwFlagsAndAttributes means "I don't want the Cache Manager."

23. How the Cache Manager Works (continued)

The Cache Manager reserves an area in the upper 2 GB system space (on x86). The number of pages it reserves adjusts to the system's memory requirements: if the system runs many I/O-intensive tasks, it dynamically grows the cache; under low-memory conditions, it shrinks the buffer cache.

24. How a Cached Read Operation Works

(1) The application issues a cached read, which arrives at the file system in kernel space. (2) The file system gets the pages from the Cache Manager. (3) The VMM does the memory mapping. (4) If the data is not yet resident, a page fault occurs. (5) The blocks are fetched from disk through the disk stack (SCSI/Fibre Channel).

25. How a Cached Write Operation Works

(1) The application issues a cached write. (2) The file system copies the pages to the Cache Manager. (3) The VMM does the memory mapping. (4) The data is copied to VMM pages. (5) Later, the VMM's modified page writer thread writes the blocks to disk through the disk stack (SCSI/Fibre Channel).

26. Questions?