
Page 1

www.metron-athene.com

Taking a Trip Down “vSphere” Memory Lane

Jamie Baker Principal Consultant

[email protected]

Page 2

Agenda

•  Memory Management Concepts

•  Memory Reclamation / Overcommitment

•  Resource Pool – Limits and Enforcement

•  Performance Management Reporting

•  Troubleshooting and Best Practices

•  References


Page 3

Memory Management Concepts

•  Memory virtualization is the next critical component

•  Processes see virtual memory

•  Guest operating systems use page tables to map virtual memory addresses to physical memory addresses

•  The Memory Management Unit (MMU) translates virtual addresses to physical addresses, and the Translation Lookaside Buffer (TLB) cache helps the MMU speed up these translations

•  The page table is consulted when a TLB hit is not achievable (a TLB miss)

•  The TLB is updated with the virtual-to-physical address mapping once the page-table walk completes
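The translation path just described (TLB hit, otherwise a page-table walk followed by a TLB update) can be sketched as a toy model in Python. The class and data structures here are purely illustrative; real MMUs do all of this in hardware:

```python
# Toy model of virtual -> physical address translation with a TLB.
# Hypothetical names and structures, not a real MMU implementation.

PAGE_SIZE = 4096

class MMU:
    def __init__(self, page_table):
        self.page_table = page_table  # virtual page number -> physical frame
        self.tlb = {}                 # small cache of recent translations

    def translate(self, virtual_addr):
        vpn, offset = divmod(virtual_addr, PAGE_SIZE)
        if vpn in self.tlb:            # TLB hit: fast path
            frame = self.tlb[vpn]
        else:                          # TLB miss: walk the page table
            frame = self.page_table[vpn]
            self.tlb[vpn] = frame      # update the TLB after the walk
        return frame * PAGE_SIZE + offset

mmu = MMU(page_table={0: 7, 1: 3})
print(mmu.translate(4100))  # vpn 1, offset 4 -> frame 3 -> 3*4096 + 4 = 12292
```

A second call for the same page would hit the TLB and skip the page-table walk, which is the speed-up the slide refers to.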

3 24/07/2014

Page 4

MMU Virtualization

•  Hosting multiple virtual machines on a single host requires another level of virtualization: host physical memory

•  The Virtual Machine Monitor (VMM) maps "guest" physical addresses (PA) to host physical addresses (MA)

•  To support the guest operating system, the MMU must be virtualized using either:
–  a software technique: shadow page tables
–  a hardware technique: Intel EPT or AMD RVI

Page 5

Software MMU - Shadow Page Tables

•  Created for each primary (guest) page table

•  Combine two mappings, VA -> PA and PA -> MA, into a direct VA -> MA mapping

•  Accelerate memory access
–  The VMM points the hardware MMU directly at the shadow page tables
–  Memory access runs at native speed
–  Ensures a VM cannot access host physical memory that is not associated with it
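The two mappings can be thought of as function composition: the VMM folds the guest's VA -> PA table and its own PA -> MA table into a single VA -> MA shadow table that the hardware MMU can walk directly. A minimal sketch with hypothetical dict-based tables:

```python
def build_shadow_table(guest_table, vmm_table):
    """Compose VA -> PA (guest page table) with PA -> MA (VMM mapping)
    into a direct VA -> MA shadow table. Guest pages whose PA has no
    host backing are left unmapped, so the VM cannot reach host
    physical memory that is not associated with it."""
    return {va: vmm_table[pa]
            for va, pa in guest_table.items()
            if pa in vmm_table}

guest = {0x1000: 0x5000, 0x2000: 0x6000}   # guest page table: VA -> PA
host  = {0x5000: 0x9000}                   # VMM mapping: PA -> MA
shadow = build_shadow_table(guest, host)
print(shadow)  # {4096: 36864} -> VA 0x1000 maps straight to MA 0x9000
```

This is also why the VMM must resynchronize shadow tables whenever the guest edits its own page tables, the overhead that hardware-assisted MMU virtualization (next slide) removes.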

Page 6

Hardware MMU Virtualization

•  AMD RVI and Intel EPT permit two levels of address mapping:
–  Guest page tables (VA -> PA)
–  Nested page tables (PA -> MA)

•  When a virtual address is accessed, the hardware walks both the guest page tables and the nested page tables

•  Eliminates the need for the VMM to synchronize shadow page tables with guest page tables

•  Can affect the performance of applications that stress the TLB:
–  Increases the cost of a page walk on a TLB miss
–  Can be mitigated by using large pages

Page 7

Memory Virtualization Overhead

•  Software MMU virtualization incurs CPU overhead:
–  When new processes are created (new address spaces are created)
–  When context switching occurs (address spaces are switched)
–  When running large numbers of processes (shadow page tables need updating)
–  When allocating or deallocating pages

•  Hardware MMU virtualization incurs CPU overhead:
–  When there is a TLB miss
–  Overall, a performance win over shadow page tables

Page 8

Memory Reclamation Challenges

•  VM physical memory is not truly "freed"
–  When the guest releases memory, pages are simply moved to the guest OS "free" list

•  The hypervisor is not aware when the VM releases memory
–  It has no access to the VM's "free" list
–  The VM can therefore accrue a lot of host physical memory

•  As a result, the hypervisor cannot directly reclaim memory the VM has released

Page 9

VM Memory Reclamation Techniques

•  The hypervisor relies on the following techniques to free up host physical memory

•  Transparent page sharing (default)
–  Redundant copies of identical pages are reclaimed

•  Ballooning
–  Forces the guest OS to free up guest physical memory when host physical memory is low
–  Balloon driver installed with VMware Tools

•  Memory compression
–  Reduces the number of memory pages the host needs to swap out
–  Decompression latency is much smaller than swap-in latency
–  Compressing memory pages has a significantly smaller performance impact than swapping

•  Swap to host cache
–  Allows users to configure a special swap cache on SSD storage
–  Much faster access than the regular host-level swap area, significantly reducing access latency

•  Host-level (hypervisor) swapping
–  Used when TPS and ballooning are not enough
–  Swaps guest physical memory out to the VM's swap file
–  Might severely penalize guest performance

Page 10

Memory Management Reporting

[Chart: Production Cluster, VIXEN (ESX) host – average swap space in use (MB), average memory used by memory control (MB), and average memory shared across VMs (MB)]

Page 11

Why does the Hypervisor Reclaim Memory?

•  Hypervisor reclaims memory to support memory overcommitment

•  ESX host memory is overcommitted when the total amount of VM physical memory exceeds the total amount of host physical memory

Page 12

When to Reclaim Host Memory

•  ESX/ESXi maintains four host free-memory states with associated thresholds:
–  High (6%), Soft (4%), Hard (2%), Low (1%)

•  As host free memory drops through these thresholds, the following reclamation technique is used:

State:      High   Soft        Hard                     Low
Technique:  None   Ballooning  Swapping and Ballooning  Swapping
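The threshold table above can be expressed as a simple lookup. This is a sketch of the slide's table only (the function name is illustrative, and the percentages are the ones stated on the slide):

```python
def reclamation_action(free_pct):
    """Map host free-memory percentage to the free-memory state and
    the reclamation technique used in that state, per the table above."""
    if free_pct >= 6:
        return "High", "None"
    elif free_pct >= 4:
        return "Soft", "Ballooning"
    elif free_pct >= 2:
        return "Hard", "Swapping and Ballooning"
    else:
        return "Low", "Swapping"

print(reclamation_action(5))    # ('Soft', 'Ballooning')
print(reclamation_action(1.5))  # ('Low', 'Swapping')
```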

Page 13

vSwp file usage and placement guidelines

•  Used when memory is overcommitted

•  vSwp file is created for every VM

•  Default placement is with VM files

•  Can affect vMotion performance if vSwp file is not located on Shared Storage


Page 14

VMkernel Swap

[Chart: VM memory split between Reservation (MB), Balloon, and Swap File, scaled from 0% to 100%]

Example:
•  Assume maximum memory contention
•  By default, up to 65% of the VM's memory can be reclaimed by the balloon driver
•  The example reservation is 30%
•  The remaining 5% ends up in the VMkernel swap (.vSwp) file
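The example's arithmetic can be computed directly. This is only a sketch of the worst-case split described on the slide; the function name is hypothetical and the 65% balloon ceiling is the default the slide mentions:

```python
def worst_case_split(vm_memory_mb, reservation_pct, balloon_pct=65):
    """Worst-case placement of VM memory under maximum contention:
    up to balloon_pct can be reclaimed by the balloon driver, the
    reservation stays in host RAM, and the remainder can end up in
    the VMkernel swap (.vswp) file."""
    balloon = vm_memory_mb * balloon_pct / 100
    reserved = vm_memory_mb * reservation_pct / 100
    vswp = vm_memory_mb - balloon - reserved
    return {"balloon_mb": balloon, "reserved_mb": reserved, "vswp_mb": vswp}

split = worst_case_split(4096, reservation_pct=30)
print({k: round(v, 1) for k, v in split.items()})
# {'balloon_mb': 2662.4, 'reserved_mb': 1228.8, 'vswp_mb': 204.8}
```

For a 4 GB VM with a 30% reservation, only about 205 MB is exposed to hypervisor swapping in this worst case, which is why reservations and a working balloon driver matter.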

Page 15

Resource Pool – Memory Reporting

[Chart: ESX Host (Vixen), Priority Guests resource pool, 05/17/2010 – memory limit vs. memory used per guest; series: pool limit, guest memory usage, pool memory usage]

Page 16

Use of Limits

[Figure: limit configuration for the Web 2 and Web 3 VMs]

Page 17

Enforcing Limits – Web2 VM


Page 18

Enforcing Limits – Web3 VM


Page 19

ESX Host (Web2) – VM Active Memory


Page 20

ESX Host (Web 3) – VM Active Memory

Additional VM hosted

Awacs-web3 (Yell VM Data)

Page 21

Limits are enforced!

awacs-web3

awacs-web2

Page 22

Memory Limits – A guide

•  Granted memory can be overruled by a resource pool limit

•  The hypervisor enforces limits by reclaiming memory from the VM

•  Be aware of any limits, whether set on the resource pool or on the VM

•  Monitor your VM active memory and host consumed memory

•  Prefer reducing granted memory over enforcing limits

•  Use reservations where necessary

Page 23

Monitoring VM and Host Memory Usage

•  Active
–  Amount of host physical memory currently being actively used by the guest
–  Displayed as "Guest Memory Usage" in vCenter at the guest level

•  Consumed
–  Amount of physical ESX memory allocated (granted) to the guest, accounting for savings from memory sharing with other guests
–  Includes memory used by the Service Console and VMkernel
–  Displayed as "Memory Usage" in vCenter at the host level
–  Displayed as "Host Memory Usage" in vCenter at the guest level

•  If consumed host memory > active memory:
–  Host physical memory is not overcommitted
–  Active guest usage is low but a large amount of host physical memory is assigned
–  Perfectly normal

•  If consumed host memory <= active memory:
–  Active guest memory might not completely reside in host physical memory
–  This might point to potential performance degradation
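The consumed-vs-active rules above can be captured in a small helper. The function name is illustrative; the two metrics correspond to the vCenter counters described on the slide:

```python
def interpret_memory(consumed_mb, active_mb):
    """Apply the consumed-vs-active interpretation rules from the slide."""
    if consumed_mb > active_mb:
        return ("Normal: host memory is not overcommitted; active guest "
                "usage is low relative to the host memory assigned")
    return ("Warning: active guest memory might not completely reside in "
            "host physical memory - potential performance degradation")

print(interpret_memory(consumed_mb=3000, active_mb=800))   # normal case
print(interpret_memory(consumed_mb=800, active_mb=1200))   # warning case
```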

Page 24

Active and Consumed - Report

[Chart: ORMNVAT01 VM, 22/11/2011 – host memory consumed by the VM (MB), Windows used memory (MB), active memory (MB), and total physical memory (MB)]

Page 25

Memory Troubleshooting

1.  Active host-level swapping
–  Cause: excessive memory overcommitment
–  Resolution:
•  Reduce memory overcommitment (add physical memory or reduce the number of VMs)
•  Enable the balloon driver in all VMs
•  Reduce memory reservations and use shares instead

2.  Guest operating system paging
–  Monitor the host's ballooning activity
–  If host ballooning > 0, look at per-VM ballooning activity
–  If VM ballooning > 0, check for high paging activity within the guest OS

3.  Swapping occurring before ballooning
–  Can happen when many VMs are powered on at the same time:
•  The VMs might access a large portion of their allocated memory
•  At that point, the balloon drivers have not started yet
•  This causes the host to swap VM memory
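The first two troubleshooting checks above can be sketched as a triage routine. The metric names here are hypothetical inputs, not a vSphere API; in practice these values come from the counters discussed on the earlier reporting slides:

```python
def triage(host_swap_mb, host_balloon_mb, vm_balloon_mb, guest_paging_high):
    """Rough triage following the troubleshooting checklist above
    (hypothetical metric names, illustrative only)."""
    findings = []
    if host_swap_mb > 0:
        findings.append("Active host-level swapping: reduce overcommitment, "
                        "enable balloon drivers, prefer shares over reservations")
    if host_balloon_mb > 0 and vm_balloon_mb > 0 and guest_paging_high:
        findings.append("Guest OS paging while ballooned: check paging "
                        "activity inside the guest")
    return findings or ["No memory pressure detected"]

print(triage(host_swap_mb=120, host_balloon_mb=0,
             vm_balloon_mb=0, guest_paging_high=False))
```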

Page 26

Memory Performance Best Practices

•  Allocate enough memory to hold the working set of applications running in the virtual machine, thus minimizing swapping

•  Never disable the balloon driver

•  Keep transparent page sharing enabled

•  Avoid overcommitting memory to the point that it results in heavy memory reclamation


Page 27

References

•  http://www.vmware.com/files/pdf/vsphere_pricing.pdf

•  http://www.vmware.com/technical-resources/performance/resources.html

•  http://www.metron-athene.com/training/webinars/index.html


Page 28

Taking a Trip Down “vSphere” Memory Lane

Jamie Baker Principal Consultant

[email protected]