
2012 Redmond Guide to Microsoft Hyper-V

By Paul Schnackenburg

NEW White Paper


About the Author

Paul Schnackenburg, MCSE, MCT, MCTS and MCITP, started in IT in the days of DOS and 286 computers. He runs IT consultancy Expert IT Solutions, which is focused on Windows, Hyper-V and Exchange Server solutions.

Part 1: Processors

Part 2: Memory, Storage, Networking

Part 3: 8 Tips and Tricks

Part 4: Monitoring Hyper-V The Right Way


Hyper-V on Hyper-Drive Part 1: Processors

In this first of a series on Hyper-V, Paul reviews tips for configuring virtual and physical processors for optimum performance.

Picking the right hardware for your hosts and network in a new Microsoft Hyper-V implementation can be tricky, not to mention the task of measuring and monitoring performance when in production. In this series of articles, I'll look at the different components that make up a balanced underlying hardware fabric for Hyper-V, starting with processor allocation and continuing on to look at the memory, storage and network subsystems.

From there we'll delve into performance tips and tricks, choosing the right flavor of Hyper-V and common configuration gotchas, and finish off with performance monitoring of VMs and how it differs from monitoring in the physical world.

Note: All recommendations apply to Hyper-V in Windows Server 2008 R2 with Service Pack 1. The new version of Hyper-V in the upcoming Windows Server 8 changes the game considerably as far as scalability limits are concerned, but that's a topic for another series of articles. The advice I offer here applies only to the most recent Windows version available as of this writing.

Virtual & Logical Processors

There's often a misconception among the IT admins I talk to about what virtual processors (VPs) and logical processors (LPs) are, and how they affect the maximum number of VMs on a given physical host. This is directly related to the amount of physical memory in each host (which we'll cover next time) as well as the number of processors assigned to VMs.

An LP is a core in a multi-core processor, so a quad-core CPU has four LPs. If that quad-core CPU has Hyper-Threading (HT), it will appear as eight cores, which means your system has eight LPs. While this is how Microsoft's documentation talks about LPs, be aware that HT doesn't magically double the processor capacity. To be on the safe side, just count cores as LPs and don't double the number when HT is turned on.

VPs are what you assign to individual VMs and how many you can assign is dictated by the guest/VM operating system. Newer is more capable in this case, so Windows 2008/2008 R2 can work with four VPs, whereas Windows Server 2003 can only be assigned one or two VPs. SuSE Linux Enterprise, CentOS and Red Hat Enterprise Linux (all supported versions of these OSs) can be assigned up to four VPs. If you're running client operating systems in a VDI infrastructure, Windows 7 can work with up to four VPs, Vista can see two and Windows XP SP3 can see two VPs. More detailed information is available here.

Just because you can assign two or four VPs to a particular VM doesn't mean you should. First of all, there's some overhead in any multiprocessor system, whether physical or virtual, because of cross-processor communication. But the penalty is smaller in newer OSes, so Windows 2008 R2 VMs will be fine with four VPs, whereas Windows Server 2003 might require some testing to see if there are benefits with two VPs in your particular situation. Secondly, it all depends on the workload: some applications are heavily multithreaded (think SQL Server and the like) and will thrive on several VPs, whereas single-threaded applications or those with only a few threads won't benefit much.
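On Windows Server 2008 R2 SP1 you set the VP count on the Processor page of a VM's settings (see Fig. 2). As a minimal sketch, on hosts running a later version of Hyper-V where the Hyper-V PowerShell module is available (the module shipped with Windows Server 2012, not 2008 R2), the same change can be scripted; the VM name below is purely illustrative:

# Sketch only: assumes the Hyper-V PowerShell module (Windows Server 2012 and
# later) and a VM named 'SQL01' (hypothetical). Run on the Hyper-V host.
Set-VMProcessor -VMName 'SQL01' -Count 4

# Confirm the assignment.
Get-VMProcessor -VMName 'SQL01' | Select-Object VMName, Count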

Another common misconception is that assigning one or more VPs to a VM has a correlation to physical cores. Think of it more like giving a VM a chunk of scheduled CPU time, with the hypervisor actually spreading the load of running VMs across all available CPU cores.

The number of VPs assigned to the VMs on a particular host ties in with Microsoft's recommendation to have no more than four VPs per LP in a system, with a supported maximum of eight VPs per LP. The exception: If you have all Windows 7 VMs in a VDI scenario, the maximum supported ratio is 12 VPs per LP.

If you have a Hyper-V host with two quad-core CPUs (eight LPs), you can comfortably run eight VMs, each with four VPs (32 VPs total), up to a supported maximum of 16 such VMs (64 VPs total). If you only assigned two VPs to each VM, you could double those numbers in this artificial example where every VM is identical. In the real world, of course, the number of VPs will vary between VMs based on the workload inside each one.

To check the ratio on your hosts you could manually look at each VM that’s running and add up the total number of assigned VPs, which isn’t very efficient. A better way is to run this simple PowerShell cmdlet which will give you the answer:

write-host (@(gwmi -ns root\virtualization MSVM_Processor).count / (@(gwmi Win32_Processor) | measure -p NumberOfLogicalProcessors -sum).Sum) "virtual processor(s) per logical processor" -f yellow

Figure 1. Easily pinpoint the VP to LP ratio on your Hyper-V hosts with this simple cmdlet.

Thanks to Ben Armstrong, Virtualization Program Manager at Microsoft, for this one-liner.

Fig. 1 shows the value on my quad core laptop with HT enabled (=8 LPs), with four VMs running, each with four VPs.

It’s important to have an understanding of the workloads and applications you’re going to be running in each VM: Are they CPU bound or memory bound? Do they benefit from multi-threading and, thus, from additional VPs?

Make sure the processors you're investing in support Second Level Address Translation (SLAT), which Intel calls Extended Page Tables (EPT) and AMD calls Rapid Virtualization Indexing (RVI; earlier AMD documentation called this Nested Page Tables, or NPT). On older processors that don't support SLAT, each VM will occupy an extra 10 to 30 MB of memory and processor utilization will increase by 10 percent or more.

Depending on your workload, SLAT can bring tremendous benefits. If you're virtualizing Remote Desktop Services you might see up to 40 percent more sessions with SLAT processors. Processors with large L2 and L3 cache will also help workloads with large memory requirements.

Finally, if CPU resources on a host are limited, you can alter the balance between VMs. The VM reserve setting guarantees that a set percentage of CPU capacity will always be available to the VM (but might limit the total number of VMs that can run on the host), while the VM limit setting controls how much of its assigned processor capacity the VM can use. The relative weight balances this VM against other running VMs; a lower value means it will receive fewer resources in times of contention. Microsoft's recommendation is to leave these settings alone unless there's a compelling reason to change them.
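For reference, here's a minimal sketch of the same three settings applied from PowerShell, assuming the Hyper-V module from Windows Server 2012 or later (on 2008 R2 SP1 they live on the Processor page of the VM's settings dialog); the VM name and values are illustrative only:

# Sketch only: VM name and percentages are illustrative. The defaults are
# Reserve 0, Maximum 100 and RelativeWeight 100.
Set-VMProcessor -VMName 'Web01' -Reserve 10 -Maximum 75 -RelativeWeight 200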

There are also processor compatibility settings that let you move VMs between hosts that have different generations of processors as well as letting you run ancient OSs such as Windows NT.

Next time, we’ll look at networking, memory and storage planning considerations for a Hyper-V deployment.

Figure 2. Assigning Virtual Processors to a VM is easy; just pick from the list.

Hyper-V on Hyper-Drive Part 2: Memory, Storage, Networking

Now that we know how Hyper-V can take advantage of processors, it's a good time to look at how we can take advantage of memory, networking and disk resources without breaking the budget.

Last time, we looked at processors and the balance of virtual and logical processors in Hyper-V hosts. Now, let’s look at how to choose a good mix of memory, networking and disk resources to match your budget.

Memory

Before Service Pack 1 was released for Windows Server 2008 R2, assigning memory to VMs and architecting host machines was difficult because only a fixed amount could be assigned to each VM, whether it needed it or not. Because Dynamic Memory is such a game changer in the Hyper-V world, make sure all your hosts are running Windows 2008 R2 SP1.

It's also important to understand that Dynamic Memory in Hyper-V differs from ballooning in vSphere/ESXi because the hypervisor communicates with the VM to find out its memory needs. This allows intelligent choices to be made about allocating more or less memory to a VM, but it does require guest OSs that can "talk" to the hypervisor. Hence Dynamic Memory is only supported with VMs running Windows Server 2008 SP2/2008 R2 SP1, 2003 R2 SP2 and 2003 SP2, along with Windows 7 and Vista SP1. More information on supported platforms can be found here.

Some applications (Exchange and SQL Server are examples) do their own memory management, and Microsoft strongly recommends not using Dynamic Memory for VMs running these workloads. With Dynamic Memory, each VM is allocated a certain amount of start-up memory. Microsoft recommends 512 MB for Windows Server 2008/2008 R2, Vista and Windows 7, while Windows 2003 and XP should be assigned 128 MB.
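As a sketch of what those values look like when scripted, assuming the Hyper-V PowerShell module from Windows Server 2012 or later (on 2008 R2 SP1 you'd set the same values on the VM's Memory settings page); the VM name and maximum are illustrative:

# Sketch only: enable Dynamic Memory with the 512 MB start-up value
# recommended above for a Windows Server 2008 R2 guest. The VM name,
# maximum and buffer are illustrative.
Set-VMMemory -VMName 'App01' -DynamicMemoryEnabled $true -StartupBytes 512MB -MaximumBytes 8GB -Buffer 20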

The host or parent partition has a default memory reserve. You can alter this by creating a registry value at HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Memory Reserve. The value is a REG_DWORD and you have to create it yourself: the default decimal value is 32 (for 32 MB) and the maximum is 1024. If you follow best practices and run minimal services in the host, the default should work fine. Microsoft only supports backup, management and anti-malware agents in the parent partition.
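If you do decide to raise the reserve, the value can be created from an elevated PowerShell prompt on the host; this is only a sketch, using the value name exactly as given above and an illustrative size of 64 MB:

# Sketch only: creates the memory reserve value described above (decimal, MB).
# The value name follows the article's wording; 64 is an illustrative size.
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization' -Name 'Memory Reserve' -PropertyType DWord -Value 64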

Bottom line: For efficient use of your hardware and to get good performance out of your VMs ensure that the total memory needed by your VMs stays lower than the overall amount of RAM in your hosts.


Figure 1. Assigning the right amount of memory to running virtual servers is a whole lot easier with Dynamic Memory.


Storage

Storage is always a tricky part of server design, and no less so in the virtual world. Ensuring that applications and VMs can access the required amount of IOPS (input/output operations per second) is crucial, and virtualization has made it even harder. In the old world we could assess this need on a per-server basis, but now we may have many servers with different IOPS profiles running on the same physical host.

Some applications have specific storage enhancements (Exchange Server 2010, for instance, has several tricks to optimize the performance of the underlying disk subsystem for sequential IO, as does SQL Server). All of these optimizations are lost when you move to virtualized disks, as the "disk" the VM sees is actually just a large file on a drive or a SAN. There are a few ways to compensate for this. One is to use pass-through disks (raw disks in the VMware world), where the VM has full access to a physical disk; the drawback is that there's no way of backing up the disk from outside the VM. The other option is to decrease the latency and increase the speed of the disks, which generally means a more expensive SAN with more spindles and/or SSDs. The latter have excellent performance for random read IO, making them eminently suited to storing VHD files, but of course their cost per gigabyte is high.

There are two types of VHD disks that you can attach to a VM: fixed size or dynamically expanding. The former means that a 100 GB VHD is created as a 100 GB file up front; the latter starts off as a small file (while still appearing as a 100 GB drive to the VM) but grows as data is added. The benefit of the latter is better utilization of your storage hardware, as only the storage actually used is consumed, but you have to be careful that you don't oversubscribe the underlying storage and run out of space as virtual disks grow. The golden rule used to be that fixed disks gave better IO performance, but the gap is closing and in Hyper-V 2008 R2 the difference is minimal. For a more in-depth exploration, see this white paper from Microsoft; the relevant section starts at page 25. Be aware that some workloads, such as Exchange, aren't supported on dynamically expanding disks.
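As a sketch of the difference, here's how the two disk types are created with the Hyper-V PowerShell module from Windows Server 2012 or later (in 2008 R2 you'd use the New Virtual Hard Disk wizard in Hyper-V Manager); paths and sizes are illustrative:

# Sketch only: paths and sizes are illustrative.
New-VHD -Path 'D:\VHDs\FileServer01.vhd' -SizeBytes 100GB -Fixed     # full 100 GB allocated up front
New-VHD -Path 'D:\VHDs\TestLab01.vhd' -SizeBytes 100GB -Dynamic      # grows as data is written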

Networking

To achieve a well-performing Hyper-V platform, don't forget the networking subsystem. If you have five, ten or more VMs on a host, don't expect them to fit all their connectivity needs through a couple of Gigabit NICs. As always, it pays to know your workloads. If you're going to virtualize busy file servers, make sure to allow enough virtual network cards for the task. Hyper-V supports up to eight synthetic network interfaces in each VM (along with four emulated NICs, but these are not recommended for performance). 10 Gigabit Ethernet is starting to become affordable and is a great way of increasing bandwidth.

NIC teaming is another area where it pays to tread carefully. Officially, Microsoft doesn't support NIC teaming, but some vendors/OEMs do. Check with the manufacturer of your NIC to find out whether they support it.

If you're going to use iSCSI for your storage, make sure to allow network cards for this connectivity as well; use Jumbo Frames and disable File Sharing and DNS services on these NICs. Ensure that your NICs support performance-enhancing features available in Windows Server 2008 R2 such as TCP Chimney Offload and Virtual Machine Queues (VMQ). When using TCP Chimney Offload, you have to enable it both in the OS and in the properties of the driver for each NIC.
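The OS half of that change can be checked and made from an elevated prompt on the host; a minimal sketch (the per-NIC half, like Jumbo Frames and VMQ, is enabled in the driver's Advanced properties):

# Sketch only: inspect the global TCP settings, then enable TCP Chimney
# Offload in the OS. The matching per-NIC setting still has to be enabled
# in the NIC driver's Advanced properties.
netsh int tcp show global
netsh int tcp set global chimney=enabled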

Next time, we’ll look at some tricks for improving the performance of your VMs.

Figure 2. Creating virtual networks is easy in Hyper-V, but we’ll have to wait for Hyper-V in Server 8 for true virtual network switch functionality.


Hyper-V on Hyper-Drive Part 3: 8 Tips and Tricks

Some well-known, and some more obscure, tips and tricks for enhancing Hyper-V.

In the last two installments in this series we looked at selecting CPU, disk and networking hardware for Hyper-V, as well as how to configure the environment for performance. This time, let's look at some well-known and some more obscure tips and tricks for enhancing Hyper-V.

Integration components

First and foremost, make sure that the latest version of the Integration Components (IC) is loaded in every VM; System Center Virtual Machine Manager will warn you when they're out of date in a VM. This is the most important step for improving VM performance. If you're unsure whether they're installed, simply check under System Devices in Device Manager in the VM. The presence of the Virtual Machine Bus device indicates that the IC are installed.
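If you'd rather check from a prompt inside the guest, a quick, purely illustrative way is to look for the Virtual Machine Bus device via WMI (this only confirms the device is present, not that the IC version is current):

# Sketch only: run inside the VM. A result means the VMBus device (and thus
# the Integration Components) is present; it says nothing about the version.
Get-WmiObject Win32_PnPEntity | Where-Object { $_.Name -like '*Virtual Machine Bus*' } | Select-Object Name, Status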

Hyper-V Manager

If you're running the full GUI version of Windows on the host, close Hyper-V Manager (see Fig. 1) when it's not being used, as the thumbnails of VM screens cost resources in both host and guest, while the monitoring of performance statistics causes WMI activity in the parent partition. Another tip is to use Remote Desktop sessions to connect to VMs instead of Virtual Machine Connection (which is what's used when you double-click a VM in the manager), as this taxes the VM's resources less.

Figure 1. Don't leave Hyper-V Manager running when you're not using it.

Host OS

In a lab environment, running Hyper-V on the full GUI version of Windows Server 2008 R2 SP1 certainly simplifies configuration and management and is an acceptable trade-off. In production, however, Server Core or the free Hyper-V Server are better choices, as they come with less overhead (about 80 MB less commit charge). They also come with the "Novell benefit": it's far less likely that someone will muck around with them, as they're command-line only.

Guest OS

The rule here is simple: The newer the OS, the happier it is to be virtualized. So Windows Server 2008 R2 SP1 and Windows 7 are your best candidates. This is true even if you're trying to "squeeze" an extra VM or two onto a crowded host. In the physical world we generally look to older OSs as requiring fewer resources, but in the virtual world the opposite applies.

Services

Limit the services that are running in the parent partition, not just to give as much of the host's resources as possible to your VMs but also to maintain a supported configuration. Microsoft's words are clear on this point: You can run only management, backup and, if absolutely necessary, anti-malware agents in the parent partition, and nothing else.

Snapshots

A common misconception is that Hyper-V snapshots are some form of backup; they're definitely not. They're simply a way of capturing the state of a VM at a given point in time, allowing you to return the guest OS to that point with a single click. While this is very handy in a lab environment or for developers, many workloads (AD, SQL Server and Exchange Server, for example) don't support snapshots. Also, a snapshot makes the original VHD read-only and subsequent changes to the disk are saved in an .avhd file, which impacts performance.

Background CPU activity

To stop wasted background processor activity in VMs, remove unused devices (such as COM ports), disable the screen saver and leave VMs at the logon screen when you're not actively managing them. If your VMs are client OSs, disable SuperFetch, Windows Search and the default scheduled defrag job. When you can, remove the virtual DVD drive, as the VM checks every second to see if there's media in the "drive." Setting it to "no media" isn't enough; you actually have to remove it under the VM settings.
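For client-OS guests, those three items can be turned off from an elevated prompt inside the VM; a sketch, assuming the usual service names (SysMain for SuperFetch, WSearch for Windows Search) and the default Windows 7 defrag task path:

# Sketch only: service names (SysMain, WSearch) and the defrag task path are
# assumptions based on default Windows 7 installs. Run inside the guest.
Stop-Service -Name SysMain, WSearch -Force
Set-Service -Name SysMain -StartupType Disabled
Set-Service -Name WSearch -StartupType Disabled

# Disable the default scheduled defrag job.
schtasks /Change /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /Disable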

Network configuration

VMs can use either synthetic virtual NICs or legacy network adapters. The latter are required if you need your VMs to be able to PXE boot or if the guest OS doesn't support the Integration Components, but in all other cases make sure your VMs are using only synthetic NICs because of their better performance. When creating virtual network names on hosts in a cluster, make sure they're identical on all hosts, as Live Migration/Quick Migration won't work otherwise.

On the host side, depending on the size of your cluster, dedicate at least one Gigabit NIC to Live Migration. You'll need additional NICs for cluster heartbeat, management and backup. If you're using iSCSI, you'll also need at least two NICs dedicated to storage access, preferably connected to two physical switches, and use Multipath I/O (MPIO) to provide redundant paths to the storage.

Other Common Gotchas

Microsoft has a list of other common Hyper-V configuration mistakes people make; read it here.

In the final part of this series, we’ll look at how to monitor Hyper-V and VM performance.

Hyper-V on Hyper-Drive Part 4: Monitoring Hyper-V The Right Way

It's time to put what you've learned into practice and then make sure that Hyper-V is running at hyperspeed.

Now that we have the basics of Microsoft Hyper-V down, let's look at how to monitor the performance of VMs and how your skills in this area translate from the physical to the virtual world. There's sometimes a sense among IT pros (or their bosses) that performance monitoring isn't necessary any longer ("just beef up the VM with more resources"), but I think it's vital to understand what's going on under the covers, because that's essential knowledge when things go wrong. It also pays to be able to determine which resource (processor, disk, network or memory) is under pressure.

Task Manager Inside a VM Is a Liar

First of all, don't ever rely on Task Manager (or even Sysinternals' Process Monitor) inside a VM to inspect performance. A VM sees only its own view of the world with regard to memory and processor usage, and it's a very false view indeed. To give you an example, imagine a single VM with four VPs on a host with a quad-core CPU, running an application that consumes all available CPU resources. This VM will get most of the processor performance of the host (with a little reserved for the parent partition), and Task Manager in the VM will report 100-percent CPU usage across its four virtual cores.

If you now start another VM and run the same application inside it, it will also report 100-percent CPU across its processors, but each of those applications will actually be working at half the speed compared to a single VM running. The same goes for memory monitoring: with dynamic memory in play, Task Manager can't give an accurate portrayal of memory usage.

Baselines, Baselines, Baselines!

The second step to take for all performance monitoring, in both the virtual and the physical world, is to establish baselines when everything is hunky-dory. If you have measurements of the main components of a VM at a time when users are happy, it's much easier, by simple comparison, to spot the problem area when users or your monitoring software flag a problem.

Your friend here is Performance Monitor, which is present on every Windows system. Learn how to use Data Collector Sets to log counters over time.
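Data Collector Sets can also be created from the command line; here's a minimal sketch using logman, where the set name, 15-second interval, output path and counter list are all just examples:

# Sketch only: set name, interval, output path and counters are illustrative.
# Creates a circular binary log capped at 512 MB, then starts the collector.
logman create counter HyperV-Baseline -si 00:00:15 -f bincirc -max 512 -o C:\PerfLogs\HyperV-Baseline -c "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" "\Memory\Available MBytes" "\LogicalDisk(*)\Avg. Disk sec/Read"
logman start HyperV-Baseline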


Figure 1. Really spend some time with Performance Monitor, getting to know the different objects and counters and how your fabric and VMs are actually performing.


Hyper-V Counters Are Your Friends

Even Task Manager and Performance Monitor in the host can be confused and lie about virtualization performance if you use the normal counters. Fortunately, there are Hyper-V-specific counters that don't lie (see Fig. 1). For processors, use the Hyper-V Hypervisor Logical Processor\% Total Run Time counter, which monitors the load on the physical cores in your processors. To look in on CPU performance inside a VM, use Hyper-V Hypervisor Virtual Processor\% Guest Run Time; this lets you monitor the VPs of each VM or the total across all running VMs. For the latter, a rule of thumb is that less than 75 percent across all VMs is a healthy load on the host overall; over 75 percent is a warning; and more than 85 percent needs to be looked into.
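Both counters can be sampled quickly with Get-Counter, which ships with PowerShell 2.0 on Windows Server 2008 R2; the interval and sample count below are only examples:

# Sketch only: sample the host-wide CPU load (12 samples, 5 seconds apart),
# then take a one-off look at guest run time per virtual processor.
Get-Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time' -SampleInterval 5 -MaxSamples 12
Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time'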

Committed Memory

While dynamic memory makes memory management a bit fluid, do keep an eye on \Memory\Available MBytes for the host. As long as there's 10 percent free you should be fine, but when it goes under 10 percent free it's a warning. At less than 100 MB it's definitely time to investigate.

Dynamic memory also brings a new set of counters. The most important is the \Hyper-V Dynamic Memory Balancer\Average Pressure counter, where healthy is less than 80. A value between 80 and 100 deserves attention, while over 100 indicates a critical condition.

Having a Bit (of Memory) Up Your Sleeve

Dynamic memory lets you set an initial value (the VM will never have less than this amount of memory) and you can let it scale to use whatever memory it needs. Even then, you always want to have a bit of an extra buffer. That's what the Memory Buffer setting for dynamic memory is about, with the default at 20 percent. If you have workloads that utilize a lot of file cache (file servers, for example), it can help performance if you increase the buffer size (see Fig. 2).

Disk Latency

Keep an eye on your disks with the \LogicalDisk(*)\Avg. Disk sec/Read and \LogicalDisk(*)\Avg. Disk sec/Write counters, which indicate disk latency. A good rule of thumb is that less than 10ms (0.010) is OK; 15ms or above (0.015) is a warning; and 25ms or above (0.025) is critical.

Network monitoring

To monitor network usage, the \Network Interface(*)\Output Queue Length counter is your friend. Less than 1 on average is healthy; above 1 on average is a warning; 2 or more on average is critical.
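To check all of these thresholds in one pass, the counters above can be sampled together; a sketch only, with the rules of thumb repeated as comments:

# Sketch only: sample the memory, dynamic memory, disk and network counters
# discussed above in one pass. Interval and sample count are illustrative.
Get-Counter -Counter @(
    '\Memory\Available MBytes',                             # warning when under 10 percent free
    '\Hyper-V Dynamic Memory Balancer(*)\Average Pressure', # healthy under 80
    '\LogicalDisk(*)\Avg. Disk sec/Read',                   # healthy under 0.010 (10ms)
    '\LogicalDisk(*)\Avg. Disk sec/Write',
    '\Network Interface(*)\Output Queue Length'             # healthy under 1 on average
) -SampleInterval 15 -MaxSamples 4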

Conclusion

We've covered hardware considerations for Hyper-V, performance optimizations, tips and gotchas to avoid, and how to monitor the performance of your VMs. The key now is to inspect your current environment, implement the hints and tips from this series that make sense for you, and measure performance before and after each step. Good luck with squeezing hyperspeed out of your Hyper-V environment.

Hyper-V Performance Tuning Resources

Download Microsoft's performance tuning white paper for Windows Server 2008 R2, covering both the physical and virtual worlds, here.

For a variety of information on performance tasks, look here.

The Performance Analysis of Logs (PAL) tool is a PowerShell-based script that can help you decide which performance counters to collect and how to analyze them.

For disk performance tasks, have a look at Iometer.

Detailed technical information on how to measure performance in Hyper-V can be found here.

Figure 2. Set your buffer based on the file cache you expect the server to use.


About AppAssure Software

AppAssure is the #1 unified backup and replication software for virtual, physical and cloud environments. This multiple award-winning and customer-proven software recovers virtual and physical servers, applications and data in minutes instead of hours or days. AppAssure's innovative and groundbreaking technologies assure 100% reliability of recovery and go beyond just protecting data to protecting entire applications. It also supports multi-hypervisor environments including VMware vSphere/ESXi, Microsoft Hyper-V and Citrix XenServer. AppAssure is an Elite VMware Technology Alliance Partner and a Microsoft Gold Certified Partner. With more than 6,000 customers, partners and service providers in over 50 countries and over 3,000% growth in three years, AppAssure is the world's fastest growing backup software company as ranked by Inc. Magazine.

AppAssure's 3 Innovative and Groundbreaking Backup Technologies:

1. Live Recovery™ Instant restore of VMs or Servers – near-zero recovery time (RTO) & 5-minute RPO

2. Recovery Assure™ Assurance of 100% Reliability of Recoverability

3. Universal Recovery™ Anywhere to Anywhere Restore – to any VM or dissimilar hardware with Granular Object Level Recovery


© 2012 AppAssure Software. All Rights Reserved.

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

AppAssure Software, Inc., 1925 Isaac Newton Square East, Suite 440, Reston, VA 20190. Americas: 1-866-459-6653

Connect with AppAssure: http://www.appassure.com/facebook | www.appassure.com/twitter | www.appassure.com/blog

EMEA: 44-1306-888864

www.appassure.com/Free-Trial

5 Reasons to Try AppAssure – Get a FREE Trial Now!

1. Ultra-Fast Backup & Recovery – near-zero Recovery Time & 5-minute RPO

2. Recovery Auto-Testing and Auto-Verification – 100% Recoverability

3. Unified Backup & Replication from One Single Pane of Glass

4. Recovery Anywhere to Anywhere (P2V, V2V, V2P, P2P)

5. True Global Deduplication