
VSPEX Proven Infrastructure

EMC VSPEX

Abstract

This document describes the EMC VSPEX Proven Infrastructure solution for Private Cloud deployments with VMware vSphere and EMC VNX for up to 250 virtual machines using NFS storage.

January, 2013

EMC® VSPEX™ PRIVATE CLOUD VMware vSphere® 5.1 for up to 250 Virtual Machines Enabled by Microsoft® Windows® Server 2012, EMC VNX™, and EMC Next-Generation Backup


Copyright © 2013 EMC Corporation. All rights reserved. Published in the USA.

Published January 2013

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC online support website.


Part Number H11329.1


Contents

Chapter 1 Executive Summary 13

Introduction .................................................................................................. 14

Target audience ............................................................................................ 14

Document purpose ....................................................................................... 14

Business needs ............................................................................................ 15

Chapter 2 Solution Overview 17

Introduction .................................................................................................. 18

Virtualization ................................................................................................ 18

Compute ....................................................................................................... 18

Network ........................................................................................................ 19

Storage ......................................................................................................... 19

Chapter 3 Solution Technology Overview 21

Overview ....................................................................................................... 22

Summary of key components ........................................................................ 23

Virtualization ................................................................................................ 24

Overview .............................................................................................................. 24

VMware vSphere 5.1 ............................................................................................ 24

VMware vCenter ................................................................................................... 24

VMware vSphere High Availability ........................................................................ 24

EMC Virtual Storage Integrator for VMware ........................................................... 25

VNX VMware vStorage API for Array Integration support ....................................... 25

Compute ....................................................................................................... 25

Overview .............................................................................................................. 25

Network ........................................................................................................ 28

Overview .............................................................................................................. 28

Storage ......................................................................................................... 29

Overview .............................................................................................................. 29


EMC VNX series .................................................................................................... 29

VNX FAST Cache (optional) ................................................................................... 30

VNX FAST VP (optional) ........................................................................................ 30

Backup and recovery ..................................................................................... 31

Overview .............................................................................................................. 31

EMC Avamar ......................................................................................................... 31

Other technologies ....................................................................................... 32

Overview .............................................................................................................. 32

EMC VFCache (optional) ....................................................................................... 32

Chapter 4 Solution Architecture Overview 35

Solution overview ......................................................................................... 36

Solution architecture .................................................................................... 36

Overview .............................................................................................................. 36

Architecture for up to 125 virtual machines .......................................................... 37

Architecture for up to 250 virtual machines .......................................................... 38

Key components .................................................................................................. 38

Hardware resources ............................................................................................. 40

Software resources .............................................................................................. 42

Server configuration guidelines .................................................................... 42

Overview .............................................................................................................. 42

VMware vSphere memory virtualization for VSPEX................................................ 43

Memory configuration guidelines ......................................................................... 45

Network configuration guidelines ................................................................. 46

Overview .............................................................................................................. 46

Enable jumbo frames ........................................................................................... 47

Link aggregation .................................................................................................. 48

Storage configuration guidelines .................................................................. 48

Overview .............................................................................................................. 48

VMware vSphere storage virtualization for VSPEX ................................................ 49

Storage layout for 125 virtual machines ............................................................... 50

Storage layout for 250 virtual machines ............................................................... 52

High availability and failover ......................................................................... 54

Overview .............................................................................................................. 54

Virtualization layer ............................................................................................... 54

Compute layer ...................................................................................................... 54

Network layer ....................................................................................................... 55

Storage layer ........................................................................................................ 56

Backup and recovery configuration guidelines .............................................. 57

Overview .............................................................................................................. 57


Backup characteristics ......................................................................................... 57

Backup layout ...................................................................................................... 58

Sizing guidelines .......................................................................................... 58

Reference workload ...................................................................................... 58

Overview .............................................................................................................. 58

Defining the reference workload ........................................................................... 59

Applying the reference workload ................................................................... 59

Overview .............................................................................................................. 59

Example 1: Custom-built application.................................................................... 60

Example 2: Point of sale system ........................................................................ 60

Example 3: Web server ........................................................................................ 60

Example 4: Decision-support database ............................................................... 61

Summary of examples .......................................................................................... 61

Implementing the reference architectures ..................................................... 62

Overview .............................................................................................................. 62

Resource types .................................................................................................... 62

CPU resources ...................................................................................................... 62

Memory resources................................................................................................ 62

Network resources ............................................................................................... 63

Storage resources ................................................................................................ 63

Implementation summary .................................................................................... 64

Quick assessment ......................................................................................... 65

Overview .............................................................................................................. 65

CPU requirements ................................................................................................ 65

Memory requirements .......................................................................................... 65

Storage performance requirements ...................................................................... 66

I/O operations per second (IOPS) ......................................................................... 66

I/O size ................................................................................................................ 66

I/O latency ........................................................................................................... 66

Storage capacity requirements ............................................................................. 67

Determining Equivalent Reference Virtual Machines ............................................ 67

Fine tuning hardware resources ........................................................................... 70

Chapter 5 VSPEX Configuration Guidelines 73

Configuration overview ................................................................................. 74

Deployment process ............................................................................................ 74

Pre-deployment tasks ................................................................................... 75

Overview .............................................................................................................. 75

Deployment prerequisites .................................................................................... 75

Customer configuration data ......................................................................... 77


Prepare switches, connect network, and configure switches ......................... 77

Overview .............................................................................................................. 77

Prepare network switches .................................................................................... 78

Configure infrastructure network .......................................................................... 78

Configure VLANs .................................................................................................. 79

Complete network cabling .................................................................................... 79

Prepare and configure storage array ............................................................. 79

VNX configuration ................................................................................................ 79

Install and configure vSphere infrastructure ................................................. 89

Overview .............................................................................................................. 89

Install ESXi ........................................................................................................... 89

Configure ESXi networking ................................................................................... 89

Jumbo frames ....................................................................................................... 90

Connect VMware datastores ................................................................................. 90

Plan virtual machine memory allocations ............................................................. 90

Install and configure SQL server database .................................................... 93

Overview .............................................................................................................. 93

Create a virtual machine for Microsoft SQL server................................................. 93

Install Microsoft Windows on the virtual machine ................................................ 93

Install SQL server ................................................................................................. 94

Configure database for VMware vCenter ............................................................... 94

Configure database for VMware Update Manager ................................................. 94

Install and configure VMware vCenter server................................................. 95

Overview .............................................................................................................. 95

Create the vCenter host virtual machine ............................................................... 96

Install vCenter guest OS ....................................................................................... 96

Create vCenter ODBC connections ........................................................................ 96

Install vCenter server ........................................................................................... 96

Apply vSphere license keys .................................................................................. 96

Deploy the VNX VAAI for NFS plug-in .................................................................... 97

Install the EMC VSI plug-in ................................................................................... 97

Summary ...................................................................................................... 97

Chapter 6 Validating the Solution 99

Overview ..................................................................................................... 100

Post-install checklist ................................................................................... 101

Deploy and test a single virtual server ........................................................ 101

Verify the redundancy of the solution components ..................................... 101


Appendix A Bills of Materials 103

Bill of materials ........................................................................................... 104

Appendix B Customer Configuration Data Sheet 107

Customer configuration data sheet ............................................................. 108

Appendix C References 111

References .................................................................................................. 112

EMC documentation ........................................................................................... 112

Other documentation ......................................................................................... 112

Appendix D About VSPEX 113

About VSPEX ............................................................................................... 114



Figures

Figure 1. Private Cloud components .................................................................. 22
Figure 2. Compute layer flexibility ..................................................................... 26
Figure 3. Example of highly-available network design ....................................... 28
Figure 4. Logical architecture for 125 virtual machines ..................................... 37
Figure 5. Logical architecture for 250 virtual machines ..................................... 38
Figure 6. Hypervisor memory consumption ....................................................... 44
Figure 7. Required networks ............................................................................. 47
Figure 8. VMware virtual disk types ................................................................... 49
Figure 9. Storage layout for 125 virtual machines ............................................. 50
Figure 10. Storage layout for 250 virtual machines ............................................. 52
Figure 11. High Availability at the virtualization layer .......................................... 54
Figure 12. Redundant power supplies ................................................................. 54
Figure 13. Network layer High Availability (VNX) .................................................. 55
Figure 14. VNX series High Availability ................................................................ 56
Figure 15. Resource pool flexibility ..................................................................... 61
Figure 16. Required resource from the reference virtual machine pool ................ 68
Figure 17. Aggregate resource requirements from the reference virtual machine pool ... 70
Figure 18. Customizing server resources ............................................................. 70
Figure 19. Sample Ethernet network architecture ................................................ 78
Figure 20. Direct Writes Enabled checkbox ......................................................... 82
Figure 21. Storage System Properties dialog box ................................................ 83
Figure 22. Create FAST Cache dialog box ............................................................. 84
Figure 23. Advanced tab in the Create Storage Pool dialog .................................. 85
Figure 24. Advanced tab in the Storage Pool Properties dialog ............................ 85
Figure 25. Storage Pool Properties dialog box ..................................................... 86
Figure 26. Manage Auto-Tiering dialog box ......................................................... 87
Figure 27. LUN Properties dialog box .................................................................. 88
Figure 28. Virtual machine memory settings ....................................................... 92



Tables

Table 1. VNX customer benefits ....................................................................... 29
Table 2. Solution hardware .............................................................................. 40
Table 3. Solution software ............................................................................... 42
Table 4. Hardware resources for compute ........................................................ 43
Table 5. Hardware resources for network ......................................................... 46
Table 6. Hardware resources for storage .......................................................... 48
Table 7. Profile characteristics ......................................................................... 57
Table 8. Virtual machine characteristics ........................................................... 59
Table 9. Blank worksheet row .......................................................................... 65
Table 10. Reference Virtual Machine resources .................................................. 67
Table 11. Example worksheet row ...................................................................... 68
Table 12. Example applications ......................................................................... 69
Table 13. Server resource component totals ...................................................... 71
Table 14. Deployment process overview ............................................................ 74
Table 15. Tasks for pre-deployment ................................................................... 75
Table 16. Deployment prerequisites checklist .................................................... 75
Table 17. Tasks for switch and network configuration ........................................ 77
Table 18. Tasks for storage configuration ........................................................... 79
Table 19. Tasks for server installation ................................................................ 89
Table 20. Tasks for SQL server database setup ................................................... 93
Table 21. Tasks for vCenter configuration .......................................................... 95
Table 22. Tasks for testing the installation ....................................................... 100
Table 23. List of components used in the VSPEX solution for 125 virtual machines ... 104
Table 24. List of components used in the VSPEX solution for 250 virtual machines ... 106
Table 25. Common server information ............................................................. 108
Table 26. ESXi server information .................................................................... 108
Table 27. Array information .............................................................................. 109
Table 28. Network infrastructure information ................................................... 109
Table 29. VLAN information ............................................................................. 109
Table 30. Service accounts .............................................................................. 110



Chapter 1 Executive Summary

This chapter presents the following topics:

Introduction............................................................................................... 14

Target audience ......................................................................................... 14

Document purpose .................................................................................... 14

Business needs ......................................................................................... 15


Introduction

VSPEX validated and modular architectures are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make informed decisions about the hypervisor, compute, and networking layers. VSPEX helps to reduce virtualization planning and configuration burdens. When embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, more choice, greater efficiency, and lower risk.

This document is intended to be a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select the server and networking hardware that meet or exceed the stated minimums.

Target audience

The readers of this document are expected to have the necessary training and background to install and configure VMware vSphere, EMC VNX series storage systems, and the associated infrastructure required by this implementation. External references are provided where applicable, and readers should be familiar with those documents.

Readers are also expected to be familiar with the infrastructure and database security policies of the customer installation.

Users focusing on selling and sizing a VMware Private Cloud infrastructure should pay particular attention to the first four chapters of this document. After purchase, implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Document purpose

This document provides an initial introduction to the VSPEX architecture, explains how to modify the architecture for specific engagements, and gives instructions for deploying the system effectively.

The VSPEX Private Cloud architecture provides the customer with a modern system capable of hosting a large number of virtual machines at a consistent performance level. This solution runs on a VMware vSphere virtualization layer backed by highly available VNX family storage. The compute and network components, which are defined by the VSPEX Partners, are laid out to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

The 125 and 250 virtual machine environments discussed here are based on a defined reference workload. Because not every virtual machine has the same requirements, this document contains methods and guidance for adjusting your system to be cost effective when deployed. For smaller environments, solutions for up to 100 virtual machines based on the EMC VNXe series are described in EMC VSPEX Private Cloud: VMware vSphere 5.1 for up to 100 Virtual Machines.

A Private Cloud architecture is a complex system offering. This document facilitates its setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. After the last component has been installed, there are validation tests to ensure that your system is running properly. Following the instructions in this document ensures an efficient and painless journey to the cloud.

Business needs VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, server, and networking layers. VSPEX solutions accelerate your IT transformation by enabling faster deployments, greater choice and efficiency, and lower risk.

Business applications are moving into consolidated compute, network, and storage environments. EMC VSPEX Private Cloud using VMware reduces the complexity of configuring every component of a traditional deployment model. The complexity of integration management is reduced while maintaining the application design and implementation options. Administration is unified, while process separation can be adequately controlled and monitored. The following are the business needs for the VSPEX Private Cloud for VMware architectures:

Providing an end-to-end virtualization solution to utilize the capabilities of the unified infrastructure components.

Providing a VSPEX Private Cloud solution for VMware for efficiently virtualizing up to 250 virtual machines for varied customer use cases.

Providing a reliable, flexible, and scalable reference design.


Chapter 2 Solution Overview

This chapter presents the following topics:

Introduction............................................................................................... 18

Virtualization ............................................................................................. 18

Compute ................................................................................................... 18

Network ..................................................................................................... 19

Storage ..................................................................................................... 19


Introduction The EMC VSPEX Private Cloud for VMware vSphere 5.1 provides a complete system architecture capable of supporting up to 250 virtual machines with a redundant server and network topology and highly available storage. The core components that make up this particular solution are virtualization, storage, server compute, and networking.

Virtualization VMware vSphere is the leading virtualization platform in the industry. For years, it has provided flexibility and cost savings to the end users by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures. The core VMware vSphere components are the VMware vSphere Hypervisor and the VMware vCenter Server for system management.

The VMware hypervisor runs on a dedicated server and allows multiple operating systems to run on the system at one time as virtual machines. These hypervisor systems can be connected to operate in a clustered configuration. These clustered configurations are then managed as a larger resource pool through the vCenter product, and allow for dynamic allocation of CPU, memory and storage across the cluster.

Features like vMotion™, which allows a virtual machine to move between different servers with no disruption to the operating system, and Distributed Resource Scheduler (DRS), which performs vMotion operations automatically to balance load, make vSphere a solid business choice.

With the release of vSphere 5.1, a VMware-virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual RAM.

Compute VSPEX provides the flexibility to design and implement your choice of server components. The infrastructure must conform to the following attributes:

Sufficient CPU cores and RAM to support the required number and types of virtual machines

Sufficient network connections to enable redundant connectivity to the system switches

Excess capacity to withstand a server failure and failover in the environment


Network VSPEX provides the flexibility to design and implement the customer’s choice of network components. The infrastructure must conform to the following attributes:

Redundant network links for the hosts, switches, and storage.

Support for Link Aggregation.

Traffic isolation based on industry-accepted best practices.

Storage The EMC VNX storage family is the leading shared storage platform in the industry. VNX provides both file and block access with a broad feature set, which makes it an ideal choice for any private cloud implementation.

VNX storage includes the following components that are sized for the stated reference architecture workload:

Host adapter ports – Provide host connectivity via fabric to the array.

Data Movers – Front-end appliances that provide file services to hosts (required only when CIFS/SMB or NFS file services are used).

Storage processors – The compute components of the storage array, which are used for all aspects of data moving into, out of, and between arrays.

Disk drives – Disk spindles that contain the host or application data and their enclosures.

The 125 and 250 virtual machine VMware Private Cloud solutions described in this document are based on the VNX5300™ and VNX5500™ storage arrays, respectively. The VNX5300 can support a maximum of 125 drives, and the VNX5500 can host up to 250 drives.

The EMC VNX series supports a wide range of business-class features ideal for the private cloud environment, including:

Fully Automated Storage Tiering for Virtual Pools (FAST VP)

FAST Cache

Data deduplication

Thin Provisioning

Replication

Snapshots/Checkpoints

File-Level Retention

Quota Management


Chapter 3 Solution Technology Overview

This chapter presents the following topics:

Overview ................................................................................................... 22

Summary of key components ..................................................................... 23

Virtualization ............................................................................................. 24

Compute ................................................................................................... 25

Network ..................................................................................................... 28

Storage ..................................................................................................... 29

Backup and recovery ................................................................................. 31

Other technologies .................................................................................... 32


Overview This solution uses the EMC VNX series and VMware vSphere 5.1 to provide storage and server hardware consolidation in a private cloud. The new virtualized infrastructure is centrally managed to provide efficient deployment and management of a scalable number of virtual machines and associated shared storage.

Figure 1 depicts the solution components.

Figure 1. Private Cloud components

The components are described in more detail in the following sections.


Summary of key components This section describes the key components of this solution.

Virtualization

The virtualization layer enables the physical implementation of resources to be decoupled from the applications that use them. In other words, the application view of the available resources is no longer directly tied to the hardware. This enables many key features in the Private Cloud concept.

Compute

The compute layer provides memory and processing resources for the virtualization layer software, and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and enables the customer to implement the solution by using any server hardware that meets these requirements.

Network

The network layer connects the users of the private cloud to the resources in the cloud, and the storage layer to the compute layer. The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables the customer to implement the solution by using any network hardware that meets these requirements.

Storage

The storage layer is critical for the implementation of the private cloud. With multiple hosts accessing shared data, many of the use cases defined in the Private Cloud can be implemented. The EMC VNX storage family used in this solution provides high-performance data storage while maintaining high availability.

Backup and recovery

The optional backup and recovery components of the solution provide data protection when the data in the primary system is deleted, damaged, or unusable.

Solution architecture provides details on all the components that make up the reference architecture.


Virtualization

The virtualization layer is a key component of any server virtualization or private cloud solution. It enables the application resource requirements to be decoupled from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows the physical capabilities of the system to change without affecting the hosted applications. In a server virtualization or private cloud use case, it enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware.

VMware vSphere 5.1 transforms the physical resources of a computer by virtualizing the CPU, RAM, hard disk, and network controller. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications just like physical computers.

The high-availability features of VMware vSphere 5.1 such as vMotion and Storage vMotion enable seamless migration of virtual machines and stored files from one vSphere server to another, or from one data storage area to another, with minimal or no performance impact. Coupled with vSphere DRS and Storage DRS, virtual machines have access to the appropriate resources at any point in time through load balancing of compute and storage resources.

VMware® vCenter™ is a centralized management platform for the VMware virtual infrastructure. It provides administrators with a single interface for all aspects of monitoring, managing, and maintaining the virtual infrastructure, which can be accessed from multiple devices.

VMware vCenter also manages some advanced features of the VMware virtual infrastructure such as VMware vSphere High Availability and Distributed Resource Scheduling (DRS), along with vMotion and Update Manager.

The VMware vSphere High Availability feature enables the virtualization layer to automatically restart virtual machines in various failure conditions.

If the virtual machine operating system has an error, the virtual machine can be automatically restarted on the same hardware.

If the physical hardware has an error, the impacted virtual machines can be automatically restarted on other servers in the cluster.

Note In order to restart virtual machines on different hardware, the servers need to have available resources. The Compute section provides detailed information to enable this function.

With VMware vSphere High Availability, you can configure policies to determine which machines are automatically restarted, and under what conditions these operations should be attempted.


EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in for the vSphere client that provides a single management interface for EMC storage within the vSphere environment. Features can be added and removed from VSI independently, which provides flexibility for customizing VSI user environments. Features are managed by using the VSI Feature Manager. VSI provides a unified user experience, which enables new features to be introduced rapidly in response to customer requirements.

The following features are used during validation testing:

Storage Viewer (SV) — Extends the vSphere client to facilitate the discovery and identification of EMC VNX storage devices that are allocated to VMware vSphere hosts and virtual machines. SV presents the underlying storage details to the virtual datacenter administrator, merging the data of several different storage mapping tools into a few seamless vSphere client views.

Unified Storage Management — Simplifies storage administration of the EMC VNX unified storage platform. It enables VMware administrators to provision new Network File System (NFS) and Virtual Machine File System (VMFS) datastores, and RDM volumes seamlessly within vSphere client.

Refer to the EMC VSI for VMware vSphere product guides on EMC Online Support for more information.

Hardware acceleration with VMware vStorage API for Array Integration (VAAI) is a storage enhancement in vSphere 5.1 that enables vSphere to offload specific storage operations to compatible storage hardware such as the VNX series platforms. With the assistance of storage hardware, vSphere performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.

Compute

The choice of a server platform for an EMC VSPEX infrastructure is not only based on the technical requirements of the environment, but on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For this reason, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a given number of servers with a specific set of requirements, VSPEX documents minimum requirements for the number of processor cores, and the amount of RAM. This can be implemented with two or twenty servers, and still be considered the same VSPEX solution.


In the example shown in Figure 2, the compute layer requirements for a given implementation are 25 processor cores and 200 GB of RAM. One customer might implement this by using white-box servers containing 16 processor cores and 64 GB of RAM, while another customer chooses a higher-end server with 20 processor cores and 144 GB of RAM.

Figure 2. Compute layer flexibility

The first customer needs four of the servers they choose, while the other customer needs two.

Note To enable high availability at the compute layer, each customer needs one additional server to make sure that the system has enough capability to maintain business operations when a server fails.
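The sizing arithmetic above can be sketched as a short calculation. The function below is an illustrative helper (not part of the VSPEX tooling): the server count is driven by whichever resource, cores or RAM, runs out first, with one spare added for high availability.

```python
import math

def servers_needed(req_cores, req_ram_gb, cores_per_server, ram_per_server_gb,
                   ha_spare=True):
    """Return the server count needed to meet a VSPEX compute requirement.

    The count is set by the more constrained resource (cores or RAM);
    one extra server is added when high availability is desired.
    """
    by_cores = math.ceil(req_cores / cores_per_server)
    by_ram = math.ceil(req_ram_gb / ram_per_server_gb)
    count = max(by_cores, by_ram)
    return count + 1 if ha_spare else count

# The two customers from Figure 2: 25 cores and 200 GB of RAM required.
print(servers_needed(25, 200, 16, 64, ha_spare=False))   # 4 white-box servers
print(servers_needed(25, 200, 20, 144, ha_spare=False))  # 2 higher-end servers
```

For the first customer, RAM is the constraint (200 GB / 64 GB rounds up to four servers); for the second, both constraints round up to two. Adding the high-availability spare gives five and three servers, respectively.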


The following best practices should be used in the compute layer:

Use a number of identical or at least compatible servers. VSPEX implements hypervisor level high-availability technologies, which may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.

If you are implementing hypervisor layer high availability, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

Implement the available high availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single server failures. This enables the implementation of minimal-downtime upgrades, and tolerance for single unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be flexible to meet your specific needs. Make sure that sufficient processor cores and RAM are provided to meet the needs of the target environment.


Network

The infrastructure network requires redundant network links for each vSphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This configuration is required regardless of whether the network infrastructure for the solution already exists, or is being deployed alongside other components of the solution. An example of this highly available network topology is depicted in Figure 3.

Figure 3. Example of highly available network design

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

EMC unified storage platforms provide network high availability or redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.
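The failover behavior can be illustrated with a toy model (illustrative only; real link aggregation is negotiated by LACP in hardware and the OS network stack): several ports form one logical link, flows are distributed by hashing, and flows on a failed port move to the surviving ports.

```python
# Toy model of link aggregation failover. Port names and the hash-based
# distribution scheme are hypothetical, chosen only to illustrate the idea.

class AggregatedLink:
    def __init__(self, ports):
        self.ports = list(ports)      # currently active member ports

    def port_for(self, flow_id):
        # Distribute flows across active ports by hashing the flow id.
        return self.ports[hash(flow_id) % len(self.ports)]

    def fail(self, port):
        # Remove the failed port; subsequent lookups use survivors only.
        self.ports.remove(port)

bond = AggregatedLink(["eth0", "eth1"])
p = bond.port_for("vm-42-nfs")   # whichever port this flow hashed to
bond.fail(p)                     # that link goes down
print(bond.port_for("vm-42-nfs"))  # flow continues on the surviving port
```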

Storage

The storage layer is also a key component of any cloud infrastructure solution, serving the data generated by applications and operating systems in the data center. Consolidating storage increases efficiency and management flexibility, and reduces total cost of ownership. In this VSPEX solution, EMC VNX series arrays are used to provide virtualization at the storage layer.

The EMC VNX family is optimized for virtual applications and delivers industry-leading innovation and enterprise capabilities for file, block, and object storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today’s enterprises.

The VNX series is powered by Intel® Xeon processors for intelligent storage that automatically and efficiently scales in performance, while ensuring data integrity and security. It is designed to meet the high performance, high-scalability requirements of midsize and large enterprises.

Table 1 shows the customer benefits that are provided by VNX series.

Table 1. VNX customer benefits

Next-generation unified storage, optimized for virtualized applications

Capacity optimization features including compression, deduplication, thin provisioning, and application-centric copies

High availability, designed to deliver five 9s availability

Automated tiering with FAST VP (Fully Automated Storage Tiering for Virtual Pools) and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously

Simplified management with EMC Unisphere™ for a single management interface for all NAS, SAN, and replication needs

Up to three times improvement in performance with the latest Intel Xeon multi-core processor technology, optimized for Flash

Different software suites and packs are also available for the VNX series, which provide multiple features for enhanced production and performance:

Software Suites

FAST Suite — Automatically optimizes for the highest system performance and the lowest storage cost simultaneously.

Local Protection Suite — Practices safe data protection and repurposing.


Remote Protection Suite — Protects data against localized failures, outages, and disasters.

Application Protection Suite — Automates application copies and proves compliance.

Security and Compliance Suite — Keeps data safe from changes, deletions, and malicious activity.

Software Packs

Total Efficiency Pack — Includes all five software suites.

Total Protection Pack — Includes local, remote, and application protection suites.

VNX FAST Cache, a part of the VNX FAST Suite, enables flash drives to be used as an expanded cache layer for the array. FAST Cache is an array-wide, non-disruptive cache, available for both file and block storage. Frequently accessed data is copied to the FAST Cache in 64 KB increments, and subsequent reads and/or writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to flash drives. This dramatically improves the response times for the active data and reduces data hot spots that can occur within a LUN.

VNX FAST VP, a part of the VNX FAST Suite, can automatically tier data across multiple types of drives to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation.
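The relocation pass can be sketched as a simple ranking exercise. This is a conceptual model of slice relocation, not EMC's implementation: slices are ranked by access frequency, and the hottest slices occupy the flash tier up to its capacity.

```python
# Illustrative model of a FAST VP relocation pass (hypothetical names).

def relocate(slice_access_counts, flash_capacity_slices):
    """Return (flash_slices, lower_tier_slices) after one relocation pass.

    slice_access_counts: dict mapping slice id -> access count since the
    last pass. flash_capacity_slices: how many 1 GB slices fit on flash.
    """
    ranked = sorted(slice_access_counts,
                    key=slice_access_counts.get, reverse=True)
    hot = set(ranked[:flash_capacity_slices])   # promoted to the flash tier
    cold = set(ranked[flash_capacity_slices:])  # left on, or demoted to, SAS/NL-SAS
    return hot, cold

counts = {"s1": 900, "s2": 15, "s3": 420, "s4": 3}
hot, cold = relocate(counts, flash_capacity_slices=2)
print(sorted(hot))  # ['s1', 's3']
```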


Backup and recovery

Backup and recovery is another important component in this VSPEX solution, which provides data protection by backing up data files or volumes on a defined schedule, and restoring data from backup for recovery after a disaster. This VSPEX solution uses EMC Avamar® for up to 250 virtual machines.

EMC Avamar data deduplication technology seamlessly integrates into virtual environments, providing rapid backup and restoration capabilities. Avamar’s deduplication results in less data transmitted across the network, and greatly reduces the amount of data being backed up and stored, to achieve storage, bandwidth, and operational savings.

The following are two of the most common recovery requests made to backup administrators:

File-level recovery — Object-level recoveries account for the vast majority of user support requests. Common actions requiring file-level recovery are individual users deleting files, applications requiring recoveries, and batch process-related erasures.

System recovery — Although complete system recovery requests are less frequent than file-level recovery requests, this bare-metal restore capability is vital to the enterprise. Some common root causes for full system recovery requests are viral infestation, registry corruption, or unidentifiable unrecoverable issues.

Avamar’s functionality in conjunction with VMware implementations adds new capabilities for backup and recovery in both of these scenarios. Key VMware capabilities such as vStorage API integration and Changed Block Tracking (CBT) enable the Avamar software to protect the virtual environment more efficiently.

Leveraging CBT for both backup and recovery with virtual proxy server pools minimizes management needs. Coupled with Data Domain as the storage platform for image data, this solution enables efficient integration with two of the industry-leading next-generation backup appliances.
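The benefit of changed-block tracking can be illustrated with a minimal sketch. This is conceptual only: vSphere exposes CBT through the vStorage APIs rather than per-block hashing, but the effect is the same, as only blocks that changed since the last backup are sent.

```python
import hashlib

# Conceptual sketch of changed-block backup (hypothetical helper names).

def block_hashes(blocks):
    """Fingerprint each block of a virtual disk."""
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def changed_blocks(prev_hashes, blocks):
    """Return indices of blocks whose contents changed since last backup."""
    cur = block_hashes(blocks)
    return [i for i, h in enumerate(cur)
            if i >= len(prev_hashes) or h != prev_hashes[i]]

disk = [b"aaaa", b"bbbb", b"cccc"]
baseline = block_hashes(disk)          # taken at the previous backup
disk[1] = b"BBBB"                      # one block modified since then
print(changed_blocks(baseline, disk))  # [1] -- only this block is transmitted
```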


Other technologies

In addition to the required technical components for EMC VSPEX solutions, other items may provide additional value depending on the specific use case. These include, but are not limited to, the technologies listed below.

EMC VFCache is a server Flash caching solution that reduces latency and increases throughput to improve application performance by using intelligent caching software and PCIe Flash technology.

Server-side Flash caching for maximum speed

VFCache performs the following functions to improve system performance:

Caches the most frequently referenced data on the server-based PCIe card to put the data closer to the application.

Automatically adapts to changing workloads by determining which data is most frequently referenced and promoting it to the server Flash card. This means that the “hottest” data (most active data) automatically resides on the PCIe card in the server for faster access.

Offloads the read traffic from the storage array, which allocates greater processing power to other applications. While one application is accelerated with VFCache, the array performance for other applications is maintained or slightly enhanced.

Write-through caching to the array for total protection

VFCache accelerates reads and protects data by using a write-through cache to the storage to deliver persistent high availability, integrity, and disaster recovery.

Application agnostic

VFCache is transparent to applications, so no rewriting, retesting, or recertification is required to deploy VFCache in the environment.

Integration with vSphere

VFCache enhances both virtualized and physical environments. Integration with the VSI plug-in to VMware vSphere vCenter simplifies the management and monitoring of VFCache.

Minimum impact on system resources

Unlike other caching solutions on the market, VFCache does not require a significant amount of memory or CPU cycles, as all Flash and wear-leveling management is done on the PCIe card without using server resources. Unlike other PCIe solutions, there is no significant overhead from using VFCache on server resources.

VFCache creates the most efficient and intelligent I/O path from the application to the datastore, which results in an infrastructure that is dynamically optimized for performance, intelligence, and protection for both physical and virtual environments.


VFCache active/passive clustering support

The configuration of VFCache clustering scripts ensures that stale data is never retrieved. The scripts use cluster management events to trigger a mechanism that purges the cache. The VFCache-enabled active/passive cluster ensures data integrity, and accelerates application performance.

VFCache performance considerations

The following are the VFCache performance considerations:

On a write request, VFCache first writes to the array, then to the cache, and then completes the application I/O.

On a read request, VFCache satisfies the request with cached data or, when the data is not present, retrieves the data from the array, writes it to the cache, and then returns it to the application. The trip to the array can be on the order of milliseconds; therefore, the array limits how fast the cache can work. As the number of writes increases, VFCache performance decreases.

VFCache is most effective for workloads with a read/write ratio of 70 percent reads or more and small, random I/O (8 KB is ideal). I/O larger than 128 KB is not cached in VFCache 1.5.

Note For more information, refer to the VFCache Installation and Administration Guide v1.5.
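The read and write paths described above can be illustrated with a minimal write-through cache sketch. This is illustrative only, not EMC's implementation; the block addresses and data are arbitrary, and a Python dict stands in for the storage array.

```python
# Minimal sketch of a write-through read cache in the spirit of VFCache
# (illustrative only; not EMC's implementation).
class WriteThroughCache:
    def __init__(self, backend):
        self.backend = backend   # dict standing in for the storage array
        self.cache = {}

    def write(self, block, data):
        self.backend[block] = data   # the array write completes first...
        self.cache[block] = data     # ...then the cache is updated

    def read(self, block):
        if block in self.cache:      # cache hit: no array round trip
            return self.cache[block]
        data = self.backend[block]   # cache miss: fetch from the array
        self.cache[block] = data     # promote for subsequent reads
        return data

array = {}
c = WriteThroughCache(array)
c.write(7, b"hot data")
print(c.read(7), array[7])  # both the cache and the array hold the block
```

Because every write lands on the array before the I/O completes, a failed or removed card never holds the only copy of a block, which is what makes the cache safe to lose.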


Chapter 4 Solution Architecture Overview

This chapter presents the following topics:

Solution overview

Solution architecture

Server configuration guidelines

Network configuration guidelines

Storage configuration guidelines

High availability and failover

Backup and recovery configuration guidelines

Sizing guidelines

Reference workload

Applying the reference workload


Solution overview

VSPEX Proven Infrastructure solutions are built with proven best-of-breed technologies to create a complete virtualization solution that enables you to make an informed decision when choosing and sizing the hypervisor, compute, and networking layers. VSPEX eliminates many server virtualization planning and configuration burdens by leveraging extensive interoperability, functional, and performance testing by EMC. VSPEX accelerates your IT transformation to cloud-based computing by enabling faster deployment, more choice, higher efficiency, and lower risk.

This section is intended to be a comprehensive guide to the major aspects of this solution. Server capacity is specified in generic terms for required minimums of CPU, memory, and network resources; the customer is free to select the server and networking hardware that meet or exceed the stated minimums. The specified storage architecture, along with a system meeting the server and network requirements outlined, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your private cloud deployment.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines that have been validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a pre-defined idea of what a virtual machine should be. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.

Solution architecture

The VSPEX solution for VMware vSphere Private Cloud with EMC VNX is validated at two different points of scale, one configuration with up to 125 virtual machines, and one configuration with up to 250 virtual machines. The defined configurations form the basis of creating a custom solution.

Note VSPEX uses the concept of a Reference Workload to describe and define a virtual machine. Therefore, one physical or virtual server in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This document describes the process in Applying the reference workload.
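The conversion from existing servers to reference virtual machines can be sketched as a simple resource-ratio calculation. The reference VM profile below is a placeholder for illustration only; the actual definition appears in the Reference workload section of this guide.

```python
import math

# Hypothetical reference VM profile -- illustrative values only; the
# real definition is in this guide's "Reference workload" section.
REFERENCE_VM = {"vcpus": 1, "ram_gb": 2, "iops": 25, "capacity_gb": 100}

def equivalent_reference_vms(workload):
    """Express an existing server as a number of reference VMs.

    The workload is compared with the reference on each resource
    dimension; the largest ratio (rounded up) determines how many
    reference VMs the workload consumes.
    """
    ratios = [workload[k] / REFERENCE_VM[k] for k in REFERENCE_VM]
    return math.ceil(max(ratios))

# Example: a server needing 4 vCPUs, 16 GB RAM, 200 IOPS, 400 GB disk
db_server = {"vcpus": 4, "ram_gb": 16, "iops": 200, "capacity_gb": 400}
print(equivalent_reference_vms(db_server))  # dominated by IOPS: 200/25 = 8
```

Taking the maximum ratio, rather than an average, ensures no single resource dimension is under-provisioned when the workload is mapped onto the validated configuration.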


Architecture for up to 125 virtual machines

The architecture in Figure 4 characterizes the infrastructure validated for support of up to 125 virtual machines.

Figure 4. Logical architecture for 125 virtual machines

Note The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks if sufficient bandwidth and redundancy are provided to meet the listed requirements.


Architecture for up to 250 virtual machines

The architecture in Figure 5 characterizes the infrastructure validated for support of up to 250 virtual machines.

Figure 5. Logical architecture for 250 virtual machines

Note The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks if sufficient bandwidth and redundancy are provided to meet the listed requirements.

Key components

VMware vSphere 5.1 — Provides a common virtualization layer to host a server environment. The specifics of the validated environment are listed in Table 2. vSphere 5.1 provides highly available infrastructure through such features as:

vMotion — Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption.

Storage vMotion — Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption.

vSphere High Availability (HA) – Detects and provides rapid recovery for a failed virtual machine in a cluster.

Distributed Resource Scheduler (DRS) – Provides load balancing of computing capacity in a cluster.

Storage Distributed Resource Scheduler (SDRS) – Provides load balancing across multiple datastores, based on space usage and I/O latency.

VMware vCenter Server 5.1 — Provides a scalable and extensible platform that forms the foundation of virtualization management for the VMware vSphere 5.1 cluster. All vSphere hosts and their virtual machines are managed from vCenter.


VSI for VMware vSphere — EMC VSI for VMware vSphere is a plug-in to the vSphere client that provides storage management for EMC arrays directly from the client. VSI is highly customizable and helps provide a unified management interface.

SQL Server — VMware vCenter Server requires a database service to store configuration and monitoring details. A Microsoft SQL Server 2008 R2 instance is used for this purpose.

DNS Server — DNS services are required for the various solution components to perform name resolution. The Microsoft DNS Service running on a Windows Server 2012 server is used for this purpose.

Active Directory Server — Active Directory services are required for the various solution components to function properly. The Microsoft AD Directory Service running on a Windows Server 2012 server is used for this purpose.

Shared Infrastructure — DNS and authentication/authorization services like Microsoft Active Directory can be provided via existing infrastructure or set up as part of the new virtual infrastructure.

IP/Storage Network — All network traffic is carried over a standard Ethernet network with redundant cabling and switching. User and management traffic is carried over a shared network, while storage traffic is carried over a private, non-routable subnet.

EMC VNX5300 array — Provides storage by presenting NFS datastores to vSphere hosts for up to 125 virtual machines.

EMC VNX5500 array — Provides storage by presenting NFS datastores to vSphere hosts for up to 250 virtual machines.

VNX family storage arrays include the following components:

Storage processors (SPs) support block data with UltraFlex I/O technology that supports Fibre Channel, iSCSI, and FCoE protocols. The SPs provide access for all external hosts, and for the file side of the VNX array.

The disk-processor enclosure (DPE) is 2 U in size and houses each storage processor as well as the first tray of disks. This form factor is used in the VNX5300 and VNX5500.

X-Blades (or Data Movers) access data from the back end and provide host access using the same UltraFlex I/O technology that supports the NFS, CIFS, MPFS, and pNFS protocols. The X-Blades in each array are scalable and provide redundancy to ensure that no single point of failure exists.

The Data Mover enclosure (DME) is 2 U in size and houses the Data Movers (X-Blades). The DME is similar in form to the DPE, and is used on all VNX models that support file.

Standby power supplies are 1 U in size and provide enough power to each storage processor to ensure that any data in flight is de-staged to the vault area in the event of a power failure. This ensures that no writes are lost. Upon restart of the array, the pending writes are reconciled and persisted.

Control Stations are 1 U in size and provide management functions to the file-side components referred to as X-Blades. The Control Station is responsible for X-Blade failover. The Control Station may optionally be configured with a matching secondary Control Station to ensure redundancy on the VNX array.

Disk-array enclosures (DAE) house the drives used in the array.

Hardware resources

Table 2 lists the hardware used in this solution.

Table 2. Solution hardware

VMware vSphere servers
  CPU: one vCPU per virtual machine; four vCPUs per physical core
  Memory: 2 GB RAM per virtual machine; 250 GB RAM across all servers for the 125-virtual-machine configuration; 500 GB RAM across all servers for the 250-virtual-machine configuration; 2 GB RAM reservation per vSphere host
  Network: six 1 GbE NICs per server
  Notes: Configured as a single vSphere cluster. To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have one additional server.

Network infrastructure
  Minimum switching capacity: two physical switches; six 1 GbE ports per vSphere server; one 1 GbE port per Control Station for management; four 1 GbE ports per Data Mover for data
  Notes: Redundant LAN configuration.

VNX shared storage
  Common: two Data Movers (active/standby); four 1 GbE interfaces per Data Mover; one 1 GbE interface per Control Station for management
  For 125 virtual machines: EMC VNX5300 with seventy-five 300 GB 15k rpm 3.5-inch SAS drives, plus three 300 GB 15k rpm 3.5-inch SAS drives as hot spares
  For 250 virtual machines: EMC VNX5500 with one hundred fifty 300 GB 15k rpm 3.5-inch SAS drives, plus six 300 GB 15k rpm 3.5-inch SAS drives as hot spares

Shared infrastructure
  In most cases, a customer environment already has infrastructure services such as Active Directory and DNS configured. The setup of these services is beyond the scope of this document. If the solution is implemented without existing infrastructure, the following minimum is required: two physical servers; 16 GB RAM per server; four processor cores per server; two 1 GbE ports per server. These services can be migrated into VSPEX post-deployment; however, they must exist before VSPEX can be deployed.

EMC next-generation backup
  Avamar: one Gen4 utility node; one Gen4 3.9 TB spare node; three Gen4 3.9 TB storage nodes for 125 virtual machines or five Gen4 3.9 TB storage nodes for 250 virtual machines
  Data Domain: one Data Domain DD640 for 125 virtual machines or one DD670 for 250 virtual machines; one ES30 15x1 TB HDD shelf for 125 virtual machines or two ES30 15x1 TB HDD shelves for 250 virtual machines


Note The solution may use 1 Gb or 10 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.

Software resources

Table 3 lists the software used in this solution.

Table 3. Solution software

VMware vSphere
  vSphere Server: 5.1 Enterprise Edition
  vCenter Server: 5.1 Standard Edition
  Operating system for vCenter Server: Windows Server 2008 R2 SP1 Standard Edition
  Microsoft SQL Server: 2008 R2 Standard Edition

EMC VNX
  VNX OE for file: Release 7.1.47-5
  VNX OE for block: Release 32 (05.32.000.5.006)
  EMC VSI for VMware vSphere: Unified Storage Management: 5.3
  EMC VSI for VMware vSphere: Storage Viewer: 5.3

Next-generation backup
  Avamar: 6.1 SP1
  Data Domain OS: 5.2

Virtual machines (used for validation; not required for deployment)
  Base operating system: Microsoft Windows Server 2012 Datacenter Edition

Server configuration guidelines

When designing and ordering the compute/server layer of this VSPEX solution, several factors may alter the final component selection. From a virtualization perspective, if a workload is well understood, features such as memory ballooning and transparent page sharing can reduce the aggregate memory requirement.

If the virtual machine pool does not have a high level of peak or concurrent usage, the number of vCPUs may be reduced. Conversely, if the applications being deployed are highly computational in nature, the number of CPUs and memory purchased may need to be increased.



Table 4 lists the hardware resources that are used for compute.

Table 4. Hardware resources for compute

VMware vSphere servers
  CPU: one vCPU per virtual machine; four vCPUs per physical core
  Memory: 2 GB RAM per virtual machine; 250 GB RAM across all servers for 125 virtual machines; 500 GB RAM across all servers for 250 virtual machines; 2 GB RAM reservation per vSphere host
  Network: six 1 GbE NICs per server
  Notes: Configured as a single vSphere cluster. To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have one additional server.

Note The solution may use 1 Gb or 10 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.
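The ratios in the compute table (one vCPU per VM, four vCPUs per physical core, 2 GB RAM per VM, a 2 GB host reservation, plus one extra server for HA) can be turned into a rough server count. The server specifications in the example are assumptions for illustration; actual hardware choices are up to the customer, per the sizing guidance above.

```python
import math

def compute_layer(vm_count, cores_per_server, ram_per_server_gb,
                  vcpus_per_core=4, ram_per_vm_gb=2, host_reservation_gb=2):
    """Rough server count for the VSPEX compute layer (illustrative only;
    follow the guide's stated minimums for an actual deployment)."""
    cores_needed = math.ceil(vm_count / vcpus_per_core)  # 1 vCPU per VM
    servers_for_cpu = math.ceil(cores_needed / cores_per_server)
    ram_needed_gb = vm_count * ram_per_vm_gb
    usable_ram_gb = ram_per_server_gb - host_reservation_gb
    servers_for_ram = math.ceil(ram_needed_gb / usable_ram_gb)
    # Size for whichever resource is the bottleneck, then add one
    # additional server for vSphere HA, as the table's note requires.
    return max(servers_for_cpu, servers_for_ram) + 1

# Example: 125 VMs on assumed 16-core servers with 128 GB RAM each
print(compute_layer(125, cores_per_server=16, ram_per_server_gb=128))
```

Whichever of CPU or memory demands more servers sets the cluster size; the HA spare is added on top of that minimum.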

VMware vSphere memory virtualization for VSPEX

VMware vSphere 5.1 has a number of advanced features that help to maximize performance and overall resource utilization. The most important of these are in the area of memory management. This section describes some of these features and the items you need to consider when using them in the environment.


In general, virtual machines on a single hypervisor consume memory as a pool of resources, as shown in Figure 6.

Figure 6. Hypervisor memory consumption

This basic concept is enhanced by understanding the technologies presented in this section.

Memory compression

Memory over-commitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vSphere host. Using sophisticated techniques, such as ballooning and transparent page sharing, vSphere is able to handle memory over-commitment without any performance degradation. However, if more memory than is present on the server is being actively used, vSphere might resort to swapping out portions of the memory of a virtual machine.

Non-Uniform Memory Access (NUMA)

vSphere uses a NUMA load-balancer to assign a home node to a virtual machine. Because memory for the virtual machine is allocated from the home node, memory access is local and provides the best performance possible. Applications that do not directly support NUMA also benefit from this feature.

Transparent page sharing

Virtual machines running similar operating systems and applications typically have similar sets of memory content. Page sharing enables the hypervisor to reclaim redundant copies of memory pages and keep only one copy, which reduces total host memory consumption. If most of your application virtual machines run the same operating system and application binaries, total memory usage can be reduced to increase consolidation ratios.

Memory ballooning

By using a balloon driver loaded in the guest operating system, the hypervisor can reclaim host physical memory if memory resources are under contention. This is done with little to no impact to the performance of the application.

Memory configuration guidelines

This section provides guidelines for allocating memory to virtual machines. The guidelines outlined here take into account vSphere memory overhead and the virtual machine memory settings.

vSphere memory overhead

There is some associated overhead for the virtualization of memory resources. The memory space overhead has two components:

The fixed system overhead for the VMkernel.

Additional overhead for each virtual machine.

Memory overhead depends on the number of virtual CPUs and configured memory for the guest operating system.

Allocating memory to virtual machines

The proper sizing for virtual machine memory in VSPEX architectures is based on many factors. With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments for optimal results.
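As a rough illustration of how the two overhead components factor into host memory sizing, consider the sketch below. The overhead figures are assumptions chosen for the example, not VMware's published values; actual overhead depends on vCPU count and configured guest memory, as noted above.

```python
# Illustrative only: overhead values vary by vSphere version and VM
# configuration; consult VMware's documentation for actual numbers.
VMKERNEL_OVERHEAD_MB = 2048   # assumed fixed VMkernel footprint
PER_VM_OVERHEAD_MB = 150      # assumed overhead for a 1-vCPU / 2 GB VM

def host_memory_required_mb(vm_count, vm_ram_mb=2048):
    """Guest RAM plus virtualization overhead for one host (sketch)."""
    return (VMKERNEL_OVERHEAD_MB
            + vm_count * (vm_ram_mb + PER_VM_OVERHEAD_MB))

print(host_memory_required_mb(30))  # 30 VMs of 2 GB each
```

The point of the exercise is that per-VM overhead accumulates: at 30 VMs the assumed overhead alone adds several gigabytes beyond the guests' configured memory.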


Network configuration guidelines

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines outlined here take into account Jumbo Frames, VLANs, and Link Aggregation Control Protocol (LACP) on EMC unified storage. For detailed network resource requirements, refer to Table 5.

Table 5. Hardware resources for network

Network infrastructure
  Minimum switching capacity: two physical switches; six 1 GbE ports per vSphere server; one 1 GbE port per Control Station for management; four 1 GbE ports per Data Mover for data
  Notes: Redundant LAN configuration.

Note The solution may use 1 Gb or 10 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.

It is a best practice to isolate network traffic so that the traffic between hosts and storage, the traffic between hosts and clients, and management traffic all move over isolated networks. In some cases, physical isolation may be required for regulatory or policy compliance reasons, but in many cases logical isolation using VLANs is sufficient. This solution calls for a minimum of three VLANs for the following usage:

Client access

Storage

Management


Figure 7 depicts the VLANs.

Figure 7. Required networks

Note Figure 7 demonstrates the network connectivity requirements for a VNX array using 10 GbE connections. A similar topology should be created when using 1 GbE network connections.

The client access network is for users of the system, or clients, to communicate with the infrastructure. The Storage Network is used for communication between the compute layer and the storage layer. The Management Network is used for administrators to have a dedicated way to access the management connections on the storage array, network switches, and hosts.

Note Some best practices call for additional network isolation for cluster traffic, virtualization layer communication, and other features. These additional networks may be implemented if necessary, but they are not required.

Enable jumbo frames

This solution requires an MTU of 9000 (jumbo frames) for efficient storage and migration traffic.
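A back-of-the-envelope comparison of per-frame payload efficiency shows why jumbo frames help storage traffic. The header sizes below assume IPv4 and TCP without options plus standard Ethernet framing; real overhead varies with encapsulation.

```python
# Rough per-frame payload efficiency for TCP-based storage traffic.
# Assumes 40 bytes of IPv4 + TCP headers and 18 bytes of Ethernet
# framing (header, FCS); no VLAN tag or TCP options.
def payload_efficiency(mtu, l2_overhead=18, l3l4_overhead=40):
    payload = mtu - l3l4_overhead
    return payload / (mtu + l2_overhead)

print(f"standard frames: {payload_efficiency(1500):.1%}")
print(f"jumbo frames:    {payload_efficiency(9000):.1%}")
```

Beyond the modest efficiency gain, larger frames mean roughly one sixth as many frames (and per-frame interrupts) for the same NFS throughput, which is where most of the benefit comes from.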


Link aggregation

A link aggregation resembles an Ethernet channel but uses the IEEE 802.3ad Link Aggregation Control Protocol (LACP) standard, which supports aggregations of two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, traffic fails over to another port, and all network traffic is distributed across the active links.

Storage configuration guidelines

This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance.

vSphere allows more than one method of utilizing storage when hosting virtual machines. The solution described below was tested using NFS, and the storage layout described adheres to all current best practices. A customer or architect with the relevant background can make modifications based on their understanding of the system usage and load, if required.

Table 6 lists the hardware resources that are used for storage.

Table 6. Hardware resources for storage

VNX shared storage
  Common: two Data Movers (active/standby); four 1 GbE interfaces per Data Mover; one 1 GbE interface per Control Station for management
  For 125 virtual machines: EMC VNX5300 with seventy-five 300 GB 15k rpm 3.5-inch SAS drives, plus three 300 GB 15k rpm 3.5-inch SAS drives as hot spares
  For 250 virtual machines: EMC VNX5500 with one hundred fifty 300 GB 15k rpm 3.5-inch SAS drives, plus six 300 GB 15k rpm 3.5-inch SAS drives as hot spares

Note The solution may use 1 Gb or 10 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.


VMware vSphere storage virtualization for VSPEX

VMware ESXi provides host-level storage virtualization. It virtualizes the physical storage and presents the virtualized storage to the virtual machines.

A virtual machine stores its operating system, and all other files related to its activities, in a virtual disk. The virtual disk itself is one or more files. VMware uses a virtual SCSI controller to present virtual disks to the guest operating system running inside the virtual machine.

A datastore is where virtual disks reside. Depending on the type used, it can be either a VMware Virtual Machine File system (VMFS) datastore, or an NFS datastore. An additional option, Raw Device Mapping, allows the virtual infrastructure to connect a physical device directly to a virtual machine.

Figure 8. VMware virtual disk types

VMFS

VMFS is a cluster file system that provides storage virtualization optimized for virtual machines. It can be deployed over any SCSI-based local or network storage.

Raw Device Mapping

VMware also provides a mechanism named Raw Device Mapping (RDM). RDM allows a virtual machine to directly access a volume on the physical storage, and can only be used with Fibre Channel or iSCSI.

NFS

VMware supports using NFS file systems from an external NAS storage system or device as a virtual machine datastore.


Storage layout for 125 virtual machines

Figure 9 shows the physical disk layout for 125 virtual machines.

Figure 9. Storage layout for 125 virtual machines

The reference architecture uses the following configuration:

Seventy 300 GB SAS disks are allocated to a block-based storage pool.

Note System drives are specifically excluded from the pool, and are not used for additional storage.

If more capacity is required, larger drives may be substituted. To meet the load recommendations, the drives all need to be 15k rpm and the same size. If differing sizes are utilized, storage layout algorithms may give sub-optimal results.

Three 300 GB SAS disks are configured as hot spares.


Optionally, you can configure up to 10 flash drives as array FAST Cache. LUNs or storage pools holding virtual machines with higher-than-average I/O requirements can benefit from enabling the FAST Cache feature. These drives are not considered a required part of the solution, and additional licensing may be required to use the FAST Suite.

If the FAST Suite has been purchased and multiple drive types have been implemented, FAST VP may be enabled to automatically tier data to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1-GB increments while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation.

At least one hot spare disk is allocated for every 30 disks of a given type.

At least two NFS shares are allocated to the vSphere cluster from a single storage pool to serve as datastores for the virtual servers.
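The FAST VP behavior described above, ranking 1 GB slices by access frequency and promoting the hottest slices to the fastest tier with free capacity, can be sketched as follows. This is a deliberate simplification for illustration: the real FAST VP relocation runs as a scheduled maintenance operation with more sophisticated policies.

```python
# Simplified sketch of FAST VP-style slice placement (illustrative only).
# slice_heat maps a 1 GB slice ID to its recent access count;
# tier_capacity_gb lists tier capacities, fastest tier first.
def place_slices(slice_heat, tier_capacity_gb):
    """Return {slice_id: tier_index}, hottest slices on the fastest tier."""
    placement = {}
    ranked = sorted(slice_heat, key=slice_heat.get, reverse=True)
    tier, used = 0, 0
    for slice_id in ranked:
        # Spill to the next slower tier once the current one is full.
        while tier < len(tier_capacity_gb) and used >= tier_capacity_gb[tier]:
            tier, used = tier + 1, 0
        placement[slice_id] = tier
        used += 1  # each slice is 1 GB
    return placement

heat = {"a": 900, "b": 20, "c": 500, "d": 5}
print(place_slices(heat, tier_capacity_gb=[2, 10]))
```

With only 2 GB of capacity in the fast tier, the two hottest slices land there and the cooler slices settle on the lower tier, mirroring the promotion/demotion behavior described above.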


Storage layout for 250 virtual machines

Figure 10 shows the physical disk layout for 250 virtual machines.

Figure 10. Storage layout for 250 virtual machines


The reference architecture uses the following configuration:

One hundred forty-five 300 GB SAS disks are allocated to a block-based storage pool.

Note System drives are specifically excluded from the pool, and are not used for additional storage.

If more capacity is required, larger drives may be substituted. To meet the load recommendations, the drives all need to be 15k rpm and the same size. If differing sizes are utilized, storage layout algorithms may give sub-optimal results.

Six 300 GB SAS disks are configured as hot spares.

Optionally, you can configure up to 20 flash drives as FAST Cache in the array. These drives are not considered a required part of the solution, and additional licensing may be required in order to use the FAST Suite.

If the FAST Suite has been purchased and multiple drive types have been implemented, FAST VP may be enabled to automatically tier data to leverage differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 1 GB increments, while infrequently accessed data can be migrated to a lower tier for cost-efficiency. This rebalancing of 1 GB data units, or slices, is done as part of a regularly scheduled maintenance operation.

At least one hot spare disk is allocated for every 30 disks of a given type.

At least two NFS shares are allocated to the vSphere cluster from each storage pool to serve as datastores for the virtual servers.
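The hot-spare rule above can be expressed as a quick check. This is an illustrative calculation under the stated one-spare-per-30-disks rule, not an EMC sizing tool:

```python
import math

def min_hot_spares(disk_count, disks_per_spare=30):
    """Minimum hot spares: at least one per 30 disks of a given type."""
    return math.ceil(disk_count / disks_per_spare)

# 145 SAS disks in the 250-virtual-machine pool require a minimum of
# 5 spares; the reference configuration provisions 6, which satisfies
# the rule with headroom.
print(min_hot_spares(145))  # 5
```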


High availability and failover

Overview

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it can survive single-unit failures with little or no impact on business operations.

Virtualization layer

Configure high availability in the virtualization layer, and enable the hypervisor to automatically restart virtual machines that fail. Figure 11 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 11. High availability at the virtualization layer

Implementing high availability at the virtualization layer ensures that, even in the event of a hardware failure, the infrastructure attempts to keep as many services running as possible.

Compute layer

While the choice of servers to implement in the compute layer is flexible, use enterprise-class servers designed for the datacenter. This type of server has redundant power supplies, as shown in Figure 12. Connect these to separate power distribution units (PDUs) in accordance with your server vendor's best practices.

Figure 12. Redundant power supplies


Configure high availability in the virtualization layer. This means that the compute layer must be configured with enough resources that the total number of available resources meets the needs of the environment, even with a server failure, as demonstrated in Figure 11.

Network layer

The advanced networking features of the VNX family provide protection against network connection failures at the array. Each vSphere host has multiple connections to user and storage Ethernet networks to guard against link failures, as shown in Figure 13. Spread these connections across multiple Ethernet switches to guard against component failure in the network.

Figure 13. Network layer high availability (VNX)

By ensuring that there are no single points of failure in the network layer, the compute layer can continue to access storage and communicate with users even if a component fails.


Storage layer

The VNX family is designed for five 9s (99.999 percent) availability by using redundant components throughout the array. All of the array components are capable of continued operation in case of hardware failure. The RAID disk configuration on the array protects against data loss caused by individual disk failures, and the available hot spare drives can be dynamically allocated to replace a failing disk, as shown in Figure 14.

Figure 14. VNX series high availability

EMC storage arrays are designed to be highly available by default. When configured according to the directions in their installation guides, no single-unit failure results in data loss or unavailability.


Backup and recovery configuration guidelines

Overview

This section provides guidelines for setting up backup and recovery for this VSPEX solution. It describes the backup characteristics of the environment and the backup layout.

Backup characteristics

The solution is sized with the application environment profile listed in Table 7.

Table 7. Profile characteristics

Profile characteristic | Value
Number of users | 1,250 for 125 virtual machines; 2,500 for 250 virtual machines
Number of virtual machines | 125 (20% DB, 80% unstructured); 250 (20% DB, 80% unstructured)
Exchange data | 1.2 TB (1 GB mailbox per user) for 125 virtual machines; 2.5 TB (1 GB mailbox per user) for 250 virtual machines
SharePoint data | 0.6 TB for 125 virtual machines; 1.25 TB for 250 virtual machines
SQL Server data | 0.6 TB for 125 virtual machines; 1.25 TB for 250 virtual machines
User data | 6.1 TB (5.0 GB per user) for 125 virtual machines; 25 TB (10.0 GB per user) for 250 virtual machines

Daily change rate for the applications:
Exchange data | 10%
SharePoint data | 2%
SQL Server data | 5%
User data | 2%

Retention per data type:
All DB data | 14 dailies
User data | 30 dailies, 4 weeklies, 1 monthly


Backup layout

Avamar provides various deployment options depending on the specific use case and the recovery requirements. In this solution, Avamar and Data Domain are deployed and managed as a single solution. Users back up the unstructured user data directly to the Avamar system for simple file-level recovery. The database and virtual machine images are managed by the Avamar software but directed to the Data Domain system through the embedded DD Boost client library. This backup solution unifies the backup process with industry-leading deduplication backup software and storage, and achieves the highest levels of performance and efficiency.

Sizing guidelines

Overview

The following sections define the reference workload used to size and implement the VSPEX architectures, provide guidance on how to correlate that reference workload to actual customer workloads, and describe how that correlation may change the final delivery from the server and network perspective.

The storage definition can be modified by adding drives for greater capacity and performance, and by adding features such as FAST Cache and FAST VP. The disk layouts provide support for the appropriate number of virtual machines at the defined performance level and for typical operations such as snapshots. Decreasing the number of recommended drives or stepping down to a smaller array type can result in lower IOPS per virtual machine, and a reduced user experience caused by higher response times.

Reference workload

When you move an existing server into a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, as validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a pre-defined idea of what a virtual machine should be. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.


Defining the reference workload

To simplify the discussion, we have defined a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can extrapolate which reference architecture to choose.

For the VSPEX solutions, the reference workload is defined as a single virtual machine. Table 8 lists the characteristics of this virtual machine.

Table 8. Virtual machine characteristics

Characteristic | Value
Virtual machine operating system | Microsoft Windows Server 2012 Datacenter Edition
Virtual processors per virtual machine | 1
RAM per virtual machine | 2 GB
Available storage capacity per virtual machine | 100 GB
I/O operations per second (IOPS) per virtual machine | 25
I/O pattern | Random
I/O read/write ratio | 2:1

This specification for a virtual machine is not intended to represent any specific application. Rather, it represents a single common point of reference against which other virtual machines can be measured.

Applying the reference workload

Overview

When considering an existing server that will move into a virtual infrastructure, you have the opportunity to gain efficiency by right-sizing the virtual hardware resources assigned to that system.

The reference architectures create a pool of resources sufficient to host a target number of reference virtual machines with the characteristics shown in Table 8. The customer virtual machines may not exactly match these specifications. In that case, define each specific customer virtual machine as the equivalent of some number of reference virtual machines, and assume those reference virtual machines are in use in the pool. Continue to provision virtual machines from the resource pool until no resources remain.
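The mapping from customer requirements to equivalent reference virtual machines can be sketched as follows. This is an illustrative helper based on the reference virtual machine characteristics in Table 8 (1 vCPU, 2 GB RAM, 25 IOPS, 100 GB), not an EMC sizing tool:

```python
import math

# Reference virtual machine characteristics (Table 8):
# 1 vCPU, 2 GB RAM, 25 IOPS, 100 GB capacity.
RVM_CPU, RVM_MEM_GB, RVM_IOPS, RVM_CAP_GB = 1, 2, 25, 100

def equivalent_rvms(vcpus, mem_gb, iops, cap_gb):
    """Equivalent reference VMs for one customer VM: the highest
    per-resource requirement, with each value rounded up."""
    return max(
        math.ceil(vcpus / RVM_CPU),
        math.ceil(mem_gb / RVM_MEM_GB),
        math.ceil(iops / RVM_IOPS),
        math.ceil(cap_gb / RVM_CAP_GB),
    )

# A server needing 1 vCPU, 3 GB RAM, 15 IOPS, and 30 GB is the
# equivalent of two reference VMs, because memory dominates.
print(equivalent_rvms(1, 3, 15, 30))  # 2
```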


Example 1: Custom-built application

A small custom-built application server needs to move into this virtual infrastructure. The physical hardware that supports the application is not fully utilized. A careful analysis of the existing application reveals that the application can use one processor and needs 3 GB of memory to run normally. The I/O workload ranges from 4 IOPS at idle to a peak of 15 IOPS when busy. The entire application consumes about 30 GB of local hard drive storage.

Based on these numbers, the following resources are needed from the resource pool:

CPU resources for one virtual machine

Memory resources for two virtual machines

Storage capacity for one virtual machine

I/Os for one virtual machine

In this example, an appropriate virtual machine uses the resources of two of the reference virtual machines. If the original pool had the resources to provide 125 reference virtual machines, the resources for 123 reference virtual machines remain.

Example 2: Point of sale system

The database server for a customer's point of sale system needs to move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average busy cycle.

The following are the requirements to virtualize this application:

CPUs of four reference virtual machines

Memory of eight reference virtual machines

Storage of two reference virtual machines

I/Os of eight reference virtual machines

In this case, the one appropriate virtual machine uses the resources of eight reference virtual machines. Implementing this one machine on a pool for 125 reference virtual machines consumes the resources of eight reference virtual machines, and leaves resources for 117 reference virtual machines.

Example 3: Web server

The customer's web server needs to move into this virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle.

The following are the requirements to virtualize this application:

CPUs of two reference virtual machines

Memory of four reference virtual machines

Storage of one reference virtual machine

I/Os of two reference virtual machines


In this case, the one appropriate virtual machine uses the resources of four reference virtual machines. If this is implemented on a resource pool for 125 reference virtual machines, resources for 121 reference virtual machines remain.

Example 4: Decision-support database

The database server for a customer's decision-support system needs to move into this virtual infrastructure. It is currently running on a physical system with 10 CPUs and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an average busy cycle.

The following are the requirements to virtualize this application:

CPUs of 10 reference virtual machines

Memory of 32 reference virtual machines

Storage of 52 reference virtual machines

I/Os of 28 reference virtual machines

In this case, the one virtual machine uses the resources of 52 reference virtual machines. If this is implemented on a resource pool for 125 reference virtual machines, resources for 73 reference virtual machines remain.

Summary of examples

The four examples illustrate the flexibility of the resource pool model. In all four cases, the workloads simply reduce the amount of available resources in the pool. All four examples can be implemented on the same virtual infrastructure with an initial capacity for 125 reference virtual machines, and resources for 59 reference virtual machines remain in the resource pool, as shown in Figure 15.

Figure 15. Resource pool flexibility

In more advanced cases, there may be tradeoffs between memory and I/O, or other relationships where increasing the amount of one resource decreases the need for another. In these cases, the interactions between resource allocations become highly complex and are outside the scope of this document. Once the change in resource balance has been examined and the new level of requirements is known, these virtual machines can be added to the infrastructure using the method described in the examples.
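The pool bookkeeping in the four examples can be sketched as a simple running subtraction; the per-example values are taken from the text above:

```python
# Equivalent reference VMs consumed by Examples 1 through 4.
example_needs = {
    "custom-built application": 2,
    "point of sale database": 8,
    "web server": 4,
    "decision-support database": 52,
}

pool = 125  # initial capacity, in reference virtual machines
for name, need in example_needs.items():
    pool -= need
    print(f"after {name}: {pool} reference VMs remain")

print(pool)  # 59 remain, matching Figure 15
```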


Implementing the reference architectures

Overview

The reference architectures require a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are presented as general requirements that are independent of any particular implementation. This section describes some considerations for implementing the requirements.

Resource types

The reference architectures define the hardware requirements for the solution in terms of four basic types of resources:

CPU resources

Memory resources

Network resources

Storage resources

This section describes the resource types, how they are used in the reference architecture, and key considerations for implementing them in a customer environment.

CPU resources

The architectures define the number of CPU cores that are required, but not a specific type or configuration. New deployments should use recent revisions of common processor technologies; it is assumed that these perform as well as, or better than, the systems used to validate the solution.

In any running system, it is important to monitor the utilization of resources and adapt as needed. The reference virtual machine and required hardware resources in the reference architectures assume that there will be no more than four virtual CPUs for each physical processor core (a 4:1 ratio). In most cases, this provides an appropriate level of resources for the hosted virtual machines; however, this ratio may not be appropriate in all use cases. Monitor CPU utilization at the hypervisor layer to determine if more resources are required.
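The 4:1 vCPU-to-core assumption translates into a simple lower bound on physical cores. This is an illustrative sketch of that arithmetic, not a sizing guarantee:

```python
import math

def min_physical_cores(total_vcpus, vcpus_per_core=4):
    """Lower bound on physical cores at the assumed 4:1
    vCPU-to-core consolidation ratio."""
    return math.ceil(total_vcpus / vcpus_per_core)

# 250 reference VMs at 1 vCPU each need at least 63 physical cores.
print(min_physical_cores(250))  # 63
```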

Memory resources

Each virtual server in the reference architecture is defined to have 2 GB of memory. In a virtual environment, because of budget constraints, it is common to provision virtual machines with more memory in aggregate than the hypervisor physically has. This memory over-commitment technique takes advantage of the fact that each virtual machine does not fully use the amount of memory allocated to it. Oversubscribing memory to some degree makes business sense, but the administrator must proactively monitor the oversubscription rate so that the bottleneck does not shift away from the server and become a burden on the storage subsystem.

If VMware ESXi runs out of memory for the guest operating systems, paging takes place, resulting in extra I/O activity going to the vswap files. If the storage subsystem is sized correctly, occasional spikes due to vswap activity may not cause performance issues, as transient bursts of load can be absorbed. However, if the memory oversubscription rate is so high that the storage subsystem is severely impacted by a continuing overload of vswap activity, more disks need to be added to meet the demand for increased performance. It is then up to the administrator to decide whether it is more cost-effective to add more physical memory to the server or to increase the amount of storage. With memory modules being a commodity, the former option is likely less expensive.

This solution is validated with statically assigned memory and no over-commitment of memory resources. If memory over-commitment is used in a real-world environment, regularly monitor the system memory utilization and the associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results.

Network resources

The reference architecture outlines the minimum needs of the system. If additional bandwidth is needed, it is important to add capability at both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports, and can add ports using EMC UltraFlex I/O modules.

For reference purposes in the validated environment, EMC assumes that each virtual machine generates 25 I/Os per second with an average size of 8 KB. This means that each virtual machine is generating at least 200 KB/s of traffic on the storage network. For an environment rated for 100 virtual machines, this comes out to a minimum of approximately 20 MB/sec. This is well within the bounds of modern networks. However, this does not consider other operations. For example, additional bandwidth is needed for:

User network traffic

Virtual machine migration

Administrative and management operations

The requirements for each of these vary, depending on how the environment is being used. It is not practical to provide concrete numbers in this context. However, the network described in the reference architecture for each solution should be sufficient to handle average workloads for the above use cases.

Regardless of the network traffic requirements, always have at least two physical network connections that are shared for a logical network so that a single link failure does not affect the availability of the system. The network should be designed so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload.
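The baseline storage-traffic estimate above (25 IOPS per virtual machine at an 8 KB average I/O size) can be sketched as follows; decimal units (1 MB = 1,000 KB) are assumed, and the result ignores user traffic, migration, and management overhead:

```python
def min_storage_bandwidth_mb_s(vm_count, iops_per_vm=25, io_kb=8):
    """Minimum steady-state storage traffic in MB/s (decimal units),
    excluding user, migration, and management traffic."""
    return vm_count * iops_per_vm * io_kb / 1000.0

print(min_storage_bandwidth_mb_s(100))  # 20.0 MB/s, as in the text
print(min_storage_bandwidth_mb_s(250))  # 50.0 MB/s for the full solution
```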

Storage resources

The reference architectures contain layouts for the disks used in the validation of the system. Each layout balances the available storage capacity with the performance capability of the drives. There are a few factors to consider when examining storage sizing. Specifically, the array has a collection of disks assigned to a storage pool. From that storage pool, you provision datastores to the VMware vSphere cluster. Each layer has a specific configuration that is defined for the solution and documented in the deployment guide.

It is generally acceptable to replace drive types with a type that has more capacity and the same performance characteristics, or with one that has higher performance characteristics and the same capacity. Similarly, it is acceptable to change the placement of drives in the drive shelves in order to comply with updated or new drive shelf arrangements.

In other cases, where there is a need to deviate from the proposed number and type of drives specified, or from the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system.

Implementation summary

The requirements stated in the reference architecture are what EMC considers the minimum set of resources to handle the workload, based on the stated definition of a reference virtual server. In any customer implementation, the load on the system varies over time as users interact with it. If the customer virtual machines differ significantly from the reference definition and deviate in the same resource, you may need to add more of that resource to the system.


Quick assessment

Overview

An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment.

First, summarize the applications that are planned for migration into the VSPEX Private Cloud. For each application, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual machines that are required from the resource pool. Applying the reference workload provides examples of this process.

Fill out a row in the worksheet for each application, as listed in Table 9.

Table 9. Blank worksheet row

Application | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Equivalent Reference Virtual Machines
Example Application: Resource Requirements | | | | |
Example Application: Equivalent Reference Virtual Machines | | | | |

Fill out the resource requirements for the application. The row requires inputs for four different resources: CPU, memory, IOPS, and capacity.

CPU requirements

Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of the virtualization operation suggests a one-to-one mapping between physical CPU cores and virtual CPU cores, regardless of the physical CPU utilization. In reality, consider whether the target application can effectively use all of the CPUs that are presented. Use a performance-monitoring tool, such as ESXTop on vSphere hosts, to examine the CPU utilization counter for each CPU. If they are equivalent, implement that number of virtual CPUs when moving into the virtual infrastructure. However, if some CPUs are used and some are not, consider decreasing the number of virtual CPUs required.

In any operation involving performance monitoring, it is a best practice to collect data samples for a period of time that includes all of the operational use cases of the system. Use either the maximum or the 95th percentile value of the resource requirements for planning purposes.

Memory requirements

Server memory plays a key role in ensuring application functionality and performance, and each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the current memory available to the system, and monitor the free memory by using a performance-monitoring tool, such as VMware ESXTop, to determine whether it is being used efficiently.

In any operation involving performance monitoring, it is a best practice to collect data samples for a period of time that includes all of the operational use cases of the system. Then use either the maximum or the 95th percentile value of the resource requirements for planning purposes.

Storage performance requirements

The storage performance requirements for an application are usually the least understood aspect of performance. Three components become important when discussing the I/O performance of a system. The first is the number of requests coming in, or IOPS. Equally important is the size of each request, or I/O size: a request for 4 KB of data is significantly easier and faster to process than a request for 4 MB of data. That distinction becomes important with the third factor, the average I/O response time, or I/O latency.

I/O operations per second (IOPS)

The reference virtual machine calls for 25 I/O operations per second. To monitor this on an existing system, use a performance-monitoring tool such as VMware ESXTop, which provides several useful counters. The most common are:

Physical Disk NFS Volume\Commands/sec

Physical Disk NFS Volume\Reads/sec

Physical Disk NFS Volume\Writes/sec

Physical Disk NFS Volume\Average Guest MilliSec/Command

The reference virtual machine assumes a 2:1 read/write ratio. Use these counters to determine the total number of IOPS, and the approximate ratio of reads to writes, for the customer application.

I/O size

The I/O size is important because smaller I/O requests are faster and easier to process than large I/O requests. The reference virtual machine assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are powers of 2: 4 KB, 8 KB, 16 KB, 32 KB, and so on are common. The performance counter reports a simple average, so it is common to see values such as 11 KB or 15 KB instead of the common I/O sizes.

The reference virtual machine assumes an 8 KB I/O size. If the average customer I/O size is less than 8 KB, use the observed IOPS number. However, if the average I/O size is significantly higher, EMC recommends applying a scaling factor to account for the larger I/O size. A safe estimate is to divide the I/O size by 8 KB and use that as the factor. For example, if the application is using mostly 32 KB I/O requests, use a factor of four (32 KB / 8 KB = 4). If that application is doing 100 IOPS at 32 KB, the factor indicates that you should plan for 400 IOPS, since the reference virtual machine assumed 8 KB I/O sizes.
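The scaling-factor guidance above can be sketched as a small helper. This is illustrative, assuming the 8 KB reference I/O size stated in the text:

```python
def scaled_iops(observed_iops, avg_io_kb, reference_io_kb=8):
    """Scale observed IOPS when the average I/O size exceeds the
    8 KB reference size; smaller I/O sizes keep the observed value."""
    factor = max(1.0, avg_io_kb / reference_io_kb)
    return observed_iops * factor

print(scaled_iops(100, 32))  # 32 KB I/O: factor of 4, plan for 400.0 IOPS
print(scaled_iops(100, 4))   # below 8 KB: unchanged, 100.0 IOPS
```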

I/O latency

The average I/O response time, or I/O latency, is a measurement of how quickly I/O requests are processed by the storage system. The VSPEX solutions are designed to meet a target average I/O latency of 20 ms. The recommendations in this document should allow the system to continue to meet that target; however, it is worthwhile to monitor the system and re-evaluate the resource pool utilization if needed. To monitor I/O latency, use the Physical Disk NFS Volume\Average Guest MilliSec/Command counter in ESXTop. If the I/O latency is continuously over the target, re-evaluate the virtual machines in the environment to ensure that they are not using more resources than intended.

Storage capacity requirements

The storage capacity requirement for a running application is usually the easiest resource to quantify. Determine how much space on disk the system is using, and add an appropriate factor to accommodate growth. For example, to virtualize a server that is currently using 40 GB of a 200 GB internal drive, with anticipated growth of approximately 20 percent over the next year, 48 GB are required. EMC also recommends reserving space for regular maintenance, patches, and swap files. In addition, some file systems, such as Microsoft NTFS, degrade in performance if they become too full.

With all of the resources defined, determine an appropriate value for the Equivalent Reference Virtual Machines line by using the relationships in Table 10. Round all values up to the closest whole number.

Table 10. Reference Virtual Machine resources

Resource   Value for Reference         Relationship between requirements and
           Virtual Machine (RVM)       Equivalent Reference Virtual Machines
CPU        1                           Equivalent RVMs = resource requirements
Memory     2                           Equivalent RVMs = (resource requirements)/2
IOPS       25                          Equivalent RVMs = (resource requirements)/25
Capacity   100                         Equivalent RVMs = (resource requirements)/100

For example, the point-of-sale system used in Example 2 earlier in this paper requires four CPUs, 16 GB of memory, 200 IOPS, and 200 GB of storage. This translates to four reference virtual machines of CPU, eight of memory, eight of IOPS, and two of capacity. Table 11 demonstrates how that machine fits into the worksheet row.
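The Table 10 relationships amount to dividing each requirement by the per-RVM value and rounding up; the highest per-resource result is the number of reference virtual machines the application needs. A minimal sketch (the dictionary key names are invented for illustration):

```python
import math

# Resource value for one Reference Virtual Machine (Table 10).
RVM = {"cpu": 1, "memory_gb": 2, "iops": 25, "capacity_gb": 100}

def equivalent_rvms(requirements):
    """Per-resource Equivalent Reference Virtual Machines, rounded up."""
    return {res: math.ceil(req / RVM[res]) for res, req in requirements.items()}

# Example 2 (point-of-sale system): 4 vCPUs, 16 GB, 200 IOPS, 200 GB.
pos = equivalent_rvms({"cpu": 4, "memory_gb": 16, "iops": 200, "capacity_gb": 200})
print(pos)                # {'cpu': 4, 'memory_gb': 8, 'iops': 8, 'capacity_gb': 2}
print(max(pos.values()))  # 8 reference virtual machines required
```

Taking the maximum across resources mirrors the worksheet rule of using the highest value in the row.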

Storage capacity requirements

Determining Equivalent Reference Virtual Machines


Table 11. Example worksheet row

Application: Example Application          CPU (Virtual CPUs)  Memory (GB)  IOPS  Capacity (GB)  Equivalent Reference Virtual Machines
Resource Requirements                     4                   16           200   200
Equivalent Reference Virtual Machines     4                   8            8     2              8

Use the highest value in the row to fill in the column for Equivalent Reference Virtual Machines. As shown below, eight Reference Virtual Machines are required.

Figure 16. Required resource from the reference virtual machine pool

Once the worksheet has been filled out for each application that the customer wants to migrate into the virtual infrastructure, compute the sum of the Equivalent Reference Virtual Machines column, as listed in Table 12, to calculate the total number of reference virtual machines required in the pool. In the example, the per-resource results of the Table 10 calculations are shown for clarity, along with the rounded-up value to use for each application.


Table 12. Example applications

                                            Server Resources                 Storage Resources
Application                                 CPU (Virtual CPUs)  Memory (GB)  IOPS  Capacity (GB)  Reference Virtual Machines

Example Application #1: Custom Built Application
  Resource Requirements                     1                   3            15    30
  Equivalent Reference Virtual Machines     1                   2            1     1              2

Example Application #2: Point of Sale System
  Resource Requirements                     4                   16           200   200
  Equivalent Reference Virtual Machines     4                   8            8     2              8

Example Application #3: Web Server
  Resource Requirements                     2                   8            50    25
  Equivalent Reference Virtual Machines     2                   4            2     1              4

Example Application #4: Decision Support Database
  Resource Requirements                     10                  64           700   5120
  Equivalent Reference Virtual Machines     10                  32           28    52             52

Total Equivalent Reference Virtual Machines                                                       66

The VSPEX virtual infrastructure solutions define discrete resource pool sizes. For this solution set, the pool can support 125 or 250 reference virtual machines. Figure 17 shows the 59 reference virtual machines that remain available after applying all four examples to the 125 virtual machine solution.


Figure 17. Aggregate resource requirements from the reference virtual machine pool

In the case of Table 12, the customer requires 66 reference virtual machines of capability from the pool. Therefore, the 125 virtual machine resource pool provides sufficient resources for the current needs as well as room for growth.
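Aggregating the worksheet can be sketched the same way: compute each application's equivalent RVMs, sum them, and pick the smallest pool size that fits. The figures below reproduce the Table 12 examples; this is an illustrative sketch, not a sizing tool.

```python
import math

# Per-RVM resource values (Table 10) and the discrete pool sizes
# defined by this solution set.
RVM = {"cpu": 1, "memory_gb": 2, "iops": 25, "capacity_gb": 100}
POOL_SIZES = (125, 250)

def app_rvms(req):
    """Equivalent Reference Virtual Machines for one worksheet row."""
    return max(math.ceil(v / RVM[k]) for k, v in req.items())

apps = {  # the four example applications from Table 12
    "custom_built":     {"cpu": 1,  "memory_gb": 3,  "iops": 15,  "capacity_gb": 30},
    "point_of_sale":    {"cpu": 4,  "memory_gb": 16, "iops": 200, "capacity_gb": 200},
    "web_server":       {"cpu": 2,  "memory_gb": 8,  "iops": 50,  "capacity_gb": 25},
    "decision_support": {"cpu": 10, "memory_gb": 64, "iops": 700, "capacity_gb": 5120},
}

total = sum(app_rvms(r) for r in apps.values())
pool = next(p for p in POOL_SIZES if p >= total)
print(total, pool)  # 66 125
```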

In most cases, the recommended hardware for servers and storage is sized appropriately based on the process described. However, in some cases there is a desire to further customize the hardware resources that are available to the system. A complete description of system architecture is beyond the scope of this document; however, additional customization can be done at this point.

Storage resources

In some applications, there is a need to separate application data from other workloads. The storage layouts in the VSPEX architectures put all of the virtual machines in a single resource pool. In order to achieve workload separation, purchase additional disk drives for the application workload and add them to a dedicated pool.

It is not appropriate to reduce the size of the main resource pool in order to support application isolation, or to reduce the capability of the pool. The storage layouts presented in the 125 and 250 virtual machine solutions are designed to balance many different factors in terms of high availability, performance, and data protection. Changing the components of the pool can have significant and difficult-to-predict impacts on other areas of the system.

Server resources

For the server resources in the VSPEX virtual infrastructure, it is possible to customize the hardware resources more effectively.

Figure 18. Customizing server resources

Fine tuning hardware resources


To do this, first total the resource requirements for the server components as shown in Table 13. Note the addition of a “Server Component Totals” line at the bottom of the worksheet. In this line, add up the server resource requirements from the applications in the table.

Table 13. Server resource component totals

                                            Server Resources                 Storage Resources
Application                                 CPU (Virtual CPUs)  Memory (GB)  IOPS  Capacity (GB)  Reference Virtual Machines

Example Application #1: Custom Built Application
  Resource Requirements                     1                   3            15    30
  Equivalent Reference Virtual Machines     1                   2            1     1              2

Example Application #2: Point of Sale System
  Resource Requirements                     4                   16           200   200
  Equivalent Reference Virtual Machines     4                   8            8     2              8

Example Application #3: Web Server
  Resource Requirements                     2                   8            50    25
  Equivalent Reference Virtual Machines     2                   4            2     1              4

Example Application #4: Decision Support Database
  Resource Requirements                     10                  64           700   5120
  Equivalent Reference Virtual Machines     10                  32           28    52             52

Total Equivalent Reference Virtual Machines                                                       66

Server Resource Component Totals            17                  91


Chapter 5 VSPEX Configuration Guidelines

This chapter presents the following topics:

Configuration overview

Pre-deployment tasks

Customer configuration data

Prepare switches, connect network, and configure switches

Prepare and configure storage array

Install and configure vSphere infrastructure

Install and configure SQL Server database

Install and configure VMware vCenter Server

Summary


Configuration overview

The deployment process is divided into the stages shown in Table 14, which also references the sections where the relevant procedures are provided. Upon completion of the deployment, the VSPEX infrastructure is ready for integration with the existing customer network and server infrastructure.

Table 14. Deployment process overview

Stage  Description                                              Reference
1      Verify prerequisites                                     Pre-deployment tasks
2      Obtain the deployment tools                              Deployment prerequisites
3      Gather customer configuration data                       Customer configuration data
4      Rack and cable the components                            Refer to the vendor documentation
5      Configure the switches and networks, and connect to      Prepare switches, connect network, and
       the customer network                                     configure switches
6      Install and configure the VNX                            Prepare and configure storage array
7      Configure virtual machine datastores                     Prepare and configure storage array
8      Install and configure the servers                        Install and configure vSphere infrastructure
9      Set up SQL Server (used by VMware vCenter)               Install and configure SQL Server database
10     Install and configure vCenter and virtual machine        Install and configure VMware vCenter Server
       networking

Deployment process


Pre-deployment tasks

Pre-deployment tasks include procedures whose results are needed at the time of installation but that are not directly related to environment installation and configuration, such as collecting hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required onsite.

Table 15. Tasks for pre-deployment

Task              Description                                                        Reference
Gather documents  Gather the related documents listed in Appendix C. These are      References: EMC documentation
                  used throughout this document to provide detail on setup
                  procedures and deployment best practices for the various
                  components of the solution.
Gather tools      Gather the required and optional tools for the deployment. Use    Table 16: Deployment prerequisites
                  Table 16 to confirm that all equipment, software, and             checklist
                  appropriate licenses are available before starting the
                  deployment process.
Gather data       Collect the customer-specific configuration data for networking,  Appendix B
                  naming, and required accounts. Enter this information into the
                  customer configuration data sheet for reference during the
                  deployment process.

Table 16 itemizes the hardware, software, and licenses required to configure the solution. For additional information, refer to Table 2 and Table 3.

Table 16. Deployment prerequisites checklist

Requirement  Description                                                            Reference
Hardware     Physical servers to host virtual servers: sufficient physical         Table 2: Solution hardware
             server capacity to host 125 or 250 virtual servers
             VMware vSphere 5.1 servers to host virtual infrastructure servers
             Note: This requirement may be covered by the existing infrastructure.
             Networking: switch port capacity and capabilities as required by
             the virtual server infrastructure
             EMC VNX5300 (125 virtual machines) or EMC VNX5500 (250 virtual
             machines): multiprotocol storage array with the required disk layout
Software     VMware ESXi 5.1 installation media
             VMware vCenter Server 5.1 installation media
             EMC VSI for VMware vSphere: Unified Storage Management                EMC Online Support
             EMC VSI for VMware vSphere: Storage Viewer                            EMC Online Support
             EMC vStorage API for Array Integration plug-in                        EMC Online Support
             Microsoft Windows Server 2008 R2 installation media (suggested OS
             for VMware vCenter)
             Microsoft SQL Server 2008 or newer installation media
             Note: This requirement may be covered by the existing infrastructure.
             Microsoft Windows Server 2012 Datacenter installation media
             (suggested guest OS for virtual machines)
Licenses     VMware vCenter 5.1 license key
             VMware ESXi 5.1 license keys
             Microsoft Windows Server 2008 R2 Standard (or higher) license keys
             Microsoft Windows Server 2012 Datacenter license keys
             Note: This requirement may be covered by an existing Microsoft Key
             Management Server (KMS).
             Microsoft SQL Server license key
             Note: This requirement may be covered by the existing infrastructure.


Customer configuration data

To reduce the onsite time, information such as IP addresses and hostnames should be assembled as part of the planning process.

Appendix B provides a table to maintain a record of relevant information. This form can be expanded or contracted as required, and information may be added, modified, and recorded as deployment progresses.

Additionally, complete the VNX File and Unified Worksheet, available on EMC Online Support, to record the most comprehensive array-specific information.

Prepare switches, connect network, and configure switches

This section provides the requirements for network infrastructure needed to support this architecture. Table 17 provides a summary of the tasks for switch and network configuration, and references for further information.

Table 17. Tasks for switch and network configuration

Task                              Description                                       Reference
Configure infrastructure network  Configure storage array and ESXi host             Prepare and configure storage array
                                  infrastructure networking.                        Install and configure vSphere infrastructure
Configure VLANs                   Configure private and public VLANs as required.   Your vendor's switch configuration guide
Complete network cabling          Connect the switch interconnect ports, the VNX
                                  ports, and the ESXi server ports.

Overview


For validated levels of performance and high availability, this solution requires the switching capacity listed in Table 2: Solution hardware. If the existing infrastructure meets these requirements, no new hardware is needed.

The infrastructure network requires redundant network links for each ESXi host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This configuration is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.

Figure 19 shows a sample redundant Ethernet infrastructure for this solution. The diagram illustrates the use of redundant switches and links to ensure that there are no single points of failure.

Figure 19. Sample Ethernet network architecture

Prepare network switches

Configure infrastructure network


Ensure adequate switch ports for the storage array and ESXi hosts that are configured with a minimum of three VLANs for:

Virtual machine networking and ESXi management (customer-facing networks, which may be separated if desired)

NFS networking (private network)

vMotion (private network)

Ensure that all servers, storage arrays, switch interconnects, and switch uplinks have redundant connections, and are plugged into separate switching infrastructures. Ensure that there is complete connection to the existing customer network.

Note At this point, the new equipment is being connected to the existing customer network. Ensure that unforeseen interactions do not cause service issues on the customer network.

Prepare and configure storage array

Overview

This section describes how to configure the VNX storage array. In this solution, the VNX series provides Network File System (NFS) or Virtual Machine File System (VMFS) data storage for VMware hosts.

Table 18. Tasks for storage configuration

Task                                  Description                                    Reference
Set up initial VNX configuration      Configure the IP address information and       VNX5300 Unified Installation Guide
                                      other key parameters on the VNX.               VNX File and Unified Worksheet
                                                                                     Unisphere System Getting Started Guide
                                                                                     Your vendor's switch configuration guide
Provision storage for NFS datastores  Create NFS file systems to present to the
                                      ESXi servers as NFS datastores that host
                                      the virtual servers.

Prepare VNX

The VNX5300 Unified Installation Guide provides instructions on assembling, racking, cabling, and powering the VNX. For 250 virtual machines, refer to the VNX5500 Unified Installation Guide instead. There are no specific setup steps for this solution.


VNX configuration


Set up initial VNX configuration

After completing the initial VNX setup, you need to configure key information about the existing environment so that the storage array can communicate. Configure the following items in accordance with your IT datacenter policies and existing infrastructure information:

DNS

NTP

Storage network interfaces

Storage network IP address

CIFS services and Active Directory Domain membership

The reference documents listed in Table 18 provide more information on how to configure the VNX platform. Storage configuration guidelines provides more information on the disk layout.

Provision storage for NFS datastores

Complete the following steps in EMC Unisphere to configure NFS file systems on the VNX array to store virtual servers:

1. Create a block-based RAID 5 storage pool that consists of 70 (for 125 virtual machines) or 145 (for 250 virtual machines) 300 GB SAS drives.

a. Log on to EMC Unisphere.

b. Select the array that is to be used in this solution.

c. Click Storage > Storage Configuration > Storage Pools.

d. Select the Pools tab.

e. Click Create.

Note System drives are specifically excluded from the pool, and not used for additional storage.

Create your Hot Spare disks at this point. Refer to the EMC VNX5300 Unified Installation Guide for additional information.

Figure 9 depicts the target storage layout for the 125 virtual machine system.

Figure 10 depicts the target storage layout for the 250 virtual machine system.

2. Use the pool created in step 1, and provision LUNs and present them to the Data Mover using the system-defined NAS storage group.

a. Click Storage > LUNs.

b. Click Create.

Page 81: EMC VSPEX PRIVATE CLOUD · VSPEX Proven Infrastructure EMC VSPEX Abstract This document describes the EMC VSPEX Proven Infrastrucutre solution for Private …

VSPEX Configuration Guidelines

VMware vSphere 5.1 for up to 250 Virtual Machines Enabled by Microsoft Windows Server 2012, EMC VNX, and EMC Next-

Generation Backup

81

c. In the dialog box that appears, select the pool created in step 1. For User Capacity, select MAX. Set Number of LUNs to create to 50 (for 125 virtual machines) or 100 (for 250 virtual machines). LUNs of 268 GB are provisioned after this operation.

d. Click Hosts > Storage Groups.

e. Select ~filestorage.

f. Click Connect LUNs.

g. In the Available LUNs panel, select all the LUNs created in the previous steps. The Selected LUNs panel updates immediately.

After this step, a new Storage Pool for File is ready, from which multiple file systems can be created.

3. Create multiple file systems from the NAS pool to present to the ESXi servers as NFS datastores. The validated solution used five (for 125 virtual machines) or ten (for 250 virtual machines) 2.5 TB file systems from the pool. In a customer implementation, it may be appropriate to create logical separation between virtual machine groups by assigning some to one file system and others to a separate one. Where there is a need to deviate from the proposed number and type of drives, or from the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system.

a. Click Storage > Storage Configuration > File Systems.

b. Click Create.

c. In the dialog box that appears, select Create from Storage Pool, and set the Storage Capacity to 1250 GB (for 125 virtual machines) or 2500 GB (for 250 virtual machines).

d. Keep default settings.

Note To enable an NFS performance fix for VNX File that significantly reduces NFS write latency, the file systems must be mounted on the Data Mover with Direct Writes mode, as shown in Figure 20. Select Set Advanced Options to enable Direct Writes Enabled.


Figure 20. Direct Writes Enabled checkbox

4. Export the file systems using NFS, and give root access to ESXi servers.

a. Click Storage > Shared Folders > NFS.

b. Click Create.

c. In the dialog, add the IP addresses of all ESXi servers in Read/Write Hosts and Root Hosts.

FAST Cache configuration (optional)

To configure FAST Cache on the storage pools for this solution, complete the following steps:

5. Configure Flash drives as FAST Cache

a. To create FAST Cache, click Properties (in the dashboard of the Unisphere window) or Manage Cache (in the left-hand pane of the Unisphere window) to open the Storage System Properties dialog (shown in Figure 21).

b. Select the FAST Cache tab to view FAST Cache information.


Figure 21. Storage System Properties dialog box

c. Click Create to open the Create FAST Cache dialog box as shown in Figure 22.

The RAID Type field is displayed as RAID 1 when the FAST Cache has been created. The number of Flash drives can also be chosen from this screen. The bottom portion of the screen shows the Flash drives that are used for creating FAST Cache. You can choose the drives manually by selecting the Manual option.

d. Refer to Storage configuration guidelines to determine the number of Flash drives that are needed in this solution.

Note If a sufficient number of Flash drives are not available, FLARE displays an error message and FAST Cache cannot be created.


Figure 22. Create FAST Cache dialog box

6. Enable FAST Cache on the storage pool

For LUNs created in a storage pool, FAST Cache is configured at the storage pool level; all the LUNs created in the storage pool have FAST Cache enabled or disabled together. Configure this setting from the Advanced tab in the Create Storage Pool dialog shown in Figure 23. After FAST Cache is installed on the VNX series, it is enabled by default when a storage pool is created.


Figure 23. Advanced tab in the Create Storage Pool dialog

If the storage pool has already been created, use the Advanced tab in the Storage Pool Properties dialog to configure FAST Cache as shown in Figure 24.

Figure 24. Advanced tab in the Storage Pool Properties dialog

Note The FAST Cache feature on the VNX series array does not cause an instantaneous performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours during which the performance of the array steadily improves.

FAST VP configuration (optional)

To configure FAST VP for this solution, complete the following steps.

7. Configure FAST at the pool level

To view and manage FAST at the pool level, click Properties for a specific storage pool to open the Storage Pool Properties dialog. Figure 25 shows the tiering information for a specific FAST pool.


Figure 25. Storage Pool Properties dialog box

The Tier Status area shows FAST relocation information specific to the selected pool.

Select the scheduled relocation at the pool level from the Auto-Tiering list. This can be set to either Automatic or Manual.

In the Tier Details area, you can see the exact distribution of your data.

You can also connect to the array-wide Relocation Schedule using the button on the top right corner, which presents the Manage Auto-Tiering dialog box as shown in Figure 26.


Figure 26. Manage Auto-Tiering dialog box

From this status dialog, users can control the Data Relocation Rate. The default rate is set to Medium so as not to significantly affect host I/O.

Note FAST (Fully Automated Storage Tiering) is a completely automated tool. To this end, relocations can be scheduled to occur automatically. Schedule the relocations during off-hours to minimize any potential performance impact the relocations may cause.

8. Configure FAST at the LUN level (Optional)

Some FAST properties are managed at the LUN level.

a. Click Properties for a specific LUN.

b. In this dialog, select the Tiering tab to view tiering information for this single LUN, as shown in Figure 27.


Figure 27. LUN Properties dialog box

c. The Tier Details section displays the current distribution of slices within the LUN. Select the tiering policy at the LUN level from the Tiering Policy list.


Install and configure vSphere infrastructure

This section provides the requirements for the installation and configuration of the ESXi hosts and infrastructure servers required to support the architecture. Table 19 describes the tasks that must be completed.

Table 19. Tasks for server installation

Task                       Description                                          Reference
Install ESXi               Install the ESXi 5.1 hypervisor on the physical      vSphere Installation and Setup Guide
                           servers being deployed for the solution.
Configure ESXi networking  Configure ESXi networking, including NIC trunking,   vSphere Networking
                           VMkernel ports, virtual machine port groups, and
                           jumbo frames.
Connect VMware datastores  Connect the VMware datastores to the ESXi hosts      vSphere Storage Guide
                           deployed for the solution.

Upon initial power up of the servers being used for ESXi, confirm or enable the hardware-assisted CPU virtualization and hardware-assisted MMU virtualization settings in each server's BIOS. If the servers are equipped with a RAID controller, configuring mirroring on the local disks is recommended.

Boot the ESXi 5.1 installation media and install the hypervisor on each of the servers. ESXi hostnames, IP addresses, and a root password are required for installation. Appendix B provides appropriate values.

During the installation of VMware ESXi, a standard virtual switch (vSwitch) is created. By default, ESXi chooses only one physical NIC as a virtual switch uplink. To maintain redundancy and bandwidth requirements, an additional NIC must be added either by using the ESXi console or by connecting to the ESXi host from the vSphere Client.

Each VMware ESXi server should have multiple interface cards for each virtual network to ensure redundancy and provide for the use of network load balancing, link aggregation, and network adapter failover.

VMware ESXi networking configuration options, including load balancing, link aggregation, and failover, are described in vSphere Networking. Choose the appropriate load balancing option based on what the network infrastructure supports.

Create VMkernel ports as required, based on the infrastructure configuration:

- VMkernel port for NFS traffic

- VMkernel port for VMware vMotion



- Virtual server port groups (used by the virtual servers to communicate on the network)

vSphere Networking describes the procedure for configuring these settings. Refer to Appendix C for more information.
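As an illustrative sketch only (the vmnic, port group, and IP values are placeholders, not values from this guide), the uplink and VMkernel port configuration described above can also be performed from the ESXi shell with esxcli:

```shell
# Add a second physical NIC to the default vSwitch for redundancy
# (vmnic1 is a placeholder; use a NIC cabled to the correct network)
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Create a port group and a VMkernel port for NFS traffic
esxcli network vswitch standard portgroup add --portgroup-name=NFS --vswitch-name=vSwitch0
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.10.21 --netmask=255.255.255.0 --type=static

# Repeat the same pattern for a vMotion VMkernel port and for the
# virtual machine port groups
```

The same settings can be made from the vSphere Client as described in vSphere Networking; the CLI form is shown only to make the individual steps explicit.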

Jumbo frames

A jumbo frame is an Ethernet frame with a payload larger than the standard 1,500-byte Maximum Transmission Unit (MTU). The generally accepted maximum payload size for a jumbo frame is 9,000 bytes. Processing overhead is proportional to the number of frames, so enabling jumbo frames reduces processing overhead by reducing the number of frames to be sent, which increases network throughput. Jumbo frames must be enabled end to end, including on the network switches, the ESXi servers, and the VNX Data Movers.

Jumbo frames can be enabled on the ESXi server at two levels. To enable jumbo frames for all ports on a virtual switch, edit the MTU setting in the virtual switch properties from vCenter. To enable jumbo frames on specific VMkernel ports only, edit each VMkernel port under the network properties from vCenter.
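As a sketch with placeholder names (assuming the esxcli namespaces available in ESXi 5.x), the two levels described above correspond to the following commands:

```shell
# Enable jumbo frames for every port on a virtual switch
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

# Or enable jumbo frames on a specific VMkernel port only
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Confirm the resulting MTU values
esxcli network vswitch standard list
esxcli network ip interface list
```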

To enable jumbo frames on the VNX, complete the following steps:

1. In Unisphere, select Settings > Network > Settings for File.

2. Select the appropriate network interface from the Interfaces tab.

3. Click Properties.

4. Set the MTU size to 9000.

5. Click OK to apply the changes.

Jumbo frames may also need to be enabled on each network switch. Consult your switch configuration guide for instructions.
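The same MTU change can also be made from the VNX Control Station command line; this sketch assumes the server_ifconfig syntax for VNX file interfaces, with server_2 and cge0 as placeholder Data Mover and interface names:

```shell
# Set a 9000-byte MTU on a Data Mover network interface
server_ifconfig server_2 cge0 mtu=9000

# Verify the interface settings
server_ifconfig server_2 cge0
```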

Connect VMware datastores

Connect the datastores configured in Install and configure vSphere infrastructure to the appropriate ESXi servers. These include the datastores configured for:

- Virtual server storage

- Infrastructure virtual machine storage (if required)

- SQL Server storage (if required)

vSphere Storage Guide provides instructions on how to connect the VMware datastores to the ESXi host. Refer to Appendix C for more information.
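As a sketch (the NFS server IP, export path, and datastore name are placeholders of the kind recorded in the Appendix B worksheet), an NFS datastore can be attached to a host from the ESXi shell:

```shell
# Mount an NFS export from the VNX as a datastore on this host
esxcli storage nfs add --host=192.168.10.100 --share=/vspex_vm_ds1 --volume-name=VSPEX_VM_DS1

# List mounted NFS datastores and confirm the new one is accessible
esxcli storage nfs list
```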

Plan virtual machine memory allocations

Server capacity is required for two purposes in the solution:

- To support the new virtualized server infrastructure

- To support the required infrastructure services such as authentication/authorization, DNS, and databases

For information on minimum infrastructure requirements, refer to Table 2. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required.



Memory configuration

Take care when configuring server memory in order to properly size and configure the solution. This section provides general guidance on memory allocation for the virtual machines, factoring in vSphere overhead and the virtual machine configuration.

ESX/ESXi memory management

Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources such as memory in order to provide resource isolation across multiple virtual machines, while avoiding resource exhaustion. In cases where advanced processors (such as Intel processors with EPT support) are deployed, this abstraction takes place within the CPU. Otherwise, it occurs within the hypervisor itself via a feature known as shadow page tables.

vSphere employs the following memory management techniques:

- Memory overcommitment: allocating more memory to virtual machines than is physically available on the host.

- Transparent page sharing: identical memory pages shared across virtual machines are merged, and the duplicate pages are returned to the host free memory pool for reuse.

- Memory compression: ESXi stores pages that would otherwise be swapped out to disk through host swapping in a compressed cache located in main memory.

- Memory ballooning: relieves host memory exhaustion by requesting that free pages be allocated from the virtual machine back to the host for reuse.

- Hypervisor swapping: the host forces arbitrary virtual machine pages out to disk.

Additional information is available at the following webpage:

http://www.vmware.com/files/pdf/mem_mgmt_perf_vsphere5.pdf


Virtual machine memory concepts

Figure 28 shows the memory settings parameters in the virtual machine.

Figure 28. Virtual machine memory settings

Configured memory—Physical memory allocated to the virtual machine at the time of creation.

Reserved memory—Memory that is guaranteed to the virtual machine.

Touched memory—Memory that is active or in use by the virtual machine.

Swappable—Memory that can be de-allocated from the virtual machine if the host is under memory pressure from other virtual machines, via ballooning, compression, or swapping.

The following are the recommended best practices:

Do not disable the default memory reclamation techniques. These lightweight processes enable flexibility with minimal impact to workloads.

Intelligently size memory allocation for virtual machines. Over-allocation wastes resources, while under-allocation causes performance impacts that can affect other virtual machines sharing resources. Overcommitting can lead to resource exhaustion if the hypervisor cannot procure memory resources. In severe cases, when hypervisor swapping is encountered, virtual machine performance might be adversely affected. Having performance baselines for your virtual machine workloads assists in this process.

Additional information on esxtop can be found in the following document:

http://communities.vmware.com/docs/DOC-9279


Install and configure SQL server database

Overview

Table 20 describes how to set up and configure a SQL Server database for the solution. At the end of this section, you will have Microsoft SQL Server installed on a virtual machine, with the databases required by VMware vCenter configured for use.

Table 20. Tasks for SQL server database setup

Task Description Reference

Create a virtual machine for Microsoft SQL Server

Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements.

http://msdn.microsoft.com

Install Microsoft Windows on the virtual machine

Install Microsoft Windows Server 2008 R2 on the virtual machine created to host SQL Server.

http://technet.microsoft.com

Install Microsoft SQL Server

Install Microsoft SQL Server on the virtual machine designated for that purpose.

http://technet.microsoft.com

Configure database for VMware vCenter

Create the database required for the vCenter server on the appropriate datastore.

Preparing vCenter Server Databases

Configure database for VMware Update Manager

Create the database required for Update Manager on the appropriate datastore.

Preparing the Update Manager Database

Create a virtual machine for Microsoft SQL Server

Create the virtual machine with enough computing resources on one of the ESXi servers designated for infrastructure virtual machines, and use the datastore designated for the shared infrastructure.

Note The customer environment may already contain a SQL Server that is designated for this role. In that case, refer to Configure database for VMware vCenter.

Install Microsoft Windows on the virtual machine

The SQL Server service must run on Microsoft Windows. Install the required Windows version on the virtual machine, and select the appropriate network, time, and authentication settings.



Install SQL Server

Install SQL Server on the virtual machine by using the SQL Server installation media.

One of the installable components in the SQL Server installer is the SQL Server Management Studio (SSMS). You can install this component on the SQL server directly, as well as on an administrator’s console. SSMS must be installed on at least one system.

In many implementations, you may want to store data files in locations other than the default path. To change the default path, right-click the server object in SSMS, select Properties, and then open the Database Settings page. From there you can change the default data and log directories for new databases created on the server.

Note For high availability, SQL Server can be installed on a Microsoft failover cluster or on a virtual machine protected by VMware HA clustering. Combining these technologies is not recommended.

Configure database for VMware vCenter

To use VMware vCenter in this solution, create a database for the service to use. The requirements and steps to configure the vCenter Server database correctly are covered in Preparing vCenter Server Databases. Refer to the list of documents in Appendix C for more information.

Note Do not use the Microsoft SQL Server Express–based database option for this solution.

It is a best practice to create individual login accounts for each service accessing a database on SQL Server.

Configure database for VMware Update Manager

To use VMware Update Manager in this solution, create a database for the service to use. The requirements and steps to configure the Update Manager database correctly are covered in Preparing the Update Manager Database. It is a best practice to create individual login accounts for each service accessing a database on SQL Server. Consult your database administrator for your organization's policy.
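As an illustrative sketch only (the login names, database names, and passwords are placeholders; follow Preparing vCenter Server Databases, Preparing the Update Manager Database, and your organization's policy for the real values), separate logins and databases for the two services could be created with sqlcmd:

```shell
# Create dedicated logins and databases for vCenter and Update Manager
# (run on the SQL Server virtual machine; all names here are hypothetical)
sqlcmd -S localhost -Q "CREATE LOGIN vpxuser WITH PASSWORD = 'ChangeMe-1';"
sqlcmd -S localhost -Q "CREATE DATABASE vcdb;"
sqlcmd -S localhost -Q "CREATE LOGIN vumuser WITH PASSWORD = 'ChangeMe-2';"
sqlcmd -S localhost -Q "CREATE DATABASE vumdb;"
```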



Install and configure VMware vCenter server

Overview

This section provides information on how to configure VMware vCenter Server. Table 21 describes the tasks that must be completed.

Table 21. Tasks for vCenter configuration

Task Description Reference

Create the vCenter host virtual machine

Create a virtual machine to be used for the VMware vCenter Server.

vSphere Virtual Machine Administration

Install vCenter guest operating system

Install Windows Server 2008 R2 Standard Edition on the vCenter host virtual machine.

Update the virtual machine

Install VMware Tools, enable hardware acceleration, and allow remote console access.

vSphere Virtual Machine Administration

Create vCenter ODBC connections

Create the 64-bit vCenter and 32-bit vCenter Update Manager ODBC connections.

vSphere Installation and Setup

Installing and Administering VMware vSphere Update Manager

Install vCenter Server Install vCenter Server software.

vSphere Installation and Setup

Install vCenter Update Manager

Install vCenter Update Manager software.

Installing and Administering VMware vSphere Update Manager

Create a virtual datacenter

Create a virtual datacenter. vCenter Server and Host Management

Apply vSphere license keys

Type the vSphere license keys in the vCenter licensing menu.

vSphere Installation and Setup

Add ESXi hosts Connect vCenter to ESXi hosts.

vCenter Server and Host Management

Configure vSphere clustering

Create a vSphere cluster and move the ESXi hosts into it.

vSphere Resource Management

Perform array ESXi host discovery

Perform ESXi host discovery from the Unisphere console.

Using EMC VNX Storage with VMware vSphere–TechBook

Install the vCenter Update Manager plug-in

Install the vCenter Update Manager plug-in on the administration console.

Installing and Administering VMware vSphere Update Manager



Deploy the VNX VAAI for NFS plug-in

Using VMware Update Manager, deploy the VNX VAAI for NFS plug-in to all ESXi hosts.

EMC VNX VAAI NFS plug-in–Installation HOWTO video available on www.youtube.com

vSphere Storage APIs for Array Integration (VAAI) Plug-in

Installing and Administering VMware vSphere Update Manager

Install the EMC VNX UEM CLI

Install the EMC VNX UEM command line interface on the administration console.

EMC VSI for VMware vSphere: Unified Storage Management— Product Guide

Install the EMC VSI plug-in

Install the EMC Virtual Storage Integrator plug-in on the administration console.

EMC VSI for VMware vSphere: Unified Storage Management— Product Guide

Create the vCenter host virtual machine

If the VMware vCenter Server is to be deployed as a virtual machine on an ESXi server installed as part of this solution, connect directly to an infrastructure ESXi server by using the vSphere Client. Create a virtual machine on the ESXi server with the customer's guest OS configuration, using the infrastructure server datastore presented from the storage array. The memory and processor requirements for the vCenter Server depend on the number of ESXi hosts and virtual machines being managed; the requirements are outlined in the vSphere Installation and Setup Guide.

Install vCenter guest OS

Install the guest OS on the vCenter host virtual machine. VMware recommends using Windows Server 2008 R2 Standard Edition.

Create vCenter ODBC connections

Before installing vCenter Server and vCenter Update Manager, you must create the ODBC connections required for database communication. These ODBC connections use SQL Server authentication for database authentication. Appendix B provides a place to record SQL login information.

Install vCenter Server

Install vCenter by using the VMware VIMSetup installation media. Use the customer-provided username, organization, and vCenter license key when installing vCenter.

Apply vSphere license keys

To perform license maintenance, log in to the vCenter Server and select the Administration > Licensing menu from the vSphere Client. Use the vCenter License console to enter the license keys for the ESXi hosts. After this, they can be applied to the ESXi hosts as they are imported into vCenter.



Deploy the VNX VAAI for NFS plug-in

The VAAI for NFS plug-in enables support for the vSphere 5.1 NFS primitives. These primitives offload specific storage-related tasks from the hypervisor to free resources for other operations. Additional information about the VAAI for NFS plug-in is available in the plug-in download, vSphere Storage APIs for Array Integration (VAAI) Plug-in. Refer to Appendix C for more information.

The VAAI for NFS plug-in is installed by using vSphere Update Manager. Refer to the process for distributing the plug-in demonstrated in the EMC VNX VAAI NFS plug-in–Installation HOWTO video, available on www.youtube.com. To enable the plug-in after installation, you must reboot the ESXi server.
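After the host reboot, the installation can be spot-checked from the ESXi shell; this sketch assumes the plug-in is delivered as a VIB whose name contains "vaai" or "nas" (check the actual package name in the plug-in download):

```shell
# List installed VIBs and look for the VNX VAAI NFS plug-in
esxcli software vib list | grep -i -e vaai -e nas
```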

Install the EMC VSI plug-in

The VNX storage system can be integrated with VMware vCenter by using EMC Virtual Storage Integrator (VSI) for VMware vSphere: Unified Storage Management. This gives administrators the ability to manage VNX storage tasks from vCenter.

After the plug-in is installed on the vSphere console, administrators can use vCenter to:

- Create datastores on VNX and mount them on ESXi servers

- Extend datastores

- Create Fast or Full Clones of virtual machines

Summary

This chapter presented the steps required to deploy and configure the physical and logical components of the VSPEX solution. At this point, you should have a fully functional VSPEX solution.



Chapter 6 Validating the Solution

This chapter presents the following topics:

- Overview

- Post-install checklist

- Deploy and test a single virtual server

- Verify the redundancy of the solution components


Overview

This chapter provides a list of items that should be reviewed after the solution has been configured. The goal of this chapter is to verify the configuration and functionality of specific aspects of the solution, and to ensure that the configuration supports core availability requirements.

Table 22 lists the tasks that must be completed.

Table 22. Tasks for testing the installation

Task Description Reference

Post install checklist

Verify that sufficient virtual ports exist on each vSphere host virtual switch.

vSphere Networking

Verify that each vSphere host has access to the required datastores and VLANs.

vSphere Storage Guide

vSphere Networking

Verify that the vMotion interfaces are configured correctly on all vSphere hosts.

vSphere Networking

Deploy and test a single virtual server

Deploy a single virtual machine using the vSphere interface.

vCenter Server and Host Management

vSphere Virtual Machine Management

Verify redundancy of the solution components

Perform a reboot of each storage processor in turn, and ensure that LUN connectivity is maintained.

Steps shown below

Disable each of the redundant switches in turn and verify that the vSphere host, virtual machine, and storage array connectivity remains intact.

Reference vendor’s documentation

On a vSphere host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.

vCenter Server and Host Management


Post-install checklist

The following configuration items are critical to the functionality of the solution and should be verified before deployment into production.

On each vSphere server, verify the following items:

- The vSwitch that hosts the client VLANs has been configured with sufficient ports to accommodate the maximum number of virtual machines it may host.

- All required virtual machine port groups have been configured, and each server has access to the required VMware datastores.

- An interface is configured correctly for vMotion, using the material in the vSphere Networking guide.

Deploy and test a single virtual server

To verify the operation of the solution, deploy a virtual machine and verify that the procedure completes as expected. Verify that the virtual machine has been joined to the applicable domain, that it has access to the expected networks, and that it is possible to log in to it.

Verify the redundancy of the solution components

To ensure that the various components of the solution maintain availability requirements, it is important to test specific scenarios related to maintenance or hardware failure.

Perform a reboot of each VNX storage processor in turn and verify that connectivity to VMware datastores is maintained throughout each reboot. Use these steps:

a. Log in to the Control Station with administrator credentials.

b. Navigate to /nas/sbin.

c. Reboot SP A by using the ./navicli -h spa rebootsp command.

d. During the reboot cycle, check for the presence of the datastores on the ESXi hosts.

e. When the reboot cycle completes, reboot SP B by using the ./navicli -h spb rebootsp command.
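For step d, a simple watch loop on each ESXi host makes the check repeatable; this is a sketch, not part of the documented procedure:

```shell
# Poll NFS datastore status every 10 seconds while a storage processor
# reboots; each datastore should continue to report "Accessible: true"
while true; do
    date
    esxcli storage nfs list
    sleep 10
done
```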

Perform a failover of each VNX Data Mover in turn and verify that connectivity to VMware datastores is maintained and that connections to NFS file systems are reestablished. For simplicity, use the following approach for each Data Mover (the reboot can also be performed through the Unisphere interface): from the Control Station prompt, run the command server_cpu <movername> -reboot, where <movername> is the name of the Data Mover.

To verify that the network redundancy features function as expected, disable each of the redundant switching infrastructures in turn. While each switching infrastructure is disabled, verify that all components of the solution maintain connectivity to each other and to any existing client infrastructure.

On a vSphere host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.


Appendix A Bills of Materials

This appendix presents the following topic:

- Bill of materials


Bill of materials

Table 23. List of components used in the VSPEX solution for 125 virtual machines

Component Solution for 125 virtual machines

VMware vSphere Servers

CPU 1 x vCPU per virtual machine

4 x vCPUs per physical core

125 x vCPUs

Minimum of 32 Physical CPUs

Memory 2 GB RAM per virtual machine

2 GB RAM reservation per vSphere host

Minimum of 250 GB RAM

Network – 1Gb option 6 x 1 GbE NICs per server

Network – 10Gb option 2 x 10 GbE NICs per server

Note To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network Infrastructure

Common 2 x physical switches

1 x 1 GbE port per control station for management

1Gb network option 6 x 1 GbE ports per vSphere server

4 x 1 GbE ports per data mover for data

10Gb network option 2 x 10 GbE ports per vSphere server

2 x 10 GbE ports per data mover for data

EMC Next-Generation Backup

Avamar 1 x Gen4 utility node

1 x Gen4 3.9 TB spare node

3 x Gen4 3.9 TB Storage nodes

Data Domain 1 x Data Domain DD640

1 x ES30

15 x 1 TB HDDs

EMC VNX series storage array

Common EMC VNX5300

2 x Data Movers (active / standby)

75 x 300 GB 15k rpm 3.5-inch SAS disks

3 x 300 GB 15k 3.5-inch SAS disks as hot spares

1 Gb Network option 1 x 1 Gb IO module for each Data Mover

(each module includes four ports)

10 Gb Network option 1 x 10 Gb IO module for each Data Mover

(each module includes two ports)


Note The solution may use 1 Gb or 10 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.
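The CPU and memory minimums in Table 23 (and in Table 24 for the 250-virtual-machine configuration) follow directly from the stated per-virtual-machine ratios; the arithmetic can be checked with a short shell sketch:

```shell
#!/bin/sh
# Recompute the sizing minimums from the stated ratios:
# 1 vCPU per VM, 4 vCPUs per physical core (listed as "Physical CPUs"
# in the tables), and 2 GB RAM per VM.
for VMS in 125 250; do
    CORES=$(( (VMS + 3) / 4 ))   # ceiling of VMS / 4
    RAM=$(( VMS * 2 ))           # GB
    echo "$VMS VMs: minimum $CORES physical CPUs, $RAM GB RAM"
done
```

This reproduces the table minimums of 32 physical CPUs and 250 GB RAM for 125 virtual machines, and 63 physical CPUs and 500 GB RAM for 250 virtual machines.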


Table 24. List of components used in the VSPEX solution for 250 virtual machines

Component Solution for 250 virtual machines

VMware vSphere Servers

CPU 1 x vCPU per virtual machine

4 x vCPUs per physical core

250 x vCPUs

Minimum of 63 Physical CPUs

Memory 2 GB RAM per virtual machine

2 GB RAM reservation per vSphere host

Minimum of 500 GB RAM

Network – 1Gb option 6 x 1 GbE NICs per server

Network – 10Gb option 2 x 10 GbE NICs per server

Note To implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network Infrastructure

Common 2 x physical switches

1 x 1 GbE port per control station for management

1Gb network option 6 x 1 GbE ports per vSphere server

4 x 1 GbE ports per data mover for data

10Gb network option 2 x 10 GbE ports per vSphere server

2 x 10 GbE ports per data mover for data

EMC Next-Generation Backup

Avamar 1 x Gen4 utility node

1 x Gen4 3.9 TB spare node

5 x Gen4 3.9 TB Storage nodes

Data Domain 1 x Data Domain DD670

2 x ES30

15 x 1 TB HDDs

EMC VNX series storage array

Common EMC VNX5500

2 x Data Movers (active / standby)

150 x 300 GB 15k rpm 3.5-inch SAS drives

6 x 300 GB 15k 3.5-inch SAS disks as hot spares

1 Gb Network option 1 x 1 Gb IO module for each Data Mover

(each module includes four ports)

10 Gb Network option 1 x 10 Gb IO module for each Data Mover

(each module includes two ports)

Note The solution may use 1 Gb or 10 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.


Appendix B Customer Configuration Data Sheet

This appendix presents the following topic:

- Customer configuration data sheet


Customer configuration data sheet

Before you start the configuration, gather customer-specific network and host configuration information. The following tables provide information on assembling the required network and host address, numbering, and naming information. This worksheet can also be used as a “leave behind” document for future reference.

The VNX File and Unified Worksheets should be cross-referenced to confirm customer information.

Table 25. Common server information

Server Name Purpose Primary IP

Domain Controller

DNS Primary

DNS Secondary

DHCP

NTP

SMTP

SNMP

vCenter Console

SQL Server

Table 26. ESXi server information

Server Name Purpose

Primary IP

Private Net (storage) addresses

VMkernel IP

VMotion IP

ESXi

Host 1

ESXi

Host 2


Table 27. Array information

Array name

Admin account

Management IP

Storage pool name

Datastore name

NFS Server IP

Table 28. Network infrastructure information

Name Purpose IP Subnet Mask

Default Gateway

Ethernet Switch 1

Ethernet Switch 2

Table 29. VLAN information

Name Network Purpose VLAN ID Allowed Subnets

Virtual Machine Networking

ESXi Management

NFS Storage Network

VMotion


Table 30. Service accounts

Account Purpose Password (optional, secure appropriately)

Windows Server administrator

root ESXi root

Array administrator

vCenter administrator

SQL Server administrator


Appendix C References

This appendix presents the following topic:

References


References

EMC documentation

The following documents, available on EMC Online Support, provide additional and relevant information. If you do not have access to a document, contact your EMC representative.

- EMC VSI for VMware vSphere: Storage Viewer — Product Guide
- EMC VSI for VMware vSphere: Unified Storage Management — Product Guide
- VNX FAST Cache: A Detailed Review
- EMC VSPEX Private Cloud: VMware vSphere 5.1 for up to 100 Virtual Machines
- VFCache Installation and Administration Guide v1.5
- VNX5300 Unified Installation Guide
- VNX5500 Unified Installation Guide
- Using EMC VNX Storage with VMware vSphere

Other documentation

The following documents, located on the VMware website, provide additional and relevant information:

- vSphere Networking
- vSphere Storage Guide
- vSphere Virtual Machine Administration
- vSphere Installation and Setup
- vCenter Server and Host Management
- vSphere Resource Management
- Installing and Administering VMware vSphere Update Manager
- vSphere Storage APIs for Array Integration (VAAI) Plug-in
- Interpreting esxtop statistics
- Understanding Memory Resource Management in VMware vSphere 5.0

For documentation on Microsoft products, refer to the Microsoft website:

- Microsoft Developer Network
- Plan for and Deploy Windows 8


Appendix D About VSPEX

This appendix presents the following topic:

About VSPEX


About VSPEX

EMC has joined forces with industry-leading providers of IT infrastructure to create a complete virtualization solution that accelerates deployment of cloud infrastructure. Built with best-of-breed technologies, VSPEX enables faster deployment, more simplicity, greater choice, higher efficiency, and lower risk. Validation by EMC ensures predictable performance and enables customers to select technology that leverages their existing IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a proven infrastructure for customers who want the simplicity characteristic of truly converged infrastructures while retaining choice in individual solution components.

VSPEX solutions are proven by EMC, and packaged and sold exclusively by EMC channel partners. VSPEX provides channel partners with more opportunity, faster sales cycles, and end-to-end enablement. By working even more closely together, EMC and its channel partners can now deliver infrastructure that accelerates the journey to the cloud for even more customers.