
Virtualization Architecture Design

For

The Gallery

Prepared by Guy Mannley and Supa Voemaan, VMware Professional Services

VMware and The Gallery Confidential


Contents

1. Overview
   1.1 Executive Summary
   1.2 Business Background
   1.3 Audience
   1.4 Business Objectives
   1.5 Business Requirements
   1.6 Assumptions, Risks, and Constraints
2. Architecture Overview
   2.1 Conceptual Design
3. Core Management Infrastructure Design
   3.1 Decisions for the vCenter Server System Design
   3.2 vCenter Server System Logical Design
4. Infrastructure Capacity Requirements
   4.1 CPU and Memory Requirements
   4.2 Required Number of Hosts
5. Virtual Data Center Infrastructure Design
   5.1 Decisions for the vSphere Cluster Design
   5.2 vSphere Cluster Logical Design
   5.3 Decisions for Cluster Configuration
6. Compute Infrastructure Design
   6.1 Decisions for the Compute Infrastructure Design
   6.2 ESXi Host Platform Logical Design
7. Storage Platform Design
   7.1 Storage Performance Requirements
   7.2 Storage Capacity Requirements
   7.3 Decisions for the Storage Platform Design
   7.4 Storage Platform Logical Design
8. Storage Management Design
   8.1 Decisions for the Storage Management Design
   8.2 Storage Management Logical Design
9. Network Component Design
   9.1 Decisions for the Network Component Design
   9.2 Network Component Logical Design
10. Network Management Design
   10.1 Decisions for the Network Management Design
   10.2 Network Management Logical Design
11. Virtual Machine Design
   11.1 Decisions for the Virtual Machine Design
12. Infrastructure Security Design
   12.1 Decisions for the Infrastructure Security Design
13. vSphere Update Manager Design
   13.1 Decisions for the vSphere Update Manager System Design
   13.2 vSphere Update Manager System Logical Design
14. Infrastructure Management Design
   14.1 Decisions for the Infrastructure Management Design
15. Infrastructure Recoverability Design
   15.1 Decisions for the Infrastructure Recoverability Design


1. Overview

1.1 Executive Summary

This document details the recommended implementation of the VMware vSphere® architecture. It is based on VMware best practices and on The Gallery's specific requirements and business goals discussed during the assessment phase of the engagement. The document provides both conceptual and logical design considerations, encompassing all vSphere-related infrastructure components, including requirements and specifications for virtual machines, hosts, networking, storage, security, availability, recovery, and management. Physical design details and recommendations are written to apply to the widest practical range of hardware used by The Gallery, so specific hardware, firmware, and feature details are omitted from this design document; that information can be found in the accompanying Virtualization Configuration Workbook. This design can be replicated across sites and differing hardware with minimal modification.

1.2 Business Background

The Gallery is an online marketplace for contemporary art based in Berlin, Germany. With nearly 500,000 listings from artists and dealers throughout the world, the company's web and mobile properties attract more than 1 million buyers every month, adding further value with reviews and shopping advice.

The Gallery currently has two data centers with multiple workloads and environments running on x86 hardware and on Linux and Windows operating system platforms. The primary data center (called Prod) runs the major production workloads and is located on the outskirts of Berlin. The second data center (called Dev) runs the Test and Development environment at a separate location in Berlin. Some of the workloads at the Dev site already run in virtual machines on the VMware vSphere® platform; the remaining workloads run on physical servers. The Gallery is currently considering setting up a third data center in Frankfurt as a recovery site to fail over workloads from the Prod site, and will evaluate VMware Site Recovery Manager™ to provide replication from the Prod site to the proposed disaster recovery site. However, this evaluation will be part of a separate project.

The Gallery began virtualizing its data centers approximately two years ago, starting with the Dev server landscape. Virtualization was initially adopted to increase operational efficiency, lower power and cooling costs, and take advantage of the higher availability and increased flexibility that come with running virtualized workloads.

Because of the benefits of virtualization in the Dev infrastructure, The Gallery wants to consolidate the physical workloads running in both data centers. The Gallery also wants to use the virtualization platform to shorten the time taken to provision new servers, which in turn shortens the time to launch new projects and business initiatives. The Gallery wants to reduce software and operating system licensing costs by zoning by workload type rather than by physical boundaries, and wants to use the latest version of vSphere for all the virtual machines at both sites.

In addition to these immediate goals, The Gallery wants a solution that will prepare the way for a future in which IT as a service (ITaaS) can be delivered to internal business units.

1.3 Audience

This design document is intended for those planning, designing, and implementing the virtualization components of the infrastructure. The audience includes the following roles:

• Project executive sponsor

• Virtualization architects


• Business decision makers

• Core technical teams, such as product development, server, storage, networking, security, and backup and recovery

It is assumed that the reader has knowledge of and familiarity with virtualization concepts and related topics (including storage and networking).

1.4 Business Objectives

• Consolidate the physical workloads running in the Dev and Prod data centers in Berlin.

• Reduce the time taken to provision new servers, shortening the time to complete new projects.

• Economize software and operating system licensing costs by grouping like application workloads.

• Prepare the way for a future in which IT as a service (ITaaS) can be delivered to internal business units.

1.5 Business Requirements

The following table lists the business requirements for the design, organized by the area they affect.

Table: Requirements

ID Category Business Requirement

BR01 Infrastructure Virtualize and consolidate all existing physical servers running on the x86 platform in the Berlin data centers.

BR02 Performance Architecture should meet the performance requirements calculated during the assessment phase of the project.

BR03 Geographies Architecture should support both the Prod and Dev data centers.

BR04 AD integration AD integration is required for vCenter Server for role-based access control.

BR05 Licensing The architecture should support grouping of like applications, databases and guest operating systems to save on licensing costs. For example, Oracle and SQL databases, Windows and Linux operating systems, and so on.

BR06 Scalability The design should be scalable to support virtual machine growth of 10 percent year over year.

BR07 Availability The infrastructure must be highly available and able to sustain operations during system failures. A resiliency of N+1 is required.

BR08 Availability The infrastructure must be able to support vSphere Fault Tolerance for future implementation.

BR09 Security The design must support the existing security policies.


BR10 Manageability The system should integrate with existing management and monitoring systems and allow extensibility to vRealize Operations Manager for capacity and performance management.

BR11 Guest operating system Virtual machine workload disks for Windows virtual machines should be right-sized according to the standards determined in the Capacity Planning assessment.

BR12 Time The design should treat ESXi and virtual machine time synchronization as a crucial design factor to support the time-sensitive retail environment.

BR13 Network The design should allow the network and security administrators to monitor network traffic of the desired virtual machines.

BR14 Manageability The design should provide a centralized management console to manage both data centers.

BR15 Backup All the virtual machines should be backed up, keeping in mind the existing backup policies of the organization.

1.6 Assumptions, Risks, and Constraints

Table: Constraints

ID Category Design Constraints

C01 Storage The Gallery must leverage the existing Fibre Channel storage arrays for most of the production workloads.

C02 Servers The Gallery is already using several blade servers at the Dev site and wants to add more of these servers to build the new design.

C03 Cluster Configuration Since operating system licensing is a constraint, Windows and Linux virtual machines will be hosted on separate ESXi hosts to optimize Windows licenses. Some Linux virtual machines, however, can run on ESXi hosts meant for Windows virtual machines.

C04 Backup The Gallery will use the existing backup methodology of agent-based backup for virtualized workloads.

C05 Networking As a short-term goal, The Gallery will replicate the physical network to the virtual environment to ensure that the availability of applications and services is not affected.

C06 RAID The Gallery uses RAID 10 (striped mirroring) as a standard across all of its workloads and data centers.


Table: Assumptions

ID Category Design Assumptions

A01 Storage The Gallery provides sufficient storage for building the environment.

A02 User skill sets Users with administrator privileges have sufficient knowledge of VMware and of the other required technologies. Training for The Gallery system administration team has been planned.

A03 Licenses The Gallery provides adequate licenses for applications and operating systems.

A04 Application The Gallery application owners will prepare any required test plans for business applications and will be available to run those tests during the migration phases.

A05 Application The Gallery application owners will provide a clear categorization of the workloads being migrated on the basis of criticality, and will provide downtime windows that allow the migration activity to be completed with minimal or no impact to the business.

Table: Risks and Mitigations

RM01 Hardware procurement
Risk: Some of the hardware identified in this architecture design has yet to be procured. Any changes to the hardware BOM during delivery can negatively impact this architecture design.
Impact: Low
Mitigation: The Gallery has already started the procurement cycle to avoid any delays or changes to the hardware planned for this project.

RM02 Infrastructure
Risk: Lack of adequate infrastructure (limitations on network address space, lack of servers, storage, and so on) can impact the design.
Impact: Low
Mitigation: This requirement has been planned and communicated to the procurement team to mitigate this risk.

RM03 Architectural changes
Risk: Architectural changes requested in the middle of the project, or changes that are out of scope, will impact the design.
Impact: Low
Mitigation: Any changes in the architecture design will be communicated to all parties involved to minimize the impact.

RM04 vSphere vMotion
Risk: The requirement to run vSphere vMotion and data traffic on the same physical network can lead to network disruptions if not designed carefully.
Impact: Low
Mitigation: Although it is not a best practice to run vMotion and data traffic on the same physical network, this design choice helps The Gallery use the 10 GigE networks more efficiently.


RM05 Network hardware
Risk: The dual-port 10 GbE network card is onboard, while the dual-port 1 GbE card is in a PCI slot. A 10 GbE card failure would most likely be due to a motherboard failure, but the 1 GbE card could act as a single point of failure.
Impact: Medium
Mitigation: The Ethernet modules will have two ports for port-level redundancy. The 1 GbE card is dedicated to the management network, and a secondary management network heartbeat on the 10 GbE networks will be configured to provide redundancy and avoid isolation due to this risk.

RM06 FC HBA
Risk: Blade servers have a single HBA with two ports. Failure of the HBA on a blade server is therefore a single point of failure.
Impact: Medium
Mitigation: Although HBA-level failures are rare, if one occurs, virtual machines will be failed over to another host in the cluster using vSphere HA.

2. Architecture Overview

2.1 Conceptual Design

Figure: Conceptual Design


3. Core Management Infrastructure Design

This section describes the logical design of the vCenter Server systems and databases used for the design. The physical design is presented in the vSphere Configuration Workbook.

3.1 Decisions for the vCenter Server System Design

The following table lists the design decisions regarding the vCenter Server systems.

3.1.1 vCenter Server Architecture

The Gallery made the decisions listed in the following table.

Table: vCenter Server System Design Decisions

vCenter Server system platform
Decision: All vCenter Server instances required within the vSphere solution will be deployed as Windows virtual machines.
Justification: The Gallery wants to use existing SQL databases for the vCenter Server instances, and only the Windows platform supports an external Microsoft SQL database.
Implication: The vCenter Server systems will need frequent patching.

vCenter Server database platform
Decision: Both vCenter Server systems will use existing Microsoft SQL databases.
Justification: The embedded database is not large enough for the deployment, and existing databases are already available.
Implication: SQL databases will have to be prepared.

Platform Services Controller deployment mode
Decision: External Platform Services Controllers will be used.
Justification: Multiple linked vCenter Server systems are required in the environment.
Implication: None.

Number of vCenter Server instances
Decision: Two vCenter Server instances will be deployed.
Justification: The Gallery has a policy to separate Dev workloads from Production workloads, so each will have its own vCenter Server instance.
Implication: The Gallery will have to buy a license for each site; however, manageability will be improved.

Number of Platform Services Controllers
Decision: Two Platform Services Controller instances will be deployed.
Justification: Two Platform Services Controllers behind a load balancer are recommended for the number of vCenter Server instances in the environment.
Implication: An F5 load balancer will be included in the design.

High availability protection method for vCenter Server systems
Decision: All vCenter Server systems will be protected using vSphere HA.
Justification: A vCenter Server running in a virtual machine can easily be protected by using the HA feature available with vSphere clusters.
Implication: There will be minimal downtime of vCenter Server when an ESXi host running the vCenter Server virtual machine fails. There is no protection against application-level failures.

High availability protection method for Platform Services Controllers
Decision: High availability will be provided for the external Platform Services Controllers using N+1 redundancy and a load balancer.
Justification: Availability of the Platform Services Controller is a requirement for the environment.
Implication: The required load balancer adds complexity to the environment.

vCenter Server instances: enhanced linked mode or standalone
Decision: Enhanced linked mode will be used.
Justification: The Gallery wants to be able to manage both sites from the same interface and will use roles and permissions to limit access.
Implication: Multiple linked vCenter Server systems require an external Platform Services Controller.

Single Sign-On identity sources
Decision: vCenter Single Sign-On will be configured to use Active Directory (Integrated Windows Authentication).
Justification: Connecting vCenter Single Sign-On to Active Directory allows users to log in and be assigned permissions with their Active Directory credentials.
Implication: Login for the majority of users is reliant on Active Directory.

Default Single Sign-On domain
Decision: gallery.com
Justification: The majority of users who will log in to the vCenter Server systems are included in this domain.
Implication: None.

Time synchronization method
Decision: NTP will be configured across the environment to avoid time synchronization issues.
Justification: Logging and performance data will be properly time-stamped.
Implication: None.

vSphere log collection method
Decision: The Gallery will configure vSphere Syslog Collector for centralized logging.
Justification: Syslog requires no additional licenses.
Implication: None.


ESXi core dump collection method
Decision: The Gallery will configure vSphere ESXi Dump Collector for centralized core dumps.
Justification: The Gallery likes the recoverability benefits of centralizing core dump collection.
Implication: None.
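The NTP time synchronization decision in the table above lends itself to automation. The following Python sketch, using the open-source pyVmomi SDK, shows one possible way to point every ESXi host at the corporate NTP servers and enable the ntpd service. The vCenter address, credentials, and NTP server names are illustrative placeholders, and the calls should be validated against the pyVmomi version in use; this is a sketch for illustration, not part of the approved design.

Example (Python):

# Sketch: configure NTP on all ESXi hosts via pyVmomi (illustrative only).
# The vCenter address, credentials, and NTP servers are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

NTP_SERVERS = ["ntp1.gallery.com", "ntp2.gallery.com"]  # hypothetical servers

ctx = ssl._create_unverified_context()  # lab use only; use verified certificates in production
si = SmartConnect(host="vc01-prod.gallery.com", user="administrator@gallery.com",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Point the host at the NTP servers named above.
        dt_cfg = vim.host.DateTimeConfig(
            ntpConfig=vim.host.NtpConfig(server=NTP_SERVERS))
        host.configManager.dateTimeSystem.UpdateDateTimeConfig(config=dt_cfg)
        # Start ntpd and keep it enabled across reboots.
        svc = host.configManager.serviceSystem
        svc.UpdateServicePolicy(id="ntpd", policy="on")
        svc.StartService(id="ntpd")
    view.DestroyView()
finally:
    Disconnect(si)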

3.1.2 vCenter Server Naming Conventions

The following naming conventions will be used for vCenter Server systems, vCenter Server databases, and external VMware Platform Services Controller instances:

<System Type>##-<Data Center>

System Type can be one of the following abbreviations:

• VC: vCenter Server

• VCDB: vCenter Server database instance

• PSC: External VMware Platform Services Controller

• VUM: vSphere Update Manager

Example: VC01-Prod refers to a vCenter Server system in the Prod data center, and PSC01-Dev refers to an external VMware Platform Services Controller instance in the Dev data center.
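Because the same <System Type>##-<Data Center> pattern recurs here, and similar conventions are defined for clusters, hosts, datastores, and port groups in later sections, a small helper can keep generated names consistent. The following Python sketch is illustrative only; the type and data center lists are assumptions based on this document.

Example (Python):

# Sketch: build names of the form <System Type>##-<Data Center> (illustrative).
SYSTEM_TYPES = {"VC", "VCDB", "PSC", "VUM"}
DATA_CENTERS = {"Prod", "Dev"}

def system_name(system_type: str, number: int, data_center: str) -> str:
    """Return a name such as 'VC01-Prod' or 'PSC01-Dev'."""
    if system_type not in SYSTEM_TYPES:
        raise ValueError(f"unknown system type: {system_type}")
    if data_center not in DATA_CENTERS:
        raise ValueError(f"unknown data center: {data_center}")
    return f"{system_type}{number:02d}-{data_center}"

print(system_name("VC", 1, "Prod"))   # VC01-Prod
print(system_name("PSC", 1, "Dev"))   # PSC01-Dev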


3.2 vCenter Server System Logical Design

The following figure and tables represent the logical design for the vCenter Server deployment architecture.

Figure: vCenter Server Deployment Architecture

Table: vCenter Server System Logical Specifications

Attribute Specification

vCenter Server version 6.0

Physical or virtual system Virtual

Number of CPUs 4

Processor type VMware vCPU

Processor speed N/A

Memory 16 GB

Number of NICs and ports 1/1

Number of disks and disk sizes 2 disks: 40 GB (C:) and 60 GB (D:)

Operating system and SP level Windows Server 2012 R2 Standard


Table: vCenter Server Database Logical Specifications

Attribute Specification

Vendor and version Microsoft SQL Server 2012 SP2

Authentication method SQL Server Authentication

Recovery method Full

Database auto growth Enabled, in 1 MB increments

Transaction log auto growth In 10% increments; restricted to a 2 GB maximum size

Estimated vCenter Server database size 30 GB (Prod), 15 GB (Dev)

Table: Platform Services Controller System Logical Specifications

Attribute Specification

vCenter Server version 6.0

Physical or virtual system Virtual

Number of CPUs 2

Processor type VMware vCPU

Processor speed N/A

Memory 8 GB

Number of NICs and ports 1/1

Number of disks and disk sizes 1 disk: 40 GB (C:)

Operating system and SP level Windows Server 2012 R2 Standard


4. Infrastructure Capacity Requirements

To consolidate the physical and virtual x86 servers in the existing data centers, the performance and utilization of the existing 250 servers at the Dev and Prod data centers were analyzed using VMware Capacity Planner™. The 250 servers comprise 120 Linux servers and 130 Windows servers. In addition to the 250 servers in the report, the total virtual machine count was adjusted to account for the 10 percent growth requirement and an estimated 8 additional virtual machines to manage the virtual infrastructure, giving a planning total of 283 virtual machines.

Hosts are sized for peak utilization rather than average utilization, so that all systems can run at their observed peak resource levels simultaneously. Memory sharing will not be used; in vSphere 6.0, inter-virtual machine transparent page sharing is disabled by default. CPU and memory utilization for each host is capped at 80 percent, leaving 20 percent for overhead and unanticipated usage.

4.1 CPU and Memory Requirements

The following tables show the results of the capacity analysis.

Table: CPU Resource Requirements

Metric Amount

Average number of CPU cores per system to be virtualized 5

Average CPU MHz 2,300 MHz

Average normalized CPU per system to be virtualized 11,500 MHz

Average CPU utilization per system to be virtualized 9% (1,035 MHz)

Average peak CPU utilization per system to be virtualized 13% (1,495 MHz)

Total CPU resources required for 283 virtual machines at peak 423,085 MHz

Table: Memory Resource Requirements

Metric Amount

Average amount of RAM per system to be virtualized 18,465 MB

Average memory utilization 30% (5,540 MB)

Average peak memory utilization 50% (9,233 MB)

Total RAM required for 283 virtual machines at peak, before memory sharing 2,612,939 MB

Anticipated memory sharing benefit N/A

Total RAM required for 283 virtual machines at peak, with memory sharing 2,612,939 MB


4.2 Required Number of Hosts

Given the target host specifications provided by The Gallery, the estimated number of hosts required to support the peak CPU and memory utilization of the anticipated workloads is 24.

The following formula was used to calculate the estimated host capacity required to support the peak CPU utilization of the anticipated virtual machine workloads:

Number of ESXi hosts required = Total CPU required for all virtual machines at peak / Available CPU per ESXi host

Each host provides 12 cores x 2,400 MHz = 28,800 MHz, which is capped at 80 percent, giving 23,040 MHz of usable CPU per host. Using this formula, the estimated required host capacity for the planned vSphere infrastructure is:

423,085 MHz (total CPU) / 23,040 MHz (CPU per host) = approximately 18 ESXi hosts

The following formula was used to calculate the number of hosts required to support the anticipated peak RAM utilization:

Number of ESXi hosts required = Total RAM required for all virtual machines at peak (with memory sharing) / Available RAM per ESXi host

Using this formula, the estimated required host capacity for the planned vSphere infrastructure is:

2,612,939 MB (total RAM) / 107,374 MB (RAM per host) = approximately 24 ESXi hosts

From a CPU workload perspective, 18 ESXi hosts are needed, but from a memory workload perspective, 24 hosts are needed. Therefore, this design requires a minimum of 24 ESXi hosts.
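The host-count arithmetic above is simple enough to script as a sanity check. The following Python sketch reproduces the document's figures from the host specification and the Capacity Planner results; the rounding mirrors the calculations above (a more conservative sizing would round each ratio up).

Example (Python):

# Sketch: reproduce the host-count estimate from the figures above.
VM_COUNT = 283                # 250 existing + 25 growth + 8 management VMs
PEAK_CPU_MHZ_PER_VM = 1_495   # 13% of 11,500 MHz normalized CPU
PEAK_RAM_MB_PER_VM = 9_233    # 50% of 18,465 MB average RAM

CORES_PER_HOST = 12           # 2 sockets x 6 cores
MHZ_PER_CORE = 2_400
UTIL_CAP = 0.80               # 20% reserved for overhead and unanticipated usage

usable_cpu_per_host = CORES_PER_HOST * MHZ_PER_CORE * UTIL_CAP  # 23,040 MHz
usable_ram_per_host = 107_374  # MB, the per-host usable RAM used in this design

total_cpu = VM_COUNT * PEAK_CPU_MHZ_PER_VM  # 423,085 MHz
total_ram = VM_COUNT * PEAK_RAM_MB_PER_VM   # 2,612,939 MB

hosts_for_cpu = round(total_cpu / usable_cpu_per_host)  # 18
hosts_for_ram = round(total_ram / usable_ram_per_host)  # 24

print(f"CPU-bound host count:    {hosts_for_cpu}")
print(f"Memory-bound host count: {hosts_for_ram}")
print(f"Design minimum:          {max(hosts_for_cpu, hosts_for_ram)} hosts")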


5. Virtual Data Center Infrastructure Design

This section describes the logical design of the vSphere clusters and cluster configuration policies. The physical design is presented on the Data Center and Clusters tab of the vSphere Configuration Workbook.

5.1 Decisions for the vSphere Cluster Design

The following decisions were made to complete the cluster logical design.

5.1.1 vSphere Cluster Architecture

The Gallery made the decisions listed in the following table.

Table: vSphere Cluster Architecture Design Decisions

Scale-up or scale-out cluster architecture
Decision: The Gallery has decided to use a scale-out architecture.
Justification: The Gallery wants to minimize the risk of downtime during host failures and is satisfied with the achieved consolidation ratio. Their half-height blade servers also help save on real estate costs.
Implication: The consolidation ratio will go down.

Use of purpose-built clusters
Decision: The Gallery will use purpose-built clusters. Payload clusters will be designed to accommodate existing zones and to group like operating systems.
Justification: Purpose-built clusters save on operating system, application, and database licensing costs in the virtual infrastructure. The physical switch groupings The Gallery calls zones will not be consolidated for now.
Implication: The number of clusters will increase and the consolidation ratio will go down.

Use of a separate management cluster
Decision: A separate management cluster will be deployed at the Production site.
Justification: A separate management cluster helps IT teams manage the environment efficiently and provides guaranteed resources to the management solutions. It is not required for a Test and Dev environment.
Implication: Additional ESXi hosts will be deployed for the management cluster, which increases CAPEX but reduces OPEX.


5.1.2 vSphere Cluster Naming Convention

Clusters will be named to identify the data center, network zone, and operating system type:

<Data Center Name>-<Network Zone>-<OS Type>

The following abbreviations will be used to identify operating systems:

• ORCL: Oracle

• LIN: Linux

• WIN: Windows

• BOTH: Linux and Windows

Example: PROD-APP-LIN refers to a cluster in the Prod data center that is in the Application zone and includes virtual machines with Linux guest operating systems.

5.2 vSphere Cluster Logical Design

Based on the design decisions, the figures below show the logical layout of the vSphere cluster architecture.

Figure: vSphere Cluster Logical Design for the Production Data Center

Figure: vSphere Cluster Logical Design for the Development Data Center


5.3 Decisions for Cluster Configuration

The Gallery made the decisions listed in the following table for the configuration of each type of cluster in the design.

Table: vSphere Cluster Configuration Policies

Use of vSphere HA
Decision: vSphere HA will be enabled on the management cluster and all payload clusters, with monitoring enabled for hosts, VM Component Protection, and virtual machine monitoring. Application monitoring will not be enabled.
Justification: N+1 redundancy is required.
Implication: There must be sufficient resources on the remaining hosts to satisfy the virtual machine requirements in the event of a host outage.

vSphere HA heartbeat redundancy method(s)
Decision: The Gallery has decided to provide maximum redundancy for the HA heartbeat network by creating two management port groups, each backed by redundant network cards.
Justification: This method provides complete network redundancy and is appropriate given the hybrid 1 GbE and 10 GbE networks.
Implication: None.

VM Component Protection policy
Decision: VM Component Protection will be enabled to protect against APD and PDL events, with VM failover configured to power off and restart VMs. All other settings will be left at the default.
Justification: This policy prevents storage loss caused by APD or PDL events from causing extended periods of downtime.
Implication: None.

Admission control policy
Decision: The management and payload clusters will have admission control enabled and enforced using the "Define failover capacity by static number of hosts" policy. The amount of reserved failover capacity is one host.
Justification: The Gallery is not at risk of wasting resources: there are no reservations, and no resiliency of more than one host per cluster is planned given the cluster sizes (N+1).
Implication: None.


vSphere FT usage
Decision: vSphere FT will not be configured at this time. However, the network will be provisioned for future use.
Justification: Because The Gallery is using 10 GbE networks, there is enough bandwidth to support the requirements of the FT logging network.
Implication: The FT logging network must be provisioned so that FT can be used as and when required.

vSphere DRS usage
Decision: vSphere DRS will be enabled on all clusters in fully automated mode, set to the default (medium) migration threshold.
Justification: This provides the best trade-off between load balancing and excessive vSphere vMotion events.
Implication: None, as this setting is the default.

Affinity and anti-affinity rule usage
Decision: No rules will be configured.
Justification: The environment does not have any specific requirements for these rules.
Implication: None.

EVC usage
Decision: EVC will be enabled on all clusters. The EVC mode will be set to Intel® "Ivy Bridge" Generation.
Justification: This permits newer hosts to be added at a later date, with vSphere vMotion possible between the new and older CPU families. It also allows clusters to be upgraded without downtime for the virtual machines.
Implication: Clusters must contain hosts with CPUs from the same vendor for EVC to be enabled.

vSphere DPM usage
Decision: The Gallery will not implement vSphere DPM.
Justification: The Gallery runs a 24/7 business operation and does not see an opportunity to save on power using DPM.
Implication: None.

Resource pool usage
Decision: Resource pools will not be configured.
Justification: Resource pools do not benefit this architecture and can introduce complexity into the design.
Implication: None.
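For illustration, the HA, admission control, and DRS decisions above could be applied to a cluster programmatically. The following pyVmomi sketch shows one possible approach; the vCenter address, credentials, and cluster name are placeholders, and the property names should be validated against the API version in use before any real deployment.

Example (Python):

# Sketch: apply the vSphere HA/DRS cluster policies above via pyVmomi.
# The vCenter address, credentials, and cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vc01-prod.gallery.com", user="administrator@gallery.com",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "PROD-APP-LIN")
    view.DestroyView()

    spec = vim.cluster.ConfigSpecEx(
        # vSphere HA: enabled, host monitoring on, N+1 admission control.
        dasConfig=vim.cluster.DasConfigInfo(
            enabled=True,
            hostMonitoring="enabled",
            admissionControlEnabled=True,
            admissionControlPolicy=vim.cluster.FailoverLevelAdmissionControlPolicy(
                failoverLevel=1)),
        # vSphere DRS: fully automated, default migration threshold.
        drsConfig=vim.cluster.DrsConfigInfo(
            enabled=True, defaultVmBehavior="fullyAutomated"))
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
finally:
    Disconnect(si)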


6. Compute Infrastructure Design

This section describes the logical design of the compute infrastructure. The physical design is presented on the Host References and ESXi Hosts tabs of the vSphere Configuration Workbook.

6.1 Decisions for the Compute Infrastructure Design

This section lists the design decisions for the compute infrastructure design.

6.1.1 ESXi Host Design Decisions

The Gallery made the decisions listed in the following table.

Table: ESXi Host Design Decisions

ESXi server platform: rack-mount or blade server
Decision: The compute design will use blade servers.
Justification: The Gallery has already chosen blade servers.
Implication: Power and cooling requirements for the blade chassis must be considered.

BIOS settings
Decision: ESXi hosts will use the recommended BIOS settings listed in the physical design.
Justification: These settings ensure the best performance of the hosts.
Implication: None.

ESXi boot method: boot from local disk or boot from SAN
Decision: ESXi hosts will boot from the local hard disk drive.
Justification: The Gallery has chosen to boot from the local hard disks to utilize them; these disks were pre-ordered with the servers, which limits the boot choice to local storage.
Implication: None.

ESXi installation method
Decision: Several initial hosts will be installed interactively. Subsequent installations will be done with scripts created with the native scripted-installation utility.
Justification: The Gallery is comfortable with creating and running scripts.
Implication: Scripts must be created.

vSphere Auto Deploy method
Decision: vSphere Auto Deploy will not be used.
Justification: The overhead of building out the infrastructure to support vSphere Auto Deploy has been identified as too complex for this design.
Implication: None.


ESXi scratch partition configuration
Decision: The default scratch partition will be used.
Justification: The ESXi hosts have a physical hard drive big enough to support the default scratch partition configuration. No changes to the scratch partition are required.
Implication: None.

VM compatibility setting
Decision: VM compatibility will be set to "Use datacenter setting and host version" on all ESXi hosts.
Justification: All hosts will run ESXi 6.0, so there is no need for backward compatibility.
Implication: None.

IP address assignment for ESXi hosts (static or DHCP)
Decision: Hosts will be assigned static IP addresses locally.
Justification: The Gallery has no requirement to use DHCP for host IP addresses.
Implication: None.

6.1.2 ESXi Host Naming Convention

ESXi hosts will be named to identify the data center, the cluster in which the host is included, and a sequential server number:

<Data Center Name>-<Zone>-<OS Type>-ESXi##

Example: Prod-App-Win-ESXi01 is the first ESXi host in the Production data center, in the App cluster containing Windows virtual machines.


6.2 ESXi Host Platform Logical Design

This section details the ESXi hosts proposed for the vSphere infrastructure design for The Gallery.

The configuration and assembly process for each system will be standardized, with all components installed the same on each host.

Table: ESXi Host Logical Design Specifications

Attribute Specification

Host type and version* ESXi 6.0

Number of CPU sockets 2

Number of cores per CPU 6

Total number of cores 12

Processor speed 2.40 GHz

Memory 128 GB

Number of NIC ports 2 x 1 GbE and 2 x 10 GbE

Number of HBA ports 2

*The exact ESXi version to be deployed will be selected closer to implementation and will be chosen based on the available stable and supported released versions at that time.


7. Storage Platform Design

To consolidate the physical and virtual x86 servers of the existing data centers, the storage performance and utilization of the existing 250 servers were analyzed using Capacity Planner.

7.1 Storage Performance Requirements

The following table shows the results of the capacity analysis.

Table: IOPS Requirements

Metric Amount

Number of existing servers analyzed 250

Total number of virtual machines (including growth and management) 283

Mean IOPS 176

Front-end IOPS 49,808 (283 x 176)

Back-end IOPS 64,751 (for 283 servers)

Read: Write ratio 70:30

RAID level RAID 10

The back-end (BE) IOPS are calculated from the front-end (FE) IOPS using the following formula:

BE IOPS = (FE IOPS x Read percentage) + (FE IOPS x Write percentage x RAID I/O penalty)

Using this formula and the RAID 10 write penalty of 2, the back-end IOPS requirement for The Gallery is:

BE IOPS = (49,808 x 0.70) + (49,808 x 0.30 x 2) = 64,751 IOPS with RAID 10
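As a quick check, the following Python sketch recomputes the front-end and back-end IOPS from the inputs above. The write-penalty table is deliberately limited to the RAID level used in this design.

Example (Python):

# Sketch: recompute front-end and back-end IOPS from the analysis inputs.
import math

RAID_WRITE_PENALTY = {"RAID10": 2}  # write penalty for the RAID level in this design

def backend_iops(fe_iops: float, read_pct: float, raid: str) -> float:
    """BE IOPS = (FE x read%) + (FE x write% x RAID write penalty)."""
    write_pct = 1.0 - read_pct
    return fe_iops * read_pct + fe_iops * write_pct * RAID_WRITE_PENALTY[raid]

vm_count, mean_iops = 283, 176
fe = vm_count * mean_iops  # 49,808 front-end IOPS
be = math.ceil(backend_iops(fe, read_pct=0.70, raid="RAID10"))  # rounds up to 64,751
print(f"FE IOPS: {fe:,}  BE IOPS: {be:,}")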

7.2 Storage Capacity Requirements

The amount of storage needed for the existing 250 servers is 25,500 GB.

The total amount of storage space needed to support this workload was based on the following guidelines:

• Total virtual machine storage space

• Storage space for virtual machine swap files

• Percentage of storage space to reserve for growth

• Percentage of storage space to reserve for virtual machine snapshots


The following table lists the storage space needed for all virtual machines (new and existing):

Table: Virtual Machine Storage Space Requirements

Requirement Amount (GB)

Storage space for current capacity (250 VMs) 25,500 GB

Storage space for growth of 10% annually for 3 years (25 VMs per year) 7,650 GB

Storage space for 8 management virtual machines 1,024 GB

Swap space capacity 5,103 GB

Percentage of storage reserved for VM growth: 15% 5,126 GB

Percentage of storage reserved for VM snapshots: 15% 5,126 GB

Total storage space 49,529 GB

Swap space capacity is the average virtual machine memory size multiplied by the total number of virtual machines. The average virtual machine memory size for The Gallery is 18,465 MB (approximately 18 GB):

Swap space capacity = Average VM memory size x Total number of VMs
                    = 18,465 MB x 283
                    = 5,225,595 MB, approximately 5,103 GB

The virtual machine storage space is the sum of the current capacity, growth, and management storage: 25,500 + 7,650 + 1,024 = 34,174 GB. Use the following formula to calculate the total storage space, reserving 15% of storage for virtual machine growth and 15% for snapshots:

Total storage space = Virtual machine storage space
                    + (Virtual machine storage space x Percentage of storage for growth)
                    + (Virtual machine storage space x Percentage of storage for snapshots)
                    + Swap space capacity

With 15% for virtual machine growth and 15% for snapshot storage, the total storage space is:

Total storage space = 34,174 + (34,174 x 0.15) + (34,174 x 0.15) + 5,103
                    = 49,529 GB, approximately 50 TB
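The same sizing can be expressed as a short Python sketch, using the figures from the tables above (all values in GB).

Example (Python):

# Sketch: total storage sizing from the figures above (values in GB).
current_vms_gb = 25_500  # 250 existing servers
growth_gb      = 7_650   # 10% annual growth over 3 years
management_gb  = 1_024   # 8 management virtual machines
swap_gb        = 18_465 * 283 / 1024  # avg VM memory (MB) x VM count, ~5,103 GB

vm_storage_gb = current_vms_gb + growth_gb + management_gb  # 34,174 GB

growth_reserve, snapshot_reserve = 0.15, 0.15
total_gb = vm_storage_gb * (1 + growth_reserve + snapshot_reserve) + swap_gb

print(f"Total storage space: {total_gb:,.0f} GB")  # 49,529 GB, roughly 50 TB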

7.3 Decisions for the Storage Platform Design

This section lists the design decisions for the storage infrastructure design.

Before choosing the storage platform, the following considerations were taken into account:

• In-house storage expertise and installation base

• Cost, including both capital and long-term operational expenses

• The organization’s current relationship with storage vendors


7.3.1 Storage Platform Design Decisions

The Gallery made the decisions listed in the following table.

Table: Storage Platform Design Decisions

Storage platforms (Fibre Channel, iSCSI, NFS)
Decision: Fibre Channel storage will be used for the production workloads.
Justification: The Gallery is using its existing storage arrays. The storage team has strong Fibre Channel expertise, so minimal-to-zero training is required, and The Gallery can leverage its existing investment in the SAN fabric.
Implication: None.

Use of Virtual SAN
Decision: A Virtual SAN cluster will be deployed in the Dev environment.
Justification: The Gallery wants to evaluate Virtual SAN clusters for some of their business-critical applications.
Implication: Additional physical hardware is required.

Use of Virtual Volumes
Decision: Virtual Volumes will be deployed in the Dev environment.
Justification: The Gallery wants to evaluate Virtual Volumes for some of their business-critical applications.
Implication: Additional physical hardware is required.

Storage access control method
Decision: Single-initiator zoning will be used.
Justification: Single-initiator zoning prevents Registered State Change Notification (RSCN) messages from crossing zone boundaries and affecting normal I/O traffic.
Implication: This improves performance and avoids disruption of Fibre Channel traffic.

Storage redundancy
Decision: The Fixed multipathing policy will be used, and each ESXi host will be configured with two single-port HBAs for redundancy and multipathing.
Justification: This policy is recommended by the storage vendor and VMware documentation. Two single-port HBAs follow best practices, and each ESXi host has ample slots available.
Implication: The use of two single-port HBAs is a hardware investment and requires available slots in each ESXi host.


7.3.2 Datastore Naming Convention

VMFS datastores will use the following naming convention:

<Data Center>-<Zone>-<OS type>-LUN##

## refers to the LUN ID on which the VMFS datastore is located.

Example: Prod-App-LIN-LUN01 is a datastore located in the Prod data center and used by the App cluster containing Linux systems.

7.4 Storage Platform Logical Design

The following figure and table represent the logical design for the storage platform architecture.

Figure: Storage Platform Architecture


Table: Storage Platform Logical Design Specifications

Attribute Specification

Storage type Fibre Channel SAN

Number of storage processors 2 (redundant)

Number of switches 2 (redundant)

Number of ports per host per switch 1

LUN size 200 GB, 500 GB, and 1 TB

Total LUNs Approximately 50 LUNs across all arrays

VMFS datastores per LUN 1

VMFS version 5

Zoning Single-initiator zoning

Multipathing As recommended by the storage vendor


8. Storage Management Design

This section describes the logical design for storage management options. The physical design is presented on the Datastore Clusters tab of the vSphere Configuration Workbook.

8.1 Decisions for the Storage Management Design

This section lists the design decisions for the storage management design.

8.1.1 Storage Management Design Decisions

The Gallery made the decisions listed in the following table.

Table: Storage Management Design Decisions

Storage management tools
Decision: Array-based tools and host-based tools will be used.
Justification: The storage team at The Gallery already uses array-based tools to monitor their Fibre Channel storage arrays. vSphere administrators will use host-based tools to monitor the datastores, Virtual SAN, and vSphere Virtual Volumes.
Implication: None.

Storage tiering
Decision: Storage tiering will not be used at the storage layer.
Justification: Storage tiering is complex to deploy in an existing storage infrastructure due to space limits and other constraints. With the current requirements, the provided RAID levels meet the application and workload requirements.
Implication: The Gallery will not have to buy additional drives for various tiers.

Virtual machine storage policies
Decision: Virtual machine storage policies will not be used in the environment.
Justification: Virtual machine storage policies are not needed today; however, they will be used when The Gallery implements disaster recovery.
Implication: None.

Datastore clusters
Decision: Datastore clusters will be used.
Justification: Datastore clusters automate the placement of virtual machines to ensure that load is balanced and distributed among the datastores.
Implication: None.

vSphere Storage DRS
Decision: vSphere Storage DRS will be used in automatic mode, with an 85% utilized-space threshold and a 15 ms I/O latency threshold.
Justification: vSphere Storage DRS will automate load and capacity monitoring in the environment.
Implication: None.

vSphere Storage I/O Control
Decision: vSphere Storage I/O Control will be enabled on all datastores that are members of datastore clusters; however, disk shares will remain equal.
Justification: vSphere Storage I/O Control is not justified based on the current-state analysis, but it can be used in the future by modifying disk shares for I/O prioritization.
Implication: vSphere Storage I/O Control can help reduce I/O contention for high-priority virtual machines.

8.1.2 Datastore Cluster Naming Convention

Datastore clusters will use the following naming convention:

DSC-<Data Center>-<Zone>-<OS type>

Example: DSC-Prod-Web-WIN is a datastore cluster located in the Prod data center and used by the Web cluster containing Windows systems.

8.2 Storage Management Logical Design

The following figure represents the logical design for the storage management architecture. The Gallery will not use tiered storage.

Figure: Storage Management Architecture


9. Network Component Design

This section describes the logical design for network component options. The physical design is presented on the Networking (Switch & Portgroup), Networking (ESXi Host VMkernel), and NIOC Settings tabs of the vSphere Configuration Workbook.

The following guidelines were taken into account in creating the network component design:

• Avoid any single point of failure within the network design.

• Isolate all types of traffic from each other to ensure secure and resilient communications.

• Use traffic management and shaping tools to optimize and efficiently use the available bandwidth.

• Place vMotion and data traffic on the same physical 10 GigE network.

9.1 Decisions for the Network Component Design

This section lists the design decisions for the network component design.

9.1.1 Network Component Design Decisions

The Gallery made the decisions listed in the following table.

Table: Network Component Design Decisions

Network architecture: three-tier hierarchical, or leaf and spine
Decision: A three-tier hierarchical network architecture will be used.
Justification: The Gallery will leverage the network infrastructure that is currently used by their physical environment.
Implication: None.

Types of networks (vSphere vMotion, management, and so on)
Decision: Separate networks will be created for vSphere vMotion, management, virtual machine, and vSphere FT traffic.
Justification: Separating the major types of network traffic reduces contention and latency and improves performance.
Implication: None.

Network segmentation: physical separation, VLANs, private VLANs
Decision: Network segmentation will be accomplished with VLANs. Private VLANs (PVLANs) will not be used.
Justification: VLANs are already widely in use at The Gallery, and there is currently no use case for PVLANs.
Implication: VLANs must be activated on the physical switches.

Virtual switch types (standard, distributed, or both)
Decision: Both standard and distributed switches will be used. Standard switches will handle primary management traffic. Because The Gallery owns an Enterprise Plus license, distributed switches will be used for virtual machine traffic, vSphere vMotion traffic, and secondary management traffic.
Justification: Distributed switches simplify management.
Implication: None.

Number of virtual switches (standard, distributed)
Decision: One standard switch and one distributed switch will be created for each site.
Justification: The Gallery wants to keep the number of switches to a minimum to keep the network configuration as simple as possible.
Implication: None.

Use of jumbo frames
Decision: Jumbo frames will not be used in this design.
Justification: The administrative overhead of configuring jumbo frames end-to-end outweighs the projected performance gains.
Implication: None.

9.1.2 Virtual Switch and Port Group Naming Conventions

Distributed switches will use the following naming convention:

VDS-<Data center>-##

where ## is a number starting at 00. For example, VDS-Prod-00 and VDS-Dev-00.

Port groups will use the following naming convention:

pg-<Port Group Type>-<VLAN ID>

Port Group Type can be one of the following: MGMT (management), VM (virtual machine), VMOTION (vSphere vMotion), or FT (Fault Tolerance).

Example: pg-VM-4 is the port group used by virtual machines connected to VLAN 4.


9.2 Network Component Logical Design

The following figure represents the logical design for the network component architecture, including virtual switches, port groups, VLANs, virtual NICs, physical switches, and the relationships between these components.

Figure: Network Component Architecture

Every ESXi host will have a standard switch for the management network. A distributed switch will be created for each site (Prod and Dev) to handle management, vMotion, vSphere FT, and virtual machine traffic. The figure shows the logical network architecture for an ESXi host.


10. Network Management Design

This section describes the logical design for network management options. The physical design is presented on the Networking (Switch & Portgroup) and NIOC Settings tabs of the vSphere Configuration Workbook.

10.1 Decisions for the Network Management Design

This section lists the design decisions for the network management design.

10.1.1 Network Management Design Decisions

The Gallery made the decisions listed in the following table.

Table: Network Management Design Decisions

Decision Design Justification Design Implication

NIC Teaming

NIC Teaming will be configured for all virtual switches.

Redundancy and preventing a single point of failure is required for the design.

More physical NICs required for configuration.

Load balancing policy for NIC teaming

The following load-balancing policies will be used:

• For standard switches, Route Based on Originating Virtual Port

• For distributed switches, Route Based on Physical NIC Load

The Gallery will use load-balancing policies that follow VMware best practices.

None.

Network I/O Control

Network I/O Control will be used.

Using Network I/O Control allows for prioritization to occur for network traffic. Also, The Gallery will be able to efficiently utilize their 10 GigE networks.

None.

Traffic filtering and QoS tagging

Traffic filtering and QoS tagging will not be used.

No use cases currently exist for using this feature.

None.

Use of CDP and LLDP

CDP and LLDP will not be used.

The Gallery will use other tools for gathering networking information and troubleshooting network issues.

None.

Page 33 of 46

Page 34: Virtualization Architecture Design - · PDF fileVirtualization Architecture Design • Business decision makers • Core technical teams, such as product development, server, storage,

Virtualization Architecture Design

Decision Design Justification Design Implication

Use of NetFlow and port mirroring

NetFlow and port mirroring will be used.

The security team will use NetFlow, when necessary, to monitor virtual machine traffic.

Port mirroring will be used, when necessary, to capture network packets for troubleshooting purposes.

None.

LACP

LACP will be used. The Gallery uses LACP in their physical environment, and will leverage this functionality in the virtual environment.

None.

Use of IPv6 addresses

IPv6 addresses will not be used.

No use cases currently exist for using this feature.

None.


10.2 Network Management Logical Design

The following figure represents the logical design for NIC teaming and failover.

Figure: NIC Teaming and Failover Logical Design
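
The two policies selected in Section 10.1 can be contrasted with a simplified conceptual sketch (this is not the ESXi implementation): Route Based on Originating Virtual Port pins each virtual port to an uplink with a static modulo mapping, while Route Based on Physical NIC Load starts from a similar mapping but moves ports off an uplink that remains above roughly 75 percent utilization, evaluated about every 30 seconds on a distributed switch. The per-port load share below is a hypothetical placeholder.

```python
# Simplified conceptual sketch (not ESXi source code) contrasting the two
# selected NIC teaming load-balancing policies.

def uplink_by_port_id(port_id: int, uplinks: list) -> str:
    """Route Based on Originating Virtual Port: a static modulo mapping,
    as used on the standard switches in this design."""
    return uplinks[port_id % len(uplinks)]

def rebalance_by_load(assignment: dict, load: dict, uplinks: list,
                      threshold: float = 0.75, port_share: float = 0.20) -> dict:
    """One pass in the spirit of Route Based on Physical NIC Load: move a
    port off any uplink above the utilization threshold to the least-loaded
    uplink. Real LBT weighs actual per-port traffic; port_share is a
    hypothetical per-port load used only for illustration."""
    load = dict(load)                  # do not mutate the caller's view
    result = dict(assignment)
    for port, uplink in assignment.items():
        if load[uplink] > threshold:
            target = min(uplinks, key=load.get)
            if target != uplink:
                result[port] = target
                load[uplink] -= port_share
                load[target] += port_share
    return result

uplinks = ["vmnic0", "vmnic1"]
ports = {pid: uplink_by_port_id(pid, uplinks) for pid in range(4)}
print(ports)  # {0: 'vmnic0', 1: 'vmnic1', 2: 'vmnic0', 3: 'vmnic1'}
# With vmnic0 at 90% utilization, one port is moved off it:
print(rebalance_by_load(ports, {"vmnic0": 0.90, "vmnic1": 0.20}, uplinks))
```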

The following figure represents the logical design for Network I/O Control.

Figure: Network I/O Control Logical Design
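
The principle behind the Network I/O Control decision can be illustrated with a short sketch: when an uplink is saturated, each active traffic type receives bandwidth in proportion to its configured shares, while an uncontended traffic type may still use any available bandwidth. The share values below are hypothetical placeholders, not the values of this design (those are recorded on the NIOC Settings tab of the vSphere Configuration Workbook).

```python
# Illustrative sketch of share-based bandwidth allocation, the principle
# behind Network I/O Control. Shares matter only under contention.

def nioc_allocation(shares: dict, link_gbps: float = 10.0) -> dict:
    """Split a saturated uplink among active traffic types by share ratio."""
    total = sum(shares.values())
    return {traffic: link_gbps * value / total for traffic, value in shares.items()}

# Hypothetical share values for a 10 GigE uplink:
active = {"management": 50, "vmotion": 50, "virtual_machine": 100, "ft": 50}
for traffic, gbps in nioc_allocation(active).items():
    print(f"{traffic}: {gbps:.1f} Gbps minimum under contention")
```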


11. Virtual Machine Design

This section describes the logical design for virtual machines. The physical design is presented on the VM Workload Summary tab of the vSphere Configuration Workbook.

11.1 Decisions for the Virtual Machine Design

This section lists the design decisions for the virtual machine design.

11.1.1 Virtual Machine Design Decisions

The Gallery made the decisions listed in the following table.

Table: Virtual Machine Design Decisions

Decision: Virtual machine hardware version
Design: Virtual machine hardware version 11 will be used.
Justification: The Gallery will install ESXi 6.0, which supports hardware version 11.
Implication: Virtual machines cannot be migrated to hosts running older versions of ESXi.

Decision: VMware Tools
Design: VMware Tools will be installed in all virtual machines.
Justification: VMware Tools adds appropriate drivers and enhancements to the guest operating system.
Implication: VMware Tools is not available for all operating systems.

Decision: Single versus multiple vCPUs
Design: A single vCPU will be used for all virtual machines unless otherwise required by the software.
Justification: If an application does not use multiple CPUs, there is little benefit to configuring them.
Implication: None.

Decision: Use of shares, reservations, and limits
Design: Shares, reservations, and limits will not be used.
Justification: Although useful for specific use cases, they can limit the scalability of the hosts.
Implication: None.

Decision: Single versus multiple virtual disks
Design: Separate virtual disks will be used for the operating system and the application data.
Justification: Separate virtual disks simplify the backup infrastructure.
Implication: None.

Decision: Placement of virtual machine files (single datastore, multiple datastores, shared storage)
Design: System data and application data will be kept together on the same shared datastore; virtual machine disks are stored on shared storage with the other virtual machine files.
Justification: Keeping virtual disks together simplifies data replication, and shared storage allows for a simplified management structure for virtual machines.
Implication: None.

Decision: Use of thin-provisioned disks
Design: The Gallery will not use thin-provisioned disks.
Justification: Thin provisioning will be deployed at the storage array instead.
Implication: None.

Decision: Use of raw device mappings (RDMs)
Design: RDMs will not be used.
Justification: RDMs introduce an extra layer to storage management, The Gallery does not plan to implement any applications that require them, and no other use cases currently exist.
Implication: Storage management is made simpler.

Decision: Location of virtual machine swap file
Design: Virtual machine swap files will be stored on a different datastore than the rest of the virtual machine's files.
Justification: This improves replication performance. Replication will be implemented as part of a future disaster recovery project.
Implication: None.

Decision: Use of Flash Read Cache
Design: Flash Read Cache will not be used.
Justification: No use cases exist for this feature.
Implication: None.

Decision: Virtual SCSI HBA type
Design: Virtual machines will use the default HBA for their disks unless there is a specific reason to change it.
Justification: Simplicity of management.
Implication: None.

Decision: Virtual NIC type
Design: Virtual machines will use the default NIC for the operating system. If enhanced performance is required, VMXNET3 will be used.
Justification: Simplicity of management.
Implication: Additional features of the VMXNET3 driver will not be available unless configured.

Decision: Virtual GPUs
Design: Virtual GPUs will not be used.
Justification: No use case currently exists for graphics hardware acceleration.
Implication: No graphics hardware acceleration will be available.


12. Infrastructure Security Design

This section describes the logical design for infrastructure security. The physical design is presented on the Roles and Credentials tab of the vSphere Configuration Workbook.

12.1 Decisions for the Infrastructure Security Design

This section lists the design decisions for the infrastructure security design.

12.1.1 Infrastructure Security Design Decisions

The Gallery made the decisions listed in the following table.

Table: Infrastructure Security Design Decisions

Decision: vCenter Server security
Design: vSphere administrators will access vCenter Server using the vSphere Web Client. Active Directory users (administrators) will be assigned the appropriate vCenter Server privileges.
Justification: Restricting direct access to the vCenter Server system and using Active Directory users (instead of local or vCenter Server users) complies with internal security policies.
Implication: None.

Decision: ESXi host security
Design: vSphere administrators will access ESXi hosts through the vSphere Web Client.
Justification: Restricting direct access to ESXi hosts complies with internal security policies. Direct access will be allowed only for troubleshooting purposes.
Implication: None.

Decision: ESXi host lockdown mode (strict, normal, or disabled)
Design: Lockdown mode will not be used.
Justification: ESXi already complies with internal security policies; therefore, lockdown mode is not required.
Implication: None.

Decision: ESXi Shell and SSH services
Design: The ESXi Shell and SSH services will be disabled.
Justification: The majority of ESXi administration can be performed with the vSphere Web Client.
Implication: None.

Decision: Certificates (VMCA or third-party)
Design: VMCA certificates will be used.
Justification: VMware CA-signed certificates are sufficient and comply with internal security policies.
Implication: None.

Decision: Storage (iSCSI, NFS, FC) security
Design: IP storage security is not needed.
Justification: The Gallery does not use IP storage.
Implication: None.

Decision: Virtual machine network security
Design: Virtual networks will be secured by using firewalls.
Justification: Placing firewalls between the management network and the user interface clients complies with internal security policies.
Implication: None.

Decision: Virtual switch port security (promiscuous mode, MAC address changes, forged transmits)
Design: Promiscuous mode traffic, MAC address changes, and forged transmits will all be rejected.
Justification: These virtual switch settings provide an added level of security.
Implication: None.

Decision: Virtual machine security
Design: McAfee anti-virus software will be used. VMware vShield Endpoint will be evaluated for possible future use.
Justification: The Gallery currently uses McAfee as its anti-virus solution in the physical environment.
Implication: None.

13. vSphere Update Manager Design

This section describes the logical design of the vSphere Update Manager systems and databases used for the design. The physical design is presented on the vSphere Update Manager tab of the vSphere Configuration Workbook.

13.1 Decisions for the vSphere Update Manager System Design

This section lists the design decisions for the vSphere Update Manager systems.

13.1.1 vSphere Update Manager Design Decisions

The Gallery made the decisions listed in the following table.

Table: vSphere Update Manager Design Decisions

Decision: Number of vSphere Update Manager instances
Design: Two vSphere Update Manager instances will be deployed.
Justification: A one-to-one mapping of vCenter Server to vSphere Update Manager is required.
Implication: None.

Decision: vSphere Update Manager deployment model
Design: The medium deployment model will be used: vSphere Update Manager is installed on the same host as vCenter Server, and a database separate from the vCenter Server database will be used.
Justification: This model was determined by using the vSphere Update Manager Sizing Estimator spreadsheet.
Implication: None.

Decision: vSphere Update Manager database
Design: A Microsoft SQL Server database instance will be created for vSphere Update Manager.
Justification: The Gallery currently uses Microsoft SQL Server databases in its environment, and the SQL Server specification matches the database logical specifications in Section 13.2.
Implication: None.

Decision: Patch download model (Internet-connected or air gap)
Design: The Internet-connected model will be used.
Justification: Direct access to the Internet is available.
Implication: None.

Decision: Download settings (default sources and/or additional sources)
Design: vSphere Update Manager will use the default download sources provided by VMware.
Justification: No additional third-party patch sources are required.
Implication: None.

Decision: Download schedule
Design: The patch download schedule will be changed to every Sunday at 12:00 AM GMT.
Justification: The Gallery must download patches during non-peak hours.
Implication: None.

Decision: ESXi host/cluster settings
Design: The VM Power State value will be set to Do Not Change VM Power State.
Justification: This value is the safest and ensures the highest uptime.
Implication: Manual intervention will be required if a virtual machine migration fails.

Design: Parallel remediation of hosts will be allowed, assuming enough resources are available to support the operation.
Justification: Remediation of host patches can occur more quickly.
Implication: During remediation, more resources are unavailable at the same time.

Design: Powered-off virtual machines will not be migrated.
Justification: This reduces the amount of time needed to start the remediation.
Implication: Any powered-off virtual machine is unavailable until its host comes back online.


Decision: Baselines to use and/or create
Design: Default baselines for critical and non-critical patches will be configured for all vSphere clusters.
Justification: The Gallery does not require any custom baselines.
Implication: All patches are added to the baselines as soon as they are released.

Decision: Updating guest operating systems and applications
Design: The Gallery will use existing processes to update guest operating systems and applications.
Justification: Less training is required, and there is less disruption to current processes.
Implication: None.


13.2 vSphere Update Manager System Logical Design

The Gallery will install vSphere Update Manager on the vCenter Server system. The following tables list the logical specifications for the vSphere Update Manager system and the vSphere Update Manager database.

Table: vSphere Update Manager Logical Specifications

vSphere Update Manager version: 6.0
Physical or virtual system: Virtual
Number of CPUs: 4
Processor type: VMware vCPU
Processor speed: N/A
Memory: 16 GB
Number of NICs and ports: 1/1
Number of disks and disk sizes: 2 disks – 40 GB (C:) and 60 GB (D:)
Operating system and SP level: Windows Server 2012 R2 Standard

Table: vSphere Update Manager Database Logical Specifications

Vendor and version: Microsoft SQL Server 2012 SP2
Authentication method: SQL Server Authentication
Recovery model: Full
Database auto growth: Enabled, in 1 MB increments
Transaction log auto growth: In 10% increments; restricted to a 2 GB maximum size
Estimated vSphere Update Manager database size: 30 GB
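
As a capacity-planning aside, the 10 percent transaction-log auto-growth with a 2 GB cap implies geometric growth; the quick sketch below (assuming a hypothetical 100 MB initial log size, which is not specified in this design) counts how many growth events fit under the cap.

```python
# Quick arithmetic sketch of the transaction-log auto-growth setting:
# 10% increments, capped at 2 GB. The 100 MB starting size is a
# hypothetical value for illustration only.

def log_growth_events(initial_mb: float, cap_mb: float = 2048.0, rate: float = 0.10):
    """Count auto-growth events until the next growth would exceed the cap."""
    size, events = initial_mb, 0
    while size * (1 + rate) <= cap_mb:
        size *= 1 + rate
        events += 1
    return events, size

events, final_mb = log_growth_events(100.0)
print(f"{events} growth events before reaching the 2 GB cap (~{final_mb:.0f} MB)")
```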


14. Infrastructure Management Design

This section describes the logical design for the infrastructure management architecture.

14.1 Decisions for the Infrastructure Management Design

This section lists the design decisions for the infrastructure management architecture.

14.1.1 Infrastructure Management Design Decisions

The Gallery made the decisions listed in the following table.

Table: Infrastructure Management Design Decisions

Decision: Method to install and configure ESXi hosts
Design: ESXi hosts will be installed manually.
Justification: Manual installation is simple, and no scripting skills are required.
Implication: Manual installations might take longer to perform than scripted installations.

Decision: Host profiles
Design: Host profiles will be used to ensure that all managed hosts have a uniform configuration.
Justification: Host profiles allow for a uniform configuration as well as compliance checks on hosts.
Implication: None.

Decision: Template management
Design: The content library will be used to provide a single source for all templates and media files across data centers.
Justification: This simplifies the design for maintaining templates and media in the environment.
Implication: None.

Decision: Virtual machine snapshots (vmfsSparse, vsanSparse)
Design: vmfsSparse and vsanSparse snapshots will be allowed, and policies will be established to limit the use of snapshots in the production and test/dev environments.
Justification: Snapshots cause additional administrative overhead and resource utilization.
Implication: Storage usage should be carefully monitored to ensure that sufficient space exists for all workloads.

Decision: Use of CIM and SNMP
Design: CIM and SNMP will not be used.
Justification: No use cases exist for either of these features.
Implication: None.

Decision: Settings for tasks and events retention policies
Design: The task retention and event retention settings will be kept at the default of 30 days.
Justification: This setting ensures that the database is not overrun with old state data.
Implication: This setting might need to be changed based on the requirements of each environment (production, test/dev).

Decision: Statistics collection levels
Design: The statistics collection level will remain at the default level, 1.
Justification: Additional performance information is not required in this environment.
Implication: Not all statistics will be available for troubleshooting if problems occur.

Decision: Management tools (vSphere tools, vRealize tools, others)
Design: vRealize Operations Manager will be used. Other than vSphere and vRealize Operations Manager, no other VMware management solutions will be used. The Gallery will evaluate other VMware management solutions in the near future.
Justification: Additional performance monitoring details are available from vRealize Operations Manager.
Implication: Additional licenses are potentially required.


15. Infrastructure Recoverability Design

This section describes the logical design for the infrastructure backup and recovery architecture.

15.1 Decisions for the Infrastructure Recoverability Design

This section lists the design decisions for the infrastructure backup and recovery architecture.

15.1.1 Infrastructure Recoverability Design Decisions

The Gallery made the decisions listed in the following table.

Table: Infrastructure Recoverability Design Decisions

Decision: ESXi host backup and recovery
Design: ESXi hosts will be manually reinstalled, and host profiles will be used to restore configurations.
Justification: The time it takes to reinstall an ESXi host is minimal, and using host profiles to restore the configuration simplifies the design.
Implication: vSphere administrators can use host profiles to provision ESXi hosts on demand with minimal configuration effort.

Decision: vCenter Server backup and recovery
Design: The vCenter Server and vSphere Update Manager databases will be backed up nightly using the existing backup methods. Certificates and configuration files will also be backed up.
Justification: A database backup policy is already in place for the SQL Server databases.
Implication: Regular vCenter Server database backups are critical in case of operating system or data corruption.

Decision: Distributed switch backup and recovery
Design: All distributed switch configurations will be exported to a local machine.
Justification: Exporting and importing distributed switch configurations is a simple procedure that will save time.
Implication: None.

Decision: Resource pool backup and recovery
Design: Resource pool tree snapshots will not be created.
Justification: The Gallery will not use resource pools in its design.
Implication: None.

Decision: Virtual machine backup and recovery
Design: Virtual machines will be backed up in accordance with the company's corporate backup policy.
Justification: The Gallery wants to leverage the backup tools that are currently used in the physical environment.
Implication: None.

Decision: Use of vSphere Replication
Design: vSphere Replication will be used on specific production and dev workloads.
Justification: The Gallery will start using vSphere Replication on a specific set of workloads; doing so will prepare the team for the future disaster recovery implementation.
Implication: None.
