
Using EMC VNX Storage with VMware vSphere

    Version 1.0

    Configuring VMware vSphere on VNX Storage

    Cloning Virtual Machines

    Establishing a Backup and Recovery Plan for VMware

    vSphere on VNX Storage

    Using VMware vSphere in Data Restart Solutions

    Using VMware vSphere for Data Vaulting and Migration

    Jeff Purcell


    Copyright 2011 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

    For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

    All other trademarks used herein are the property of their respective owners.

    h8229


Contents

Chapter 1 Configuring VMware vSphere on VNX Storage

    Introduction
    Management options
    VMware vSphere on EMC VNX configuration road map
    VMware vSphere installation
    VMware vSphere boot from storage
    Unified storage considerations
    Network considerations
    Storage multipathing considerations
    VMware vSphere configuration
    Provisioning file storage for NFS datastores
    Provisioning block storage for VMFS datastores and RDM volumes (FC, iSCSI, FCoE)
    Virtual machine considerations
    Monitor and manage storage
    Storage efficiency

    Chapter 2 Cloning Virtual Machines

    Introduction
    Using EMC VNX cloning technologies
    Summary

Chapter 3 Establishing a Backup and Recovery Plan for VMware vSphere on VNX Storage

    Introduction
    Virtual machine data consistency
    VNX native backup and recovery options
    Backup and recovery of a VMFS datastore


    Backup and recovery of RDM volumes
    Replication Manager
    vStorage APIs for Data Protection
    Backup and recovery using VMware Data Recovery
    Backup and recovery using Avamar
    Backup and recovery using NetWorker
    Summary

    Chapter 4 Using VMware vSphere in Data Restart Solutions

    Introduction
    Definitions
    EMC remote replication technology overview
    RDM volume replication
    Replication Manager
    Automating Site Failover with SRM and VNX
    Summary

    Chapter 5 Using VMware vSphere for Data Vaulting and Migration

    Introduction
    EMC SAN Copy interoperability with VMware file systems
    SAN Copy interoperability with virtual machines using RDM
    Using SAN Copy for data vaulting
    Transitional disk copies to cloned virtual machines
    SAN Copy for data migration from CLARiiON arrays
    SAN Copy for data migration to VNX arrays
    Summary


Figures

    1  VNX storage with VMware vSphere
    2  EMC Unisphere
    3  VSI Feature Manager
    4  Storage Viewer presentation of VNX NFS datastore details
    5  Storage Viewer presentation of VNX block storage details
    6  Configuration road map
    7  Manual assignment of host logical unit for ESXi boot device
    8  iSCSI port management
    9  iBFT interface for VNX target configuration
    10 VNX FAST VP reporting and management interface
    11 Disk Provisioning Wizard for file storage
    12 Creation of a striped volume through Unisphere
    13 Spanned VMFS-3 tolerance to missing physical extent
    14 FC/FCoE topology when connecting VNX storage to an ESXi host
    15 iSCSI topology when connecting VNX storage to an ESXi host
    16 Single virtual switch iSCSI configuration
    17 VSI Path Management multipath configuration feature
    18 Multipathing configuration with NFS
    19 Unisphere interface
    20 Data Mover link aggregation for NFS server
    21 vSphere networking configuration
    22 vSwitch1 Properties screen
    23 VMkernel Properties screen
    24 VMkernel port configuration
    25 Virtual disk shares configuration
    26 SIOC latency window
    27 Network Resource Allocation interface
    28 File storage provisioning with USM
    29 Creating a new NFS datastore with USM
    30 Block storage provisioning with USM
    31 Creating a new VMFS datastore with USM
    32 Select the disk
    33 Guest disk alignment validation
    34 NTFS data partition alignment (wmic command)
    35 Output of a Linux partition aligned to a 1 MB disk boundary (starting sector 2048)
    36 Output for an unaligned Linux partition (starting sector 63)
    37 Enable NPIV for a virtual machine after adding an RDM volume
    38 Manually register virtual machine (virtual WWN) initiator records
    39 Actions tab
    40 Storage Viewer: Datastores view - VMFS datastore
    41 Adjustable percent full threshold for the storage pool
    42 Create Storage Usage Notification interface
    43 User-defined storage usage notifications
    44 User-defined storage projection notifications
    45 Thick or zeroedthick virtual disk allocation
    46 Thin virtual disk allocation
    47 Virtual machine disk creation wizard
    48 Virtual machine out-of-space error message
    49 File system Thin Provisioning with EMC VSI: USM feature
    50 Provisioning policy for an NFS virtual machine's virtual disk
    51 LUN Compression property configuration
    52 Performing a consistent clone fracture operation
    53 Create a SnapView session to create a copy of a VMware file system
    54 Assign a new signature
    55 Create a writeable checkpoint for a NAS datastore
    56 ShowChildFsRoot parameter properties in Unisphere
    57 Snapshot Configuration Wizard
    58 Snapshot Configuration Wizard (continued)
    59 Replication Manager Job Wizard
    60 Replica Properties in Replication Manager
    61 Read-only copy of the datastore view in the vSphere client
    62 VADP flow diagram
    63 VMware Data Recovery
    64 VDR backup process
    65 Sample Avamar environment
    66 Sample proxy configuration
    67 Avamar backup management configuration options
    68 Avamar virtual machine image restore
    69 Avamar browse tree
    70 NetWorker virtualization topology view
    71 VADP snapshot
    72 NetWorker configuration settings for VADP
    73 NDMP recovery using NetWorker
    74 Backup with integrated checkpoint
    75 Replication Wizard
    76 Replication Wizard (continued)
    77 Preserving dependent-write consistency with MirrorView consistency group technology
    78 EMC VMware Unisphere interface
    79 Business continuity solution using MirrorView/S in a virtual infrastructure with VMFS
    80 EMC RecoverPoint architecture overview
    81 Disabling VAAI support on an ESXi host
    82 NFS replication using Replication Manager
    83 Registering a virtual machine with ESXi
    84 VMware vCenter SRM configuration
    85 SRM discovery plan
    86 MVIV reporting for SRM environments
    87 Data vaulting solution using incremental SAN Copy in a virtual infrastructure
    88 Minimum performance penalty data vaulting solution using incremental SAN Copy
    89 Identifying the canonical name associated with VMware file systems
    90 Using Unisphere CLI/Agent to map the canonical name to EMC VNX devices
    91 Creating an incremental SAN Copy session
    92 Creating an incremental SAN Copy session (continued)
    93 Creating a SAN Copy session to migrate data to a VNX storage array

Tables

    1  VNX disk types
    2  RAID comparison table
    3  Single-LUN and Multi-LUN datastore comparison
    4  Allocation policies when creating new virtual disks on a VMware datastore
    5  VNX-based technologies for virtual machine cloning options
    6  Backup and recovery options
    7  EMC VMware replication options
    8  VNX MirrorView limits
    9  EMC RecoverPoint feature support
    10 VNX to virtual machine RDM
    11 Data replication solutions

    Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC representative.

Note: This document was accurate as of the time of publication. However, as information is added, new versions of this document may be released to the EMC Powerlink website. Check the Powerlink website to ensure that you are using the latest version of this document.

Audience
This TechBook describes how VMware vSphere works with the EMC VNX series. The content in this TechBook is intended for storage administrators, system administrators, and VMware vSphere administrators.

Note: Although this document focuses on VNX storage, most of the content also applies when using vSphere with EMC Celerra or EMC CLARiiON storage.

Note: In this document, ESXi refers to VMware ESX Server versions 4.0 and 4.1. Unless explicitly stated, ESXi 4.x, ESX 4.x, and ESXi are synonymous.


Individuals involved in acquiring, managing, or operating EMC VNX storage arrays and host devices can also benefit from this TechBook. Readers with knowledge of the following topics will benefit:

    EMC VNX series

    EMC Unisphere

    EMC Virtual Storage Integrator (VSI) for VMware vSphere

    VMware vSphere 4.0 and 4.1

Related documentation

    The following EMC publications provide additional information:

EMC CLARiiON Asymmetric Active/Active Feature (ALUA)

EMC VSI for VMware vSphere: Path Management Product Guide

EMC VSI for VMware vSphere: Path Management Release Notes

EMC VSI for VMware vSphere: Unified Storage Management Product Guide

EMC VSI for VMware vSphere: Unified Storage Management Release Notes

EMC VSI for VMware vSphere: Storage Viewer Product Guide

EMC VSI for VMware vSphere: Storage Viewer Release Notes

Migrating Data From an EMC CLARiiON Array to a VNX Platform using SAN Copy (white paper)

The following links to the VMware website provide more information about VMware products:

    http://www.vmware.com/products/

    http://www.vmware.com/support/pubs/vs_pubs.html

    The following document is available on the VMware web site:

    vSphere iSCSI SAN Configuration Guide

Conventions used in this document

    EMC uses the following conventions for special notices:

DANGER indicates a hazardous situation which, if not avoided, will result in death or serious injury.


WARNING indicates a hazardous situation which, if not avoided, could result in death or serious injury.

CAUTION, used with the safety alert symbol, indicates a hazardous situation which, if not avoided, could result in minor or moderate injury.

    NOTICE is used to address practices not related to personal injury.

    Note: A note presents information that is important, but not hazard-related.

    IMPORTANT

An important notice contains information essential to software or hardware operation.

Typographical conventions
EMC uses the following type style conventions in this document:

Normal          Used in running (nonprocedural) text for:
                - Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
                - Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, utilities
                - URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, notifications

Bold            Used in running (nonprocedural) text for:
                - Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, man pages
                Used in procedures for:
                - Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
                - What the user specifically selects, clicks, presses, or types

Italic          Used in all text (including procedures) for:
                - Full titles of publications referenced in text
                - Emphasis (for example, a new term)
                - Variables

Courier         Used for:
                - System output, such as an error message or script
                - URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold    Used for:
                - Specific user input (such as commands)

Courier italic  Used in procedures for:
                - Variables on the command line
                - User input variables

< >             Angle brackets enclose parameter or variable values supplied by the user

[ ]             Square brackets enclose optional values

|               Vertical bar indicates alternate selections; the bar means "or"

{ }             Braces indicate content that you must specify (that is, x or y or z)

...             Ellipses indicate nonessential information omitted from the example

We'd like to hear from you!
Your feedback on our TechBooks is important to us! We want our books to be as helpful and relevant as possible, so please feel free to send us your comments, opinions, and thoughts on this or any other TechBook:

[email protected]


Chapter 1
Configuring VMware vSphere on VNX Storage

This chapter contains the following topics:

    Introduction
    Management options
    VMware vSphere on EMC VNX configuration road map
    VMware vSphere installation
    VMware vSphere boot from storage
    Unified storage considerations
    Network considerations
    Storage multipathing considerations
    VMware vSphere configuration
    Provisioning file storage for NFS datastores
    Provisioning block storage for VMFS datastores and RDM volumes (FC, iSCSI, FCoE)
    Virtual machine considerations
    Monitor and manage storage
    Storage efficiency


    Introduction

The EMC VNX series delivers uncompromising scalability and flexibility for the midtier while providing market-leading simplicity and efficiency to minimize total cost of ownership. Customers can benefit from the following new VNX features:

Next-generation unified storage, optimized for virtualized applications.

Extended cache using Flash drives with FAST Cache.

Fully Automated Storage Tiering for Virtual Pools (FAST VP) that can be optimized for the highest system performance and lowest storage cost on block and file.

Multiprotocol support for file, block, and object with object access through Atmos Virtual Edition (Atmos VE).

Simplified management with EMC Unisphere for a single management framework for all NAS and SAN storage.

Up to three times improvement in performance with the latest Intel multicore CPUs, optimized with Flash.

6 Gb/s SAS back end with the latest drive technologies supported: Flash, SAS, and NL-SAS.

Expanded EMC UltraFlex I/O connectivity: Fibre Channel (FC), Internet Small Computer System Interface (iSCSI), Common Internet File System (CIFS), Network File System (NFS) including parallel NFS (pNFS), Multi-Path File System (MPFS), and Fibre Channel over Ethernet (FCoE) connectivity for converged networking over Ethernet.

The VNX series includes five new software suites and three new software packs, making it easier and simpler to attain the maximum overall benefits.

Storage alternatives
VMware vSphere supports storage device access for hosts and virtual machines using the FC, FCoE, iSCSI, and NFS protocols provided by the VNX platform. VNX provides the CIFS protocol for shared file systems in a Windows environment.


The VNX system supports one active SCSI transport type at a time. An ESXi host can connect to a VNX block system with any type of adapter. However, the adapters must all be the same type, for example, FC, FCoE, or iSCSI. Connecting to a single VNX with different types of SCSI adapters is not supported.

This restriction does not apply to NFS, which can be used in combination with any SCSI protocol.

VMware ESXi uses VNX SCSI devices to create VMFS datastores or raw device mapping (RDM) volumes. LUNs and NFS file systems are provisioned from VNX with Unisphere or through the VMware vSphere Client using the EMC VSI for VMware vSphere: Unified Storage Management (USM) feature. VNX platforms deliver a complete multiprotocol foundation for a VMware vSphere virtual data center, as shown in Figure 1.

    Figure 1 VNX storage with VMware vSphere


The VNX series is ideal for VMware vSphere in the midrange for the following reasons:

Provides configuration options for block (FC, FCoE, iSCSI) and file (NFS, CIFS) storage, allowing users to select the best option based upon capacity, performance, and cost.

Has a modular architecture allowing users to mix Flash, SAS, and Near-Line SAS (NL-SAS) drives to satisfy application storage requirements.

Scales quickly to address the storage needs of virtual machines on VMware ESXi servers.

Provides Unisphere and EMC Virtual Storage Integrator (VSI) for VMware vCenter as virtual machine management options. "Management options" on page 19 provides more information.

Provides no single point of failure and five 9s availability, which improves application availability.

VMware administrators can use the following features to manage virtual storage:

Thin Provisioning: Improves storage utilization and simplifies storage management by presenting virtual machines with sufficient capacity for an extended period of time.

File Compression: Improves the storage efficiency of file systems by compressing virtual disks.

File Deduplication: Eliminates redundant files in a file system.

LUN Compression: Condenses data to improve storage utilization on inactive LUNs.

FAST VP and FAST Cache: Automates sub-LUN data movement in the array to improve total cost of ownership.

EMC Replication Manager: Provides a single interface to provision and manage application-consistent virtual machine replicas on VNX platforms.

vStorage APIs for Array Integration (VAAI): Supports efficient SCSI LUN reservation methods that increase virtual machine scalability, and reduces I/O traffic between the host and the storage system during cloning or zeroing operations.


    Management options

VMware administrators can use Unisphere or the Virtual Storage Integrator (VSI) for VMware vSphere to manage VNX storage in virtual environments.

EMC Unisphere
Unisphere is a common web-enabled interface for remote management of EMC VNX, Celerra, and CLARiiON platforms. It offers a simple interface to manage file and block storage and easily maps storage objects to their corresponding virtual storage objects.

Unisphere has a modular architecture that enables users to integrate new features, such as RecoverPoint/SE management, into the Unisphere interface as shown in Figure 2.

    Figure 2 EMC Unisphere


VSI for VMware vSphere
Virtual Storage Integrator (VSI) is a vSphere Client plug-in that provides a single interface to manage EMC storage. The VSI framework enables discrete management components, which are identified as features, to be added to support the EMC products installed within the environment. This section describes the EMC VSI features that are most applicable to the VNX platform: Unified Storage Management, Storage Viewer, and Path Management.

    Figure 3 VSI Feature Manager

VSI Unified Storage Management
VMware administrators can use the VSI USM feature to provision and mount new datastores and RDM volumes. For NFS datastores, use this feature to do the following:

Provision new virtual machine replicas rapidly with full clones or space-efficient fast clones.

Initiate file system deduplication to reduce the storage consumption of virtual machines created on NFS file systems.

Simplify the creation of NFS datastores in accordance with best practices.

Mount NFS datastores automatically to one or more ESXi hosts.

Reduce the storage consumption of virtual machines using compression or Fast Clone technologies.


Reduce the copy creation time of virtual machines using the Full Clone technology.

For VMFS datastores and RDM volumes on block storage, use this feature to do the following:

Provision and mount new storage devices from storage pools or RAID groups.

Assign tiering policies on FAST VP LUNs.

Unmask VNX LUNs automatically to one or more ESXi hosts.

Create VMFS datastores and RDM volumes in accordance with best practices.

VSI Storage Viewer
Storage Viewer enables the vSphere Client to discover and identify VNX storage devices. This feature performs the following functions:

Merges data from several different storage mapping tools into seamless vSphere Client views.

Enables VMware administrators to relate VMFS, NFS, RDM, and virtual disk storage to the backing storage devices presented by VNX.

Presents VMware administrators with details of storage devices accessible to the ESXi hosts in the virtual data center.

Provides storage mapping and connectivity details for VNX storage devices.

Figure 4 on page 22 illustrates how Storage Viewer can be used to identify properties of an NFS datastore presented from a VNX storage system. Figure 5 on page 22 illustrates the use of Storage Viewer to identify the properties of VNX block devices.


    Figure 4 Storage Viewer presentation of VNX NFS datastore details

    Figure 5 Storage Viewer presentation of VNX block storage details


    VSI: Path Management

This feature displays multipath properties (including number of paths, state of paths, and the path management policy) for the VMware Native Multipathing plug-in (NMP) and PowerPath/VE. It enables administrators to do the following:

Change the multipath policy based on both storage class and virtualization object.

Maintain consistent multipath policies across a virtual data center containing a wide variety of storage devices.
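For reference, the same policy changes can also be applied from the ESX/ESXi 4.x command line with the native NMP namespace. This is a sketch only; the device identifier is a placeholder, and the SATP name should be verified in the environment (VNX LUNs presented in failover mode 4 are typically claimed by VMW_SATP_ALUA_CX):

    # List NMP devices with the SATP and path selection policy that currently claim them
    esxcli nmp device list

    # Set round robin on a single VNX LUN (the naa identifier is a placeholder)
    esxcli nmp device setpolicy --device naa.6006016012345678901234567890abcd --psp VMW_PSP_RR

    # Optionally make round robin the default for devices claimed by the VNX/CLARiiON ALUA SATP
    esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA_CX --psp VMW_PSP_RR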

The VSI framework and its features are freely available from EMC. Some features are specific to storage platforms such as Symmetrix DMX and VNX. The framework, features, and supporting documents can be obtained from the EMC Powerlink website located at: http://Powerlink.EMC.com/.


    VMware vSphere on EMC VNX configuration road map

Figure 6 displays the configuration steps for VNX storage with VMware vSphere.

    Figure 6 Configuration road map


The primary configuration blocks in Figure 6 on page 24 are:

NIC and FC/FCoE/iSCSI HBA driver configuration with vSphere: After installing ESXi, configure the physical interfaces used to connect the ESXi host to VNX. "ESXi IP and FC driver configuration" on page 64 provides more details.

VMkernel port configuration in vSphere: Configure the ESXi host VMkernel interface for IP storage connections to VNX NFS and iSCSI storage. "VMkernel port configuration in ESXi" on page 65 provides more details. (A command-line sketch follows the steps below.)

After you install and configure VMware ESXi, complete the following steps:

1. Ensure that network multipathing and failover are configured between ESXi and the VNX platform. "Storage multipathing considerations" on page 50 provides more details.

2. Complete the NFS, VMFS, and RDM configuration steps using EMC VSI for USM:

   a. NFS: Create and export the VNX file system to the ESXi host. Add NFS datastores to ESXi hosts from NFS file systems created on VNX. "Provisioning file storage for NFS datastores" on page 71 provides details to complete this procedure using USM.

   b. VMFS: Configure a VNX FC/FCoE/iSCSI LUN and present it to the ESXi server. Configure a VMFS datastore from the LUN that was provisioned from VNX. "Provisioning block storage for VMFS datastores and RDM volumes (FC, iSCSI, FCoE)" on page 76 provides details to complete this procedure using USM.

   c. RDM: Configure a VNX FC/FCoE/iSCSI LUN and present it to the ESXi server. Create and surface the LUN provisioned from VNX to a virtual machine for RDM use. "Provisioning block storage for VMFS datastores and RDM volumes (FC, iSCSI, FCoE)" on page 76 provides details to complete this procedure using USM.

3. Provision newly created virtual machines on NFS or VMFS datastores and optionally assign newly created RDM volumes.
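The VMkernel port configuration called out above can also be performed from the ESX/ESXi 4.x command line (or with the equivalent vicfg- commands in the vSphere CLI). The vSwitch, uplink, port group, and IP values below are examples only:

    # Create a vSwitch for IP storage and attach a dedicated uplink
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1

    # Add a VMkernel port group and assign it an address on the NFS/iSCSI storage subnet
    esxcfg-vswitch -A IPStorage vSwitch1
    esxcfg-vmknic -a -i 10.10.10.21 -n 255.255.255.0 IPStorage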


    VMware vSphere installation

Install ESXi on a local disk of the physical server, a SAN disk with a boot from SAN configuration, or a USB storage device. There are no special VNX considerations when installing the hypervisor image locally. However, consider the following:

Do not create additional VMFS partitions during the ESXi installation because the installer does not create aligned partitions. "Virtual machine disk partitions alignment" on page 80 provides more details.

Install a VMware vCenter host as part of the VMware vSphere and VMware Infrastructure suite.


    VMware vSphere boot from storage

ESXi offers installation options for USB, Flash, or a SCSI device. Installing a disk image on the SAN can improve performance and has the following benefits:

Increases the availability of the hypervisor in the virtual environment.

Places the configuration and environmental information on tier 1 storage to eliminate the impact of a local disk failure, which results in a host failure.

Distributes the images across multiple spindles.

Improves reliability through RAID-protected storage and optionally redundant host I/O paths to the boot device.

Makes host replacement as easy as a BIOS modification and zoning updates, resulting in minimal downtime.

VMware vSphere boot from SAN FC/FCoE LUNs
Complete the administrative tasks for host cabling and storage zoning to ensure that when the host is powered on, the HBAs log in to the storage controllers on the VNX platform.

If this is an initial installation and if zoning is complete, obtain the World Wide Names (WWNs) for the HBAs from the SAN switch or from the Unisphere Host Connectivity page after the host initiator logs in to the VNX SCSI targets:

1. Gather the information to configure the environment using the selected front-end ports on the array. This information should include:

   ESXi hostname

   IP addresses

   The HBA WWN, if available

   VNX management IP address and credentials

2. Power on the ESXi host.

3. Modify the host BIOS settings to disable internal devices that are not required and to establish the proper boot order.

4. Ensure that the following are enabled:

   Virtual floppy or CD-ROM device.


   Local device follows the CD-ROM in the boot order.

   For software iSCSI, the iSCSI adapter is enabled for iSCSI boot.

5. Enable the FC, FCoE, or iSCSI adapter as a boot device.

6. Verify that the adapter can access the VNX platform to show the properties of the array controllers.

7. Access the Unisphere interface to view the Host Connectivity Status and to verify that the adapters are logged in to the correct controllers and ports. In some cases, a rescan of the storage adapters is required to establish the SCSI IT nexus. Though vSphere is integrated with VNX to automatically register initiator records for a running ESXi server, boot from SAN requires manual registration of the HBAs. Select the new initiator records and manually register them using the fully qualified domain name of the host. ALUA mode (failover mode 4) is required for VAAI support.

Note: In some servers, the host initiators may not appear until the host operating system installation starts. Examples of this are ESXi installations and Cisco UCS, which lacks an HBA BIOS probe capability.

8. Create a LUN on which to install the boot image. The LUN need not be any larger than 20 GB. Do not store virtual machines within this LUN.

9. Create a storage group and add the host record and the new LUN to it.

10. Rescan the host adapter to discover whether the new device is accessible. If the LUN does not appear or appears as LUNZ, recheck the configuration and rescan the HBA.

11. Reserve a specific host LUN ID to identify the boot devices. For example, assign a host LUN number of 0 to LUNs that contain the boot volume. Using this approach makes it easy to differentiate the boot volume from other LUNs assigned to the host. If the host is accessing multiple storage systems, do not use the reserved HLU number when assigning LUNs.

    Figure 7 Manual assignment of host logical unit for ESXi boot device

12. Ensure that the CD-ROM/DVD-ROM/USB/virtual media is in the caddy and precedes the local device in the boot order.

13. Install the ESXi code, select the DGC device, and follow the installation steps to configure the host.
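Steps 8 through 11 can also be performed with the VNX Block CLI (naviseccli) instead of Unisphere. This is a sketch only; the SP address, pool name, LUN and HLU numbers, and host name are hypothetical, and the host record must already be registered (step 7):

    # Create a small boot LUN from an existing storage pool
    naviseccli -h 10.1.1.50 lun -create -type NonThin -capacity 20 -sq gb -poolName "Pool 0" -l 50 -name esx01_boot

    # Create a storage group, connect the registered host, and present the LUN as host LUN 0
    naviseccli -h 10.1.1.50 storagegroup -create -gname esx01_sg
    naviseccli -h 10.1.1.50 storagegroup -connecthost -host esx01.example.com -gname esx01_sg -o
    naviseccli -h 10.1.1.50 storagegroup -addhlu -gname esx01_sg -hlu 0 -alu 50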


VMware vSphere boot from SAN iSCSI LUNs
With ESXi 4.1, VMware introduced software iSCSI initiator boot support. Booting from the VNX platform, the iSCSI protocol provides many of the same benefits as FC storage. iSCSI is easier to configure and less expensive than Fibre Channel options. However, there may be a slight difference in response time because iSCSI is not a closed-loop protocol like FC.

The network card must support software initiator boot for this configuration to work properly. The card should support 1 Gigabit or greater for iSCSI SAN boot. The VMware HCL helps to verify whether the device is supported before beginning this procedure. Access the iSCSI adapter configuration utility during the system boot to configure the HBA:

    Set the IP Address and IQN name of the iSCSI initiator.

    Define the VNX iSCSI target address.

    Scan the target.

    Enable the boot settings and the target device.

The vendor documentation provides instructions to enable and configure the iSCSI adapter:

1. Some utilities use a default iSCSI Qualified Name (IQN). Each initiator requires a unique IQN for storage group assignment on the VNX platform.

2. Configure an iSCSI portal on the VNX platform using Unisphere.


    Figure 8 iSCSI port management

Unisphere provides support for jumbo frames with valid MTU values of 1488-9000. When enabling jumbo frames, ensure that all components in the I/O storage path from the host to the storage interface support jumbo frames, and that the MTU sizes of the interface card on the ESXi host, the network port, and the VNX port are consistent.

3. Configure the first iSCSI target by specifying the IP address and the IQN name of the VNX iSCSI port configured in the previous step. Optionally, specify the CHAP properties for additional security of the iSCSI session.

Figure 9 iBFT interface for VNX target configuration

4. Configure the secondary target using the address information for the iSCSI port on Storage Processor B of the VNX platform.

5. Using Unisphere:

   Register the new initiator record.

   Create a new storage group.

   Create a new boot LUN.

   Add the newly registered host to the storage group.

    6. At this point, proceed with the installation of the ESXi image.
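When jumbo frames are used for iSCSI (see the MTU note in step 2), the ESXi side must match the 9000-byte MTU end to end. A minimal sketch with ESX/ESXi 4.x commands; the vSwitch name, port group, and addresses are examples:

    # Set a 9000-byte MTU on the vSwitch carrying iSCSI traffic
    esxcfg-vswitch -m 9000 vSwitch2

    # VMkernel ports take their MTU at creation time, so create (or re-create) them with -m 9000
    esxcfg-vmknic -a -i 10.10.20.21 -n 255.255.255.0 -m 9000 iSCSI1

    # Verify that large frames reach the VNX iSCSI portal
    vmkping -s 8972 10.10.20.50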


    Unified storage considerations

Configuring the VNX array appropriately is critical to ensure a scalable, high-performance virtual environment. This section presents storage considerations when using VNX with vSphere.

With the introduction of storage pools and FAST VP, storage configuration is simplified so that storage devices can be created with differing service levels. The array handles data placement based upon the demands of the servers and their applications. Though pools have been introduced for simplicity and optimization, VNX preserves the RAID group option for internal storage devices used by VNX replication technologies, and for environments or applications with fixed resource reservations.


VNX supported disk types
Table 1 shows how the VNX platform enables users to mix drive types and sizes on the storage array and in storage pools to adequately support the applications.

    Table 1 VNX disk types

Type of drive: Flash drives
Available size: 100 GB, 200 GB
Benefit: Extreme performance; lowest latency
Suggested usage: Virtual machine applications with low response time and high-throughput requirements; large-capacity, high-performance VMware environments

Type of drive: Serial Attached SCSI (SAS)
Available size: 10k and 15k rpm; 300 GB and 600 GB
Benefit: Cost effective; better performance
Suggested usage: Most tier 1 and tier 2 business applications, such as SQL, Exchange, and performance-based virtual applications, that require a low response time and high throughput

Type of drive: NL-SAS drives
Available size: 7200 rpm; 1 TB and 2 TB
Benefit: Performance and reliability that is equivalent to SATA drives
Suggested usage: Back up the VMware environment and store virtual machine templates and ISO images; good solution for tier 2/3 applications with low throughput and response time requirements, that is, infrastructure services (DNS, AD, and similar applications)


RAID configuration options
VNX provides a wide range of RAID configuration algorithms to help address the performance and reliability requirements of VMware environments. RAID protection is provided within the VNX operating environment and used by all block and file devices. An understanding of the application and storage requirements in the computing environment helps to identify the appropriate RAID configuration. Table 2 on page 35 illustrates RAID options.

The storage and RAID algorithm chosen is largely based on the throughput and data protection requirements of the applications or virtual machines. The most attractive RAID configuration options for VMFS volumes are RAID 1/0, RAID 5, and RAID 6. Parity RAID provides the most efficient use of disk space to satisfy the requirements of the applications. RAID 1/0 provides higher transfer rates than RAID 5, but RAID 1/0 consumes more disk space. Based upon the testing performed in EMC labs, RAID 5 was chosen in most cases for virtual machine boot disks and the virtual disk storage used for application data. RAID 6 provides the highest level of protection against disk failure. It is used when extra protection is required.

    Table 2 RAID comparison table

Algorithm   Description                                              RAID group support   Pool support
RAID 0      Striped RAID (no data protection)                        X                    -
RAID 1      Data is mirrored across a pair of spindles               X                    -
RAID 1/0    Data is mirrored and striped across all spindles         X                    X
RAID 3      Striped with dedicated parity disk                       X                    -
RAID 5      Striped with distributed parity among all disks          X                    X
RAID 6      Striped with distributed double parity among all disks   X                    X


FAST VP
VNX FAST VP is the VNX feature that enables a single LUN to leverage the advantages of Flash, SAS, and NL-SAS drives through the use of pools. VNX supports three storage tiers, each using a different physical storage device type (Flash, SAS, and NL-SAS). Each tier offers unique advantages. FAST VP can leverage all three of these tiers at once or any two at a time.

Note: Rotational speed is not differentiated within a pool tier. Therefore, disks with different speeds can be assigned to the same pool tier. However, that is not a recommended configuration.

FAST VP provides automated sub-LUN-level tiering to classify and place data on the most appropriate storage class. FAST VP collects I/O activity statistics at a 1 GB granularity (known as a slice). It uses the relative activity level of each slice to determine tier placement. Very active slices are promoted to higher tiers of storage. Less frequently used slices are candidates for migration to lower tiers of storage. Slice migration is performed manually or through an automated scheduler.

FAST VP is beneficial because it adjusts to the changing use of data over time. As storage patterns change, FAST VP moves slices among the tiers, matching the needs of the VMware environment with the most appropriate class of storage. VNX FAST VP currently supports a single RAID type across all tiers in the pool. Additionally, the RAID configurations are constructed using five disks for RAID 5, and eight disks for RAID 1/0 and RAID 6 pools. Pool expansion should adhere to the configuration rules and grow in similar increments to avoid parity overhead and unbalanced LUN distribution. In Figure 10 on page 37, the tiering screen of Unisphere indicates that 47 GB of data has been identified to be moved to the Performance tier and 28 GB will be moved to the Extreme Performance tier. This action can be scheduled for automatic migration, or the slices can be relocated manually.

    Figure 10 VNX FAST VP reporting and management interface
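The per-LUN tiering policy shown in Figure 10 can also be set with the VNX Block CLI. This is a sketch only; the SP address and LUN number are examples, and the exact -tieringPolicy keywords should be confirmed against the naviseccli lun -modify help output for the VNX OE release in use:

    # Pin a LUN's data to the highest available tier
    naviseccli -h 10.1.1.50 lun -modify -l 25 -tieringPolicy highestAvailable -o

    # Return the LUN to fully automatic tiering
    naviseccli -h 10.1.1.50 lun -modify -l 25 -tieringPolicy autoTier -o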

VNX FAST Cache
FAST Cache is an optimization technology that can greatly improve the performance of the VMware environment by using Flash drives as a second-level cache. FAST Cache combines hard disk drive (HDD) storage with Flash drives, identifying and promoting the most frequently used data to the highest class of storage, thus providing an order of magnitude performance improvement for that data. It is dynamic in nature and operates on 64 KB extents. As data blocks within an extent are no longer accessed or the access patterns change, existing extents are destaged to HDD and replaced with higher-priority data.


vStorage API for Array Integration (VAAI)
VAAI storage integration improves the overall performance of the ESXi block storage environment by offloading storage-related tasks to the VNX platform. It provides functions that accelerate common vSphere tasks such as Storage vMotion. An ESXi host connected to a VAAI-capable target device passes the SCSI request to the array and monitors its progress throughout the task. Storage blocks are migrated within the array at an accelerated rate while limiting the impact on resources required by the host and the front-end ports of the VNX platform. The primary functions are as follows:

Copy: Initiated by vSphere Clone, Storage vMotion, and Deploy VM from Template tasks. With VAAI-enabled storage systems such as VNX, the host passes the copy request to the storage system, which performs the operation internally.

Zeroing of new blocks: Also called zero copy, this is used to fill data in a newly created Virtual Machine Disk (VMDK) file that contains sparse or unallocated space. Rather than copying large numbers of zeros into a new VMDK file, the hardware accelerated init feature instantaneously creates a file with the proper allocations and initializes the blocks to zero, reducing the amount of repetitive traffic over the fabric from the host to the array.

Hardware Accelerated Locking: Addresses datastore contention that results from virtual machine metadata operations such as create, boot, and update. The VAAI feature added an extent-based solution that enables metadata to be updated without locking the entire device. Heavy metadata operations, such as booting dozens of virtual machines within the same datastore, can take less time.

These VAAI capabilities improve storage efficiency and performance within the VMware environment. They enable dense datastore configurations with improved operational value.

EMC recommends using VAAI on all VMFS datastores; the functionality is enabled by default.
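The VAAI primitives correspond to ESXi advanced settings, each enabled (value 1) by default. On ESX/ESXi 4.1 they can be checked, and temporarily disabled for troubleshooting, with esxcfg-advcfg:

    # Query the current state of the three VAAI primitives (1 = enabled)
    esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
    esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
    esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking

    # Disable hardware-accelerated copy for troubleshooting, then re-enable it
    esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
    esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove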

VNX storage pools
VNX provides the capability to group disks into a higher-level storage abstraction called a storage pool. The VNX Operating Environment uses predefined optimization and performance templates to allocate available physical disks to file system and block storage pools.

A storage pool is created from a collection of disks within the VNX platform. Storage pools are segmented into 1 GB slices that are used to create LUNs.


The primary differences between pools and RAID groups are as follows:

Pools can span the physical boundaries associated with RAID groups.

Pools support Thin LUNs (TLUs).

When configured to use FAST VP, pools can use a combination of any disk type on the system.

Pools support LUN compression.

Management and configuration of storage pools are accomplished through Unisphere and the storage management wizards accessed in Unisphere.

    Figure 11 Disk Provisioning Wizard for file storage
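In addition to the Unisphere wizards, storage pools can be created with the Block CLI. This is a sketch only; the SP address, disk IDs (bus_enclosure_slot), RAID type, and pool name are hypothetical, and the switch names should be confirmed against the naviseccli storagepool help output for the release in use:

    # Create a five-disk RAID 5 pool for VMware datastores
    naviseccli -h 10.1.1.50 storagepool -create -disks 1_0_0 1_0_1 1_0_2 1_0_3 1_0_4 -rtype r_5 -name VMware_Pool

    # Confirm the pool state and available capacity
    naviseccli -h 10.1.1.50 storagepool -list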


Thick LUNs
A Thick LUN is the default device created when provisioning from a storage pool. Thick LUNs reserve storage space within the pool that is equal to the size of the LUN (additionally, there is a small amount of overhead for metadata). The pool space is protected and cannot be used by any other storage device. Because the space is guaranteed, a Thick LUN never encounters an out-of-space condition.

Thin LUNs
Thin LUNs (TLUs) are also created within storage pools. However, a TLU does not reserve or allocate any user space from the pool. Internal allocation reserves a few 1 GB storage pool slices when the LUN is created. No additional storage allocation occurs until the host or guest writes to the LUN. Select the Thin LUN checkbox in the LUN creation page of Unisphere to create a TLU.

Note: After a device is written to at the guest level, the blocks remain allocated until the device is deleted or migrated to another thin device. To free deleted blocks, you must compress the LUN.

The primary difference between Thick and Thin LUN types is the way storage is allocated within the pool. Thin LUNs reserve a 1 GB slice and then allocate 8 KB blocks from that slice on demand, when the host issues a new write to the LUN. Thick (direct) LUNs allocate space in 1 GB increments as new writes to the VMFS datastore are initiated. Another difference is in the pool reservation. While both storage types perform on-demand allocation, Thick LUN capacity is guaranteed within the pool and deducted from free space. Thin LUN capacity is not reserved or guaranteed within the storage pool, which is why monitoring free space of pools with Thin LUNs is important. Monitoring and alerting are covered in "Monitor and manage storage" on page 92 of this document. Since the goal of thin provisioning is economical use of storage resources, TLUs allocate space at a much more granular level than Thick LUNs: a 1 GB slice is reserved, and blocks are allocated from it in 8 KB increments as needed.

Comparison between pool LUNs and VNX OE for block LUNs

VNX OE for block (VNX OE) LUNs, or RAID Group LUNs, are the traditional storage devices that were used before the introduction of storage pools. VNX OE LUNs allocate all of the disk space in a RAID group at the time of creation. VNX OE LUNs are the only available option when creating a LUN from a RAID Group, and there is no thin option with VNX OE or RAID Group LUNs.


The use of pool LUNs provides a simplified configuration and storage provisioning option. Pools can be much larger than RAID Groups, and they support a broader variety of options, including FAST Cache and FAST VP support.

Additionally, pool LUNs make more efficient use of the storage space within VNX through intelligent placement that aligns data with the usage patterns of the applications. Pools provide a storage efficiency solution that is supported by FAST VP and FAST Cache.

VNX OE for block LUNs are optimized for performance, with all of the space allocated at creation time using contiguous space in the RAID Group. There is a high probability that VNX OE for block LUNs will have the best spatial locality of the three LUN types, which is an important consideration for achieving optimal performance from VNX storage. The next best performing option is Thick LUNs, which have better spatial locality than Thin LUNs.

Thin LUNs conserve space on the storage system at the cost of a potentially modest increase in seek time due to reduced locality of reference; this applies only to spinning media and not when Flash drives are in use.

Thick LUNs have a 10 percent performance overhead in comparison to VNX OE for block LUNs, whereas Thin LUNs can have up to a 50 percent overhead.

VNX for file volume management

Automatic Volume Management (AVM) and Manual Volume Management (MVM) are available for users to create and manage volumes and file systems for VMware. AVM and MVM allow users to do the following:

Create and aggregate different volume types into usable file system storage.

Divide, combine, or group volumes to meet specific configuration needs.

Manage VNX volumes and file systems without having to create and manage the underlying volumes.

AVM works well for most VMware deployments. Virtualized environments consisting of databases and e-mail servers can benefit from MVM because it provides an added measure of control in the selection and layout of the storage used to support the applications.


VNX for file considerations with Flash drives

A Flash drive uses single-level, cell-based flash technology suitable for high-performance and mission-critical applications. VNX supports 100 GB and 200 GB Flash drives, which are tuned-capacity drives. Consider the following when using Flash drives with VNX for file:

Enable the write cache and disable the read cache for Flash drive LUNs.

The only AVM pools supported with Flash drives are RAID 5 (4+1 or 8+1) or RAID 1/0 (1+1).

Create four LUNs per Flash drive RAID group and balance the ownership of the LUNs between the VNX storage processors. This recommendation is unique to Flash drives; traditional AVM configurations provide better spatial locality and performance when configured with two LUNs per RAID Group.

Use MVM to configure Flash drive volumes with drive configurations and requirements that are not offered through AVM (a CLI sketch follows this list).

Unlike with rotating media, striping across multiple dvols from the same Flash drive RAID group is supported.

    Set the stripe element size for the volume to 256 KB.
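As referenced above, when MVM is used for Flash drive volumes, the striped volume can also be created from the VNX Control Station CLI rather than through Unisphere. The following is a minimal sketch; the dvol names are hypothetical and the stripe size is expressed in bytes (262144 = 256 KB):

# nas_volume -name efd_stv1 -create -Stripe 262144 d20,d21,d22,d23

The resulting striped volume can then be used to build a metavolume and file system with the standard volume and file system commands.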

    Figure 12 Creation of a striped volume through Unisphere


LUN considerations with VNX and vSphere

Most vSphere configurations use VMFS datastores to support the folders and files that constitute the virtual machines. There are enhancements to VNX and vSphere that enable larger LUN sizes, and these are largely focused on the use of Flash drives for FAST Cache/FAST VP, VAAI support, and Storage I/O Control (SIOC).

    Separately, each feature improves a particular area of scalability:

Flash drives support a significant increase in IOPS with lower response times. A Flash drive provides 10 times the number of IOPS of other drive types, which is beneficial for Flash LUNs, FAST Cache, and FAST VP LUNs.

VAAI reduces the ESXi host resources required to perform vSphere storage-related administrative tasks. SIOC alleviates the condition that occurs when storage resources are taxed beyond required service levels; it mitigates the edge conditions that may occur during very heavy I/O periods and ensures that critical virtual machine applications receive the highest priority during bursty I/O periods.

If storage is configured using these options, larger LUN sizes can be used. The maximum LUN size for vSphere, without using extents, is approximately 2 TB (2 TB minus 512 bytes).

Environments without Flash drives and SIOC

Since SIOC requires an Enterprise Plus license, and not all systems have Flash drives, those environments need to be considered as well.

A single LUN can encounter resource contention because it forces the VMkernel to serially queue I/Os from all the virtual machines using the LUN. The VMware parameter Disk.SchedNumReqOutstanding prevents one virtual machine from monopolizing the FC queue. Nevertheless, there is an unpredictable elongation of response time when there is a long queue against the LUN.
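The current value of this parameter can be checked and adjusted from the ESXi console or vCLI. The following is a minimal sketch using the vSphere 4.x syntax; 32 is the default value and is shown only as an example:

# esxcfg-advcfg -g /Disk/SchedNumReqOutstanding
# esxcfg-advcfg -s 32 /Disk/SchedNumReqOutstanding

Keep this value consistent with the HBA queue depth so that one setting does not negate the other.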

The LUN sizes within these environments should be based on the performance requirements. The key criteria for deciding the LUN size are an understanding of the workload, the required IOPS for the applications and virtual machines, the response times of the applications, and the sizing for the peak periods of I/O activity. Balance the number of virtual machines running within a datastore against the I/O profile of the virtual machines and the capabilities of the storage devices.

Larger single-LUN implementations

The previous paragraph presents the traditional recommendations for single-LUN configurations. Several technologies have been introduced to alleviate the congestion that led to those recommendations.

Table 3 compares the use of single-LUN and multi-LUN configurations.

    Table 3 Single-LUN and Multi-LUN datastore comparison

VNX OE for block single LUN / MetaLUN benefits / Single LUN

Easier management.
One VMFS to manage unused storage.
Small management overhead.
Storage provisioning has to be on demand.
One VMFS to manage (spanned).
Similar to single LUN with potential for additional drives.
Can result in poor response time.
Single SP with no load balancing.
Multiple queues to storage ensure minimal response times.
Opportunity to perform manual load balancing.
Flash drive and FAST provide response time improvements.
Still limited to a single SP with no load balancing.
Limits the number of I/O-intensive virtual machines.
Multiple VMFS allows more virtual machines per ESXi server.
Response time of limited concern (can optimize).
Improved support for virtual machines with Flash drives.
All virtual machines share one LUN.
Cannot leverage all available storage functionality.
Multiple VMFS allow more virtual machines per ESXi server.
Response time of limited concern (can optimize).
All virtual machines share one LUN.


    General recommendations for storage sizing and configuration

VNX enables users, with knowledge of the anticipated I/O workload, to provide different service levels to virtual machines from a single storage platform. If workload details are not available, use the following general guidelines:

Use a VMware file system or NFS datastore to store virtual machine boot disks. Most modern operating systems generate minimal I/O to the boot disk, most of which is paging activity that is response-time sensitive. Separating the boot disks from application data mitigates the risk of response time elongation due to application-related I/O activity. If there is a significant number of virtual machine disks on the datastore, such as in a Virtual Desktop Infrastructure (VDI) environment, consider using a FAST Cache enabled LUN to mitigate boot storms and paging overhead.

When using Virtual Desktop configurations with linked clones, use FAST VP on RAID 5 protected 15k rpm drives with Flash drives to accommodate the hot regions of the VMFS file system.

Databases such as Microsoft SQL Server or Oracle use an active log or recovery data structure to track data changes. Store log files on a separate virtual disk in a RAID 1/0 or RAID 5 VMFS datastore, NFS datastore, or RDM device.

If a separate virtual disk is provided for applications (binaries, application log, and so on), configure the virtual disk to use RAID 5 protected devices on 15k rpm SAS drives. However, if the application performs extensive logging, a FAST Cache enabled 15k rpm SAS RAID 1/0 device may be more appropriate.

Ensure that the datastores are 80 percent or less full to enable administrators to quickly allocate space for user data and to accommodate VMware snapshots for making copies of the virtual machines.

Infrastructure servers, such as DNS, perform the vast majority of their activity using CPU and RAM. Therefore, low I/O activity is expected from virtual machines supporting the enterprise infrastructure functions. Use FAST VP Thin LUNs or NFS datastores with a combination of SAS and NL-SAS drives for these applications.


Use RAID 10 protected devices on Flash drives or 15k rpm SAS drives for virtual machines that are expected to have a write-intensive workload.

Use RAID 5 FAST VP pools with a combination of SAS and NL-SAS drives for large file servers with storage consumed by static files, because the I/O activity tends to be low.

Medium-size SAS drives, such as the 300 GB, 15k rpm drive, may be appropriate for these virtual machines.

Consider the 1 TB and 2 TB NL-SAS drives for virtual machines that are used for storing archived data.

Configure 7.2k rpm NL-SAS drives in RAID 6 mode. This is true for all drives equal to or greater than 1 TB.

Applications with hot regions of data can benefit from the addition of FAST Cache. FAST Cache warms and pulls heavily used data into a Flash storage device, where response time and IOPS are eight times faster than spinning media storage devices.

Allocate RAID 10 protected volumes, Flash drives, or FAST Cache to enhance the performance of virtual machines that generate a high small-block, random read I/O workload. Also consider dedicated RDM devices for these virtual machines.

Enable SIOC to control bursty conditions. Monitor SIOC response times within vSphere. If they are continually high, rebalance virtual machines using Storage vMotion.

Ensure VAAI is enabled to offload storage tasks to the VNX storage system.

Number of VMFS volumes in an ESXi host or cluster

Virtualization increases the utilization of IT assets. However, the fundamentals of managing information in the virtual environment are the same as those in the physical environment. Consider the following best practices.

VMFS supports the concatenation of multiple SCSI disks to create a single file system. Allocation schemes used in VMFS spread the data across all LUNs supporting the file system, thus exploiting all available spindles. Use this functionality when using VMware ESXi hosts with VNX platforms.

Note: If a member of a spanned VMFS-3 volume is unavailable, the datastore remains available for use, except for the data on the missing extent. An example of this situation is shown in Figure 13 on page 47.


Although the loss of a physical extent is not likely with VNX platforms, good change control mechanisms are required to prevent the inadvertent loss of access.

    Figure 13 Spanned VMFS-3 tolerance to missing physical extent

Use of VNX metaLUNs

A metaLUN is used to aggregate extents from separate RAID Groups into a single striped or concatenated LUN to overcome the physical space and performance limitations of a single RAID group. VNX storage pools enable multi-terabyte LUNs. Therefore, metaLUNs are most useful when an application requires a VNX OE LUN with reserved storage resources.

VNX metaLUNs can be used in conjunction with VMFS spanning, where multiple LUNs are striped at the VNX Operating Environment level and then concatenated as a VMFS volume to distribute the I/O load across all the disks.


    Network considerations

The VNX platform supports many network configuration options for VMware vSphere, including basic network topologies. This section lists items to consider before configuring the storage network for vSphere servers.

Note: Storage multipathing is an important network configuration topic. Review the information in Storage multipathing considerations on page 50 before configuring the storage network between vSphere and VNX.

Network equipment considerations

The considerations for network equipment are as follows:

Use CAT 6 cables rather than CAT 5/5e cables. Although GbE works on CAT 5 cables, they are less reliable and robust. Retransmissions recover from errors, but they have a more significant impact for IP storage than for general networking use cases.

With NFS datastores, use network switches that support a Multi-Chassis Link Aggregation technology such as cross-stack EtherChannel or Virtual Port Channeling. Multipathing considerations - NFS on page 56 provides more details.

With NFS datastores, use 10 GbE network equipment. Alternatively, use network equipment that includes a simple upgrade path from 1 GbE to 10 GbE.

With VMFS datastores over FC, consider using FCoE converged network switches and CNAs over 10 GbE links. These have similar fabric functionality and administration requirements as standard FC switches and HBAs, but at a lower cost. FCoE network considerations on page 49 provides more details.

IP-based network configuration considerations

The considerations for IP-based network configuration are as follows:

Dedicate a physical switch or an isolated network VLAN for IP storage connectivity to ensure that iSCSI and NFS I/O are not affected by other network traffic.

On network switches used for the storage network, enable flow control, enable spanning tree protocol with either RSTP or port-fast enabled, and restrict bridge protocol data units on storage network ports.


Configure jumbo frames for NFS and iSCSI to improve the performance of I/O-intensive workloads. Both VMware vSphere and VNX support jumbo frames for IP-based storage.

Set jumbo frames on ESXi, the physical network switch, and VNX to enable them end-to-end in the I/O path (see the example after this list).

Ensure that the Ethernet switches have the proper number of port buffers and other internals to properly support NFS and iSCSI traffic.
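The ESXi side of the end-to-end jumbo frame configuration referenced above can be applied from the console or vCLI. The following is a minimal sketch for a vSwitch and VMkernel port dedicated to IP storage; the vSwitch name, port group name, and addresses are hypothetical examples, and the matching MTU must also be set on the physical switch ports and the VNX interfaces:

# esxcfg-vswitch -m 9000 vSwitch1
# esxcfg-vswitch -A NFS-PG vSwitch1
# esxcfg-vmknic -a -i 10.6.121.183 -n 255.255.255.0 -m 9000 NFS-PG

The first command raises the MTU of the vSwitch, the second creates the port group, and the third creates a VMkernel port on that port group with a 9000-byte MTU.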

FCoE network considerations

Native Fibre Channel over Ethernet (FCoE) support, included with the VNX platform, offers a simplified physical cabling option between servers and other peripheral hardware components such as switches and storage subsystems. FCoE connectivity allows the general server IP-based traffic, and I/O to the storage system, to be carried in and out of the server through fewer, high-bandwidth IP-based physical connections.

Converged Network Adapters (CNAs) reduce the physical hardware footprint required to support the data traffic flowing into and out of the servers, while providing a high flow rate through the consolidated network. High-performance block I/O, previously handled through a separate FC-based network, can be merged onto a single IP-based network using CNAs that provide efficient FCoE support.

Additional configuration of the IP switches is necessary to enable FCoE data flow correctly. However, combining the IP and block I/O traffic on the same switch port does not compromise the server's ability to deliver equivalent service performance for applications on that server, because of the 10 Gb speed of the switch ports and the bandwidth capacity of these IP switches.

With the FCoE data frame support offloaded to the CNAs, there is no significant CPU or memory impact on the servers. Thus, application performance is not compromised by moving to the converged network, as opposed to managing a separate block I/O SAN traffic network and node-to-node IP traffic.

VNX includes 10 Gb FCoE connectivity options by adding expansion modules to the storage controllers. Configuration options on the VNX are minimal, and you must complete most management tasks at the IP switch to enable and trunk the FCoE ports. Configure a separate VLAN and trunk for all FCoE ports.


    Storage multipathing considerations

Multipathing and load balancing increase the level of availability for applications running on ESXi hosts. The VNX platform offers a nondisruptive upgrade (NDU) operation for the VMware native failover software and EMC PowerPath for block storage. In addition, configure VNX and vSphere advanced networking to increase storage availability and performance when accessing file storage.

Multipathing considerations - VMFS/RDM

When connecting an ESXi host to VNX storage with the FC/FCoE or iSCSI protocol, ensure that each HBA or network card has access to both storage processors. Figure 14 and Figure 15 on page 51 provide a common topology for FC/FCoE and iSCSI connectivity to the ESXi host.

    Figure 14 FC/FCoE topology when connecting VNX storage to an ESXi host


    Figure 15 iSCSI topology when connecting VNX storage to ESXi host

Note: The iSCSI hardware-initiator configuration is similar to the FC HBA configuration.

With port binding enabled, configure a single vSwitch with two NICs so that each NIC is bound to one VMkernel port. These NICs can be connected to the same SP port on the same subnet, as shown in Figure 16.

    Figure 16 Single virtual switch iSCSI configuration

After the iSCSI configuration is complete, use esxcli to activate the iSCSI multipathing connection with the following command:

# esxcli swiscsi nic add -n <vmk#> -d <vmhba#>

Run the following command to verify that the ports are added to the software iSCSI initiator:

# esxcli swiscsi nic list -d <vmhba#>
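For example, with the two VMkernel ports from the configuration above bound to the software iSCSI adapter, the commands might look like the following; the vmk and vmhba names are hypothetical and vary by host:

# esxcli swiscsi nic add -n vmk1 -d vmhba33
# esxcli swiscsi nic add -n vmk2 -d vmhba33
# esxcli swiscsi nic list -d vmhba33

Rescan the adapter after binding the ports so that the additional paths are discovered.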

Multipathing and failover options

VMware ESXi offers multipath software in its kernel. This failover software, called the Native Multipathing Plug-in (NMP), contains policies for Fixed, Round Robin, and Most Recently Used (MRU) device paths. Additionally, EMC provides PowerPath Virtual Edition (PowerPath/VE) to perform I/O load balancing across all available paths. The following summary describes the relevance of each option.

NMP policies

Round Robin - Provides primitive load balancing when used with VNX arrays. However, there is no automated failback when a LUN is trespassed from one storage processor to another.

MRU - Uses the first path it detects when the host boots, and uses it as long as it remains available.

Fixed - Uses a single active path for all I/O to a LUN. vSphere 4.1 introduced a new path selection policy, VMW_PSP_FIXED_AP, which selects the array's preferred path for the LUN when VNX is set to ALUA mode. This policy offers automated failback but does not include load balancing.

Use VMW_PSP_FIXED_AP for the following reasons:

With the default VNX failovermode of Asymmetric Active/Active (ALUA), the path selected is the preferred and optimal path for the LUN, so I/O operations always use the optimal path.

Uses auto-restore or failback to assign LUNs to their default storage processor (SP) after an NDU operation. This prevents a single storage processor from owning all LUNs after an NDU.

Sends I/O down a single path. However, if there are multiple LUNs in the environment, select a preferred path for a given LUN to achieve static I/O load balancing (see the sketch that follows).
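The path policy can also be assigned from the command line. The following is a minimal sketch using the vSphere 4.1 esxcli namespace; the device identifier is a placeholder, and the syntax changes in later vSphere releases:

# esxcli nmp device list
# esxcli nmp device setpolicy --device <naa identifier> --psp VMW_PSP_FIXED_AP
# esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA_CX --psp VMW_PSP_FIXED_AP

The first command lists the devices and their current policies, the second sets the policy for a single LUN, and the third makes the policy the default for VNX devices claimed by the ALUA SATP.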

    Figure 17 VSI Path Management multipath configuration feature

Note: You can set the policies for both NMP and PowerPath by using the EMC VSI Path Management feature for VMware vSphere. For details on how to configure the above policies, refer to the EMC VSI for VMware vSphere: Path Management document available on Powerlink.


    Using EMC PowerPath/VE multipathing and failover

PowerPath provides the most comprehensive solution for multipathing I/O between a host and the VNX. It provides multiple options, from basic failover to I/O distribution across all available paths.

PowerPath is supported in FC and iSCSI (software and hardware initiator) configurations. The benefits of using PowerPath in comparison to VMware native failover are as follows:

Has an intuitive CLI that provides an end-to-end view and reporting of the host storage resources, including HBAs.

Eliminates the need to manually change the load-balancing policy on a per-device basis.

Uses auto-restore to restore LUNs to the preferred SP when it recovers, ensuring balanced load and performance.

Provides the ability to balance queues on the basis of queue depth and block size.

Note: PowerPath provides the most robust functionality. Though it requires a license, it is the recommended multipathing option for VNX.
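PowerPath/VE on ESXi is managed with the remote rpowermt CLI rather than a command run on the host itself. The following is a minimal sketch; the host name is a hypothetical example and assumes the PowerPath/VE bundle is already installed and licensed on the host:

# rpowermt display dev=all host=esxi01.example.com
# rpowermt set policy=co dev=all host=esxi01.example.com

The first command reports the paths and their state for every device; the second applies the CLAROpt (co) load-balancing policy intended for VNX/CLARiiON class arrays.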


    Multipathing considerations - NFS

Multipathing for NFS is significantly limited in comparison to SCSI multipathing options. As a result, it requires manual configuration and distribution of the I/O workload.

A highly available storage network configuration between ESXi hosts and VNX should have the following characteristics:

Does not have any single point of failure (NIC ports, switch ports, physical network switches, and VNX Data Mover network ports).

Optimally load balances the workload among the available I/O paths.

Note: VMware vSphere supports the NFSv3 protocol, which is limited to a single TCP session per network link. Even if multiple links are used, an NFS datastore uses just one physical link for the data traffic to the datastore. Higher throughput can be achieved by distributing virtual machines among multiple NFS datastores. However, there are limits to the number of NFS mounts that ESXi will support. The default number of NFS mounts is 8, with a maximum value of 64 provided through a host parameter change (NFS.MaxVolumes).
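The NFS.MaxVolumes parameter referenced in the note can be viewed and raised per host. The following is a minimal sketch using the vSphere 4.x console or vCLI syntax; when the value is raised, VMware also recommends reviewing the TCP/IP heap settings (Net.TcpipHeapSize and Net.TcpipHeapMax), which generally take effect only after a reboot:

# esxcfg-advcfg -g /NFS/MaxVolumes
# esxcfg-advcfg -s 64 /NFS/MaxVolumes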


    Elements of a multipathing configuration over NFS


Figure 18 illustrates the recommended configuration that addresses high availability and load balancing at all of these levels.

    Figure 18 Multipathing configuration with NFS

The guidelines to achieve high availability and load balancing for NFS are as follows:

Data Mover network ports, connections to switch - Link aggregation on VNX Data Movers and network switches provides N+1 fault tolerance for port failures. It also enables load balancing between multiple network paths. The switch can be configured for static LACP for Data Mover and ESXi NIC ports. The Data Mover also supports dynamic LACP.

Note: When the Cisco Nexus 1000v pluggable virtual switch is used on the ESXi hosts, configure dynamic LACP for the ESXi NIC ports.

ESXi NIC ports - NIC teaming on the ESXi hosts provides fault tolerance for NIC port failure. Set the load balancing on the virtual switch to route based on IP hash for EtherChannel.

Physical network switch - Use multiple switches for physical switch fault tolerance, and connect each Data Mover and ESXi host to both switches. If available, use Multi-Chassis Link Aggregation to span two physical switches while offering redundant port termination for each I/O path from the Data Mover and from the ESXi host.

Note: When using network switches that do not support Multi-Chassis Link Aggregation technology, use Fail-Safe Network on the VNX Data Movers instead of link aggregation, and use routing tables on ESXi instead of NIC teaming. Use separate network subnets for each network path.

    Configure multiple network paths for NFS datastores

This section describes how to build the configuration that is shown in Figure 18 on page 57.

At the VNX Data Mover level, create one LACP device with link aggregation. An LACP device uses two physical network interfaces on the Data Mover and IP addresses on the same subnet. At the ESXi level, create a single VMkernel port in a vSwitch and add two physical NICs to it. Configure the VMkernel IP address on the same subnet as the two VNX network interfaces.

Note: Separate the virtual machine network and the virtual machine storage network with different physical interfaces and subnets. This is recommended for good performance.
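The Data Mover portion of this configuration can also be scripted from the VNX Control Station instead of Unisphere. The following is a minimal sketch; the Data Mover name, port names, LACP device name, and the second IP address are hypothetical examples:

# server_sysconfig server_2 -virtual -name lacp1 -create trk -option "device=cge0,cge1 protocol=lacp"
# server_ifconfig server_2 -create -Device lacp1 -name DM2_LACP1 -protocol IP 10.244.156.102 255.255.255.0 10.244.156.255
# server_ifconfig server_2 -create -Device lacp1 -name DM2_LACP2 -protocol IP 10.244.156.103 255.255.255.0 10.244.156.255

The first command creates the LACP trunk device over two Data Mover ports, and the next two create the network interfaces on the same subnet, mirroring steps 1 through 13 below.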

Complete the following steps to build a configuration with multiple paths for NFS datastores (steps 1 through 13 are performed using EMC Unisphere, and steps 14 through 22 are performed using vSphere Client):

    To access Unisphere, complete the following steps:

1. Select the VNX platform from the Systems list box in the top menu bar. From the top menu bar, select Settings > Network > Settings for File. The Settings for File page appears.

    Figure 19 Unisphere interface

2. Click Devices, and then click Create. The Create Network Device dialog box appears.

3. In the Device Name field, type a name for the LACP device.

4. In the Type field, select Link Aggregation.

5. In the 10/100/1000 ports field, select the two Data Mover ports that are used.

6. Enable Link Aggregation on the switches, the corresponding VNX Data Mover interfaces, and the ESXi host network ports.

7. Click OK to create the LACP device.

8. In the Settings for File page, click Interfaces.

    9. Click Create. The Create Network Interface page appears.


    Figure 20 Data Mover link aggregation for NFS server

    10. Complete the following steps:

a. Type the details for the first network interface: name and IP address. (In Figure 20, the IP address is set to 10.244.156.102 and the interface name is set as DM2_LACP1.)

b. In the Device Name list box, select the LACP device that was created in step 5 on page 59.

c. Click Apply to create the first network interface and keep the Create Network Interface page open.

11. In the Create Network Interface page, type the details for the second network interface: name and IP address.

12. In the Device Name list box, select the LACP device that was created in step 6. (In Figure 20 on page 60, LACP1 is selected.)

    13. Click OK to create the second network interface.

Note: As noted in Figure 20 on page 60, for simplicity, only the primary Data Mover connections are shown. Make similar connections between the standby Data Mover and the network switches.


14. Access vSphere Client and complete steps 15 through 19 for each ESXi host.


15. Create a vSwitch for all the new NFS datastores in this configuration.

16. Create a single VMkernel port connection in the new vSwitch. Add two physical NICs to it, and assign an IP address for the VMkernel in the same subnet as the two network interfaces of the VNX Data Mover. (In Figure 21, the VMkernel IP address is set to 10.6.121.183 with physical NICs vmnic0 and vmnic1 connected to it.)

    17. Click Properties. The vSwitch1 Properties dialog box appears.

    Figure 21 vSphere networking configuration


18. Select the vSwitch, and then click Edit. The vSwitch1 Properties page appears.


    Figure 22 vSwitch1 Properties screen

19. Click NIC Teaming, and select Route based on IP hash from the Load Balancing