
VMAX3 eNAS Deployment For Microsoft Windows and SQL Server


WHITE PAPER

VMAX eNAS DEPLOYMENT FOR MICROSOFT

WINDOWS AND SQL SERVER ENVIRONMENTS

EMC VMAX Engineering White Paper

ABSTRACT

This document provides guidelines and best practices for deploying eNAS for Microsoft environments using SMB 3.0 file shares. It also covers specific application use cases of deploying and migrating Microsoft SQL Server on eNAS file storage and using eNAS File Auto Recovery for replication.

February 2017


The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to

the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or

its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA 02/17, White Paper, Part

Number H14241.2

Dell EMC believes the information in this document is accurate as of its publication date. The information is subject to change

without notice.


TABLE OF CONTENTS

EXECUTIVE SUMMARY ...........................................................................................................5

AUDIENCE ........................................................................................................................................ 5

Terminology....................................................................................................................................... 5

VMAX PRODUCT OVERVIEW ..................................................................................................6

VMAX3 and FAST Service Level Objective (SLO) ............................................................................ 6

VMAX Guest Container Infrastructure Overview ............................................................................... 7

VMAX ENAS DEPLOYMENT CONSIDERATIONS ..................................................................7

VMAX eNAS configuration options .................................................................................................. 8

Storage provisioning tasks for eNAS ................................................................................................. 8

VMAX eNAS volume and eNAS file system creation considerations ................................................ 9

VMAX eNAS to host connectivity best practices ............................................................................... 9

Number and size of eNAS devices for file systems ........................................................................... 9

Microsoft SMB 3.0 support and Continuous Availability .................................................................. 12

Microsoft Offloaded Data Transfer (ODX) ....................................................................................... 12

FILE AUTO RECOVERY WITH SRDF/S ................................................................................ 13

Overview of file auto recovery .......................................................................................................... 13

FAR management using FARM....................................................................................................... 14

MICROSOFT APPLICATION DEPLOYMENT USE CASES WITH ENAS ............................ 15

Test Overview ................................................................................................................................. 15

Test Configuration ........................................................................................................................... 16

Use case 1 – SQL Database run with change in FAST SLO ........................................................... 17

Use case 2 – Performance scalability with Data Movers ................................................................. 18

ENAS FAR USE CASES ........................................................................................................ 19

Test Overview ................................................................................................................................. 19

Use case 3 – VDM migration to another system for load balancing ................................................ 24

CONCLUSION ........................................................................................................................ 25

REFERENCES ........................................................................................................................ 25

APPENDIX I – STEP-BY-STEP STORAGE PROVISIONING USING UNISPHERE ............. 25

APPENDIX II – VMAX AND ENAS CLI .................................................................................. 29

APPENDIX III – DISCOVERING ENAS SMI-S PROVIDER WITH SCVMM .......................... 31

APPENDIX IV – FILE AUTO RECOVERY CONFIGURATION AND MANAGEMENT .......... 35


EXECUTIVE SUMMARY

The VMAX family of storage arrays – VMAX All Flash and VMAX3 – exemplifies the next major step in evolving VMAX hardware and software to meet changing industry requirements for scalability, performance, and availability. VMAX3 represents a great advancement in making complex operations such as storage management, provisioning, and setting performance goals simple to execute and manage. In 2016, EMC released three newly engineered and purpose-built VMAX All Flash products: VMAX 250, VMAX 450, and VMAX 850, which are available with F and FX software packages. The new VMAX architecture uses the latest, most cost-efficient 3D NAND flash drive technology, combining multi-dimensional scale, large write-cache buffering, back-end write aggregation, high IOPS, high bandwidth, and low latency.

In addition to traditional block storage, VMAX now offers file support using embedded network attached storage (eNAS) through a new hypervisor layer, which positions VMAX3 as a converged solution for both file and block storage. VMAX3 eNAS offers consolidated file storage across the datacenter and reduces file deployment costs by eliminating the need for separate hardware. Because VMAX3 eNAS runs directly on VMAX3 directors, it offers the highest level of reliability and availability. VMAX3 data services include Service Level Objective (SLO)-based provisioning. Dynamic host I/O limits for IOPS and bandwidth are offered on VMAX eNAS as well, making it very easy to manage performance and throughput for both block and file storage. VMAX eNAS

uses VNX file storage management features such as Automatic Volume Management (AVM), different types of network virtual

devices, and multi-protocol file access including NFS 3, NFS 4, SMB 2.0, and SMB 3.0 for Microsoft Windows using IPv4 and IPv6.

The advanced features of VMAX eNAS for Microsoft environments include offloaded data transfer (ODX), multi-path I/O (MPIO),

and jumbo frame support, which allow users to make optimal use of resources for best performance. VMAX eNAS supports data

protection for files using easy-to-schedule periodic snapshots as well as local and remote file system replication. Common eNAS

use cases include running Oracle on NFS, VMware on NFS, Microsoft SQL on SMB 3.0, home directories, file shares, and

consolidating Windows Servers.

File Auto Recovery (FAR) integrates eNAS with industry-standard VMAX remote replication using SRDF. FAR, with its manual and automatic failover capabilities, offers load balancing and migration of file-based applications between local and remote storage arrays. An enhanced version of File Auto Recovery Manager (FARM), a GUI-based application for managing FAR, was also introduced.

This white paper explains the basic VMAX design and operations with regard to storage provisioning, performance management,

and deployment best practices of file-based storage using VMAX eNAS. It also covers how VMAX eNAS simplifies the

management of file storage in a Microsoft SMB 3.0 environment, using Microsoft SQL Server examples. The paper also covers

VMAX eNAS FAR use cases for Microsoft SQL Server databases.

Note: Unless otherwise specified, this document pertains to both the VMAX All Flash and VMAX3 families of storage systems.

AUDIENCE

This white paper is intended for database and system administrators, storage administrators, and system architects who are

responsible for implementing, managing, and maintaining Microsoft Applications in SMB environments with VMAX storage systems.

Readers should have some familiarity with the EMC family of storage arrays, including EMC VMAX and VNX.

TERMINOLOGY

The following table explains important terms used in this paper.

Term Description

AVM Automatic volume management. Used in eNAS to manage volumes and file systems.

CIFS Common Internet File System. An access protocol that allows access to files and folders from Windows hosts located on the network. It is based on Microsoft’s SMB protocol.

eNAS Disk Volume eNAS volume that corresponds to a VMAX block device presented to eNAS through the appropriate masking view. eNAS uses the disk volume as the basic building block to create other types of volumes.

eNAS Metavolume

A logical volume on which an eNAS file system must be created. The metavolume provides expandable storage capacity that might be needed to dynamically expand a file system, and a means to form a logical volume that is larger than a single disk. A metavolume can include disk volumes, slice volumes, stripe volumes, or other metavolumes.

eNAS Slice Volume A volume carved out of an eNAS disk volume to create smaller volumes for manageability.

eNAS Stripe Volume A volume organized into a set of interlaced stripes on disk or slice volumes to improve volume performance.

FAR File Auto Recovery. Feature that performs synchronous replication of eNAS based file systems.

FARM File Auto Recovery Manager. A Windows-based utility that allows automated and manual failover of eNAS replicated file systems.

FAST Fully automated storage tiering (FAST) automatically moves active data to high-performance storage tiers and inactive data to low-cost, high-capacity storage tiers.

Host Initiator Group (IG) A collection of host bus adapter (HBA) ports for storage accessibility.

HYPERMAX OS

HYPERMAX OS is an open, converged storage hypervisor and operating system. It enables VMAX to embed storage infrastructure services like data mobility and data protection directly in the array. This delivers new levels of data center efficiency and consolidation by reducing footprint and energy requirements. In addition, HYPERMAX OS delivers the ability to perform real-time and non-disruptive data services.

Hypervisor A software capability that virtualizes hardware, creating and running virtual machines and hosting guests. For example, HYPERMAX OS acts as a hypervisor to create and run containers.

Masking View (MV) A construct that binds IG, PG, and SG together and allows automatic mapping and masking of storage devices to hosts for ease of storage provisioning.

Port Group (PG) A collection of VMAX front end (FA) ports used for storage provisioning for hosts.

SMB

The Server Message Block (SMB) Protocol is a network file sharing protocol. As implemented in Microsoft Windows, it is known as Microsoft SMB Protocol. The set of message packets that defines a particular version of the protocol is called a dialect. The Common Internet File System (CIFS) Protocol is a dialect of SMB. Latest version of SMB is 3.0

Storage Group (SG) A collection of VMAX devices that are host addressable. A Storage Group can be used to (a) present devices to hosts (LUN masking), (b) specify FAST Service Levels (SLOs) for a group of devices, and (c) manage grouping of devices for replication software such as SnapVX and SRDF®.

VDM Virtual Data Mover. Instance of an eNAS Data Mover that is portable and can be replicated.

VMAX Container The virtual machine created and provided by HYPERMAX OS.

VMAX CTD Cut-Through Driver. A proprietary driver that allows the VMAX hypervisor layer to access VMAX storage devices directly.

VMAX PRODUCT OVERVIEW

The EMC VMAX family of storage arrays is built on the strategy of simple, intelligent, modular storage. The VMAX incorporates a

Dynamic Virtual Matrix interface that connects and shares resources across all VMAX engines, allowing the storage array to

seamlessly grow from an entry-level configuration into the world’s largest storage array. It provides the highest levels of

performance and availability featuring new hardware and software capabilities.

The newest additions to the VMAX family—VMAX 250, 450 and 850 —deliver the latest in Tier-1 scale-out multi-controller

architecture with consolidation and efficiency for the enterprise. They offer dramatic increases in floor tile density, high capacity

flash, and hard disk drives in dense enclosures for both 2.5" and 3.5" drives, and support both block and file (eNAS) storage.

The VMAX family of storage arrays comes pre-configured from the factory to simplify deployment at customer sites and minimize

time to first I/O. Each array uses virtual provisioning to allow the user easy and quick storage provisioning. VMAX can ship as an

all-flash array with the combination of EFD (Enterprise Flash Drives) and large persistent cache that accelerates both writes and

reads even further. It can also ship as hybrid, multi-tier storage that excels in providing performance management based on SLOs.

The new VMAX hardware architecture comes with more CPU power, larger persistent cache, and a new Dynamic Virtual Matrix

dual InfiniBand fabric interconnect that creates an extremely fast internal memory-to-memory and data-copy fabric.

Figure 1 shows possible VMAX components. Refer to EMC documentation and release notes to find the most up-to-date supported

components.

Figure 1. VMAX All Flash Storage Array (1 – 8 redundant VMAX3 engines; up to 4 PB usable capacity; up to 192 FC host ports; up to 16 TB global memory, mirrored; up to 384 cores, 2.7 GHz Intel Xeon E5-2697-v2; up to 1920 2.5" drives)

VMAX3 AND FAST SERVICE LEVEL OBJECTIVE (SLO)


With VMAX3, FAST is enhanced to include both intelligent storage provisioning and performance management, using SLOs. SLOs

automate the allocation and distribution of application data to the correct data pool (and therefore storage tier) without manual

intervention. Simply choose the SLO (for example, Platinum, Gold, or Silver) that best suits the application requirements. SLOs are

tied to the expected average I/O latency for both reads and writes. Therefore, both the initial provisioning and application’s on-going

performance are automatically measured and managed based on compliance to storage tiers and performance goals. FAST

continuously samples the storage activity and every 10 minutes, if necessary, moves data at FAST’s sub-LUN granularity of

5.25MB (42 extents of 128KB). SLOs can be dynamically changed at any time. FAST continuously monitors and adjusts data

location at the sub-LUN granularity across the available storage tiers to match the performance goals provided. All this is done

automatically, within the VMAX3 storage array, without having to deploy complex application ILM1 strategies or use host resources

for migrating data due to performance needs.

VMAX GUEST CONTAINER INFRASTRUCTURE OVERVIEW

HYPERMAX OS has incorporated a lightweight hypervisor that allows Virtual Machines (VMs) to run within VMAX3. It combines

industry-leading high availability, I/O management, data integrity validation, quality of service, data security, and storage tiering with

an open application platform. HYPERMAX OS features a real-time, non-disruptive storage hypervisor that manages and protects

embedded data services (running on virtual machines) by extending VMAX3 high availability to data services that traditionally have

run outside of the array. HYPERMAX OS provides the needed infrastructure to run guest virtual machines. eNAS uses the

hypervisor layer provided by HYPERMAX OS to create and run a set of virtual machines (containers) on VMAX3 controllers. This

embedded storage hypervisor reduces external hardware and networking requirements, and delivers the highest levels of

availability with lower latency.

Each VMAX3 engine has two directors. Each director can support multiple emulations, each emulation providing a different

functionality to the storage array. Front End (FA) emulation, for example, supports host access to the storage. eNAS components

run as virtual machines within the FA emulation using allocated director resources including assigned CPU cores and memory.

These virtual machines host two elements of eNAS: software Data Movers (DM) and Control Stations (CS), and are distributed

based on the mirrored pair architecture of VMAX3 to evenly consume VMAX3 resources for both performance and capacity. The

VMAX3 proprietary Cut Through Driver (CTD) allows the Guest Operating System (GOS) of the VM to access VMAX3 storage for

its use. The GOS can be assigned Ethernet and FC I/O modules for its exclusive usage during the configuration and installation

process. The Controls Station and Data Movers use an internal network to communicate with each other. Figure 2 shows the

components of eNAS and their interconnections on a single-engine system.

Figure 2. eNAS architecture for single-engine VMAX3

VMAX ENAS DEPLOYMENT CONSIDERATIONS

Embedded NAS (eNAS) extends the value of VMAX3 to file storage by including vital enterprise features including FAST Service

Level Objective-based provisioning and performance management, and host I/O limits. VMAX3 with eNAS is a multi-controller NAS

solution, designed for customers requiring consolidation for block and file storage in mission-critical environments. eNAS supports

1 ILM is Information Lifecycle Management.


equivalent VNX2 NAS capabilities, features, and functionality as found on the VNX2 File operating environment. Refer to VNX2

documentation on support.emc.com for details.

VMAX ENAS CONFIGURATION OPTIONS

The default minimum configuration for eNAS on VMAX 100K includes two Control Station VMs and two Data Mover VMs. A

maximum of eight Data Movers, seven active and one standby, can be configured for VMAX 200K and VMAX 400K models.

Logical cores, memory and number of I/O modules for VMs come pre-configured from the factory. For host connectivity, the

following I/O modules are supported: 4-port 1GbE BaseT, 2-port 10GbE BaseT, and 2-port 10GbE optical. Refer to eNAS support

matrix for supported configurations. eNAS Data Mover on VMAX 200K and 400K can have up to six Ethernet I/O modules, while

VMAX 100K can have up to four Ethernet I/O modules. Note that each I/O module occupies a slot that could be otherwise used by

an FC module for block connectivity to the host. It is important to find a balance between file and block usage on the VMAX3

system while determining eNAS configuration. Table 1 shows eNAS configurations for various VMAX3 models.

Table 1. eNAS Configurations for VMAX Family

                               VMAX 100K     VMAX 250F/FX   VMAX 200K/450F/FX   VMAX 400K/850F/FX
Data Mover (DM)
  Logical Cores                4             6              5                   8
  Memory (GB)                  6             24             24                  24
  I/O Modules                  ≤ 4                          ≤ 6                 ≤ 6
Control Station (CS)
  Logical Cores                1             1              1                   1
  Memory (GB)                  4             4              8                   8
  I/O Modules                  None Req'd    None Req'd     None Req'd          None Req'd
Max DMs supported              2             4              4                   8

Note: Check the EMC VMAX eNAS support matrix for the latest information regarding supported configurations.

eNAS comes preconfigured with its boot and control volumes in their own storage group (SG). It also has a preconfigured port

group (PG) and an initiator group (IG) so that all the administrator has to do is to create volumes for user data in a storage group

and mask the storage group with eNAS Data Movers.

For load balancing and high availability, CS and DM instances are distributed evenly across director boards based on the system

configuration. Configuration details discussed in this section are for information purposes only.

STORAGE PROVISIONING TASKS FOR ENAS

VMAX3 comes pre-configured with data pools and a Storage Resource Pool (SRP). With eNAS, even the boot and control volumes

are pre-configured. During configuration, you need to create user devices for eNAS, create file systems on those devices, and

export file systems to the host using CIFS/SMB protocol.

You can create file systems in a number of ways:

1. Use Unisphere for VMAX3 and Unisphere for VNX UI Intuitive Provisioning Wizards.

1.1 On VMAX Unisphere, use the File Dashboard to provision storage for eNAS. Provide an appropriate storage group name, select an SLO for the storage group, specify the number of devices, and select a size for each device.

1.2 eNAS will discover the storage group created in the step above as a mapped storage pool. Launch Unisphere for VNX,

create file systems from the pool, and export them.

2. Use CLI (Refer to Appendix II for steps)

2.1 Use Solutions Enabler CLI to create devices and a masking view on VMAX3²

2.2 Use eNAS Control Station CLI to create a file system and export it over CIFS. See Appendix II for instructions for using

CLI to create file systems.

3. Use EMC SMI-S and Microsoft SCVMM. Refer to Appendix III for more details about discovering the eNAS provider.

2 Refer to Appendix II for steps to provision storage for eNAS using Unisphere and eNAS CLI
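Before registering the eNAS SMI-S provider in SCVMM (option 3 above), it can help to confirm from the SCVMM server that the provider is reachable. The PowerShell below is an illustrative sketch only: the provider address is a placeholder, and the ports assume the common CIM-XML defaults (5988 for HTTP, 5989 for HTTPS).

# Illustrative pre-check before SCVMM discovery; $provider is a placeholder for the SMI-S provider address.
$provider = 'smis-provider.example.com'
Test-NetConnection -ComputerName $provider -Port 5989   # HTTPS CIM-XML port (typical default)
Test-NetConnection -ComputerName $provider -Port 5988   # HTTP CIM-XML port (typical default)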


VMAX eNAS VOLUME AND eNAS FILE SYSTEM CREATION CONSIDERATIONS

eNAS uses VMAX3 thin devices and implements an optimized volume management layout for the Storage Group created for eNAS

use. The eNAS volume management layer provides two options—Automatic Volume Management (AVM) and Manual Volume

Management (MVM)—to define an optimal volume layout for file systems to meet different application workload profiles.

Using Automatic Volume Management (AVM)

Unisphere for VNX supports AVM to simplify the selection of striping, concatenation, slicing, and volume creation optimized by

workload for ease of storage management for eNAS. These are the elements of AVM:

Mapped storage pools: VMAX3 storage groups with different SLO and workloads based on their definition at the VMAX3

Block level.

Auto extend: A file system created with AVM can be configured to automatically extend when it reaches a certain predefined

threshold.

Striping: When the storage administrator requests a file system of a certain size, the eNAS system creates a striped volume of the required size across a set of devices in the mapped storage pool, or creates an eNAS metavolume as necessary from the eNAS storage pool. The default stripe size for system-defined storage pools is 256 KB.

The algorithm that AVM uses looks for a set of eight eNAS disk volumes. If the set of eight disk volumes is not found, then the

algorithm either looks for a set of four or two or a one-disk volume based on availability. AVM stripes the disk volumes together, if

the disk volumes are all of the same size. If the disk volumes are not the same size, AVM creates a metavolume on top of the disk

volumes. AVM then adds the stripe or the metavolume to the storage pool.

Note: For best performance results, create the eNAS storage group with a multiple of eight devices. By default, AVM stripes file systems across eight devices if eight or more volumes are present in the storage group; fewer than eight devices per file system will degrade performance. For efficiency, create file systems that will be approximately 85% full with user data. If a file system needs to be extended at a later time, new devices can be added to the storage pool.

Using Manual Volume Management (MVM)

Although AVM is a simple and preferred way to create volumes and file systems, automation can limit control over the location of

the storage allocated to a file system. Manual volume management allows the administrator to create and aggregate different

volume types into usable file system storage that meets specific configuration needs.

Note: When using MVM, for best performance results, stripe the file system across a multiple of eight devices.

VMAX ENAS TO HOST CONNECTIVITY BEST PRACTICES

When planning host to eNAS connectivity for performance and availability, connect at least two physical ports from each Data

Mover to the network. Similarly, connect at least two ports from each host to the network as well. In this way, even in case of a

component failure, eNAS can continue to service the host I/O.

For best performance and availability, use multiple file systems and spread them across all the Data Movers serviced by different

Ethernet interfaces. Each share created on the Data Mover is accessible from all ports on the Data Mover; therefore, it is essential

that the host has connectivity to all the ports of the Data Mover. With SMB 3.0, the host can take advantage of load balancing and

fault tolerance if multiple Ethernet ports are available on the host. For non-SMB 3.0 environments requiring load balancing or high availability, before creating an IP interface, create virtual network devices on the selected Data Movers, choosing a type of Ethernet channel, link aggregation, or Fail Safe Network (FSN). VNX documentation provides information about configuring virtual

Ethernet devices.
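As a quick validation from a Windows Server 2012 R2 host, the illustrative PowerShell below maps a share from an eNAS CIFS server and confirms that SMB multichannel is spreading connections across more than one interface. The share name and drive letter are example values; the cmdlets are from the built-in SmbShare module.

# Example host-side check that SMB 3.0 multichannel is using multiple NICs.
Get-SmbClientNetworkInterface                                       # interfaces the SMB client can use (link speed, RSS/RDMA capability)
New-SmbMapping -LocalPath 'S:' -RemotePath '\\cifs1\OLTP1_Data1'    # map a share exported by a Data Mover CIFS server
Get-SmbMultichannelConnection                                       # after some I/O, multiple rows per server indicate multichannel is active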

NUMBER AND SIZE OF ENAS DEVICES FOR FILE SYSTEMS

VMAX lets you create thin devices with a capacity ranging from a few megabytes to multiple terabytes. With the wide striping in the

storage resource pool that VMAX provides, you might be tempted to create only a few very large host devices. However, you

should use a reasonable number of eNAS devices and sizes, preferably in multiples of eight in each storage group for eNAS

consumption. The reason is that each eNAS device creates its own I/O queue at the data mover that can service a limited number

of I/O operations simultaneously. A high level of database activity will generate more I/O than the queues can service, resulting in

artificially long latencies if only a few large devices are used. Another benefit of using multiple devices is that internally, VMAX can

use more parallelism when operations such as FAST data movement, local and remote replications take place. By performing

parallel copy operations simultaneously, the overall activity takes less time. Figure 3 shows SQL performance with different numbers of devices in each file system: in this test, SQL batch requests per second rose from 857 with one device per file system to 1,546 with 8 devices and 1,697 with 16 devices, and then dropped to 1,369 with 32 devices.


Figure 3. Application performance with different number of devices in each FS

HOST I/O LIMITS AND eNAS

The Host I/O Limits quality of service (QoS) feature was introduced in the previous generation of VMAX arrays. It offers VMAX

customers the option to place specific IOPS or bandwidth limits on any storage group, regardless of the SLO assigned to that

group. The I/O limit set on a storage group provisioned for eNAS will be applied to all file systems carved out of that storage group

cumulatively. If the Host I/O limits set at the storage group level need to be transparent to the corresponding eNAS file system,

there must be a one to one correlation between them. Assigning a specific Host I/O limit for IOPS, for example, to a storage group

(file system) with low performance requirements can ensure that a spike in I/O demand will not saturate its storage, cause FAST

inadvertently to migrate extents to higher tiers, or overload the storage, affecting performance of more critical applications. Placing

a specific IOPs limit on a storage group will limit the total IOPs for the storage group, but it does not prevent FAST from moving

data based on the SLO for that group. For example, a storage group with Gold SLO may have data in both EFD and HDD tiers to

satisfy SLO compliance, yet be limited to the IOPS set by Host I/O Limits.

USING VIRTUAL DATA MOVERS (VDM)

eNAS supports Virtual Data Movers (VDM). VDMs are used for isolating Data Mover instances within a secure logical partition.

VDMs are file system containers that operate independently of other VDMs in the same physical Data Mover. A VDM is a security mechanism as well as an enabling technology that simplifies the DR failover process. It maintains file system context information (metadata) to avoid rebuilding these structures on failover. File systems can be mounted beneath VDMs that are logically isolated from each other. VDMs can be used to support multiple LDAP domains within a customer

environment. VDMs can also be used to rebalance file loads across physical Data Movers by moving VDMs and their underlying file

systems between Data Movers. VDMs are important for in-company multi-tenancy, as well as ease of use, when deploying

replication solutions.

DATA PROTECTION OF ENAS FILE SYSTEMS

File System Snapshots using SnapSure

eNAS provides additional levels of protection at the file system level using SnapSure, which provides point-in-time, logical images

of a Production File System (PFS) called checkpoints. Checkpoints can be read-only or read-write. With SnapSure, you can restore

a PFS to a point in time from a read-only or writeable checkpoint. Create checkpoints using the data protection tab in Unisphere for

VNX. In Unisphere, you will select the file system, the checkpoint name, and the pool to be used for storing the checkpoints. Use

Unisphere for VNX to schedule automated snapshots, allowing at least 15 minutes between snapshots. Figure 1 shows the

Unisphere screen used to create a snapshot (checkpoint).


Figure 4. File system snapshots using Snapsure

File System Replication using Replicator

Use eNAS Replicator to replicate data at the file system level. Asynchronous local and remote replication is supported over the IP

network. eNAS Replicator can perform continuous file system replication, a one-time file system copy, or VDM replication.

The destination for the local replica can be the same as the Data Mover (loopback replication), or it can be another Data Mover on

the same eNAS system. A replication session creates a point-in-time copy of the source object and periodically transfers it to the

destination to make sure that the source and destination are consistent. You can create up to four replication sessions for each file

system. Configure replication using Unisphere for VNX (preferred) or eNAS software control station CLI. To set up a replication

session in Unisphere, specify the replication name, destination system, destination pool, and synchronization interval, as shown in

Figure 5.

Figure 5. File system replication using Replicator


MICROSOFT SMB 3.0 SUPPORT AND CONTINUOUS AVAILABILITY

eNAS supports SMB 3.0. One of the most important Windows Server 2012 capabilities for storage is the new CIFS functionality delivered with SMB 3.0, particularly the Continuous Availability (CA) feature. The CA feature of SMB 3.0

allows Windows hosts to persistently access SMB shares without losing the session state during Data Mover failover. The CA

feature is disabled by default on eNAS. It can be enabled and configured only through eNAS CLI (see Appendix II for detailed

steps).

When CA is enabled on a share, the persistent handles option lets a CIFS server save specific metadata associated with an open

file handle on the disk. When a Data Mover failover occurs, the new primary Data Mover reads the metadata from the disk before

starting the CIFS service. The host (CIFS client) will re-establish its session to the Data Mover and attempt to re-open its files. The

Data Mover will return the persistent handle to the client. The end result is that there is no impact to the application accessing the

open files as long as Data Mover failover time does not exceed the application timeout.

This capability allows CIFS connections to endure client and file server failover processes. SMB 3.0 supports Multipath I/O (MPIO),

in which multiple TCP connections can be associated with a given SMB session. If one TCP connection is broken due to network

failure, the user session can still continue using the remaining active TCP connections. MPIO provides transparent network failover

and load balancing without any additional configuration. CIFS can be used as a robust connectivity methodology for SQL,

SharePoint and Hyper-V. Windows hosts can take advantage of multipath and high availability by configuring multiple Ethernet

ports on hosts and eNAS Data Movers.
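From the Windows host, the dialect negotiated for each connection to the eNAS CIFS server can be checked with the illustrative PowerShell below; a dialect of 3.0 or later is required for the Continuous Availability behavior described above. Server and share names shown in the output will be those of your environment.

# Confirm the SMB dialect negotiated with the eNAS CIFS server (3.0 or later supports CA).
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect, NumOpens | Format-Table -AutoSize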

MICROSOFT OFFLOADED DATA TRANSFER (ODX)

eNAS supports Microsoft’s offloaded data transfer (ODX) feature on Windows Server 2012. Instead of using buffered read and

buffered write operations, Windows ODX starts the copy operation with an offload read and retrieves a token representing the data

from the storage device. Then it uses an offload write command with the token to request data movement from the source disk to

the destination disk. The copy manager of eNAS performs the data movement according to the token. Use the Windows ODX

feature to move large files or data through the high-speed storage network without any load on the IP network or host resources.

Windows ODX significantly reduces client-server network traffic and CPU time usage during large data transfers, because all the

data movement is at the backend storage network, as seen in Figure 6. ODX can be used in

virtual machine deployment, massive data migration, and tiered storage device support. It can lower the cost of physical hardware

deployment through the ODX and thin provisioning storage features.

Figure 6. Microsoft Offloaded Data Transfer with eNAS
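ODX is enabled by default on Windows Server 2012 and later; per Microsoft's ODX documentation, it is controlled on the host by the FilterSupportedFeaturesMode registry value (0 or absent means enabled, 1 means disabled). The PowerShell below is a hedged sketch for checking and, if needed for troubleshooting, disabling copy offload on the host.

# Check whether copy offload (ODX) is disabled on this host; an absent value means ODX is enabled.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem'
(Get-ItemProperty -Path $key -Name FilterSupportedFeaturesMode -ErrorAction SilentlyContinue).FilterSupportedFeaturesMode

# Disable ODX for troubleshooting (set the value back to 0 to re-enable).
Set-ItemProperty -Path $key -Name FilterSupportedFeaturesMode -Value 1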

JUMBO FRAME CONFIGURATION

For best performance, set the MTU to 9000 (jumbo frames) on the Data Mover Ethernet interfaces as well as on the Windows host Ethernet interfaces. Ensure that all intermediate Ethernet switches support jumbo frames. Figure 7 shows jumbo frame settings

at the Windows host Ethernet interface and at the eNAS Data Mover.


Figure 7. Jumbo frame configuration using MTU settings
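On the Windows host side, jumbo frames are usually enabled through the NIC driver's advanced properties. The PowerShell below is an illustrative sketch: the adapter name and the Data Mover IP address are placeholders, and the "*JumboPacket" keyword with a 9014-byte value is a common driver convention that varies by vendor.

# Inspect and set the jumbo frame size on the host NIC used for eNAS traffic (names/values vary by driver).
Get-NetAdapterAdvancedProperty -Name 'Ethernet 2' -RegistryKeyword '*JumboPacket'
Set-NetAdapterAdvancedProperty -Name 'Ethernet 2' -RegistryKeyword '*JumboPacket' -RegistryValue 9014

# Verify the end-to-end MTU to the Data Mover interface: 8972 data bytes + 28 bytes of ICMP/IP headers = 9000.
$dataMoverIp = '192.168.10.50'        # placeholder for the Data Mover interface IP address
ping.exe $dataMoverIp -f -l 8972      # -f: do not fragment; the reply must come back unfragmented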

FILE AUTO RECOVERY WITH SRDF/S

This section covers File Auto Recovery architectural overview and deployment best practices. It also provides use cases of using

File Auto Recovery on eNAS storage for Microsoft applications.

OVERVIEW OF FILE AUTO RECOVERY

File Auto Recovery (FAR) allows manual failover or migration of a virtual Data Mover (VDM) from a source eNAS system to a

destination eNAS system. This failover or move leverages block-level Symmetrix Remote Data Facility (SRDF) synchronous

replication for zero data loss in the event of an unplanned outage. This feature consolidates the VDM, its file systems, file system checkpoint schedules, CIFS servers, networking, and VDM configuration into one storage pool per VDM, which is synchronously replicated to the secondary site. It enables recovery of file servers at the secondary site when the source is unavailable. An option is also provided to recover and clean up the source system and make it ready as a future destination for a failback operation. A VDM-level DR solution does not require a dedicated standby Data Mover. Two sites can act as standby sites for each other. In case of failover, the operational Data Mover takes on the additional load of the failed site. Figure 8 shows the FAR configuration.

Automated and manually initiated failover operations can be performed using EMC File Auto Recovery Manager (FARM). FARM

allows monitoring of sync-replicated VDMs and triggers automatic failover based on Data Mover, File System, Control Station, or IP

network unavailability at the source site. FARM also allows manually initiated failover and recovery of sync-replicated VDMs in the

event of planned maintenance at the primary site. FARM must be installed on a Windows system with network access to eNAS

Control Station (CS) and Data Mover (DM) network interfaces to be monitored.

Figure 8. File Auto Recovery configuration setup


FAR DEPLOYMENT

FAR configuration requires the following steps:

1. Install and configure source and destination eNAS systems.

2. Configure, map, and mask additional eNAS control LUNs required for FAR.

3. Configure control station-to-control station communication.

4. Enable FAR, which will also create NAS_DB mirror between source and destination eNAS systems.

5. Configure a FAR-replicable VDM.

For more details regarding setup and configuration of eNAS File Auto Recovery, please refer to document EMC VMAX3 Family

Embedded NAS File Auto Recovery with SRDF/S.

Below are some of the considerations for FAR deployment:

1. Interfaces attached to a VDM are for exclusive use by that VDM and cannot be used by the Data Mover. The Data Mover should have its own interfaces configured through which it can reach DNS, NTP, and Domain Controller servers. This is especially important if CIFS shares are configured.

2. For faster failover and cleanup, keep Data Movers in a healthy state. If a Data Mover has failed over to its local standby, it should be manually restored to the normal state as soon as possible.

3. Because FARM operates in an active/passive mode, FARM no longer actively monitors a VDM that failed over or was reversed to secondary site. After the VDM Restore operation, choose Configure > VDM Configurations > Storage Settings > VDM, and select the VDM from the list. This action ensures that the VDM is monitored again by FARM.

4. VDMs are failed over in a sequential fashion and each VDM takes at least three minutes to fail over. Consider this when

estimating failover time and total outage in case of an unplanned outage.

FAR MANAGEMENT USING FARM

FARM allows manual failover management and automatic failover management by setting priorities on VDMs. Figure 9 shows

manual Failover, Restore and Reverse role operations with SRDF using FARM. Figure 10 shows setting VDM priorities for

automatic FARM operations.

Figure 9. FARM manual operations


Figure 10. Setting VDM priorities for FARM automatic failover operation

FAR BEST PRACTICES

A VDM provides an abstraction to consolidate multiple file systems and the relevant NAS components. Because FAR operates on VDMs, the number of components and the size of the file systems abstracted by a VDM determine how much data SRDF replicates and how long it takes to recover all the file systems. When deploying FAR, configure VDMs with the importance of the applications and their recovery needs in mind. When using automatic failover, assign high priority to critical file systems to minimize RTO for mission-critical applications. For a reasonable failover time, it is recommended not to have more than six or seven VDM sync sessions per system. Each replicated VDM should not use more than eight VMAX devices in its storage pool; this helps keep the SRDF group at a manageable size. As with any other DR or load-balancing deployment, periodic testing of the overall infrastructure and file recovery will ensure that the secondary site has enough resources to take on the additional load. The eNAS pool (the VMAX storage group for eNAS) should have sufficient space to hold both the file systems and their checkpoints (snaps), because snaps must reside in the same pool (storage group) as the file systems.

FAR uses SRDF/S and leverages the industry-standard reliability and scalability available on VMAX3. FAR requires an initial synchronization between the source and destination sites, which leverages the block copy efficiency of SRDF. Once the SRDF groups are synchronized, there is minimal performance impact from this zero-data-loss file auto recovery solution. Figure 11 shows that FAR causes no noticeable performance impact after the initial sync is done.

Figure 11. File auto recovery performance impact

MICROSOFT APPLICATION DEPLOYMENT USE CASES WITH ENAS

This section covers examples of using Microsoft SQL Server on eNAS storage with SLO management. It also covers use cases

for file recovery using VMAX3 eNAS FAR.

TEST OVERVIEW

Test use cases

These use cases are described in this section:


1. Single database performance using different VMAX3 FAST SLOs for the SQL Server Data files.

2. Single database performance using different numbers of eNAS Data Movers for the SQL Server Data files.

General test notes:

OLTP1 was configured to run a 90/10% read/write ratio OLTP workload derived from an industry standard. No special

database tuning was done as the focus of the test was not on achieving maximum performance, but rather on comparative

differences of a standard database workload.

DATA and LOG Storage File Systems were created from a single VMAX3 storage group for ease of provisioning and

performance management. A single storage group of eight 200GB devices was used for data and log file systems.

Data collection: storage performance metrics were gathered using Solutions Enabler and Unisphere. Host performance statistics were collected using Windows Perfmon (see the sketch below).
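For reference, the host-side metrics cited in these tests can be captured with standard Windows performance counters. The illustrative PowerShell below assumes a default SQL Server instance and Windows Server 2012 R2 counter names, and writes the samples to a log file for later analysis.

# Illustrative Perfmon collection: SQL transaction rate plus SMB client latency to the eNAS shares.
$counters = '\SQLServer:SQL Statistics\Batch Requests/sec',
            '\SMB Client Shares(*)\Avg. sec/Read',
            '\SMB Client Shares(*)\Avg. sec/Write'
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240 |
    Export-Counter -Path 'C:\PerfLogs\oltp1_run.blg' -FileFormat BLG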

Figure 12. Test bed configuration details for OLTP SQL Application with eNAS CIFS export

TEST CONFIGURATION

Database configuration details

The following tables show the environment that was deployed for all use cases. Table 2 shows the VMAX3 storage and eNAS

environment, Table 3 shows the host environment, and Table 4 shows the database’s storage configuration. Table 5 shows SQL

Server database layout details. Please refer to Figure 12 for test bed configuration details.

Table 2. VMAX3 environment

Configuration aspect Description

Storage array VMAX 400K

HYPERMAX OS 5977.596

Drive mix (excluding spares) 60 x 200GB-EFDs - RAID5 (3+1)

240 x 300GB-15K HDD - RAID1

96 x 1TB-7K HDD - RAID6 (6+2)

eNAS version 8.1.4-53

eNAS H/W configuration

Component   Memory (GB)   Cores   Network
CS (2)      4             2       1 GbE
DM (2)      24            16      10 GbE x 2

Table 3. Host environment

Configuration aspect Description

Microsoft SQL Server SQL Server 2014 Enterprise Edition 64-bit

Windows Windows Server 2012 R2 64-bit

Multipathing EMC Powerpath 5.7 SP4 64-bit

Host 1 x Cisco C240, 96 GB memory

Table 4. Database configuration

Database        LUN layout (thin devices)    SRP       Start SLO
Name: OLTP1     DATA: 3 x 2 TB thin LUNs     Default   Gold
Size: 1.2 TB    LOG: 1 x 2 TB thin LUNs      Default   Gold

Table 5. SQL Database layout details

Database OLTP1 (SQL Server file groups: FIXED_FG, GROWING_FG, SCALING_FG)

Mount point            SQL Server Data files                                             Total SQL file sizes
\\cifs1\OLTP1_Data1    MSSQL_OLTP_root.mdf, Fixed_1.ndf, Growing_1.ndf, Scaling_1.ndf    378 GB
\\cifs2\OLTP1_Data2    Fixed_2.ndf, Growing_2.ndf, Scaling_2.ndf                         370 GB
\\cifs1\OLTP1_Data3    Fixed_3.ndf, Growing_3.ndf, Scaling_3.ndf                         370 GB
\\cifs2\OLTP1_Logs     OLTP1_log.ldf                                                     200 GB
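For reference, the sketch below shows the general T-SQL pattern (run here through Invoke-Sqlcmd) for placing database files directly on the eNAS SMB shares listed in Table 5. The database name, file names, and sizes are illustrative and do not reproduce the exact OLTP1 layout; the SQL Server service account must have Full Control on the shares and the underlying file systems.

# Illustrative only: create a small database with data and log files on the eNAS SMB shares from Table 5.
$tsql = @"
CREATE DATABASE OLTP_Demo
ON PRIMARY
  (NAME = OLTP_Demo_data1, FILENAME = '\\cifs1\OLTP1_Data1\OLTP_Demo_data1.mdf', SIZE = 10GB),
  (NAME = OLTP_Demo_data2, FILENAME = '\\cifs2\OLTP1_Data2\OLTP_Demo_data2.ndf', SIZE = 10GB)
LOG ON
  (NAME = OLTP_Demo_log, FILENAME = '\\cifs2\OLTP1_Logs\OLTP_Demo_log.ldf', SIZE = 4GB);
"@
Invoke-Sqlcmd -ServerInstance 'localhost' -Query $tsql   # requires the SQLPS/SqlServer PowerShell module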

USE CASE 1 – SQL DATABASE RUN WITH CHANGE IN FAST SLO

Objective:

The purpose of this test case is to demonstrate how database performance can be controlled by changing the SLO on a Storage

Group used for SQL Server data files residing on CIFS file systems.

Test case execution steps:

1. Run an OLTP workload on the OLTP1 SQL Server database with the SQL Server data file and transaction log storage group on the Gold SLO. Run the test for four hours. At the end of the test, note the SQL Server database response time and SQL Batch Requests/sec.

2. Change the SLO for the SQL Server storage group to Platinum and gather performance statistics. Repeat the test for the Diamond SLO.

Test results:

The chart in Figure 13 shows the test results of Use Case 1, including the database transaction rate as measured in SQL batch requests per second and the SQL Server database response time (in milliseconds). Response time and batch requests per second both show incremental improvement as the SLO is changed from Gold to Diamond.


Figure 13. SQL performance statistics as a direct effect of changes in SLO for SQL storage group used by eNAS

VMAX3 promoted active data extents to high performance storage tiers, including more EFD capacity, as the SLO changed from

Gold to Platinum. Therefore, the transaction rate increased. I/O latencies were reduced with more EFD allocations. With Gold SLO,

SQL Server database experienced an average latency of 11 ms which improved to 3 ms with Platinum SLO and to 2 ms with

Diamond, which includes eNAS latency overhead. The corresponding transaction rate increased from 415 with the Gold SLO to

1,378 with the Platinum SLO, and to 1,546 with the Diamond SLO.

USE CASE 2 – PERFORMANCE SCALABILITY WITH DATA MOVERS

Objective:

This test demonstrates near-linear performance scalability as the number of Data Movers is increased on an eNAS system.

Test case execution steps:

1. On VMAX3, set the SQL Server data files storage group and SQL Server transaction storage group SLO levels to Diamond.

2. Create file systems for data and logs and mount them from a single Data Mover.

3. Run the OLTP workload and gather performance statistics.

4. Repeat steps 2 and 3 with two and three Data Movers. Ensure that file systems are evenly distributed across Data Movers for

each run.

Test results:

Figure 14 shows SQL Server Batch Requests/sec and average SQL response time for the same database with one, two, and three Data Movers. Batch Requests/sec grew from 947 with one Data Mover to 1,615 with two and 1,940 with three, while average response time stayed fairly constant at roughly 1.0 to 1.3 ms. As the chart shows, eNAS provides almost linear performance scaling while maintaining a fairly constant average response time. Since the backend storage and the amount of VMAX3 cache remained constant for all three configurations, they remain the limiting factor in scaling.


Figure 14. SQL Server Performance and Scaling with Number of Data Movers

eNAS FAR USE CASES

This section covers VMAX3 eNAS FAR use cases.

TEST OVERVIEW

Test use cases

These use cases are described in this section:

1. Planned maintenance at the primary site.

2. Unplanned VDM failover from primary to secondary site.

3. VDM migration to another system for load balancing

General test notes:

Primary and secondary eNAS sites were set up, and SRDF groups were configured for synchronous replication for FAR.

Applications were configured on eNAS SMB shares and VDMs were configured to manage FAR use cases.


Figure 15. Microsoft SQL Server FAR Configuration

TEST CONFIGURATION

Database configuration details

The following tables show the environment that was deployed for all use cases. Table 6 shows the VMAX3 storage and eNAS environment used for FAR, Table 7 shows the host environment, and Table 8 shows the eNAS VDM setup. Please refer to Figure 15 for test bed configuration details. The FARM GUI is used wherever possible for FAR management in this section. Appendix IV describes the eNAS CLI commands that can be used for FAR management.

Table 6. VMAX3 environment

Configuration aspect Description

Storage array VMAX 400K (R1 and R2)

HYPERMAX OS 5977.691.684 (5977 Q4 2015 SR)

eNAS version 8.1.7-70

eNAS H/W configuration

Component   Memory (GB)   Cores   Network
CS (2)      4             2       1 GbE
DM (2)      24            16      10 GbE x 2

FARM Version 3.0.70

Table 7. Host environment

Configuration aspect Description

Microsoft SQL Server SQL Server 2014 Enterprise Edition 64-bit

Windows Windows Server 2012 R2 64-bit

Multipathing EMC Powerpath 5.7 SP4 64-bit

Host 1 x Cisco C240, 96 GB memory


Table 8. eNAS environment

VDM 1
  Storage Aspect:     NAS Storage Group (NAS_Data): 8 devices for each storage group
  NAS Aspect:         eNAS Pool – SQL Pool (NAS_Data1)
  Application Aspect: MS SQL Server Data and Logs

DM 2
  Storage Aspect:     RDF Group 101 (only one RDF group for all VDMs)
  NAS Aspect:         SQL FS1: Vdm1_fs1 (data), Vdm1_fs2 (logs)

USE CASE 1 – PLANNED MAINTENANCE AT THE PRIMARY SITE

Objective:

The purpose of this test case is to understand how FAR can be used in the event of planned maintenance at the primary site.

Test case execution steps:

1. Gracefully shut down the application running on eNAS at the primary site.

2. On the application host, un-mount/disconnect SMB shares mounted from eNAS.

3. Shut down the AFM service on FARM if it is running.

4. Use FARM GUI to fail over the VDM to the secondary site using the Reverse operation.

Detailed execution steps:

1. Detach or gracefully shut down the SQL server databases running on eNAS on the primary site prior to planned maintenance of the site.

2. Disconnect SMB shares mounted from eNAS.

3. As shown in Figure 16, launch the FARM application and shut down the FARM service if it is running. The service state should appear as “Stopped” at the end of this step. Once the FARM service has been shut down, select one or more desired VDM sessions and execute the Reverse operation.

4. As shown in Figure 17, confirm the execution of the Reverse operation.

5. Monitor the completion of the Reverse operation.

6. Once the Reverse operation is successful, mount the SMB shares back on the application host using the original share name and SMB server IP address (see the PowerShell sketch after these steps).

7. Restart the application.
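The host-side share handling in steps 2 and 6 can be scripted; the PowerShell below is an illustrative sketch using example share and drive names. Because the VDM carries its network interfaces to the secondary site, the share is remapped with the same server name and IP address after the Reverse operation completes.

# Before the FARM Reverse operation: remove the mapping to the eNAS share (example names).
Remove-SmbMapping -LocalPath 'S:' -Force

# ...perform the Reverse operation in FARM and wait for it to complete...

# After failover: remap the share; the SMB server name and IP address are unchanged.
New-SmbMapping -LocalPath 'S:' -RemotePath '\\cifs1\OLTP1_Data1'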

Figure 16. AFM Reverse operation for planned failover


Figure 17. Confirm Reverse operation

2017-01-13 13:50:36.958 - "Reverse SyncRep Session session1" is running.

2017-01-13 13:50:44.735 - Detected Active Control Station(Primary): 10.108.244.21 22

2017-01-13 13:50:49.577 - Detected Active Control Station(Secondary): 10.108.201.244 2022

2017-01-13 13:50:49.606 - VDM Prepared Reverse VDM_1 from 10.108.244.21 to 10.108.201.244.

2017-01-13 13:51:14.747 - Now doing precondition check... done: 18 s

2017-01-13 13:51:25.772 - Now doing health check... done: 12 s

2017-01-13 13:51:27.780 - Now cleaning local... done: 2 s

2017-01-13 13:51:27.786 - Service outage start......

2017-01-13 13:51:27.797 - INFO: In case the 'turning down remote network interface(s)' fail, refer to the CCMD 26317029389 to access the file

systems and/or ckpt file systems from the client.

2017-01-13 13:51:37.830 - Now turning down remote network interface(s)... done: 10 s

2017-01-13 13:51:37.841 - INFO: In case the SRDF switch failure, refer to the CCMD 26317029390 for remounting R1's file systems, checkpoint file

systems.

2017-01-13 13:51:55.885 - Now switching the session (may take several minutes)... done: 18 s

2017-01-13 13:52:17.933 - Now importing sync replica of NAS database... done: 22 s

2017-01-13 13:52:22.947 - Now creating VDM... done: 5 s

2017-01-13 13:52:22.951 - Now importing VDM settings... done: 0 s

2017-01-13 13:52:24.960 - Now mounting exported FS(s)/checkpoint(s)... done: 2 s

2017-01-13 13:52:26.968 - Now loading VDM... done: 1 s

2017-01-13 13:52:26.972 - Now turning up local network interface(s)... done: 1 s

2017-01-13 13:52:26.975 - Service outage end: 59 s

2017-01-13 13:52:26.979 -

2017-01-13 13:52:28.986 - Now mounting unexported FS(s)/checkpoint(s)... done: 2 s

2017-01-13 13:52:28.990 - Now importing schedule(s)... done: 0 s

2017-01-13 13:52:48.036 - Now unloading remote VDM/FS(s)/checkpoint(s)... done: 19 s

2017-01-13 13:52:56.796 - Now cleaning remote... done: 9 s

2017-01-13 13:52:56.799 - Elapsed time: 121s

2017-01-13 13:52:56.803 - done

2017-01-13 13:52:56.845 - VDM VDM_1 Reverse OK.

2017-01-13 13:53:06.568 - Configuration Updated.

2017-01-13 13:53:07.118 - "Reverse SyncRep Session session1" completed.

Figure 18. Monitor reverse operation log on FARM

Test results:

AFM with FAR on VMAX3 eNAS allows seamless maintenance of the primary site with minimal impact on the application. As soon as the VDM is migrated to the secondary site, the application can be restarted without any need for SMB share IP address changes or further recovery.

USE CASE 2 – UNPLANNED VDM FAILOVER FROM PRIMARY TO SECONDARY

Objective:

The purpose of this test case is to understand how FAR can be used in the event of unplanned failover from primary to secondary

site.

Test case execution steps:

1. When the primary site is not reachable, AFM initiates an automatic failover to the secondary site. Ensure that the failover from primary to secondary is successful.

2. Mount the file shares from eNAS on the secondary site if needed and ensure that they are accessible.

3. Restore and restart the application on the secondary site.

4. VDM can be failed back to the primary site once the primary site is fully restored.

5. In the event of an unplanned failover, the primary site is not cleaned up as part of the failover operation. Therefore, the primary site needs to be cleaned up first using the NAS CLI, which then resumes the reverse replication from the secondary to the primary site.


2017-01-13 16:18:35.850 - Check DataMover server_2 of the primary site. Result: FAILED.

2017-01-13 16:18:36.110 - /home/nasadmin/.vmsm/fo.sh 4101

2017-01-13 16:18:36.131 - Now doing precondition check... done: 53 s

2017-01-13 16:18:36.154 - Now doing health check... done: 2 s

2017-01-13 16:18:36.174 - Now cleaning local... done: 3 s

2017-01-13 16:18:36.194 -

2017-01-13 16:18:36.195 - INFO: In case the SRDF switch failure, refer to the CCMD 26317029390 for remounting R1's file systems, checkpoint file

systems.

2017-01-13 16:18:36.215 - Now switching the session (may take several minutes)... done: 8 s

2017-01-13 16:18:36.234 - Now importing sync replica of NAS database...

2017-01-13 16:18:36.254 - started R1 configuration import...

2017-01-13 16:18:36.273 - applying R1 configuration to local site...

2017-01-13 16:18:36.293 - applying R1 Filesystem configuration to local site...

2017-01-13 16:18:36.313 - Updating R2 device configuration on local site...

2017-01-13 16:18:36.333 - Updated R2 device configuration on local site...

2017-01-13 16:18:36.353 - importing volume table...

2017-01-13 16:18:36.373 - imported volume table...

2017-01-13 16:18:36.393 - Updating R2 device configuration on local site...

2017-01-13 16:18:36.413 - Updated R2 device configuration on local site...

2017-01-13 16:18:36.433 - importing volume table...

2017-01-13 16:18:36.454 - imported volume table...

2017-01-13 16:18:36.474 - applied R1 Filesystem configuration to local site...

2017-01-13 16:18:36.494 - Marking devices for server in progress...

2017-01-13 16:18:39.609 - Updated the disk type..

2017-01-13 16:18:39.615 - started check disk reachability for R2 devices...

2017-01-13 16:18:39.622 - started check fs id and name conflict during config merge...

2017-01-13 16:18:39.629 - id = 4101

2017-01-13 16:18:39.635 - name = root_fs_vdm_VDM_2

2017-01-13 16:18:39.642 - id = 4103

2017-01-13 16:18:39.648 - name = vdm_2_fs1

2017-01-13 16:18:39.655 -

2017-01-13 16:18:39.656 - importing sync replica of NAS database... done: 49 s

2017-01-13 16:18:39.662 - Now creating VDM... done: 5 s

2017-01-13 16:18:39.668 - Now importing VDM settings... done: 0 s

2017-01-13 16:18:39.675 - Now mounting exported FS(s)/checkpoint(s)... done: 2 s

2017-01-13 16:18:39.681 - Now loading VDM... done: 2 s

2017-01-13 16:18:39.688 - Now turning up local network interface(s)... done: 1 s

2017-01-13 16:18:39.695 - Service outage end: 125s

2017-01-13 16:18:39.702 -

2017-01-13 16:18:39.703 - Now mounting unexported FS(s)/checkpoint(s)... done: 0 s

2017-01-13 16:18:39.709 - Now importing schedule(s)... done: 0 s

2017-01-13 16:18:39.716 - Elapsed time: 127s

2017-01-13 16:18:39.721 - done

Figure 19. Monitoring unplanned failover log

Detailed execution steps:

1. If AFM detects that the VDM on the primary site is not reachable, it initiates a failover to the secondary site. Using the nas_syncrep CLI, verify that the replication has stopped. Use the FARM GUI to check the current state of the failover process in the logs window, and ensure that it shows the failover has completed successfully.

2. Mount the SMB shares on the application host from the eNAS on the secondary site. Because the network configuration also moved from the primary to the secondary site as part of the failover operation, the host continues to use the same IP addresses for the SMB server, so the SMB shares can be mounted using the original share name and SMB server IP address.

3. Restore and restart the application as needed (a host-side scripting sketch follows the note below).

Note: The data mover hosting the VDM will be rebooted as part of the cleanup process which will affect other VDMs hosted by the same data mover.
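The host-side portion of steps 2 and 3 can be scripted. The following PowerShell sketch is illustrative only and assumes hypothetical names: the UNC path \\cifs_1\vdm_2_fs1, the S: drive letter, the default MSSQLSERVER service, and a database named SalesDB. Invoke-Sqlcmd additionally requires the SqlServer (or SQLPS) module on the host.

# Verify that the SMB share is reachable again after the VDM failover.
# The share name and SMB server IP address are unchanged, so the original UNC path still works.
$share = '\\cifs_1\vdm_2_fs1'          # hypothetical SMB server and share names
if (Test-Path $share) {
    # Re-map the share if the application expects a drive letter.
    New-SmbMapping -LocalPath 'S:' -RemotePath $share -Persistent $true

    # Restart SQL Server so it reopens the database files that live on the share.
    Restart-Service -Name 'MSSQLSERVER'

    # Bring the database online if it was marked offline during the outage (hypothetical database name).
    Invoke-Sqlcmd -ServerInstance 'localhost' -Query 'ALTER DATABASE [SalesDB] SET ONLINE;'
}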

Resuming primary site operations

1. Once the primary site comes back up, restore the VDM to the primary site. After an unplanned failover, first clean up the primary site using the eNAS CLI. After the primary site cleanup completes, replication resumes from the secondary site to the primary site. Issue the command shown below on the primary site eNAS control station for proper cleanup.

2. Once the cleanup operation is initiated, verify that the reverse replication from the secondary to the primary site has started.

3. To resume the VDM on the primary site, use the FARM restore option on the desired VDM sessions, as shown in Figure 20.


Cleaning up all VDMs on primary site:

$ nas_syncrep -Clean -all

id name vdm_name remote_system session_status

4096 session1 VDM_1 <--CS-0-569 in_sync

To clean up some specific VDMs on primary site:

$ nas_syncrep -Clean session1

WARNING: You have just issued the nas_syncrep -Clean command. This may result in a reboot of the original source Data Mover that the VDM was failed over from. Verify whether or not you have working VDM(s)/FS(s)/checkpoint(s) on this Data Mover and plan for this reboot accordingly. Running the nas_syncrep -Clean command while you have working VDM(s)/FS(s)/checkpoint(s) on this Data Mover will result in Data Unavailability during the reboot. Are you sure you want to proceed? [yes or no] yes

Now cleaning session session1 (may take several minutes)... done

Now rebooting Data Mover server_2... done

Now starting session session1... done

Figure 20. Restoring VDMs back to the primary site using FARM

Test results:

AFM with FAR on VMAX3 eNAS handles unplanned failover with minimal impact on application availability. Once the primary site is up, the Restore operation migrates the file services back to the primary site.

USE CASE 3 – VDM MIGRATION TO ANOTHER SYSTEM FOR LOAD BALANCING

Objective:

The purpose of this test case is to understand how FAR can be used to provide load balancing.

Test case execution steps:

1. Identify the VDMs that need to be migrated to another site for load balancing.

2. Configure the failover of the VDMs using AFM.

3. Use planned failover steps as outlined in Use Case 1 to migrate the VDMs to another site.

4. Mount SMB shares and start applications on remote site after the VDM failover.

Test results:

VMAX3 eNAS with FAR allows load balancing across the eNAS sites using planned failover of specific VDMs from the primary site to the secondary site. Figure 21 shows the effect of load balancing file services across the primary and secondary sites: after VDM2 was migrated to the remote eNAS, SQL Batch Requests/sec improved from 817 to 1855 for the database on VDM1 and from 957 to 1805 for the database on VDM2, reflecting more effective resource utilization on both sites.
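The Batch Requests/sec values shown in Figure 21 can be sampled from each SQL Server host with a performance counter query. The sketch below is only an illustration: the counter path assumes a default SQL Server instance (named instances use \MSSQL$<InstanceName>:SQL Statistics\Batch Requests/sec), and the host names are placeholders.

# Sample Batch Requests/sec from two SQL Server hosts (placeholder names)
# before and after the VDM migration to compare the load-balancing effect.
$counter = '\SQLServer:SQL Statistics\Batch Requests/sec'
Get-Counter -ComputerName 'SQLHOST1','SQLHOST2' -Counter $counter -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }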


Figure 21. SQL DB load balancing after VDM FAR Migration

CONCLUSION

VMAX3 with eNAS provides a consolidation platform for Microsoft server applications in SMB environments. It provides an easy way to provision, manage, and operate file and block environments while keeping application performance needs in mind. SLO management allows applications to meet compliance and latency requirements. With SMB 3.0 support, eNAS for VMAX3 provides data transfer offloading and load balancing benefits. Seamless, easy-to-use Unisphere UIs help the user perform file system provisioning in just a few steps. File Auto Recovery integrates eNAS with proven block-level replication using SRDF to allow load balancing, and planned and unplanned failover for eNAS-based applications.

REFERENCES

EMC VMAX3 Family Documentation Set

Deployment best practice for SQL Server with VMAX3 Service Level Objective Management

EMC VNX2 series documentation

Managing Volumes and File Systems on VNX® manually

VNX Replicator Documentation

Microsoft Offload data transfer

VNX SnapSure Documentation

Virtual Data Movers on EMC VNX

Configuring and Managing Network High Availability on VNX

VMAX eNAS 8.1.11.24 File Auto Recovery with SRDF/S

EMC VMAX3 Family Embedded NAS File Auto Recovery Manager Product Guide

APPENDIX I – STEP-BY-STEP STORAGE PROVISIONING USING UNISPHERE

CREATE STORAGE FOR ENAS USING UNISPHERE FOR VMAX

Device creation and masking on VMAX3 includes the following tasks:



Create a Storage Group (SG): An SG is a grouping of devices. The SLO management and masking view controls are at the storage group level.

Create a Masking View (MV): A masking view brings together a combination of storage group, port group, and initiator group. You do not need to create an Initiator Group or a Port Group for eNAS, as they are already created by the system at eNAS install time.

To provision storage for files using the System Dashboard in Unisphere for VMAX:

1. Select “Provision Storage for File” from COMMON TASKS.

2. In the new window that opens, provide a Storage Group Name, select the Service Level for the storage group, and indicate the number of devices and size per device, as shown in Figure 22. Create eight devices, or multiples of eight devices, for each storage group. The Storage Group Name provided in this step will appear as the Storage Pool Name on eNAS while configuring file systems. Do not add devices to the system-created Storage Group EMBEDDED_NAS_DM_SG because it is exclusively for eNAS boot and control LUNs, and for internal use only.

Figure 22. Storage provisioning for eNAS using Unisphere for VMAX

CREATE MASKING VIEW (MV)

Provisioning storage for file as described above creates a masking view as well, so you do not need to create one. However, if an existing storage group needs to be used by eNAS, use the system pre-configured port group EMBEDDED_NAS_DM_PG and the initiator group EMBEDDED_NAS_DM_IG to create a masking view.

CREATE FILE SYSTEMS AND SMB SHARE

Use Unisphere for VNX to create file systems and export them as an SMB share or an NFS export. Storage for file (eNAS) that was created using Unisphere for VMAX is automatically discovered by eNAS as storage pools. SMB share creation is a two-step process:

Create a file system

Configure the file system as an SMB Share

Figure 23 shows the Unisphere screen from which these tasks are accomplished.


Figure 23. File system creation and export using Unisphere for VNX

To create a file system, select the Storage Pool to be used for the file system, indicate the size of the file system and its maximum capacity, and enable Auto Extend (if required), as shown in Figure 24. Check the Slice Volumes option if you need to create multiple volumes from the same pool; otherwise the file system will consume all of the available space in the storage pool. If you are using manual volume management, first create a metavolume for the file system.

Figure 24. Creating eNAS file system

Any file system created on eNAS can be exported as a CIFS or NFS share. To configure a CIFS share, complete the Data Mover, CIFS Share Name, File System, and CIFS Server fields, as shown in Figure 25. Set up a CIFS server on the Data Mover before creating any CIFS shares. The system administrator must configure the CIFS server, including registration with an Active Directory server, before configuring SMB shares.

Note: Creating and configuring eNAS Data Movers and domain controllers are prerequisites that need to be completed ahead of time, and are beyond the scope of this paper.

An SMB/CIFS share is configured on top of a file system. Select the file system to be shared and give it a share name, which is the name by which hosts will access it. If multiple CIFS servers were created on the data mover, you can select a particular CIFS server. Figure 25 shows the parameters required for creating an SMB share.


Figure 25. Creating an SMB share via eNAS
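After the share is exported, it can be verified from a Windows client. This is a minimal sketch, assuming the SMB server name cifs_1 and share name FS1 used elsewhere in this paper; the cmdlets require Windows 8 / Windows Server 2012 or later.

# Map the eNAS SMB share and confirm that the negotiated dialect is SMB 3.x,
# which is required for features such as continuous availability.
New-SmbMapping -LocalPath 'M:' -RemotePath '\\cifs_1\FS1'
Get-SmbConnection -ServerName 'cifs_1' | Select-Object ServerName, ShareName, Dialect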


APPENDIX II – VMAX AND ENAS CLI

SAMPLE VMAX3 SOLUTIONS ENABLER COMMANDS TO CREATE STORAGE FOR ENAS

Create a storage group that is to be used for eNAS consumption using the default SRP.

# symsg -sid 115 create SQL_SG -srp DEFAULT_SRP

Assign SLO to Storage Group, with workload type OLTP

# symsg -sid 115 -sg SQL_SG set -slo gold -wl oltp

Create a masking view for the pre-created (already existing) storage group “SQL_SG”, using the system-defined port group and initiator group.

# symaccess -sid 115 create view -name NAS_SQL -sg SQL_SG -pg EMBEDDED_NAS_DM_PG -ig EMBEDDED_NAS_DM_IG -celerra

Add devices for eNAS. The -lun option is required and should have a value of 10 (hexadecimal) or greater, since LUN IDs 00 to 0F are reserved for system use.

# symaccess -sid 115 -type storage -name NAS_SQL add devs 153:162 -lun 153 -celerra

SAMPLE VMAX ENAS CLI COMMANDS TO CREATE FILESYSTEM, MOUNT POINTS AND CIFS EXPORTS

Create a file system from the storage pool corresponding to the storage group created in Solutions Enabler above.

# nas_fs -name FS1 -type uxfs -create size=200G pool=SQL_SG -option slice=y worm=off

For backup or disaster recovery, if there is a requirement to place the eNAS file system journal logs on the created file system itself, add “log_type=split” to the above command:

# nas_fs -name FS1 -type uxfs -create size=200G pool=SQL_SG -option slice=y worm=off log_type=split

Optional: Create a file system from an existing metavolume instead of a pool. The metavolume name is M_1_2 in this example.

# nas_fs -name FS1 -type uxfs -create M_1_2 worm=off

Create a mount point for the filesystem created above

# server_mountpoint server_2 -c /FS1

Mount the filesystem (default)

# server_mount server_2 FS1 /FS1

Mount the filesystem with SMB 3.0 continuous availability

# server_mount server_2 -o smbca FS1 /FS1

Export filesystem as CIFS export (default)

# server_export server_2 -P cifs -name FS1 /FS1

Export filesystem as CIFS export with type=CA (continuous availability SMB3.0)

# server_export server_2 -P cifs -name FS1 -o type=CA /FS1

Add DNS server

# server_dns server_2 -p tcp domainsql.local 10.108.200.1, 10.108.200.2

Start CIFS service

# server_setup server_2 -P cifs -o start

Note: Using the eNAS CLI requires SSH access to the eNAS Control Station.


Add computer name

# server_cifs server_2 -add compname=cifs_1, domain=domainsql.local

Join domain and authenticate

# server_cifs server_2 -Join compname=cifs_1.domainsql.local, domain=domainsql.local, admin=sqladmin

server_2: Enter Password: *********


APPENDIX III – DISCOVERING ENAS SMI-S PROVIDER WITH SCVMM

The eNAS SMI-S Provider is pre-installed and runs natively on the Control Station itself. This section describes the steps needed to discover the eNAS Software Control Station SMI-S Provider for SMB share provisioning in the System Center Virtual Machine Manager (SCVMM) console. This process consists of the following operations:

Discover existing SMB shares created by VNX Unisphere or eNAS CLI

Create new file systems and SMB shares on eNAS

Delete unused file systems from eNAS

Install the control station root certificate on the VMM server

1. Display the contents of the root CA certificate on the Control Station:

# /nas/sbin/nas_ca_certificate -display

2. Copy the entire contents from the “-----BEGIN CERTIFICATE-----” line to the “-----END CERTIFICATE-----” line to the clipboard.

3. Open Notepad on the SCVMM server, paste the contents of the certificate, and save the file as “root.cer”.

4. Import the certificate into the SCVMM server by double-clicking the “root.cer” file and making selections from the dialogs as shown in Figure 26. (A PowerShell alternative follows Figure 26.)

Figure 26. Import the certificate wizard
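Steps 3 and 4 can also be performed from an elevated PowerShell prompt on the SCVMM server instead of the certificate import wizard. This is a hedged sketch assuming the certificate text was saved to C:\Temp\root.cer; the subject filter used for verification is a placeholder and should match your Control Station's certificate subject.

# Import the eNAS Control Station root CA certificate into the local machine
# Trusted Root Certification Authorities store (run from an elevated prompt).
Import-Certificate -FilePath 'C:\Temp\root.cer' -CertStoreLocation 'Cert:\LocalMachine\Root'

# Verify that the certificate is present (placeholder subject string).
Get-ChildItem 'Cert:\LocalMachine\Root' | Where-Object { $_.Subject -like '*eNAS Control Station*' }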

Modify settings in eNAS SMI-S ECOM administration page

Configure settings for security and SSLClientAuthentication through the ECOM webpage at the Control Station URL https://<CS-IP>:5989/ECOMConfig, as shown in Figure 27.

Page 32: VMAX3 eNAS Deployment For Microsoft Windows and SQL Server

32

Figure 27. ECOM configuration control station URL

Click Dynamic Settings on the ECOM Administration page and locate the SSLClientAuthentication setting. Change the setting to "None" and click Apply, as shown in Figure 28. For more information, see the Microsoft TechNet blog.

Figure 28. SSL Client Authentication security settings
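Before adding the provider, it can help to confirm that the VMM server can reach the ECOM/SMI-S port on the Control Station. A minimal sketch, assuming a placeholder Control Station IP address:

# Confirm TCP connectivity from the VMM server to port 5989 on the eNAS Control Station.
Test-NetConnection -ComputerName '10.108.246.44' -Port 5989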

ENAS FILE SHARE DISCOVERY IN SCVMM CONSOLE

1. Launch the SCVMM console and highlight the “Fabric Resources” icon.

2. Expand the “Storage” tree and click “Providers”.

3. Add storage devices by selecting “Add a storage device that is managed by an SMI-S provider.” See Figure 29.


Figure 29. SCVMM console for adding storage discovered by SMI-S Provider

4. Complete the following storage provider discovery connection settings, as shown in Figure 30 (a scripted alternative follows Figure 30):

o Protocol: SMI-S CIMXML

o Provider IP address or FQDN

o TCP/IP port: 5989

o Use Secure Sockets Layer (SSL) connection

o Create a “Run As account” which is the standard “nasadmin” user account and then select it, as shown in Figure 30.

Figure 30. Connection settings for storage provider
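Provider discovery can also be scripted with the VMM PowerShell module. The following is a hedged sketch rather than a verified eNAS procedure; the provider name, Run As account name, and Control Station FQDN are placeholders, and the virtualmachinemanager module (installed with the VMM console) is assumed.

Import-Module virtualmachinemanager

# Create a Run As account holding the eNAS 'nasadmin' credentials (placeholder account name).
$cred  = Get-Credential -Message 'eNAS nasadmin credentials'
$runAs = New-SCRunAsAccount -Name 'eNAS-nasadmin' -Credential $cred

# Add the SMI-S CIM-XML provider over SSL on port 5989 (placeholder FQDN).
Add-SCStorageProvider -Name 'eNAS-SMIS' -RunAsAccount $runAs -NetworkDeviceName 'https://cs0.domainsql.local' -TCPPort 5989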


5. Click “Next” in the Storage Devices wizard. The eNAS CIFS exports should appear under “Storage Devices” for selection and classification. See Figure 31 for the storage devices and their assigned classifications.

Figure 31. Discovered eNAS cifs export for classification

6. At this point, the information about the eNAS CIFS exports can be found under Storage > File Servers. See Figure 32 for the list of file servers with their attributes.

Figure 32. eNAS file shares after discovery

ENAS SYSTEM FILE MANAGEMENT TASKS USING MICROSOFT SCVMM

Once the eNAS SMI-S provider is discovered, Microsoft System Center Virtual Machine Manager (SCVMM) can be used to:

1. Create and discover new CIFS exports on eNAS

2. Remove existing CIFS exports if the shares have no user data on them

3. Discover new CIFS exports created outside of SCVMM (Unisphere for VNX)

4. Once discovered, eNAS file exports can be used as storage for virtual hard disks in VMM by specifying the UNC path and size for the VHDX, for example: New-VHD -Path \\SFSERVER00\SHARE00\VM00.VHDX -Dynamic -SizeBytes 100GB (an expanded sketch follows this list)
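Expanding on item 4 above, this minimal sketch creates a dynamically expanding VHDX on an eNAS SMB share and attaches it to an existing virtual machine; the UNC path and VM name are placeholders, and the Hyper-V PowerShell module is assumed on the host.

# Create a 100 GB dynamically expanding VHDX on the eNAS SMB share (placeholder UNC path).
New-VHD -Path '\\SFSERVER00\SHARE00\VM00.VHDX' -Dynamic -SizeBytes 100GB

# Attach the new virtual disk to an existing virtual machine (placeholder VM name).
Add-VMHardDiskDrive -VMName 'VM00' -Path '\\SFSERVER00\SHARE00\VM00.VHDX'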


APPENDIX IV – FILE AUTO RECOVERY CONFIGURATION AND MANAGEMENT

This section describes the eNAS CLI and Symcli commands used to configure and manage FAR. Some of the file system operations can also be carried out using Unisphere (for VNX). FAR operations such as manual failover and reverse can be carried out using FARM. The command line examples below show how all FAR operations can be done using the eNAS CLI.

CREATE RDF DEVICES FOR ENAS FAR USAGE

Create a device of 2216 cylinders on both the local and remote VMAX3 systems and map it as LUN 9. The device emulation has to be CELERRA_FBA. Add the device to the existing EMBEDDED_NAS_DM_SG storage group, which contains the other eNAS control devices.

# symconfigure -sid 572 -cmd "create dev count=1, size=2216 cyl, emulation=CELERRA_FBA, config=TDEV;" commit

# symaccess -sid 572 -type storage -name EMBEDDED_NAS_DM_SG add dev 30 -lun 9 -celerra

Discover newly added device on eNAS using diskmark command on eNAS Control Station

$ nas_diskmark -mark -all -discovery y -monitor y

Configure the CS-to-CS relationship between the systems. This command needs to be executed on both systems, using the name and Control Station IP address of the other system.

$ nas_cel -create CS-0-569 -ip 10.108.246.45 -passphrase nasadmin

SET UP SYNCHRONOUS REPLICATION BETWEEN ENAS SYSTEMS

Set up synchronous replication. The command may fail the first two times; run it again until it passes (it may take up to three attempts). This problem will be fixed in later releases of eNAS. Refer to the FAR configuration guide for details of each parameter.

$ nas_cel -syncrep -enable CS-0-569 -local_fsidrange 4096,12287 -remote_fsidrange 12288,24575 -local_storage 000197200572 sym_dir=1H:28,2H:28,1H:29,2H:29 rdf_group=110 -remote_storage 000197200569 sym_dir=1H:28,2H:28,1H:29,2H:29 rdf_group=110

VDM SETUP

Create sync-replicable VDM and assign IP interfaces to it. New interfaces can be created if they do not exist.

$ nas_server -name VDM1 -type vdm -create server_2 pool=NAS_Data1 -option syncreplicable=yes

$ nas_server -vdm VDM1 -attach 10-108-245-241,10-108-245-242

Existing non-sync-replicable VDM created on a mapped pool can also be converted to a sync-replicable VDM

$ nas_server -vdm VDM0 -option syncreplicable=yes

Create, mount, and export the file system

$ nas_fs -name vdm1_fs1 -create size=1024G pool=NAS_Data1 -option slice=y

$ server_mount VDM1 -o smbca vdm1_fs1 /vdm1_fs1

$ server_export VDM1 -P cifs -name vdm1_fs1 -o type=CA /vdm1_fs1

VDM SYNC OPERATIONS

Create a synchronous replication session. This example uses two Ethernet interfaces on VDM.

$ nas_syncrep -create session1 -vdm VDM1 -remote_system CS-0-569 -remote_pool NAS_Data1 -remote_mover server_2 -network_devices fxg-2-0:fxg-2-0,fxg-2-1:fxg-2-1

Now validating params... done

Now creating LUN mapping... done

Now creating remote network interface(s)... done

Now marking remote pool as standby pool... done

Now updating local disk type... done

Now updating remote disk type... done

Now generating session entry... done

Done

List replication sessions and view session information.

$ nas_syncrep -l

id name vdm_name remote_system session_status

4096 Session1 VDM_1 -->CS-0-569 in_sync

$ nas_syncrep -info Session1

id = 4096


name = Session1

vdm_name = VDM_1

syncrep_role = active

local_system = 752-CS0

local_pool = NAS_1

local_mover = server_2

remote_system = CS-0-569

remote_pool = NAS_1

remote_mover = server_2

device_group = 60_572_60_569

session_status = in_sync

Create file system checkpoint and checkpoint schedule.

$ fs_ckpt vdm1_fs1 -name vdm1_fs1_ckpt_1 -Create pool=NAS_Data1

$ nas_ckpt_schedule -create vdm1_fs1_sched_1 -filesystem vdm1_fs1 -recurrence daily -runtimes 1:00 -keep 5

FAR OPERATIONS USING ENAS CLI

VDM failover

$ nas_syncrep -failover VDM1

WARNING: You have just issued the nas_syncrep -failover command. Verify whether the peer system or any of its file storage resources are accessible. If they are, then you should issue the nas_syncrep -reverse command instead. Running the nas_syncrep -failover command while the peer system is still accessible could result in Data Unavailability or Data Loss. Are you sure you want to proceed? [yes or no] yes

Now doing precondition check... done: 23 s

Now doing health check... done: 0 s

Now cleaning local... done: 2 s

Now switching the session (may take several minutes)... done: 12 s

Now importing sync replica of NAS database... done: 78 s

Now creating VDM... done: 3 s

Now importing VDM settings... done: 0 s

Now mounting exported FS(s)/checkpoint(s)... done: 3 s

Now loading VDM... done: 2 s

Now turning up local network interface(s)... done: 1 s

Service outage end: 124s

Now mounting unexported FS(s)/checkpoint(s)... done: 12 s

Now importing schedule(s)... done: 1 s

WARNING: The failover has completed successfully but the source system is not available to initiate a Clean of the stale VDM objects. Please run 'nas_syncrep -Clean <session_name>' on the original source side manually.

Elapsed time: 145s

Clean Replication session on primary site after an unplanned outage.

$ nas_syncrep -Clean -all

WARNING: You have just issued the nas_syncrep -Clean command. This may result in a reboot of the original source Data Mover that the VDM was failed over from. Verify whether or not you have working VDM(s)/FS(s)/checkpoint(s) on this Data Mover and plan for this reboot accordingly. Running the nas_syncrep -Clean command while you have working VDM(s)/FS(s)/checkpoint(s) on this Data Mover will result in Data Unavailability during the reboot. Are you sure you want to proceed? [yes or no] yes

Now cleaning session Session1 (may take several minutes)... done

Now rebooting Data Mover server_2... done

Now starting session Session1... done

Reverse replication.

$ nas_syncrep -reverse session3

WARNING: You have just issued the nas_syncrep -reverse command. There will be a period of Data Unavailability during the reverse operation, and, after the reverse operation, the VDM/FS(s)/checkpoint(s) protected by the sync replication session will be reversed to the local site. Are you sure you want to proceed? [yes or no] yes

Now doing precondition check... done: 26 s

Now doing health check... done: 10 s

Now cleaning local... done: 2 s

Service outage start......

INFO: In case the 'turning down remote network interface(s)' fail, refer to the CCMD 26317029389 to access the file systems and/or ckpt file systems from the client.


Now turning down remote network interface(s)... done: 10 s

INFO: In case the SRDF switch failure, refer to the CCMD 26317029390 for remounting R1's file systems, checkpoint file systems.

Now switching the session (may take several minutes)... done: 20 s

Now importing sync replica of NAS database... done: 31 s

Now creating VDM... done: 4 s

Now importing VDM settings... done: 0 s

Now mounting exported FS(s)/checkpoint(s)... done: 2 s

Now loading VDM... done: 3 s

Now turning up local network interface(s)... done: 1 s

Service outage end: 71 s

Now mounting unexported FS(s)/checkpoint(s)... done: 0 s

Now importing schedule(s)... done: 0 s

Now unloading remote VDM/FS(s)/checkpoint(s)... done: 25 s

Now cleaning remote... done: 16 s

Elapsed time: 150s

done