
Dell EMC Host Connectivity Guide for Windows

P/N 300-000-603 REV 59

This document is not intended for audiences in China, Hong Kong, and Taiwan.


Copyright © 2015 - 2017 Dell Inc. or its subsidiaries. All rights reserved.

Published May 2017

Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

Dell, EMC2, EMC, and the EMC logo are registered trademarks or trademarks of Dell Inc. or its subsidiaries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to Dell EMC Online Support (https://support.emc.com).


CONTENTS

Preface........................................................................................................................................ 9

Chapter 1 General Procedures and Information

General Windows information .................................................................... 14
     Terminology ........................................................................................ 14
     Utilities and functions ......................................................................... 14

Windows environment ............................................................................... 15
     Hardware connectivity ........................................................................ 15

Booting Windows from external storage .................................................... 16
     Boot-from-SAN ................................................................................... 16
     Benefits of boot-from-SAN ................................................................. 16
     Boot-from-SAN configuration restrictions ........................................... 16
     Risks of booting from the storage array .............................................. 17
     How to determine I/O latency and load on the boot LUN .................... 17
     Configuring Unity and VNX series systems for boot from SAN ........... 18

SAN Booting a Windows Host to a Unity array .......................................... 19
     Prerequisites ....................................................................................... 19
     Configure host connections ................................................................ 19

Create a LUN and configure to the host .................................................... 23
Microsoft Windows Failover Clustering ..................................................... 28

Chapter 2 iSCSI Attach Environments

Introduction ............................................................................................... 30
     Terminology ........................................................................................ 30
     Software ............................................................................................. 30
     Boot device support ............................................................................ 30

Windows 2008 R2 iSCSI Initiator manual procedure .................................. 31
     Windows 2008 R2 iSCSI Initiator cleanup ........................................... 35

Using MS iSNS server software with iSCSI configurations ........................ 38
iSCSI Boot with the Intel PRO/1000 family of adapters ............................. 39

     Preparing your storage array for boot ................................................. 39
     Post installation information ............................................................... 41

Notes on Microsoft iSCSI Initiator ............................................................. 44
     iSCSI failover behavior with the Microsoft iSCSI initiator .................... 44
     Microsoft Cluster Server .................................................................... 60
     Boot .................................................................................................... 60
     NIC teaming ........................................................................................ 61
     Using the Initiator with PowerPath ..................................................... 61
     Commonly seen issues ........................................................................ 65

Chapter 3 Virtual Provisioning

Virtual Provisioning on Symmetrix ............................................................. 70
     Terminology ........................................................................................ 71
     Management tools .............................................................................. 72
     Thin device ......................................................................................... 72

Implementation considerations .................................................................. 74
     Over-subscribed thin pools ................................................................. 74


     Thin-hostile environments ................................................................... 75
     Pre-provisioning with thin devices in a thin hostile environment ......... 75
     Host boot/root/swap/dump devices positioned on Symmetrix VP (tdev) devices ....... 76
     Cluster configurations ........................................................................ 76

Operating system characteristics .............................................................. 78

Chapter 4 Windows Host Connectivity with Dell EMC VPLEX

Dell EMC VPLEX ........................................................................................ 80
Prerequisites ............................................................................................. 81
Host connectivity ...................................................................................... 82
Configuring Fibre Channel HBAs ............................................................... 83

     Setting queue depth and execution throttle for QLogic ...................... 83
     Setting queue depth and queue target for Emulex .............................. 88

Windows Failover Clustering with VPLEX .................................................. 91
     Setting up quorum on a Windows 2012/2012 R2 Failover Cluster for VPLEX Metro or Geo clusters ....... 92
     Configuring quorum on Windows 2008/2008 R2 Failover Cluster for VPLEX Metro or Geo clusters ....... 96

     VPLEX Metro or Geo cluster configuration ......................................... 96
     Prerequisites ....................................................................................... 97
     Setting up quorum on a Windows 2008/2008 R2 Failover Cluster for VPLEX Metro or Geo clusters ....... 97

Chapter 5 Dell EMC PowerPath for Windows

PowerPath and PowerPath iSCSI ............................................................. 104
PowerPath for Windows ........................................................................... 105

     PowerPath and MSCS ........................................................................ 105
     Integrating PowerPath into an existing MSCS cluster ....................... 105

PowerPath verification and problem determination .................................. 108
     Problem determination ....................................................................... 110
     Making changes to your environment ................................................ 113
     PowerPath messages ......................................................................... 113

Chapter 6 Microsoft Native MPIO and Hyper-V

Native MPIO with Windows Server 2008/Windows Server 2008 R2 ......... 116
     Support for Native MPIO in Windows Server 2008 and Windows Server 2008 R2 ....... 116
     Configuring Native MPIO for Windows 2008 Server Core and Windows 2008 R2 Server Core ....... 116

Native MPIO with Windows Server 2012 .................................................. 120
     Support for Native MPIO in Windows Server 2012 ............................ 120
     Configuring Native MPIO for Windows Server 2012 .......................... 120

Known issues ........................................................................................... 125
Hyper-V ................................................................................................... 126

Appendix A Persistent Binding

Understanding persistent binding ............................................................ 128
     Methods of persistent binding ........................................................... 130


Appendix B Dell EMC Solutions Enabler

Dell EMC Solutions Enabler ..................................................................... 132
     References ........................................................................................ 132

Appendix C Veritas Volume Management Software ......................................... 133


FIGURES


1    Four paths .......................................................................................................... 42
2    PowerPathAdmin ................................................................................................ 43
3    Advanced Settings dialog box ............................................................................ 44
4    Single iSCSI subnet configuration ...................................................................... 45
5    Multiple iSCSI subnet configuration ................................................................... 51
6    iSCSI Initiator Properties dialog box ................................................................... 62
7    Log On to Target dialog box ............................................................................... 63
8    Advanced Settings dialog box ............................................................................ 63
9    Four paths .......................................................................................................... 64
10   Virtual Provisioning on Symmetrix ...................................................................... 70
11   Thin device and thin storage pool containing data devices ................................. 73
12   VPLEX Metro cluster configuration example ...................................................... 96
13   PowerPath Administration icon ......................................................................... 108
14   PowerPath Monitor Taskbar icons and status ................................................... 108
15   One path ........................................................................................................... 109
16   Multiple paths ................................................................................................... 110
17   Error with an Array port .................................................................................... 112
18   Failed HBA path ................................................................................................ 113
19   MPIO Properties dialog box .............................................................................. 122
20   Original configuration before the reboot ........................................................... 129
21   Host after the reboot ........................................................................................ 129


PREFACE

As part of an effort to improve and enhance the performance and capabilities of its product line, Dell EMC from time to time releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all revisions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, contact your Dell EMC Representative.

This guide describes the features and setup procedures for Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2 host interfaces to Dell EMC storage arrays over Fibre Channel or iSCSI.

Note: This document was accurate at publication time. New versions of this document might be released on Dell EMC Online Support. Check to ensure that you are using the latest version of this document.

Audience This guide is intended for use by storage administrators, system programmers, or operators who are involved in acquiring, managing, or operating Dell EMC VMAX™ All Flash Family, Dell EMC VMAX3™ Family, Dell EMC VMAX Family, Dell EMC Unity™ Family, EMC Unified VNX™ series, Dell EMC XtremIO™, Dell EMC VPLEX™, and host devices, and Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008, and Windows Server 2008 R2.

Readers of this guide are expected to be familiar with these storage systems and their operation.

Dell EMC Support Matrix

For the most up-to-date information, always consult the Dell EMC Simple Support Matrix (ESM) on E-Lab Interoperability Navigator (ELN).

VMAX All Flash, VMAX3, VMAX, Symmetrix™, and VNX references

Unless otherwise noted:

◆ Any general references to VMAX3 include the VMAX All Flash Family, VMAX3 Family, VMAX Family, and DMX.

◆ Any general references to Unity include any array models in the Unity and Unified VNX Families.

Related documentation

For Dell EMC documentation, refer to Dell EMC Online Support.

Conventions used in this guide

Dell EMC uses the following conventions for notes and cautions.

IMPORTANT

An important notice contains information essential to software or hardware operation.

Note: A note presents information that is important, but not hazard-related.


Typographical conventions

Dell EMC uses the following type style conventions in this document.

Normal          Used in running (nonprocedural) text for:
                • Names of interface elements, such as names of windows, dialog boxes, buttons, fields, and menus
                • Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, and utilities
                • URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, and notifications

Bold            Used in running (nonprocedural) text for names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, and man pages
                Used in procedures for:
                • Names of interface elements, such as names of windows, dialog boxes, buttons, fields, and menus
                • What the user specifically selects, clicks, presses, or types

Italic          Used in all text (including procedures) for:
                • Full titles of publications referenced in text
                • Emphasis, for example, a new term
                • Variables

Courier         Used for:
                • System output, such as an error message or script
                • URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold    Used for specific user input, such as commands

Courier italic  Used in procedures for:
                • Variables on the command line
                • User input variables

< >             Angle brackets enclose parameter or variable values supplied by the user

[ ]             Square brackets enclose optional values

|               Vertical bar indicates alternate selections — the bar means “or”

{ }             Braces enclose content that the user must specify, such as x or y or z

...             Ellipses indicate nonessential information omitted from the example

Where to get help

Dell EMC support, product, and licensing information can be obtained on the Dell EMC Online Support site as described next.

Note: To open a service request through the Dell EMC Online Support site, you must have a valid support agreement. Contact your Dell EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.


Product information

For documentation, release notes, software updates, or for information about Dell EMC products, licensing, and service, go to Dell EMC Online Support (registration required).

Technical support

Dell EMC offers a variety of support options.

Support by Product — Dell EMC offers consolidated, product-specific information on the Web at the Dell EMC Support By Product page.

The Support by Product web pages offer quick links to Documentation, White Papers, Advisories (such as frequently used Knowledgebase articles), and Downloads, as well as more dynamic content, such as presentations, discussion, relevant Customer Support Forum entries, and a link to EMC Live Chat.

Dell EMC Live Chat — Open a Chat or instant message session with a Dell EMC Support Engineer.

eLicensing support

To activate your entitlements and obtain your Symmetrix license files, visit the Service Center on Dell EMC Online Support, as directed on your License Authorization Code (LAC) letter that was e-mailed to you.

For help with missing or incorrect entitlements after activation (that is, expected functionality remains unavailable because it is not licensed), contact your Dell EMC Account Representative or Authorized Reseller.

For help with any errors applying license files through Solutions Enabler, contact the Dell EMC Customer Support Center.

If you are missing a LAC letter, or require further instructions on activating your licenses through the Online Support site, contact Dell EMC's worldwide Licensing team at [email protected] or call:

◆ North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.

◆ EMEA: +353 (0) 21 4879862 and follow the voice prompts.

We'd like to hear from you!

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your opinions of this document to:

[email protected]


CHAPTER 1
General Procedures and Information

This chapter provides general procedures and information about Windows hosts.

◆ General Windows information ..................................................... 14
◆ Windows environment ................................................................. 15
◆ Booting Windows from external storage ...................................... 16
◆ Microsoft Windows Failover Clustering ....................................... 28


General Windows information

This section provides information that is common to all supported versions of Windows. Read the entire section before proceeding to the rest of the chapter.

Terminology

You should understand these terms:

◆ Free space—An unused and unformatted portion of a hard disk that can be partitioned or subpartitioned.

◆ Partition—A portion of a physical hard disk that functions as though it were a physically separate unit.

◆ Volume—A partition or collection of partitions that have been formatted for use by a file system. A volume is assigned a drive letter.

◆ Primary partition—A portion of a physical disk that can be marked for use by an operating system. A physical disk can have up to four primary partitions. A primary partition cannot be subpartitioned.

Utilities and functions

Here are some Windows functions and utilities you can use to define and manage VMAX3, Unity series, Dell EMC Unified systems, and XtremIO systems. The use of these functions and utilities is optional; they are listed for reference only:

◆ Disk Manager—Graphical tool for managing disks; for example, partitioning, creating, and deleting volumes.

◆ Registry Editor—Graphical tool for displaying detailed hardware and software configuration information. Not normally part of the Administrative Tools group, the registry editor, REGEDT32.EXE, is in the Windows \system32 subdirectory.

◆ Event Viewer—Graphical tool for viewing system or application errors.


Windows environment

This section lists Fibre Channel support information specific to the Windows environment.

For more information, refer to the appropriate chapter:

◆ Chapter 2, ”iSCSI Attach Environments”

Hardware connectivity

Refer to the Dell EMC Simple Support Matrix or contact your Dell EMC representative for the latest information on qualified hosts, host bus adapters, and connectivity equipment.

Dell EMC does not recommend mixing HBAs from different vendors in the same host.


Booting Windows from external storage

Windows hosts have been qualified for booting from Dell EMC array devices interfaced through Fibre Channel as described under Boot Device Support in the Dell EMC Simple Support Matrix. Refer to the appropriate Windows HBA guide, available on Dell EMC Online Support, for information on configuring your HBA and installing the Windows operating system to an external storage array:

◆ EMC Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

◆ EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

◆ EMC Host Connectivity with Brocade Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

Boot-from-SAN

Although Windows servers typically boot the operating system from a local, internal disk, many customers want to utilize the features of VMAX3, Unity, Unified VNX, and XtremIO to store and protect their boot disks and data. Boot-from-SAN enables VMAX3, Unity, Unified VNX, and XtremIO to be used as the boot disk for your server instead of a directly-attached (or internal) hard disk. Using a properly configured Fibre Channel HBA, FCoE CNA, or blade server mezzanine adapter connected and zoned to the same switch or fabric as the storage array, a server can be configured to use a LUN presented from the array as its boot disk.

Benefits of boot-from-SAN

Boot-from-SAN can simplify management in the data center. Separating the boot image from each server allows administrators to leverage their investments in Dell EMC storage arrays to achieve high availability, better data integrity, and more efficient storage management. Other benefits can include:

◆ Improved disaster tolerance

◆ Reduced total cost through diskless servers

◆ High-availability storage

◆ Rapid server repurposing

◆ Consolidation of image management

Boot-from-SAN configuration restrictions

Refer to the Dell EMC Simple Support Matrix for any specific boot-from-SAN restrictions since this guide no longer contains restriction information. The information in the Dell EMC Simple Support Matrix supersedes any restriction references found in previous HBA installation guides.


Risks of booting from the storage array

When using the storage array as a boot disk, Dell EMC recommends that you shut down the host server during any maintenance procedures that could make the boot disk unavailable to the host.

IMPORTANT

Microsoft Windows operating systems use virtual memory paging files that reside on the boot disk. If the paging file becomes unavailable to the memory management system when it is needed, the operating system will crash with a blue screen.

Any of these events could crash a system booting from the storage array:

◆ Lost connection to array (pulled or damaged cable connection)

◆ Array service/upgrade procedures, such as on-line microcode upgrades and/or configuration changes

◆ Array failures, including failed lasers on Fibre Channel ports

◆ Array power failure

◆ Storage Area Network failures, such as Fibre Channel switches, switch components, or switch power failures

◆ Storage Area Network service/upgrade procedures, such as firmware upgrades or hardware replacements

Note: Dell EMC recommends moving the Windows virtual memory paging file to a local disk when booting from the storage array. Consult your Windows manual for instructions on how to move the paging file.
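For illustration only, the following PowerShell/WMI sketch shows one way to relocate the paging file from a SAN boot volume (C:) to a local disk (D:). The drive letters and sizes are placeholder values, and the exact procedure can vary by Windows release; verify it for your environment before use.

    # Sketch: disable the automatically managed paging file, create one on a local disk,
    # and remove the one on the SAN boot volume. Values below are examples only.
    $cs = Get-CimInstance -ClassName Win32_ComputerSystem
    $cs | Set-CimInstance -Property @{ AutomaticManagedPagefile = $false }

    New-CimInstance -ClassName Win32_PageFileSetting -Property @{ Name = 'D:\pagefile.sys'; InitialSize = 4096; MaximumSize = 8192 }
    Get-CimInstance -ClassName Win32_PageFileSetting | Where-Object { $_.Name -like 'C:*' } | Remove-CimInstance

    # Reboot the host for the paging-file change to take effect.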

How to determine I/O latency and load on the boot LUN

The current restrictions for boot-from-array configurations listed in the Dell EMC Simple Support Matrix represent the maximum configuration that is allowed using typical configurations. There are cases where your applications, host, array, or SAN may already be utilized to a point where these maximum values might not be achieved. Under these conditions, you may wish to reduce the configuration from the maximums listed in the Dell EMC Simple Support Matrix for improved performance and functionality.

Here are some general measurements that can be used to determine if your environment might not support the maximum allowed boot-from-array configurations:

◆ Using the Windows Performance Monitor, capture and analyze the Physical Disk and Paging File counters for your boot LUN (a sample counter collection is shown after this list). If response time (sec/operation) or disk queue depth seems to be increasing over time, review any additional loading that may be affecting boot LUN performance (HBA/SAN saturation, failovers, ISL usage, and so forth).

◆ Use the available array performance management tools to verify that the array configuration, LUN configuration, and host access are configured optimally for each host.
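As an illustration, the Performance Monitor counters mentioned above can also be collected from PowerShell with Get-Counter. The PhysicalDisk instance name and output path below are placeholders; replace them with the instance that corresponds to your boot LUN.

    # Sample boot-LUN latency, queue depth, and paging-file usage every 15 seconds for one hour
    Get-Counter -Counter @(
        '\PhysicalDisk(0 C:)\Avg. Disk sec/Read',
        '\PhysicalDisk(0 C:)\Avg. Disk sec/Write',
        '\PhysicalDisk(0 C:)\Current Disk Queue Length',
        '\Paging File(_Total)\% Usage'
    ) -SampleInterval 15 -MaxSamples 240 |
        Export-Counter -Path C:\PerfLogs\bootlun.blg -FileFormat BLG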


Possible ways to reduce the load on the boot LUN include:

◆ Move application data away from the boot LUN.

◆ Reduce the number of LUNs bound to the same physical disks.

◆ Select an improved performance RAID type.

◆ Contact your Dell EMC support representative for additional information.

Configuring Unity and VNX series systems for boot from SAN

By default, Unity series and Unified VNX storage systems are configured with all of the proper settings that a Windows server requires for a successful boot from SAN. Unity and Unified VNX series storage systems have two storage processors (SPs) which allow for highly available data access even if a single hardware fault has occurred. In order for a host to be properly configured for high availability with boot-from-SAN, the HBA BIOS should have connections to both SPs on the Unity and Unified VNX system.

At the start of the Windows boot procedure, there is no failover software running. HBA BIOS, with a primary path and secondary path(s) properly configured (with access to both SPs), will provide high availability while booting from SAN with a single hardware fault.

IMPORTANT

Dell EMC strongly recommends using failover mode 4 (ALUA active/active) when supported, as ALUA will allow I/O access to the boot LUN from either SP, regardless of which SP currently owns the boot LUN.

Failover mode 1 is an active/passive failover mode. I/O can only successfully complete if it is directed to the SP that currently owns the boot LUN. If HBA BIOS attempts to boot from a passive path, BIOS will have to time out before attempting a secondary path to the active (owning) SP, which can cause delays at boot time. Using ALUA failover mode whenever possible will avoid these delays.

To configure a host to boot from SAN, the server needs to have a boot LUN presented to it from the array, which requires that the WWN of the HBA(s) or CNA(s), or the iqn of an iSCSI host, be registered.

In configurations where a server is already running Windows and is being attached to a Unity series or Unified VNX system, the Dell EMC Unisphere™/Navisphere™ Agent would be installed on the server. This agent automatically registers the server's HBA WWNs on the array. In boot-from-SAN configurations where the OS is going to be installed on the Unity series or Unified VNX system, there is no agent available to perform the registration. Manual registration of the HBA WWNs is required in order to present a LUN to the server for boot. The following section, "SAN Booting a Windows Host to a Unity array," includes this procedure.

For instructions on how to register a host to boot from an iSCSI-based SAN, refer to Chapter 2, ”iSCSI Attach Environments.”


SAN Booting a Windows Host to a Unity array

Use the following procedure to add the host to the Unity array, and connect the SAN boot LUN to the host in preparation for installing Windows Server.

Prerequisites

Zone the host with one HBA port to one target port of the Unity array. For the OS installation, only one path to the boot LUN is necessary. After the operating system is installed, you can zone and configure multiple paths to the Unity array.

Configure host connections

1. Open the Unisphere Manager connection to the Unity array.

2. Under Access, select Hosts > + Host, as follows:


3. Type a Name and Description and click Next, as follows:

4. Select the Operating System, type the Network Address, and click Next, as follows:


5. On the Select iSCSI Initiators page, click Next, as follows:

6. Under Select Fibre Channel Initiators > Auto-Discovered Initiators, select the Initiator WWN that the host will use to access storage resources.

Note: If you do not find the initiator you want in the list, click Create Initiator to manually add an initiator, and then select it from the list of Manually Created Initiators.

Click Next when you are finished, as follows:


7. Review the host configuration and click Finish, as follows.


Create a LUN and configure to the host

This example uses Block storage.

1. Open Unisphere Manager.

2. Under Storage, select Block, and then launch the Create LUN wizard by selecting the + sign under LUNs.

3. Type a Name and Description and click Next, as follows:

4. In the Create a LUN dialog box, select a storage Pool, Tiering Policy, LUN Size, and Host I/O Limit, and then select Next, as follows:


5. In the Select Host Access dialog box, select the host name that was previously configured, and click OK, as follows:

6. Under Configure Access, select the hosts that can access the storage resource.

For block-based storage, you can configure each host to access the storage resource, snapshots of the storage resource, or both. Click Next when you are finished, as follows:

7. Under Configure Snapshot Schedule, select Enable Automatic Snapshot Creation, and select a Snapshot schedule from the list.


The following example shows default protection of every day at 3:00 AM for 2 days. You can also select New Schedule to create a schedule that is not in the list. Click Next when you are finished:

8. To set the replication mode and recovery point objective (RPO), select Enable Replication and follow the directions at the right side of the following dialog box:

Notes:

• To create remote replication interfaces, navigate to Data Protection > Replication > Interfaces.

• To set up replication using RecoverPoint, go back to step 6, Configure Access, and configure the RecoverPoint host to access the storage resource.


9. Review your settings in the following Summary. Click Back to change any settings, and click Finish to accept them:

The following Results screen confirms the completion of the preparation steps for installing Windows. After the installation, you can zone and configure secondary paths and a second HBA port:

IMPORTANT

Make note of the host name you chose during the manual registration process. If you install the Unisphere/Navisphere host agent on your Windows server after installation, you must ensure that your Windows server is given the same name that you used during registration. If the name is different, and you install the Unisphere/Navisphere host agent, your registration on the VNX series or Unity system could be lost and your server could lose access to the boot LUN and crash.

Manually registering an HBA WWN/WWPN

Your server HBA WWNs/WWPNs can also be registered manually if they have already been zoned and logged in to the array port. To manually register an HBA WWN/WWPN that is already logged in to the array, refer to the vendor HBA documentation.


Using Naviseccli to create an initiator record or manually register an HBA WWN/WWPN

The secure Navisphere command line utility naviseccli may also be used to create an initiator record or manually register an HBA WWN/WWPN. All the selections required in the “Manually registering an HBA WWN/WWPN” examples can be included in a single naviseccli storagegroup command. Refer to the naviseccli documentation on Dell EMC Online Support for full details of the switches of the storagegroup command.
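For illustration only, a registration and boot-LUN presentation sequence might look similar to the following sketch. The SP address, WWNN:WWPN pair, host name, group name, and LUN numbers are placeholders, and the exact switches should be verified against the naviseccli documentation for your array before use.

    naviseccli -h 10.1.1.10 storagegroup -setpath -o -hbauid 20:00:00:00:C9:AB:CD:EF:10:00:00:00:C9:AB:CD:EF -sp a -spport 0 -host winhost01 -ip 10.1.1.50 -failovermode 4 -arraycommpath 1
    naviseccli -h 10.1.1.10 storagegroup -create -gname WinBootSG
    naviseccli -h 10.1.1.10 storagegroup -addhlu -gname WinBootSG -hlu 0 -alu 25
    naviseccli -h 10.1.1.10 storagegroup -connecthost -o -host winhost01 -gname WinBootSG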

Configuring VMAX3 arrays for boot from SAN

Unlike Unity series and Unified VNX systems, VMAX3 arrays are not preconfigured with all of the settings that a Windows server requires for a successful boot from SAN. Specific VMAX3 director flags (sometimes referred to as director bits) are required. These flags must be enabled on every port that a Windows server is attached to. VMAX3 arrays are highly available, with multiple connections (FA ports) for failover if hardware faults occur. For a host to be properly configured for high availability with boot from SAN, the HBA BIOS should have connections to at least two FA ports on the VMAX3 array.

At the start of the Windows boot procedure, there is no failover software running. HBA BIOS, with a primary path and secondary path(s) properly configured (to separate FA ports), will provide high availability while booting from SAN with a single hardware fault.

To configure a host to boot from SAN, the server needs to have a boot LUN presented to it from the array. Unlike Unity series and Unified VNX systems, VMAX3 arrays do not require that an HBA's WWPN be registered. However, VMAX3 storage arrays do provide LUN masking features that require the HBA WWPN to be validated in the array’s device-masking database.

Various families of the Dell EMC HYPERMAX OS microcode use different techniques to enable and configure their LUN-masking features. To configure and apply LUN masking for your array model, use Dell EMC Solutions Enabler software to issue LUN-masking commands to the VMAX3 array through the Solutions Enabler command line interface (CLI).

Refer to the Solutions Enabler Array Controls and Management 8.3.0 CLI User Guide, located on Dell EMC Online Support, for instruction on using Solutions Enabler CLI to perform LUN-masking for your Symmetrix model.
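As a hedged illustration only, a VMAX3 masking view is typically built from an initiator group, a port group, and a storage group. The array ID, group names, device, director ports, and WWN below are placeholders, and switch syntax varies by Solutions Enabler release, so confirm each command against the CLI User Guide before use.

    symaccess -sid 1234 -type initiator -name winhost_ig create
    symaccess -sid 1234 -type initiator -name winhost_ig add -wwn 10000000c9abcdef
    symaccess -sid 1234 -type port -name winhost_pg create -dirport 1D:4,2D:4
    symaccess -sid 1234 -type storage -name winhost_sg create devs 00A5
    symaccess -sid 1234 create view -name winhost_mv -ig winhost_ig -pg winhost_pg -sg winhost_sg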

Note: It is assumed that your host HBA WWN/WWPN has not yet been zoned to the Symmetrix array.


Microsoft Windows Failover Clustering

Failover clustering—a Windows Server feature that enables you to group multiple servers together into a fault-tolerant cluster—provides new and improved features for software-defined datacenter customers and for many other workloads that run clusters on physical hardware or in virtual machines.

A failover cluster is a group of independent computers that work together to increase the availability and scalability of clustered roles (formerly called clustered applications and services). The clustered servers (called nodes) are connected by physical cables and by software. If one or more of the cluster nodes fail, other nodes begin to provide service (a process known as failover). In addition, the clustered roles are proactively monitored to verify that they are working properly. If they are not working, they are restarted or moved to another node.

Failover clusters also provide Cluster Shared Volume (CSV) functionality that provides a consistent, distributed namespace that clustered roles can use to access shared storage from all nodes. With the Failover Clustering feature, users experience a minimum of disruptions in service.

Failover Clustering has many practical applications, including:

◆ Highly available or continuously available file share storage for applications such as Microsoft SQL Server and Hyper-V virtual machines

◆ Highly available clustered roles that run on physical servers or on virtual machines that are installed on servers running Hyper-V

For more information, refer to Failover Clustering in Windows Server 2016 in the Microsoft Windows IT Center.
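For reference, on Windows Server 2012 and later the Failover Clustering feature can also be installed and a cluster created from PowerShell. The node names and static address below are examples only.

    # Install the feature and management tools on each node
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

    # Validate the nodes and their shared storage, then create a two-node cluster
    Test-Cluster -Node "NODE1", "NODE2"
    New-Cluster -Name "WINCLUS01" -Node "NODE1", "NODE2" -StaticAddress 192.168.1.100

    # Optionally add a clustered disk to Cluster Shared Volumes
    Add-ClusterSharedVolume -Name "Cluster Disk 1"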


CHAPTER 2
iSCSI Attach Environments

This chapter provides information on the Microsoft iSCSI Initiator and the Microsoft Cluster Server.

◆ Introduction ................................................................................ 30
◆ Windows 2008 R2 iSCSI Initiator manual procedure .................... 31
◆ Using MS iSNS server software with iSCSI configurations .......... 38
◆ iSCSI Boot with the Intel PRO/1000 family of adapters ............... 39
◆ Notes on Microsoft iSCSI Initiator ............................................... 44


Introduction

Microsoft iSCSI Initiator enables you to connect a host computer that is running Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2 to an external iSCSI-based storage array through an Ethernet network adapter. You can use Microsoft iSCSI Initiator in your existing network infrastructure to enable block-based storage area networks (SANs). SANs provide iSCSI target functionality without requiring an investment in additional hardware.

Terminology

You should understand these terms:

◆ Challenge Handshake Authentication Protocol (CHAP) — An authentication method that is used during the iSCSI login in both the target discovery and the normal login.

◆ iSCSI Network Portal — The host NIC IP address that is used for the iSCSI driver to create a session with the storage.

Software

Node-names

The Microsoft iSCSI Initiator supports an iSCSI target that is configured with a node-name according to the following rules:

◆ Node-names are encoded in the UTF-8 character set.

◆ The length of a node-name should be 223 characters or less.

◆ A name can include any of the following valid characters:

• a through z (upper or lower case; uppercase characters are always mapped to lowercase)

• 0 through 9

• . (period)

• - (dash)

• : (colon)

Refer to the Microsoft iSCSI Initiator x.0 User’s Guide for the complete set of rules for setting up the valid initiator and target node-names.

Boot device support

The Microsoft iSCSI initiator does not support booting the iSCSI host from iSCSI storage. Refer to the Dell EMC Simple Support Matrix for the latest information about boot device support.


Windows 2008 R2 iSCSI Initiator manual procedure

Note: Microsoft iSCSI Initiator is installed natively on Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and Windows Server 2008. On these operating systems, no installation steps are required. For more details and Windows 2012/2012 R2 information, refer to the Microsoft iSCSI Initiator Step-by-Step Guide document located at technet.microsoft.com.

Prior to configuring the iSCSI initiator, ensure you have decided exactly which NIC will connect to which target.

For example:

NIC1 and SPA-0 and SPB-0 are on one network subnet. NIC2 and SPA-1 and SPB-1 are on a different subnet. This example connects NIC1 to SPA-0 and SPB-0, and NIC2 to SPA-1 and SPB-1.

Note: These could also be on the same subnet, but we do not recommend it.

◆ NIC1

• SPA-0

• SPB-0

◆ NIC2

• SPA-1

• SPB-1

To configure the iSCSI Initiator manually, complete the following steps:

1. While logged in as an Administrator on the server, open the Microsoft iSCSI Initiator through Control Panel (showing All Control Panel Items) or Administrative Tools.

Note: Do not use Quick Connect on the Targets tab. (If you used Quick Connect, see “Windows 2008 R2 iSCSI Initiator cleanup” on page 35).


2. In the iSCSI Initiator Properties window, select Discovery > Discover Portal:

The Discover Target Portal dialog box displays:

3. Enter the IP address of the target storage and select Advanced.


The Advanced Settings dialog box displays:

4. Select Microsoft iSCSI Initiator in the Local adapter field.

5. Select the IP address of the NIC to be used.

6. Click OK and then OK again.

The iSCSI Initiator Properties window displays:


7. Select the Targets tab.

8. Highlight the first target iqn and select Connect.

The Connect to Target dialog box displays:

9. Select Enable multi-path if using Dell EMC PowerPath™ or Windows 2008 Native MPIO.

10. Click Advanced.

The Advanced Settings dialog box displays:

11. In the Local adapter field, select Microsoft iSCSI Initiator from the drop-down menu.

12. In the Initiator IP field, select the correct NIC IP address from the drop-down menu.

13. In the Target portal IP field, select the IP address from the drop-down menu.

14. Click OK and then OK again.

15. Connect each of the other three targets in the list following the same procedure listed in the previous steps.


16. In the iSCSI Initiator Properties window, select the Favorite Targets tab. This should show each of the targets that have been connected:

17. If the host has Unisphere/Navisphere Agent installed, you should now see it logged in and registered in Unisphere/Navisphere Manager. Otherwise you will need to manually register the NIC in Unisphere/Navisphere Manager.

18. Using Unisphere/Navisphere Manager, place the host in a Storage Group that contains LUNs, and then go back to the host and perform a Scan for hardware changes in Device Manager. After a few minutes, you should see the disk devices arrive in the PowerPath GUI and/or in Disk Management.

Note: PowerPath only shows the one adapter in the PowerPath GUI, even though you might be using multiple NICs. The adapter seen here does not represent the NICs you have installed in your system, but rather it represents the MS iSCSI software initiator.
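On Windows Server 2012 and later, the same discovery and per-path login can also be scripted with the iSCSI PowerShell cmdlets (on Windows Server 2008 R2, the iscsicli.exe utility provides similar functions). The addresses and IQN below are placeholders for the values used in your environment.

    # Discover the target portal through a specific NIC
    New-IscsiTargetPortal -TargetPortalAddress 10.10.10.100 -InitiatorPortalAddress 10.10.10.1

    # List the targets that were discovered
    Get-IscsiTarget

    # Log in to one target over one NIC/SP-port pair, persistently and with multipath enabled
    Connect-IscsiTarget -NodeAddress "iqn.1992-04.com.emc:cx.example.a0" `
        -TargetPortalAddress 10.10.10.100 -InitiatorPortalAddress 10.10.10.1 `
        -IsMultipathEnabled $true -IsPersistent $true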

Windows 2008 R2 iSCSI Initiator cleanup

Note: If running Windows 2008 Failover Cluster, ensure that this host does not own any disk resources. Move resources to another node in the cluster or take the disk resources offline.

Similarly, any LUNs being used on a standalone Windows host need to be taken offline. Use Disk Management to take the disks offline.
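On Windows Server 2012 and later, the disks can also be taken offline from PowerShell, as in the following sketch (the disk number is an example; confirm it with Get-Disk first):

    Get-Disk
    Set-Disk -Number 2 -IsOffline $true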


To clean up the iSCSI Initiator, complete the following steps:

1. While logged in as an Administrator on the server, open the Microsoft iSCSI Initiator through Control Panel (showing All Control Panel Items) or Administrative Tools.

2. Select the Discovery tab, select one of the addresses in the Target Portals field, and click Remove:

3. A warning appears. Click OK:

4. Remove all the other Target Portals.

5. In the iSCSI Initiator Properties window, select the Volumes and Devices tab, select the volume in the Volume List field, and click Remove. Do this for each volume you want to remove.

6. In the iSCSI Initiator Properties window, select the Favorite Targets tab, select the target from the Favorite Targets field, and click Remove. Do this for each target that you want to remove:


7. In the iSCSI Initiator Properties window, select the Targets tab, select one of the targets in the Discovered targets field, and click Disconnect.

8. A warning message displays. Click Yes:

9. Repeat Step 7 and Step 8 for each of the targets to be disconnected.

If you are running PowerPath, all of the devices will show as dead in the PowerPath GUI. To clean and remove these, complete the following steps:

1. Open a command prompt using Run as administrator:

2. Type powermt check and, when asked whether to remove the dead devices, select "a" for ALL (see the example after this list).

3. Check the Discovery tab to ensure that there are no further targets connected.

4. Check each of the iSCSI initiator tabs and ensure they are all empty.
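The PowerPath cleanup and verification can be run directly from the elevated command prompt, for example:

    powermt check
    powermt display dev=all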


Using MS iSNS server software with iSCSI configurations

The Microsoft iSNS Server is a Microsoft Windows service that processes iSNS registrations, deregistrations, and queries via TCP/IP from iSNS clients, and also maintains a database of these registrations. The Microsoft iSNS Server package consists of Windows service software, a control-panel applet, a command-line interface tool, and a set of WMI interfaces. Additionally, there are DLLs allowing Microsoft Cluster Server to manage Microsoft iSNS Server as a cluster resource.

When configured properly, the iSNS server allows iSCSI initiators to query for available iSCSI targets that are registered with the iSNS server. The iSNS server also allows administration of iSCSI networks by providing a form of “zoning” in order to allow initiators access only to targets designated by the administrator.

Prior to running the installation, we recommend that your iSCSI network interface controller (NIC) be configured to work with your iSCSI network. Symmetrix, VNX series, and CLARiiON iSCSI interfaces must be configured to recognize and register with the iSNS server software. Refer to the DMX MPCD For iSCSI Version 1.0.0 Technical Notes for information on the configuration of the MPCD for iSCSI into Symmetrix DMX systems. VNX series and CLARiiON configuration is done by a Dell EMC Customer Engineer (CE) through Unisphere/Navisphere Manager. The CE will configure your CX-Series system settings for each iSCSI port.

After your storage array target ports are configured, install the iSNS server software by starting the installation package downloaded from the Microsoft website.

To install the iSNS server software, refer to Microsoft documentation at http://microsoft.com.


iSCSI Boot with the Intel PRO/1000 family of adapters

Intel iSCSI Boot is designed for the Intel PRO/1000 family of PCI-Express Server Adapters. Intel iSCSI Boot provides the capability to boot from a remote iSCSI disk volume located on an iSCSI-based Storage Area Network (SAN).

The basic steps to configuring boot from SAN are:

1. Prepare your storage array for boot from SAN.

2. Install boot-capable hardware in your system.

3. Install the latest Intel iSCSI Boot firmware using the iscsicli DOS utility.

4. Connect the host to a network that contains the iSCSI target.

5. Configure the iSCSI boot firmware on the NIC to boot from a pre-configured iSCSI target disk.

6. Configure the host to boot from the iSCSI target.

This section focuses on preparing your array to boot from SAN. The steps listed above are documented on the Intel iSCSI Remote Boot Support page, with a list of supported adapters.

Preparing your storage array for boot

This section explains how to prepare your array in order to successfully present a boot LUN to your host.

The first thing you need to consider is what the host name will be. Using the naming conventions explained in “Node-names” on page 30, record an appropriate iqn name for your host.

In the following example, we will use an iqn name of iqn.1992-05.com.microsoft:intel.hctlab.hct.
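If the Microsoft software initiator is already installed on the host, its assigned IQN can be read instead of composing one manually. For example, on Windows Server 2012 and later (shown here only as an illustration; on older hosts, the initiator name is displayed in the iSCSI Initiator Properties dialog box):

    Get-InitiatorPort | Where-Object ConnectionType -eq 'iSCSI' | Select-Object NodeAddress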

Configuring your CX3 for iSCSI boot

Note: The following example assumes that you are familiar with Unisphere/Navisphere Manager.

For a boot from SAN to work properly, you first need to present a LUN to the host.

To do this on a CX3 using Navisphere Manager:

1. Create new initiator records that identify the host to the array.

2. Create a record for each SP port that you might potentially connect.

3. Create a storage group with the new server and boot LUN. This LUN should be sized so that the OS, and any other applications, fit properly.


After this is complete, follow these steps:

1. Right-click on the Storage Array and then select Connectivity Status:

2. Click New to display the Create Initiator Record window:

3. In the Create Initiator Record window:

a. Enter the iqn name of the host in the Initiator Name field. For our example, use iqn.1992-05.com.microsoft:intel.hctlab.hct.

b. Select the SP port to which the host will connect.

c. Enter a host name and IP address in the Host Information section.

d. Click OK.


Once these steps are complete, the host displays in the Connectivity Status window:

4. Remember that the above steps create an initiator record that has a path to B0. To create additional initiator records, repeat steps 2 and 3 on page 40.

a. Select a different SP port in the drop-down menu.

b. Instead of entering a new host name, select Existing Host and choose the host created during the creation of the initial initiator record.

Note: This is critical to provide uninterrupted access if an array side path failure should occur.

5. Once you have created initiator records for each SP port, you will be able to create a storage group as you normally would and assign the newly created host and a boot LUN.

At this point you have configured the array to present a boot LUN to the host at boot up. You can continue with the instructions documented by Intel to install your OS to a local hard drive and then image your host OS to the boot LUN assigned above.
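For reference, the storage-group portion of this procedure can also be scripted with the Navisphere/Unisphere CLI where it is installed. The following is a minimal sketch under assumed names, not the exact procedure above: <SP address> is the array management address, BootGroup is a hypothetical storage group name, 44 is an example ALU, and intel-hct is a hypothetical host name. Verify the options against the naviseccli version shipped with your array software.

naviseccli -h <SP address> storagegroup -create -gname BootGroup
naviseccli -h <SP address> storagegroup -addhlu -gname BootGroup -hlu 0 -alu 44
naviseccli -h <SP address> storagegroup -connecthost -host intel-hct -gname BootGroup -o

Presenting the boot LUN as HLU 0 keeps it the first device the adapter discovers at boot.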

Post installation information

This section contains the following information:

◆ “Using two Intel NICs in a single host” on page 41

◆ “PowerPath for Windows” on page 42

Using two Intel NICs in a single host

The process in “Configuring your CX3 for iSCSI boot”, beginning on page 39, creates connections that can be accessed by the host on all ports available on the CX3 (in our example ports A0, A1, B0, and B1).

By using a dual-port PRO/1000, or two single-port PRO/1000s, the Intel BIOS will allow you to set up one port as the primary and another as the secondary. By configuring the primary login to connect to one SP, and the secondary login to connect to the other SP, your host will have access to both SPs.


Note: You do not need to configure Microsoft iSCSI Software Initiator for Windows to be able to detect the iSCSI Disk. Microsoft iSCSI Software Initiator automatically retrieves the iSCSI configurations from the PRO/1000 adapter iSCSI Boot firmware.
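To confirm what the boot firmware handed off, you can list the active sessions and device mappings from Windows once it is up; a quick check, for example:

iscsicli SessionList
iscsicli ReportTargetMappings

The SessionList output shows the Initiator Portal and Target Portal of each session, in the same format as the examples later in this chapter.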

PowerPath for Windows

After you are successfully booting from your array, you can install PowerPath.

Before installing PowerPath, ensure that the paths between the NICs and the array ports are set up properly. To do this, use a combination of the Intel BIOS logins, documented in the "Firmware Setup" section of the Intel guide, along with additional target logins using the Microsoft Initiator. What you are trying to accomplish is a path setup that looks much like what is discussed in “Using the Initiator with PowerPath” on page 61.

When complete, your paths will look like those shown in Figure 1 on page 42.

Figure 1 Four paths

Setting this up will be slightly different from what is discussed in “Using the Initiator with PowerPath” on page 61. Remember that you have already created two paths by configuring the Intel BIOS with primary and secondary logins. So, for example, if you configure the Intel BIOS to connect to A0 and B1, after you boot your host the Microsoft Initiator will show two connected logins, to ports A0 and B1, on the Targets tab.

To complete the path setup you need to use the process beginning with Step 1 on page 62. When complete, you will have something similar to the following:

◆ NIC1 A0

◆ NIC2 A1

◆ NIC1 B0

◆ NIC2 B1


You can now install PowerPath 4.6.x. Once installed, PowerPath Administrator will look similar to Figure 2. Failures on the array side (loss of a path between an SP port and the switch, or an SP failure) will be managed correctly by PowerPath.

Figure 2 PowerPath Admin
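A quick way to confirm from the command line that PowerPath has claimed the device and sees all four paths is the powermt utility, for example:

powermt display dev=all

Each boot LUN should report four paths (A0, A1, B0, and B1) in the alive state, similar to the powermt display examples later in this chapter.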

Note: PowerPath will sometimes show behavior that is not typically seen in non-boot implementations because of the design of the boot version of the Microsoft Initiator. The most notable difference is when a host side cable/NIC fault occurs. If the cable connected to the NIC that first found the LUN at boot time is disconnected, or if the NIC fails, PowerPath will show three dead paths instead of the two that would be expected. This behavior is expected with the Microsoft Initiator boot version. If the paths were set up as previously explained, a host side fault will not affect your system.


Notes on Microsoft iSCSI Initiator

This section contains important information about Microsoft iSCSI Initiator, including:

◆ “Microsoft Cluster Server” on page 60

◆ “Boot” on page 60

◆ “NIC teaming” on page 61

◆ “Using the Initiator with PowerPath” on page 61

◆ “Commonly seen issues” on page 65

iSCSI failover behavior with the Microsoft iSCSI initiator

When creating an iSCSI session using the Microsoft iSCSI Initiator, you must choose, in the Advanced Settings dialog box (Figure 3 on page 44), whether to:

◆ Have iSCSI traffic for that session travel over a specific NIC, or

◆ Allow the OS to choose which NIC will issue the iSCSI traffic.

This option also allows the Microsoft iSCSI Initiator to perform some failover (independent of PowerPath) in the case of a NIC failure.

Note: Multiple subnet configurations are highly recommended as issues can arise in single subnet configurations.

Figure 3 Advanced Settings dialog box


The Source IP pull-down menu in the Advanced Settings dialog box lists the IP address of each NIC on the server, as well as an entry labeled Default. Default allows Windows to choose which NIC to use for iSCSI traffic.

The examples in this section describe the different failover behaviors that can occur when a NIC fails in both a single subnet configuration and a multiple subnet configuration after choosing either a specific NIC from the pull-down menu or Default:

◆ “Single subnet, Source IP is "Default"” on page 47

◆ “Single subnet, Source IPs use specific NIC IP addresses” on page 49

◆ “Multiple subnets, Source IP is "Default"” on page 53

◆ “Multiple subnets, Source IPs use specific NIC IP addresses” on page 55

Single iSCSI subnet configuration

Figure 4 illustrates a single iSCSI subnet configuration.

Figure 4 Single iSCSI subnet configuration

In this configuration, there is a single subnet used for all iSCSI traffic. This iSCSI subnet is routable with the corporate network, but only iSCSI ports on the array and server NICs sending/receiving iSCSI traffic are connected to switches on this subnet.

The Windows server has a total of three NICs:

◆ One connected to the corporate network, with a defined default gateway

◆ Two connected to the iSCSI subnet, for NIC redundancy

Partial output from an ipconfig /all command from the server returns:

Ethernet adapter NIC1:
Description . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter
IP Address. . . . . . . . . : 10.14.108.78
Subnet Mask . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . :

Ethernet adapter NIC2:
Description . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter #2
IP Address. . . . . . . . . : 10.14.108.79
Subnet Mask . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . :


Ethernet adapter Corporate:

Description . . . . . . . . : Intel 8255x-based PCI Ethernet Adapter (10/100)
IP Address. . . . . . . . . : 10.14.16.172
Subnet Mask . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . : 10.14.16.1

All four iSCSI ports on the VNX series and CLARiiON are also connected to the iSCSI subnet. Each iSCSI port has a default gateway configured (with an iSCSI subnet address). The management port on the VNX series and CLARiiON is connected to the corporate network, and also has a default gateway defined.

The VNX series and CLARiiON's network configuration is as follows:

Management port (10/100 Mb): IP Address 10.14.16.46, default gateway 10.14.16.1
iSCSI Port SP A0: IP Address 10.14.108.46, default gateway 10.14.108.1
iSCSI Port SP A1: IP Address 10.14.108.48, default gateway 10.14.108.1
iSCSI Port SP B0: IP Address 10.14.108.47, default gateway 10.14.108.1
iSCSI Port SP B1: IP Address 10.14.108.49, default gateway 10.14.108.1
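Before creating iSCSI sessions it can be useful to confirm, from the host, that each array iSCSI port is reachable on the iSCSI subnet; a simple check using the addresses above:

ping 10.14.108.46
ping 10.14.108.47
ping 10.14.108.48
ping 10.14.108.49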

Fully licensed PowerPath is installed for all examples.

ipconfig information

The corporate network is the 10.14.16 subnet. The Server's Intel 8255x-based PCI Ethernet NIC connects to this subnet.

The iSCSI subnet is the 10.14.108 subnet. The Server's Intel Pro/1000 MT Dual Port NICs connect to this subnet.

An ipconfig /all command from the server returns:

Windows IP Configuration
Host Name . . . . . . . . . . . . : compaq8502
Primary Dns Suffix  . . . . . . . : YAMAHA.com
Node Type . . . . . . . . . . . . : Unknown
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : YAMAHA.com

Ethernet adapter NIC1:
Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter
Physical Address. . . . . . . . . : 00-04-23-AB-83-42
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 10.14.108.78
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :

Ethernet adapter NIC2:
Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter #2
Physical Address. . . . . . . . . : 00-04-23-AB-83-43
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 10.14.108.79
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :

Ethernet adapter Corporate:
Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Intel 8255x-based PCI Ethernet Adapter (10/100)
Physical Address. . . . . . . . . : 08-00-09-DC-E3-9C
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 10.14.16.172
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 10.14.16.1
DNS Servers . . . . . . . . . . . : 10.14.36.200
                                    10.14.22.13

Routing table information

A route print command from the server returns:

IPv4 Route Table
===========================================================================
Interface List
0x1 ........................... MS TCP Loopback interface
0x10003 ...00 04 23 ab 83 42 ...... Intel(R) PRO/1000 MT Dual Port Server Adapter
0x10004 ...00 04 23 ab 83 43 ...... Intel(R) PRO/1000 MT Dual Port Server Adapter #2
0x10005 ...08 00 09 dc e3 9c ...... Intel 8255x-based PCI Ethernet Adapter (10/100)
===========================================================================
Active Routes:
Network Destination    Netmask          Gateway        Interface      Metric
0.0.0.0                0.0.0.0          10.14.16.1     10.14.16.172   20
10.14.16.0             255.255.255.0    10.14.16.172   10.14.16.172   20
10.14.16.172           255.255.255.255  127.0.0.1      127.0.0.1      20
10.14.108.0            255.255.255.0    10.14.108.78   10.14.108.78   10
10.14.108.0            255.255.255.0    10.14.108.79   10.14.108.79   10
10.14.108.78           255.255.255.255  127.0.0.1      127.0.0.1      10
10.14.108.79           255.255.255.255  127.0.0.1      127.0.0.1      10
10.255.255.255         255.255.255.255  10.14.16.172   10.14.16.172   20
10.255.255.255         255.255.255.255  10.14.108.78   10.14.108.78   10
10.255.255.255         255.255.255.255  10.14.108.79   10.14.108.79   10
127.0.0.0              255.0.0.0        127.0.0.1      127.0.0.1       1
224.0.0.0              240.0.0.0        10.14.16.172   10.14.16.172   20
224.0.0.0              240.0.0.0        10.14.108.78   10.14.108.78   10
224.0.0.0              240.0.0.0        10.14.108.79   10.14.108.79   10
255.255.255.255        255.255.255.255  10.14.16.172   10.14.16.172    1
255.255.255.255        255.255.255.255  10.14.108.78   10.14.108.78    1
255.255.255.255        255.255.255.255  10.14.108.79   10.14.108.79    1
Default Gateway:       10.14.16.1
===========================================================================
Persistent Routes:
None

Example 1 Single subnet, Source IP is "Default"

The Default setting can be verified with the iscsicli sessionlist command. In the output, an Initiator Portal of 0.0.0.0/<TCP port> is how Default is displayed:

Microsoft iSCSI Initiator version 2.0 Build 1941

Total of 4 sessions

Session Id          : ffffffff8ae2600c-4000013700000002
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.a0
ISID                : 40 00 01 37 00 00
TSID                : 1f 34
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8ae2600c-1
    Initiator Portal : 0.0.0.0/1049
    Target Portal    : 10.14.108.46/3260
    CID              : 01 00

Session Id          : ffffffff8ae2600c-4000013700000003
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.a1
ISID                : 40 00 01 37 00 00
TSID                : 1a 85
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8ae2600c-2
    Initiator Portal : 0.0.0.0/1050
    Target Portal    : 10.14.108.48/3260
    CID              : 01 00

Session Id          : ffffffff8ae2600c-4000013700000004
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.b0
ISID                : 40 00 01 37 00 00
TSID                : 5a 34
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8ae2600c-3
    Initiator Portal : 0.0.0.0/1051
    Target Portal    : 10.14.108.47/3260
    CID              : 01 00

Session Id          : ffffffff8ae2600c-4000013700000005
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.b1
ISID                : 40 00 01 37 00 00
TSID                : 1a 85
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8ae2600c-4
    Initiator Portal : 0.0.0.0/1052
    Target Portal    : 10.14.108.49/3260
    CID              : 01 00

The operation completed successfully.

When two NICs are on the same iSCSI traffic subnet (10.14.108), Windows will only use one NIC for transmitting all iSCSI traffic. In this example, it uses NIC1 (IP address 10.14.108.78), which is the highest entry in the routing table for the 10.14.108 subnet.

NIC1 handles iSCSI traffic to all four VNX series and CLARiiON iSCSI SP ports, while NIC2 (10.14.108.79) is idle.

If NIC1 fails, Windows automatically fails over to NIC2 for all iSCSI traffic. This failure is transparent to PowerPath, as shown in the following powermt display output:

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=CLAROpt; priority=0; queued-IOs=2
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths   Interf.   Mode    State   Q-IOs Errors
==============================================================================
  7 port7\path0\tgt0\lun1  c7t0d1      SP A0     active  alive       0      0
  7 port7\path0\tgt1\lun1  c7t1d1      SP A1     active  alive       0      0
  7 port7\path0\tgt2\lun1  c7t2d1      SP B0     active  alive       1      0
  7 port7\path0\tgt3\lun1  c7t3d1      SP B1     active  alive       1      0

All paths remain listed as active, no errors are indicated, and the Q-IOs Stats indicate that both paths to SP B are still being used for traffic to this LUN.

If NIC1 is subsequently repaired, Windows does not return the iSCSI traffic to NIC1. NIC1 remains idle while NIC2 is used for all iSCSI traffic to all four VNX series and CLARiiON iSCSI SP ports. After NIC1 is repaired, it would take a NIC2 failure to move iSCSI traffic back to NIC1.

Example 2 Single subnet, Source IPs use specific NIC IP addresses

The following iscsicli sessionlist output shows that each NIC is used for two iSCSI sessions, four in total. The Initiator Portal entries show the NIC IP addresses in use.

Microsoft iSCSI Initiator version 2.0 Build 1941

Total of 4 sessions

Session Id          : ffffffff8ae2700c-4000013700000002
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.a0
ISID                : 40 00 01 37 00 00
TSID                : 5a 34
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8ae2700c-1
    Initiator Portal : 10.14.108.78/1394
    Target Portal    : 10.14.108.46/3260
    CID              : 01 00

Session Id          : ffffffff8ae2700c-4000013700000003
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.a1
ISID                : 40 00 01 37 00 00
TSID                : df 84
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8ae2700c-2
    Initiator Portal : 10.14.108.79/1395
    Target Portal    : 10.14.108.48/3260
    CID              : 01 00

Session Id          : ffffffff8ae2700c-4000013700000004
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.b0
ISID                : 40 00 01 37 00 00
TSID                : 1f 34
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8ae2700c-3
    Initiator Portal : 10.14.108.78/1396
    Target Portal    : 10.14.108.47/3260
    CID              : 01 00

Session Id          : ffffffff8ae2700c-4000013700000005
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.b1
ISID                : 40 00 01 37 00 00
TSID                : df 84
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8ae2700c-4
    Initiator Portal : 10.14.108.79/1397
    Target Portal    : 10.14.108.49/3260
    CID              : 01 00

The operation completed successfully.

In this configuration both NICs are used for iSCSI traffic, even though they are on the same subnet. iSCSI traffic targeted to VNX series and CLARiiON iSCSI SP ports A0 and B0 is directed through NIC1. iSCSI traffic targeted to VNX series and CLARiiON iSCSI SP ports A1 and B1 is directed through NIC2.

If NIC1 fails, Windows will not attempt to re-route iSCSI traffic targeted to VNX series and CLARiiON iSCSI SP ports A0 and B0, even though NIC2 can physically reach those ports. Instead, Windows will fail iSCSI sessions connected to SP ports A0 and B0, which in turn leads PowerPath to mark paths to those ports as “dead.” The following powermt display output shows paths to A0 and B0 marked as dead. All iSCSI traffic for this LUN is directed to the single surviving path on SP B, the current owner on this LUN.

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=CLAROpt; priority=0; queued-IOs=2
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths   Interf.   Mode    State   Q-IOs Errors
==============================================================================
  7 port7\path0\tgt0\lun1  c7t0d1      SP A0     active  dead        0      1
  7 port7\path0\tgt1\lun1  c7t1d1      SP A1     active  alive       0      0
  7 port7\path0\tgt2\lun1  c7t2d1      SP B0     active  dead        0      1
  7 port7\path0\tgt3\lun1  c7t3d1      SP B1     active  alive       2      0

If NIC1 is subsequently repaired, Windows re-establishes iSCSI sessions to VNX series and CLARiiON SP ports A0 and B0, and PowerPath marks those paths as "alive" and once again uses the path to B1 (as well as the path to B0) for IO to this LUN.


Multiple iSCSI subnet configuration

Figure 5 illustrates a multiple iSCSI subnet configuration.

Figure 5 Multiple iSCSI subnet configuration

In this configuration, there are two subnets used for all iSCSI traffic:

◆ iSCSI Subnet 108 is the 10.14.108 subnet.

◆ iSCSI Subnet 109 is the 10.14.109 subnet.

These iSCSI subnets are routable with the corporate network and with each other, but only iSCSI ports on the array and server NICs sending/receiving iSCSI traffic are connected to switches on these subnets.

The Windows server has a total of three NICs:

◆ One connected to the corporate network, with a defined default gateway

◆ One connected to each of the two iSCSI subnets

Partial output from an ipconfig /all command from the server returns:

Ethernet adapter iSCSI108:
Description . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter
IP Address. . . . . . . . . : 10.14.108.78
Subnet Mask . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . :

Ethernet adapter iSCSI109:
Description . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter #2
IP Address. . . . . . . . . : 10.14.109.78
Subnet Mask . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . :

Ethernet adapter Corporate:
Description . . . . . . . . : Intel 8255x-based PCI Ethernet Adapter (10/100)
IP Address. . . . . . . . . : 10.14.16.172
Subnet Mask . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . : 10.14.16.1


An iSCSI port on each Storage Processor of the VNX series and CLARiiON is connected to each iSCSI subnet. This is done to create high availability in case of either a subnet failure or an SP failure.

If a subnet fails, the remaining iSCSI subnet can continue to send iSCSI traffic to each SP, allowing both SPs to continue to process IO, thereby spreading the IO load.

If an SP fails, PowerPath must trespass all LUNs to the surviving SP, but PowerPath is able to load balance across both subnets, spreading the network load across them.

Each iSCSI port has a default gateway configured (with an iSCSI subnet address). The management port on the VNX series and CLARiiON is connected to the corporate network and also has a defined default gateway.

The VNX series and CLARiiON's network configuration is as follows:

Management port (10/100 Mb): IP Address 10.14.16.46, default gateway 10.14.16.1
iSCSI Port SP A0: IP Address 10.14.108.46, default gateway 10.14.108.1
iSCSI Port SP A1: IP Address 10.14.109.48, default gateway 10.14.109.1
iSCSI Port SP B0: IP Address 10.14.108.47, default gateway 10.14.108.1
iSCSI Port SP B1: IP Address 10.14.109.49, default gateway 10.14.109.1

Fully licensed PowerPath is installed for all examples.

ipconfig information

The corporate network is the 10.14.16 subnet. The Server's Intel 8255x-based PCI Ethernet NIC connects to this subnet.

The iSCSI subnets are the 10.14.108 and the 10.14.109 subnets. The Server's Intel Pro/1000 MT Dual Port NICs connect to these subnets.

An ipconfig /all command from the server returns:

Windows IP Configuration
Host Name . . . . . . . . . . . . : compaq8502
Primary Dns Suffix  . . . . . . . : YAMAHA.com
Node Type . . . . . . . . . . . . : Unknown
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : YAMAHA.com

Ethernet adapter iSCSI108:
Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter
Physical Address. . . . . . . . . : 00-04-23-AB-83-42
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 10.14.108.78
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :

Ethernet adapter iSCSI109:
Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Intel(R) PRO/1000 MT Dual Port Server Adapter #2
Physical Address. . . . . . . . . : 00-04-23-AB-83-43
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 10.14.109.78
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :

Ethernet adapter Corporate:
Connection-specific DNS Suffix  . :
Description . . . . . . . . . . . : Intel 8255x-based PCI Ethernet Adapter (10/100)
Physical Address. . . . . . . . . : 08-00-09-DC-E3-9C
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 10.14.16.172
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 10.14.16.1
DNS Servers . . . . . . . . . . . : 10.14.36.200
                                    10.14.22.13

Routing table information

A route print command from the server returns:

IPv4 Route Table
===========================================================================
Interface List
0x1 ........................... MS TCP Loopback interface
0x10003 ...00 04 23 ab 83 42 ...... Intel(R) PRO/1000 MT Dual Port Server Adapter
0x10004 ...00 04 23 ab 83 43 ...... Intel(R) PRO/1000 MT Dual Port Server Adapter #2
0x10005 ...08 00 09 dc e3 9c ...... Intel 8255x-based PCI Ethernet Adapter (10/100)
===========================================================================
Active Routes:
Network Destination    Netmask          Gateway        Interface      Metric
0.0.0.0                0.0.0.0          10.14.16.1     10.14.16.172   20
10.14.16.0             255.255.255.0    10.14.16.172   10.14.16.172   20
10.14.16.172           255.255.255.255  127.0.0.1      127.0.0.1      20
10.14.108.0            255.255.255.0    10.14.108.78   10.14.108.78   10
10.14.108.78           255.255.255.255  127.0.0.1      127.0.0.1      10
10.14.109.0            255.255.255.0    10.14.109.78   10.14.109.78   10
10.14.109.78           255.255.255.255  127.0.0.1      127.0.0.1      10
10.255.255.255         255.255.255.255  10.14.16.172   10.14.16.172   20
10.255.255.255         255.255.255.255  10.14.108.78   10.14.108.78   10
10.255.255.255         255.255.255.255  10.14.109.78   10.14.109.78   10
127.0.0.0              255.0.0.0        127.0.0.1      127.0.0.1       1
224.0.0.0              240.0.0.0        10.14.16.172   10.14.16.172   20
224.0.0.0              240.0.0.0        10.14.108.78   10.14.108.78   10
224.0.0.0              240.0.0.0        10.14.109.78   10.14.109.78   10
255.255.255.255        255.255.255.255  10.14.16.172   10.14.16.172    1
255.255.255.255        255.255.255.255  10.14.108.78   10.14.108.78    1
255.255.255.255        255.255.255.255  10.14.109.78   10.14.109.78    1
Default Gateway:       10.14.16.1
===========================================================================
Persistent Routes:
None

Example 3 Multiple subnets, Source IP is "Default"

The Default setting can be verified with the iscsicli sessionlist command. In the output, an Initiator Portal of 0.0.0.0/<TCP port> indicates Default:

Microsoft iSCSI Initiator version 2.0 Build 1941

Total of 4 sessions

Session Id          : ffffffff8b0aa904-4000013700000002
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.a0
ISID                : 40 00 01 37 00 00
TSID                : 5a 34
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8b0aa904-1
    Initiator Portal : 0.0.0.0/1066
    Target Portal    : 10.14.108.46/3260
    CID              : 01 00

Session Id          : ffffffff8b0aa904-4000013700000003
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.a1
ISID                : 40 00 01 37 00 00
TSID                : 1a 85
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8b0aa904-2
    Initiator Portal : 0.0.0.0/1067
    Target Portal    : 10.14.109.48/3260
    CID              : 01 00

Session Id          : ffffffff8b0aa904-4000013700000004
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.b0
ISID                : 40 00 01 37 00 00
TSID                : 5a 34
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8b0aa904-3
    Initiator Portal : 0.0.0.0/1068
    Target Portal    : 10.14.108.47/3260
    CID              : 01 00

Session Id          : ffffffff8b0aa904-4000013700000005
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.b1
ISID                : 40 00 01 37 00 00
TSID                : 1a 85
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8b0aa904-4
    Initiator Portal : 0.0.0.0/1071
    Target Portal    : 10.14.109.49/3260
    CID              : 01 00

The operation completed successfully.

In this configuration, NIC iSCSI108 will direct all iSCSI traffic targeted to VNX series and CLARiiON SP ports A0 and B0 since they are all on the same subnet (10.14.108). NIC iSCSI109 will direct all iSCSI traffic targeted to VNX series and CLARiiON SP ports A1 and B1 since they are all on the same subnet (10.14.109).

If NIC iSCSI108 fails, Windows automatically fails over iSCSI traffic targeted to the VNX series and CLARiiON SP ports on the 10.14.108 network. In this configuration, the Corporate NIC (10.14.16.172) is chosen, since that NIC has a default gateway defined and a routable network path to the 10.14.108 subnet.

This failure is transparent to PowerPath, as shown in the powermt display output below:

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=CLAROpt; priority=0; queued-IOs=2
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths   Interf.   Mode    State   Q-IOs Errors
==============================================================================
  7 port7\path0\tgt0\lun1  c7t0d1      SP A0     active  alive       0      0
  7 port7\path0\tgt1\lun1  c7t1d1      SP A1     active  alive       0      0
  7 port7\path0\tgt2\lun1  c7t2d1      SP B0     active  alive       1      0
  7 port7\path0\tgt3\lun1  c7t3d1      SP B1     active  alive       1      0

All paths remain listed as active, no errors are indicated, and the Q-IOs Stats indicate that both paths to SP B are still being used for traffic to this LUN. Traffic to SP port B0 is being routed through the 10.14.16 subnet to the 10.14.108 subnet, while traffic to SP port B1 remains on the 10.14.109 subnet.

Since the Corporate network is now in use for iSCSI traffic, there may be performance implications from this failover. If the Corporate network or Corporate NIC is at a slower speed than the iSCSI subnet, throughput will be reduced. Additionally, iSCSI traffic is now competing on the same wire with non-iSCSI traffic, which may cause network congestion and/or reduced response times for both. If this is a concern, there are multiple ways to avoid this issue:

◆ Configure your iSCSI subnets so they are not routable with other subnets or the corporate network.

◆ Configure your iSCSI sessions to use a specific NIC, as described in “Multiple subnets, Source IPs use specific NIC IP addresses” on page 55.

◆ Do not configure a default gateway on the VNX series and CLARiiON iSCSI ports. This prevents iSCSI traffic on these ports from leaving their subnet.

If the Corporate NIC were to subsequently fail, Windows would not be able to fail over iSCSI traffic targeted to SP ports on the 10.14.108 subnet since no default gateway exists for the lone-surviving NIC iSCSI109 on the 10.14.109 subnet. In this case, PowerPath would mark paths to SP ports A0 and B0 as "dead."

Assuming only NIC iSCSI108 failed, when NIC iSCSI108 is subsequently repaired, Windows does return the iSCSI traffic to NIC iSCSI108 since that is the shortest route to the 10.14.108 subnet.

Example 4 Multiple subnets, Source IPs use specific NIC IP addresses

The following iscsicli sessionlist output shows that each iSCSI NIC is used for two iSCSI sessions, four in total. The iSCSI sessions were created such that traffic directed to a VNX series and CLARiiON SP port is routed through the NIC on that SP port's subnet. The Initiator Portal entries show the NIC IP addresses in use:

Microsoft iSCSI Initiator version 2.0 Build 1941

Total of 4 sessions

Session Id          : ffffffff8ae2f424-4000013700000002
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.a0
ISID                : 40 00 01 37 00 00
TSID                : 1f 34
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8ae2f424-1
    Initiator Portal : 10.14.108.78/1050
    Target Portal    : 10.14.108.46/3260
    CID              : 01 00

Session Id          : ffffffff8ae2f424-4000013700000003
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.a1
ISID                : 40 00 01 37 00 00
TSID                : 1a 85
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8ae2f424-2
    Initiator Portal : 10.14.109.78/1051
    Target Portal    : 10.14.109.48/3260
    CID              : 01 00

Session Id          : ffffffff8ae2f424-4000013700000004
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.b0
ISID                : 40 00 01 37 00 00
TSID                : 5a 34
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8ae2f424-3
    Initiator Portal : 10.14.108.78/1052
    Target Portal    : 10.14.108.47/3260
    CID              : 01 00

Session Id          : ffffffff8ae2f424-4000013700000006
Initiator Node Name : iqn.1991-05.com.microsoft:compaq8502.yamaha.com
Target Node Name    : (null)
Target Name         : iqn.1992-04.com.emc:cx.apm00063505574.b1
ISID                : 40 00 01 37 00 00
TSID                : 1a 85
Number Connections  : 1

Connections:
    Connection Id    : ffffffff8ae2f424-5
    Initiator Portal : 10.14.109.78/1054
    Target Portal    : 10.14.109.49/3260
    CID              : 01 00

The operation completed successfully.

In this configuration, NIC iSCSI108 will direct all iSCSI traffic targeted to VNX series and CLARiiON SP ports A0 and B0 since they are all on the same subnet (10.14.108). NIC iSCSI109 will direct all iSCSI traffic targeted to VNX series and CLARiiON SP ports A1 and B1 since they are all on the same subnet (10.14.109).

If NIC iSCSI108 fails, Windows will not attempt to re-route iSCSI traffic targeted to VNX series and CLARiiON iSCSI SP ports A0 and B0, even though the Corporate NIC can physically reach those ports through its default gateway. Instead, Windows will fail iSCSI sessions connected to SP ports A0 and B0, which in turn leads PowerPath to mark paths to those ports as "dead."

The following powermt display output shows paths to A0 and B0 marked as dead. All iSCSI traffic for this LUN is directed to the single surviving path on SP B, the current owner on this LUN.

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=CLAROpt; priority=0; queued-IOs=2
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths   Interf.   Mode    State   Q-IOs Errors
==============================================================================
  7 port7\path0\tgt0\lun1  c7t0d1      SP A0     active  dead        0      1
  7 port7\path0\tgt1\lun1  c7t1d1      SP A1     active  alive       0      0
  7 port7\path0\tgt2\lun1  c7t2d1      SP B0     active  dead        0      1
  7 port7\path0\tgt3\lun1  c7t3d1      SP B1     active  alive       2      0

If NIC iSCSI108 is subsequently repaired, Windows re-establishes iSCSI sessions to VNX series and CLARiiON SP ports A0 and B0, and PowerPath marks those paths as "alive" and once again uses the path to B1 (as well as the path to B0) for IO to this LUN.

Unlicensed PowerPath and iSCSI failover behavior

Unlicensed PowerPath provides basic failover for hosts connected to VNX series and CLARiiON systems. Unlicensed PowerPath will only use a single path from a host to each SP for IO. Any paths besides these two paths are labeled “unlicensed” and cannot be used for IO. The Microsoft iSCSI Initiator always logs in to its targets in a specific order, connecting all SP A paths before connecting any SP B paths, and always connecting to the ports in numeric order. For example, the MS iSCSI Initiator will log in to port A0 first, then A1, A2, etc., for all SP A ports, then port B0, B1, etc., until the last SP B port. Unlicensed PowerPath chooses the first path discovered on each SP for IO. In all of the examples in this section, PowerPath will only use paths to A0 and B0 for IO.

The following is an example of a powermt display command with unlicensed PowerPath:

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths   Interf.   Mode    State   Q-IOs Errors
==============================================================================
  7 port7\path0\tgt0\lun1  c7t0d1      SP A0     active  alive       0      0
  7 port7\path0\tgt1\lun1  c7t1d1      SP A1     unlic   alive       0      0
  7 port7\path0\tgt2\lun1  c7t2d1      SP B0     active  alive       0      0
  7 port7\path0\tgt3\lun1  c7t3d1      SP B1     unlic   alive       0      0

Unlicensed PowerPath does not change any of the failover behavior in Example 1 on page 47 and Example 3 on page 53, where the Source IP address is “Default”. This is because PowerPath does not know that a NIC has failed in these examples since the MS iSCSI Initiator automatically fails over iSCSI sessions to a surviving NIC. The only impact unlicensed PowerPath has in these examples is that only a single path will be used for IO to an SP (to SP ports A0 and B0).

Unlicensed PowerPath will have an impact on failover behavior in Example 2 on page 49 and Example 4 on page 55, where the Source IP address uses a specific NIC IP address.

Revisiting Example 2 on page 49 with unlicensed PowerPath, the following shows an abridged version of the iscsicli sessionlist command output:

Microsoft iSCSI Initiator version 2.0 Build 1941

Total of 4 sessions


Session Id  : ffffffff8ae2700c-4000013700000002
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a0
Connections:
    Initiator Portal : 10.14.108.78/1394
    Target Portal    : 10.14.108.46/3260

Session Id  : ffffffff8ae2700c-4000013700000003
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a1
Connections:
    Initiator Portal : 10.14.108.79/1395
    Target Portal    : 10.14.108.48/3260

Session Id  : ffffffff8ae2700c-4000013700000004
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b0
Connections:
    Initiator Portal : 10.14.108.78/1396
    Target Portal    : 10.14.108.47/3260

Session Id  : ffffffff8ae2700c-4000013700000005
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b1
Connections:
    Initiator Portal : 10.14.108.79/1397
    Target Portal    : 10.14.108.49/3260

The operation completed successfully.

If NIC1 fails (10.14.108.78), Windows will not attempt to re-route iSCSI traffic targeted to VNX series and CLARiiON iSCSI ports A0 and B0, even though NIC2 (10.14.108.79) can physically reach those ports. Instead, Windows will fail iSCSI sessions connected to SP ports A0 and B0, which in turn leads PowerPath to mark those paths as dead. However, the surviving two paths (to SP ports A1 and B1) are not licensed and not used for IO, as shown in the following powermt display output:

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths   Interf.   Mode    State   Q-IOs Errors
==============================================================================
  7 port7\path0\tgt0\lun1  c7t0d1      SP A0     active  dead        0      1
  7 port7\path0\tgt1\lun1  c7t1d1      SP A1     unlic   alive       0      0
  7 port7\path0\tgt2\lun1  c7t2d1      SP B0     active  dead        0      1
  7 port7\path0\tgt3\lun1  c7t3d1      SP B1     unlic   alive       0      0

Since all paths are either dead or unlicensed, IO fails because no usable path exists from the host to the VNX series and CLARiiON.

However, a configuration can be designed that will avoid IO errors with a single NIC failure and unlicensed PowerPath. Revisiting Example 4 on page 55 with unlicensed PowerPath, the array IP addresses can be changed so that SP ports A0 and B0 do not reside on the same subnet. The VNX series and CLARiiON’s network configuration would be as follows:


Management port (10/100 Mb): IP Address 10.14.16.46, default gateway 10.14.16.1
iSCSI Port SP A0: IP Address 10.14.108.46, default gateway 10.14.108.1
iSCSI Port SP A1: IP Address 10.14.109.48, default gateway 10.14.109.1
iSCSI Port SP B0: IP Address 10.14.109.49, default gateway 10.14.109.1
iSCSI Port SP B1: IP Address 10.14.108.47, default gateway 10.14.108.1

Using this array IP configuration, the following shows an abridged version of the iscsicli sessionlist command output:

Microsoft iSCSI Initiator version 2.0 Build 1941

Total of 4 sessions

Session Id  : ffffffff8a8d9204-4000013700000003
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a0
Connections:
    Initiator Portal : 10.14.108.78/2787
    Target Portal    : 10.14.108.46/3260

Session Id  : ffffffff8a8d9204-4000013700000004
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.a1
Connections:
    Initiator Portal : 10.14.109.78/2788
    Target Portal    : 10.14.109.48/3260

Session Id  : ffffffff8a8d9204-4000013700000005
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b0
Connections:
    Initiator Portal : 10.14.109.78/2789
    Target Portal    : 10.14.109.49/3260

Session Id  : ffffffff8a8d9204-4000013700000006
Target Name : iqn.1992-04.com.emc:cx.apm00063505574.b1
Connections:
    Initiator Portal : 10.14.108.78/2790
    Target Portal    : 10.14.108.47/3260

The operation completed successfully.

Output from a powermt display command shows that paths to VNX series and CLARiiON SP ports A0 and B0 are usable for IO:

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=BasicFailover; priority=0; queued-IOs=5
Owner: default=SP B, current=SP B
Array failover mode: 1
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths   Interf.   Mode    State   Q-IOs Errors
==============================================================================
  7 port7\path0\tgt0\lun1  c7t0d1      SP A0     active  alive       0      0
  7 port7\path0\tgt1\lun1  c7t1d1      SP A1     unlic   alive       0      0
  7 port7\path0\tgt2\lun1  c7t2d1      SP B0     active  alive       5      0
  7 port7\path0\tgt3\lun1  c7t3d1      SP B1     unlic   alive       0      0


In this configuration, NIC “iSCSI109” (10.14.109.78) is directing all iSCSI traffic to VNX series and CLARiiON SP port B0 for this LUN, since this is the only path that is usable by unlicensed PowerPath to this LUN’s owning SP. If NIC “iSCSI109” fails, Windows will not attempt to re-route iSCSI traffic targeted to VNX series and CLARiiON iSCSI SP port B0, even though the “Corporate” NIC can physically reach this port through its default gateway. Instead, Windows will fail the iSCSI session connected to SP port B0 (as well as the session connected to SP port A1), which in turn leads PowerPath to mark paths to those SP ports as dead, as indicated in the following powermt display output:

Pseudo name=harddisk2
CLARiiON ID=APM00063505574 [Compaq8502]
Logical device ID=600601609FD119003CE4D89460A6DB11 [LUN 44]
state=alive; policy=BasicFailover; priority=0; queued-IOs=5
Owner: default=SP B, current=SP A
Array failover mode: 1
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths   Interf.   Mode    State   Q-IOs Errors
==============================================================================
  7 port7\path0\tgt0\lun1  c7t0d1      SP A0     active  alive       5      0
  7 port7\path0\tgt1\lun1  c7t1d1      SP A1     unlic   dead        0      1
  7 port7\path0\tgt2\lun1  c7t2d1      SP B0     active  dead        0      1
  7 port7\path0\tgt3\lun1  c7t3d1      SP B1     unlic   alive       0      0

However, a usable path for unlicensed PowerPath to this LUN still exists – the path from NIC “iSCSI108” (10.14.108.78) to VNX series and CLARiiON SP port A0. Therefore, PowerPath will trespass the LUN to SP A and successfully direct IO to this LUN through this surviving path.

If NIC “iSCSI109” is subsequently repaired, Windows will re-establish iSCSI sessions to VNX series and CLARiiON SP ports B0 and A1, and unlicensed PowerPath will mark these paths as alive. Additionally, since unlicensed PowerPath now has a healthy path to this LUN’s default SP, PowerPath will auto-restore this LUN by trespassing it back to SP B and then directing all iSCSI traffic to the path from NIC “iSCSI109” to VNX series and CLARiiON SP port B0.
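If you do not want to wait for the periodic path test and auto-restore, PowerPath can be told to re-test paths on demand; a minimal sketch:

powermt restore
powermt display dev=all

The restore command forces PowerPath to re-test dead paths, and the display command confirms the resulting path states.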

Microsoft Cluster Server

Microsoft Cluster Server (MSCS) shared storage (including the quorum disk) can be implemented using iSCSI disk volumes as the shared storage. There is no special iSCSI cluster or application configuration needed to support this scenario. Since the cluster service manages application dependencies, it is not necessary to make any cluster-managed service (or the cluster service itself) dependent upon the Microsoft iSCSI service.

Microsoft MPIO and the Microsoft iSCSI DSM can be used with MSCS.

Boot

Currently, it is not possible to boot a Windows system using an iSCSI disk volume provided by the Microsoft software iSCSI Initiator kernel mode driver. It is possible to boot a Windows system using an iSCSI disk volume provided by an iSCSI HBA. The only currently supported method for booting a Windows system using an iSCSI disk volume is via a supported HBA.


NIC teaming

Microsoft does not support the use of NIC teaming on iSCSI interfaces. For iSCSI SAN interfaces, Microsoft recommends that customers use dual- or quad-ported NICs or multiple single-port NICs and allow Microsoft Windows components to handle failover and redundancy to ensure consistent visibility into the data path. Failover and load balancing of multiple paths to an iSCSI target from within the same server is supported through Microsoft MPIO and multiple connections per session (failover and load balancing using multiple adapters in the same server).

Microsoft Cluster Server can also be used for fail over and load balancing of shared storage resources between servers (fail over and load balancing between servers). NIC teaming can still be used but only on LAN interfaces that are not used to connect to an iSCSI SAN.

Using the Initiator with PowerPath

This section provides an overview of how to set up the Initiator for use with PowerPath and describes how to use the Initiator to create the paths that PowerPath then takes control of. This setup can be compared to creating zones in a Fibre Channel environment.

Note: PowerPath version 5.1 SP2 is the minimum version required for support on Windows Server 2008. Upon installing PowerPath, the installer will load the MPIO feature and claim all disks associated with the Microsoft Initiator. During the install a DOS window will open. At this point, the MPIO feature is loaded. Do not close this window. Once the MPIO feature is installed, PowerPath will close this window automatically.

There are no manual steps that need to be done to configure MPIO. PowerPath will perform all the required steps as part of the PowerPath install.

The Initiator allows you to log in multiple paths to the same target and aggregate the duplicate devices into a single device exposed to Windows. Each path to the target can be established using different NICs, network infrastructure, and target ports. If one path fails then another session can continue processing I/O without interruption to the application. It is PowerPath that aggregates these paths and manages them to provide uninterrupted access to the device.

PowerPath uses the Microsoft MPIO framework that is installed with the Initiator in conjunction with Dell EMC's DSM to provide multipathing functionality.

This section is based on the following assumptions:


◆ The following two network cards are installed in the server:

• NIC1 (192.168.150.155)

• NIC2 (192.168.150.156)

◆ We will log into the target on four different ports:

• A0 (192.168.150.102)

• A1 (192.168.150.103)

• B0 (192.168.150.104)

• B1 (192.168.150.105).

Logging into a target.

◆ We will log into the target to create four separate paths:

• NIC1 A0

• NIC2 A1

• NIC1 B0

• NIC2 B1

Follow these steps to log in to the target.

1. Select the port to log into and click Log On.

Figure 6 iSCSI Initiator Properties dialog box


The Log On to Target dialog box displays.

Figure 7 Log On to Target dialog box

2. Click Advanced.

The Advanced Settings dialog box displays.

Figure 8 Advanced Settings dialog box


3. For the source IP choose the IP address associated with NIC1 and the Target Portal (port) associated with port A0 on the target. Enter CHAP information, if needed, and click OK.

To create the other paths to the array repeat the steps above for each path you want to create.

In our example, you would choose a different port on the Targets tab (A1, B0, and so on).

Note: You can only log into a port once from a host. Multiple logins are not supported.

On the Advanced Settings page choose your source and Target IPs.

Once completed, you should have four paths as shown in Figure 9.

Figure 9 Four paths
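At this point it can also be worth confirming from the command line that each session is pinned to the NIC you intended; a quick check:

iscsicli SessionList

Each session's Initiator Portal should show the specific NIC IP address you selected (192.168.150.155 or 192.168.150.156 in this example) rather than 0.0.0.0.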

You can now install PowerPath, which will aggregate all four paths and present one device to Windows Disk Management. From a host perspective, a failure of a NIC, or of the path between a NIC and the target, will be managed properly by PowerPath, maintaining access to the device on the target. For example, if NIC1 fails, the host would still have active paths to both SPs since NIC2 has a path to SP A1 and SP B1.


Commonly seen issues

This section lists some of the common issues E-Lab has seen during testing. For a more detailed list, refer to the Microsoft Initiator 2.0 Users Guide.

Multipathing

This section lists multipathing errors that may be recorded in the event logs and discusses potential solutions for these error conditions. Even though Dell EMC does not support multiple connections per session (MCS) on VNX series and CLARiiON and Symmetrix, it is easy to confuse the configuration of MPIO and MCS.

Note: Dell EMC does support multiple connections per session (MCS) on VNXe. Refer to Dell EMC knowledgebase solution, How to Configure Windows Multipathing on a VNXe at http://support.emc.com/kb/14664, for more information.

The following errors might help point out that you are actually configuring MCS instead of MPIO.

◆ Error: "Too many Connections" when you attempt to add a second connection to an existing session.

This issue can occur if the target does not support multiple connections per session (MCS). Consult with the target vendor to see if they plan on adding support for MCS.

◆ When you attempt to add a second connection to an existing session, you may notice that the Add button within the Session Connections window is grayed out.

This issue can occur if you logged onto the target using an iSCSI HBA that doesn't support MCS.

For more information on MCS vs. MPIO, refer to the Microsoft iSCSI Software Initiator 2.x Users Guide located at http://microsoft.com.

Long boot time

If your system takes a long time to display the login prompt after booting, or takes a long time to log in after you enter your login ID and password, there may be an issue with the Microsoft iSCSI Initiator service starting. First see the "Running automatic start services on iSCSI disks" section in the Microsoft Initiator 2.x Users Guide for information about persistent volumes and the binding operation. Check the system event log for the event Timeout waiting for iSCSI persistently bound volumes. If this event is present, one or more of the persistently bound volumes did not reappear after reboot, which could be due to a network or target error.

Another error you may see on machines that are slow to boot is the event log message Initiator Service failed to respond in time to a request to encrypt or decrypt data if you have persistent logins that are configured to use CHAP. Additionally, the persistent login will fail to log in. This is due to a timing issue in the service startup order. To work around this issue, increase the timeout value for the IPSecConfigTimeout value in the registry under:

HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance Number>\Parameters
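As a sketch, the timeout can also be raised from an elevated command prompt with reg.exe. The <Instance Number> subkey must be identified on your system (it is typically of the form 0000, 0001, and so on), the value of 60 shown here is only an example, and the REG_DWORD type is an assumption; verify both against the Microsoft Initiator documentation:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance Number>\Parameters" /v IPSecConfigTimeout /t REG_DWORD /d 60

The change typically takes effect after the host is restarted.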


This has been seen in some cases where clusters are present.

Logging out of target

The MS iSCSI Initiator service will not allow a session to be logged out if there are any handles open to the device. If you attempt to log out of the session in this case, the error The session cannot be logged out since a device on that session is currently being used is reported. This means that an application or device has an open handle to the physical disk on the target. The system event log should contain an event that names the device with the open handle.

Other event log errors of note

The source for system events logged by the Software Initiator is iScsiPrt. The message in the log conveys the cause of the event.

Some of the common events are listed below. A complete list of events can be found in the Microsoft Initiator User’s Guide 2.x.

Event ID 1 Initiator failed to connect to the target. Target IP address and TCP Port number are given in dump data.

This event is logged when the Initiator could not make a TCP connection to the given target portal. The dump data in this event will contain the target IP address and TCP port to which Initiator could not make a TCP connection.

Event ID 9 Target did not respond in time for a SCSI request. The CDB is given in the dump data.

This event is logged when the target did not complete a SCSI command within the timeout period specified by the SCSI layer. The dump data will contain the SCSI opcode corresponding to the SCSI command. Refer to the SCSI specification for more information about the SCSI command.

Event ID 20 Connection to the target was lost. The Initiator will attempt to retry the connection.

This event is logged when the Initiator loses connection to the target while the connection is in the iSCSI Full Feature Phase. This event typically happens when there are network problems, a network cable is removed, a network switch is shut down, or the target resets the connection. In all cases, the Initiator will attempt to reestablish the TCP connection.

Event ID 34 A connection to the target was lost, but Initiator successfully reconnected to the target. Dump data contains the target name.

This event is logged when the Initiator successfully reestablishes a TCP connection to the target.

Troubleshooting

Note the following potential problems and their solutions.

Problem: Adding a Symmetrix IP address to the target portals returns an initiator error.


Solution

◆ Verify that the iSCSI login parameter is correct.

◆ Verify that the Volume Logix setup is correct.

Problem: Adding a Symmetrix IP address to the target portals returns an authentication error.

Solution

◆ Verify that the CHAP feature is enabled, and that the user name and secret are correct.

◆ Verify that the Symmetrix system has the correct user name and secret.

Problem: Login to the Symmetrix target returns The target had already been logged via a Symmetrix.

Solution

◆ Press the Refresh button to verify that only one iSCSI session is established.

◆ Log out of the current iSCSI session to the Symmetrix system, and log in again.

Problem: Adding a target portal returns Connection Failed.

Solution

◆ Ensure that the IP address of the Symmetrix system is correct.

◆ Ensure connectivity by using the ping utility from the host to the Symmetrix GE port, and from the Symmetrix GE port to the host.

Problem: File shares on iSCSI devices may not be re-created when you restart your computer.

Solution

This issue can occur when the iSCSI Initiator service is not initialized when the Server service initializes. The Server service creates file shares. However, because iSCSI disk devices are not available, the Server service cannot create file shares for iSCSI devices until the iSCSI service is initialized. To resolve this issue, follow these steps on the affected server:

1. Make the Server service dependent on the iSCSI Initiator service.

2. Configure the BindPersistentVolumes option for the iSCSI Initiator service. (A command-line sketch of steps 1 and 2 follows Method 2 below.)

3. Configure persistent logons to the target. To do this, use one of the following methods.

Method 1

a. Double-click iSCSI Initiator in Control Panel.

b. Click the Available Targets tab.

c. Click a target in the Select a target list, and then click Log On.

d. Click to select the Automatically restore this connection when the system boots check box.

e. Click OK.


Method 2

a. Click Start, click Run, type cmd, and then click OK.

b. At the command prompt, type the following command, and then press ENTER:

iscsicli PersistentLoginTarget target_iqn T * * * * * * * * * * * * * * * 0

Note: target_iqn is the iSCSI qualified name (iqn) of the target.

Note: This resolution applies only when you specifically experience this issue with the iSCSI Initiator service. Refer to Microsoft Knowledge Base article 870964 for more information.
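The following command-line sketch covers steps 1 and 2 above, assuming the default service names LanmanServer (the Server service) and MSiSCSI (the Microsoft iSCSI Initiator service). Note that sc config with depend= replaces the existing dependency list, so include any dependencies the Server service already has, and note that the space after depend= is required:

sc config LanmanServer depend= MSiSCSI

iscsicli BindPersistentVolumes

The first command makes the Server service dependent on the iSCSI Initiator service; the second adds the volumes currently exposed through the initiator to the persistent binding list, as described in the Microsoft Initiator Users Guide.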


CHAPTER 3 Virtual Provisioning

This chapter provides information about Virtual Provisioning in a Windows environment.

Note: For further information about the correct implementation of Virtual Provisioning, refer to the Symmetrix Virtual Provisioning Implementation and Best Practices Technical Note, available on Dell EMC Online Support

◆ Virtual Provisioning on Symmetrix.............................................. 70
◆ Implementation considerations................................................... 74
◆ Operating system characteristics............................................... 78


Virtual Provisioning on Symmetrix

Dell EMC Virtual Provisioning™ enables organizations to improve speed and ease of use, enhance performance, and increase capacity utilization for certain applications and workloads. Virtual Provisioning integrates with existing device management, replication, and management tools, enabling customers to easily build Virtual Provisioning into their existing storage management processes. Figure 10 shows an example of Virtual Provisioning on Symmetrix.

Virtual Provisioning, which marks a significant advancement over technologies commonly known in the industry as “thin provisioning,” adds a new dimension to tiered storage in the array, without disrupting organizational processes.

Figure 10 Virtual Provisioning on Symmetrix


Terminology

This section provides common terminology and definitions for Symmetrix and thin provisioning.

Symmetrix

Basic Symmetrix terms include:

Device: A logical unit of storage defined within an array.

Device capacity: The storage capacity of a device.

Device extent: Specifies a quantum of logically contiguous blocks of storage.

Host accessible device: A device that can be made available for host use.

Internal device: A device used for a Symmetrix internal function that cannot be made accessible to a host.

Storage pool: A collection of internal devices for some specific purpose.

Thin provisioning

Basic thin provisioning terms include:

Thin device: A host accessible device that has no storage directly associated with it.

Data device: An internal device that provides storage capacity to be used by thin devices.

Thin pool: A collection of data devices that provide storage capacity for thin devices.

Thin pool capacity: The sum of the capacities of the member data devices.

Thin pool allocated capacity: A subset of thin pool enabled capacity that has been allocated for the exclusive use of all thin devices bound to that thin pool.

Thin device user pre-allocated capacity: The initial amount of capacity that is allocated when a thin device is bound to a thin pool. This property is under user control.

Bind: Refers to the act of associating one or more thin devices with a thin pool.

Pre-provisioning: An approach sometimes used to reduce the operational impact of provisioning storage. The approach consists of satisfying provisioning operations with devices that are larger than initially needed, so that future cycles of the storage provisioning process can be deferred or avoided.


Management tools

Configuring, replicating, managing, and monitoring thin devices and thin pools involve the same tools and the same or similar functions as those used to manage traditional arrays.

Use Symmetrix Management Console or Solutions Enabler to configure and manage Virtual Provisioning.

Thin device

Symmetrix Virtual Provisioning introduces a new type of host-accessible device called a thin device that can be used in many of the same ways that regular host-accessible Symmetrix devices have traditionally been used. Unlike regular Symmetrix devices, thin devices do not need to have physical storage completely allocated at the time the devices are created and presented to a host. The physical storage that is used to supply disk space for a thin device comes from a shared thin storage pool that has been associated with the thin device.

A thin storage pool is comprised of a new type of internal Symmetrix device called a data device that is dedicated to the purpose of providing the actual physical storage used by thin devices. When they are first created, thin devices are not associated with any particular thin pool. An operation referred to as binding must be performed to associate a thin device with a thin pool.
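As an illustration only, binding a thin device to a thin pool can be performed with Solutions Enabler. The Symmetrix ID, thin device number, and pool name below are hypothetical, and the exact symconfigure syntax can vary by Solutions Enabler and Enginuity version, so verify it against the Solutions Enabler documentation; Symmetrix Management Console offers the same operation through its GUI:

symconfigure -sid 1234 -cmd "bind tdev 0ABC to pool MyThinPool;" commit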

When a write is performed to a portion of the thin device, the Symmetrix allocates a minimum allotment of physical storage from the pool and maps that storage to a region of the thin device, including the area targeted by the write. The storage allocation operations are performed in small units of storage called data device extents. A round-robin mechanism is used to balance the allocation of data device extents across all of the data devices in the pool that have remaining unused capacity.

When a read is performed on a thin device, the data being read is retrieved from the appropriate data device in the storage pool to which the thin device is bound. Reads directed to an area of a thin device that has not been mapped do not trigger allocation operations; reading an unmapped block returns a block in which each byte is equal to zero. When more storage is required to service existing or future thin devices, data devices can be added to existing thin storage pools. New thin devices can also be created and associated with existing thin pools.

It is possible for a thin device to be presented for host use before all of the reported capacity of the device has been mapped. It is also possible for the sum of the reported capacities of the thin devices using a given pool to exceed the available storage capacity of the pool. Such a thin device configuration is said to be over-subscribed.

Over-subscribed thin pool: A thin pool whose thin pool capacity is less than the sum of the reported sizes of the thin devices using the pool.

Thin device extent: The minimum quantum of storage that must be mapped at a time to a thin device.

Data device extent: The minimum quantum of storage that is allocated at a time when dedicating storage from a thin pool for use with a specific thin device.


Figure 11 Thin device and thin storage pool containing data devices

In Figure 11, as host writes to a thin device are serviced by the Symmetrix array, storage is allocated to the thin device from the data devices in the associated storage pool. The storage is allocated from the pool using a round-robin approach that tends to stripe the data devices in the pool.


Implementation considerations

When implementing Virtual Provisioning, it is important to set realistic utilization objectives. Generally, organizations should target no higher than 60 percent to 80 percent capacity utilization per pool. A buffer should be provided for unexpected growth or a "runaway" application that consumes more physical capacity than was originally planned for. There should be sufficient free space in the storage pool, equal to the capacity of the largest unallocated thin device.

Organizations also should balance growth against storage acquisition and installation time frames. It is recommended that the storage pool be expanded before the last 20 percent of the storage pool is utilized to allow for adequate striping across the existing data devices and the newly added data devices in the storage pool.

Thin devices can be deleted once they are unbound from the thin storage pool. When thin devices are unbound, the space consumed by those thin devices on the associated data devices is reclaimed.

Note: Users should first replicate the data elsewhere to ensure it remains available for use.

Data devices can also be disabled and/or removed from a storage pool. Prior to disabling a data device, all allocated tracks must be removed (by unbinding the associated thin devices). This means that all thin devices in a pool must be unbound before any data devices can be disabled.

The following information is provided in this section:

◆ “Over-subscribed thin pools” on page 74

◆ “Thin-hostile environments” on page 75

◆ “Pre-provisioning with thin devices in a thin hostile environment” on page 75

◆ “Host boot/root/swap/dump devices positioned on Symmetrix VP (tdev) devices” on page 76

◆ “Cluster configurations” on page 76

Over-subscribed thin pools

It is permissible for the amount of storage mapped to a thin device to be less than the reported size of the device. It is also permissible for the sum of the reported sizes of the thin devices using a given thin pool to exceed the total capacity of the data devices comprising the thin pool. In this case the thin pool is said to be over-subscribed. Over-subscribing allows the organization to present larger-than-needed devices to hosts and applications without having to purchase enough physical disks to fully allocate all of the space represented by the thin devices.

The capacity utilization of over-subscribed pools must be monitored to determine when space must be added to the thin pool to avoid out-of-space conditions.
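As a simple illustration with hypothetical numbers: a thin pool built from data devices totaling 10 TB that backs twenty 1 TB thin devices presents 20 TB of reported capacity, a 200 percent subscription level. Applying the 60 percent to 80 percent utilization target discussed at the beginning of this section, such a pool should be expanded once roughly 6 TB to 8 TB of the 10 TB of physical capacity has been allocated, well before the thin devices can consume the remaining headroom.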

Not all operating systems, filesystems, logical volume managers, multipathing software, and application environments are appropriate for use with over-subscribed thin pools. If the application, or any part of the software stack underlying the application, has a tendency to produce dense patterns of writes to all available storage, thin devices will tend to become fully allocated quickly. If thin devices belonging to an over-subscribed pool are used in this type of environment, out-of-space and other undesired conditions may be encountered before an administrator can take steps to add storage capacity to the thin data pool. Such environments are called thin-hostile.

Thin-hostile environments

There are a variety of factors that can contribute to making a given application environment thin-hostile, including:

◆ One step, or a combination of steps, involved in simply preparing storage for use by the application may force all of the storage that is being presented to become fully allocated.

◆ If the storage space management policies of the application and underlying software components do not tend to reuse storage that was previously used and released, the speed in which underlying thin devices become fully allocated will increase.

◆ Whether any data copy operations (including disk balancing operations and de-fragmentation operations) are carried out as part of the administration of the environment.

◆ If there are administrative operations, such as bad block detection operations or file system check commands, that perform dense patterns of writes on all reported storage.

◆ If an over-subscribed thin device configuration is used with a thin-hostile application environment, the likely result is that the capacity of the thin pool will become exhausted before the storage administrator can add capacity unless measures are taken at the host level to restrict the amount of capacity that is actually placed in control of the application.

Pre-provisioning with thin devices in a thin hostile environment

In some cases, many of the benefits of pre-provisioning with thin devices can be exploited in a thin-hostile environment. This requires that the host administrator cooperate with the storage administrator by enforcing restrictions on how much storage is placed under the control of the thin-hostile application.

For example:

◆ The storage administrator pre-provisions larger than initially needed thin devices to the hosts, but only configures the thin pools with the storage needed initially. The various steps required to create, map, and mask the devices and make the target host operating systems recognize the devices are performed.

◆ The host administrator uses a host logical volume manager to carve out portions of the devices into logical volumes to be used by the thin-hostile applications.


◆ The host administrator may want to fully preallocate the thin devices underlying these logical volumes before handing them off to the thin-hostile application so that any storage capacity shortfall will be discovered as quickly as possible, and discovery is not made by way of a failed host write.

◆ When more storage needs to be made available to the application, the host administrator extends the logical volumes out of the thin devices that have already been presented. Many databases can absorb an additional disk partition non-disruptively, as can most file systems and logical volume managers.

◆ Again, the host administrator may want to fully allocate the thin devices underlying these volumes before assigning them to the thin-hostile application.

In this example it is still necessary for the storage administrator to closely monitor the over-subscribed pools. This procedure will not work if the host administrators do not observe restrictions on how much of the storage presented is actually assigned to the application.

Host boot/root/swap/dump devices positioned on Symmetrix VP (tdev) devices

A boot/root/swap/dump device positioned on Symmetrix Virtual Provisioning (thin) devices is supported with Enginuity 5773 and later. However, some specific processes involving boot/root/swap/dump devices positioned on thin devices should not be exposed to the out-of-space condition. Host-based processes such as kernel rebuilds, swap, dump, save crash, and Volume Manager configuration operations can all be affected by the thin provisioning out-of-space condition. This exposure is not specific to Dell EMC's implementation of thin provisioning. Dell EMC strongly recommends that customers avoid encountering the out-of-space condition on boot/root/swap/dump devices positioned on Symmetrix VP (thin) devices by following these recommendations:

◆ We strongly recommend that Virtual Provisioning devices used for boot/root/dump/swap volumes be fully allocated, or that the VP devices not be over-subscribed.

If you do use an over-subscribed thin pool, take the necessary precautions to ensure that these volumes do not encounter the out-of-space condition.

◆ We do not recommend implementing space reclamation, available with Enginuity 5874 and later, with pre-allocated or over-subscribed Symmetrix VP (thin) devices that are used for host boot/root/swap/dump volumes. Although not recommended, space reclamation is supported on these types of volumes.

If you use space reclamation on such a thin device, be aware that the freed space may ultimately be claimed by other thin devices in the same pool and may not be available to that particular thin device in the future.

Cluster configurations

When using high availability in a cluster configuration, it is expected that no single point of failure exists within the cluster configuration and that one single point of failure will not result in data unavailability, data loss, or any significant application becoming unavailable within the cluster. Virtual Provisioning devices (thin devices) are supported with cluster configurations; however, over-subscription of virtual devices may constitute a single point of failure if an out-of-space condition is encountered. To avoid potential single points of failure, appropriate steps should be taken to avoid under-provisioned virtual devices implemented within high availability cluster configurations.


Operating system characteristics

Most host applications behave the same when writing to thin devices as they do when writing to regular devices, as long as the thin device written capacity is less than the thin device subscribed capacity. However, issues can arise when the application writes beyond the provisioned boundaries. With the current behavior of the Windows operating system, exhaustion of the thin pool causes undesired results. Specifics are included below:

◆ Logical Volume Manager software (SVM and VxVM)

Cannot write to any volumes that are built on the exhausted pool.

◆ Windows NTFS File System

• The host reports the error "File System is full" to the Windows system event log. The larger the data file being written to the thin device, the more "file system is full" error messages are reported.

• The data file being written contains corrupted data.

• Cannot create a file system on the exhausted pool.

• Cannot write a data file to the exhausted pool.

If the host is exposed to pre-provisioned thin devices that have not been bound to a thin pool, the host may take longer to boot.


CHAPTER 4 Windows Host Connectivity with Dell EMC VPLEX

This chapter describes Dell EMC VPLEX. Topics include:

◆ Dell EMC VPLEX.......................................................................... 80
◆ Prerequisites ............................................................................... 81
◆ Host connectivity......................................................................... 82
◆ Configuring Fibre Channel HBAs .................................................. 83
◆ Windows Failover Clustering with VPLEX ..................................... 91
◆ Setting up quorum on a Windows 2012/2012 R2 Failover Cluster for VPLEX Metro or Geo clusters.................................. 125
◆ Configuring quorum on Windows 2008/2008 R2 Failover Cluster for VPLEX Metro or Geo clusters.................................. 131


Dell EMC VPLEX

For detailed information about VPLEX, refer to the following VPLEX documentation, available at Dell EMC Online Support, for configuration and administration operations:

◆ EMC VPLEX with GeoSynchrony Product Guide

◆ EMC VPLEX with GeoSynchrony CLI Reference Guide

◆ EMC VPLEX with GeoSynchrony Security Configuration Guide

◆ EMC VPLEX Hardware Installation Guide

◆ EMC VPLEX Release Notes

◆ Implementation and Planning Best Practices for EMC VPLEX Technical Notes

◆ VPLEX online help, available on the Management Console GUI

◆ SolVe Desktop for VPLEX available at Dell EMC Online Support

◆ Dell EMC Simple Support Matrix, VPLEX and GeoSynchrony, available at Dell EMC E-Lab Navigator

For the most up-to-date support information, refer to the Dell EMC Simple Support Matrix.


Prerequisites

Before configuring VPLEX in the Windows environment, complete the following on each host:

◆ Confirm that all necessary remediation has been completed. This ensures that OS-specific patches and software on all hosts in the VPLEX environment are at supported levels according to the Dell EMC Simple Support Matrix.

◆ Confirm that each host is running VPLEX-supported failover software and has at least one available path to each VPLEX fabric.

Note: Always refer to the Dell EMC Simple Support Matrix for the most up-to-date support information and prerequisites.

◆ If a host is running PowerPath, confirm that the load-balancing and failover policy is set to Adaptive.
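If the policy needs to be changed, it can be set and verified with the PowerPath CLI. This is a minimal sketch; ad selects the adaptive policy and dev=all applies the change to all PowerPath devices, but confirm the syntax against the PowerPath CLI reference for your release:

powermt set policy=ad dev=all

powermt display dev=all

The display output shows the policy currently in effect for each device.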


Host connectivity

Consult the Implementation and Planning Best Practices for EMC VPLEX Technical Notes, available at Dell EMC Online Support, for Windows host connectivity recommendations and best practices with VPLEX configurations.

For the most up-to-date information on qualified switches, hosts, host bus adapters, and software, always consult the Dell EMC Simple Support Matrix (ESM), available through Dell EMC E-Lab Navigator (ELN), or contact your Dell EMC Customer Representative.

The latest Dell EMC-approved HBA drivers and software are available for download at the following websites:

◆ https://www.broadcom.com/

◆ http://www.QLogic.com

◆ http://www.brocade.com

The Dell EMC HBA installation and configuration guides are available at the Dell EMC-specific download pages of these websites.

Note: Direct connect from a host bus adapter to a VPLEX engine is not supported.


Configuring Fibre Channel HBAs

This section describes the Fibre Channel HBA configuration settings that must be addressed when using Fibre Channel with VPLEX.

IMPORTANT

The values provided are required and optimal for most scenarios. However, for host I/O profiles with large-block reads, the values may need to be tuned if VPLEX performance shows high front-end latency in the absence of high back-end latency and this has a visible impact on host applications. This may indicate that there are too many outstanding I/Os per port at a given time. If the recommended settings do not perform well in the environment, contact Dell EMC Support for additional recommendations.

For further information on how to monitor VPLEX performance, refer to the "Performance and Monitoring" section of the VPLEX Administration Guide. If host application(s) is seeing a performance issue with the required settings, contact Dell EMC Support for further recommendations.

Setting queue depth and execution throttle for QLogic

Note: Changing the HBA queue depth is designed for advanced users. Increasing the queue depth may cause hosts to over-stress other arrays connected to the Windows host, resulting in performance degradation while performing IO.

The execution throttle setting controls the number of outstanding I/O requests per HBA port. The HBA execution throttle should be set to the QLogic default value, which is 65535. This can be done at the HBA firmware level using the HBA BIOS or the QConvergeConsole CLI or GUI.

The queue depth setting controls the number of outstanding I/O requests per single path. On Windows, the HBA queue depth can be adjusted using the Windows Registry.

Note: When the execution throttle at the HBA level is set to a value lower than the queue depth, it may limit the effective queue depth to a value lower than the one configured.

The following procedures detail how to adjust the queue depth setting for QLogic HBAs:

◆ “Setting the queue depth for the Qlogic FC HBA” on page 83

◆ “Setting the execution throttle on the Qlogic FC HBA” on page 84

◆ “Setting the queue depth and queue target on the Emulex FC HBA” on page 89

Follow the appropriate procedure according to the HBA type. For any additional information, refer to the HBA vendor's documentation.

Setting the queue depth for the QLogic FC HBA

1. On the desktop, click Start, select Run, and open REGEDIT (the Registry Editor).


Note: Some driver versions do not create the registry value by default. In these cases, the user must create it manually.

2. Select HKEY_LOCAL_MACHINE and follow the tree structure down to the QLogic driver as shown in the following figure and double-click DriverParameter:

The Edit String dialog box displays:

3. Change the value of qd to 20. The value is set in hexadecimal; 20 is 32 in decimal.

If additional driver parameters are already set, and the string qd= does not exist, append the following text to the end of the string using a semicolon (";") to separate the new queue depth value from previous entries:

;qd=20

4. Click OK.

The registry should appear as follows:

5. Exit the Registry Editor and reboot the Windows host.
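As a command-line alternative to editing the value in the Registry Editor, the same change can be made with reg.exe. This is a sketch only: the service key name ql2300 is typical for QLogic 2300-series drivers but may differ on your system, and the command overwrites any existing DriverParameter string rather than appending to it, so check the current value first. The 20 is hexadecimal (32 decimal), as described above, and a reboot is still required:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\ql2300\Parameters\Device" /v DriverParameter /t REG_SZ /d "qd=20"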

Setting the execution throttle on the QLogic FC HBA

1. Install the QConvergeConsole GUI or CLI.

2. Follow the GUI or CLI directions outlined in this section to set the execution throttle.


Using QConvergeConsole GUI

1. Start the QConvergeConsole GUI. The GUI displays, as shown in the following figure:

2. Select one of the adapter ports in the navigation tree on the left.

3. Select Host > Parameters > Advanced HBA Parameters.

4. Set the Execution Throttle to 65535.

5. Click Save to save the settings.

6. Repeat the above steps for each port on each adapter connecting to VPLEX.

Using QConvergeConsole CLI

1. Select 2:Adapter Configuration from the main menu:


2. Select 3. HBA Parameters:

3. Select the HBA (1: Port 1: in the following example):

4. Select 2. Configure HBA Parameters:


5. Select 11. Execution Throttle:

6. Set the value to 65535:

Note: The current value is in the second set of square brackets. The first is the allowable range.


7. Verify the options:

8. Validate that the Execution Throttle is set to the expected value of 65535:

9. Repeat the above steps for each port on each adapter connecting to VPLEX.

Setting queue depth and queue target for Emulex

Note: Changing the HBA queue depth is designed for advanced users. Increasing the queue depth may cause hosts to over-stress other arrays connected to the Windows host, resulting in performance degradation while performing IO.

The queue depth setting controls the number of outstanding I/O requests per single LUN/target. On Windows, the Emulex HBA queue depth can be adjusted via the Emulex UI (OneCommand).

The queue target setting controls whether I/O depth limiting is applied per target or per LUN. If set to 0, the depth limitation is applied to individual LUNs. If set to 1, the depth limitation is applied across the entire target. On Windows, the Emulex HBA queue target can be adjusted via the Emulex UI (OneCommand).


The following procedures detail adjusting the queue depth and queue target settings for Emulex HBAs as follows:

◆ Set the Emulex HBA adapter queue depth in Windows to 32.

◆ Set the Emulex HBA adapter queue target in Windows to 0.

Note: This means 32 outstanding IOs per ITL, so if a host has 4 paths then there are 32 outstanding IOs per path, resulting in a total of 128.

Follow the appropriate procedure according to the HBA type. For any additional information please refer to the HBA vendor's documentation.

Setting the queue depth and queue target on the Emulex FC HBA

Setting the queue depth on the Emulex FC HBA is done using the Emulex UI (OneCommand). OneCommand detects the active Emulex driver and enables changing the relevant driver parameters, specifically queue depth.

Note: Setting the queue-depth per this procedure is not disruptive.

To set the queue depth and queue target on the Emulex FC HBA:

1. Install OneCommand.

2. Launch the OneCommand UI.

3. Select the relevant host name from the Hosts list.

4. Expand the HBA in the navigational tree and select the HBA port.

5. Select the Driver Parameters tab.


6. In the Adapter Parameter list, locate the QueueDepth parameter and set its value to 32.

7. In the same list, locate the QueueTarget parameter and set its value to 0.

8. Click Apply.

9. Repeat the above steps for each port on the host that has VPLEX Storage exposed.


Windows Failover Clustering with VPLEX

Microsoft strongly recommends that before you create a failover cluster, you validate your configuration; that is, run all tests in the Validate a Configuration Wizard. By running these tests, you can confirm that your hardware and settings are compatible with failover clustering.

IMPORTANT

With Windows Server 2012 or Windows Server 2012 R2, cluster validation storage tests may not discover VPLEX distributed devices when servers are geographically dispersed and configured on different VPLEX sites.

The reason is that a storage validation test selects only shared LUNs. A LUN is determined to be shared if its disk signatures, device identification number (page 0x83), and storage array serial number are the same on all cluster nodes.

When you have site-to-site mirroring configured (VPLEX distributed device), a LUN in one site (site A) has a mirrored LUN in another site (site B). These LUNs have the same disk signatures and device identification number (page 0x83), but the VPLEX storage array serial numbers are different. Therefore, they are not recognized as shared LUNs.

The following is an example of what is reported in the cluster validation logs:

Cluster validation message: List Potential Cluster Disks

Description: List disks that will be validated for cluster compatibility. Clustered disks which are online at any node will be excluded.

Start: 11/17/2013 5:59:01 PM.

• Physical disk 84d2b21a is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at the following node:

WNH6-H5.elabqual.emc.com

• Physical disk 6f473a9f is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at the following node:

WNH6-H13.elabqual.emc.com

To resolve the issue, run all the cluster validation tests before you configure distributed devices to the geographically dispersed servers.

Note: If the validation test is needed later for support situations, LUNs that are not selected for storage validation tests are still supported by Microsoft and Dell EMC as shared LUNs (distributed devices).

For more information, refer to the Microsoft KB article, Storage tests on a failover cluster may not discover all shared LUNs.
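If you prefer to run the validation from PowerShell rather than from the wizard, the FailoverClusters module provides an equivalent cmdlet. The node names below are the example nodes from the validation log above; substitute your own:

Test-Cluster -Node WNH6-H5, WNH6-H13

Run the validation before the distributed devices are configured to the geographically dispersed servers, as described above.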


Setting up quorum on a Windows 2012/2012 R2 Failover Cluster for VPLEX Metro or Geo clusters

The recommended Windows Failover Clustering Quorum for stretched or cross connected clusters is the File Share Witness. To set up the File Share Witness quorum, complete the following steps.

1. In Failover Cluster Manager, select the cluster and from the drop-down menu select More Actions > Configure Cluster Quorum Settings, as follows:

The Configure Cluster Quorum Wizard displays:

2. Click Next.


The Select Quorum Configuration Option window displays:

3. Click the Select the quorum witness option and click Next.

The Select Quorum Witness window displays:

4. Select Configure a file share witness and click Next.


The Configure a File Share Witness window displays:

For the server hosting the file share, follow these requirements and recommendations:

• Must have a minimum of 5 MB of free space

• Must be dedicated to the single cluster and not used to store user or application data

• Must have write permissions enabled for the computer object for the cluster name

The following are additional considerations for a file server that hosts the file share witness:

• A single file server can be configured with file share witnesses for multiple clusters.

• The file server must be on a site that is separate from the cluster workload. This enables equal opportunity for any cluster site to survive if site-to-site network communication is lost. If the file server is on the same site, that site becomes the primary site, and it is the only site that can reach the file share.

• The file server can run on a virtual machine if the virtual machine is not hosted on the same cluster that uses the file share witness.

• For high availability, the file server can be configured on a separate failover cluster.

5. Click Next.


The Confirmation screen displays:

6. Verify the settings and click Next.

The Summary window displays:

7. You can view this report or click Finish to complete the file share witness configuration.
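The same file share witness configuration can also be applied from PowerShell on Windows Server 2012/2012 R2. The share path below is a placeholder for your own witness share:

Set-ClusterQuorum -FileShareWitness \\fileserver\fsw-share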


Configuring quorum on Windows 2008/2008 R2 Failover Cluster for VPLEX Metro or Geo clusters

This section contains the following information:

◆ “VPLEX Metro or Geo cluster configuration” on page 96

◆ “Prerequisites” on page 97

◆ “Setting up quorum on a Windows 2008/2008R2 Failover Cluster for VPLEX Metro or Geo clusters” on page 97

VPLEX Metro or Geo cluster configuration

Two VPLEX Metro clusters, connected within metro (synchronous) distances of approximately 60 miles (100 kilometers), form a Metro-Plex cluster. Figure 12 shows an example of a VPLEX Metro cluster configuration. VPLEX Geo cluster configuration is the same and adds the ability to dynamically move applications and data across different compute and storage installations across even greater distances.

Figure 12 VPLEX Metro cluster configuration example

Note: All connections shown in Figure 12 are Fibre Channel, except the network connections, as noted.

The environment in Figure 12 consists of the following:

◆ Node-1 – Windows 2008 or Windows 2008 R2 Server connected to the VPLEX instance over Fibre Channel.

◆ Node-2 – Windows 2008 or Windows 2008 R2 Server connected to the VPLEX instance over Fibre Channel.

◆ VPLEX instance – One or more VPLEX engines with a connection through the L2 switch to back-end and front-end devices.

Prerequisites

Ensure the following before configuring the VPLEX Metro or Geo cluster:

◆ VPLEX firmware is installed properly and the minimum configuration is created.

◆ All volumes to be used during the cluster test should have multiple back-end and front-end paths.

Note: Refer to the Implementation and Planning Best Practices for EMC VPLEX Technical Notes, available on Dell EMC Online Support, for best practices for the number of paths for back-end and front-end paths.

◆ All hosts/servers/nodes of the same configuration, version, and service pack of the operating system are installed.

◆ All nodes are part of the same domain and are able to communicate with each other before installing Windows Failover Clustering.

◆ One free IP address is available for cluster IP in the network.

◆ PowerPath or MPIO is installed and enabled on all the cluster hosts.

◆ The hosts are registered in the appropriate storage view and are visible to VPLEX.

◆ All volumes to be used during cluster test should be shared by all nodes and accessible from all nodes.

◆ A network fileshare is required for cluster quorum.

Setting up quorum on a Windows 2008/2008R2 Failover Cluster for VPLEX Metro or Geo clusters

To set up a quorum on VPLEX Metro or Geo clusters for Windows Failover Cluster, complete the following steps.

1. Select the quorum settings. In the Failover Cluster Manager, right-click on the cluster name and select More Actions > Configure Cluster Quorum Settings > Node and File Share Majority:


The Node and File Share Majority model is recommended for VPLEX Metro and Geo environments.

2. In the Configure Cluster Quorum Wizard, click Next:


3. In the Select Quorum Configuration window, ensure that the Node and File Share Majority radio button is selected, and then click Next:


4. In the Configure File Share Witness window, ensure that the \\sharedfolder from any Windows host in a domain other than the configured Windows Failover Cluster nodes is in the Shared Folder Path, and then click Next:

5. In the Confirmation window, click Next to confirm the details:


6. In the Summary window, go to the Failover Cluster Manager and verify that the quorum configuration is set to \\sharedfolder:

7. Click Finish.
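On Windows Server 2008 R2, the Node and File Share Majority quorum can also be configured from PowerShell with the FailoverClusters module (the module is not available on Windows Server 2008 RTM, where the Failover Cluster Manager procedure above must be used). The share path is a placeholder for your own file share:

Import-Module FailoverClusters

Set-ClusterQuorum -NodeAndFileShareMajority \\fileserver\fsw-share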


CHAPTER 5 Dell EMC PowerPath for Windows

This section provides information about PowerPath for Windows.

◆ PowerPath and PowerPath iSCSI.................................................. 104
◆ PowerPath for Windows .............................................................. 105
◆ PowerPath verification and problem determination ...................... 108


PowerPath and PowerPath iSCSI

PowerPath for Windows software is available in two different packages: PowerPath for Windows and PowerPath iSCSI for Windows. It is important to know the differences between the two packages before deploying the software:

PowerPath for Windows

PowerPath for Windows supports both Fibre Channel and iSCSI environments, and is Microsoft digitally certified only for Fibre Channel environments. PowerPath for Windows supports failover path management and load-balancing for up to 32 paths in heterogeneous storage environments. PowerPath for Windows is not currently supported by Microsoft for iSCSI implementations, although it is supported by Dell EMC for Dell EMC iSCSI storage systems.

PowerPath iSCSI for Windows

PowerPath iSCSI for Windows supports EMC VNX series and CLARiiON iSCSI storage systems, is Microsoft digitally certified, and is built on the Microsoft MPIO framework. PowerPath iSCSI for Windows supports failover path management for up to 8 paths in iSCSI storage environments.


PowerPath for Windows

The following information is included in this section:

◆ “PowerPath and MSCS” on page 105

◆ “Integrating PowerPath into an existing MSCS cluster” on page 105

PowerPath and MSCS

If you are installing PowerPath and MSCS for the first time, install PowerPath first, and then install MSCS. Installing PowerPath first avoids having to disrupt cluster services at a later time.

Integrating PowerPath into an existing MSCS cluster

You can integrate PowerPath into an existing MSCS cluster without shutting down the cluster, if there is close coordination between the nodes and the storage system. Each node in a cluster can own a distinct set of resources. Node A is the primary node for its resources and the failover node for Node B’s resources. Conversely, Node B is the primary node for its resources and the failover node for Node A’s resources.

If, after installing PowerPath on the cluster, you test node failover by disconnecting all cables for a LUN or otherwise disrupting the path between the active host and the array, Windows logs event messages indicating hardware or network failure and possible data loss. If working correctly, the cluster will fail over to a node with an active path, and you can ignore the messages from the original node logged in the event log. (Check the application generating I/O to see whether there are any failures. If there are none, everything is working normally.)

Installing PowerPath in a clustered environment requires the following steps:

◆ Move all resources to Node A

◆ Install PowerPath on Node B

◆ Configure additional paths between storage array and Node B

◆ Move all resources to Node B

◆ Install PowerPath on Node A

◆ Configure additional paths between storage array and Node A

◆ Return Node A’s resources back to Node A

Moving resources to Node A

To move all resources to Node A:

1. Start the MSCS Cluster Administrator utility, select Start, Programs, Administrative Tools, Cluster Administrator.

2. In the left pane of the window, select all groups owned by Node B.

3. To move the resources to Node A, select File, Move Group. Alternatively, select Move Group by right-clicking all group names under Groups in the left pane.

4. To pause Node B, click Node B and select File, Pause Node. This keeps the node from participating in the cluster during PowerPath installation.


Installing PowerPath onto Node B

To install PowerPath onto Node B:

1. Install PowerPath.

2. Shut down Node B. In a cluster with greater than two nodes, install PowerPath on these other nodes.

For example, in a four-node cluster, replace Node B with Nodes B, C, and D in step 4 of the previous section, “Moving resources to Node A,” and also in steps 1 and 2, above.

Configuring additional paths between the storage system and Node B

To configure additional paths:

1. If necessary, reconfigure the storage system so its logical devices appear on multiple ports.

2. If necessary, install additional HBAs on Node B.

3. Connect cables for new paths between Node B and the storage system.

4. Power on Node B.

5. To resume Node B, click Node B and select File, Resume Node.

In a cluster with greater than two nodes, configure additional paths between the storage system and these other nodes. For example, in a four-node cluster, replace Node B with Nodes B, C, and D in steps 2, 3, 4, and 5 above.

Moving resources to Node B

To move all resources to Node B:

1. In the left pane of the Cluster Administrator window, select all groups.

2. To move the resources to Node B, select File, Move Group.

In a cluster with more than two nodes, move all resources to any of the remaining nodes. For example, in a four-node cluster, replace Node B with any combination of Nodes B, C, and D to which you want to move resources: you could move resources to Nodes B and C, to B, C, and D, or to any other combination of those nodes.

3. To pause Node A, click Node A and select File, Pause Node.

Installing PowerPath onto Node A

To install PowerPath onto Node A:

1. Install PowerPath.

2. Shut down Node A.

Configuring additional paths between the storage system and Node A

To configure additional paths:

1. If necessary, configure the storage system so its logical devices appear on multiple ports.

2. If necessary, install additional HBAs on Node A.

3. Connect cables for new paths between Node A and the storage system.

4. Power on Node A.

5. To resume Node A, click Node A and select File, Resume Node.


Returning Node A’s resources to Node A

To return Node A’s resources:

1. Using the MSCS Cluster Administrator utility, select all groups previously owned by Node A.

2. To move the resources back to Node A, select File, Move Group.


PowerPath verification and problem determination

The following section assumes that PowerPath has been installed properly. Refer to the appropriate PowerPath Installation and Administration Guide on Dell EMC Online Support for instructions on how to install PowerPath. This section helps you verify that PowerPath was installed correctly and recognize some common failure points.

Click the circled icon shown in Figure 13 to access the PowerPath Administration GUI.

Figure 13 PowerPath Administration icon

Figure 14 shows the PowerPath Monitor taskbar icons and the status each represents.

Figure 14 PowerPath Monitor Taskbar icons and status


Figure 15 shows what PowerPath Administrator looks like when PowerPath is installed correctly. Notice that in this case there is a single path zoned between the HBA and one port on the storage device.

Figure 15 One path


When multiple paths are zoned to your storage device, PowerPath Administrator looks like Figure 16:

Figure 16 Multiple paths

Problem determination

Using PowerPath Administrator can simplify determining the cause of a loss of connectivity to the storage devices. Offline array ports, defective HBAs, and broken paths each show up in the administrator GUI in distinct ways.
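The same states can also be checked from the command line, which is useful on hosts where the GUI is not convenient. Assuming the powermt CLI is installed, a sketch like the following shows path health per adapter and per device:

powermt display           # summary of HBAs and array ports with live/dead path counts
powermt display dev=all   # per-device path listing; failed paths are reported as dead
powermt restore           # re-test dead paths and restore any that have recovered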

Table 1 on page 111 shows the known possible failure states. Referencing this table can greatly reduce problem determination time.


Table 1 Possible failure states


Examples of some failures follow:

An error with an array port, or the path leading to the array port, is displayed in Figure 17. This is symbolized by the red X through one of the array ports. Notice that while the array port is down, access to the disk device is still available; degraded access is noted by a red slash.

Figure 17 Error with an Array port


Figure 18 shows the result of a problem with one of the HBAs or the path leading to the HBA. The failed HBA/path is marked with a red X. Again, notice that access to the disk devices, while degraded, still exists.

Figure 18 Failed HBA path

Making changes to your environment

You must reconfigure PowerPath after making configuration changes that affect host-to-storage-system connectivity or logical device identification, for example:

◆ Fibre Channel switch zone changes

◆ Adding or removing Fibre Channel switches

◆ Adding or removing HBAs or storage-system ports

◆ Adding or removing logical devices

In most cases, PowerPath detects changes to your environment automatically. Depending on the type of HBA you are using, you may have to scan for new devices in Device Manager. In some cases, depending on the operating system version you are running, you may also need to reboot the system.
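As a rough sketch of that sequence, the following can be run from an elevated PowerShell prompt after a zoning or LUN change; it rescans the SCSI bus with diskpart and then asks PowerPath to reconfigure (the temporary script path is arbitrary):

# Rescan for added or removed devices, then let PowerPath pick up the change
"rescan" | Out-File -Encoding ascii "$env:TEMP\rescan.txt"
diskpart /s "$env:TEMP\rescan.txt"
powermt config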

PowerPath messages

For a complete list of PowerPath messages and their meanings, refer to the "PowerPath Messages" chapter of the PowerPath Product Guide for the version of PowerPath you are running.


CHAPTER 6 Microsoft Native MPIO and Hyper-V

This chapter provides information about using Microsoft’s Native Multipath I/O (MPIO) with Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012.

◆ Native MPIO with Windows Server 2008/Windows Server 2008 R2 ................... 116

◆ Native MPIO with Windows Server 2012 .......................................... 120

◆ Known issues ................................................................... 125

◆ Hyper-V ........................................................................ 126


Native MPIO with Windows Server 2008/Windows Server 2008 R2

This section includes the following information:

◆ “Support for Native MPIO in Windows Server 2008 and Windows Server 2008 R2” on page 116

◆ “Configuring Native MPIO for Windows 2008 Server Core and Windows 2008 R2 Server Core” on page 116

Support for Native MPIO in Windows Server 2008 and Windows Server 2008 R2

Windows Server 2008 and Windows Server 2008 R2 include native multipathing (MPIO) support as a feature of the OS.

Native MPIO is supported with all Dell EMC storage arrays.

Note the following:

◆ For Windows Server 2008 Core and Windows 2008 R2 Server Core installations, Native MPIO is failover only. There are no load balancing options available in the default DSM for Dell EMC storage arrays.

◆ Default Microsoft MPIO Timer Counters are supported.

◆ Hosts running Windows Server 2008 and Windows Server 2008 R2 must be manually configured so that the initiators are registered using failover mode 4 [ALUA].

◆ CLARiiON systems need to be on FLARE 26 or above to support Native MPIO.

◆ FLARE R30 is the minimum version supported with CX4 series systems.

◆ VNX OE for Block v31 is the minimum version supported with VNX series systems.

Configuring Native MPIO for Windows 2008 Server Core and Windows 2008 R2 Server Core

For Windows 2008 Server Core and Windows 2008 R2 Server Core, use the procedure described in “Enabling Native MPIO on Windows Server 2008 Server Core and Windows Server 2008 R2 Server Core” on page 119.

Note: Refer to Microsoft documentation for installing the Microsoft Multipath I/O feature.

Native MPIO must be configured to manage VPLEX, Symmetrix DMX, VNX series, and CLARiiON systems. Open Control Panel, then the MPIO applet.

The claiming of array/device families can be done in one of two ways as described in “Method 1,” next, and in “Method 2” on page 117.

Method 1 Manually enter the vendor and product IDs of the arrays for native MPIO to claim and manage.


Note: This may be the preferred method if all arrays are not initially connected during configuration and subsequent reboots are to be avoided.

To manually enter the array vendor and product ID information:

1. Use the MPIO-ed Devices tab in the MPIO Properties control panel applet.

2. Select Add and enter the vendor and product IDs of the array devices to be claimed by native MPIO.

The vendor ID must be entered as a string of eight characters (padded with trailing spaces) and followed by the product ID entered as a string of sixteen characters (padded with trailing spaces).

For example, to claim a VNX series and CLARiiON RAID 1 device in MPIO, the string would be entered as

DGC*****RAID*1**********

where each asterisk represents a space.
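The padding rule is easy to get wrong by hand. As a small illustrative PowerShell sketch (not a Dell EMC tool), the following builds the 24-character string for the same VNX series and CLARiiON RAID 1 example:

$vendor  = "DGC".PadRight(8)        # vendor ID padded to 8 characters with trailing spaces
$product = "RAID 1".PadRight(16)    # product ID padded to 16 characters with trailing spaces
$vendor + $product                  # 24-character device hardware ID to enter in the applet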

The vendor and product IDs vary based on the type of array and device presented to the host, as shown in Table 2.

Method 2 Use the MPIO applet to discover, claim, and manage the arrays already connected during configuration.

Note: This may be the preferred method if ease-of-use is required and subsequent reboots are acceptable when each array is connected.

Table 2 Array and device types

Array type                                      LUN type            Vendor ID  Product ID
----------------------------------------------  ------------------  ---------  ----------
VPLEX VS1/VS2                                   Any                 EMC        Invista

DMX, DMX-2, DMX-3, DMX-4, VMAX 40K,             Any                 EMC        SYMMETRIX
VMAX 20K/VMAX, VMAX 10K (Systems with
SN xxx987xxxx), VMAX 10K (Systems with
SN xxx959xxxx), and VMAXe

CX300, CX500, CX700, all CX3-based arrays,      JBOD (single disk)  DGC        DISK
AX4 series, CX4 series, CX3 series,             RAID 0              DGC        RAID 0
VNX series, and CLARiiON Virtual                RAID 1              DGC        RAID 1
Provisioning                                    RAID 3              DGC        RAID 3
                                                RAID 5              DGC        RAID 5
                                                RAID 6              DGC        VRAID
                                                RAID 1/0            DGC        RAID 10


IMPORTANT

MPIO limits the number of paths per LUN to 32. Exceeding this number will result in the host crashing with a Blue Screen stop message. Do not exceed 32 paths per LUN when configuring MPIO on your system.

Automatic discovery is configured using the Discover Multi-Paths tab of the MPIO Properties control panel applet. Note that only arrays which are connected with at least two logical paths will be listed as available to be added in this tab, as follows:

◆ Devices from VNX OE for Block v31 and CLARiiON systems (running FLARE R26 or greater, configured for failover mode 4 [ALUA]) will be listed in the SPC-3 compliant section of the applet

◆ Devices from DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K (Systems with SN xxx987xxxx), VMAX 10K (Systems with SN xxx959xxxx), and VMAXe arrays will be listed in the Others section of the applet

◆ Devices from VPLEX arrays will be listed in the Others section of the applet

Select the array / device types to be claimed and managed by MPIO by selecting the Device Hardware ID, and clicking the Add button.

Note: The OS will prompt you to reboot for each device type added. A single reboot will suffice after multiple device types are added.

Path management in Multipath I/O for VPLEX, Symmetrix DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K (Systems with SN xxx987xxxx), VMAX 10K (Systems with SN xxx959xxxx), and VMAXe, VNX series, and CLARiiON systems

Following reboot, after all device types have been claimed by MPIO, each VPLEX-based, Symmetrix DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K (systems with SN xxx987xxxx), VMAX 10K (Systems with SN xxx959xxxx), and VMAXe-based, VNX series-based, and CLARiiON-based disk will be shown in Device Manager as a Multi-Path Disk Device. When managed by MPIO, a new tab, named MPIO, will be available under Properties of the selected disk device. Under the MPIO tab, the number of logical paths configured between the host and array should be reported.

The default Load Balance Policy (as reported in the MPIO tab) for each disk device depends upon the type of disk device presented:

◆ In Windows Server 2008, DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K (systems with SN xxx987xxxx), VMAX 10K (Systems with SN xxx959xxxx), and VMAXe devices will report a default Load Balance Policy of “Fail Over Only”, where the first reported path is listed as “Active/Optimized” and all other paths are listed as “Standby.” In Windows Server 2008 R2, the same devices will report a default Load Balance Policy of "Round Robin," where all the paths are listed as "Active/Optimized." The default policy can be overridden by changing the Load Balance Policy to any available policy. See the Windows Server 2008 and Windows Server 2008 R2 documentation for a detailed description of available Load Balance Policies.

For DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K (systems with SN xxx987xxxx), VMAX 10K (Systems with SN xxx959xxxx), and VMAXe array devices attached to the host with a default Load Balance Policy of “Fail Over Only”, the policy can likewise be overridden by changing the Load Balance Policy to any available policy. See the Windows Server 2008 and Windows Server 2008 R2 documentation for a detailed description of available Load Balance Policies. Note that the default Load Balance Policy cannot be changed globally for all disk devices; the change must be made on a per-disk-device basis.

◆ VNX series and CLARiiON devices will report a default Load Balance Policy of “Round Robin With Subset”, where all paths to the SP owning the device are reported as “Active/Optimized” and all paths to the SP not owning the LUN are reported as “Active/Unoptimized”.

VNX series and CLARiiON devices attached to the host in ALUA mode (as is required when using native MPIO) report the path state, which is used directly by the host running native MPIO and cannot be overridden by changing the Load Balance Policy.

◆ VPLEX devices will report a default Load Balance Policy as "Round Robin" with all active paths as "Active/Optimized". The default policy can be overridden by changing the Load Balance Policy to any available, except "Fail Over Only". See the Windows Server 2008 and Windows Server 2008 R2 documentation for a detailed description of available Load Balance policies.

Note: The default Load Balance Policy cannot be changed globally for all disk devices. The change must be done on a per-disk device basis.

Enabling Native MPIO on Windows Server 2008 Server Core and Windows Server 2008 R2 Server Core

MPIO and other features must be enabled from the command line because Windows Server 2008 Server Core and Windows Server 2008 R2 Server Core are minimal installations that do not have traditional GUI interfaces. Refer to http://technet.microsoft.com for more information on Windows Server Core installations.

To enable the native MPIO feature from the command line, type:

start /w ocsetup MultipathIo

After the system reboots, you can manage MPIO with the mpiocpl.exe utility. From the command prompt, type:

mpiocpl.exe

The MPIO Properties window displays. From here, arrays/devices can be claimed and managed as described in the section above for standard Windows installations.
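On Server Core you can also claim devices without the applet by using the mpclaim utility included with the MPIO feature (Windows Server 2008 R2 and later). The following is a hedged sketch; the flags shown are as commonly documented, so verify them against the utility’s built-in usage output, and note that claiming with -r triggers a reboot:

mpclaim -e                                   # list device hardware IDs visible on two or more paths
mpclaim -r -i -d "DGC     RAID 1          "  # claim the padded vendor/product ID and reboot
mpclaim -s -d                                # after reboot, list the disks now managed by MPIO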

For more information on Microsoft Native MPIO, refer to http://www.microsoft.com and http://technet.microsoft.com.


Native MPIO with Windows Server 2012

This section includes the following information:

◆ “Support for Native MPIO in Windows Server 2012” on page 120

◆ “Configuring Native MPIO for Windows Server 2012” on page 120

Support for Native MPIO in Windows Server 2012

Windows Server 2012 includes native multipathing (MPIO) support as a feature of the OS.

Native MPIO is supported with Dell EMC CX4 Series, DMX-4, and VMAX storage array models.

Note the following:

◆ To use the Microsoft default DSM, storage must be compliant with SCSI Primary Commands-3 (SPC-3).

◆ Default Microsoft MPIO Timer Counters are supported.

◆ Hosts running Windows Server 2012 must be manually configured so that the initiators are registered using failover mode 4 [ALUA].

◆ CLARiiON systems need to be on FLARE 30 or above to support Native MPIO.

◆ VNX OE for Block v31 is the minimum version supported with VNX series systems.

Configuring Native MPIO for Windows Server 2012

This section explains how to configure native MPIO for Dell EMC storage arrays. Native MPIO is installed as an optional feature of Windows Server 2012.

Note: Refer to Microsoft documentation for installing the Microsoft Multipath I/O feature.

Configuring MPIO and installing DSM

When MPIO is installed, the Microsoft device-specific module (DSM) is also installed, as well as an MPIO control panel. The control panel can be used to do the following:

◆ Configure MPIO functionality

◆ Install additional storage DSMs

◆ Create MPIO configuration reports

Opening the MPIO control panel

Open the MPIO control panel either by using the Windows Server 2012 control panel or by using Administrative Tools.

To open the MPIO control panel using the Windows Server 2012 control panel, complete the following steps:

1. On the Windows Server 2012 desktop, move your mouse to the lower left corner and click Start.


2. Click MPIO.

To open the MPIO control panel using Administrative Tools, complete the following steps:

1. On the Windows Server 2012 desktop, move your mouse to the lower left corner and click Start.

2. Point to Administrative Tools and click MPIO.

The MPIO control panel opens to the Properties dialog box.

Note: To access the MPIO control panel on Server Core installations, open a command prompt and type MPIOCPL.EXE.

Once installed, native MPIO must be configured to manage VPLEX, Symmetrix DMX, VNX series, and CLARiiON systems. Open Control Panel, then the MPIO applet.
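On Windows Server 2012, the Multipath I/O feature can also be added from PowerShell instead of Server Manager; a minimal sketch follows (the feature name Multipath-IO is the standard one, but confirm it with Get-WindowsFeature on your system):

Get-WindowsFeature -Name Multipath-IO        # check whether the feature is already installed
Install-WindowsFeature -Name Multipath-IO    # install the Multipath I/O feature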

Device discovery and claiming devices for MPIO

IMPORTANT

MPIO limits the number of paths per LUN to 32. Exceeding this number will result in the host crashing with a Blue Screen stop message. Do not exceed 32 paths per LUN when configuring MPIO on your system.

Automatic discovery is configured using the Discover Multi-Paths tab of the MPIO Properties control panel applet. Note that only arrays which are connected with at least two logical paths will be listed as available to be added in this tab, as follows:

◆ Devices from VNX OE for Block v31 and CLARiiON systems (running FLARE 30 or greater, configured for failover mode 4 [ALUA]) will be listed in the SPC-3 compliant section of the applet.


◆ Devices from DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K (Systems with SN xxx987xxxx), VMAX 10K (systems with SN xxx959xxxx), and VMAXe arrays will be listed in the Others section of the applet.

◆ Devices from VPLEX arrays will be listed in the Others section of the applet.


Select the array and device types to be claimed and managed by MPIO by selecting the Device Hardware ID, and clicking Add.

Figure 19 MPIO Properties dialog box

Note: The OS will prompt you to reboot for each device type added. A single reboot will suffice after multiple device types are added.
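Windows Server 2012 also ships MPIO PowerShell cmdlets that mirror the applet, which is convenient for scripting or Server Core. The following sketch claims an example DGC device ID; the vendor/product values are illustrative, and a reboot may still be requested after claiming:

Import-Module MPIO                                        # cmdlets installed with the MPIO feature
Get-MPIOAvailableHW                                       # device IDs seen on two or more paths, not yet claimed
New-MSDSMSupportedHW -VendorId "DGC" -ProductId "VRAID"   # claim a device hardware ID (example values)
Get-MSDSMSupportedHW                                      # list the IDs currently claimed by the Microsoft DSM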

Path management in Multipath I/O for VPLEX, Symmetrix DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K (Systems with SN xxx987xxxx), VMAX 10K (Systems with SN xxx959xxxx), and VMAXe, VNX series, and CLARiiON systems

Following reboot, after all device types have been claimed by MPIO, each VPLEX-based, Symmetrix DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K (systems with SN xxx987xxxx), VMAX 10K (Systems with SN xxx959xxxx), and VMAXe-based, VNX series-based, and CLARiiON-based disk will be shown in Device Manager as a Multi-Path Disk Device.

When managed by MPIO, a new tab, MPIO, will be available under Properties of the selected disk device. Under the MPIO tab, the number of logical paths configured between the host and array should be reported.

This tab will also allow you to change the MPIO load balancing policy for a disk device.

Note: Some load balancing policies may not be available for specific array disk types. For example, the Round Robin policy is not available for VNX disk devices, but Round Robin with Subset is.


Options available for load balancing policies are as follows:

◆ Fail Over Only — Policy that does not perform load balancing. This policy uses a single active path, and the rest of the paths are standby paths. The active path is used for sending all I/O. If the active path fails, then one of the standby paths is used. When the path that failed is reactivated or reconnected, the standby path that was activated returns to standby.

◆ Round Robin — Load balancing policy that allows the Device Specific Module (DSM) to use all available paths for MPIO in a balanced way. This is the default policy that is chosen when the storage controller follows the active-active model and the management application does not specifically choose a load-balancing policy.

◆ Round Robin with Subset — Load balancing policy that allows the application to specify a set of paths to be used in a round robin fashion, and with a set of standby paths. The DSM uses paths from a primary pool of paths for processing requests as long as at least one of the paths is available. The DSM uses a standby path only when all the primary paths fail. For example, given 4 paths: A, B, C, and D, paths A, B, and C are listed as primary paths and D is the standby path. The DSM chooses a path from A, B, and C in round robin fashion as long as at least one of them is available. If all three paths fail, the DSM uses D, the standby path. If paths A, B, or C become available, the DSM stops using path D and switches to the available paths among A, B, and C.

◆ Least Queue Depth — Load balancing policy that sends I/O down the path with the fewest currently outstanding I/O requests. For example, consider that there is one I/O that is sent to LUN 1 on Path 1, and the other I/O is sent to LUN 2 on Path 1. The cumulative outstanding I/O on Path 1 is 2, and on Path 2, it is 0. Therefore, the next I/O for either LUN will process on Path 2.

◆ Weighted Paths — Load balancing policy that assigns a weight to each path. The weight indicates the relative priority of a given path. The larger the number, the lower ranked the priority. The DSM chooses the least-weighted path from among the available paths.

◆ Least Blocks — Load balancing policy that sends I/O down the path with the least number of data blocks currently being processed. For example, consider that there are two I/Os: one is 10 bytes and the other is 20 bytes. Both are in process on Path 1, and both have completed on Path 2. The cumulative outstanding amount of I/O on Path 1 is 30 bytes. On Path 2, it is 0. Therefore, the next I/O will process on Path 2.

The default Load Balance Policy (as reported in the MPIO tab) for each disk device depends upon the type of disk device presented:


◆ In Windows Server 2012, DMX, VMAX 40K, VMAX 20K/VMAX, VMAX 10K (systems with SN xxx987xxxx), VMAX 10K (systems with SN xxx959xxxx), and VMAXe devices will report a default Load Balance Policy as "Round Robin," where all the paths are listed as "Active/Optimized."

◆ VNX series and CLARiiON devices will report a default Load Balance Policy of “Round Robin With Subset”, where all paths to the SP owning the device are reported as “Active/Optimized” and all paths to the SP not owning the LUN are reported as “Active/Unoptimized”.

VNX series and CLARiiON devices attached to the host in ALUA mode (as is required when using native MPIO) report the path state, which is used directly by the host running native MPIO and cannot be overridden by changing the Load Balance Policy.

◆ VPLEX devices will report a default Load Balance Policy as "Round Robin" with all active paths as "Active/Optimized."

Load balancing policies should be changed based on your particular environment. In most cases, the default policy will be suitable for your I/O load needs. However, some environments may require a change to the load balancing policy to improve performance or better spread I/O load across storage front-end ports. Dell EMC does not require a specific load balancing policy for any environment, and our customers are free to make changes to their load balancing policies as they see fit to meet their environment's needs.
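If you do decide to change the policy, it can be done per disk from the MPIO tab described above, or from the command line. The following is a hedged sketch for Windows Server 2012; the mpclaim disk number and policy code are examples (4 corresponds to Least Queue Depth in the tool's usage output), so check them on your host before applying:

Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD   # default policy for newly claimed devices (Least Queue Depth)
Get-MSDSMGlobalDefaultLoadBalancePolicy               # confirm the global default
mpclaim -s -d                                         # list MPIO disk numbers
mpclaim -l -d 2 4                                     # set disk 2 to policy 4 (Least Queue Depth), example values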

For more information on Microsoft Native MPIO, refer to http://www.microsoft.com and http://technet.microsoft.com.


Known issues

The following are known issues:

◆ When a Windows 2008 host with Native MPIO managing VNX series and CLARiiON systems boots, MPIO will move all CLARiiON LUNs to a single Storage Processor on the VNX series and CLARiiON system.

◆ Windows 2008 Native MPIO does not auto-restore a VNX series and CLARiiON LUN to its default Storage Processor after any type of fault is repaired. For example, after a non-disruptive upgrade of VNX series and CLARiiON software, all VNX series and CLARiiON LUNs will be owned on a single Storage Processor.

• To address this behavior, the VNX series and CLARiiON management software (Unisphere/Navisphere Manager or Navisphere Secure CLI) can be used to manually restore LUNs to their default storage processor (a command-line sketch appears after the note below).

• Also, a VNX series and CLARiiON LUN can be assigned a Load Balance Policy of Fail Over Only with the Preferred box selected on a path connected to the default storage processor. Native MPIO will attempt to keep the preferred path as active/optimized and will use that path for IO.

Only this single, preferred path will be used for IO; there is failover, but no multipathing, under this Load Balance Policy. If the preferred path fails, Native MPIO will select an alternate, healthy path for IO.

IMPORTANT

The implications of doing this should be clearly understood. There will be no multipathing to this LUN if the above method is implemented – only failover.
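As a rough sketch of the first workaround, restoring LUNs to their default storage processor can be scripted with Navisphere Secure CLI; the command below is as we recall it, so confirm the exact syntax and credential options against the Unisphere/Navisphere CLI documentation for your release:

naviseccli -h <SPA-address> trespass mine   # SP A reclaims the LUNs it owns by default
naviseccli -h <SPB-address> trespass mine   # SP B reclaims the LUNs it owns by default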


Hyper-V

Hyper-V in Windows Server enables you to create a virtualized server computing environment. This environment can improve the efficiency of your computing resources by utilizing more of your hardware resources. This is made possible by using Hyper-V to create and manage virtual machines and their resources. Each virtual machine is a self-contained virtualized computer system that operates in an isolated execution environment. This allows multiple operating systems to run simultaneously on one physical computer. Hyper-V is an available role in Windows Server 2008 and later.

For information on Hyper-V, its many features and benefits, and installation procedures, refer to http://www.microsoft.com and http://technet.microsoft.com.


APPENDIX A Persistent Binding

This appendix provides information on persistent binding.

◆ Understanding persistent binding.............................................. 128


Understanding persistent binding

Persistent binding is the mechanism that creates a continuous logical route from a storage device object in the Windows host to a volume in a Dell EMC storage array across the fabric.

Without a persistent binding mechanism, the host cannot maintain persistent logical routing of the communication from a storage device object across the fabric to a Dell EMC storage array volume. If the physical configuration of the switch is changed (for example, a cable is swapped or the host is rebooted), the logical route becomes inconsistent, which can cause data corruption if the user application is modifying data through that inconsistent route.

The Windows OS does not provide a satisfactory means to allow persistent binding. Most software applications access storage using file systems managed by the Windows OS. (File systems are represented as <drive letter><colon>; that is, C:, D:, and so forth.) For storage devices containing file systems, Windows writes a disk signature to the disk device. The operating system can then identify the disk and associate it with a particular drive letter and file system.

Since the signature resides on the disk device, changes can occur on the storage end (a cable swap, for example) that cause a disk device to be visible to the host server in a new location. However, the OS looks for the disk signature and, provided that nothing on the disk changed, associates the signature with the correct drive letter and file system. This mechanism is strictly an operating system feature and is not influenced by the Fibre Channel device driver.

Some software applications, however, do not use the Windows file systems or drive letters for their storage requirements. Instead they access storage drives directly, using their own built-in “file systems.” Devices accessed in this way are referred to as raw devices and are known as physical drives in Windows terminology.

The naming convention for physical drives is simple and is always the same for software applications using them. A raw device under Windows is accessed by the name \\.\PHYSICALDRIVEXXX, where XXX is the drive number.

For example, a system with three hard disks attached using an Emulex Fibre Channel controller assigns the disks the names \\.\PHYSICALDRIVE0, \\.\PHYSICALDRIVE1, and \\.\PHYSICALDRIVE2. The number is assigned during the disk discovery part of the Windows boot process.
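You can see this mapping on a live system. The following is a small PowerShell sketch (WMI-based, so it also works on older releases) that lists each physical drive number alongside its \\.\PHYSICALDRIVEn device path and model:

# Map Windows drive numbers to device paths and disk models
Get-WmiObject Win32_DiskDrive |
    Sort-Object Index |
    Format-Table Index, DeviceID, Model -AutoSize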

During boot-up, the Windows OS loads the driver for the storage HBAs. Once loaded, the OS performs a SCSI Inquiry command to obtain information about all the attached storage devices. Each disk drive that it discovers is assigned a number in a semi-biased, first-come, first-served fashion based on HBA. Semi-biased means the Windows system always begins with the controller in the lowest-numbered PCI slot where a storage controller resides. Once the driver for the storage controller is loaded, the OS selects the adapter in the lowest-numbered PCI slot to begin the drive discovery process.

It is this naming convention and the process by which drives are discovered that make persistent binding (by definition) impossible for Windows. Persistent binding requires a continuous logical route from a storage device object in the Windows host to a volume in a Dell EMC storage array across the fabric. As mentioned above, each disk drive is assigned a number on a first-come, first-served basis. This is where faults can occur.


Example Imagine this scenario: A host system contains controllers in slots 0, 1, and 2. Someone removes a cable from the Emulex controller in host PCI slot 0, then reboots the host.

During reboot, the Windows OS loads the Emulex driver and begins disk discovery. Under the scenario presented above, there are no devices discovered on controller 0, so the OS moves to the controller in slot 1 and begins naming the disks it finds, starting with \\.\PHYSICALDRIVE0. Any software application accessing \\.\PHYSICALDRIVE0 before the reboot will be unable to locate its data on the device, because the name now refers to a different disk.

Figure 20 on page 129 shows the original configuration before the reboot. HBA0 is in PCI slot 0 of the Windows host. Each HBA has four disk devices connected to it, so Windows has assigned the name \\.\PHYSICALDRIVE0 to the first disk on HBA0. Each disk after that is assigned a number in sequence as shown in Figure 20.

Figure 20 Original configuration before the reboot

Figure 21 shows the same host after the cable attached to HBA0 has been removed and the host rebooted. Since Windows was unable to do a discovery on HBA0, it assigned \\.\PHYSICALDRIVE0 to the first device it discovered. In this case, that first device is connected to HBA1. Due to the shift, any software application accessing \\.\PHYSICALDRIVE0 will not find data previously written on the original \\.\PHYSICALDRIVE0.

Figure 21 Host after the reboot

Note: Tape devices are treated the same as disk devices in Windows with respect to persistent binding. Refer to your tape device documentation for more information.


Methods of persistent binding

The Windows device naming convention and disk discovery process do not allow the Windows operating system itself to establish persistent binding. The following methods can provide persistent binding:

◆ Dell EMC Volume Logix — Provides persistent binding through centralized control by the Symmetrix Fibre Channel fabric ports.

For more information, refer to the EMC Volume Logix Product Guide.

◆ Switch zone mapping — Provides persistent binding through centralized control by the Fibre Channel switch.

For more information, refer to the EMC Connectrix Enterprise Storage Network System Planning Guide.

◆ (Emulex HBAs only) Emulex configuration tool mapping — Provides persistent binding of targets through centralized control by the Emulex host adapter. This requires the user to modify the mapping manually.

For more information, refer to EMC Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment, available in the Dell EMC OEM section of the Broadcom website or on Dell EMC Online Support.

a. Click drivers, software and manuals at the left side of the screen.

b. Click EMC at the upper center of the next screen.

c. Click the link to your HBA at the left side of the screen.

d. Under EMC Drivers, Software and Manuals, click the Installation and Configuration link under Drivers for Windows <version>.


APPENDIX B Dell EMC Solutions Enabler

This appendix describes Dell EMC Solutions Enabler and migration considerations.

◆ Dell EMC Solutions Enabler....................................................... 132


Dell EMC Solutions Enabler

The Solutions Enabler SYMCLI is a specialized library consisting of commands that can be invoked via the command line or within scripts. These commands can be used to:

◆ Monitor device configuration and status

◆ Perform control operations on devices and data objects within your managed storage complex.

The target storage environments are typically Symmetrix-based, though some features are supported for Unity and VNX systems as well.
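As a brief, hedged illustration of the command style (assuming SYMCLI is installed and its bin directory is on the PATH), the following lists the arrays and devices visible to the host; the Symmetrix ID is a placeholder:

symcfg list                 # Symmetrix/VMAX arrays visible to this host
sympd list                  # host-visible (physical) devices and their array device numbers
symdev list -sid <SymmID>   # all devices configured on the specified array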

For more information, refer to the Solutions Enabler Array Controls and Management 8.3.0 CLI User Guide, available on Dell EMC Online Support.

References

More information can be found in the following guides, available on Dell EMC Online Support:

◆ For HBA configurations, see the appropriate host bus adapter guides:

• EMC Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

• EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

• EMC Host Connectivity with Brocade Fibre Channel and Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

◆ For additional migration considerations, refer to EMC's Data Migration All Inclusive guide.


APPENDIX C Veritas Volume Management Software

Veritas Storage Foundation/Volume Manager from Symantec replaces the native volume manager of the Windows operating system to allow management of physical disks as logical devices. The added features of this software are designed to enhance data storage management by controlling space allocation, performance, data availability, device installation, and system monitoring of private and shared systems.

Refer to http://www.symantec.com for more information about Veritas Storage Foundation, documentation, and software availability.

Refer to the latest Dell EMC Simple Support Matrix to determine which Veritas Storage Foundation/Volume Manager configurations are supported and what service packs may be required for your configuration.
