Dell Compellent Storage Center Live Volume Best Practices


Dell Best Practices

Dell Compellent Storage Center Live Volume
Dell Compellent
October 2013


    Revisions

    Date Revision Comments

    08/24/2010 1.0 Initial Draft created

    10/13/2010 1.1 Updated Autoswap section sample periods

    11/16/2011 1.2 Data Progression info updated

    12/15/2011 1.3 Updated VMware DRS/HA content

    7/25/2012 1.4 Update EM screenshots

    1/4/2013 1.5 AIX refresh, added HP-UX

    2/5/2013 1.6 Solaris 10 refresh and added

    10/2/2013 1.7 Solaris 11 Added, Solaris 10 sections consolidated

    THIS PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND

    TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF

    ANY KIND.

© 2013 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express

    written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.

    Dell, the DELL logo, the DELL badge, and Dell Compellent are trademarks of Dell Inc. Microsoft, Windows, and

    Windows Server are registered trademarks of Microsoft Corporation in the United States and/or other countries. Other

    trademarks and trade names may be used in this document to refer to either the entities claiming the marks and

    names or their products. Dell disclaims any proprietary interest in the marks and names of others.


Table of contents

Revisions ............................................................................................................................................................................................. 2

    Preface ................................................................................................................................................................................................ 6

    Customer support ............................................................................................................................................................................. 6

    1 Live Volume overview ............................................................................................................................................................... 7

    1.1 Reference architecture .................................................................................................................................................. 7

    1.2 Proxy data access ............................................................................................................................................................ 8

    1.3 Live Volume requirements ............................................................................................................................................ 9

    1.3.1 Connectivity ..................................................................................................................................................................... 9

    1.4 Live Volume and replication attributes ..................................................................................................................... 10

    1.5 Replication attributes .................................................................................................................................................... 11

    1.6 Live Volume attributes.................................................................................................................................................. 12

    2 Data Progression and Live Volume....................................................................................................................................... 14

    2.1 Primary/Secondary Live Volume ................................................................................................................................ 14

    3 Live Volume and MPIO ........................................................................................................................................................... 15

    3.1 MPIO policies for Live Volume ................................................................................................................................... 15

    4 VMware and Live Volume ....................................................................................................................................................... 16

    4.1 MPIO ................................................................................................................................................................................ 16

    4.2 Single site MPIO configuration ................................................................................................................................... 16

    4.3 Multi-site MPIO configuration .................................................................................................................................... 17

    4.4 VMware vMotion and Live Volume ............................................................................................................................ 18

    4.5 VMware DRS/HA and Live Volume ............................................................................................................................ 19

    5 Microsoft Windows MPIO ...................................................................................................................................................... 21

    5.1 Microsoft Windows MPIO ............................................................................................................................................ 21

    5.1.1 Round Robin with Subset ............................................................................................................................................ 21

    5.1.2 Failover Only .................................................................................................................................................................. 22

    5.1.3 Sub-Optimal MPIO ....................................................................................................................................................... 23

    5.1.4 Hyper-V and Live Volume ........................................................................................................................................... 24

    5.2 Stand Alone Hyper-V .................................................................................................................................................... 24

    5.3 Clustering Hyper-V ....................................................................................................................................................... 24

    5.3.1 Single Site ....................................................................................................................................................................... 25

    5.3.2 Multi-Site ........................................................................................................................................................................ 25


    5.4 SCVMM/SCOM and Performance and Resource Optimization (PRO) ................................................................ 26

    5.5 Live Volume and Cluster Shared Volumes ............................................................................................................... 26

    6 Live Volume Best Practices with Solaris 10 & 11 ................................................................................................................ 28

    6.1 Live Volume Setup ........................................................................................................................................................ 28

    6.2 Zoning while ZFS booting from Storage Center ..................................................................................................... 28

    6.3 Zoning to Live Volume Primary and Secondary Storage Centers ........................................................................ 28

    6.4 Solaris Server Setup ...................................................................................................................................................... 29

    6.5 UFS Live Volume ........................................................................................................................................................... 32

    6.6 ZFS Live Volume ............................................................................................................................................................ 33

    6.7 ZFS Considerations ....................................................................................................................................................... 34

    6.8 Mapping a Live Volume ZFS Replay View, to the same Solaris server ................................................................. 34

    6.9 Mapping a Live Volume ZFS Replay View, to an alternate Solaris server ............................................................ 35

    6.10 Appendix Solaris ............................................................................................................................................................ 36

    7 Live Volume Best Practices with AIX .................................................................................................................................... 37

    7.1 Live Volume Setup ........................................................................................................................................................ 37

    7.2 AIX Server fiber channel zoning ................................................................................................................................. 37

    7.3 AIX 6.1 ML2 server setup .............................................................................................................................................. 37

    7.4 Added Dell Compellent ODM/PCM to the server and rebooted ......................................................................... 38

    7.5 MPIO algorithm should be fail_over on each hdisk used for Live Volume. ....................................................... 38

    7.6 Volume Group, Logical Volume and JFS2 file system creation steps ................................................................. 38

    8 Live Volume Best Practices with HP-UX ............................................................................................................................. 40

    8.1 Live Volume Setup ....................................................................................................................................................... 40

    8.2 Set the Load Balancing Policy ................................................................................................................................... 40

    9 Live Volume Disaster Recovery ............................................................................................................................................. 44

    9.1 Overview ......................................................................................................................................................................... 44

    9.2 Disaster Recovery Plan ................................................................................................................................................. 44

    9.3 Fracture Overview ......................................................................................................................................................... 44

    9.4 Fracture Recovery ......................................................................................................................................................... 44

    10 Use Cases .................................................................................................................................................................................. 45

    10.1 Zero downtime SAN maintenance and data migration ......................................................................................... 45

    10.1.1 Requirements ................................................................................................................................................................. 45

    10.2 Storage migration for virtual machine migration .................................................................................................... 47


    10.2.1 Requirements ................................................................................................................................................................. 47

    10.3 Disaster avoidance ........................................................................................................................................................ 48

    10.4 On-demand load distribution ..................................................................................................................................... 49

    10.5 Cloud computing .......................................................................................................................................................... 50

    11 Replay Manager & Live Volume ............................................................................................................................................. 52


    Preface

    This document contains information and best practices for using Dell Compellent Live Volume with

    several technologies.

    Customer support

Dell Compellent provides live support at 1-866-EZSTORE (866.397.8673), 24 hours a day, 7 days a week, 365

    days a year. For additional support, email Dell Compellent at [email protected]. Dell Compellent

    responds to emails during normal business hours.


    1 Live Volume overview

1.1 Reference architecture

Live Volume is a new software option for Dell Compellent Storage Center that builds upon the Dell Fluid

    Data architecture (see Figure 1). Live Volume enables non-disruptive data access and migration of data

    between two Storage Centers.

    Figure 1

    Live Volume is a software-based solution integrated into the Dell Compellent Storage Center Controllers.

    Live Volume is designed to operate in a production environment, allowing both Storage Centers to remain

    operational during volume migrations.

    Live Volume increases operational efficiency, reduces planned outages, and enables a site to avoid

    disruption during impending severe weather. Live Volume provides these powerful new options:

• Storage follows the application in virtualized server environments. Live Volume automatically migrates data as virtual applications are moved.

• Zero downtime maintenance for planned outages. Live Volume enables all data to be moved non-disruptively between Storage Centers, enabling a full planned site shutdown without downtime.

• On-demand load balancing. Live Volume enables data to be relocated as desired to distribute workload between Storage Centers.

• Stretch Microsoft clustered volumes between geographically dispersed locations. Live Volume allows Microsoft Clusters to see the same disk signature on the volume between data centers, thereby allowing the volume to be clustered.

    Live Volume is designed to fit into existing physical and virtual environments without disruption, extra

    hardware requirements or any changes to configurations or workflow. Physical and virtual servers see a

    consistent, unchanging virtual volume. All volume mapping is consistent and transparent before, during,

    and after migration. Live Volume can be run automatically or manually and is fully integrated into the

    Storage Center software environment. Live Volume operates asynchronously and is designed for planned

    migrations where both Storage Centers are simultaneously available.

    A Live Volume can be created between two Dell Compellent Storage Centers residing in the same Data

    center or between two well-connected data centers.

    Using Dell Compellent Enterprise Manager, a Live Volume can be created from a new volume, an existing

    volume, or an existing replication. For more information on creating Live Volume, see the Dell Compellent

    Enterprise Manager User Guide.

    Figure 2

1.2 Proxy data access

A Dell Compellent Live Volume is a pair of replicating volumes: a primary Live Volume and a secondary

    Live Volume. A Live Volume can be accessed through either Storage Center participating in a Live Volume

    Replication; however, the Live Volume will be primary on one of the Storage Centers only. All read and

    write activity for a Live Volume happens on the Storage Center hosting the Primary Live Volume. If a server


    is accessing the Live Volume through the secondary Live Volume Storage Center, data access is proxied

    over the replication link to the Primary Live Volume system.

    In Figure 3 below, Server 1 is mapped to and accessing a Live Volume via proxy access through the

    Secondary Live Volume system to the Primary Live Volume system. This type of proxy data access requires

    the Replication Link between the two Storage Centers to have enough bandwidth to support the I/O

    operations and latency requirements of the application data access.

    Figure 3

1.3 Live Volume requirements

Live Volume requirements vary depending on intended use. For example, if a site intends to use Live Volume to migrate a workload, such as virtual machines, from one data center to another while the virtual machines are running, the requirements for Live Volume are going to be much different than if a site plans to shut down a workload in one data center and then bring it back online in another.

1.3.1 Connectivity

From the Dell Compellent Live Volume perspective, there are no restrictions on bandwidth or latency.

    However, to proxy data access from one Storage Center to another requires the Live Volume Storage

    Centers to be connected via a high bandwidth/low latency connection. Most operating systems and

    applications require disk latency under 15ms for optimal performance. However, performance may not be


adversely affected until disk latency reaches 25ms or greater, although some applications are more latency sensitive. This means that if the average latency to storage in the primary data center is 5ms for the volume, and the connection between the two data centers averages 30ms of latency, then the disk latency for writes sent across the link to the Primary Live Volume system is going to be greater than 35ms. While this may be tolerable for some applications, it may not be tolerable for others.

    If Live Volume Proxy communication is utilized, it is strongly recommended to use fiber connectivity

    between the sites to ensure consistent bandwidth and latency. The amount of bandwidth required for the

    connectivity is highly dependent on the amount of changed data that requires replication, as well as the

    amount of other traffic on the same wire. If a site is not planning to proxy data access between Storage

Centers, then latency isn't as much of a concern.

    It is recommended to have separate connections for storage traffic and LAN traffic, especially when

    spanning data centers. While this is not a requirement for Live Volume, it is a general Best Practice for data

    storage.

    For Hypervisor virtualization products such as VMware ESX(i), Microsoft Hyper-V, and Citrix XenServer, a

site must have at least a 1Gb connection with less than 10ms of latency between servers to support

    vMotion or live migration activities.

    High bandwidth, low latency

For inter-data center or campus environments, or sites within a 60-mile radius, high-speed fiber connectivity is possible. While inter-data center and campus environments may be able to run fiber speeds of up to 8Gb using multi-mode fiber connectivity, single-mode fiber connectivity of up to 1Gb over dark fiber can connect data centers that are up to 60 miles apart. This type of connectivity is required for live migrating virtual machine workloads between Dell Compellent Storage Centers.

    Low bandwidth, high latency

If a site is planning on running Live Volume over a low bandwidth/high latency connection, it is recommended to control swap activities manually by shutting down the application running at site A, performing a Live Volume swap, and then bringing the application up at the remote site. This prevents any storage proxy traffic from going across the link, and it also pauses replication I/O on the link, allowing the replication to catch up so a Live Volume swap can occur. Manual swap activities can be controlled by deselecting the Automatically Swap Roles option in the Live Volume attributes as depicted in Figure 4.

1.4 Live Volume and replication attributes

Once a Live Volume is created, additional attributes can be modified by editing the replication properties

    of the Live Volume. To modify the Live Volume settings, select Replication & Live Volumes from Enterprise

    Manager, and then select Live Volumes as depicted in Figure 4.


    Figure 4

1.5 Replication attributes

Live Volume uses the same Dell Compellent Replication mechanisms as regular Dell Compellent

replicated volumes. More information about these attributes can be found in the Enterprise Manager User

    Guide.

    Replicate Active Replay

For a Live Volume, Dell Compellent recommends enabling Replicate Active Replay. This ensures that data is replicated as quickly as possible, which decreases the amount of time required to perform a Live Volume role swap.

    Deduplication

    Copies only the changed portions of the Replay history on the source volume, rather than all data

    captured in each Replay. While this is a more processor-intensive activity, it may reduce the amount of

    replication traffic required. If sufficient bandwidth is present on the connection, Dell Compellent

    recommends that Deduplication be disabled for Live Volumes.

Replicate Storage to Lowest Tier

    Replicate Storage to Lowest Tier is automatically enabled for a new Live Volume. If you want the replicated

    data to go to Tier 1 on the destination Storage Center, then disable this option. Many users perform the

    initial Live Volume replication to the Lowest Tier, and then de-select this option once the initial replication


    completes. For more information on Data Progression with Live Volume see the section Data Progression

    and Live Volume.

    QoS Definition

    The QoS Definition under Replication Attributes depicts the QoS you want to use when replicating from

    the current primary to the destination system. Currently, the Live Volume proxy traffic between the

    controllers is not governed by any QoS. If the link between the Live Volume Storage Controllers is shared

    by other traffic, you may want to throttle the replication traffic using a QoS definition to prevent the

    replication traffic from flooding the connection.

For instance, if a 1Gb connection exists between the data centers and is shared by all inter-data center traffic, a replication QoS definition could be set at 0.5Gb, thereby limiting the amount of bandwidth used by replication traffic to half of the pipe capacity.

    Figure 5

1.6 Live Volume attributes

A Live Volume provides additional attributes that control the behavior of the Live Volume. The following

    sections explain those Live Volume attributes.

    Automatically Swap Roles


When Automatically Swap Roles is selected, the Live Volume will be automatically swapped to the Storage Center with the most I/O load, as long as it meets the conditions for a swap. The Live Volume logic takes server access samples to determine the primary access to the Live Volume (either from servers accessing it directly on the primary Storage Center or from servers accessing it from a secondary Storage Center). Dell Compellent takes samples every 30 seconds and keeps the last 10 samples (5 minutes worth) for analysis. Sampling occurs constantly on the primary Live Volume Storage Center; it does not wait for the 30-minute delay timer described below to expire.

Autoswap is designed to make infrequent, well-qualified decisions about moving the primary role of a Live Volume between systems.

    TimeAsPrimary

Each Live Volume has a TimeAsPrimary timer (default setting of 30 minutes) that prohibits an autoswap from occurring after a role swap has finished. This means that following a role swap of a Live Volume (either automatic or user specified), you must wait this time period before expecting another autoswap to occur. The purpose of this timer is to prevent "thrashing" of autoswap in environments where the primary access point could be dynamic, or where a Live Volume is shared by applications running on servers at both the primary and secondary sites.

    Min Amount for Swap

The first aspect is the amount of data accessed from the secondary system. If there is only light, infrequent access to a Live Volume from a secondary Storage Center, does it make sense to move the primary role to that system? If so, set this value very small. The criteria for this aspect are defined by the Min Amount for Swap attribute of the Live Volume. The value specifies an amount of read/write access per second, per sample. If a sample shows that secondary Storage Center access exceeds this value, this sample satisfies the aspect.

    Percentage of Total Access

    The second aspect is the percentage of total access of a Live Volume from the secondary Storage Center

    on a per sample basis. The criteria for this aspect are defined by the Min Secondary % for Swap attribute

    for the Live Volume. If a sample shows the secondary Storage Center accessed the Live Volume more than

    the defined setting for this aspect, this sample/aspect has been satisfied. The default setting for this option

    is 70%. Dell Compellent takes samples every 30 seconds and keeps the most recent 10 samples (5 minutes

    worth) for analysis. This means that the secondary Live Volume has to have more I/O than the primary

    system for 7 out of 10 samples (70%).

    Destination QoS

    The Destination QoS definition under Live Volume Attributes depicts the ideal QoS to use when replicating

    from the originally defined destination Storage Center to the originally defined primary Storage Center.


2 Data Progression and Live Volume

Data Progression life cycles are managed independently on each Dell Compellent Storage Center involved

    with a Live Volume. If the Live Volume is not being replicated to the lowest tier on the destination Storage

    Center, it will follow the Data Progression lifecycle on that particular controller.

2.1 Primary/Secondary Live Volume

If a Live Volume is typically always Primary on Storage Center A and is not being replicated to the lowest tier on the destination Storage Center B, the data will progress down on the destination Storage Center to the next tier/RAID level every 12 days. This is because the data on the destination Storage Center is never

    actually being read. All reads take place on the primary Live Volume Storage Center.

For instance, consider a Storage Center with two tiers of disk, 15K and SATA, where the Storage Profile writes data at RAID 10 in Tier 1, keeps Replay data in Tier 1 at RAID 5, and keeps Tier 3 at RAID 5. The first night, the blocks of data written that day will progress from Tier 1 RAID 10 to Tier 1 RAID 5.

    If a Live Volume is frequently swapped as primary between the Live Volume Storage Centers, then the

    Data Progression pattern will be determined by how often the data is accessed on both systems.


3 Live Volume and MPIO

By using Live Volume with a server that has access to both Storage Center controllers in a Live Volume

    scenario, multiple paths can be presented to the server through each Storage Center controller as Live

    Volume data access from the secondary system is proxied to the Primary Storage Center. For this reason,

    special consideration should be taken to control the I/O path for the Live Volume.

3.1 MPIO policies for Live Volume

For Live Volume Storage Centers on which a server has access to both the primary and secondary Live Volume controllers, the MPIO policy should be set to a policy that prevents primary data access through the secondary Live Volume Storage Center if possible. These types of MPIO policies are typically Failover Only or Fixed.

    Additional information on configuring Live Volume MPIO can be found in each of the sections of this

    document devoted to a specific application, such as VMware, Windows/Hyper-V, Solaris, and AIX.


4 VMware and Live Volume

VMware and Live Volume can combine to give virtual environments new levels of uptime and storage performance options.

4.1 MPIO

VMware ESX(i) ships with three MPIO policies: Round Robin, Fixed, and Most Recently Used. When

    mapping a Live Volume through both the primary and secondary Storage Centers to a VMware host, the

    MPIO policy should be set to Fixed with the preferred path set to the primary Live Volume Storage Center

    controller.

    Figure 6 depicts a Round Robin policy on a Dell Compellent Live Volume going between two Storage

    Centers. This configuration is not optimal because 50% of the I/O traffic will have to traverse the Live

    Volume Replication proxy link.

    Figure 6

    Figure 7

4.2 Single site MPIO configuration

In a single site configuration, multiple ESX(i) hosts can be connected to both Storage Centers. If a Live

    Volume is mapped over both Storage Centers to the ESX(i) hosts, then the volume can participate in an

    MPIO configuration involving both Storage Centers. In this scenario, it is highly recommended to use a

    VMware Fixed MPIO policy to ensure traffic is always going to the Primary Live Volume Storage Center.


The preferred path is always used in this policy unless the preferred path fails, in which case the Fixed policy defaults to one of the other connections in the policy.

    As depicted in Figure 8, a Live Volume replication exists between Storage Center A and Storage Center B.

    Two VMware ESX(i) hosts are connected and mapped to the Live Volume on each controller. The Primary

    Live Volume is running on Storage Center A, so the Fixed Preferred path (see Figure 7) on each ESX(i) host

    is set to use a connection to Storage Center A as the preferred path.

    Figure 8

    If maintenance is required on Storage Center A, for example, the preferred path for both ESX(i) hosts could

    be changed to Storage Center B. This will cause the Live Volume to Swap Roles making Storage Center B

    the Primary Live Volume controller, so that Storage Center A can be taken offline without a disruption of

    service.
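For reference, the same Fixed policy and preferred path selection can also be applied from the ESXi command line. The following is a minimal sketch only, assuming ESXi 5.x; the device identifier (naa.6000d3100000650000000000000017c7) and the path name (vmhba2:C0:T1:L1) are placeholders that must be replaced with the actual Live Volume device and a path that terminates on the desired Storage Center controller:

~ # esxcli storage nmp device set --device naa.6000d3100000650000000000000017c7 --psp VMW_PSP_FIXED
~ # esxcli storage nmp psp fixed deviceconfig set --device naa.6000d3100000650000000000000017c7 --path vmhba2:C0:T1:L1
~ # esxcli storage nmp device list --device naa.6000d3100000650000000000000017c7

To move the workload to Storage Center B for maintenance, the same deviceconfig set command would be rerun with a path that terminates on a Storage Center B controller port.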

4.3 Multi-site MPIO configuration

In a multi-site configuration, typically the VMware hosts are mapped to their corresponding Storage

    Center only (see Figure 9).


    Figure 9

    In this configuration, the MPIO policy can be set to Round Robin as the mappings do not include multiple

    Dell Compellent Storage Centers. All inter-site disk access from the secondary Live Volume is proxied to

    the Primary Live Volume controller via the replication link(s) between the Storage Centers.

    Figure 10

4.4 VMware vMotion and Live Volume

Another way of controlling which Storage Center is Primary for the Live Volume is migrating (vMotion) a

    virtual machine from one node to another. In this scenario, ESX(i) host A would be mapped to Storage


    Center A and ESX(i) host B would be mapped to Storage Center B. When a virtual machine running on a

    Live Volume is migrated (vMotion) from ESX(i) host A to ESX(i) host B, the Live Volume will see that the

    storage is being accessed through Storage Center B rather than Storage Center A and can automatically

    swap the Secondary Live Volume to become the Primary Live Volume.

4.5 VMware DRS/HA and Live Volume

VMware DRS technology uses vMotion to automatically move virtual machines to other nodes in a cluster.

    In a multi-site Live Volume VMware cluster, it is a best practice to keep virtual machines running on the

    same site as their Primary Live Volume. Additionally, it is best to keep virtual machines which share a

    common Live Volume enabled datastore together at the same site. If DRS is activated on a VMware

    Cluster with nodes in each site, DRS could automatically move some of the virtual machines running on a

    Live Volume datastore to a host that resides in the other data center. In vSphere 4.1 and later, DRS Host

    Groups and VM Groups can be used in a few ways to benefit a multi-site Live Volume environment. Virtual

    machines which share a common Live Volume datastore can be placed into VM Groups. Movement of

    virtual machines and management of their respective Live Volume datastore can then be performed at a

    containerized group level rather than at an individual virtual machine level. Hosts which share a common

    site can be placed into Host Groups. Once the host groups are configured, they can represent locality for

    the Primary Live Volume. At this point VM groups can be assigned to host groups using the DRS Groups

Manager to ensure that all virtual machines sharing a common Live Volume datastore are consistently running at the same site. The virtual machines can be vMotioned as a group from one site to another. After the polling threshold is met, Storage Center will swap the roles of the Live Volume enabled datastore to the site the VMs were migrated to.

The infrastructure can be designed in such a way that separate DRS enabled clusters exist at both sites, keeping automatic migration of virtual machines within the respective site where the Primary Live Volume resides. In the event of a Live Volume role swap, all virtual machines associated with the Live Volume can be vMotioned from the Site A cluster to the Site B cluster, provided both clusters fall under the same Datacenter object in vCenter. HA is a cluster-centric operation. In this design, in the event of a host failure, HA will attempt to restart virtual machines only within the same cluster, meaning the VMs will always attempt to start up at the same site where they failed. VMs will not attempt to restart at the remote site.

    If the VMware virtual infrastructure is version 4.0 or earlier, other steps should be taken to prevent virtual

    machines from unexpectedly running from the Secondary Live Volume. An individual VM or group of VMs

may be associated with a DRS rule which keeps them together, but this doesn't guarantee that over time they will stay on the same host, or group of hosts, where the Primary Live Volume is located. As a last

    resort, DRS can be configured for manual mode or disabled when using Live Volume in a multi-site

    configuration which will prevent the automatic migration of VMs to Secondary Live Volume hosts in the

    same cluster.


    Figure 11

    An operational best practice in a multi-site VMware environment may be to create one VMware vSphere datacenter, then create a vSphere cluster for each physical site in that datacenter. In this scenario, each site can have VMware DRS and HA enabled and virtual machines will only migrate within that site or cluster. Since all of the cluster nodes are in the same vSphere datacenter, the virtual machines can be manually moved using vMotion between the clusters. This provides a great deal of flexibility and mobility. Virtual machines residing in clusters separated by datacenters cannot leverage vMotion. In this case, virtual machines must be powered off to migrate between datacenters.


    5 Microsoft Windows MPIO

5.1 Microsoft Windows MPIO

Microsoft Windows servers running on Dell Compellent storage can use the in-box Microsoft MPIO DSM.

    Microsoft Windows 2008 R2 MPIO DSM comes with the following MPIO policies: Failover Only, Round

    Robin, Round Robin with Subset, Least Queue Depth, Weighted Paths, and Least Blocks. The two most

    common Microsoft Windows MPIO policies to control Live Volume access are Round Robin with Subset

    and Failover Only. Both of these policies allow you to define active and standby paths. When accessing a

volume being proxied from the Secondary Live Volume to the Primary Live Volume, a little extra latency and added traffic are introduced on the replication/Live Volume links. If using a Round Robin policy which

    contains the Primary and Secondary Live Volume Storage Centers, the Live Volume will never auto swap

    because half of the I/O will always be going through each controller. For the best performance in your

    environment, Dell Compellent recommends using an MPIO Policy of Round Robin with Subset or

    Failover Only for Microsoft Windows hosts.
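The active policy can also be verified and set per disk from an elevated command prompt with the in-box mpclaim utility. This is a minimal sketch only; the MPIO disk number (4) is a placeholder taken from the mpclaim -s -d listing, and policy number 1 corresponds to Fail Over Only (3 corresponds to Round Robin with Subset):

C:\> mpclaim -s -d
C:\> mpclaim -l -d 4 1
C:\> mpclaim -s -d 4

Selecting which individual paths are active or standby for Round Robin with Subset is done in the MPIO tab of the disk properties, as shown in the following sections.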

5.1.1 Round Robin with Subset

The Round Robin with Subset policy uses paths from a primary pool of paths for processing requests as

    long as at least one of the paths is available. This DSM uses a standby path only when all the primary paths

    fail.

    Figure 12

    By using the Round Robin with Subset policy, you can maintain Round Robin functionality to the Primary

    Live Volume controller and use the Secondary Live Volume controller as the failover path. These paths can

    be changed at any time on the host by modifying and applying the configuration. (See Figure 14)


    Figure 13

5.1.2 Failover Only

Another option, which works best with servers containing only one HBA or if you do not want to round

    robin the I/O load between paths, is the MPIO policy of Failover Only. By using the Failover Only MPIO

    policy, you can define the primary path and all other paths are set to standby. (See Figure 15)


    Figure 14

5.1.3 Sub-Optimal MPIO

In Figure 16, a sub-optimal Live Volume MPIO configuration is depicted. In this scenario, the server has

    two adapters and is mapped to two different Storage Centers. Since all four paths are included in an MPIO

    Round Robin policy, about half of the storage traffic would have to traverse the proxy link between the

two Storage Centers. This configuration also prevents the Live Volume roles from being swapped automatically, because 50% of the traffic will always be going through each Storage Center.


    Figure 15

5.1.4 Hyper-V and Live Volume

Live Volume works well with Microsoft Hyper-V in both clustered and non-clustered scenarios.

5.2 Stand Alone Hyper-V

In a non-clustered scenario, MPIO can be used to control which Storage Center is providing access to the

    data. It is recommended to use either the Round Robin with Subset or Failover Only MPIO policies. See

    the Microsoft Windows MPIO sections of this document for more information.

5.3 Clustering Hyper-V

With clustered Hyper-V servers on Microsoft Windows 2008 R2, virtual machines can be migrated from

    one host in a cluster to another via Live Volume. In this scenario, Node A could be mapped to Storage

    Center A and Node B could be mapped to Storage Center B. Therefore when a virtual machine is migrated

    from Node A to Node B, the Live Volume will automatically perform a swap role making Storage Center B

    the Primary. This configuration is most common in a multi-site cluster.


5.3.1 Single Site

In a single site configuration, multiple Hyper-V servers can be connected to both Storage Centers. If a Live Volume is mapped over both Storage Centers to the Hyper-V servers, then the Live Volume can participate in an MPIO configuration. In this scenario, it is highly recommended to use a Windows Round Robin with Subset or Failover Only policy to ensure data access traffic is always going to the Primary Live Volume Storage

    Center.

As depicted in Figure 17, a Live Volume exists between Storage Center A and Storage Center B. Two Hyper-V servers are connected and mapped to the Live Volume on each controller. The Primary Live Volume is running on Storage Center A, so the Round Robin with Subset or Failover Only active path on each Hyper-V host is set to use a connection to Storage Center A as the preferred path for that Live Volume.

    Figure 16

5.3.2 Multi-Site

In a multi-site configuration, typically the Hyper-V hosts are mapped only to the Storage Center in the

    particular site. In this scenario, the MPIO policy can be set to Round Robin for Hyper-V hosts. Virtual

    machine placement determines which Storage Center will host the Primary Live Volume. The scenario in

Figure 18 depicts a virtual machine migrated from Host A to Host B. Storage Center B will see the primary access for the Live Volume coming through Storage Center B and will automatically swap the roles so that Storage Center B becomes Primary for that Live Volume.


    Figure 17

5.4 SCVMM/SCOM and Performance and Resource Optimization (PRO)

System Center Virtual Machine Manager with System Center Operations Manager is capable of

    providing intelligent placement as well as automatic migrations of virtual machines from highly utilized

    nodes to lower utilized nodes, depending on the action setting of Automatic or Manual. If using Live

    Volume in a multi-site Hyper-V cluster with PRO, it is recommended to utilize the Manual action for virtual

    machine placement.

    In a multi-site Live Volume Hyper-V cluster, it is a best practice to keep the virtual machines running in the

    same site as their Primary Live Volume. If PRO is activated on a Hyper-V Cluster with nodes in each site,

    PRO could automatically migrate some of the virtual machines running on a Live Volume CSV to a server

    that resides in the other data center thereby splitting the I/O between data centers.

5.5 Live Volume and Cluster Shared Volumes

Hyper-V has a feature called Cluster Shared Volume (CSV) that allows administrators to place multiple

    virtual machines on a cluster volume. CSVs also have a feature called Network Redirection that by design

    makes Hyper-V cluster data access a little more fault tolerant. If the CSV is owned by a node of the cluster

    which has access to the volume, it can redirect data access to that volume through the network so hosts


    that may have lost access to the volume can still communicate through the cluster volume owner to the

    Storage Center volume.

One of the best practices with CSVs is controlling the CSV owner. The CSV should be owned by a

    cluster node that is in the primary site and mapped directly to the Storage Center. In this way, if the CSV

    goes into Network Redirected Mode, the CSV owner is in the same site and downtime can be eliminated

    or reduced.
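On Windows Server 2008 R2, CSV ownership can be checked and moved with the Failover Clustering PowerShell module. A minimal sketch, where the CSV name "Cluster Disk 1" and the node name NodeA are placeholders for this example:

PS C:\> Import-Module FailoverClusters
PS C:\> Get-ClusterSharedVolume
PS C:\> Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node NodeA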

    Figure 19 depicts a multi-site Hyper-V cluster with Live Volume. In this figure, Storage Center B was taken

    offline. CSV network redirection can take over and proxy all the data traffic through the CSV owner on

    Node A.

    Figure 18

    As depicted in Figure 19, if a failure happens that takes down Storage Center B, Hyper-V can redirect

    access to the volume over the network using CSV Network Redirected Access.

    Note: This was only tested with systems that had a flat Ethernet network spanned between the two sites

via 1Gb connectivity.


    6 Live Volume Best Practices with Solaris 10 & 11

Unless explicitly stated otherwise, the procedures in this section apply to both Solaris 10 Update 3 or newer and Solaris 11 OS installations.

6.1 Live Volume Setup

Storage Center A is the Primary Live Volume Storage Center.

    Storage Center B is the Secondary Live Volume Storage Center.

    Figure 19

6.2 Zoning while ZFS booting from Storage Center

Per Dell Compellent Storage Center Best Practices, both Fibre Channel ports in both servers are mapped to

    the two sets of FEP/FER pairs on the dual-controller Storage Center.

6.3 Zoning to Live Volume Primary and Secondary Storage Centers

Per Dell Compellent Storage Center Best Practices, both Fibre Channel ports in both servers are mapped to

    the four sets of FEP/FER pairs on the dual-controller Storage Centers.


6.4 Solaris Server Setup

The server name is hadrian, running Solaris 11.1.

The mounted file system /pool01/2gZFS shown below is configured as a Live Volume. The file system rides on top of a ZFS pool named pool01, which is a mirrored pair of two (2) Dell Compellent Storage Center volumes, as highlighted in the mpathadm output further below.

    root@hadrian:/# df -k

    Filesystem 1024-blocks Used Available Capacity Mounted on

    rpool/ROOT/solaris 286949376 2294697 273895394 1% /

    [snip]

    rpool/export 286949376 32 273895394 1% /export

    rpool/export/home 286949376 31 273895394 1% /export/home

    rpool 286949376 73 273895394 1% /rpool

    /dev/dsk/c4t6d0s2 694700 694700 0 100%

    /media/Oracle_Solaris-11_1-Text-SPARC

    pool01 10257408 32 10257277 1% /pool01

    pool01/2gZFS 10257408 31 10257277 1% /pool01/2gZFS

    root@hadrian:/# mpathadm list lu

    /dev/rdsk/c0t5000C50048697F5Fd0s2

    Total Path Count: 1

    Operational Path Count: 1

/dev/rdsk/c0t6000D3100000650000000000000017C7d0s2    (disk 1 of ZFS pool01)

    Total Path Count: 8

    Operational Path Count: 8

/dev/rdsk/c0t6000D3100000650000000000000017C6d0s2    (disk 2 of ZFS pool01)

    Total Path Count: 8

    Operational Path Count: 8

    /dev/rdsk/c0t6000D3100000650000000000000017C5d0s2

    Total Path Count: 8

    Operational Path Count: 8

    /dev/rdsk/c0t6000D3100000650000000000000017C4d0s2

    Total Path Count: 8

    Operational Path Count: 8

    Each of these multipath Storage Center volumes is represented by eight (8) Total Path Counts and eight

    (8) Operational Path Counts. Of these 8 Operational Paths, 4 paths are presented from the Primary Storage

    Center and 4 paths are presented from the Secondary Storage Center as shown below.

    root@hadrian:/# mpathadm show lu

    /dev/rdsk/c0t6000D3100000650000000000000017C7d0s2


    Logical Unit: /dev/rdsk/c0t6000D3100000650000000000000017C7d0s2

    mpath-support: libmpscsi_vhci.so

    Vendor: COMPELNT

    Product: Compellent Vol

    Revision: 0604

    Name Type: unknown type

    Name: 6000d3100000650000000000000017c7

    Asymmetric: no

    Current Load Balance: round-robin

    Logical Unit Group ID: NA

    Auto Failback: on

    Auto Probing: NA

    Paths:

    Initiator Port Name: 21000024ff3ebd25

    Target Port Name: 5000d31000006508

    Override Path: NA

    Path State: OK

    Disabled: no

    Initiator Port Name: 21000024ff3ebd25

    Target Port Name: 5000d31000006507

    Override Path: NA

    Path State: OK

    Disabled: no

    Initiator Port Name: 21000024ff3ebd24

    Target Port Name: 5000d31000006505

    Override Path: NA

    Path State: OK

    Disabled: no

    Initiator Port Name: 21000024ff3ebd24

    Target Port Name: 5000d31000006506

    Override Path: NA

    Path State: OK

    Disabled: no

    Initiator Port Name: 21000024ff3ebd25

Target Port Name: 5000d31000006511    (from Secondary Storage Center)

    Override Path: NA

    Path State: OK

    Disabled: no

    Initiator Port Name: 21000024ff3ebd25


Target Port Name: 5000d31000006512    (from Secondary Storage Center)

    Override Path: NA

    Path State: OK

    Disabled: no

    Initiator Port Name: 21000024ff3ebd24

Target Port Name: 5000d31000006514    (from Secondary Storage Center)

    Override Path: NA

    Path State: OK

    Disabled: no

    Initiator Port Name: 21000024ff3ebd24

Target Port Name: 5000d31000006513    (from Secondary Storage Center)

    Override Path: NA

    Path State: OK

    Disabled: no

    Target Ports:

    Name: 5000d31000006508

    Relative ID: 0

    Name: 5000d31000006507

    Relative ID: 0

    Name: 5000d31000006505

    Relative ID: 0

    Name: 5000d31000006506

    Relative ID: 0

Due to the nature of how Dell Compellent Live Volume operates, any IO presented to the 4 paths via the Secondary Storage Center is proxied to the Primary Storage Center. This may introduce latency to the IO transactions, and likewise to the application stacks which operate on top of them.

The intention is to have IO traverse ONLY the 4 Primary Storage Center paths; it is thus recommended to use the following method to disable all paths leading to the Secondary Storage Center, changing the Disabled flag (as referenced above) from no to yes. All Storage Center vendor/product ID stanzas stored in the /etc/driver/drv/scsi_vhci.conf file are left intact, including the statement load-balance=round-robin.

root@hadrian:/# mpathadm disable path -i 21000024ff3ebd25 -t 5000d31000006511 -l \
/dev/rdsk/c0t6000D3100000650000000000000017C7d0s2

root@hadrian:/# mpathadm disable path -i 21000024ff3ebd25 -t 5000d31000006512 -l \
/dev/rdsk/c0t6000D3100000650000000000000017C7d0s2

Repeat the above commands, four (4) times in total, for each /dev/rdsk device that needs to be configured, replacing the Initiator and Target port WWPN values respectively. Note that any multipath paths disabled in this manner are NOT persistent across reboots. Additionally, the disabled paths are NOT reflected numerically in the Operational Path Count either. Managing path state persistence across reboots can be achieved either by scripting the logic into a set of boot-time startup scripts, or alternatively by manually editing the /kernel/drv/fp.conf and mpt.conf files to hard code the state of the HBA controller ports on a per-port basis. The latter topic remains outside the scope of this document, and storage design discussions may be requested on a per-need basis.
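As one possible approach, and only as a minimal sketch rather than a supported procedure, a boot-time script can reapply the disable commands for each Live Volume device. The initiator:target WWPN pairs and device names below are the ones from this example and are environment specific:

#!/bin/sh
# Re-disable the four Secondary Storage Center paths for each Live Volume device at boot.
# Initiator:Target WWPN pairs and device names are taken from the hadrian example above.
SECONDARY_PATHS="21000024ff3ebd25:5000d31000006511
21000024ff3ebd25:5000d31000006512
21000024ff3ebd24:5000d31000006513
21000024ff3ebd24:5000d31000006514"

for LU in /dev/rdsk/c0t6000D3100000650000000000000017C7d0s2 \
          /dev/rdsk/c0t6000D3100000650000000000000017C6d0s2
do
    for PAIR in $SECONDARY_PATHS
    do
        INITIATOR=`echo $PAIR | cut -d: -f1`
        TARGET=`echo $PAIR | cut -d: -f2`
        mpathadm disable path -i $INITIATOR -t $TARGET -l $LU
    done
done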

The final output from the mpathadm list lu command is shown below. Because disabled paths are not reflected numerically, the Operational Path Count still shows 8 for each Live Volume device.

    root@hadrian:/# mpathadm list lu

    /dev/rdsk/c0t5000C50048697F5Fd0s2

    Total Path Count: 1

    Operational Path Count: 1

    /dev/rdsk/c0t6000D3100000650000000000000017C7d0s2

    Total Path Count: 8

    Operational Path Count: 8

    /dev/rdsk/c0t6000D3100000650000000000000017C6d0s2

    Total Path Count: 8

    Operational Path Count: 8

    /dev/rdsk/c0t6000D3100000650000000000000017C5d0s2

    Total Path Count: 8

    Operational Path Count: 8

    /dev/rdsk/c0t6000D3100000650000000000000017C4d0s2

    Total Path Count: 8

    Operational Path Count: 8

Please note that in the case of a Live Volume planned maintenance or cutover event, the paths shown above will need to be switched over manually (or via scripting): the Secondary Storage Center paths are made active and, likewise, the four (4) paths to what was formerly the Primary Storage Center are disabled, as sketched below.
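A planned cutover can be handled with the same mpathadm subcommands. The following is a minimal sketch for one device, using the WWPNs from this example; it first enables the four Secondary Storage Center paths and then disables the four paths to what was the Primary Storage Center. Repeat for each Live Volume device:

root@hadrian:/# LU=/dev/rdsk/c0t6000D3100000650000000000000017C7d0s2
root@hadrian:/# for TGT in 5000d31000006511 5000d31000006512; do mpathadm enable path -i 21000024ff3ebd25 -t $TGT -l $LU; done
root@hadrian:/# for TGT in 5000d31000006513 5000d31000006514; do mpathadm enable path -i 21000024ff3ebd24 -t $TGT -l $LU; done
root@hadrian:/# for TGT in 5000d31000006507 5000d31000006508; do mpathadm disable path -i 21000024ff3ebd25 -t $TGT -l $LU; done
root@hadrian:/# for TGT in 5000d31000006505 5000d31000006506; do mpathadm disable path -i 21000024ff3ebd24 -t $TGT -l $LU; done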

6.5 UFS Live Volume

A Dell Compellent Replay View is best treated like a dd image of the source volume. If the UFS file

    system is mounted when the Replay is taken, all mount flags are preserved in the Replay.

    When the Replay View is then presented to another server, these UFS mount flags are still maintained

    intact. The administrator may be alerted that a file system consistency check (i.e. fsck) should be run

    against the Dell Compellent volume prior to mounting.

    Here is an example of the steps to use a Replay of a Live Volume:


    1. Map the Replay View of the Live Volume UFS file system to the server.

The Replay View can be created as a Live Volume in the wizard, or it can be created without the Live Volume feature; the discovery and mapping of the Replay View are the same regardless.

    2. Run the devfsadm command on the Solaris server and look for the new listing in the output of the

mpathadm command. In this scenario, a Replay View from the host vibe has been presented to

    the host hadrian.

    root@hadrian:/# mpathadm list lu

    /dev/rdsk/c0t5000C50048697F5Fd0s2

    Total Path Count: 1

    Operational Path Count: 1

    /dev/rdsk/c0t6000D3100000650000000000000017C8d0s2

    Total Path Count: 4

    Operational Path Count: 4

    /dev/rdsk/c0t6000D3100000650000000000000017C7d0s2

    Total Path Count: 8

    Operational Path Count: 8

    /dev/rdsk/c0t6000D3100000650000000000000017C6d0s2

    Total Path Count: 8

    Operational Path Count: 8

    /dev/rdsk/c0t6000D3100000650000000000000017C5d0s2

    Total Path Count: 8

    Operational Path Count: 8

    /dev/rdsk/c0t6000D3100000650000000000000017C4d0s2

    Total Path Count: 8

    Operational Path Count: 8

    root@hadrian:/# mkdir /vibe_lv

root@hadrian:/# mount /dev/dsk/c0t6000D3100000650000000000000017C8d0s2 /vibe_lv

    root@hadrian:/# df -k

    Filesystem kbytes used avail capacity Mounted on

    [snip]

/dev/dsk/c0t6000D3100000650000000000000017C8d0s2

    1032667066 9785996 1012554400 1% /vibe_lv
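If the mount is refused because the UFS file system in the Replay View was captured while mounted, a consistency check can first be run against the raw device (the device name below is the one from this example):

root@hadrian:/# fsck -y /dev/rdsk/c0t6000D3100000650000000000000017C8d0s2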

6.6 ZFS Live Volume

A Solaris server utilizing the ZFS file system is capable of taking both local ZFS-based snapshots and

    triggering Storage Center Replays for the associated volumes.


    The general recommendation is to keep a sufficient number of recent ZFS snapshots for quick access and

    recovery and keep longer retention snapshots as Replays on the Storage Center.

6.7 ZFS Considerations

ZFS Snapshots can be created almost instantly and initially these snapshots consume no additional disk

    space within the pool. However, as data within the active dataset changes, the ZFS snapshot consumes

    disk space by continuing to reference the old data and so prevents the space from being freed.

    ZFS Snapshots consume space inside the allocated volume space unlike Replays that exist outside the

    allocated space. Hence, the volume can fill with ZFS snapshots if too many are kept for too long.

    The Storage Center and Data Progression cannot distinguish between ZFS Snapshot data and active data.

    For example, configure a ZFS Snapshot every 12 hours and keep it for 5 days. This provides the ability to

    quickly recover files from human error.
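A minimal sketch of such a 12-hour snapshot cycle using native ZFS commands (the dataset name pool01/data and the snapshot naming scheme are assumptions; in practice a cron job or the Solaris auto-snapshot service would drive the schedule, with snapshots older than the five day window destroyed):

root@hadrian:/# zfs snapshot pool01/data@`date +%Y%m%d-%H%M`
root@hadrian:/# zfs list -t snapshot -r pool01
root@hadrian:/# zfs destroy pool01/data@20131001-0000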

    At the Storage Center level, configure a Replay for once a week to keep for 4 weeks and twice a month to

    keep for 12 weeks. This provides longer retention for business needs.

    In this situation, the ZFS snapshots will be small or roll over frequently. Since the Replays are keeping a

    much longer delta, they will likely be larger. However, the Storage Center and Data Progression can work

    to move this data to the lowest tier of storage and it will not consume allocated space.

6.8 Mapping a Live Volume ZFS Replay View to the same Solaris server

A Dell Compellent Replay View volume mapped back to the same Solaris server will NOT show up as a candidate ZFS pool target because the ZFS pool GUID is already in use on that server.

The ZFS pool GUID of a disk listed in the mpathadm output can be viewed with the following command.

    root@hadrian:/# zdb -l /dev/rdsk/c0t6000D3100000650000000000000017C7d0s0

    ------------------------------------------

    LABEL 0

    ------------------------------------------

    timestamp: 1381146274 UTC: Mon Oct 7 11:44:34 2013

    version: 34

    name: 'pool01'

    state: 0

    txg: 27

    pool_guid: 12837671566150668190

    hostid: 2248979974

    hostname: 'hadrian'

    top_guid: 14604095093096813380


    guid: 14291772779925177758

    vdev_children: 1

    vdev_tree:

    type: 'mirror'

    id: 0

    guid: 14604095093096813380

    metaslab_array: 27

    metaslab_shift: 26

    ashift: 9

    asize: 10724048896

    is_log: 0

    create_txg: 4

    children[0]:

    type: 'disk'

    id: 0

    guid: 14291772779925177758

    path: '/dev/dsk/c0t6000D3100000650000000000000017C7d0s0'

    devid: 'id1,ssd@n6000d3100000650000000000000017c7/a'

    phys_path: '/scsi_vhci/ssd@g6000d3100000650000000000000017c7:a'

    whole_disk: 1

    create_txg: 4

    children[1]:

    type: 'disk'

    id: 1

    guid: 1069479611183447092

    path: '/dev/dsk/c0t6000D3100000650000000000000017C6d0s0'

    devid: 'id1,ssd@n6000d3100000650000000000000017c6/a'

    phys_path: '/scsi_vhci/ssd@g6000d3100000650000000000000017c6:a'

    whole_disk: 1

    create_txg: 4

    ------------------------------------------

    LABEL 1 - CONFIG MATCHES LABEL 0

    ------------------------------------------

    [snip]

6.9 Mapping a Live Volume ZFS Replay View to an alternate Solaris server

A Dell Compellent ZFS Replay View volume CAN be mapped to an alternate Solaris server. The devfsadm command may be used to discover the new volume, and the zpool import command will then scan for it and identify it as a ZFS pool. The pool may then be imported using either its name or its ZFS pool GUID, both of which are shown in the zpool import output. In the scenario below, the host pinto imports a 100GB ZFS pool named vibe100gbzfs that was previously exported from the host vibe.


    pinto.techsol.beer.town# zpool import

    pool: vibe100gbzfs

    id: 17566337651005486195

    state: ONLINE

    status: The pool was last accessed by another system.

action: The pool can be imported using its name or numeric identifier and the '-f' flag.

pinto.techsol.beer.town# zpool import -f vibe100gbzfs
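The numeric identifier shown in the zpool import output can be used in place of the pool name; a brief sketch using the id from the example above:

pinto.techsol.beer.town# zpool import -f 17566337651005486195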

6.10 Appendix Solaris

Oracle Solaris 11 Info Library

http://www.oracle.com/technetwork/server-storage/solaris11/documentation/index.html

http://docs.oracle.com/cd/E23824_01/

http://www.oracle.com/technetwork/server-storage/solaris11/documentation/solaris-11-cheat-sheet-1556378.pdf

    Sun StorEdge SAN Foundation 4.4 Documentation

    http://docs.oracle.com/cd/E19310-01/index.html

    http://www.oracle.com/technetwork/documentation/san-software-194281.html

    Sun StorEdge SAN Foundation 4.4 Software Download

    http://thamurali.blogspot.com/2012/08/where-is-san44x-software-san-foundation.html

    Oracle Solaris Administration: SAN Configuration and Multipathing

    http://docs.oracle.com/cd/E23824_01/html/E23097/toc.html

    http://docs.oracle.com/cd/E23824_01/html/E23097/getmw.html#getmr


    7 Live Volume Best Practices with AIX

7.1 Live Volume Setup

Storage Center A is the Primary Live Volume Storage Center (operating in Legacy port mode)

    Storage Center B is the Secondary Live Volume Storage Center (operating in Legacy port mode)

    Figure 20

7.2 AIX Server fiber channel zoning

As per Dell Compellent Best Practices for AIX, the server is mapped to the FEP/FER pairs on both the

    primary and secondary Live Volume Storage Centers.

7.3 AIX 6.1 ML2 server setup

The AIX server, named tyrant, is mapped to Storage Center A and Storage Center B for Live Volume testing; both Storage Centers are running in legacy mode on the fiber channel and iSCSI front end.


7.4 Added Dell Compellent ODM/PCM to the server and rebooted

The addition of the Dell Compellent Object Database Manager (ODM) Path Control Module (PCM) allows AIX to recognize the Dell Compellent volumes as multipath capable, increases the queue depth per disk to 32, and identifies Dell Compellent volumes by name in the output of the lsdev -Cc disk command.
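A quick way to confirm the ODM/PCM has taken effect is to list the disks and check the queue depth on one of them; a brief sketch (hdisk23 is an example device name):

# lsdev -Cc disk

# lsattr -El hdisk23 -a queue_depth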

7.5 MPIO algorithm should be fail_over on each hdisk used for Live Volume

By default, MPIO uses the round_robin algorithm when sending data to a multipathed Dell Compellent volume. To avoid writing to the secondary Storage Center, the algorithm should be changed to fail_over on each hdisk before the disk is made part of a Volume Group under the AIX Logical Volume Manager.

If the disk is already part of a Volume Group, the -P option can be used to make the change, but a server reboot will be needed for the parameter change to take effect. Note: lsattr -HE -l hdisk# will show the algorithm as changed even before the server has been rebooted.

The following script MAY be used to configure the various Dell Compellent presented volumes; it sets the algorithm of every Dell Compellent volume to fail_over. The script is provided AS-IS without any support or warranty of any kind.
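A minimal sketch of such a loop, assuming the Dell Compellent ODM/PCM identifies Compellent volumes in the lsdev -Cc disk description (the grep pattern is an assumption and should be adjusted to match the actual description):

#!/usr/bin/ksh
# Sketch: set algorithm=fail_over on every Dell Compellent hdisk.
# If a disk is busy (already in a Volume Group), retry with -P; a reboot is then required.
for DISK in $(lsdev -Cc disk | grep -i compellent | awk '{print $1}')
do
chdev -l ${DISK} -a algorithm=fail_over || chdev -l ${DISK} -a algorithm=fail_over -P
done

For an individual hdisk that is already in use, the busy error and the -P alternative look as follows.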

    # chdev -l hdisk23 -a algorithm=fail_over

    Method error (/usr/lib/methods/chgdisk):

    0514-062 Cannot perform the requested function because the

    specified device is busy.

    OR

    # chdev -l hdisk23 -a algorithm=fail_over -P

    hdisk23 changed

    Reboot the AIX server for changes to take effect.

7.6 Volume Group, Logical Volume and JFS2 file system creation steps

#> mkvg -S -y sc11 hdisk23

0516-1254 mkvg: Changing the PVID in the ODM.

sc11

#> lspv hdisk23

    PHYSICAL VOLUME: hdisk23 VOLUME GROUP: sc11

    PV IDENTIFIER: 0000093ef663e9aa VG IDENTIFIER

    0000093e0000d70000000129f663ea2d


    PV STATE: active

    STALE PARTITIONS: 0 ALLOCATABLE: yes

    PP SIZE: 256 megabyte(s) LOGICAL VOLUMES: 1

    TOTAL PPs: 1999 (511744 megabytes) VG DESCRIPTORS: 2

    FREE PPs: 0 (0 megabytes) HOT SPARE: no

    USED PPs: 1999 (511744 megabytes) MAX REQUEST: 256 kilobytes

    FREE DISTRIBUTION: 00..00..00..00..00

    USED DISTRIBUTION: 400..400..399..400..400

    MIRROR POOL: None

    #> mklv -t jfs2 -y lv500GB sc11 1999

    lv500GB

#> mkdir /LV500gb

    #> crfs -v jfs2 -a log=INLINE -d lv500GB -m /LV500gb

    File system created successfully.

    523485388 kilobytes total disk space.

    New File System size is 1048051712

    #> mount /dev/lv500GB /LV500gb
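A quick check that the new file system is mounted with the expected size (a sketch; the exact output will vary):

#> df -g /LV500gb

#> lsfs /LV500gb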


    8 Live Volume Best Practices with HP-UX

8.1 Live Volume Setup

Storage Center A is the Primary Live Volume Storage Center (operating in Legacy port mode)

    Storage Center B is the Secondary Live Volume Storage Center (operating in either Legacy/Virtual port

    mode)

    Figure 21

8.2 Set the Load Balancing Policy

By default, all paths presented to HP-UX for a Volume inherit a round_robin load balancing policy. In a Live Volume configuration, the paths presented from Storage Center B are proxied data paths: Storage Center B is not actually serving any Volume access, it passes the IO requests to the primary Storage Center (Storage Center A) for fulfillment. These proxied paths introduce latency and should be avoided by setting the HP-UX load balancing policies as follows.

The load_bal_policy attribute on every Dell Compellent Storage Center presented Volume needs to be set to weighted_rr. Additionally, the wrr_path_weight attribute on the proxied paths presented from Storage Center B needs to be set to 0 (the default is 1).

Two volumes are presented from Storage Center A, each on two paths. In the default state, the configuration looks as follows.


    bash-4.2# ioscan -m dsf

    Persistent DSF Legacy DSF(s)

    ========================================

    /dev/rdisk/disk24 /dev/rdsk/c35t0d1

    /dev/rdsk/c45t0d1

    /dev/rdisk/disk25 /dev/rdsk/c39t0d1

    /dev/rdsk/c47t0d1

    These commands return the respective LUN instance ID and LUN path for these devices.

    bash-4.2# ioscan -kfnN | grep lunpath | grep disk24 | awk '{print $2,$3}'

    38 0/2/1/0/4/0.0x5000d31000006917.0x4001000000000000

    27 0/2/1/0/4/1.0x5000d31000006919.0x4001000000000000

    bash-4.2# ioscan -kfnN | grep lunpath | grep disk25 | awk '{print $2,$3}'

    39 0/2/1/0/4/0.0x5000d31000006909.0x4001000000000000

    29 0/2/1/0/4/1.0x5000d3100000690b.0x4001000000000000

    And these commands identify the respective (default) load_bal_policy for each device.

bash-4.2# scsimgr get_attr -D /dev/rdisk/disk24 -a load_bal_policy | grep current

current = round_robin

bash-4.2# scsimgr get_attr -D /dev/rdisk/disk25 -a load_bal_policy | grep current

current = round_robin

Live Volume is established between Storage Center A and Storage Center B using the Dell Compellent Enterprise Manager. The same volumes are then presented from Storage Center B back to the host, the command insf -e is executed, and the following output is captured (observe the additional paths for devices disk24 and disk25).

    /dev/rdisk/disk24 /dev/rdsk/c35t0d1

    /dev/rdsk/c45t0d1

    /dev/rdsk/c8t0d1

    /dev/rdsk/c11t0d1

    /dev/rdisk/disk25 /dev/rdsk/c39t0d1

    /dev/rdsk/c47t0d1

    /dev/rdsk/c86t0d1

    /dev/rdsk/c87t0d1

    These commands show the additional LUN instance ID and LUN paths.


    bash-4.2# ioscan -kfnN | grep lunpath | grep disk24 | awk '{print $2,$3}'

    38 0/2/1/0/4/0.0x5000d31000006917.0x4001000000000000

    46 0/2/1/0/4/0.0x5000d3100002cc1b.0x4001000000000000

    48 0/2/1/0/4/0.0x5000d3100002cc1c.0x4001000000000000

    27 0/2/1/0/4/1.0x5000d31000006919.0x4001000000000000

    bash-4.2# ioscan -kfnN | grep lunpath | grep disk25 | awk '{print $2,$3}'

    39 0/2/1/0/4/0.0x5000d31000006909.0x4001000000000000

    43 0/2/1/0/4/0.0x5000d3100002cc07.0x4001000000000000

    42 0/2/1/0/4/0.0x5000d3100002cc08.0x4001000000000000

    29 0/2/1/0/4/1.0x5000d3100000690b.0x4001000000000000

The following script MAY be used to configure the various Dell Compellent presented volumes; it sets the queue depth (max_q_depth) to 32, the IO retries (max_retries) to 60, and the load_bal_policy to weighted_rr on each volume. The script is provided AS-IS without any support or warranty of any kind.

    for i in 24 25

    do

    scsimgr save_attr -D /dev/rdisk/disk${i} -a max_q_depth=32

    scsimgr save_attr -D /dev/rdisk/disk${i} -a max_retries=60

    scsimgr save_attr -D /dev/rdisk/disk${i} -a load_bal_policy=weighted_rr

    done

    Additionally, these commands are applied to the Storage Center B paths ONLY to disable IO from

    traversing these paths.

    for i in 48 27 42 29

    do

scsimgr save_attr -C lunpath -I ${i} -a wrr_path_weight=0

    done
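The weights can then be confirmed per lunpath instance; a brief sketch using the disk24 instances from the example above:

for i in 38 46 48 27

do

scsimgr get_attr -C lunpath -I ${i} -a wrr_path_weight | grep current

done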

    The final result should look as follows.

=== info for /dev/rdisk/disk24

LUN Path: 0/2/1/0/4/0.0x5000d31000006917.0x4001000000000000 Instance: 38, wrr_path_weight: 1

LUN Path: 0/2/1/0/4/0.0x5000d3100002cc1b.0x4001000000000000 Instance: 46, wrr_path_weight: 1

LUN Path: 0/2/1/0/4/0.0x5000d3100002cc1c.0x4001000000000000 Instance: 48, wrr_path_weight: 0


LUN Path: 0/2/1/0/4/1.0x5000d31000006919.0x4001000000000000 Instance: 27, wrr_path_weight: 0

=== info for /dev/rdisk/disk25

LUN Path: 0/2/1/0/4/0.0x5000d31000006909.0x4001000000000000 Instance: 39, wrr_path_weight: 1

LUN Path: 0/2/1/0/4/0.0x5000d3100002cc07.0x4001000000000000 Instance: 43, wrr_path_weight: 1

LUN Path: 0/2/1/0/4/0.0x5000d3100002cc08.0x4001000000000000 Instance: 42, wrr_path_weight: 0

LUN Path: 0/2/1/0/4/1.0x5000d3100000690b.0x4001000000000000 Instance: 29, wrr_path_weight: 0

Finally, note that the destination Storage Center B can operate in either Legacy or Virtual port mode. In Virtual port mode, Live Volume replicated volumes will be presented on all paths from Storage Center B to the host, resulting in each volume being visible on six paths instead of the four shown above. This does NOT negate the need to configure the load_bal_policy or wrr_path_weight attributes.


    9 Live Volume Disaster Recovery

9.1 Overview

As with all replications on a Dell Compellent Storage Center, Live Volume replications can also be protected by an Enterprise Manager Disaster Recovery plan. This plan is necessary to bring the volume back online in the event of an unplanned failure of the Primary Live Volume controller or data center. The primary Live Volume Storage Center may be in the same facility or across sites, so this failure is defined as anything that prevents the two Live Volume Storage Centers from communicating.

9.2 Disaster Recovery Plan

When using Live Volume, a pre-defined Disaster Recovery Plan can still be established for a Live Volume just as for a regular replicated Volume. This allows users to run their recovery plan if the primary Live Volume controller is taken off line. For information on using Dell Compellent recovery plans, see the Enterprise Manager User Guide.

Note: A recovery plan can be created for each Dell Compellent Storage Center involved with the Live Volume, so that the DR plan can be executed whether the Live Volume is Primary on Storage Center A or B.

9.3 Fracture Overview

When the secondary Live Volume Storage Center cannot talk to the primary Live Volume Storage Center, Live Volume access on the secondary Storage Center is taken off line. This is because the secondary Storage Center is not actually serving any volume access; it only proxies the access to the Live Volume on the primary Storage Center.

The process of making the Live Volume read/writable on the secondary Storage Center when the primary Storage Center is inaccessible is known as fracturing.

At present, a Live Volume can be fractured only with the assistance of Dell Compellent Co-Pilot, because Live Volume fracturing requires using the Dell Compellent CPI.

9.4 Fracture Recovery

Once a Live Volume is fractured on the destination controller, there is a risk of a split-brain scenario on the Live Volume when the primary Live Volume Storage Center comes back on line and is reachable. Because it was the original Live Volume primary, the returning controller will bring the Live Volume back on line, and servers will again have read/write access to it.


10 Use Cases

The following describes some examples of how Live Volume can be used. Live Volume is not limited to

    these use cases.

10.1 Zero downtime SAN maintenance and data migration

By utilizing Live Volume, you can perform maintenance activities on a Storage Center, such as taking a Storage Center off line to move its location, performing service-affecting enclosure or disk firmware updates, or moving volumes to a new SAN, without any down time.

10.1.1 Requirements

The requirements for this operation would be the following:

    MPIO installed and appropriately configured on the host computers.

    Server(s) properly zoned into both Dell Compellent Storage Centers.

    Server(s) configured on both Storage Centers.

    At least a 1Gb low latency replication link between Storage Centers.

    Summary: In advance of a planned outage, Live Volume can non-disruptively migrate all volumes from

    one Storage Center to another, enabling continuous operation for all applications and volumes even

    after one Storage Center has completely powered down.

    Operation: In an on-demand, operator-driven process, Live Volume can transparently move volumes

    from one Storage Center to another. The applications operate continuously. This enables several options

    for improved system operation:

    Redefine remote site as Primary for all volumes on local site

    Shut down local site

    Reverse process after planned outage is completed


    Figure 22

    Figure 23


10.2 Storage migration for virtual machine migration

As VMware, Hyper-V, or XenServer virtual machines are migrated from data center to data center, Live

    Volume can automatically migrate the related volumes to optimize performance and minimize I/O

    network overhead.

    Live Volume continuously monitors for changes in I/O traffic for each volume and non-disruptively moves

    the primary storage to the Storage Center for optimum efficiency.

    Migrate the virtual machine using the server virtualization software

    Live Volume will track the changes in I/O traffic mapping and will perform a Primary/Secondary

    swap after a fixed amount of time and data have been transferred

    Figure 24 Storage Follows the Application (Server Virtualization)

10.2.1 Requirements

The requirements for this operation would be the following:

    Server(s) properly zoned into both Dell Compellent Storage Centers.


Server(s) configured on both Storage Centers.

1Gb LAN/WAN interconnect(s) between Storage Centers with sub-10ms latency.

Low latency 1Gb iSCSI or Fiber Channel connectivity between the Storage Centers for Replication and Proxy traffic.

10.3 Disaster avoidance

In anticipation of an unplanned outage (like a hurricane), Live Volume can migrate data to remote systems

    before the local system has an outage. Live Volume used in this manner will prevent data loss and will

    enable an extremely rapid restart at the remote site.

    Configurations: Note that if Storage Centers are over 60 miles apart, the use of dark fiber is typically not

    available. If not using fiber connectivity, latencies cannot be guaranteed and may prevent the operation

    from being non-disruptive.

    Operation: In an on-demand, operator-driven process, Live Volume can transparently move volumes from

    one Storage Center to another. The applications operate continuously. This enables several options for

    improved system operation:

    Redefine remote site as Primary for all volumes on local site

    Shut down applications on local site

    Restart applications on remote site

    Reverse process after risk of potential outage is gone


    Figure 25 Disaster avoidance

10.4 On-demand load distribution

Transparently distribute workload, balance storage utilization, or balance I/O traffic between any two

    Storage Centers within synchronous distances.

    Configurations: Note that Storage Centers must be connected via high bandwidth and low latency

    connections.

    Operation: In an on-demand, operator-driven process, Live Volume can transparently move volumes from

    one Storage Center to another. The applications operate continuously. This enables several options for

    improved system operation:

    Distribution of I/O workload

    Distribution of storage

Distribution of front end load traffic

    Reallocation of workload to match capabilities of heterogeneous systems


Figure 26 On-demand load distribution

10.5 Cloud computing

Summary: Transparently distribute workload, balance storage utilization, or balance I/O traffic between

    multiple Storage Centers within a data center, enabling continuous flexibility to meet changes in workload

and to provide a higher level of system up-time.

    Configurations: Live Volumes can be created between any two Storage Centers in a data center. Each

    Storage Center can have multiple Live Volumes, each potentially connecting to a different Storage Center.

    Operation: In an on-demand, operator-driven process, Live Volume can transparently move volumes

    from one Storage Center to another. The applications operate continuously. This enables several options

    for improved system operation:

    Distribution of I/O workload

    Distribution of storage

Distribution of front end load traffic

    Reallocation of workload to match capabilities of heterogeneous systems


    Figure 27 Cloud computing


11 Replay Manager & Live Volume

Replay Manager is only supported when all servers are mapped to the same Storage Center. If servers are

    split between data centers, then full Replay Manager support is not currently possible. While the consistent

    Replays are replicated to the secondary Storage Center, they are not accessible as recovery points from

    Replay Manager.