
    Here is Your Customized Document

    Your Configuration is

    Manage storage pools

     Model - VNX5300

     

    Storage Type - Unified (NAS and SAN)

    Connection Type - Fibre Channel Switch or Boot from SAN

    Operating System - ESX Server 5i 

    Path Management Software - VMware native

    Document ID - 1428635554847

    Reporting Problems

    To send comments or report errors regarding this document, please email: [email protected]. For issues not related to this document, contact your service provider. Refer to Document ID: 1428635554847.

    Content Creation Date April 9, 2015


    EMC® VNX® Series

    Managing LUNs on your VNX® System

    November, 2014

    This guide describes how to manage LUNs within Unisphere® for EMC® VNX® platforms.

    Topics include:

    u Starting Unisphere
    u Committing VNX for Block Operating Environment (OE) software with Unisphere
    u Configuring cache with Unisphere
    u Enabling storage groups with Unisphere
    u Verifying that each LUN is fully initialized using Unisphere
    u MetaLUNs overview
    u Allocating storage on a new system with the Unisphere LUN Provisioning Wizard
    u Create pool LUNs
    u Create classic LUNs
    u Create a LUNs folder
    u Add LUNs to folders
    u Remove LUNs from a folder
    u Setting LUN properties
    u Set classic LUN write cache or FAST Cache properties
    u Auto assign for a LUN
    u Default owner of a LUN
    u Source LUN
    u Destination LUN definition
    u Verify priority for a LUN
    u Rebuild priority for a LUN
    u Start the Storage Expansion wizard
    u Delete LUNs
    u LUN migration overview
    u Start a LUN migration
    u Cancel (stop) a LUN migration
    u Display the status of active LUN migrations
    u Creating storage groups with Unisphere
    u Making virtual disks visible to an ESXi Server
    u Verifying that native multipath failover sees all paths to the LUNs


    Starting Unisphere

    Procedure

    1. Log in to a host (which can be a server) that is connected through a network to the system’s management ports and that has an Internet browser: Microsoft Internet Explorer, Netscape, or Mozilla.

    2. Start the browser.

    3. In the browser window, enter the IP address of one of the following that is in the same domain as the systems that you want to manage:

    l A system SP with the most recent version of the VNX Operating Environment (OE) installed

    Note

    This SP can be in one of the systems that you want to manage.

    l A Unisphere management station with the most recent Unisphere Server and UIsinstalled

    Note

    If you do not have a supported version of the JRE installed, you will be directed to the

    Sun website where you can select a supported version to download. For information

    on the supported JRE versions for your version of Unisphere, refer to Environment and

    System Requirements in the Unisphere release notes on the EMC Online Support

    website.

    4. Enter your user name and password.

    5. Select Use LDAP if you are using an LDAP-based directory server to authenticate user credentials.

    If you select the Use LDAP option, do not include the domain name.

    When you select the LDAP option, the username / password entries are mapped to an external LDAP or Active Directory server for authentication. Username / password pairs whose roles are not mapped to the external directory will be denied access. If the user credentials are valid, Unisphere stores them as the default credentials.

    6. Select Options to specify the scope of the systems to be managed.

    Global (default) indicates that all systems in the domain and any remote domains can be managed. Local indicates that only the targeted system can be managed.

    7. Click Login.

    When the user credentials are successfully authenticated, Unisphere stores them as the default credentials and the specified system is added to the list of managed systems in the Local domain.

    8. If you are prompted to add the system to a domain, add it now.

    The first time that you log in to a system, you are prompted to add the system to a Unisphere domain. If the system is the first one, create a domain for it. If you already have systems in a domain, you can either add the new system to the existing domain or create a new domain for it. For details on adding the system to a domain, use the Unisphere help.


    Committing VNX for Block Operating Environment (OE) software with Unisphere

    If you did not install a VNX for Block OE update on the system, you need to commit the VNX for Block OE software now.

    Procedure

    1. From Unisphere, select All Systems > System List.

    2. From the Systems page, right-click the entry for the system for which you want to commit the VNX for Block OE and select Properties.

    3. Click the Software tab, select VNX-Block-Operating-Environment, and click Commit.

    4. Click Apply.
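
    If you prefer the command line, the commit can also be performed with the VNX for Block CLI (naviseccli). The following is a sketch only; the SP address 10.1.1.50 and the credentials are placeholders for your own values:

        naviseccli -h 10.1.1.50 -user admin -password mypassword -scope 0 ndu -list
        naviseccli -h 10.1.1.50 -user admin -password mypassword -scope 0 ndu -commit

    The ndu -list output shows the installed VNX for Block OE package and whether it is still uncommitted; ndu -commit commits the active package.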

    Configuring cache with Unisphere

    Procedure

    1. From Unisphere, select All Systems > System List.

    2. From the Systems page, right-click the entry for the system for which you want to verify cache properties and select Properties.

    3. Enable or configure the cache as described in the Unisphere online help.

    Note

    The latest version of Unisphere automatically sets the read and write cache sizes. If 

    your system is running an older version of Unisphere, refer to the system's version of 

    the online help for advice on setting read/write cache values and setting watermarks.

    Enabling storage groups with Unisphere

    You must enable storage groups using Unisphere if only one server is connected to the system and you want to connect additional servers to the system.

    Procedure

    1. From Unisphere, select All Systems > System List.

    2. From the Systems page, right-click the icon for the system, and click Properties.

    3. Click the General tab, and select Storage Groups.

    4. Click OK.

    Verifying that each LUN is fully initialized using Unisphere

    Although the storage group with a new LUN is assigned to the server, the server cannot see the new LUN until it is fully initialized (completely bound). The time the initialization process takes to complete varies with the size of the LUN and other parameters. While a LUN is initializing, it is in a transitioning state, and when the initialization is complete, its state becomes ready.

    To determine the state of a LUN:


    Procedure

    1. From Unisphere, navigate to the LUN you want to verify ( Storage > LUNs ).

    2. Right-click the LUN and click Properties.

    3. Verify that the state of the LUN is Normal.

    If the state is Transitioning, wait for the state to change to Ready before continuing.
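
    The LUN state can also be checked with the VNX for Block CLI. A sketch, with a hypothetical SP address and hypothetical LUN IDs:

        naviseccli -h 10.1.1.50 getlun 25 -state
        naviseccli -h 10.1.1.50 lun -list -l 26 -state

    The first form reports the state of a classic LUN (for example, Binding or Bound); the second reports the state of a pool LUN (for example, Initializing or Ready).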

    MetaLUNs overview

    MetaLUNs are available for classic LUNs only.

    NOTICE 

    EMC strongly recommends that you do not expand LUN capacity by concatenating LUNs

    of different RAID types. Do this only in an emergency situation when you need to add

    capacity to a LUN and you do not have LUNs of the same RAID type or the disk capacity to

    create new ones. Concatenating metaLUN components with a variety of RAID types could

    impact the performance of the resulting metaLUN. Once you expand a LUN, you cannot change the RAID type of any of its components without destroying the metaLUN.

    Destroying a metaLUN destroys all LUNs in the metaLUN, and therefore causes data to be

    lost.

    A metaLUN is a type of LUN whose maximum capacity can be the combined capacities of all the LUNs that compose it. The metaLUN feature lets you dynamically expand the capacity of a single LUN (base LUN) into a larger unit called a metaLUN. Do this by adding LUNs to the base LUN. You can also add LUNs to a metaLUN to further increase its capacity. Like a LUN, a metaLUN can belong to a Storage Group, and can participate in SnapView, MirrorView, and SAN Copy sessions.

    Note

    Thin LUNs cannot be part of a metaLUN.

    A metaLUN may include multiple sets of LUNs and each set of LUNs is called a component. The LUNs within a component are striped together and are independent of other LUNs in the metaLUN. Any data that is written to a metaLUN component is striped across all the LUNs in the component. The first component of any metaLUN always includes the base LUN.

    You can expand a LUN or metaLUN in two ways — stripe expansion or concatenate expansion:

    u A stripe expansion takes the existing data on the LUN or metaLUN you are expanding, and restripes (redistributes) it across the existing LUNs and the new LUNs you are adding. The stripe expansion may take a long time to complete.

    u A concatenate expansion creates a new metaLUN component that includes the new expansion LUNs, and appends this component to the existing LUN or metaLUN as a single, separate, striped component. No restriping of data between the original storage and the new LUNs occurs. The concatenate operation completes immediately.

    During the expansion process, the host is able to process I/O to the LUN or metaLUN, and access any existing data. It does not, however, have access to any added capacity until the expansion is complete. Whether you can actually use the increased user capacity of the metaLUN depends on the operating system running on the servers connected to the storage system.


    Allocating storage on a new system with the Unisphere LUN Provisioning Wizard

    NOTICE 

    If you have a Hyper-V or ESX server, perform this procedure on your Hyper-V or ESX server.

    Procedure

    1. Select the system for which you want to allocate storage.

    2. Select Storage > LUNs > LUNs.

    3. Under the Wizards list, select the LUN Provisioning Wizard.

    4. On the Select Servers page, select Assign LUNs to the Servers, and select the servers or virtual machines that will have access to the new LUNs.

    5. Select the system in which the new LUNs will reside.

    6. Create a LUN:

    a. Select a pool or RAID group in which to create a LUN, or create a new pool for the LUN.

    We recommend you use an existing pool or create a pool instead of a RAID group because a pool supports options, such as Fully Automated Storage Tiering (FAST) and Thin Provisioning, which a RAID group does not support.

    b. If you are creating a pool LUN and you want the LUN to be a thin LUN, select Thin LUN.

    The Thin LUN option is available and will be selected by default if the Thin Provisioning enabler is installed. To learn about pools and thin LUNs, click the ? icon next to Thin LUN.

    c. Select the properties for the LUN.

    d. Add the LUNs to a user-defined folder or do not place them in a folder.

    e. Click Finish to create the LUN.

    7. Verify that the server was assigned to the storage group containing the LUNs you created:

    l If you know the name of the storage group in which the LUNs reside, from Unisphere, select Storage > Storage Pools.

    l If you know the name of the server or virtual machine to which the storage group is assigned, from Unisphere, select Storage > LUNs and confirm that the new LUNs are listed.

    If you do not see any of the LUNs you just created, you may not have selected the Assign LUNs to a server option in the Select Servers page of the LUN Provisioning wizard. You can use the Storage Assignment Wizard for Block to assign the LUNs to a server.

    8. Create a hot spare policy (a RAID group with a hot spare RAID Type) as described in the Unisphere online help. To do this, select System > Hardware > Hot Spare Policy.

    A hot spare is a single disk that serves as a temporary replacement for a failed disk in a 6, 5, 3, 1, or 1/0 RAID group. Data from the failed disk is reconstructed automatically on the hot spare from the parity or mirrored data on the working disks in

    the LUN, so the data on the LUN is always accessible.


    Note

    Only RAID group LUNs can be hot spares.

    Note

    Vault drives (the first 4 drives) cannot be qualified as hot spares.

    Create pool LUNs

    Lets you create one or more pool LUNs of a specified size within a storage pool and specify details such as LUN name, and the number of LUNs to create.

    Procedure

    1. In the systems drop-down list on the menu bar, select a storage system.

    2. Select Storage > LUNs > LUNs.

    3. In the LUNs view, click Create.

    4. In the Create LUN dialog, under Storage Pool Properties:

    a. Select Pool.

    b. Select a RAID type for the pool in which the LUN will be created.

    For Pool LUNs, only RAID 6, RAID 5, and RAID 1/0 are valid. RAID 5 is the default RAID type. If mixed tiers are available that use different RAID types, this field displays Mixed.

    If available, the software populates Storage Pool for new LUN with a list of pools that have the specified RAID type, or displays the name of the selected pool. The Capacity section displays information about the selected pool. If there are no pools with the specified RAID type, click New to create a new one.

    5. In LUN Properties, the Thin checkbox is selected by default. If you do not want to create a thin LUN, clear the Thin checkbox.

    6. Assign a User Capacity and ID to the LUN you want to create.

    7. If you want to create more than one LUN, select a number in Number of LUNs to create.

    Note

    For multiple LUNs, the software assigns sequential IDs to the LUNs as they are

    available. For example, if you want to create five LUNs starting with LUN ID 11, the LUN

    IDs might be 11, 12, 15, 17, and 18.

    8. In LUN Name, either specify a name or select Automatically assign LUN IDs as LUN Names.

    9. Choose one of the following:

    l Click Apply to create the LUN with the default advanced properties, or 

    l Click the Advanced tab to assign the properties yourself.

    10. Assign optional advanced properties for the LUN:

    a. Select a default owner (SP A or SP B) for the new LUN or accept the default value of  Auto.

    b. Set the FAST VP tiering policy option.


    11. Click Apply to create the LUN, and then click Cancel to close the dialog box.

    An icon for the LUN is added to the LUNs view window.
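
    The same kind of pool LUN can also be created with the VNX for Block CLI. This is a sketch only; the SP address, pool name, LUN ID, and LUN name are assumptions to replace with your own values:

        naviseccli -h 10.1.1.50 lun -create -type Thin -capacity 100 -sq gb -poolName "Pool 0" -l 25 -name "AP_Data_25"

    This creates a 100 GB thin LUN with ID 25 in the pool named Pool 0; use -type nonThin for a thick pool LUN.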

    Create classic LUNs

    If you need to create one or more LUNs of a specified size within a RAID group and specify details such as SP owner, element size, and the number of LUNs to create, you may want to determine if the RAID Group has enough free space to accommodate the new LUNs.

    Note

    If no LUNs exist on a storage system connected to a NetWare server, refer to the Release

    Notice for the NetWare Unisphere Agent for information on how to create the first LUN.

    If you are creating LUNs on a storage system connected to a Solaris server, and no failover software is installed, refer to the Storage System Host Utilities for Solaris Administrator’s Guide for information on how to create the first LUN.

    If the LUNs you are creating reside on a storage system connected to a VMware ESX

    server, and these LUNs will be used with layered applications such as SnapView, configure the LUNs as raw device mapping volumes set to physical compatibility mode.

    You may receive a message that this ID is already being used by a private classic LUN. If you get this message, assign a new ID, keeping in mind that the system assigns high numbers to private LUN IDs.

    Procedure

    1. Select Storage > LUNs > LUNs.

    2. In the LUNs view, click Create.

    3. In the General tab, under Storage Pool Properties, select RAID Group.

    4. Select a RAID type that you want to assign to the LUN.

    5. If there are no RAID groups with the specified RAID type, click New to create a new RAID group.

    The software populates Storage Pool for new LUN with a list of RAID Groups with the specified RAID type, or displays the name of the selected RAID group. The Capacity section displays information about the selected RAID group.

    6. In LUN Properties, assign a user capacity and ID to the LUN you want to create.

    If you want to create more than one LUN, select a number in Number of LUNs to create.

    For multiple LUNs, the software assigns sequential IDs to the LUNs as they are

    available. For example, if you want to create five LUNs, starting with LUN ID 11, theLUN IDs may be similar to 11, 12, 15, 17, and 18.

    7. In LUN Name, either type a name or select the Automatically assign LUN IDs as LUN Names checkbox.

    8. Choose one of the following:

    l Click Apply to create the LUN with the default advanced properties.

    l Click the Advanced tab to manually assign the properties.

    9. Assign advanced properties for a classic LUN.

    a. By default, the Use SP Write Cache checkbox is selected to enable write caching for the classic LUN. Clear the checkbox if you want to disable write caching.


    b. If you want to perform an initial background verify to eliminate latent soft media errors on the newly bound LUN, do NOT select the No Initial Verify checkbox (cleared is the default).

    c. If you do NOT want to perform the background verify operation, select the No Initial Verify checkbox.

    NOTICE 

    Do not send data to the LUN until the background verify operation is complete.

    d. In the Rebuild Priority list, select a rebuild priority of either ASAP, High (default), Medium, or Low.

    e. In the Verify Priority list, select a verify priority of either ASAP, High, Medium (default), or Low.

    f. Select a default owner (SP A or SP B) for the new LUN.
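
    For reference, a classic LUN can also be bound with the VNX for Block CLI bind command. A sketch, assuming RAID group 0 is a RAID 5 group and that LUN ID 30 and the 100 GB capacity are free to use (hypothetical values):

        naviseccli -h 10.1.1.50 bind r5 30 -rg 0 -cap 100 -sq gb -sp a

    Rebuild priority, verify priority, and caching defaults apply unless you change them afterward in the LUN's properties.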

    Create a LUNs folder

    Lets you create a new user-defined LUNs folder, which is a folder created by you in order to organize your LUNs. You can modify user-defined folders.

    Procedure

    1. In the systems drop-down list on the menu bar, select a system.

    2. Select Storage > LUNs > LUN Folders.

    3. Click Create.

    4. In Folder Name, enter a name for the new folder.

    EMC recommends that the name you select is one that will help you identify the LUNs in the folder. For example, you might use a name of Accounts Payable.

    5. Click OK to save the changes and close the dialog box.

    Add LUNs to folders

    Lets you add a LUN to one or more folders.

    Procedure

    1. In the systems drop-down list on the menu bar, select the system that includes the folders.

    2. Select Storage > LUNs > LUNs.

    3. In the LUNs view, right-click the icon for a LUN, and then click Select Folders.

    4. In Available Folders, double-click the folder to which you want to add the LUN.

    The folder moves into the Selected Folders list.

    5. Click OK  to save the changes and close the dialog box.

    The software adds the LUN to the specified folder.

    Remove LUNs from a folder

    Lets you remove LUNs from the selected folder.


    Procedure

    1. In the systems drop-down list on the menu bar, select a system.

    2. Select Storage > LUNs > LUN Folders.

    3. In the Folders view, right-click the folder from which you want to remove LUNs and select Select LUNs.

    4. In the LUNs tab, under Selected LUNs, select one or more LUNs and click Remove.

    5. Click OK  to save the changes and close the dialog box.

    The software removes the selected LUNs from the folder.

    Setting LUN properties

    Note

    In this topic, the term LUN refers to both pool LUNs and classic LUNs.

    The LUN properties determine the individual characteristics of a LUN. You set LUN properties when you create the LUN. You can also change some LUN properties after the LUN is created.

    Procedure

    1. In the systems drop-down list on the menu bar, select a system.

    2. Select Storage > LUNs > LUNs.

    3. In the LUNs view, select a LUN and click Properties.

    4. Click one of the property tabs to view and change the current properties for the LUN.

    Set classic LUN write cache or FAST Cache properties

    NOTICE

    For a classic LUN to use write cache, write cache must be enabled for the system. For a

    classic LUN to use the FAST Cache, FAST Cache must be configured on the system and

    enabled on the LUN.

    Procedure

    1. In the systems drop-down list on the menu bar, select the storage system.

    2. Select Storage > LUNs > LUNs.

    3. Right-click the icon for the classic LUN, and then click Properties.

    4. Select the Cache tab.

    5. By default, the Use SP Write Cache checkbox is selected to enable write caching for the classic LUN. Clear the checkbox if you want to disable write caching.

    6. Select the FAST Cache checkbox to enable the FAST Cache for the classic LUN, or clear it to disable the FAST Cache for the classic LUN.

    You should not enable the FAST Cache for write intent log LUNs and Clone Private LUNs. Enabling the FAST Cache for these LUNs is a suboptimal use of the FAST Cache and may degrade the cache's performance for other LUNs.


    Note

    If the FAST Cache enabler is not installed, FAST Cache is not displayed.

    7. Click Apply to save changes without closing the dialog box, or click OK to save changes and close the dialog box.

     Auto assign for a LUN

    NOTICE 

    Enable this LUN property only if the connected host does not use failover software. The

    auto assign property is ignored when the storage system's failover mode for an initiator is

    set to 1. This property will not interfere with PowerPath's control of a LUN.

    Auto assign enables or disables (default) auto assign for a LUN. Auto assign controls the ownership of the LUN when an SP fails in a storage system with two SPs. You enable or disable auto assign for a LUN when you bind it. You can also enable or disable it after the LUN is created without affecting the data on it.

    With auto assign enabled, if the SP that owns the LUN fails and the server tries to access that LUN through the second SP, the second SP assumes ownership of the LUN to enable access. The second SP continues to own the LUN until the failed SP is replaced and the storage system is powered up. Then, ownership of the LUN returns to its default owner. If auto assign is disabled in this situation, the second SP does not assume ownership of the LUN, and access to the LUN does not occur.

    If you are running failover software on a server connected to the LUNs in a storage system, you must disable auto assignment for all LUNs that you want the software to fail over when an SP fails. In this situation, the failover software, not auto assign, controls ownership of the LUN in a storage system with two SPs.

    Note

    The auto assign property is not available for a Hot Spare LUN.

    Default owner of a LUN

    The default owner is the SP that assumes ownership of the LUN when the storage system is powered up. If the storage system has two SPs, you can choose to create some LUNs using one SP as the default owner and the rest using the other SP as the default owner, or you can select Auto, which tries to divide the LUNs equally between SPs. The primary route to a LUN is the route through the SP that is its default owner, and the secondary route is through the other SP.

    If you do not specifically select one of the Default Owner values, default LUN owners are assigned according to RAID Group IDs as follows:

    Table 1  Default LUN owners

    RAID Group IDs     Default LUN owner
    Odd-numbered       SP A
    Even-numbered      SP B


    Note

    The default owner property is unavailable for a Hot Spare LUN.

    Source LUN

    A classic LUN, metaLUN, or thin LUN from which data is moved. After a LUN migration completes, the source LUN is destroyed (becomes private).

    NOTICE 

    The source LUN cannot be:

    u a Hot Spare

    u in the process of being created

    u in the process of expanding 

    u a private LUN

    u a component of a metaLUN.

    Destination LUN definition

    A classic LUN, metaLUN, or thin LUN to which data is moved. After a LUN migration completes, the destination LUN assumes the identity of the source LUN, and the source LUN is destroyed. The capacity of the destination LUN must be equal to or greater than the capacity of the source LUN. The destination can be a different RAID type than that of the source LUN.

    NOTICE 

    The destination LUN cannot be:

    u a Hot Spare

    u in the process of being created

    u in the process of expanding 

    u in a Storage Group

    u a private LUN

    u a LUN that is participating in a MirrorView, SnapView, or SAN Copy session.

    Verify priority for a LUN

    The verify priority is the relative importance of validating the consistency of redundant information in a LUN. The priority dictates the amount of resources the SP devotes to checking LUN integrity versus performing normal I/O. You set the verify priority for a LUN when you create it, and you can change it after the LUN is bound without affecting the data on the LUN.

    If an event happens, such as when an SP fails and the LUN is taken over by the other SP, a background verification begins to check the redundant information within the LUN. Valid verify priorities are ASAP, High, Medium (default), and Low. The ASAP setting checks and verifies as fast as possible, but may degrade storage-system performance.


    Note

    When creating a RAID 0, Disk or Hot Spare LUN, the verify priority property is unavailable.

    Rebuild priority for a LUN

    The rebuild priority is the relative importance of reconstructing data on either a hot spare or a new disk that replaces a failed disk in a LUN. It determines the amount of resources the SP devotes to rebuilding instead of to normal I/O activity. Valid rebuild priorities are:

    Table 2  Rebuild priorities

    Value      Target rebuild rate in GB/hour
    ASAP       0 (as quickly as possible)
    High       12 (default value)
    Medium     6
    Low        4

    Rebuild priorities correspond to target rebuild rates in the table above. Actual time to rebuild a LUN is dependent on I/O workload, LUN size, and LUN RAID type. Each LUN rebuilds at its own specified rate.

    A rebuild operation with an ASAP or High (default) priority restores the LUN faster than one with Medium or Low priority, but may degrade storage system performance.

    You set the rebuild priority for a LUN when you create it, and you can change it after the LUN is bound without affecting the data on the LUN.

    Note

    The rebuild priority property is unavailable for a RAID 0, Disk, or Hot Spare LUN.

    Start the Storage Expansion wizard

    The Storage Expansion wizard is supported for classic LUNs only.

    The RAID Group LUN Expansion Wizard lets you dynamically expand the capacity of new or existing LUNs by combining multiple LUNs into a single unit called a metaLUN. You can add additional LUNs to a metaLUN to increase its capacity even more. The wizard preserves the expanded LUN's data. You do not have to unbind the LUN you want to expand and lose all the data on this LUN. Once you create a metaLUN, it acts like a standard LUN. You can expand it, add it to a Storage Group, view its properties, and destroy it.

    For existing metaLUNs, you can expand only the last component of the metaLUN. If you click a component other than the last one and select Add LUNs, the software displays an error message.

    A metaLUN can span multiple RAID Groups and, depending on expansion type (concatenate or stripe), the LUNs in a metaLUN can be different sizes and RAID Types.


    Note

    The software allows only four expansions per storage system to be running at the same

    time. Any additional requests for expansion are added to a queue, and when one

    expansion completes, the first one in the queue begins.

    Procedure

    1. In the systems drop-down list on the menu bar, select a storage system.

    2. From the task list, under Wizards, select RAID Group LUN Expansion Wizard.

    3. Follow the steps in the wizard, and when available, click the Learn more links for additional information.

    Delete LUNs

    NOTICE 

    Deleting a LUN (classic LUN or pool LUN) will delete all data stored on the LUN. If the LUN is part of a Storage Group, you must remove the LUN from the Storage Group before you

    unbind it. Before unbinding a LUN, make a backup copy of any data on it that you want to

    retain.

    Typically, you delete a LUN only if you want to:

    u Delete a storage pool (RAID group or pool) on a storage system. You cannot delete a storage pool that includes LUNs.

    u Add disks to it. If the LUN is the only LUN in a storage pool, you can add disks to it by expanding the storage pool.

    u Use its disks in a different storage pool.

    u Recreate it with a different capacity of disks.

    In any of these situations, you should make sure that the LUN contains the disks that you want.

    Procedure

    1. To determine which disks make up a LUN, do the following:

    a. In the systems drop-down list on the menu bar, select a system.

    b. Select Storage > LUNs > LUNs.

    c. Open the LUN Properties dialog box by double-clicking the LUN icon, or by selecting the LUN and clicking the Properties button.

    d. Select Disks to view a list of disks.

    2. To delete a LUN, do the following:

    a. In the systems drop-down list on the menu bar, select a system.

    b. Select Storage > LUNs > LUNs.

    c. Right-click the LUN icon, and select Delete, or select the LUN and click the Delete tab.

    d. Click Yes to continue with the operation, or click No to cancel the operation.
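
    The equivalent VNX for Block CLI operations are sketched below with hypothetical LUN IDs; unbind removes a classic LUN, lun -destroy removes a pool LUN, and the -o switch suppresses the confirmation prompt:

        naviseccli -h 10.1.1.50 unbind 30 -o
        naviseccli -h 10.1.1.50 lun -destroy -l 25 -o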


    LUN migration overview

    The LUN migration feature, included in the Unisphere software and the VNX for Block CLI, lets you move the data in one LUN, thin LUN, or metaLUN to another LUN, thin LUN, or metaLUN. You might do this to:

    u Change the type of drive the data is stored on (for example, from more economical NL-SAS to faster SAS, or vice-versa).

    u Select a RAID type that better matches the data usage.

    u Recreate a LUN with more disk space.

    For example, you may have a metaLUN that has been expanded several times by concatenation with other LUNs (not by addition of another entire disk unit), and whose performance suffers as a result. You can use the migration feature to copy the metaLUN onto a new LUN, which, being a single entity and not a group of several entities, provides better performance.

    During a LUN migration, the Unisphere software copies the data from the source LUN to a destination LUN. After migration is complete:

    u The destination LUN assumes the identity (World Wide Name and other IDs) of the source LUN.

    u The source LUN consumes the destination LUN's storage, and frees the storage it consumed in its former storage pool or RAID group.

    u The destination LUN is removed.

    The migration operation detects the zeros on the source LUN and deallocates them on the target LUN, which frees up more storage capacity on the target LUN. For better performance and improved use of space, make sure that the target LUN is a newly created LUN with no existing data.

    Using the Unisphere software, you can start migrations, display and modify migration properties, and display a summary of all current migrations on one storage system or all the systems in the domain. You can also cancel (stop) a migration, which deletes the destination copy and restores the storage system to its original state.

    The number of supported active and queued migrations is based on the storage system type.

    Start a LUN migration

    Lets you configure and start the LUN migration operation. Prior to starting the migration operation, if the source LUN and the destination LUN belong to different SPs, the software trespasses the destination LUN to the SP that owns the source LUN.

    Note

    If the destination LUN is a thin LUN, the migration operation detects the zeros on the

    source LUN and deallocates them on the target LUN which frees up more storage capacity

    on the target LUN. For better performance and improved use of space, make sure that the

    target LUN is a newly created LUN with no existing data.

    Procedure

    1. In the systems drop-down list on the menu bar, select a system.

    2. Select Storage > LUNs > LUNs.


    3. Navigate to the LUN that you want to be the source LUN for the migration operation, right-click it, and select Migrate.

    4. In the Start Migration dialog box, select a migration rate, and then select the participating destination LUN.

    5. Click OK  to start the data migration, or click Cancel to close the dialog box.
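
    The migration can also be started from the VNX for Block CLI. A sketch with hypothetical source and destination LUN IDs:

        naviseccli -h 10.1.1.50 migrate -start -source 25 -dest 30 -rate low

    Valid rates are low, medium, high, and asap; as in Unisphere, a higher rate finishes sooner but takes more array resources away from host I/O.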

    Cancel (stop) a LUN migration

    Lets you cancel an active LUN migration. Canceling a LUN migration deletes the destination copy and restores the storage system to its original state.

    Procedure

    1. Select Storage > LUNs > LUNs and navigate to the source LUN that is participating in the data migration.

    2. Click Properties.

    3. In the LUN Properties dialog box, select the Migration tab, and click Cancel Migration.

    The Unisphere software displays a confirmation dialog box, asking you to confirm the cancel request.

    4. Click Yes to cancel LUN migration.
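
    From the VNX for Block CLI, a migration is cancelled by naming its source LUN. A sketch with a hypothetical LUN ID; -o suppresses the confirmation prompt:

        naviseccli -h 10.1.1.50 migrate -cancel -source 25 -o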

    Display the status of active LUN migrations

    Shows a summary of all the currently active migrations for a particular storage system, or for all storage systems within the domain that support the LUN migration feature.

    u Display status of active migrations for a specific storage system.
    u Display status of active migrations for all supported storage systems in the domain.

    Procedure

    1. In the systems drop-down list on the menu bar, select a system.

    2. Select Storage > LUNs > LUNs.

    3. In the task list, under Block Storage, select LUN Migration Summary.
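
    The same summary is available from the VNX for Block CLI, either for all migrations on the system or for a single source LUN (LUN 25 here is a hypothetical example):

        naviseccli -h 10.1.1.50 migrate -list
        naviseccli -h 10.1.1.50 migrate -list -source 25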

    Creating storage groups with Unisphere

    If you do not have any storage groups created, create them now.

    Procedure

    1. In the systems drop-down list on the menu bar, select a system.

    2. Select Hosts > Storage Groups.

    3. Under Storage Groups, select Create.

    4. In Storage Group Name, enter a name for the Storage Group to replace the default name.

    5. Choose from the following:

    l Click OK  to create the new Storage Group and close the dialog box, or 

    l Click Apply to create the new Storage Group without closing the dialog box. This allows you to create additional Storage Groups.


    6. Select the storage group you just created and click Connect Hosts.

    7. Move the host from Available Hosts to Hosts to be Connected and click OK.
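
    Storage groups can also be created and populated with the VNX for Block CLI. The sketch below creates a group, connects a host, and maps a LUN into it; the group name ESX_SG, host name esxhost01, and LUN numbers are assumptions:

        naviseccli -h 10.1.1.50 storagegroup -create -gname ESX_SG
        naviseccli -h 10.1.1.50 storagegroup -connecthost -host esxhost01 -gname ESX_SG -o
        naviseccli -h 10.1.1.50 storagegroup -addhlu -gname ESX_SG -hlu 0 -alu 25

    In the -addhlu form, -alu is the array LUN number and -hlu is the LUN number the host will see.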

    Making virtual disks visible to an ESXi Server

    To allow the ESXi Server to access the virtual disks you created, you must make the virtual disks visible to ESXi:

    Procedure

    1. Log in to the VMware vSphere Client as administrator.

    2. From the inventory panel, select the server, and click the Configuration tab.

    3. Under Hardware, click Storage Adapters.

    4. In the list of adapters, select the adapter (HBA), and click Rescan above the Storage Adapters panel.

    Note

    NICs are listed under iSCSI Software Adapters.

    5. In the Rescan dialog box, select Scan for New Storage Devices and Scan for New VMFS Volumes, and click OK .

    6. Verify that the new virtual disks that you created are in the disk/LUNs list.
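
    On ESXi 5.x the same rescan can be run from the host's command line (for example, over SSH). A sketch:

        esxcli storage core adapter rescan --all
        vmkfstools -V
        esxcli storage core device list

    The first command rescans all adapters for new devices, the second rescans for new VMFS volumes, and the third lists the devices found; VNX LUNs appear with the vendor string DGC.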

     Verifying that native multipath failover sees all paths to the LUNs

    Note

    If you have a Hyper-V or ESX server, perform this procedure on your Hyper-V or ESX server.

    Procedure

    1. For paths to VMFS volumes:

    a. Log in to the VMware vSphere Client as administrator.

    b. From the inventory panel, select the server, and click the Configuration tab.

    c. Under Hardware, click Storage and select the LUN.

    d. Click Properties and then Volume Properties.

    e. In the Volume Properties page, click Manage Paths.

    The Manage Paths window lists the paths and their states for all paths from the server to the LUNs. ESX Server scans for paths to LUNs. When it finds a LUN through a path, it assigns a name to the LUN. For example, vmhba6:1:2, read from left to right, is the adapter, the target (SP), and the LUN (LUN 2).

    2. For paths to RDM volumes:

    a. Log in to the VMware vSphere Client as administrator.

    b. From the inventory panel, select the server, and click the Configuration tab.

    c. In the Configuration tab, click Storage Adapters.

    d. Select the adapter, right-click the path, and click Manage Paths.


    The Manage Paths window lists the paths and their states for all paths from the server to the LUNs. ESX Server scans for paths to LUNs. When it finds a LUN through a path, it assigns a name to the LUN. For example, vmhba36:C0:T2:L0, read from left to right, is the adapter, the target (SP), and the LUN (LUN 0).

    3. For each LUN, verify that all paths to the system are working. In the Status column of the Manage Paths window, you should see:

    l One active path to each LUN

    l One or more standby (non-active) paths to each LUN

    l No dead paths.

    The active path is the path that the server is currently using to access data on the LUN. The standby paths are available for failover, should the active path fail. You should not see any dead paths, which are paths that have failed and need to be repaired. Disabled paths are paths that have been intentionally turned off.

    For example, for a switch configuration, you should see something like:

    Runtime Name       Target                                             Status
    vmhba2:C0:T0:L2    50:06:01:60:bb:60:00:56 50:06:01:6c:3b:60:00:56    Active
    vmhba2:C0:T1:L2    50:06:01:60:bb:60:00:56 50:06:01:6d:3b:60:00:56    Standby
    vmhba1:C0:T0:L2    50:06:01:60:bb:60:00:56 50:06:01:64:3b:60:00:56    Standby
    vmhba1:C0:T1:L2    50:06:01:60:bb:60:00:56 50:06:01:65:3b:60:00:56    Standby

    Under SAN Identifier, the world wide names (WWNs) indicate the SP used for each path. In the WWN, the fourth set of digits from the left indicates the SP. For the example WWN 50:xx:xx:nn:xx:xx:xx:xx, the nn indicates the SP port as follows:

    60 = SP A port 0

    61 = SP A port 1

    68 = SP B port 0

    69 = SP B port 1

    The switch configuration example shows four paths to LUN 2 through two HBAs (vmhba1 and vmhba2) and four SP ports.
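
    The same path information is available from the ESXi command line through the native multipathing plugin (NMP). A sketch; replace the device identifier with the naa identifier of your LUN:

        esxcli storage nmp device list
        esxcli storage nmp path list -d <naa_identifier_of_the_LUN>

    The device list shows which path selection policy each LUN uses and its working paths; the path list shows every path to the LUN and whether it is active, standby, or dead.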


    Copyright © 2006-2014 EMC Corporation. All rights reserved. Published in USA.

    Published November, 2014

    EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

    The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

    EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries.

    All other trademarks used herein are the property of their respective owners.

    For the most up-to-date regulatory document for your product line, go to EMC Online Support ( https://support.emc.com ).