Provisioning Guide for Open Systems

Hitachi Virtual Storage Platform G1000 and G1500

Hitachi Virtual Storage Platform F1500

Hitachi Data Retention Utility

Hitachi Dynamic Provisioning

Hitachi Dynamic Tiering

Hitachi LUN Manager

Hitachi Resource Partition Manager

MK-92RD8014-11

October 2016

© 2014, 2016 Hitachi, Ltd. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including copying and recording, or stored in a database or retrieval system for commercial purposes without the express written permission of Hitachi, Ltd., or Hitachi Data Systems Corporation (collectively “Hitachi”). Licensee may make copies of the Materials provided that any such copy is: (i) created as an essential step in utilization of the Software as licensed and is used in no other manner; or (ii) used for archival purposes. Licensee may not make any other copies of the Materials. “Materials” mean text, data, photographs, graphics, audio, video and documents.

Hitachi reserves the right to make changes to this Material at any time without notice and assumes no responsibility for its use. The Materials contain the most current information available at the time of publication.

Some of the features described in the Materials might not be currently available. Refer to the most recent product announcement for information about feature and product availability, or contact Hitachi Data Systems Corporation at https://support.hds.com/en_us/contact-us.html.

Notice: Hitachi products and services can be ordered only under the terms and conditions of the applicable Hitachi agreements. The use of Hitachi products is governed by the terms of your agreements with Hitachi Data Systems Corporation.

By using this software, you agree that you are responsible for:
1. Acquiring the relevant consents as may be required under local privacy laws or otherwise from authorized employees and other individuals to access relevant data; and
2. Verifying that data continues to be held, retrieved, deleted, or otherwise processed in accordance with relevant laws.

Notice on Export Controls. The technical data and technology inherent in this Document may be subject to U.S. export control laws, including the U.S. Export Administration Act and its associated regulations, and may be subject to export or import regulations in other countries. Reader agrees to comply strictly with all such regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or import the Document and any Compliant Products.

Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries.

AIX, AS/400e, DB2, Domino, DS6000, DS8000, Enterprise Storage Server, eServer, FICON, FlashCopy, IBM, Lotus, MVS, OS/390, PowerPC, RS/6000, S/390, System z9, System z10, Tivoli, z/OS, z9, z10, z13, z/VM, and z/VSE are registered trademarks or trademarks of International Business Machines Corporation.

Active Directory, ActiveX, Bing, Excel, Hyper-V, Internet Explorer, the Internet Explorer logo, Microsoft, the Microsoft Corporate Logo, MS-DOS, Outlook, PowerPoint, SharePoint, Silverlight, SmartScreen, SQL Server, Visual Basic, Visual C++, Visual Studio, Windows, the Windows logo, Windows Azure, Windows PowerShell, Windows Server, the Windows start button, and Windows Vista are registered trademarks or trademarks of Microsoft Corporation. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.

All other trademarks, service marks, and company names in this document or website are properties of their respective owners.

Contents

Preface
    Intended audience
    Product version
    Release notes
    Changes in this revision
    Related documents
    Document conventions
    Conventions for storage capacity values
    Accessing product documentation
    Getting help
    Comments

1 Introduction to provisioning
    About provisioning
    Key terms
    Basic provisioning
        Overview of fixed-sized provisioning
        Overview of custom-sized provisioning
        Basic provisioning workflow
    Dynamic Provisioning
        About Dynamic Provisioning
        When to use Dynamic Provisioning
        Advantages of using Dynamic Provisioning
        DP-VOL with data direct mapping attribute
        Dynamic Provisioning advantage example
        Dynamic Provisioning high-level workflow
        Capacity saving and accelerated compression functions
        Capacity saving function: data deduplication and compression
        Pools containing pool volumes carved from accelerated compression-enabled parity groups
            Accelerated compression-enabled parity groups
            Monitoring used pool capacity and used pool capacity reserved for writing
    Dynamic Tiering
        Dynamic Tiering
        Overview of tiers
        When to use Dynamic Tiering
    Active flash
    Data retention strategies
    Requirements
        System requirements
        Shared memory requirements
        Cache management device requirements
            Calculating the number of cache management devices required for DP-VOLs
            Maximum capacity of cache management devices
            Calculating the number of cache management devices required by a volume that is not a DP-VOL
            Viewing the number of cache management devices

2 Managing virtual storage machine resources
    About virtual storage machines and virtualized resources
    Provisioning operations for resources in a virtual storage machine
    Pair operations with virtual storage machine pairs
    Software operations for resources in a virtual storage machine
    Editing virtualization management settings

3 Configuring resource groups
    Resource group strategies
    System configuration using resource groups
        Meta_resource
        Resource lock
        Resource group assignments
        User groups
    Resource groups examples
        Example of resource groups sharing a port
        Example of resource groups not sharing ports
    Resource group rules, restrictions, and guidelines
    Using Resource Partition Manager and other storage products
        Dynamic Provisioning
        Encryption License Key
        LUN Manager
        Performance Monitor
        ShadowImage
        Thin Image
        TrueCopy
        Global-active device
        Universal Replicator
        Universal Volume Manager
        Open Volume Management
        Virtual Partition Manager
        Volume Shredder
        Server Priority Manager
    Managing resource groups
        Creating resource groups
        Editing resource groups
        Deleting resource groups

4 Configuring custom-sized provisioning
    Virtual LUN functions
    Virtual LUN specifications
        Virtual LUN specifications for open systems
        CV capacity by emulation type for open systems
    Virtual LUN size calculations
        Calculating OPEN-V volume size (CV capacity unit is MB)
        Calculating OPEN-V volume size (CV capacity unit is blocks)
        Calculating fixed-size open-systems volume size (CV capacity unit is MB)
        Calculating fixed-size open-systems volume size (CV capacity unit is blocks)
        Management area capacity of an open-systems volume
        Boundary values of volumes
        Capacity of a slot
        Configuring volumes in a parity group
    Enabling accelerated compression
    Disabling accelerated compression
    Configuration of interleaved parity groups
    SSID requirements
    Creating and deleting volumes
        About creating volumes
        Notes on performing quick formats
        Creating volumes
        Create Volumes dialog box
        About shredding volume data
        Shredding volume data
        About deleting unallocated volumes
        Deleting unallocated volumes
    Create LDEV function
        Creating an LDEV
        Finding an LDEV ID
        Finding an LDEV SSID
        Editing an LDEV SSID
        Changing LDEV settings
        Removing an LDEV to be registered
    Blocking and restoring LDEVs
        Blocking LDEVs
        Blocking LDEVs in a parity group
        Block LDEVs window
        Restoring blocked LDEVs
        Restoring blocked LDEVs in a parity group
        Restore LDEVs window
    Formatting LDEVs
        About formatting LDEVs
        Storage system operation when LDEVs are formatted
        Quick Format function
        Quick Format specifications
        Formatting LDEVs in a Windows environment
        Formatting a specific LDEV
        Formatting all LDEVs in a parity group
        Format LDEVs wizard
            Format LDEVs window
            Format LDEVs confirmation window
    Assigning an MP blade
        Guidelines for changing the MP blade assigned to an LDEV
        Assigning an MP blade to a resource
        Changing the MP blade assigned to an LDEV
        Changing the ALUA mode setting of LDEV
        Components window
        DKC: MP Blades tab
        Assign MP Blade wizard
            Assign MP Blade window
            Assign MP Blade confirmation window
        Edit MP Blades wizard
            Edit MP Blades window
            Edit MP Blades confirmation window
    Viewing LDEVs of ALUs or SLU attribution

5 Configuring thin provisioning
    Dynamic Provisioning overview
    Dynamic Tiering overview
    Active flash overview
    Thin provisioning requirements
        License requirements
        Pool requirements
        Pool-VOL requirements
        DP-VOL requirements
        Deduplication system data volume requirements
        Requirements for increasing DP-VOL capacity
        Estimating the required capacity of pool-VOLs with system area in the pool with data direct mapping enabled
        V-VOL page reservation requirement
        Operating system and file system capacity
    Using Dynamic Provisioning or Dynamic Tiering or active flash with other software products
        Interoperability of DP-VOLs and pool-VOLs
        ShadowImage pair status for reclaiming zero pages
        TrueCopy
        Universal Replicator
        ShadowImage
        Thin Image
        Virtual Partition Manager CLPR setting
        Volume Migration
        Resource Partition Manager
    Dynamic Provisioning workflow
        Migrating V-VOL data
        Restoring backup data
    Dynamic Tiering and active flash
        About tiered storage
        Tier monitoring and data relocation
        Multi-tier pool
        Tier monitoring and relocation cycles
            Auto execution mode
            Manual execution mode
        Tier relocation workflow
        Tier relocation rules, restrictions, and guidelines
        Buffer area of a tier
        Setting external volumes for each tier
        Example of required Dynamic Tiering cache capacity
        Execution modes for tier relocation
            Execution modes when using Hitachi Device Manager - Storage Navigator
            Viewing monitor and tier relocation information using HDvM - SN
            Execution modes when using Command Control Interface
            Viewing monitor and tier relocation information using CCI
        Relocation speed
        Monitoring modes
        Cautions when using monitoring modes
        Notes on performing monitoring
        Downloading the tier relocation log file
        Tier relocation log file contents
        Tiering policy
            Custom policies
            Tiering policy examples
            Setting tiering policy on a DP-VOL
            Tiering policy levels
            Viewing the tiering policy in the performance graph
            Reserving tier capacity when setting a tiering policy
            Example of reserving tier capacity
            Notes on tiering policy settings
            New page assignment tier
            Relocation priority
            Assignment tier when pool-VOLs are deleted
            Formatted pool capacity
            Rebalancing the usage level among parity groups
            Execution mode settings and tiering policy
        Functions overview for active flash and Dynamic Tiering
        Relocating pages whose latest I/Os frequency is suddenly high by active flash
        Dynamic Tiering workflow
        Active flash workflow
    Thresholds
        Pool utilization thresholds
        Pool subscription limit
        Monitoring total DP-VOL subscription for a pool
    Working with pools
        About pools
        About pool-VOLs
        Pool status
        Creating pools
            Creating Dynamic Provisioning pools by selecting pool-VOLs manually
            Creating Dynamic Provisioning pools by selecting pool-VOLs automatically
            Creating Dynamic Tiering pools by selecting pool-VOLs manually
            Creating a Dynamic Tiering pool by automatically selecting pool-VOLs
        Enabling deduplication on an existing pool
        Configuring a Dynamic Tiering pool for use by active flash
        Deleting some capacity saving-enabled DP-VOLs in a pool
        Deleting all capacity saving-enabled DP-VOLs in a pool
        Disabling deduplication on a pool
        Deleting a pool
    Working with DP-VOLs
        About DP-VOLs
        Relationship between a pool and DP-VOLs
        Creating DP-VOLs
        Enabling and disabling the DP-VOL protection function options
        Enabling capacity saving functions on DP-VOLs
        Disabling the capacity saving functions on DP-VOLs
        Deleting a DP-VOL
    Virtualizing storage capacity (DP/HDT)
        About virtualizing storage capacity
        Creating a DP pool
        Create Pool dialog box
        Verifying DP pool information
        Expanding DP pools
        Shrinking a DP pool
        Modifying DP pool settings
        Deleting DP pools
        Expanding DP volumes
        Reclaiming zero pages
    Virtualizing storage tiers (HDT)
        About virtualizing storage tiers
        Manually starting or stopping the monitoring of HDT pools
        Manually starting or stopping the tier relocation of an HDT pool
        Scheduling monitoring and tier relocation of HDT pools
        Editing tier relocation for HDT volumes
        Applying a tiering policy to HDT volumes
        Customizing a tiering policy for HDT volumes
        Changing a tiering policy name
        Notes on data placement profiles for HDT volumes
        Creating a data placement profile for HDT volumes
        Updating a data placement profile for HDT volumes
        Editing a data placement profile for HDT volumes
        Applying a data placement profile for HDT volumes
        Scheduling data placement profiles for HDT volumes
        Editing an external LDEV tiering rank for an HDT pool
    Monitoring capacity and performance
        Monitoring pool capacity
        Monitoring pool usage levels
        Monitoring performance
        Managing I/O usage rates example
        Tuning with Dynamic Tiering
        Improving performance by monitoring pools
    Working with SIMs
        About SIMs
        SIM reference codes
        Automatic completion of a SIM
        Manually completing a SIM
        Complete SIMs window
    Enabling data direct mapping for external volumes, pools, and DP-VOLs
        Creating external volumes with data direct mapping enabled
        Creating pools with data direct mapping enabled
        Creating DP-VOLs with data direct mapping enabled
        Editing the data direct mapping attribute for a pool

6 Configuring access attributes
    About access attributes
        Access attribute requirements
        Access attributes and permitted operations
        Access attribute restrictions
        Access attributes workflow
    Working with access attributes
        Assigning an access attribute to a volume
        Changing an access attribute to read-only or protect
        Changing an access attribute to read/write
        Enabling or disabling the expiration lock
        Disabling an S-VOL
        Reserving volumes
        Data Retention window
        Error Detail dialog box

7 Managing logical volumes
    LUN Manager overview
        LUN Manager Function
        LUN Manager operations
        Fibre Channel operations
        Rules, restrictions, and guidelines for managing LUs
    Allocating and unallocating volumes
        About allocating volumes
        Volume allocation methods
        Prerequisites for allocating volumes
        Allocating volumes from general tasks
        Allocating volumes to selected hosts
        Allocating volumes to selected file servers
        Allocating selected volumes to hosts
        Allocating volumes to clustered hosts
        Allocating volumes by using a keyword search
        Allocating volumes by using a criteria search
        Allocating volumes by using existing volume settings
        Allocate Volumes dialog box
        About clustered-host storage
        Creating clustered-host storage
        About unallocating volumes
        Unallocating volumes from hosts
        Unallocating volumes from file servers
        Unallocate volumes dialog box
    Managing logical units workflow
    Configuring Fibre Channel ports
        Setting the data transfer speed on a Fibre Channel port
        Combination of data-transfer speed and connection type
        Setting the Fibre Channel port address
        Addresses for Fibre Channel ports
        Setting the fabric switch
        Fibre Channel topology
        Example of FC-AL and point-to-point topology
        Setting the Fibre Channel topology
    Overview for iSCSI
        Network configuration for iSCSI
    Managing hosts
        Configure hosts workflow
        Host modes for host groups
        Host mode options
        Find WWN of the host bus adapter
            Finding a WWN on Windows
            Finding a WWN on Oracle® Solaris
            Finding a WWN on AIX, IRIX, or Sequent
            Finding WWN for HP-UX
        Changing settings for a manually registered host
        Changing settings for a host registered by using Device Manager agent
        Editing the host mode and host mode options
        Editing a WWN nickname
        Changing HBA iSCSI name or nickname of a host bus adapter
        Changing iSCSI target setting
        Removing hosts from iSCSI targets
        Deleting an iSCSI target
        Deleting login iSCSI names
        Adding a selected host to a host group
        Adding a host to the selected iSCSI target
        Setting the T10 PI mode on a port
        Deleting logical groups
        Creating iSCSI targets and registering hosts in an iSCSI target
        Editing port settings
        Adding CHAP users
        Editing CHAP users
        Removing CHAP users
        Removing target CHAP users
    Managing LUN Paths
        About LUN path management
        Editing LUN paths
        Editing LUN paths when exchanging a failed HBA
        Editing LUN paths when adding or exchanging an HBA
        Removing LUN paths after adding an HBA
    Releasing LUN reservation by host
    Configuring LUN security
        LUN security on ports
        Examples of enabling and disabling LUN security on ports
        Enabling LUN security on a port
        Disabling LUN security on a port
    Setting Fibre Channel authentication
        User authentication
            Settings for authentication of hosts
            Settings for authentication of ports (required if performing mutual authentication)
        Host and host group authentication
            Example of authenticating hosts in a Fibre Channel environment
            Port settings and connection results
        Fabric switch authentication
        Fabric switch settings and connection results
        Mutual authentication of ports
        Fibre Channel authentication
            Enabling or disabling host authentication on a host group
            Registering host user information
            Changing host user information registered on a host group
            Deleting host user information
            Registering user information for a host group (for mutual authentication)
            Clearing user information from a host group
        Fibre Channel port authentication
            Setting Fibre Channel port authentication
        Registering user information on a Fibre Channel port
        Registering user information on a fabric switch
        Clearing fabric switch user information
        Setting the fabric switch authentication mode
        Enabling or disabling fabric switch authentication

8 Configuring VASA integrated storage systems
    Creating LDEVs of ALU attribution
    Viewing LDEVs of ALUs or SLU attribution
    Unbinding LDEVs of SLUs attribution

9 Troubleshooting
    Troubleshooting Virtual LUN
    Troubleshooting Dynamic Provisioning
    Troubleshooting Data Retention Utility
        Data Retention Utility troubleshooting instructions
    Troubleshooting provisioning while using Command Control Interface
        Errors when operating CCI (Dynamic Provisioning, SSB1: 0x2e31/0xb96d/0xb980)
        Errors when operating CCI (Data Retention Utility, SSB1: 2E31/B9BF/B9BD)
    Calling customer support

A CCI command reference
    Hitachi Device Manager - Storage Navigator tasks and CCI command list

B Guidelines for pools when accelerated compression is enabled
    Checking whether accelerated compression can be enabled
    Estimating required FMC capacity
        Hitachi Accelerated Flash Compression Estimator Tool
        Estimating FMC capacity for a new pool
        Estimating FMC capacity to expand an existing pool
    Workflow for creating parity groups, LDEVs, and pools with accelerated compression
    Monitoring the pool capacity
    Estimating FMC capacity when pool capacity is insufficient
    Disabling accelerated compression on a parity group

C Notices
    LZ4 Library

Glossary

Index

Preface

This document describes and provides instructions for using the provisioning software to configure and perform operations on the Hitachi Virtual Storage Platform G1000 (VSP G1000), Hitachi Virtual Storage Platform G1500 (VSP G1500), and Hitachi Virtual Storage Platform F1500 (VSP F1500) storage systems. The provisioning software includes Hitachi Dynamic Provisioning, Hitachi Dynamic Tiering, Hitachi LUN Manager, Hitachi Virtual LUN, and Hitachi Data Retention Utility.

Please read this document carefully to understand how to use these products, and maintain a copy for your reference.

□ Intended audience

□ Product version

□ Release notes

□ Changes in this revision

□ Related documents

□ Document conventions

□ Conventions for storage capacity values

□ Accessing product documentation

□ Getting help

□ Comments

Intended audience

This document is intended for system administrators, Hitachi Data Systems representatives, and authorized service providers who install, configure, and operate the Hitachi Virtual Storage Platform G1000 (VSP G1000), Hitachi Virtual Storage Platform G1500 (VSP G1500), and Hitachi Virtual Storage Platform F1500 (VSP F1500) storage systems.

Readers of this document should be familiar with the following:
• Data processing and RAID storage systems and their basic functions.
• The VSP G1000, VSP G1500, or VSP F1500 storage system and the Product Overview.
• The Hitachi Command Suite software and Hitachi Command Suite User Guide.
• The Hitachi Device Manager - Storage Navigator software and System Administrator Guide.
• The concepts and functionality of storage provisioning operations.

Product version

This document revision applies to the following product versions:
• VSP G1000, VSP G1500, VSP F1500: microcode 80-05-0x or later
• SVOS 7.0 or later

Release notes

Read the release notes before installing and using this product. They may contain requirements or restrictions that are not fully described in this document, as well as updates or corrections to this document. Release notes are available on Hitachi Data Systems Support Connect: https://knowledge.hds.com/Documents.

Changes in this revision

• Added support for the Hitachi Virtual Storage Platform G1500 and Hitachi Virtual Storage Platform F1500 storage systems.
• Added support for the data deduplication and compression functions (see Capacity saving function: data deduplication and compression; Capacity saving and accelerated compression functions; Enabling deduplication on an existing pool; Enabling capacity saving functions on DP-VOLs).
• Added information about volumes used in Hitachi Thin Image cascade pairs and Thin Image volumes with the clone attribute.

• Updated the host mode option (HMO) information (see Host mode options).
    ○ Added HMO 88 for nondisruptive migration with HP-UX hosts.
    ○ Added new HMO 105 for Task Set Full response.

Related documents

The documents below are referenced in this document or contain more information about the features described in this document.

Hitachi Virtual Storage Platform G1000, G1500, F1500 documents
• Hitachi Thin Image User Guide, MK-92RD8011
• Performance Guide, MK-92RD8012
• Hitachi SNMP Agent User Guide, MK-92RD8015
• Hitachi TrueCopy® User Guide, MK-92RD8019
• Hitachi ShadowImage® User Guide, MK-92RD8021
• Hitachi Universal Replicator User Guide, MK-92RD8023
• Hitachi Universal Volume Manager User Guide, MK-92RD8024
• Hitachi Volume Shredder User Guide, MK-92RD8025
• Hitachi Virtual Storage Platform G1000, G1500, F1500 Product Overview, MK-92RD8051

For a complete list of user documents, see the Hitachi Virtual Storage Platform G1000, G1500, F1500 Product Overview.

Hitachi Command Suite documents
• Hitachi Command Suite User Guide, MK-90HC172
• Hitachi Command Suite Installation and Configuration Guide, MK-90HC173
• Hitachi Command Suite Messages, MK-90HC178
• Hitachi Command Suite System Requirements, MK-92HC209

Document conventions

This document uses the following typographic conventions:

Convention          Description

Bold                • Indicates text in a window, including window titles, menus, menu
                      options, buttons, fields, and labels. Example: Click OK.
                    • Indicates emphasized words in list items.

Italic              • Indicates a document title or emphasized words in text.
                    • Indicates a variable, which is a placeholder for actual text provided
                      by the user or for output by the system. Example: pairdisplay -g group
                      (For exceptions to this convention for variables, see the entry for
                      angle brackets.)

Monospace           Indicates text that is displayed on screen or entered by the user.
                    Example: pairdisplay -g oradb

< > angle brackets  Indicates variables in the following scenarios:
                    • Variables are not clearly separated from the surrounding text or from
                      other variables. Example: Status-<report-name><file-version>.csv
                    • Variables in headings.

[ ] square brackets Indicates optional values. Example: [ a | b ] indicates that you can
                    choose a, b, or nothing.

{ } braces          Indicates required or expected values. Example: { a | b } indicates
                    that you must choose either a or b.

| vertical bar      Indicates that you have a choice between two or more options or
                    arguments. Examples:
                    [ a | b ] indicates that you can choose a, b, or nothing.
                    { a | b } indicates that you must choose either a or b.

This document uses the following icons to draw attention to information:

Label     Description

Note      Calls attention to important or additional information.

Tip       Provides helpful information, guidelines, or suggestions for performing
          tasks more effectively.

Caution   Warns the user of adverse conditions and/or consequences (for example,
          disruptive operations, data loss, or a system crash).

WARNING   Warns the user of a hazardous situation which, if not avoided, could
          result in death or serious injury.

Conventions for storage capacity values

Physical storage capacity values (for example, disk drive capacity) are calculated based on the following values:

Physical capacity unit   Value

1 kilobyte (KB)          1,000 (10^3) bytes
1 megabyte (MB)          1,000 KB or 1,000^2 bytes
1 gigabyte (GB)          1,000 MB or 1,000^3 bytes
1 terabyte (TB)          1,000 GB or 1,000^4 bytes
1 petabyte (PB)          1,000 TB or 1,000^5 bytes
1 exabyte (EB)           1,000 PB or 1,000^6 bytes

Logical capacity values (for example, logical device capacity) are calculated based on the following values:

Logical capacity unit   Value

1 block                 512 bytes
1 cylinder              Mainframe: 870 KB
                        Open-systems:
                        • OPEN-V: 960 KB
                        • Others: 720 KB
1 KB                    1,024 (2^10) bytes
1 MB                    1,024 KB or 1,024^2 bytes
1 GB                    1,024 MB or 1,024^3 bytes
1 TB                    1,024 GB or 1,024^4 bytes
1 PB                    1,024 TB or 1,024^5 bytes
1 EB                    1,024 PB or 1,024^6 bytes
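Because physical capacities use decimal (base 10) units while logical capacities use binary (base 2) units, the same nominal size maps to different byte counts. A minimal sketch of the two conversions (function names are illustrative, not from the product):

    # Physical (decimal) vs. logical (binary) capacity conventions, per the tables above.
    def physical_gb_to_bytes(gb):
        return gb * 1000**3   # 1 GB = 1,000 MB = 1,000^3 bytes

    def logical_gb_to_bytes(gb):
        return gb * 1024**3   # 1 GB = 1,024 MB = 1,024^3 bytes

    print(physical_gb_to_bytes(1))  # 1000000000
    print(logical_gb_to_bytes(1))   # 1073741824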

Accessing product documentation

Product user documentation is available on Hitachi Data Systems Support Connect: https://knowledge.hds.com/Documents. Check this site for the most current documentation, including important updates that may have been made after the release of the product.

Getting help

Hitachi Data Systems Support Connect is the destination for technical support of products and solutions sold by Hitachi Data Systems. To contact technical support, log on to Hitachi Data Systems Support Connect for contact information: https://support.hds.com/en_us/contact-us.html.

Hitachi Data Systems Community is a global online community for HDS customers, partners, independent software vendors, employees, and prospects. It is the destination to get answers, discover insights, and make connections. Join the conversation today! Go to community.hds.com, register, and complete your profile.


Comments

Please send us your comments on this document to [email protected]. Include the document title and number, including the revision level (for example, -07), and refer to specific sections and paragraphs whenever possible. All comments become the property of Hitachi Data Systems Corporation.

Thank you!


1 Introduction to provisioning

There are several provisioning strategies that you can implement on your storage system to solve business requirements. Provisioning your storage system requires balancing the costs of the solution with the benefits that the solution provides.

□ About provisioning

□ Key terms

□ Basic provisioning

□ Dynamic Provisioning

□ Dynamic Tiering

□ Active flash

□ Data retention strategies

□ Requirements


About provisioning

Provisioning is a method or strategy of managing the logical devices (LDEVs), also called volumes, on a storage system. Some provisioning methods are host-based, while other methods use inherent storage system capabilities such as concatenated parity groups. Provisioning methods can also be primarily hardware-based or software-based. Each method has its particular uses and benefits in a specific storage environment, such as optimizing capacity, reliability, performance, or cost. When used in the right scenario, each method can be cost-effective, efficient, reliable, and straightforward to configure and maintain. On the other hand, inappropriate implementations can be expensive, awkward, time-consuming to maintain, and potentially error prone. Your support representatives are available to help you configure the highest quality solution for your storage environment.

Provisioning strategies fall into the following two fundamental categories:
• Basic provisioning on page 21 (or traditional provisioning). Basic provisioning involves defining logical devices (LDEVs) on physical storage that are fixed-size volumes or custom-sized volumes.
• Dynamic Provisioning on page 25 (or virtual provisioning). Thin provisioning involves the use of virtualization to pool physical storage and provide on-demand allocation of volumes to hosts.

Key terms

access attributes
  Security function used to control access to a logical volume. Using Data Retention Utility, you can assign an access attribute to each volume: read only, read/write, or protect.

capacity expansion
  The data compression services provided by the FMC drives, called accelerated compression.

capacity saving
  The data deduplication and data compression functions provided by the storage system controllers.

CV
  Custom-size volume. CVs are created by dividing a fixed-size volume (FV) into user-defined sizes.

deduplication system data volume
  The volume used to manage data deduplication in a pool. The deduplication system data volume (also called DSD volume) is created when you enable deduplication on a pool.

DP pool
  A group of DP-VOLs. The DP pool consists of one or more pool-VOLs.

DP-VOL
  A virtual volume (V-VOL) used for Dynamic Provisioning.

expiration lock
  Security option used to allow or prevent changing of the Data Retention Utility access attribute on a volume.

FMC (flash module compression)
  A large-capacity flash module drive (FMD) that supports the accelerated compression functionality. A dedicated drive box is required for the FMC drives. The FMC drives and the dedicated FMC drive box are collectively referred to as Hitachi Accelerated Flash DC2 (HAF DC2).

FV
  Fixed-size volume. With the exception of OPEN-V, an FV is a logical volume of a specific device emulation type (for example, OPEN-3) that constitutes a parity group immediately after installation. The FV size varies according to the emulation type.

meta_resource
  A resource group to which additional resources (other than external volumes) and the resources existing before installing Resource Partition Manager belong.

page
  In Dynamic Provisioning, a page is 42 MB of continuous storage allocated from a DP pool to store data written to a DP-VOL.

pool
  A set of volumes that are reserved for storing Dynamic Provisioning or Thin Image write data.

pool threshold
  In Dynamic Provisioning, the proportion (%) of used capacity of the pool to the total pool capacity. Each pool has its own pool threshold values for warning and depletion.

pool volume (pool-VOL)
  A volume that is reserved for storing Dynamic Provisioning data or Thin Image operations.

resource group
  A group that is assigned one or more resources of the storage system. The resources that can be assigned to the resource group are LDEV IDs, parity groups, iSCSI targets, external volumes, ports, and host group IDs.

subscription limit
  In a thin-provisioned storage system, the proportion (%) of total DP-VOL capacity associated with the pool versus the total capacity of the DP pool. You can set the percentage of DP-VOL capacity that can be created relative to the total capacity of the pool. This can help prevent DP-VOL blocking caused by a full pool. For example, when the subscription limit is set to 100%, the total DP-VOL capacity is equal to the DP pool capacity.

tier boundary
  The maximum I/O count that each tier can process.

tier relocation
  A combination of determining the appropriate storage tier and migrating the pages to that tier.

tiered storage
  A storage hierarchy of data drives layered by performance level, or tiers, that matches data access requirements with the appropriate performance tier.

Basic provisioning

Several basic provisioning techniques traditionally are used to manage storage volumes. These strategies are useful in specific scenarios based on user needs, such as what type of storage to use or how to manually size volumes. Basic provisioning relies on carving up physical storage into logical devices. Custom sizing is possible and requires using the Virtual LUN software.

Basic provisioning includes fixed-size provisioning and custom-size provisioning:
• Overview of fixed-sized provisioning on page 22
• Overview of custom-sized provisioning on page 23

Overview of fixed-sized provisioning

Two traditional fixed-size host-based volume management methods typically are used on open systems to organize storage space on a server. One method is the direct use of physical volumes as devices for use either as raw space or as a local file system. These are fixed-size volumes with a fixed number of disks, and as such, each has a certain inherent physical random input/output operations per second (IOPS) or sequential throughput (megabytes per second) capacity. A system administrator manages the aggregate server workloads against them. As workloads exceed the volume's available space or its IOPS capacity, the data is manually moved onto a larger or faster volume, if possible.

The following figure illustrates a simple fixed-size provisioning environment using individual fixed volumes on a host:

The other method is to use a host-based Logical Volume Manager (LVM) where the planned workloads require either more space or IOPS capacity than the individual physical volumes can provide. LVM is the disk management feature available on UNIX-based operating systems, including Linux, that manages their logical volumes.

The following illustrates a fixed-size provisioning environment using LUNs in host-managed logical volumes:


With either method, hosts recognize the size as fixed regardless of the actual used size. Therefore, it is not necessary to expand the volume (LDEV) size in the future if the actual used size does not exceed the fixed size.

When such a logical volume runs out of space or IOPS capacity, you can replace it with one that was created with even more physical volumes, and then copy over all of the user data. In some cases, it is best to add a second logical volume and then manually relocate only part of the existing data to redistribute the workload across two such volumes. These two logical volumes would be mapped to the server using separate host paths.

Disadvantages

Some disadvantages to using fixed-sized provisioning are:
• If you use only part of the entire capacity specified by an emulation type, the rest of the capacity is wasted.
• After creating fixed-sized volumes, typically some physical capacity will be wasted.
• In a fixed-sized environment, manual intervention can become a costly and tedious exercise when a larger volume size is required.

When to use fixed-sized provisioning

Use fixed-sized provisioning when custom-sized provisioning is not supported.

Overview of custom-sized provisioning

Custom-sized (or variable-sized) provisioning has more flexibility than fixed-sized provisioning and is the traditional storage-based volume management strategy.


To create custom-sized volumes on a storage system, an administrator creates volumes of the desired size from individual array groups. These volumes are then individually mapped to one or more host ports as logical units (LUs).

Custom-sized provisioning provides advantages in the following three scenarios:
• In fixed-sized provisioning, when several important files are located on the same volume and one unimportant file is being accessed, users cannot access the important files because of logical device contention. If the custom-sized feature is used to divide the volume into several smaller volumes and I/O workload is balanced (each file is allocated to a different volume), then access contention is reduced and access performance is improved.
• In fixed-sized provisioning, all of the volume's capacity might not be used. Unused capacity on the volume will remain inaccessible to other users. If the custom-sized feature is used, you can create smaller volumes that do not waste capacity.
• Applications that require the capacity of many fixed-sized volumes can instead be given fewer large volumes to relieve device addressing constraints.

The following illustrates custom-sized provisioning in an open-systems environment using standard volumes of independent array groups:

Disadvantages

A disadvantage is that manual intervention can become costly and tedious. For example, to change the size of a volume already in use, you must first create a new volume larger (if possible) than the old volume, and then move the contents of the old volume to the new volume. The new volume is then remapped on the server to take the mount point of the old volume, which is then retired.


When to use custom-sized provisioning

Use custom-sized provisioning when you want to manually control and monitor your storage resources and usage scenarios.

Basic provisioning workflow

The following illustrates the basic provisioning workflow:

Virtual LUN software is used to configure custom-sized provisioning. For detailed information, see Configuring custom-sized provisioning on page 75.

Dynamic Provisioning

Thin provisioning is an approach to managing storage that maximizes physical storage capacity. Instead of reserving a fixed amount of storage for a volume, capacity from the available physical pool is assigned when data is actually written to the storage media.

About Dynamic Provisioning

While basic or traditional provisioning strategies can be appropriate and useful in specific scenarios, they can be expensive to set up, time-consuming to configure, difficult to monitor, and error prone. Dynamic Provisioning allows you to reserve virtual storage capacity based on anticipated future capacity needs, using virtual volumes instead of physical disk capacity. Although Dynamic Provisioning requires some additional setup steps, it can provide a simpler and more beneficial alternative to traditional provisioning methods.

Overall storage use rates can improve because you can potentially provide more virtual capacity to applications while using fewer physical drives. Dynamic Provisioning can provide lower initial cost, greater efficiency, and ease of storage management for storage administrators. The Dynamic Provisioning feature offers the following benefits:
• Simplifies storage management
• Provides a better balance of resources and performance optimization by default than traditional provisioning
• Optimizes physical drive usage
• Reduces device address requirements over traditional provisioning by providing larger volume sizes

When to use Dynamic Provisioning

Dynamic Provisioning is a best fit in an open-systems environment in the following scenarios:
• When the aggregation of storage pool capacity usage across many volumes provides the best opportunity for performance optimization.
• For stable environments and large, consistently growing files or volumes.
• When device addressing constraints are a concern.


Advantages of using Dynamic Provisioning

Reduces initial costs
  Without Dynamic Provisioning: You must purchase physical drive capacity for expected future use. The unused capacity adds costs for both the storage system and software products.
  With Dynamic Provisioning: You can logically allocate more capacity than is physically installed. You can purchase less capacity, reducing initial costs, and you can add capacity later by expanding the pool. Some file systems take up little pool space. For more details, see Operating system and file system capacity on page 138.

Reduces management costs
  Without Dynamic Provisioning: You must stop the storage system to reconfigure it.
  With Dynamic Provisioning: When physical capacity becomes insufficient, you can add pool capacity without service interruption. In addition, with Dynamic Tiering you can configure pool storage consisting of multiple types of data drives, including SSD, SAS, and external volumes. This eliminates unnecessary costs.

Reduces management labor and increases availability of storage volumes for replication
  Without Dynamic Provisioning: As the expected physical drive capacity is purchased, the unused capacity of the storage system also needs to be managed on the storage system and on licensed products.
  With Dynamic Provisioning: Licenses for storage system products are based on used capacity rather than the total defined capacity. You can allocate volumes of up to 256 TB regardless of physical drive capacity. Dynamic Tiering allows you to use storage efficiently by automatically migrating data to the most suitable data drive.

Increases the performance efficiency of the data drive
  Without Dynamic Provisioning: Because physical drive capacity is initially purchased and installed to meet expected future needs, portions of the capacity may be unused. I/O loads may concentrate on just a subset of the storage, which might decrease performance.
  With Dynamic Provisioning: Effectively combines the I/O patterns of many applications and evenly spreads the I/O activity across available physical resources, preventing bottlenecks in parity group performance. Configuring the volumes from multiple parity groups improves parity group performance. This also increases storage use while reducing power and pooling requirements (total cost of ownership).

Dynamic Provisioning advantage example

To illustrate the merits of a Dynamic Provisioning environment, assume you have twelve LDEVs from 12 RAID 1 (2D+2D) array groups assigned to a DP pool. All 48 drives contribute their IOPS and throughput power to all DP volumes assigned to that pool. Instead, if more random read IOPS horsepower is desired for a pool, then the DP pool can be created with 32 LDEVs from 32 RAID 5 (3D+1P) array groups, thus providing 128 drives of IOPS power to that pool. Up to 1024 LDEVs can be assigned to a single pool, providing a considerable amount of I/O capability to just a few DP volumes.
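The drive counts in this example follow directly from the RAID layouts (four drives per parity group in both cases); a minimal sketch of the arithmetic, with illustrative names:

    # Drives contributing IOPS to a DP pool, per the example above.
    def pool_drives(parity_groups, drives_per_group=4):
        return parity_groups * drives_per_group

    print(pool_drives(12))  # 12 x RAID 1 (2D+2D) groups -> 48 drives
    print(pool_drives(32))  # 32 x RAID 5 (3D+1P) groups -> 128 drives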

DP-VOL with data direct mapping attribute

By using a DP-VOL for which the data direct mapping attribute is enabled, you can map an external volume larger than 4 TB as a DP-VOL of the local storage system without having to change its capacity.

A DP-VOL with the data direct mapping attribute enabled is associated with the following pool-VOLs: an external volume for which the data direct mapping attribute is enabled, and a pool-VOL with System Area.

To use DP-VOLs with the data direct mapping attribute enabled, you must enable the data direct mapping attribute for pool-VOLs, pools, and DP-VOLs.


Procedure

1. In the Add External Volumes window, add a volume of an external storage system to an external volume group.
2. In the Create LDEVs window, create an external volume for which the data direct mapping attribute is enabled.
3. In the Create Pools window, create a Dynamic Provisioning pool for which the data direct mapping attribute is enabled. Specify the following volumes as pool-VOLs:
   • The external volume with the data direct mapping attribute enabled.
   • One or more normal volumes or external volumes.
4. In the Create LDEVs window, create a DP-VOL with the data direct mapping attribute enabled.
5. In the Add LUN Paths window, configure an LU path to the DP-VOL with the data direct mapping attribute enabled.

The following table shows what kind of external volumes can be added as pool-VOLs:

Add volumes to a pool with the data direct mapping attribute enabled
  External volume with the attribute disabled: The volumes can be added.
  External volume with the attribute enabled: The volumes can be added.

Add volumes to Dynamic Provisioning pools or Dynamic Tiering pools (including active flash)
  External volume with the attribute disabled: The volumes can be added.
  External volume with the attribute enabled: The volumes cannot be added.

The following table shows which operations can be performed when the data direct mapping attribute of a Dynamic Provisioning pool is enabled or disabled:

Add an LDEV for which the data direct mapping attribute is disabled to the pool
  Pool attribute disabled: The operation can be performed.
  Pool attribute enabled: The operation can be performed.

Add an external volume for which the data direct mapping attribute is enabled to the pool
  Pool attribute disabled: The operation cannot be performed.
  Pool attribute enabled: The operation can be performed.

Set the depletion threshold and the warning threshold
  Pool attribute disabled: The operation can be performed.
  Pool attribute enabled: The operation cannot be performed.

Set subscription
  Pool attribute disabled: The operation can be performed.
  Pool attribute enabled: The operation can be performed.

Protect V-VOLs when I/O fails to Blocked Pool VOL
  Pool attribute disabled: The operation can be performed.
  Pool attribute enabled: The operation can be performed.

Protect V-VOLs when I/O fails to Full Pool
  Pool attribute disabled: The operation can be performed.
  Pool attribute enabled: The operation can be performed.

Performing rebalancing
  Pool attribute disabled: The operation can be performed.
  Pool attribute enabled: The operation can be performed.

Define the used capacity of the pool
  Pool attribute disabled: The sum of the reserved pages capacity and the mapped capacity.
  Pool attribute enabled: The sum of the reserved pages capacity and the mapped capacity.

Define the licensed capacity
  Pool attribute disabled: The sum of the pool-VOLs.
  Pool attribute enabled: The sum of pool-VOLs for which the data direct mapping attribute is disabled. The license capacity does not include the capacity of pool-VOLs for which the data direct mapping attribute is enabled.

Expand pool
  Pool attribute disabled: The operation can be performed.
  Pool attribute enabled: The operation can be performed. However, the capacity of pool-VOLs with System Area must be reserved in advance.

Shrink pool
  Pool attribute disabled: The operation can be performed.
  Pool attribute enabled: The operation can be performed. However, if a pool-VOL for which the data direct mapping attribute is enabled is associated with a DP-VOL, you cannot shrink the pool.

Delete pool
  Pool attribute disabled: The operation can be performed.
  Pool attribute enabled: The operation can be performed. You can delete a pool only if there is no DP-VOL associated with the pool.

Create DP-VOL
  Pool attribute disabled: You can only create DP-VOLs for which the data direct mapping attribute is disabled.
  Pool attribute enabled: You can only create DP-VOLs for which the data direct mapping attribute is enabled.

Implement a change to Dynamic Tiering (including active flash pool)
  Pool attribute disabled: The operation can be performed.
  Pool attribute enabled: The operation can be performed.

The following table shows which operations can be performed when the data direct mapping attribute of a DP-VOL is enabled or disabled:

Operation                                          Attribute disabled   Attribute enabled

Configure LU paths                                 Can be performed     Can be performed
Format LDEVs                                       Can be performed     Can be performed
Delete LDEVs                                       Can be performed     Can be performed
Expand V-VOLs                                      Can be performed     Cannot be performed
Reclaim zero pages                                 Can be performed     Cannot be performed
Execute the V-VOL full allocation function         Can be performed     Cannot be performed
Protect V-VOLs when I/O fails to Blocked Pool VOL  Can be performed     Can be performed
Protect V-VOLs when I/O fails to Full Pool         Can be performed     Can be performed
Apply to LDEVs of SLU attribution                  Can be performed     Cannot be performed


Dynamic Provisioning high-level workflow

The following illustrates the Dynamic Provisioning workflow.

Capacity saving and accelerated compression functions

The VSP G series and VSP F series storage systems provide the following functions to make efficient use of user capacity:
• Capacity saving: The capacity saving function includes data deduplication and data compression. Capacity saving enables you to reduce your bitcost for the stored data by deduplicating and compressing the data. Data deduplication and compression are performed by the controllers of the storage system.
• Accelerated compression: The accelerated compression function enables you to reduce your bitcost for the stored data by allowing you to take advantage of the compression function in the FMC drives. Accelerated compression allows you to assign FMC capacity to a pool that is larger than the physical capacity of the FMC parity groups. The data access performance of the storage system is maintained when the accelerated compression function is used, as the compression engine is offloaded to the FMC drives.

Capacity saving function: data deduplication and compression

The capacity saving function is available for use on internal flash drives, including data stored on encrypted drives. When the capacity saving function is in use, the controller of the storage system performs data deduplication and compression to reduce the size of data to be stored. A DP-VOL with capacity saving enabled is called a data reduction (DRD) volume.

• Deduplication: The data deduplication function deletes duplicate copies of data written to different addresses in the same pool and maintains only a single copy of the data at one address. The deduplication function is enabled on a Dynamic Provisioning pool and then on the desired DP-VOLs in the pool. When deduplication is enabled, data that has multiple copies between DP-VOLs assigned to that pool is removed. When you enable deduplication on a pool, the deduplication system data volume (DSD volume) for that pool is created. The deduplication system data volume is used exclusively by the storage system to manage the deduplication function. A search table in the deduplication system data volume is used to locate redundant data in the pool.


• Compression: The data compression function utilizes the LZ4 compression algorithm to compress the data. The compression function can be enabled per DP-VOL.

The capacity overheads associated with the capacity saving function include the following (the arithmetic of the metadata example is shown in the sketch after this list):
• Capacity consumed by metadata: The capacity consumed by metadata for the capacity saving function (deduplication and compression) is approximately 3% of the consumed DP-VOL capacity that has been processed by capacity saving. For example, if the consumed capacity of a DP-VOL is 150 TB and the capacity saving feature has processed 100 TB of the 150 TB consumed capacity, reducing it to 30 TB, the capacity consumed by metadata for the capacity saving function will be approximately 3 TB (3% of 100 TB). The total consumed capacity of this DP-VOL at this instant is 83 TB (30 TB + 50 TB + 3 TB).
• Capacity consumed by garbage (invalid) data: The capacity consumed by garbage data is approximately 7% of the total consumed capacity of all DP-VOLs with capacity saving enabled. The capacity is dynamically consumed based on garbage data created by the capacity saving process and cleaned by the background garbage collection process. The garbage collection process is a background process with a lower priority than host I/O, so the capacity consumed by garbage data depends on both the garbage created and the host I/O rate.
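As a quick check of the metadata example above, a minimal sketch of the arithmetic (the values come from the example; the variable names are illustrative):

    # Metadata overhead example for the capacity saving function.
    consumed_tb = 150      # consumed DP-VOL capacity
    processed_tb = 100     # portion processed by capacity saving
    reduced_tb = 30        # processed data after deduplication/compression
    unprocessed_tb = consumed_tb - processed_tb    # 50 TB not yet processed
    metadata_tb = 0.03 * processed_tb              # ~3% of processed capacity = 3 TB
    total_tb = reduced_tb + unprocessed_tb + metadata_tb
    print(total_tb)  # 83.0 TB, matching the example in the text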

Pools containing pool volumes carved from accelerated compression-enabled parity groups

This topic describes pools that contain pool volumes carved from accelerated compression-enabled parity groups.

Accelerated compression-enabled parity groups

Data on LDEVs carved from parity groups comprised of FMC drives is compressed before it is stored onto the drives. The default setting of accelerated compression is Disabled. You must set this feature to Enable to take advantage of the data compression services on the FMC drives.

Note: If encryption is enabled on an FMC parity group, accelerated compression cannot be enabled on that parity group. You can use the deduplication and compression functions on encrypted parity groups.

When you enable accelerated compression on a parity group comprised of FMC drives:
• Enabling accelerated compression expands the usable physical capacity of the parity group. You can potentially carve out LDEVs from this expanded capacity and use them as pool volumes to create or expand a pool. When you do this, you can utilize the increased available capacity because the data on the FMC drives has been compressed.
• LDEVs carved from the accelerated compression-enabled parity groups can only be used as pool volumes to create or expand a pool. These LDEVs cannot be assigned directly to a host and must be assigned to a single pool as pool volumes. LDEVs from a single parity group cannot be shared among multiple pools.

Monitoring used pool capacity and used pool capacity reserved for writing

• Monitoring used pool capacity: The used pool capacity must always be monitored. As data is written to DP-VOLs and stored in the pool, in cases where DP-VOLs are over-provisioned, the pool might become full before the DP-VOLs become full. Therefore, the used pool capacity must always be monitored to prevent this from happening. A threshold value is set for the used pool capacity. If the used pool capacity exceeds the threshold value, a SIM is reported and a notification is sent to the hosts. If these SIMs are reported, you can resolve the threshold-exceeded status by expanding the pool capacity or by deleting data. For details about the threshold values, see Thresholds on page 200.
• Monitoring used pool capacity reserved for writing: For pools consisting of pool volumes carved from accelerated compression-enabled parity groups, you must monitor both the used pool capacity and the used pool capacity reserved for writing. If the used pool capacity reserved for writing exceeds the threshold value, a SIM is reported. The used pool capacity and the pool capacity reserved for writing are not always the same, and the SIMs are independent of each other. The following conditions can occur:
  ○ The used pool capacity exceeds the threshold, but the used pool capacity reserved for writing is lower than the threshold.
  ○ The used pool capacity is lower than the threshold, but the used pool capacity reserved for writing exceeds the threshold.
  ○ Both used capacities exceed the threshold.

If SIMs are reported, you can resolve the status of the exceeded threshold by expanding the pool capacity or by deleting unwanted data.

If pool volumes that are carved from accelerated compression-enabled parity groups are used to create a new pool, you must estimate the data savings percentage beforehand.

For a pool with a pool volume that has accelerated compression enabled, the used pool capacity and the used pool capacity reserved for writing are monitored. The following threshold values for the used pool capacity and used pool capacity reserved for writing trigger output of SIMs when exceeded (see the sketch after this list):
• Warning Threshold: You can set a value from 1% to 100% in 1% increments. The initial value is 70%.
• Depletion Threshold: You can set a value from 1% to 100% in 1% increments. The initial value is 80%.
• Prefixed Depletion Threshold: The value is set for the used capacity of the parity group. The value is fixed at 90%.
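A minimal sketch of how these thresholds might be evaluated, using the initial warning (70%) and depletion (80%) values; the function and its return strings are illustrative, not part of the product:

    # Evaluate used pool capacity against the warning and depletion thresholds.
    WARNING_THRESHOLD = 70    # %, initial value
    DEPLETION_THRESHOLD = 80  # %, initial value

    def check_pool_usage(used_capacity, total_capacity):
        used_pct = 100.0 * used_capacity / total_capacity
        if used_pct > DEPLETION_THRESHOLD:
            return "depletion threshold exceeded: SIM reported"
        if used_pct > WARNING_THRESHOLD:
            return "warning threshold exceeded: SIM reported"
        return "ok"

    print(check_pool_usage(900, 1000))  # depletion threshold exceeded: SIM reported

If a threshold is exceeded, you resolve it as described above: expand the pool capacity or delete unwanted data.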

The following figure shows the used pool capacity and used pool capacity reserved for writing with threshold values. Hereinafter, the smaller of the free capacity of the pool and the free capacity of the drive is called the remaining free capacity reserved for writing.


Dynamic Tiering

After using Dynamic Provisioning software to virtualize LUs and pool storage into a thin provisioning strategy, the array now has all the elements in place to offer automatic self-optimizing storage tiers provided by Hitachi Dynamic Tiering (HDT). Using Dynamic Tiering, you can configure a storage system with multiple storage tiers using different kinds of data drives, including SSD, SAS, and external volumes. This helps improve both performance and cost efficiency. Dynamic Tiering extends and improves the functionality and value of Dynamic Provisioning. Both use pools of physical storage against which virtual disk capacity, or V-VOLs, is defined. Each thin provisioning pool can be configured to operate either as a DP pool or a Dynamic Tiering pool.

Automated tiering of physical storage is the logical next step for thin provisioned enterprise arrays. Automated tiering is the ability of the array to dynamically monitor and relocate data to the optimum tier of storage. It focuses on data segments rather than entire volumes. The functionality is entirely within the array without any mandated host level involvement. Dynamic Tiering adds another layer to the thin provisioned environment.

Using Dynamic Tiering you can:


• Configure physical storage into tiers consisting of multiple kinds of data drives, including SSD and SAS. Although host volumes are conventionally configured from a common pool, the pool is efficiently configured using multiple kinds of data drives. Data that needs high performance is placed on high-cost drives such as SSDs, which are used as efficiently as possible, while data that is accessed infrequently is placed on lower-cost physical storage.
• Automatically migrate small portions of host volumes to the most suitable tier according to access frequency. Frequently accessed data is migrated to higher-speed, higher-cost data drives (for example, SSD). Infrequently accessed data is migrated to lower-cost, lower-speed data drives (for example, SAS 7.2K) to use the storage efficiently.

Dynamic Tiering simplifies storage administration by automating and eliminating the complexities of efficiently using tiered storage. It automatically moves data on pages in Dynamic Provisioning virtual volumes to the most appropriate storage media, according to workload, to maximize service levels and minimize total cost of storage.

Dynamic Tiering gives you:
• Improved storage resource usage
• Improved return on costly storage tiers
• Reduced storage management effort
• More automation
• Nondisruptive storage management
• Reduced costs
• Improved performance

Overview of tiers

When Dynamic Tiering is not used, data is allocated to only one kind of data drive (typically an expensive high-speed data drive) without regard to the workload, because the volumes are configured with only one kind of data drive. When Dynamic Tiering is used, frequently accessed data is automatically allocated to the higher-speed HDT pool volumes, and data with a low workload is allocated to lower-speed pool volumes. This improves performance and reduces costs.

Dynamic Tiering places the host volume's data across multiple tiers of storage contained in a pool. There can be up to three tiers (high-, medium-, and low-speed layers) in a pool. Dynamic Tiering determines tier usage based on data access levels. It allocates the page with high I/O load to the upper tier, which contains a higher speed drive, and the page with low I/O load to the lower tier, which contains a lower speed drive.

The following figure illustrates the basic tier concept.


When to use Dynamic Tiering

Dynamic Tiering is the best fit in an environment in which Dynamic Provisioning is a good fit.

For detailed information, see Configuring thin provisioning on page 127.

Active flash

The active flash feature of Dynamic Tiering monitors page accesses over a set time frame and attempts to keep the most frequently accessed pages in Tier 1.

The active flash feature monitors each page's access frequency in real time and promotes pages that suddenly become busy from slower media to high-performance flash media.

The active flash feature can be enabled on any Dynamic Tiering pool as long as you have SSD and FMD drives in Tier 1 of the pool. No special configuration beyond what is needed for Dynamic Tiering is required.

Prompt Promotion

A primary goal of Dynamic Tiering and active flash is to keep the most frequently accessed pages in Tier 1. As the workload varies in both the frequency and the type of access (reads or writes), the threshold for moving pages from one tier to another changes. Dynamic Tiering generates a dynamic tier range value that is used to determine which pages need to be in Tier 1 and which need to be in a lower tier.

The active flash feature compares the recent access frequency of each page to the Prompt Promotion threshold to determine whether a page should be promoted to Tier 1. The Prompt Promotion threshold is a dynamic threshold that adjusts based upon changes in workload to make the most efficient use of the SSD and FMD drives. If the recent access frequency for a page meets or exceeds the Prompt Promotion threshold, the page is relocated to Tier 1 without waiting for the next Dynamic Tiering relocation cycle.

Certain types of I/O benefit more from being served by flash media than others. To achieve the best performance gains for such I/O, active flash gives read I/O greater weight than write I/O when calculating the total access frequency for a page.

High Prioritized Demotion

In order to be certain that there is always some room for active flash to do Prompt Promotion of pages to Tier 1, High Prioritized Demotion is used to demote pages out of Tier 1. Pages that have the lowest IOPH are candidates for High Prioritized Demotion. Similar to Prompt Promotion, High Prioritized Demotion does not wait for the current Dynamic Tiering cycle to end to make relocation decisions.

Page demotion is only triggered when:
• Tier 1 free capacity is depleted
• Performance utilization reaches 80%

Peak performance utilization is predefined for a particular media.

Performance utilization of a tier is the I/O load on the tier relative to the maximum amount of I/O it can receive. The maximum I/O load that should be targeted to a tier depends upon the media type used to make the tier. A performance utilization of 100% means that the tier is receiving the maximum amount of I/O it can sustain. When performance utilization reaches about the 60% level, response time of the particular media becomes noticeably slower. See Tier relocation workflow on page 154 for more information on performance utilization.
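A minimal sketch of these decision rules follows. The read weighting factor is an assumption (the actual factor is not documented here), and treating the two demotion triggers as alternatives is also an assumption; only the 80% utilization trigger is stated above:

    # Illustrative active flash decision logic (not the product's implementation).
    DEMOTION_UTILIZATION_PCT = 80  # documented High Prioritized Demotion trigger

    def access_frequency(read_iops, write_iops, read_weight=2.0):
        # Read I/O is given greater weight than write I/O; the weighting
        # factor used here is assumed, not documented.
        return read_weight * read_iops + write_iops

    def should_prompt_promote(page_freq, prompt_promotion_threshold):
        # Promote to Tier 1 immediately, without waiting for the relocation cycle.
        return page_freq >= prompt_promotion_threshold

    def should_demote(tier1_free_capacity, tier1_utilization_pct):
        # High Prioritized Demotion keeps room in Tier 1 for Prompt Promotion.
        return (tier1_free_capacity == 0 or
                tier1_utilization_pct >= DEMOTION_UTILIZATION_PCT)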

The following diagram shows the differences in the way pools are managed between Dynamic Provisioning, Dynamic Tiering, and active flash.


Data retention strategies

After provisioning your system, you can assign access attributes to open-system volumes to protect the volume against read, write, and copy operations and to prevent users from configuring LU paths and command devices. Use the Data Retention Utility to assign access attributes.


For more information, see Configuring access attributes on page 283.

Requirements

System requirements

• The hardware and microcode for the storage system must be configured and ready for use. For information about availability of Virtual Storage Platform F1500, contact your Hitachi Data Systems representative.
• The parity groups in the storage system must be configured and ready for use.
• Hitachi Device Manager - Storage Navigator must be configured and ready for use. See the System Administrator Guide for your storage system.
• The license keys for the provisioning software products must be enabled. See the System Administrator Guide for your storage system.

Shared memory requirements

Additional shared memory is required when Dynamic Provisioning is used and the total capacity of Dynamic Provisioning, Dynamic Tiering, capacity saving (deduplication and compression), and Thin Image pools is 1.1 PB or more.

Additional shared memory is also required if the capacity saving function (deduplication and compression) is used, and if Dynamic Tiering or active flash is used.

Caution: Before shared memory is removed, all Dynamic Provisioning and Dynamic Tiering pools must be deleted.

The following table shows the required shared memory capacity when Dynamic Provisioning is used.

Total capacity of all pools           Required shared memory capacity

1.1 PB or less                        None
More than 1.1 PB, up to 3.4 PB        8 GB
More than 3.4 PB, up to 7.9 PB        24 GB
More than 7.9 PB, up to 12.3 PB       40 GB


Tip: The V-VOL management area is automatically created when shared memory is added. This area is used to store information for associating pool-VOLs and DP-VOLs.

If Dynamic Tiering is used, additional shared memory must be installed. The following table shows the required shared memory capacity when Dynamic Tiering is used.

Total capacity of all pools           Required shared memory capacity

1.1 PB or less                        8 GB
More than 1.1 PB, up to 3.4 PB        16 GB
More than 3.4 PB, up to 7.9 PB        32 GB
More than 7.9 PB, up to 12.3 PB       48 GB
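The two tables lend themselves to a simple lookup; a minimal sketch, with capacities expressed in PB and illustrative names:

    # Required additional shared memory (GB) by total pool capacity (PB),
    # encoding the two tables above.
    DP_BANDS  = [(1.1, 0), (3.4, 8), (7.9, 24), (12.3, 40)]   # Dynamic Provisioning
    HDT_BANDS = [(1.1, 8), (3.4, 16), (7.9, 32), (12.3, 48)]  # with Dynamic Tiering

    def required_shared_memory_gb(total_pool_capacity_pb, dynamic_tiering=False):
        for upper_pb, gb in (HDT_BANDS if dynamic_tiering else DP_BANDS):
            if total_pool_capacity_pb <= upper_pb:
                return gb
        raise ValueError("total pool capacity exceeds 12.3 PB")

    print(required_shared_memory_gb(2.0))                        # 8
    print(required_shared_memory_gb(2.0, dynamic_tiering=True))  # 16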

Cache management device requirements

Cache management devices are used to manage the cache associated with volumes (LDEVs). Each volume (LDEV) requires at least one cache management device. The storage system can manage up to 65,280 cache management devices.

A DP-VOL might require multiple cache management devices. This topic describes how to calculate the number of cache management devices required.

The View Management Resource Usage window displays the number of cache management devices in use and the maximum number of cache management devices. For details, see Viewing the number of cache management devices on page 43.

Calculating the number of cache management devices required for DP-VOLs

The number of cache management devices that a DP-VOL requires depends on the capacity of the V-VOL (capacity of the user area) and the maximum capacity of a cache management device. The maximum capacity of a cache management device depends on the pool attribute (internal volume or external volume) associated with the V-VOL.

The following table shows the relationship between the pool attribute and the maximum capacity of a cache management device.

Pool attribute    Maximum capacity (MB)     Maximum capacity (blocks)   Maximum capacity (cylinders)

Internal volume   711,768.75 (695.08 GB)    1,457,702,400               837,760
External volume   949,659.37 (927.40 GB)    1,944,902,400               1,117,760

Use the following formula to calculate the number of cache management devices that a DP-VOL requires. In this formula, the user-specified capacity is the user area capacity of a V-VOL.

ceil(user-specified-capacity ÷ max-capacity-of-cache-management-device)

ceil: round up the calculated value to the nearest whole number.

Note:
• For a DP-VOL with the deduplication or compression function enabled, use twice the number of cache management devices calculated by this formula.
• For each deduplication system data volume, 14 cache management devices are used.
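A minimal sketch of this calculation, using capacities in MB and the maximum capacities from the table above; the function name and example values are illustrative:

    import math

    # Maximum capacity (MB) of one cache management device, by pool attribute.
    MAX_CAPACITY_MB = {"internal": 711_768.75, "external": 949_659.37}

    def cache_management_devices(user_capacity_mb, pool_attribute,
                                 capacity_saving_enabled=False):
        # ceil(user-specified-capacity / max-capacity-of-cache-management-device)
        devices = math.ceil(user_capacity_mb / MAX_CAPACITY_MB[pool_attribute])
        # DP-VOLs with deduplication or compression enabled need twice the number.
        return 2 * devices if capacity_saving_enabled else devices

    # Example: a 10 TB (10,485,760 MB) DP-VOL on an internal-volume pool.
    print(cache_management_devices(10_485_760, "internal"))        # 15
    print(cache_management_devices(10_485_760, "internal", True))  # 30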

Calculating the number of cache management devices required by a volume that is not a DP-VOL

One volume that is not a DP-VOL requires one cache management device.

Viewing the number of cache management devices

Click Actions and select View Management Resource Usage to display the number of cache management devices in the View Management Resource Usage window.


2 Managing virtual storage machine resources

The virtual storage machine is the unit that is used to manage virtualized resources for each storage system. For software with global storage virtualization functions (for example, global-active device, nondisruptive migration), a virtual storage machine is created in the storage system.

For example, if nondisruptive migration is used to migrate a storage system to a Virtual Storage Platform G1000 storage system, the virtualized storage system is the migration source storage system. The migration source storage system is created in the migration target storage system. In global-active device (GAD), the virtualized storage system is the storage system that contains the secondary volume (S-VOL) of the GAD pair.

□ About virtual storage machines and virtualized resources

□ Provisioning operations for resources in a virtual storage machine

□ Pair operations with virtual storage machine pairs

□ Software operations for resources in a virtual storage machine

□ Editing virtualization management settings


About virtual storage machines and virtualized resources

For operations involving virtualized resources in a virtual storage machine, physical resources must be linked to virtualized resources. For example, when you operate LDEVs in a virtual storage machine, you must specify physical LDEV IDs (not virtual LDEV IDs) that link to resources in a virtual storage machine.

The following describes the relationship between virtual storage machinesand a storage system.

Virtual storage machines are created with operations involving data migrationor the high availability function. Other than those operations, a user cannotcreate virtual storage machines in a storage system. For information onremoving virtual storage machines, see the Command Control Interface Userand Reference Guide.

Information on the virtualized resources of a virtual storage machine appearsin Device Manager - Storage Navigator with associated physical storageinformation. If the information on these resources is not displayed by default,you can change the column settings in the table options.

For information on displaying virtualized resources, see the Command ControlInterface User and Reference Guide.

The following terms about virtualized resources appear in Device Manager -Storage Navigator windows.


Term: LDEVs for which virtualization management is disabled
Description: LDEVs that satisfy both of these conditions:
• The model name and serial number of the virtual storage machine that manages the resource group containing the LDEVs are the same as those of the storage system involved in the operation.
• The values of the virtual LDEV ID and the LDEV ID are the same.

Term: LDEVs for which virtualization management is enabled
Description: LDEVs that satisfy one of these conditions:
• The model name and serial number of the virtual storage machine that manages the resource group containing the LDEVs are different from those of the storage system involved in the operation.
• The model name and serial number of the virtual storage machine that manages the resource group containing the LDEVs are the same as those of the storage system involved in the operation, but the values of the virtual LDEV ID and the LDEV ID are different.

Provisioning operations for resources in a virtual storage machine

For provisioning operations that involve virtualized resources in a virtual storage machine, you can perform provisioning operations that specify conventional physical resources or virtualized resources. However, provisioning operations that specify the IDs of virtualized resources are limited.

For details about provisioning operations for virtualized resources, see the Command Control Interface User and Reference Guide.

Pair operations with virtual storage machine pairs

Specifying virtual IDs in pair operations

You can perform pair operations by specifying both of the following in the HORCM_LDEV parameters of the Command Control Interface configuration definition file:
• Serial number of the virtual storage machine in the Serial# parameter
• Virtual LDEV number in the CU:LDEV(LDEV#) parameter

You can perform conventional pair operations by specifying both of the following in the HORCM_LDEV parameters of the configuration definition file:
• Serial number of the physical storage system in the Serial# parameter
• Physical LDEV number in the CU:LDEV(LDEV#) parameter
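As a minimal sketch of the two styles, an HORCM_LDEV section might look like the following. The group names, device names, serial numbers, LDEV numbers, and MU# values are illustrative only and are not taken from this guide; see the Command Control Interface User and Reference Guide for the authoritative file format.

    HORCM_LDEV
    #dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
    vsm_grp      dev_v      411111    01:10            0
    phy_grp      dev_p      465432    02:20            0

In the vsm_grp entry, Serial# holds the serial number of the virtual storage machine and CU:LDEV(LDEV#) holds the virtual LDEV number; in the phy_grp entry, both values are the physical ones.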


Caution: If the following condition exists, the local copy pair operation cannot be performed by specifying virtual IDs:
• The primary volume and secondary volume are defined differently for the virtual storage machine.

If both of the following conditions exist, remote copy pair operations cannot be performed by specifying virtual IDs:
• The primary volume is an LDEV in a VSP, HUS VM, or USP V/VM storage system.
• The secondary volume is an LDEV in a VSP G1000, VSP G1500, or VSP F1500 storage system.

Caution: Global-active device pair operations that specify the virtual ID cannot be performed.

Displaying pair information

You can create pairs by specifying both of the following in the HORCM_LDEV parameters of the Command Control Interface configuration definition file:
• Serial number of the physical storage system in the Serial# parameter
• Virtual LDEV number in the CU:LDEV(LDEV#) parameter

If the pair is created under the above conditions, the following are displayed as results of executing the pairdisplay command:
• Serial number of the virtual storage machine in the Seq# parameter
• Virtual LDEV number in the LDEV# parameter

You can also create pairs by specifying both of the following in the HORCM_LDEV parameters of the configuration definition file:
• Serial number of the physical storage system in the Serial# parameter
• Physical LDEV number in the CU:LDEV(LDEV#) parameter

If the pair is created under the above conditions, the following are displayed as results of executing the pairdisplay command:
• Serial number of the physical storage system in the Seq# parameter
• Physical LDEV number in the LDEV# parameter

Caution: You can create pairs by specifying both of the following in the HORCM_LDEV parameters of the configuration definition file:
• Serial number of the physical storage system in the Serial# parameter
• Physical LDEV number in the CU:LDEV(LDEV#) parameter

For pairs created under the above conditions, the device information that is recognized by the server and the device information that results from executing the pairdisplay command are different.


Software operations for resources in a virtual storage machine

For data management operations that involve virtualized resources in a virtual storage machine, you can perform data management operations that specify conventional physical resources or virtualized resources. However, data management operations that specify the IDs of virtualized resources are limited.

For details about data management operations for virtualized resources, see the Command Control Interface User and Reference Guide.

Editing virtualization management settings

This section explains how to edit virtualization management settings.

Caution: If the setting of LDEV virtualization management is canceled, Transient is displayed in the Virtual LDEV ID column and Failed is displayed in the Status column in the Task window. To resolve the transient status, perform one of the following operations on the LDEVs:
• Resolve the cause of the failure by addressing the error message in the Task window, and then retry the same operation by using the Edit Virtualization Management Settings window.
• In the Edit Virtualization Management Settings window, set Virtual Management Settings to Disable before applying the setting to the storage system.

Before you begin

You must have the Security Administrator (View & Modify) role to perform this task.

Procedure

1. In the Administration tree, select Resource Groups.
2. Select the resource group that has the volume with the virtualization management settings you want to edit.
3. In the LDEVs tab, select a volume with the virtualization management settings you want to edit.
4. Use either of the following methods to display the Edit Virtualization Management Settings window:
   • Click Edit Virtualization Management Settings.
   • In the Settings menu, select Resource Management, and then select Edit Virtualization Management Settings.


5. In Virtualization Management Settings, select one of the following virtualization management settings:
   • Enable: Virtualization management can be used. You can set an initial virtual LDEV ID or the virtual configuration, or both.
   • Enable (Not Set): Virtualization management can be used. However, you cannot set the initial virtual LDEV ID or virtual configuration.
   • Disable: Virtualization management cannot be used.
6. If you select Enable in Virtualization Management Settings and can set a virtual LDEV ID, set the starting virtual LDEV ID for Initial Virtual LDEV ID.
   A virtual LDEV ID that is not used in the virtual storage machine is assigned at the interval specified in Interval, sequentially starting from the specified virtual LDEV ID.

   Note: If the virtual storage machine is the same as the storage system, assign a virtual LDEV ID that is different from the LDEV ID of the selected LDEV. If the virtual storage machine is the same as the storage system and a virtual LDEV ID that is the same as the LDEV ID of the selected LDEV must be assigned, select Disable in Virtualization Management Settings.

   Caution: If the virtual storage machine is configured with multiple storage systems, a virtual LDEV ID that is already used in another storage system might be assigned. In such a configuration, set the interval so that a virtual LDEV ID that is already used in another storage system is not assigned.

7. If you select Enable in Virtualization Management Settings, select Virtual Configuration. If you want to specify the virtual configuration of the LDEV (to make the configuration different from that of the LDEV), select Specify. If you do not want to specify the virtual configuration of the LDEV (to make the configuration the same as that of the LDEV), select Not Set.
   a. In Emulation Type, select the virtual emulation type. For the virtual emulation type, as with the emulation type, set one of the emulation types that exist in the same group of 32 volumes with LDEV IDs.
   b. Select CVS Settings.
   c. In Number of Concatenated LDEVs, specify the number of concatenated virtual LDEVs with a value from 1 to 36 (decimal number). If you do not concatenate virtual LDEVs, enter 1.
   d. In SSID, specify a virtual SSID with a value from 0004 to FFFE (hexadecimal number).


   Set a virtual SSID for each virtual LDEV address (64, 128, 256) in the virtual storage machine.

   Note: For the virtual configuration, the specified values are set for all selected LDEVs.

8. Click Finish.
9. Enter the task name in Task Name.
10. Click Apply.
    The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears.
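As an illustrative example of the Interval setting (the IDs here are hypothetical, not taken from this guide): if you select three LDEVs and set Initial Virtual LDEV ID to 00:10 with Interval set to 2, the virtual LDEV IDs 00:10, 00:12, and 00:14 are assigned; as described in step 6, an ID in that sequence that is already used in the virtual storage machine is not assigned.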


3 Configuring resource groups

The Storage Administrator can divide a provisioned storage system into resource groups that allow managing the storage system as multiple virtual private storage systems. Configuring resource groups involves creating resource groups, moving storage system resources into the resource groups, and assigning resource groups to user groups. Resource groups can be set up on both open and mainframe systems. The Resource Partition Manager software is required.

□ Resource group strategies

□ System configuration using resource groups

□ Resource groups examples

□ Resource group rules, restrictions, and guidelines

□ Using Resource Partition Manager and other storage products

□ Managing resource groups


Resource group strategies

A storage system can connect to multiple hosts and be shared by multiple divisions in a company or by multiple companies. Many storage administrators from different organizations can access the storage system. Managing the entire storage system can become complex and difficult. Potential problems are that private data might be accessed by other users, or a volume in one organization might be accidentally destroyed by a storage administrator in another organization.

To avoid such problems, use Hitachi Resource Partition Manager software to set up resource groups that allow you to manage one storage system as multiple virtual private storage systems. The storage administrator in each resource group can access only their assigned resources. Resource groups prevent the risk of data leakage or data destruction by a storage administrator in another resource group.

Resources such as LDEVs, parity groups, iSCSI targets, external volumes, ports, and host groups can be assigned to a resource group. These resources can be combined to flexibly compose a virtual private storage system.

Resource groups should be planned and created before creating volumes. For more information, see Configuring resource groups on page 53.

System configuration using resource groups

Configuring resource groups prevents the risk of data leakage or data destruction by a Storage Administrator in another resource group. The Storage Administrator considers and plans which resources should be managed by which users, and then the Security Administrator creates resource groups and assigns each resource to the resource groups.

A resource group is assigned one or more storage system resources. The following resources can be assigned to resource groups:
• LDEV IDs
• Parity groups
• External volumes
• Ports
• Host group IDs
• iSCSI target IDs

Note: Before you create LDEVs, the LDEV IDs can be reserved and assigned to a resource group for future use. Host group numbers can also be reserved and assigned in advance because the number of host groups that can be created on a single port is limited. The iSCSI target numbers can also be reserved and assigned in advance because the number of iSCSI targets that can be created on a single port is limited.


The following tasks provide instructions for configuring resource groups:
• Creating resource groups on page 72
• Editing resource groups on page 73
• Deleting resource groups on page 74

Meta_resource

The meta_resource group is a resource group comprising additional resources (other than external volumes) and the resources that exist on the storage system before Resource Partition Manager is installed. By default, existing resources initially belong to the meta_resource group to ensure compatibility with older software when a system is upgraded to include Resource Partition Manager.

Resource lock

While a task is being processed on a resource, all of the resource groups assigned to the logged-on user are locked for exclusive access.

A secondary window (such as the Basic Information Display) or an operation from the service processor (SVP) locks all of the resource groups in the storage system.

When a resource is locked, a status indicator appears on the Device Manager - Storage Navigator status bar. Click the Resource Locked button to view information about the locked resource.

Resource group assignments

All resource groups are normally assigned to the Security Administrator and the Audit Log Administrator.

Each resource group has a designated Storage Administrator who can access only their assigned resources and cannot access other resources.

All resource groups, to which all resources in the storage system belong, can be assigned to a user group. Configure this in Device Manager - Storage Navigator by setting All Resource Groups Assigned to Yes.

A user who has All Resource Groups Assigned set to Yes can access all resources in the storage system. For example, if a user is both a Security Administrator (View & Modify) and a Storage Administrator (View & Modify), and All Resource Groups Assigned is Yes on that user account, the user can edit the storage settings for all the resources.


If allowing this access becomes a security problem on the storage system, register the following two user accounts and use them for different purposes:
• A user account for a Security Administrator for which All Resource Groups Assigned is set to Yes.
• A user account for a Storage Administrator who does not have all resource groups assigned and has only some of the resource groups assigned.

User groups

User groups and their associated built-in roles are defined in the SVP. A user belongs to one or more user groups. The privileges allowed to a particular user are determined by the user group or groups to which the user belongs.

The Security Administrator assigns resource groups to user groups. A user group might already be configured, or a new user group might be required for certain resources.

For more information, see the System Administrator Guide for your storage system.

Resource groups examples

The following examples illustrate how you can configure resource groups on your storage system:
• Example of resource groups sharing a port on page 56
• Example of resource groups not sharing ports on page 58

Example of resource groups sharing a port

If you have a limited number of ports, you can still operate a storage system effectively by sharing ports using resource groups.

The following example shows the system configuration of an in-house system division providing a virtual private storage system for two divisions. Divisions A and B each use their own assigned parity group, but share a port between the two divisions. The shared port is managed by the system division.


The Security Administrator in the system division creates resource groups for each division in the storage system and assigns them to the respective divisions. The Storage Administrator in Division A can manage the resource groups for Division A, but cannot access the resource groups for Division B. In the same manner, the Storage Administrator in Division B can manage the resource groups for Division B, but cannot access the resource groups for Division A.

The Security Administrator creates a resource group for managing the common resources, and the Storage Administrator in the system division manages the port that is shared between Divisions A and B. The Storage Administrators in Divisions A and B cannot manage the shared port belonging to the resource group for common resources management.

Configuration workflow for resource groups sharing a port

1. The system division forms a plan for creating resource groups and assigning resources to them.
2. The Security Administrator creates the resource groups. For more information, see Creating resource groups on page 72.


3. The Security Administrator creates the user groups. For more information, see the System Administrator Guide or the Hitachi Command Suite User Guide.
4. The Security Administrator assigns the resource groups to the user groups. For more information, see the System Administrator Guide or the Hitachi Command Suite User Guide.
5. The Storage Administrator in the system division sets a port.
6. The Security Administrator assigns resources to the resource groups. For more information, see Editing resource groups on page 73.
7. The Security Administrator assigns the Storage Administrators to the appropriate user groups. For more information, see the System Administrator Guide or the Hitachi Command Suite User Guide.

After the above procedures, the Storage Administrators in Divisions A and B can manage the resource groups assigned to their own division.

Example of resource groups not sharing ports

If you assign ports to each resource group without sharing them, performance on one port can be maintained even if the bulk of I/O is issued to another port.

The following shows a system configuration example of an in-house system division providing a virtual private storage system for two divisions. Divisions A and B each use their own individually assigned ports and parity groups. In this example, they do not share a port.


The Security Administrator in the system division creates resource groups for each division in the storage system and assigns them to the respective divisions. The Storage Administrator in Division A can manage the resource groups for Division A, but cannot access the resource groups for Division B. In the same manner, the Storage Administrator in Division B can manage the resource groups for Division B, but cannot access the resource groups for Division A.

Configuration workflow for resource groups not sharing a port

1. The system division forms a plan for creating resource groups and assigning resources to the groups.
2. The Security Administrator creates the resource groups. For more information, see Creating resource groups on page 72.
3. The Security Administrator creates the user groups. For more information, see the System Administrator Guide or the Hitachi Command Suite User Guide.
4. The Security Administrator assigns the resource groups to user groups. For more information, see the System Administrator Guide or the Hitachi Command Suite User Guide.


5. The Storage Administrator in the system division sets ports.
6. The Security Administrator assigns resources to the resource groups. For more information, see Editing resource groups on page 73.
7. The Security Administrator assigns each Storage Administrator to the appropriate user group. For more information, see the System Administrator Guide or the Hitachi Command Suite User Guide.

After the above procedures, the Storage Administrators in Divisions A and B can access the resource groups allocated to their own division.

Resource group rules, restrictions, and guidelines

Rules
• The maximum number of resource groups that can be created on a storage system is 1023.
• A Storage Administrator with the Security Administrator (View & Modify) role can create resource groups and assign resources to resource groups.
• Resources removed from a resource group are returned to meta_resource.
• Only a Security Administrator (View & Modify) can manage the resources in assigned resource groups.

Restrictions
• No new resources can be added to meta_resource.
• Resources cannot be deleted from meta_resource.
• LDEVs with the same pool ID or journal ID cannot be added to multiple resource groups. When adding LDEVs that are used as pool volumes or journal volumes, add all the LDEVs that have the same pool ID or journal ID at once, for example by using the sort function.
• Host groups that belong to an initiator port cannot be added to a resource group.

Guidelines
• If you are providing virtual private storage systems to different companies, you should not share parity groups, external volumes, or pools if you want to limit the capacity that can be used by each user. When parity groups, external volumes, or pools are shared between multiple users, and one user uses too much capacity of the shared resource, the other users might not be able to create an LDEV.


Using Resource Partition Manager and other storage products

To use Resource Partition Manager with other storage products, the resources that are required for the operation must satisfy specific conditions. The following topics provide information about the specific resource conditions that are required for using each product.

Dynamic Provisioning

The following table provides information about specific Dynamic Provisioning conditions that must be observed when using Resource Partition Manager.

Create LDEVs
    If DP-VOLs are created, the following must be assigned to the Storage Administrator group that is permitted to manage them:
    • LDEV ID
    • Pool-VOL of the pool

Delete LDEVs
    If DP-VOLs are deleted, the following must be assigned to the Storage Administrator group that is permitted to manage them:
    • LDEV ID
    • Pool-VOL of the pool

Create pools, Expand pools
    Volumes to be specified as pool-VOLs must be assigned to the Storage Administrator group permitted to manage them. All the volumes that are specified when creating a pool must belong to the same resource group.

Edit pools, Delete pools
    Pool-VOLs of the specified pool must be assigned to the Storage Administrator group permitted to manage them.

Expand V-VOLs
    You can expand only the DP-VOLs that are assigned to the Storage Administrator group permitted to manage them.

Reclaim zero pages, Stop reclaiming zero pages
    You can reclaim or stop reclaiming zero pages only for the DP-VOLs that are assigned to the Storage Administrator group permitted to manage them.

Encryption License Key

The following table provides information about specific Encryption License Key conditions that must be observed when using Resource Partition Manager.

Edit encryption keys
    When you specify a parity group and open the Edit Encryption window, the specified parity group and the LDEVs carved from the parity group must be assigned to the Storage Administrator group permitted to manage them.
    When you open the Edit Encryption window without specifying a parity group, one or more parity groups and the LDEVs carved from those parity groups must be assigned to the Storage Administrator group permitted to manage them.

LUN Manager

The following table provides information about specific LUN Manager conditions that must be observed when using Resource Partition Manager.

For Fibre Channel

Add LUN paths
    When you specify host groups and open the Add LUN Paths window, the specified host groups must be assigned to the Storage Administrator group permitted to manage them.
    When you specify LDEVs and open the Add LUN Paths window, the specified LDEVs must be assigned to the Storage Administrator group permitted to manage them.

Delete LUN paths
    When you specify a host group and open the Delete LUN Paths window, the specified host group must be assigned to the Storage Administrator group permitted to manage them.
    When you specify LDEVs and open the Delete LUN Paths window, the specified LDEVs must be assigned to the Storage Administrator group permitted to manage them.
    When you select the Delete all defined LUN paths to above LDEVs check box, the host groups of all the alternate paths of the LDEVs displayed in the Selected LUNs table must be assigned to the Storage Administrator group permitted to manage them.

Edit host groups
    The specified host groups and initiator ports must be assigned to the Storage Administrator group permitted to manage them.

Add hosts
    The specified host groups must be assigned to the Storage Administrator group permitted to manage them.

Edit hosts
    The specified host group must be assigned to the Storage Administrator group permitted to manage them.
    When you select the Apply same settings to the HBA WWN of all ports check box, all the host groups where the specified HBA WWNs are registered must be assigned to the Storage Administrator group permitted to manage them.

Remove hosts
    When you select the Remove hosts from all host groups containing the hosts in the storage system check box, all the host groups where the HBA WWNs displayed in the Selected Hosts table are registered must be assigned to the Storage Administrator group permitted to manage them.

Edit ports
    The specified port must be assigned to the Storage Administrator group permitted to manage them. If the port attribute is changed from Target or RCU Target to Initiator or to External, the host group of the port belongs to meta_resource. Therefore, the host group of the port is not displayed in windows.

Create alternative LUN paths
    The specified host groups and all the LDEVs where paths are set to the host groups must be assigned to the Storage Administrator group permitted to manage them.

Copy LUN paths
    The specified host groups and the LDEVs where the paths are set must be assigned to the Storage Administrator group permitted to manage them.

Edit command devices
    LDEVs where the specified paths are set must be assigned to the Storage Administrator group permitted to manage them.

Edit UUIDs
    The specified LDEV must be assigned to the Storage Administrator group permitted to manage them.

Delete UUIDs
    The specified LDEV must be assigned to the Storage Administrator group permitted to manage them.

Create host groups
    When you open the Create Host Groups window by specifying host groups, the specified host groups must be assigned to the Storage Administrator group permitted to manage them.

Delete host groups
    The specified host groups and all the LDEVs where paths are set to the host groups must be assigned to the Storage Administrator group permitted to manage them.

Release Host-Reserved LUNs
    LDEVs where the specified paths are set must be assigned to you.

For iSCSI

Add LUN paths
    When you specify iSCSI targets and open the Add LUN Paths window, the specified iSCSI targets must be assigned to the Storage Administrator group permitted to manage them.
    When you specify LDEVs and open the Add LUN Paths window, the specified LDEVs must be assigned to the Storage Administrator group permitted to manage them.

Delete LUN paths
    When you specify an iSCSI target and open the Delete LUN Paths window, the specified iSCSI target must be assigned to the Storage Administrator group permitted to manage them.
    When you specify LDEVs and open the Delete LUN Paths window, the specified LDEVs must be assigned to the Storage Administrator group permitted to manage them.
    When you select the Delete all defined LUN paths to above LDEVs check box, the iSCSI targets of all the alternate paths of the LDEVs displayed in the Selected LUNs table must be assigned to the Storage Administrator group permitted to manage them.

Add hosts
    The specified iSCSI target must be assigned to the Storage Administrator group permitted to manage them.

Edit hosts
    The specified iSCSI target must be assigned to the Storage Administrator group permitted to manage them.
    When you select the Apply same settings to the HBA WWN of all ports check box, all the iSCSI targets where the specified HBA WWNs are registered must be assigned to the Storage Administrator group permitted to manage them.

Remove hosts
    The specified iSCSI target must be assigned to the Storage Administrator group permitted to manage them.

Edit ports
    The specified port must be assigned to the Storage Administrator group permitted to manage them.

Create alternative LUN paths
    The specified iSCSI target and all the LDEVs where paths are set to the iSCSI target must be assigned to the Storage Administrator group permitted to manage them.

Copy LUN paths
    The specified iSCSI target and the LDEVs where the paths are set must be assigned to the Storage Administrator group permitted to manage them.

Edit command devices
    LDEVs where the specified paths are set must be assigned to the Storage Administrator group permitted to manage them.

Edit UUIDs
    The specified LDEV must be assigned to the Storage Administrator group permitted to manage them.

Delete UUIDs
    The specified LDEV must be assigned to the Storage Administrator group permitted to manage them.

Release Host-Reserved LUNs
    LDEVs where the specified paths are set must be assigned to you.

Create iSCSI targets
    When you open the Create iSCSI Targets window by specifying iSCSI targets, the specified iSCSI targets must be assigned to the Storage Administrator group permitted to manage them.

Edit iSCSI targets
    The specified iSCSI targets and ports must be assigned to the Storage Administrator group permitted to manage them.

Delete iSCSI targets
    The specified iSCSI targets and all the LDEVs where paths are set to the iSCSI targets must be assigned to the Storage Administrator group permitted to manage them.

Performance Monitor

The following table provides information about specific Performance Monitor conditions that must be observed when using Resource Partition Manager.

Add to ports, Add new monitored WWNs, Edit WWNs
    The specified ports must be assigned to the Storage Administrator group permitted to manage them.

ShadowImage

The following table provides information about specific ShadowImage conditions that must be observed when using Resource Partition Manager.

Create pairs
    Both primary volumes and secondary volumes must be assigned to the Storage Administrator group permitted to manage them.

Split pairs, Suspend pairs, Resynchronize pairs, Release pairs
    Primary volumes must be assigned to the Storage Administrator group permitted to manage them.

Thin Image

The following table provides information about specific Thin Image conditions that must be observed when using Resource Partition Manager.

Create LDEVs
    If LDEVs for Thin Image are created, the following must be assigned to the Storage Administrator group that is permitted to manage them:
    • LDEV ID
    • Pool-VOL of the pool

Delete LDEVs
    If LDEVs for Thin Image are deleted, the following must be assigned to the Storage Administrator group that is permitted to manage them:
    • LDEV ID
    • Pool-VOL of the pool

Create pools, Expand pools
    Volumes that are specified when creating or expanding pools must be assigned to the Storage Administrator group that is permitted to manage them. All the volumes that are specified when creating pools must belong to the same resource group.

Edit pools, Delete pools
    Pool-VOLs of the specified pools must be assigned to the Storage Administrator group that is permitted to manage them.

Create pairs
    Both primary volumes and secondary volumes must be assigned to the Storage Administrator group that is permitted to manage them.

Split pairs, Suspend pairs, Resynchronize pairs, Release pairs
    Primary volumes must be assigned to the Storage Administrator group that is permitted to manage them.

TrueCopy

The following table provides information about specific TrueCopy conditions that must be observed when using Resource Partition Manager.

Edit Ports
    Specified ports must be assigned to the user.

Add Remote Connection
    Specified initiator ports must be assigned to the user.

Edit Remote Connection Options
    Operation can be performed with no conditions.

Create Pairs
    Primary volumes must be assigned to the user. Initiator ports of remote paths that are connected with the primary volume in the remote storage must be assigned to the user.

Split Pairs
    Specified primary volumes or secondary volumes must be assigned to the user.

Resync Pairs
    Primary volumes must be assigned to the user.

Delete Pairs
    Specified volumes must be assigned to the user. If primary volumes are specified, the initiator ports of remote paths that are connected with the primary volume in the remote storage must be assigned to the user.

Edit Pair Options
    Primary volumes must be assigned to the user.

Add Remote Paths
    Specified initiator ports must be assigned to the user.

Remove Remote Paths
    Specified initiator ports must be assigned to the user.

Edit Remote Connection Options
    Initiator ports of remote paths that are connected to a specified remote storage must be assigned to the user.

Remove Remote Connections
    Initiator ports of remote paths that are connected to a specified remote storage must be assigned to the user.

Force Delete Pairs
    Specified primary volumes or secondary volumes must be assigned to the user.

Global-active device

The following table provides information about specific global-active device conditions that must be observed when using Resource Partition Manager.

Edit Ports
    Specified ports must be assigned to the user.

Add Remote Connection
    Specified initiator ports must be assigned to the user.

Edit Remote Connection Options
    Operation can be performed with no conditions.

Create Pairs
    Primary volumes must be assigned to the user. Initiator ports of remote paths that are connected with the primary volume in the remote storage must be assigned to the user.

Split Pairs
    Specified primary volumes or secondary volumes must be assigned to the user.

Resync Pairs
    Primary volumes must be assigned to the user.

Delete Pairs
    Specified volumes must be assigned to the user. If primary volumes are specified, the initiator ports of remote paths that are connected with the primary volume in the remote storage must be assigned to the user.

Edit Pair Options
    Primary volumes must be assigned to the user.

Add Remote Paths
    Specified initiator ports must be assigned to the user.

Remove Remote Paths
    Specified initiator ports must be assigned to the user.

Edit Remote Connection Options
    Initiator ports of remote paths that are connected to a specified remote storage must be assigned to the user.

Remove Remote Connections
    Initiator ports of remote paths that are connected to a specified remote storage must be assigned to the user.

Force Delete Pairs
    Specified primary volumes or secondary volumes must be assigned to the user.

Add Quorum Disks
    LDEVs to be set as quorum disks must be assigned to the user.

Remove Quorum Disks
    LDEVs to be set as quorum disks must be assigned to the user.

Universal Replicator

The following table provides information about specific Universal Replicator conditions that must be observed when using Resource Partition Manager.

Edit Ports
    Specified ports must be assigned to the user.

Add Remote Connection
    Specified initiator ports must be assigned to the user.

Add Remote Paths
    Specified initiator ports must be assigned to the user.

Create Journals
    All LDEVs that are specified when creating a journal must belong to the same resource group. Volumes to be assigned to a journal must be assigned to the user.

Assign Journal Volumes
    Volumes to be assigned to a journal must be assigned to the user. All volumes to be assigned to a journal must belong to the same resource group as the existing journal volumes.

Assign MP Blade
    Journal volumes must be assigned to the user.

Edit Remote Connection Options
    Operation can be performed with no conditions.

Create Pairs
    Journal volumes for the pair volumes and primary volumes must be assigned to the user. Initiator ports of remote paths that are connected with the primary volume in the remote storage must be assigned to the user.

Split Pairs
    Specified primary volumes or secondary volumes must be assigned to the user.

Split Mirrors
    All data volumes configured in a mirror must be assigned to the user.

Resync Pairs
    Primary volumes must be assigned to the user.

Resync Mirrors
    All data volumes configured in a mirror must be assigned to the user.

Delete Pairs
    Specified primary volumes or secondary volumes must be assigned to the user. Initiator ports of remote paths that are connected with the primary volume in the remote storage must be assigned to the user.

Delete Mirrors
    All data volumes configured in a mirror must be assigned to the user.

Edit Pair Options
    Primary volumes must be assigned to the user.

Force Delete Pairs
    Specified volumes must be assigned to the user.

Edit Journal Options
    All data volumes that make up the specified journal must be assigned to the user. Journal volumes must be assigned to the user.

Edit Mirror Options
    All data volumes that make up the specified journal must be assigned to the user. Journal volumes must be assigned to the user.

Remove Journals
    Journal volumes must be assigned to the user.

Edit Remote Connection Options
    Initiator ports of remote paths that are connected to a specified remote storage must be assigned to the user.

Remove Remote Paths
    Specified initiator ports must be assigned to the user.

Move LDEVs to other resource groups
    When you move LDEVs used as journal volumes to other resource groups, you must specify all the journal volumes of the journal to which the LDEVs belong.

Assign Remote Command Devices
    Journal volumes must be assigned to the user. Specified remote command devices must be assigned to the user.

Release Remote Command Devices
    Journal volumes must be assigned to the user. Specified remote command devices must be assigned to the user.

Universal Volume Manager

The following table provides information about specific Universal Volume Manager conditions that must be observed when using Resource Partition Manager.

Add external volumes
    When an external volume is created, the volume is created in the resource group to which the external port belongs.
    When you specify a path group and open the Add External Volumes window, all the ports that compose the path group must be assigned to the Storage Administrator group permitted to manage them.

Delete external volumes
    The specified external volume and all the LDEVs allocated to that external volume must be assigned to the Storage Administrator group permitted to manage them.

Disconnect external storage systems
    All the external volumes belonging to the specified external storage system and all the LDEVs allocated to those external volumes must be assigned to the Storage Administrator group permitted to manage them.

Reconnect external storage systems
    All the external volumes belonging to the specified external storage system and all the LDEVs allocated to those external volumes must be assigned to the Storage Administrator group permitted to manage them.

Disconnect external volumes
    The specified external volume and all the LDEVs allocated to the external volume must be assigned to the Storage Administrator group permitted to manage them.

Reconnect external volumes
    The specified external volume and all the LDEVs allocated to the external volume must be assigned to the Storage Administrator group permitted to manage them.

Edit external volumes
    The specified external volume must be assigned to the Storage Administrator group permitted to manage them.

Assign MP Blade
    The specified external volumes and all the ports of the external paths connecting the external volumes must be assigned to the Storage Administrator group permitted to manage them.

Disconnect external paths
    Ports of the specified external paths and all the external volumes connecting with the external paths must be assigned to the Storage Administrator group permitted to manage them.
    When you specify By Ports, all the external paths connecting with the specified ports and all the external volumes connecting with those external paths must be assigned to the Storage Administrator group permitted to manage them.
    When you specify By External WWNs, all the ports of the external paths connecting to the specified external WWN and all the external volumes connecting with those external paths must be assigned to the Storage Administrator group permitted to manage them.

Reconnect external paths
    Ports of the specified external paths and all the external volumes connecting with those external paths must be assigned to the Storage Administrator group permitted to manage them.
    When you specify By Ports, all the external paths connecting with the specified ports and all the external volumes connecting with those external paths must be assigned to the Storage Administrator group permitted to manage them.
    When you specify By External WWNs, all the ports of the external paths connecting to the specified external WWN and all the external volumes connecting with those external paths must be assigned to the Storage Administrator group permitted to manage them.

Edit external WWNs
    All the ports of the external paths connecting to the specified external WWN and all the external volumes connecting with those external paths must be assigned to the Storage Administrator group permitted to manage them.

Edit external path configuration
    Ports of all the external paths composing the specified path group and all the external volumes that belong to the path group must be assigned to the Storage Administrator group permitted to manage them.


Open Volume Management

The following table provides information about specific Open Volume Management conditions that must be observed when using Resource Partition Manager.

Create LDEVs
    When you specify a parity group and open the Create LDEVs window, the parity group must be assigned to the Storage Administrator group permitted to manage them.
    When you create an internal or external volume, the parity group to which the LDEV belongs and the ID of the new LDEV must be assigned to the Storage Administrator group permitted to manage them.

Delete LDEVs
    When deleting an internal or external volume, the deleted LDEV and the parity group to which the LDEV belongs must be assigned to the Storage Administrator group permitted to manage them.

Edit LDEVs
    The specified LDEV must be assigned to the Storage Administrator group permitted to manage them.

Restore LDEVs
    When you specify LDEVs and open the Restore LDEVs window, the specified LDEVs must be assigned to the Storage Administrator group permitted to manage them.
    When you specify a parity group and open the Restore LDEVs window, the specified parity group and all the LDEVs in the parity group must be assigned to the Storage Administrator group permitted to manage them.

Block LDEVs
    When you specify LDEVs and open the Block LDEVs window, the specified LDEVs must be assigned to the Storage Administrator group permitted to manage them.
    When you specify a parity group and open the Block LDEVs window, the specified parity group and all the LDEVs in the parity group must be assigned to the Storage Administrator group permitted to manage them.

Format LDEVs
    When you specify LDEVs and open the Format LDEVs window, the specified LDEVs must be assigned to the Storage Administrator group permitted to manage them.
    When you specify a parity group and open the Format LDEVs window, the specified parity group and all the LDEVs in the parity group must be assigned to the Storage Administrator group permitted to manage them.

Virtual Partition Manager

The following table provides information about specific Virtual Partition Manager conditions that must be observed when using Resource Partition Manager.

Migrate parity groups
    When you specify virtual volumes, the specified LDEVs must be assigned to the Storage Administrator group permitted to manage them.
    When you specify a parity group, the specified parity group must be assigned to the Storage Administrator group permitted to manage them.

Volume Shredder

The following table provides information about specific Volume Shredder conditions that must be observed when using Resource Partition Manager.

Shred LDEVs
    When you specify LDEVs and open the Shred LDEVs window, the specified LDEVs must be assigned to the Storage Administrator group permitted to manage them.
    When you specify a parity group and open the Shred LDEVs window, the specified parity group and all the LDEVs in the parity group must be assigned to the Storage Administrator group permitted to manage them.

Server Priority Manager

The following table provides information about specific Server Priority Manager conditions that must be observed when using Resource Partition Manager.

Set priority of ports (attribute/threshold/upper limit), Release settings on ports by the decrease of ports, Set priority of WWNs (attribute/upper limit), Change WWNs and SPM names, Add WWNs (add WWNs to SPM groups), Delete WWNs (delete WWNs from SPM groups), Add SPM groups and WWNs, Delete SPM groups, Set priority of SPM groups (attribute/upper limit), Rename SPM groups, Add WWNs, Delete WWNs
    The specified ports must be assigned to the Storage Administrator group permitted to manage them.

Initialization, Set threshold
    All ports must be assigned to the Storage Administrator group permitted to manage them.

Managing resource groups

These topics provide information and procedures that you can use to create, edit, and delete resource groups.

Creating resource groups

Note the following restrictions for creating a resource group:
• The maximum number of resource groups that can be created on a storage system is 1023.
• A resource group name can use alphanumeric characters, spaces, and the following symbols: ! # $ % & ' ( ) + - . = @ [ ] ^ _ ` { } ~
• The characters in a resource group name are case-sensitive.
• Duplicate occurrences of the same name are not allowed.
• The name meta_resource cannot be used for user-created resource groups.
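For example (the names here are hypothetical): because resource group names are case-sensitive, RSG_Sales and rsg_sales are treated as two different names, but you cannot create a second resource group also named RSG_Sales because duplicate names are not allowed.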

Before you begin

You must have the Security Administrator (View & Modify) role to perform this task.

Procedure

1. Open the Create Resource Groups window.
   In Hitachi Command Suite:
   a. On the Administration tab, select Resource Groups.
   b. Click Create Resource Groups.
   In Device Manager - Storage Navigator:
   a. In the Explorer pane, expand the Storage Systems tree, click the Administration tab, and then select Resource Groups.
   b. Click Create Resource Groups.
2. In the Create Resource Groups window, enter a resource group name and select a storage system.
3. Add parity groups, LDEVs, iSCSI targets, ports, and host groups as follows:


   a. On the tab for the desired resource type, click the Add resource button.
   b. In the add resource window, select one or more resources, and then click OK.
4. Enter a task name or accept the default, and then click Submit.
   If you select View task status, the Tasks & Alerts tab opens.
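Resource groups can also be created and populated from the command line with the CCI raidcom command. The following is a minimal sketch; the resource group name, LDEV ID, and port shown here are illustrative values, and the available options depend on your CCI version, so see the Command Control Interface Command Reference for the authoritative syntax:

    raidcom add resource -resource_name rsg_division_a
    raidcom add resource -resource_name rsg_division_a -ldev_id 01:10
    raidcom add resource -resource_name rsg_division_a -port CL1-A

The first command creates the resource group; the subsequent commands move the specified LDEV and port out of meta_resource and into the new group.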

Editing resource groups

Note the following restrictions for editing resource groups:
• Only resources allocated to meta_resource can be added to resource groups.
• Resources removed from a resource group are returned to meta_resource.
• No resource can be added to or removed from meta_resource.
• The name of the meta_resource group cannot be changed or used for any resource group other than the meta_resource group.
• Duplicate occurrences of the same name are not allowed.
• Resource group names can include alphanumeric characters, spaces, and the following symbols: ! # $ % & ' ( ) + - . = @ [ ] ^ _ ` { } ~
• Resource group names are case-sensitive.
• LDEVs with the same pool ID or journal ID cannot be added to multiple resource groups or partially removed from a resource group. For example, if two LDEVs belong to the same pool, you must allocate both to the same resource group; you cannot allocate them separately, and you cannot remove LDEV1 while leaving only LDEV2 in the resource group. Use the sort function to sort the LDEVs by pool ID or journal ID, and then select the IDs and add or remove them all at once.
• Host groups that belong to an initiator port cannot be added to a resource group.
• To add or delete DP pool volumes, you must first add or delete DP pools.

Before you begin

You must have the Security Administrator (View & Modify) role to perform this task.

Procedure

1. Open the Edit Resource Group window.
   In Hitachi Command Suite:
   a. On the Administration tab, select Resource Groups.
   b. Click the check box of a Resource Group Name.
   In Device Manager - Storage Navigator:
   a. In the Explorer pane, click the Administration tab, and then select Resource Groups.
   b. Click the check box of a Resource Group Name.
2. Click Edit Resource Group.
   • To change the name of a resource group, enter a new identifier in Name.
   • To add a resource, select the Parity Groups, LDEV IDs, Storage Ports, or Host Group Numbers tab, and then click the add button.
   • To remove a resource, select it on the Parity Groups, LDEV IDs, Storage Ports, or Host Group Numbers tab, and then click the remove button.
3. Enter a task name or accept the default, and then click Submit.
   If you select View task status, the Tasks & Alerts tab opens.

Deleting resource groups

The following resource groups cannot be deleted:
• The meta_resource group
• A resource group that is assigned to a user group
• A resource group that has resources assigned to it

In addition, resource groups included in different resource groups cannot be removed at the same time.

Before you begin

The Security Administrator (View & Modify) role is required to perform this task.

Procedure

1. Click the check box of a Resource Group.
2. Click Delete Resource Groups.
3. In the Delete Resource Groups window, enter a task name or accept the default, and then click Submit.
   If you select View task status, the Tasks & Alerts tab opens.


4 Configuring custom-sized provisioning

Configuring custom-sized provisioning involves creating and configuring custom-size volumes (CVs). CVs are created by dividing a fixed-size volume (FV) into several smaller volumes of arbitrary sizes. This provisioning strategy is suitable for use on both open and mainframe systems. The Virtual LUN software is required to configure custom-sized provisioning on open systems.

□ Virtual LUN functions

□ Virtual LUN specifications

□ Virtual LUN size calculations

□ Enabling accelerated compression

□ Disabling accelerated compression

□ Configuration of interleaved parity groups

□ SSID requirements

□ Creating and deleting volumes

□ Create LDEV function

□ Blocking and restoring LDEVs

□ Formatting LDEVs

□ Assigning an MP blade

□ Viewing LDEVs of ALUs or SLU attribution


Virtual LUN functions

The Virtual LUN function is used to create, configure, or delete a customized volume (LDEV).

The Virtual LUN function is an open-systems function available in Open Volume Management software.

A parity group usually consists of some fixed-size volumes (FVs) and some free space. The number of FVs is determined by the emulation type. A Virtual LUN volume usually consists of at least one FV, one or more customized volumes (CVs), and some free space.

Use the Virtual LUN function to configure variable-sized volumes that efficiently exploit the capacity of a disk. Variable-sized volumes are logical volumes that are divided into volumes smaller than the normal fixed-size volumes. This configuration is desirable when frequently accessed files are distributed across multiple smaller logical volumes. This generally improves data access performance, though file access may be delayed in some instances.

The Virtual LUN function can also divide a logical volume into multiple smaller volumes to reduce unused capacity and provide a more efficient use of space for small volumes such as command devices. The Virtual LUN function can efficiently exploit the capacity of a disk by not wasting capacity on larger volumes when the extra capacity is not needed.

Virtual LUN specifications

Virtual LUN specifications for open systems:

Parameter                                  Fixed-size volumes                   Variable-size volumes

Emulation type                             OPEN-3, OPEN-8, OPEN-9, OPEN-E       OPEN-V
Ability to intermix emulation type         Depends on the track geometry        Depends on the track geometry
Maximum number of volumes (normal          2,048 for RAID 5 (7D+1P), RAID 6     2,048 for RAID 5 (7D+1P), RAID 6
and Virtual LUN) per parity group          (6D+2P), and RAID 6 (14D+2P);        (6D+2P), and RAID 6 (14D+2P);
                                           1,024 for all other RAID levels      1,024 for all other RAID levels
Maximum number of volumes (normal          65,280                               65,280
and Virtual LUN) per storage system
Minimum size for one Virtual LUN volume    36,000 KB (+ control cylinders)      48,000 KB (50 cylinders)
Maximum size for one Virtual LUN volume    See CV capacity by emulation type    See CV capacity by emulation type
                                           for open systems.                    for open systems.
Size increment                             1 MB or 1 block (512 bytes)          1 MB or 1 block (512 bytes)
Disk location for Virtual LUN volumes      Anywhere                             Anywhere

CV capacity by emulation type for open systems

Emulation type*   Minimum CV capacity    Maximum CV capacity                           Number of control cylinders
OPEN-V            48,000 KB (50 cyl)     Internal volume: 3,221,159,680 KB (2.99 TB)   None
                                         External volume: 4,294,967,296 KB (4 TB)
OPEN-3            36,000 KB (50 cyl)     2,403,360 KB                                  5,760 KB (8 cyl)
OPEN-8            36,000 KB (50 cyl)     7,175,520 KB                                  19,440 KB (27 cyl)
OPEN-9            36,000 KB (50 cyl)     7,211,520 KB                                  19,440 KB (27 cyl)
OPEN-E            36,000 KB (50 cyl)     14,226,480 KB                                 13,680 KB (19 cyl)

*Virtual LUN operations are not available for OPEN-L volumes.

Virtual LUN size calculations

When creating a CV, you can specify the capacity of each CV. However, rounding will produce different values for the user-specified CV capacity and the actual entire CV capacity. To estimate the actual capacity of a CV, use a mathematical formula. The following topics explain how to calculate the user area capacity and the entire capacity of a CV.

The capacity of a CV or an LDEV consists of two types of capacity. One type is the user area capacity that stores the user data. The second type is the capacity of all areas that are necessary for an LDEV implementation, including control information. The sum of these two types of capacities is called the entire capacity.

Implemented LDEVs consume the entire capacity from the parity group capacity. Therefore, even if the sum of the user areas of multiple CVs and the user area of one CV are the same size, the remaining free space generated when multiple CVs are created may be smaller than the free space in the parity group when one CV is created.


When using CCI, CVs are created with the specified size, regardless of the capacity calculation. Therefore, even if the same capacity size (for example, 1 TB) appears, the actual capacity might differ between CVs created by CCI and CVs created by Hitachi Device Manager - Storage Navigator.

When you create an LDEV using the Create LDEVs window, the capacity of the LDEV depends on the Capacity Compatibility Mode (Offset boundary) option, even if the specified value is the same. If the Capacity Compatibility Mode (Offset boundary) option is selected, the specified LDEV capacity is offset to conform to the prescribed boundary values, and the LDEV is created at that size. For details about the prescribed boundary values, see Calculating fixed-size open-systems volume size (CV capacity unit is MB), Calculating fixed-size open-systems volume size (CV capacity unit is blocks), Calculating OPEN-V volume size (CV capacity unit is MB), or Calculating OPEN-V volume size (CV capacity unit is blocks).

If the Capacity Compatibility Mode (Offset boundary) option is not selected, the LDEV is created with the specified size. In the storage system, data is managed based on slots, and data protection is performed based on parity stripe units. For an LDEV whose capacity is offset by a boundary, the efficiency of the drive capacity is improved because the LDEV capacity is aligned to the unit of data management. If you want to create copy pairs with a VSP, HUS VM, or earlier storage system, exactly the same LDEV capacity must be used for the volumes in a pair. If the emphasis is on the efficiency of the drive capacity, select the Capacity Compatibility Mode (Offset boundary) option when creating LDEVs. If you want the LDEV capacity to be a specific size, do not select the Capacity Compatibility Mode (Offset boundary) option when creating LDEVs.

Calculating OPEN-V volume size (CV capacity unit is MB)

The methods for calculating the user area capacity and the entire capacity of a CV vary depending on the CV capacity unit that is specified when creating the CV.

To calculate the user area capacity of a CV whose capacity unit is defined as megabytes:

ceil(ceil(user-specified-CV-capacity * 1024 / 64) / 15) * 64 * 15

where
• the value enclosed in ceil( ) must be rounded up to the nearest whole number.
• user-specified-CV-capacity is expressed in megabytes.
• The resulting user area capacity is expressed in kilobytes.

To calculate the entire capacity of a CV:

ceil(user-area-capacity / boundary-value) * boundary-value / 1024

where
• the value enclosed in ceil( ) must be rounded up to the nearest whole number.
• user-area-capacity is expressed in kilobytes.
• boundary-value is expressed in kilobytes. The boundary value depends on volume emulation types and RAID levels (see Boundary values of volumes).
• The resulting entire capacity is expressed in megabytes.
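
The arithmetic above can be made concrete with a short script. The following Python sketch is illustrative only (it is not part of any Hitachi software); the boundary values are taken from the Boundary values of volumes table later in this chapter:

    import math

    # Boundary values (KB) for OPEN-V internal volumes, from the
    # "Boundary values of volumes" table later in this chapter.
    OPEN_V_BOUNDARY_KB = {
        "RAID1(2D+2D)": 1024,
        "RAID5(3D+1P)": 1536,
        "RAID5(7D+1P)": 3584,
        "RAID6(6D+2P)": 3072,
        "RAID6(14D+2P)": 7168,
    }

    def open_v_user_area_kb(cv_capacity_mb: int) -> int:
        """User area capacity (KB) of an OPEN-V CV specified in MB."""
        return math.ceil(math.ceil(cv_capacity_mb * 1024 / 64) / 15) * 64 * 15

    def open_v_entire_capacity_mb(user_area_kb: int, boundary_kb: int) -> float:
        """Entire capacity (MB) consumed from the parity group."""
        return math.ceil(user_area_kb / boundary_kb) * boundary_kb / 1024

    # Example: a user-specified 100 MB OPEN-V CV on RAID 5 (7D+1P).
    user_kb = open_v_user_area_kb(100)  # 102,720 KB
    print(open_v_entire_capacity_mb(user_kb, OPEN_V_BOUNDARY_KB["RAID5(7D+1P)"]))  # 101.5

Note how the entire capacity (101.5 MB) exceeds the user-specified 100 MB because of the rounding described above.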

Calculating OPEN-V volume size (CV capacity unit is blocks)

To calculate the user area capacity of a CV whose capacity unit is defined as blocks:

ceil(user-specified-CV-capacity / 2)

where
• the value enclosed in ceil( ) must be rounded up to the nearest whole number.
• user-specified-CV-capacity is expressed in blocks.
• The resulting user area capacity is expressed in kilobytes.

To calculate the entire capacity of a CV:

ceil(user-specified-CV-capacity / (boundary-value * 2)) * (boundary-value * 2)

where
• the value enclosed in ceil( ) must be rounded up to the nearest whole number.
• user-specified-CV-capacity is expressed in blocks.
• boundary-value is expressed in kilobytes. The boundary value depends on volume emulation types and RAID levels (see Boundary values of volumes).
• The resulting entire capacity is expressed in blocks. To convert the resulting entire capacity into megabytes, divide this capacity by 2,048.
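
The block-unit case follows the same pattern. Again, this Python sketch is illustrative only:

    import math

    def open_v_user_area_kb_from_blocks(cv_blocks: int) -> int:
        """User area capacity (KB) of an OPEN-V CV specified in 512-byte blocks."""
        return math.ceil(cv_blocks / 2)

    def open_v_entire_capacity_blocks(cv_blocks: int, boundary_kb: int) -> int:
        """Entire capacity (blocks); divide by 2,048 to convert to MB."""
        unit = boundary_kb * 2  # boundary value converted from KB to 512-byte blocks
        return math.ceil(cv_blocks / unit) * unit

    # Example: a 204,800-block (100 MB) OPEN-V CV on RAID 5 (7D+1P), boundary 3,584 KB.
    print(open_v_entire_capacity_blocks(204_800, 3_584))  # 207,872 blocks (101.5 MB)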

Calculating fixed-size open-systems volume size (CV capacity unit is MB)

To calculate the user area capacity of a CV whose capacity unit is defined as megabytes:

ceil(ceil(user-specified-CV-capacity * 1024 / capacity-of-a-slot) / 15) * capacity-of-a-slot * 15

where
• the value enclosed in ceil( ) must be rounded up to the nearest whole number.
• user-specified-CV-capacity is expressed in megabytes.
• capacity-of-a-slot is expressed in kilobytes. The capacity of a slot depends on the device emulation type (see Capacity of a slot).
• The resulting user area capacity is expressed in kilobytes.

To calculate the entire capacity of a CV:

ceil((user-area-capacity + management-area-capacity) / boundary-value) * boundary-value / 1024

where
• the value enclosed in ceil( ) must be rounded up to the nearest whole number.
• user-area-capacity is expressed in kilobytes.
• management-area-capacity is expressed in kilobytes. The management area capacity depends on the device emulation type (see Management area capacity of an open-systems volume).
• boundary-value is expressed in kilobytes. The boundary value depends on the device emulation type and RAID level (see Boundary values of volumes).
• The resulting entire capacity is expressed in megabytes.
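
A sketch for the fixed-size case looks like this; the slot, management area, and boundary constants are taken from the tables later in this chapter, and the code is illustrative only:

    import math

    SLOT_KB = 48  # slot capacity for OPEN-3/8/9/E (OPEN-V uses 256 KB)
    MGMT_AREA_KB = {"OPEN-3": 5760, "OPEN-8": 19440, "OPEN-9": 19440, "OPEN-E": 13680}
    BOUNDARY_KB = {"RAID1(2D+2D)": 768, "RAID5(3D+1P)": 1152,
                   "RAID5(7D+1P)": 2688, "RAID6(6D+2P)": 2304}

    def fixed_user_area_kb(cv_capacity_mb: int) -> int:
        """User area capacity (KB) of a non-OPEN-V CV specified in MB."""
        return math.ceil(math.ceil(cv_capacity_mb * 1024 / SLOT_KB) / 15) * SLOT_KB * 15

    def fixed_entire_capacity_mb(user_area_kb: int, emulation: str, raid: str) -> float:
        """Entire capacity (MB), including the management area."""
        total_kb = user_area_kb + MGMT_AREA_KB[emulation]
        b = BOUNDARY_KB[raid]
        return math.ceil(total_kb / b) * b / 1024

    # Example: a user-specified 1,000 MB OPEN-3 CV on RAID 5 (3D+1P).
    ua = fixed_user_area_kb(1000)  # 1,024,560 KB
    print(fixed_entire_capacity_mb(ua, "OPEN-3", "RAID5(3D+1P)"))  # 1006.875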

Calculating fixed-size open-systems volume size (CV capacity unit is blocks)

To calculate the user area capacity of a CV whose capacity unit is defined as blocks:

user-specified-CV-capacity / 2

where
• user-specified-CV-capacity is expressed in blocks.
• The resulting user area capacity is expressed in kilobytes.

To calculate the entire capacity of a CV:

ceil((user-specified-CV-capacity + management-area-capacity * 2) / (boundary-value * 2)) * (boundary-value * 2)

where
• the value enclosed in ceil( ) must be rounded up to the nearest whole number.
• user-specified-CV-capacity is expressed in blocks.
• management-area-capacity is expressed in kilobytes. The management area capacity depends on the volume emulation type (see Management area capacity of an open-systems volume).
• boundary-value is expressed in kilobytes. The boundary value depends on the volume emulation type and RAID level (see Boundary values of volumes).
• The CV capacity recognized by hosts is the same as the CV capacity calculated by the above formula.
• If block is selected as the LDEV capacity unit in the Create LDEVs window and dialog boxes, the window and dialog boxes correctly show the calculated LDEV capacity. However, if MB, GB, or TB is selected as the LDEV capacity unit, the capacity values shown might have a margin of error due to unit conversion. If you need to know the exact LDEV capacity, select block as the capacity unit.
• The resulting entire capacity is expressed in blocks. To convert the resulting entire capacity into megabytes, divide this capacity by 2,048.
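
For the block-unit fixed-size case, a self-contained illustrative sketch follows (constants again from the tables below, here for OPEN-3 on RAID 5 (3D+1P)):

    import math

    MGMT_AREA_KB = 5760   # OPEN-3 management area capacity
    BOUNDARY_KB = 1152    # RAID 5 (3D+1P) boundary for non-OPEN-V volumes

    def fixed_entire_capacity_blocks(cv_blocks: int) -> int:
        """Entire capacity (blocks) of a non-OPEN-V CV specified in blocks."""
        mgmt_blocks = MGMT_AREA_KB * 2   # KB -> 512-byte blocks
        unit = BOUNDARY_KB * 2           # boundary KB -> 512-byte blocks
        return math.ceil((cv_blocks + mgmt_blocks) / unit) * unit

    # A 2,048,000-block (1,000 MB) OPEN-3 CV:
    entire = fixed_entire_capacity_blocks(2_048_000)
    print(entire, entire / 2_048)  # 2,059,776 blocks = 1005.75 MB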

Management area capacity of an open-systems volume

Emulation type    Management area capacity (KB)
OPEN-V            0
OPEN-3            5,760
OPEN-8            19,440
OPEN-9            19,440
OPEN-E            13,680

Boundary values of volumes

The following table provides the boundary values for internal volumes. The boundary value for an external volume is always one kilobyte, regardless of emulation type and RAID level.

                                Boundary value (KB)
Emulation type                  RAID 1     RAID 5     RAID 5     RAID 6     RAID 6
                                (2D+2D)    (3D+1P)    (7D+1P)    (6D+2P)    (14D+2P)
OPEN-xx (except for OPEN-V)     768        1,152      2,688      2,304      -
OPEN-V                          1,024      1,536      3,584      3,072      7,168

Notes:
• xx indicates one or more numbers or letters (for example, OPEN-3).
• Boundary values are expressed in kilobytes.
• The boundary value for an external volume is always one kilobyte, regardless of RAID level.
• A hyphen (-) indicates that the combination is not supported.

Capacity of a slot

Emulation type                  Capacity of a slot
OPEN-xx (except for OPEN-V)     48 KB
OPEN-V                          256 KB

Notes:
• xx indicates one or more numbers or letters (for example, OPEN-3).
• Slot capacity is expressed in kilobytes.

Configuring volumes in a parity group

For RAID 5 (7D+1P), RAID 6 (6D+2P), or RAID 6 (14D+2P), a maximum of 2,048 fixed-size volumes (FVs) and a certain amount of free space are available in one parity group. For other RAID levels, a maximum of 1,024 FVs and a certain amount of free space are available in one parity group. Each parity group has the same configuration and is assigned the same FVs of the same size and RAID level.

The Virtual LUN functions of Delete LDEVs and Create LDEVs are performed on each parity group. Parity groups are also separated from each other by boundary limitations. Therefore, you cannot define a volume across two or more parity groups beyond these boundaries.

As the result of Virtual LUN operations, a parity group contains FVs, CVs, and free spaces that are delimited in logical cylinders. Sequential free spaces are combined into a single free space.

(Figure: example of configuring volumes in a parity group.)

Enabling accelerated compression

Before you begin
• You must have the Storage Administrator role.
• Set the parity group drive type to FMC.
• The emulation type of the target parity group must be OPEN-V.
• The target parity group must be in a storage system (an internal parity group).
• The status of LDEVs in the target parity group must be Normal or Blocked.
• The capacity of the defined internal volumes must be 8 GB or more. (The capacity is equal to or greater than the minimum capacity of a pool volume.)
• If the defined internal volumes are used as pool volumes, those pool volumes must belong to the same Dynamic Provisioning pool or Thin Image pool.
• The defined internal volumes must have no LUN path definitions.
• The defined internal volumes must not be used by Volume Migration.
• The defined internal volumes must not be reserved by the Data Retention Utility.
• The defined internal volumes must not have the Protect, Read Only, or S-VOL Disable attribute of the Data Retention Utility.
• The encryption setting of the parity group must be disabled.
• There must not be any DP-VOL page reserved areas.

Procedure

1. Display the parity group to be operated on.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Click Parity Groups of the target storage system.
   In Device Manager - Storage Navigator, perform one of the following to display the Parity Groups tab:
   • To display all parity groups in the storage system, in the storage system tree, select Parity Groups, then select the Parity Groups tab.
   • To display parity groups related to Internal, in the storage system tree, expand Parity Groups, select Internal, then select the Parity Groups tab.
   After displaying the parity groups, select the target parity group.
2. Open the Edit Parity Groups window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Click Parity Groups.
   c. Select one or more parity groups, and then click Edit Parity Groups.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Parity Groups.
   c. On the Parity Groups tab, select a parity group.
   d. Click More Actions > Edit Parity Groups.
3. In Accelerated Compression, check Enable.

   Caution: When you enable accelerated compression, confirm that the data reduction efficiency can be achieved. For details, see Guidelines for pools when accelerated compression is enabled.

4. Click Finish.
5. In the Confirm window, confirm the settings. In Task Name, type a unique name for this task or accept the default, then click Apply. If Go to tasks window for status is checked, the Tasks window opens.

Disabling accelerated compression

Before you begin
• You must have the Storage Administrator role.
• Set the parity group drive type to FMC.
• You should format the parity group.
• The emulation type of the target parity group must be OPEN-V.
• The target parity group must be in a storage system (an internal parity group).
• The status of LDEVs in the target parity group must be Normal or Blocked.
• The Expanded Space Used column for the parity group must be No.

Procedure

1. Display the parity group for which you want to disable accelerated compression.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Click Parity Groups of the target storage system.
   In Device Manager - Storage Navigator:
   a. Perform one of the following to display the Parity Groups tab:
      - To display all parity groups in the storage system, in the storage system tree, select Parity Groups, then select the Parity Groups tab.
      - To display parity groups related to Internal, in the storage system tree, expand Parity Groups, select Internal, then select the Parity Groups tab.
   b. Select the target parity group.
2. Format the parity group.

   Caution: Before you disable accelerated compression on a parity group, you must format the parity group.

3. Open the Edit Parity Groups window.
   In Hitachi Command Suite:
   • Select one or more parity groups, and then click Edit Parity Groups.
   In Device Manager - Storage Navigator:
   • Click More Actions > Edit Parity Groups.
4. In the Accelerated Compression setting, check Disable.
5. Click Finish.
6. In the Confirm window, confirm the settings. In Task Name, type a unique name for this task or accept the default, then click Apply. If Go to tasks window for status is checked, the Tasks window opens.

Configuration of interleaved parity groups

If the RAID configuration is RAID 1 (2D+2D) or RAID 5 (7D+1P), an interleaved parity group can be created by concatenating multiple parity groups. The following table lists the RAID configurations and the number of parity groups that can be concatenated.

RAID configuration    2-group concatenation    4-group concatenation
RAID 1 (2D+2D)        Available                Not available
RAID 5 (7D+1P)        Available                Available

When parity groups are concatenated, the data in FV or CV LDEVs is allocated across the interleaved parity group. Loads are therefore dispersed by the parity group concatenation, and LDEV performance is improved.

The capacity of a created LDEV is managed by each of the parity groups that are in the interleaved parity group. The maximum capacity of an LDEV is the same as the capacity of the interleaved parity group.

Note: Even if parity groups are concatenated, the total capacity of the interleaved parity group is not increased.

See the following examples:
• Creating the interleaved parity group by concatenating parity groups PG1-1 and PG1-2.
• Creating LDEVs in each parity group that is in the interleaved parity group:
  ○ LDEV 1 in PG1-1
  ○ LDEV 2 in PG1-2


SSID requirements

The storage system is configured with one SSID (storage system ID) for each group of 64 or 256 devices, so there are one or four SSIDs per CU image. Each SSID must be unique within the individual array and also across the entire storage system, including across all LPARs in a sysplex. SSIDs are user-specified and are assigned during storage system installation in hexadecimal format, from 0004 to FEFF.

The following table shows the relationship between controller emulation types and SSIDs.

Controller emulation type    SSID requirement    Virtual LUN support
I-2107                       0004 to FEFF        OPEN-3, OPEN-8, OPEN-9, OPEN-E, and OPEN-V volumes
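
As a quick illustration of the valid range, the following Python sketch (a hypothetical helper, not part of any Hitachi tool) checks whether a user-specified SSID string falls within 0004 to FEFF:

    def is_valid_ssid(ssid: str) -> bool:
        """Return True if ssid is a 4-digit hexadecimal value from 0004 to FEFF."""
        if len(ssid) != 4:
            return False
        try:
            value = int(ssid, 16)
        except ValueError:
            return False
        return 0x0004 <= value <= 0xFEFF

    assert is_valid_ssid("0004") and is_valid_ssid("FEFF")
    assert not is_valid_ssid("0003") and not is_valid_ssid("FF00")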

Creating and deleting volumes

This module describes how to create volumes and delete unallocated volumes.

About creating volumes

You create volumes, then allocate them to a host.


You create volumes by using the available space in a DP pool or parity group. You can then access the volumes when you are ready to allocate them to a host. If, while allocating volumes to a host, no volumes match the specified requirements, volumes are automatically created using the available space. Note that when a basic volume is created, the volume is also formatted at the same time.

Newly created volumes are included in the list of Open-Unallocated volumes until you allocate them to a host.

Because creating volumes takes time, you should create volumes in advance.

Tip: For VSP G1000, VSP G1500, or VSP F1500 storage systems, you can block volumes separated from parity groups, recover parity groups from errors, and format volumes by using the windows available by clicking the System GUI link. To access the System GUI link, on the Resources tab, right-click Parity Groups for the target storage system, and then select System GUI from the menu. Or, click Parity Groups for the target storage system, and then click the System GUI link that appears in the application pane.

Additionally, you can format, block, and restore volumes, configure command devices, edit command devices, assign MP blades, and force delete copy pairs (TC, UR, and GAD) by using the windows available by clicking the System GUI link. To access these windows, on the Resources tab, right-click Volumes for the target storage system, and then select System GUI from the menu.

You can allocate the deduplication system data volume (DSD volume) to a DP pool, or create a DP volume for which the capacity saving function (dedupe and compression) is enabled, by clicking the System GUI link: on the Resources tab, right-click DP Pools for the target storage system, and then select System GUI from the menu.

When you are linking with Data Ingestor v11.3 or later and volumes are created for creating or expanding storage pools, it is recommended that you create volumes using the Create Storage Pool or Expand Storage Pool dialog boxes. Device Manager can automatically specify the number of volumes and capacity, and create volumes following the best practices for configuring storage pools.

Notes on performing quick formats

A quick format might impose a heavy workload on some components and lower the I/O performance of all hosts running in the target storage system.

We recommend running a quick format when system activity is low and major system operations are not running.

We also recommend running a quick format on a maximum of eight volumes at first, and then confirming that the quick format has not lowered host I/O performance. After that, when you perform a quick format on other volumes, we recommend increasing the number of volumes to be formatted in increments of four.

In particular, if the storage system components are configured as follows, the host I/O performance is likely to be lowered when a quick format is performed:
• Components such as cache memory, CHAs (channel adapters), and DKAs (disk adapters) are in the minimum configuration.
• The number of installed components is extremely different among DKCs (controller chassis) or modules within a single storage system.

In these configurations, run a quick format on only one volume at first, review the host I/O performance, and then continue to run a quick format on other volumes one by one.
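
The ramp-up guidance above (eight volumes first, then grow the batch in increments of four) can be expressed as a simple batching loop. This Python sketch is purely illustrative; the function name and interface are assumptions, not part of Hitachi Command Suite:

    def quick_format_batches(volumes, first_batch=8, increment=4):
        """Yield volume batches: 8 volumes first, then 12, 16, and so on."""
        index, size = 0, first_batch
        while index < len(volumes):
            yield volumes[index:index + size]
            index += size
            size += increment

    # Between batches, confirm that host I/O performance has not degraded.
    for batch in quick_format_batches([f"vol{n}" for n in range(30)]):
        print(batch)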

Creating volumes

For registered storage systems, volumes are created so they can be allocated to hosts.

Before you begin
• Identify the storage system.
• Identify the number of volumes to create.
• Identify volume types and capacities.
• Create and format parity groups, and create DP pools.

Procedure

1. On the Resources tab, you can create volumes from several locations:
   • From General Tasks, select Create Volumes.
   • Select the storage system, click Actions, and select Create Volumes.
   • Select the storage system, list existing parity groups, and click Create Volumes.
   • Select the storage system, list existing DP pools, and click the Create Volumes button or select Create Volumes from Actions.
2. In the create volumes dialog box, configure volumes and their characteristics.
3. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.
4. (Optional) Update the task name and provide a description.
5. (Optional) Expand Schedule to specify the task schedule.
   You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.
6. Click Submit.
   If the task is scheduled to run immediately, the process begins.


7. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result

Created volumes are added to the target storage system Open-Unallocated volume list.

Tip: If a task to create multiple volumes fails, some volumes might have been created even when the Status in Volume Information displayed in the Task Details window is other than Completed. Refresh the target storage system information, and then in the Open-Unallocated volume list, confirm whether volumes were created.

Create Volumes dialog box

Newly created volumes are placed in the Open-Unallocated folder of the user-selected storage system until they can be allocated to hosts as needed.

When you enter the minimum required information in this dialog box, the Show Plan button activates to allow you to review the plan. Click the Back button to modify the plan to meet your requirements.

The following table describes the dialog box fields, subfields, and field groups. A field group is a collection of fields that are related to a specific action or configuration. You can minimize and expand field groups by clicking the double-arrow symbol (>>).

As you enter information in a dialog box, if the information is incorrect, errors that include a description of the problem appear at the top of the box.

Table 1 Create volumes dialog box

No. of Volumes — Manually enter the number of volumes to create, or use the arrows (click, or click and hold) to increment or decrement the volume count.

Volume Capacity — This number (in blocks, MB, GB, or TB) is the capacity to allocate for each volume. The total capacity to be allocated is calculated as No. of Volumes * Volume Capacity and is displayed. If the drive type is FMC and accelerated compression is enabled for the parity group, make sure you specify the amount before compression for the volume capacity.

Storage System — This field either displays the selected storage system name, or prompts the user to select the storage system from a list.

Volume Type — Select the volume type to create, for example Basic Volume, Dynamic Provisioning, or Dynamic Tiering. The displayed volume types are determined by your selected storage system. If you do not see an expected volume type, check that you have selected the correct storage system.

Internal/External — When the volume type is Basic Volume or Dynamic Provisioning, volumes can be created using available capacity from the selected storage system (internal) or from an external storage system physically connected to the selected storage system (external).

Pool — When the volume type is Dynamic Tiering, volumes can be created using Select Pool.

>> Advanced Options

Volume Selection — Displayed if a storage system was selected and the volume type is Basic. Specify whether to use parity groups or free space to create a volume. This is only displayed when using VSP G1000, VSP G1500, VSP F1500, Virtual Storage Platform, or Unified Storage VM.

Drive Type — If multiple drive types are displayed, you can designate a specific drive type.

Drive Speed (RPM) — If multiple drive speeds are displayed, you can designate a specific drive speed, or accept the default of any available speed.

Chip Type — If multiple chip types are displayed, you can designate a specific chip type.

RAID Level — If multiple RAID levels are displayed, you can designate a specific RAID level, or accept the default of any available RAID level.

Select Free Space — After selecting a storage system, specifying Basic for the volume type, and Free Space with volume selection, you can specify free space for parity groups when creating volumes.

Parity Group — When the volume type is Basic Volume, an appropriate parity group is selected and displayed for you based on drive type, drive speed, chip type, and RAID level selections. You can also manually select a parity group by clicking Select Parity Group. In the displayed list of parity groups, you can use sort and filter features on columns such as RAID level or unallocated capacity (or other fields) to identify the preferred parity groups.

Pool — When the volume type is Dynamic Provisioning, volumes can be created using Select Pool. The listed pools can vary depending on drive type, drive speed, chip type, and RAID level selections.

Tiering Policy Setting — Displays only if Dynamic Tiering is selected as the volume type, and an HDT pool has been selected with Select Pool (see the previous Volume Selection entry). You can select a specific tier policy for the volume to be allocated, or select All.

New Page Assignment Tier — For VSP G1000, VSP G1500, VSP F1500, VSP, and HUS VM, selecting this option specifies to which hardware tier the new page of an HDT volume is to be assigned with a specified priority. Within the hardware tiers for which the tiering policy is set, specify High for an upper-level hardware tier, Middle for a medium-level hardware tier, and Low for a low-level hardware tier.

Relocation Priority — For VSP G1000, VSP G1500, VSP F1500, VSP, and HUS VM, selecting this option specifies whether you want to prioritize the relocation of the data in HDT volumes.

Full Allocation — For VSP G1000, VSP G1500, and VSP F1500, selecting this option allows you to reserve pages that correspond to the specified capacity when you create volumes.

Label — Volume labels are searchable and are therefore recommended as a way to find volumes. The initial value is not required, but can be useful for differentiation when you are creating multiple volumes. Reflect a label to the storage system is checked by default so that naming is consistent between HCS and the storage system itself.

LDEV ID — An LDEV ID can be assigned automatically or manually.

Format Type — You can request a quick format or a basic format. Note that during a quick format, the load might become concentrated on some components, lowering the I/O performance of all hosts that are running in the target storage system.

About shredding volume data

Before deleting a volume that you no longer need, completely remove the data from the volume to avoid unauthorized use of information. The data can be removed by shredding or reformatting the volume.

Volume data is shredded by overwriting it repeatedly with dummy data, which securely destroys the original data. Some volumes, such as basic volumes and DP volumes that are allocated to hosts or used for replication, cannot be shredded.

Some storage systems do not support the shredding functionality. For those storage systems, delete volume information by reformatting the volumes.

Shredding volume data

Specify one or more volumes that are not allocated to a host and shred the data on the volumes.

Caution: You cannot restore shredded data.

Note: Shred during off hours, such as overnight, so that the shredding process does not adversely affect system performance. To verify the standard required times for shredding, see the Hitachi Volume Shredder User Guide.


Before you begin
• Identify the storage system name that includes the volumes that you want to shred.
• Identify the volumes to be shredded.
• Unallocate the volumes to be shredded from the host.

Procedure

1. On the Resources tab, select Storage Systems.
2. Expand the Storage Systems tree, select a storage system, and from the volumes list, select the volumes whose data you want to shred.
3. Click Shred Volumes.
4. In the Shred Volumes dialog box, check the target volume to be shredded and edit the writing data pattern, if needed.
   If the storage system does not support shredding, the data will be formatted.
5. (Optional) Update the task name and provide a description.
6. (Optional) Expand Schedule to specify the task schedule. You can specify the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.
7. Click Submit. If the task is scheduled to run immediately, the process begins.
8. You can check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result

When the task completes, the data is shredded or reformatted on the volume.

About deleting unallocated volumes

Volumes that are not allocated to any host can be deleted and their space added to the unused capacity of DP pools or parity groups. To completely and securely remove the data, shred the volume data before deleting the volume.

Deleting unallocated volumes

You can delete unallocated volumes from a registered storage system.

Before you begin
• Identify the target storage system.
• Identify the target volumes.
• Shred volume data, if needed.
• Unallocate volumes.


Procedure

1. On the Resources tab, select the target storage system.
2. Expand the tree and select the storage system from which you want to delete volumes.
3. Select Open-Unallocated, or select HDP Pools and then select the HDP Vols tab of the target DP pool.
4. From the volume list, select the volumes that you want to delete, and then click Delete Volumes.
5. Specify additional information, as appropriate:
   • Verify the information that is displayed.
   • Enter a name in Task Name.
   • Specify when to execute the task.
6. Click Submit, and confirm task completion.

Result

When the task completes, deleted volumes no longer appear in the Open-Unallocated or HDP volume list.

Create LDEV function

Use the Create LDEV function to create a custom-size volume (CV). Use Virtual LUN to create an open-systems volume.

For example, to create customized volumes, first you delete FVs to create free space, and then you create one or more custom-size volumes of any size in that free space.

Creating an LDEV

Use this procedure to create one or more internal or external logical volumes (LDEVs) in a selected storage system. You can create multiple LDEVs at once, for example, when you are setting up your storage system. After the storage system is set up, you can add LDEVs as needed.


Before you can create an LDEV in a storage system, you might need to create free space. Before deleting volumes to create free space, remove the LU paths to the open-system volumes.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• You can create LDEVs using any of the following tabs in Hitachi Device Manager - Storage Navigator:
  ○ Parity Groups tab when selecting Parity Groups.
    You can create multiple LDEVs in the specified free space by setting the necessary items collectively. If multiple free spaces are in one parity group, the number of free spaces appears in Total Selected Free Space in the Parity Group Selection section of the Create LDEVs wizard. Confirm the number of free spaces, and then create the LDEVs accordingly.
    For example, if you are creating LDEVs in parity group PG1-1 and it contains two free spaces, 2 appears in Total Selected Free Space. In this case, if you specify 1 in Number of LDEVs per Free Space and continue to create the LDEV, two LDEVs are created because one LDEV is created for each free space.
    If LDEVs are created with the initial settings without confirming the number of free spaces, more LDEVs than necessary can be created. When you create LDEVs, confirm the number of free spaces displayed in the Select Free Spaces window of the Create LDEVs window.
  ○ LDEVs tab when selecting any parity group in Parity Groups.
  ○ LDEVs tab when selecting Logical Devices.

Procedure

1. Open the Logical Devices window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click Volumes, and then select System GUI.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Logical Devices.
2. In the LDEVs pane of the Logical Devices window, click Create LDEVs.
3. In the Create LDEVs window, from the Provisioning Type list, select a provisioning type for the LDEV to be created.
   • If creating internal volumes, select Basic.
   • If creating external volumes, select External.
4. In System Type, select Open to create open-system volumes.


5. From the Emulation Type list, select an emulation type for the selected system type.
6. If creating an internal volume, select the parity group, and then do the following:
   a. From the Drive Type/RPM list in Parity Group Selection, select the drive type and RPM.
   b. From the RAID level list in Parity Group Selection, select the RAID level.
   c. Click Select Free Spaces.
   d. In the Select Free Spaces window, in the Available Free Spaces table, select the free spaces to be assigned to the volumes.
      Do the following, if necessary:
      - To specify the conditions and show the free space, click Filter, specify the conditions, and then click Apply.
      - To specify the unit for capacity and the number of rows to view, click Options.
   e. Click View Physical Location.
   f. In the View Physical Location window, confirm where the selected free space is physically located, and then click Close.
   g. In the Select Free Spaces window, if the selected free spaces have no issues, click OK.
7. Otherwise, if creating an external volume, select the external volume, and then do the following:
   a. Click Select Free Spaces.
   b. In the Select Free Spaces window, in the Available Free Spaces table, select the free space to be assigned to the volumes.
      Do the following, if necessary:
      - To specify the conditions and show the free space, click Filter, specify the conditions, and then click Apply.
      - To specify the unit for capacity and the number of rows to view, click Options.
   c. Click View Physical Location.
   d. In the View Physical Location window, confirm where the selected free space is physically located, and then click Close.
   e. In the Select Free Spaces window, if the selected free spaces have no issues, click OK.
8. If you want to offset the specified LDEV capacity by boundary, set Capacity Compatibility Mode (Offset boundary) to ON.
   • If the emulation type is OPEN-V, Capacity Compatibility Mode (Offset boundary) is set to OFF by default.
   • If the emulation type is other than OPEN-V, Capacity Compatibility Mode (Offset boundary) is set to ON by default.
9. In LDEV Capacity, type the amount of LDEV capacity to be created and select a capacity unit from the list.


Enter the capacity within the range of figures displayed below the text box. You can enter a number with up to 2 digits after the decimal point. You can change the capacity unit from the list.

10. In Number of LDEVs, type the number of LDEVs to be created.
    • If you create an internal volume, Number of LDEVs per Free Space appears.
    • If you create an external volume, Number of LDEVs per External Volume appears.

    Caution: If creating LDEVs in the free space of a parity group with accelerated compression enabled, estimate the LDEV capacity and the number of LDEVs. For details, see Guidelines for pools when accelerated compression is enabled.

11. In LDEV Name, specify a name for this LDEV.
    a. In Prefix, type the characters that will become the fixed characters for the beginning of the LDEV name. The characters are case-sensitive.
    b. In Initial Number, type the initial number that will follow the prefix name.
12. In Format Type, select the format type for the LDEV from the list.
    • For an internal volume, select Normal Format, Quick Format, or No Format. For LDEVs in a parity group with Accelerated Compression enabled, Quick Format cannot be selected. If No Format is selected, format the volume after creating LDEVs.
    • For an external volume, if you create an LDEV whose emulation type is for open systems, select Normal Format or No Format. If the external volume can be used as is, select No Format; the created LDEV can be used without formatting. If the external volume needs to be formatted, select No Format and then format the volume with the external storage system, or select Normal Format.
    • If Quick Format is selected while quick formatting is in progress, host I/Os may be affected. For details, see Quick Format function.
13. Click Options to show more options.
14. In Initial LDEV ID, make sure that an LDEV ID is set. To confirm the used and unavailable numbers, click View LDEV IDs to open the View LDEV IDs window.
    a. In Initial LDEV ID in the Create LDEVs window, click View LDEV IDs.
       In the View LDEV IDs window, the matrix vertical scale represents the second-to-last digit of the LDEV number, and the horizontal scale represents the last digit of the LDEV number. The LDEV IDs table shows the available, used, and disabled LDEV IDs.


In the table, used LDEV numbers appear in blue, unavailable numbers appear in gray, and unused numbers appear in white. LDEV numbers that are unavailable may be already in use, or already assigned to another emulation group (a group of 32 LDEV numbers).

    b. Click Close.
15. In the Create LDEVs window, in SSID, type four digits, in hexadecimal format (0004 to FEFF), for the SSID.
16. To confirm the created SSIDs, click View SSIDs to open the View SSIDs dialog box.
    a. In the Create LDEVs window, in Initial SSID, click View SSIDs.
       In the SSIDs window, the SSIDs table shows the used SSIDs.
    b. Click Close.
17. In the Create LDEVs window, from the MP Blade list, select an MP blade to be used by the LDEVs.
    • To assign a specific MP blade, select the ID of the MP blade.
    • If any MP blade can be assigned, click Auto.
18. In T10 PI, select Enable or Disable.
    The T10 PI attribute can be specified when creating a Basic volume of emulation type OPEN-V.

    Caution: The T10 PI attribute can only be defined during the initial creation of LDEVs. The defined attribute cannot be removed from LDEVs on which it is already set.

19. Click Add.
    The created LDEVs are added to the Selected LDEVs table.
    The Provisioning Type, System Type, Emulation Type, Parity Group Selection, LDEV Capacity, and Number of LDEVs per Free Space or Number of LDEVs per External Volume fields must be set. If these required items are not registered, you cannot click Add.
20. If necessary, change the following LDEV settings:
    • Click Edit SSIDs to open the SSIDs window. If the new LDEV is to be created in the CU, change the SSID to be allocated to the LDEV.
    • Click Change LDEV Settings to open the Change LDEV Settings window.
21. If necessary, delete an LDEV from the Selected LDEVs table.
    Select an LDEV to delete, and then click Remove.
22. Click Finish.
    The Confirm window appears.
    To continue the operation for setting the LU path and defining a logical unit, click Next.
23. In the Task Name text box, type a unique name for the task or accept the default.


You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

24. Click Apply.
    If the Go to tasks window for status check box is selected, the Tasks window appears.

Finding an LDEV ID

When creating volumes, the LDEV ID (LDKC:CU:LDEV) must be specified. Use this procedure to determine the LDEV IDs in use in the storage system so you can specify the correct LDEV. This procedure looks at allocations in the array. The LDEV IDs used must also be configured in the mainframe IO GEN with a device number.

Procedure

1. Open the Logical Devices window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click Volumes, and then select System GUI.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Logical Devices.
2. In the LDEVs pane of the Logical Devices window, click Create LDEVs.
3. In the Create LDEVs window, scroll down to Initial LDEV ID and click View LDEV IDs.
4. In the View LDEV IDs window, review the list to confirm the LDEV IDs.
   The LDEV IDs table shows the available, used, and disabled LDEV IDs. The matrix vertical scale represents the second-to-last digit of the LDEV number, and the horizontal scale represents the last digit of the LDEV number.
   In the table, used LDEV numbers appear in blue, unavailable LDEV numbers appear in gray, and unused LDEV IDs appear in white. LDEV numbers that are unavailable may be already in use, or already assigned to another emulation group (a group of 32 LDEV numbers).
5. Click Close.
   The Create LDEVs window opens.
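
The 32-number emulation grouping mentioned above can be illustrated with a small Python sketch. The parsing of the LDKC:CU:LDEV notation and the helper names are illustrative assumptions, not part of any Hitachi tool:

    def parse_ldev_id(ldev_id: str) -> tuple:
        """Split an 'LDKC:CU:LDEV' hexadecimal string, such as '00:01:2A', into integers."""
        ldkc, cu, ldev = (int(part, 16) for part in ldev_id.split(":"))
        return ldkc, cu, ldev

    def emulation_group(ldev: int) -> range:
        """Return the group of 32 LDEV numbers that a given LDEV number belongs to."""
        start = (ldev // 32) * 32
        return range(start, start + 32)

    ldkc, cu, ldev = parse_ldev_id("00:01:2A")
    print(list(emulation_group(ldev)))  # 0x20 (32) through 0x3F (63)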

Finding an LDEV SSID

When creating the first volumes in a control unit, the LDEV SSIDs must be specified. Use this procedure to determine the SSIDs in use in the storage system so you can specify the correct SSID.


Procedure

1. Open the Logical Devices window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click Volumes, and then select System GUI.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Logical Devices.
2. In the LDEVs pane of the Logical Devices window, click Create LDEVs.
3. In the Create LDEVs window, scroll down to Initial SSIDs and click View SSIDs.
4. In the SSIDs window, review the list to confirm the LDEV SSIDs. The SSIDs table shows the SSIDs in use in the system.
5. Click Close. The Create LDEVs window opens.

Editing an LDEV SSID

Before registering an LDEV, you may need to edit the LDEV SSID. If the first LDEV in a CU is specified as part of the current operation, the SSID value can be changed. If an LDEV already exists in the CU, the SSID value cannot be changed.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Logical Devices window.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Right-click Volumes, and then select System GUI.

In Device Manager - Storage Navigator:

a. Click Storage Systems, and then expand the Storage Systems tree.

b. Click Logical Devices.

2. In the LDEVs pane of the Logical Devices window, click Create LDEVs.

3. In the Create LDEVs window, click Edit SSIDs in the Selected LDEVs pane.

4. In the Edit SSIDs window, review the SSIDs table.

5. To change an SSID, select an LDEV, and then click Change SSIDs.

6. In the Change SSIDs window, type the new SSID, and then click OK.


7. In the Edit SSIDs window, click OK.

8. In the Create LDEVs window, click Finish.

9. In the Confirm window, click Apply.

The new SSID is registered.

If the Go to tasks window for status check box is selected, the Tasks window appears.

Changing LDEV settings

Before registering an LDEV, you may need to change the LDEV settings.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Logical Devices window.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Right-click Volumes, and then select System GUI.

In Device Manager - Storage Navigator:

a. Click Storage Systems, and then expand the Storage Systems tree.

b. Click Logical Devices.

2. In the LDEVs pane of the Logical Devices window, click Create LDEVs.

3. In the Create LDEVs window, in the Selected LDEVs table, select an LDEV, and then click Change LDEV Settings.

4. In the Change LDEV Settings window, change the setting of LDEV Name, Initial LDEV ID, T10 PI, or MP Blade.

• If you change LDEV Name, specify the prefix characters and the initial number for this LDEV.

• If you change Initial LDEV ID, specify the LDKC, CU, and LDEV numbers and the Interval. To confirm which LDEV IDs are in use, click View LDEV IDs and review them in the View LDEV IDs window. (A sketch of how an initial ID and interval generate a series of IDs follows this procedure.)

• If you change MP Blade, click the list and specify the MP blade ID. To assign a specific MP blade, select its MP blade ID; to let the system choose one, click Auto.

• If you change T10 PI, select Enable or Disable. The T10 PI attribute can be changed only if the provisioning type is Basic and the emulation type is OPEN-V.

5. Click OK.

6. Click Finish.

The Confirm window appears.


7. In the Task Name text box, type a unique name for the task or accept the default.

You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

8. Click Apply.

If the Go to tasks window for status check box is selected, the Tasks window appears.
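As noted in step 4, an initial LDEV ID plus an interval yields a series of IDs. The following Python sketch is purely illustrative: the helper name is the author's own, and it assumes Interval means the step between consecutive LDEV numbers, which you should verify against your system's behavior:

    def ldev_id_series(ldkc, cu, ldev, interval, count):
        # Yield LDKC:CU:LDEV strings, stepping the LDEV number by the interval.
        for i in range(count):
            n = ldev + i * interval
            yield "{:02X}:{:02X}:{:02X}".format(ldkc, cu, n)

    print(list(ldev_id_series(0x00, 0x01, 0x00, 2, 4)))
    # ['00:01:00', '00:01:02', '00:01:04', '00:01:06']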

Removing an LDEV to be registered

If you do not want to register an LDEV that is scheduled to be registered, you can remove it from the registration task.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Logical Devices window.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Right-click Volumes, and then select System GUI.

In Device Manager - Storage Navigator:

a. Click Storage Systems, and then expand the Storage Systems tree.

b. Click Logical Devices.

2. In the LDEVs pane of the Logical Devices window, click Create LDEVs.

3. In the Selected LDEVs pane of the Create LDEVs window, select an LDEV, and then click Remove.

A message appears asking whether you want to remove the selected row or rows. To remove the row, click OK.

4. Click Finish.

5. In the Confirm window, click Apply.

The LDEV is removed from the registration task. If Go to tasks window for status is checked, the Tasks window opens.

Blocking and restoring LDEVs

Before you format or shred a registered LDEV, you must block it.

Blocking LDEVs

Before you format or shred a registered LDEV, the LDEV must be blocked.


Use this procedure to block internal and external volumes from any of the following tabs:

• LDEVs tab when you make a selection from Parity Groups.
• LDEVs tab when you select Logical Devices.
• Virtual Volumes tab when you select a Pool.

Before you begin

• The Storage Administrator (Provisioning) role is required to perform this task.

• If you are blocking a DP-VOL whose Capacity Saving is set to Deduplication and Compression, first change the Capacity Saving setting to Disabled before blocking the volume.

Procedure

1. Open the LDEVs tab.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Right-click Volumes, and then select System GUI.

In Device Manager - Storage Navigator:

a. Click Storage Systems, and then expand the Storage Systems tree.

b. Click Logical Devices.

2. If Blocked does not appear in the Status column, use the following steps to block the LDEV. If Blocked does appear in the column, skip the remaining steps.

3. From the table, select the LDEV ID of the LDEV you want to block.

4. Click More Actions and select Block LDEVs.

5. Note the settings in the Confirm window, enter a unique Task Name or accept the default, and then click Apply.

If Go to tasks window for status is checked, the Tasks window opens.

Blocking LDEVs in a parity group

Use this procedure to block LDEVs in a parity group from one of the following tabs:

• Parity Groups tab when you select Parity Groups from the Storage Systems tree of the Device Manager - Storage Navigator main window.
• Parity Groups tab when you select a parity group from Parity Groups in the Storage Systems tree.
• LDEVs tab when you select Logical Devices.


Before you begin

• The Storage Administrator (Provisioning) role is required to perform this task.

• If you are blocking a DP-VOL whose Capacity Saving is set to Deduplication and Compression, first change the Capacity Saving setting to Disabled before blocking the volume.

Procedure

1. Open the Parity Groups window.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Right-click Parity Groups, and then select System GUI.

In Device Manager - Storage Navigator:

a. Click Storage Systems, and then expand the Storage Systems tree.

b. Click Parity Groups.

2. If Blocked does not appear in the Status column, use the following steps to block the LDEVs. If Blocked does appear in the column, skip the remaining steps.

3. In the Parity Groups window, select the Parity Group ID of the parity group with the LDEVs you want to block. You can select multiple parity groups that are listed together or separately.

4. Click More Actions and select Block LDEVs.

5. Note the settings in the Confirm window, enter a unique Task Name or accept the default, and then click Apply.

If Go to tasks window for status is checked, the Tasks window opens.

Block LDEVs window

Use this window to block specific LDEVs. The data on the LDEV cannot be accessed when the LDEV is blocked.


Item: Description

LDEV ID: LDEV identifier, which is the combination of LDKC, CU, and LDEV.

LDEV Name: LDEV name.

Parity Group ID: Parity group identifier.

Pool Name (ID): Pool name and pool identifier.

Emulation Type: Emulation type.

Capacity: LDEV capacity.

Provisioning Type: Provisioning type assigned to the LDEV.
• Basic: Internal volume.
• DP: DP-VOL.
• External: External volume.
• Snapshot: Thin Image volume.
• ALU: LDEV with the ALU attribution.

Attribute: Attribute of the LDEV.
• Command Device: CCI command device.
• Remote Command Device: Remote command device for CCI.
• TSE: TSE-VOL for Compatible FlashCopy® SE.
• ALU: LDEV with the ALU attribution.
• SLU: LDEV with the SLU attribution.
• Data Direct Mapping: LDEV with the data direct mapping attribute.
• Deduplication System Data Volume: LDEV used to manage data deduplication.
• - (hyphen): LDEV for which the attribute is not defined.

Restoring blocked LDEVs

Use this procedure to restore LDEVs from any of the following tabs:

• LDEVs tab when you make a selection from Parity Groups.
• LDEVs tab when you select Logical Devices.
• Virtual Volumes tab when you select a Pool.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Logical Devices window.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Right-click Volumes, and then select System GUI.

In Device Manager - Storage Navigator:

a. Click Storage Systems, and then expand the Storage Systems tree.

b. Click Logical Devices.

2. If Blocked appears in the Status column, use the following steps to restore the LDEV. If Blocked does not appear in the column, skip the remaining steps.

3. In the Logical Devices window, select the LDEV ID of the LDEV you want to restore. You can select multiple LDEVs that are listed together or separately.

4. Block the LDEV to be restored. For information about blocking internal volumes, see Blocking LDEVs on page 101. For information about blocking external volumes, see the Hitachi Universal Volume Manager User Guide.

5. Click More Actions and select Restore LDEVs.

6. Note the settings in the Confirm window, enter a unique Task Name or accept the default, and click Apply.

If Go to tasks window for status is checked, the Tasks window opens.

Restoring blocked LDEVs in a parity group

Use this procedure to restore blocked LDEVs in a parity group from any of the following tabs:


• Parity Groups tab when you select Parity Groups from the Storage Systems tree of the Device Manager - Storage Navigator main window.
• Parity Groups tab when you select a parity group from Parity Groups in the Storage Systems tree.
• LDEVs tab when you select Logical Devices.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Parity Groups window.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Right-click Parity Groups, and then select System GUI.

In Device Manager - Storage Navigator:

a. Click Storage Systems, and then expand the Storage Systems tree.

b. Click Parity Groups.

2. If Blocked appears in the Status column, use the following steps to restore the LDEVs. If Blocked does not appear in the column, skip the remaining steps.

3. In the Parity Groups window, select the Parity Group ID of the parity group with the LDEVs you want to restore. You can select multiple parity groups that are listed together or separately.

4. Block the LDEVs to be restored. For information about blocking internal volumes, see Blocking LDEVs on page 101. For information about blocking external volumes, see the Hitachi Universal Volume Manager User Guide.

5. Click More Actions and select Restore LDEVs.

6. Note the settings in the Confirm window, enter a unique Task Name or accept the default, and click Apply.

If Go to tasks window for status is checked, the Tasks window opens.

Restore LDEVs window

Use this window to recover blocked LDEVs.


Item: Description

LDEV ID: LDEV identifier, which is the combination of LDKC, CU, and LDEV.

LDEV Name: LDEV name.

Parity Group ID: Parity group identifier.

Pool Name (ID): Pool name and pool identifier.

Emulation Type: Emulation type.

Capacity: LDEV capacity.

Provisioning Type: Provisioning type assigned to the LDEV.
• Basic: Internal volume.
• DP: DP-VOL.
• External: External volume.
• Snapshot: Thin Image volume.
• ALU: LDEV with the ALU attribution.

Attribute: Attribute of the LDEV.
• Command Device: CCI command device.
• Remote Command Device: Remote command device for CCI.
• JNL VOL: Journal volume for Universal Replicator.
• Quorum Disk: Quorum disk for global-active device.
• TSE: TSE-VOL for Compatible FlashCopy® SE.
• ALU: LDEV with the ALU attribution.
• SLU: LDEV with the SLU attribution.
• Data Direct Mapping: LDEV with the data direct mapping attribute.
• Deduplication System Data Volume: LDEV used to manage data deduplication.
• - (hyphen): LDEV for which the attribute is not defined.

Formatting LDEVs

To initialize LDEVs that are in use, you must format them.

About formatting LDEVs

The LDEV Format function includes Normal Format and Quick Format. These functions format volumes, including external volumes. Before formatting volumes, ensure that the volumes are in blocked status.

The following table lists which formatting functions can be used on which LDEV types.

Formatting function: Corresponding volume

Normal Format: Internal volume, virtual volume, or external volume.

Quick Format: Internal volume other than an LDEV in a parity group with accelerated compression enabled.

Storage system operation when LDEVs are formatted

The storage system acts in one of two ways immediately after an LDEV is added, depending on the default settings in the storage system.

• The storage system automatically formats the added LDEV. This is the default action.
• The storage system blocks the LDEV instead of automatically formatting it.

To confirm or change the default formatting settings on the storage system, contact the administrator. Users who have the Storage Administrator (Provisioning) role can change these default formatting settings.

Quick Format function

The Quick Format function formats internal volumes in the background. While Quick Format is running in the background, you can configure your system before the formatting is completed.

Before using Quick Format to format internal volumes, ensure that the internal volumes are in blocked status.


I/O operations from a host are allowed during Quick Format, but formatting in the background might affect performance.

Because shared resources such as MP blades or cache paths are used during quick format operations, all host I/Os in the storage system may be affected.

In the following quick format scenarios in particular, host I/O performance may decrease because the load concentrates on specific components at the same time:

• Many quick format operations are started at the same time in a configuration that satisfies both of these conditions:
○ The number of modules is one or two.
○ Each module contains the minimum configuration of CPEX (Cache Path control adapter and PCI Express path switch), DKA (disk adapter), and CHA (channel adapter).

• Quick format operations are started in a configuration that satisfies either of these conditions:
○ In a multiple-module configuration, the numbers of CPEXs, DKAs, and CHAs installed in each module are extremely unbalanced between modules. For example, one module contains many CPEXs, DKAs, and CHAs, while the other modules contain the minimum number.
○ Within a module, the numbers of DKAs, CHAs, and other devices that connect to CPEX (Basic) and CPEX (Option) are extremely unbalanced. For example, many DKAs, CHAs, or other devices are connected to CPEX (Basic), while the minimum number are connected to CPEX (Option).

For configurations such as those described above, perform the quick format operation on one LDEV first to confirm that host I/O performance does not decrease. After that, it is strongly recommended that you increase the number of concurrent quick format operations one at a time.

For configurations other than those described above, it is recommended that no more than eight quick format operations be started at the same time. After the first eight or fewer quick formats, start additional quick format operations in increments of four while monitoring host I/O performance.


Quick Format specifications

Item: Description

Preparation for executing the Quick Format feature: The internal volume must be in blocked status.

Maximum number of parity groups that can undergo Quick Format: Up to 72 parity groups can concurrently undergo Quick Format. There is no limit on the number of volumes that can undergo Quick Format.

Concurrent Quick Format operations: While one Quick Format operation is in progress, another Quick Format operation can be performed. A maximum of 72 parity groups can concurrently undergo Quick Format.

Preliminary processing: At the beginning of the Quick Format operation, preliminary processing is performed to generate management information. If a volume is undergoing preliminary processing, the status of the volume is Preparing Quick Format. While preliminary processing is in progress, hosts cannot perform I/O access to the volume.

Blocking and restoring of volumes: If a volume undergoing Quick Format is blocked, the storage system recognizes that the volume is undergoing Quick Format. After the volume is restored, the status of the volume changes to Normal (Quick Format). If all volumes in one or more parity groups undergoing Quick Format are blocked, the displayed number of parity groups undergoing Quick Format decreases by the number of blocked parity groups. However, the number of parity groups that have not undergone and can undergo Quick Format does not increase. To calculate the number of parity groups that have not undergone but can undergo Quick Format, use the following formula:

72 - X - Y

Where:

X = number of parity groups on which Quick Format is being performed.

Y = number of parity groups for which all volumes are blocked during the Quick Format.

Storage system is powered off and back on: The Quick Format operation resumes when power is turned back on.

Restrictions:
• Quick Format cannot be executed on LDEVs in a parity group with accelerated compression enabled, external volumes, virtual volumes, or journal volumes of Universal Replicator.
• The Volume Migration feature and the Quick Restore feature cannot be applied to volumes undergoing Quick Format. If you use Command Control Interface to execute a Volume Migration or Quick Restore operation on volumes undergoing Quick Format, EX_CMDRJE is reported to Command Control Interface. In this case, check the volume status.
• The prestaging feature of Cache Residency Manager cannot be applied to volumes undergoing Quick Format.
• Quick Format cannot be performed on a deduplication system data volume.
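The 72 - X - Y formula above reduces to simple arithmetic. The following Python sketch (function and variable names are the author's own) computes how many parity groups can still start Quick Format:

    MAX_QF_PARITY_GROUPS = 72

    def remaining_qf_parity_groups(in_progress, all_volumes_blocked):
        # in_progress:         X, parity groups on which Quick Format is running
        # all_volumes_blocked: Y, parity groups whose volumes are all blocked
        #                      during Quick Format
        return MAX_QF_PARITY_GROUPS - in_progress - all_volumes_blocked

    print(remaining_qf_parity_groups(10, 2))  # 60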


Formatting LDEVs in a Windows environment

In a Windows environment, both Normal Format and Quick Format are commonly used. In this environment, Quick Format consumes less thin provisioning pool capacity than Normal Format.

On Windows Server 2008, Normal Format issues Write commands to the entire volume (for example, the entire "D" drive). When Write commands are issued, pages corresponding to the entire volume are allocated, so pool capacity equivalent to the entire volume is consumed. In this case, the thin provisioning advantage of reducing capacity is lost.

Quick Format issues Write commands only to management information (for example, index information). Therefore, pages corresponding to the management information areas are allocated, but the consumed capacity is smaller than with Normal Format.
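To see why a full-volume write defeats thin provisioning, consider the 42-MB page allocation unit described in the pool requirements later in this guide. The following Python sketch is a rough estimate (names are the author's own) of the pages a DP pool would allocate if every byte of a DP-VOL were written:

    import math

    DP_PAGE_MB = 42  # Dynamic Provisioning data allocation unit

    def pages_for_full_write(volume_mb):
        # Writing the whole volume allocates a page for every 42-MB region.
        return math.ceil(volume_mb / DP_PAGE_MB)

    # A 100-GB volume fully written by Normal Format:
    print(pages_for_full_write(102400))  # 2439 pages, about 102438 MB of pool space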

Formatting a specific LDEV

Use this procedure to perform Normal formatting on a volume.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Logical Devices window.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Right-click Volumes, and then select System GUI.

In Device Manager - Storage Navigator:

a. Click Storage Systems, and then expand the Storage Systems tree.

b. Click Logical Devices.

2. In the Logical Devices window, select the LDEV ID of the LDEV you want to format.

Note: When you format a deduplication system data volume, you must select only the (one) deduplication system data volume. Do not select any other volumes.

3. Block the LDEV to be formatted. For information about blocking internal volumes, see Blocking LDEVs on page 101. For information about blocking external volumes, see the Hitachi Universal Volume Manager User Guide.


4. Click More Actions, and select Format LDEVs.

5. In the Format LDEVs window, select the format type from the Format Type list, and then click Finish.

6. Note the settings in the Confirm window, enter a unique Task Name or accept the default, and click Apply.

If Go to tasks window for status is checked, the Tasks window opens.

Formatting all LDEVs in a parity group

Use this procedure to perform Normal formatting on all of the volumes (LDEVs) in the parity group you select.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Before you format the LDEVs in the selected parity group, make sure that all the LDEVs in the parity group have been blocked. See Blocking LDEVs on page 101 for blocking an internal volume. See the Hitachi Universal Volume Manager User Guide for blocking an external volume.

Procedure

1. Open the Parity Groups window.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Right-click Parity Groups, and then select System GUI.

In Device Manager - Storage Navigator:

a. Click Storage Systems, and then expand the Storage Systems tree.

b. Click Parity Groups.

2. In the Parity Groups window, select the Parity Group ID of the parity group with the LDEVs you want to format. You can select multiple parity groups that are listed together or separately.

3. Block the LDEVs to be formatted. For information about blocking internal volumes, see Blocking LDEVs on page 101. For information about blocking external volumes, see the Hitachi Universal Volume Manager User Guide.

4. Click More Actions, and select Format LDEVs.

5. In the Format LDEVs window, select the format type from the Format Type list, and then click Finish. In the Confirm window, click Next to go to the next operation.

6. Click Apply.


If Go to tasks window for status is checked, the Tasks window opens.

Format LDEVs wizard

Use this window to format LDEVs. LDEVs must be formatted before you can use the storage space.

Format LDEVs window

Item: Description

Format Type: Select the type of formatting to be used on this LDEV.
• Quick Format (default): Select this to perform quick formatting. This option is available only for formatting an internal volume.
• Write to Control Blocks: Select this when the provisioning type is for a mainframe external volume. The management area of external volumes for mainframe systems will be overwritten. This is the default option for an external volume.
• Normal Format: Select this to perform normal formatting. This option is available for formatting an internal volume, or an external volume whose emulation type is OPEN.

Number of Selected Parity Groups: Number of selected parity groups.

Format LDEVs confirmation window

Confirm the proposed settings, name the task, and then click Apply. The task will be added to the execution queue.


Item: Description

LDEV ID: LDEV identifier, which is the combination of LDKC, CU, and LDEV.

LDEV Name: LDEV name.

Parity Group ID: Parity group identifier.

Pool Name (ID): Pool name and pool identifier.

Emulation Type: Emulation type.

Capacity: LDEV capacity.

Provisioning Type: Provisioning type to be assigned to the LDEV.
• Basic: Internal volume.
• DP: DP-VOL.
• External: External volume.
• Snapshot: Thin Image volume.
• ALU: LDEV with the ALU attribution.

Attribute: Attribute of the LDEV.
• Command Device: CCI command device.
• TSE: TSE-VOL for Compatible FlashCopy® SE.
• ALU: LDEV with the ALU attribution.
• SLU: LDEV with the SLU attribution.
• Data Direct Mapping: LDEV with the data direct mapping attribute.
• Deduplication System Data Volume: LDEV used to manage data deduplication.
• - (hyphen): LDEV for which the attribute is not defined.

Format Type: Type of formatting operation.
• Quick Format: Quick formatting is performed.
• Normal Format: Normal formatting is performed.
• Write to Control Blocks: The management area of external volumes for mainframe systems is overwritten.

Assigning an MP blade


Guidelines for changing the MP blade assigned to an LDEV

Use the following guidelines when changing MP blade assignments:

• Changes to the MP blade ID of an LDEV should be made during off-peak hours when the I/O load is as low as possible. Before and after changes are made, it is recommended that the cache write-pending rate (%) for all CLPRs be lower than 50%. Do not change the MP blade ID when the I/O load is high, for example, during initial copy of ShadowImage, TrueCopy, global-active device, or Universal Replicator.

• When you change the MP blade ID of an LDEV, use Performance Monitor before and after the change to check the load status of the devices. Do not change the MP blade IDs of several LDEVs during a short period of time. As a guideline, at any one time change no more than 10% of the total number, or of the full workload, of the LDEVs assigned to the same MP blade ID.

• After you change the MP blade for an LDEV, wait more than 30 minutes before you try to change the ID again for the same LDEV.
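The 10% guideline above is simple to apply. A minimal Python sketch (the function name is the author's own) that caps the number of LDEVs to reassign from one MP blade at a time:

    def max_ldevs_to_reassign(ldevs_on_blade):
        # No more than 10% of the LDEVs assigned to one MP blade ID at a time.
        return ldevs_on_blade // 10

    print(max_ldevs_to_reassign(256))  # 25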

Assigning an MP blade to a resource

Use this procedure to assign an MP blade to a resource (logical device, external volume, or journal volume).

Before you begin

The Storage Administrator (System Resource Management) role is required to perform this task.

Procedure

1. Open the MP Blades window.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Click Components.

c. In the Components window, select the Chassis ID of the DKC that has the MP blade settings you want to change.

In Device Manager - Storage Navigator:

a. Click Storage Systems, and then expand the Storage Systems tree.

b. Expand Components and click the DKC that has the MP blade settings you want to change.

2. In the MP Blades window, select the MP Blade ID of the MP blade with the settings you want to change.

3. Click Edit MP Blades.

4. In the Edit MP Blades window, disable or enable Auto Assignment. The default value depends on the value set for the selected MP blade.


• Select Enable if the MP blade can be automatically assigned.
• Select Disable if the MP blade cannot be automatically assigned.

5. Click Finish.

The Confirm window appears.

6. In the Task Name text box, type a unique name for the task or accept the default.

You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

7. Click Apply.

If the Go to tasks window for status check box is selected, the Tasks window appears.

Changing the MP blade assigned to an LDEV

Before you begin

The Storage Administrator (System Resource Management) role is required to perform this task.

Caution:

• Changes to the MP blade ID of an LDEV should be made during off-peak hours when the I/O load is as low as possible. Before and after changes are made, it is recommended that the cache write-pending rate (%) for all CLPRs be lower than 50%. Do not change the MP blade ID when the I/O load is high, for example, during initial copy of ShadowImage, TrueCopy, global-active device, or Universal Replicator.

• When you change the MP blade ID of an LDEV, use Performance Monitor before and after the change to check the load status of the devices. Do not change the MP blade IDs of several LDEVs during a short period of time. As a guideline, at any one time change no more than 10% of the total number, or of the full workload, of the LDEVs assigned to the same MP blade ID.

• After you change the MP blade for an LDEV, wait more than 30 minutes before you try to change the ID again for the same LDEV.

Procedure

1. Open the Logical Devices window.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Right-click Volumes, and then select System GUI.

In Device Manager - Storage Navigator:


a. Click Storage Systems, and then expand the Storage Systems tree.

b. Click Logical Devices.

2. In the Logical Devices window, select the LDEV ID of the LDEV that has the MP blade you want to change.

3. Click More Actions, and then select Assign MP Blade.

4. In the Assign MP Blade window, specify the MP blade in MP Blade.

5. Click Finish.

The Confirm window appears.

6. In the Task Name text box, type a unique name for the task or accept the default.

You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

7. Click Apply.

If the Go to tasks window for status check box is selected, the Tasks window appears.

Changing the ALUA mode setting of an LDEV

To use an LDEV with ALUA, the ALUA mode of the LDEV must be enabled.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Logical Devices window.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Right-click Volumes, and then select System GUI.

In Device Manager - Storage Navigator:

a. Click Storage Systems, and then expand the Storage Systems tree.

b. Click Logical Devices.

2. Select the volume for which you want to change the ALUA mode.

3. Click Edit LDEVs.

4. In the Edit LDEVs window, select ALUA Mode, and then click Enable or Disable. If you choose Enable, the LDEV is used with ALUA.

5. Click Finish.

The Confirm window appears.

6. In the Task Name text box, enter the task name.


You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

7. Click Apply.

If the Go to tasks window for status check box is selected, the Tasks window appears.

Components window

Use this window to view information about the controller chassis components in the storage system.

Summary

Item: Description

Number of Controller Chassis: Number of controller chassis.

High Temperature Mode:
• Enabled (16-40 degrees C): High Temperature mode is enabled.
• Disabled (16-32 degrees C): High Temperature mode is disabled.

Power Consumption: Total power consumption of the controller chassis and DKUs. When the power information cannot be acquired because of a failure in the component or network, that power information is not added to the power consumption value. In the following cases, the displayed power consumption value might temporarily be lower:
• When starting the storage system
• After replacing a part of the storage system
• When updating the microcode or after updating the microcode

Edit High Temperature Mode: Opens the Edit Temperature Mode window.

View Temperature Monitor: Opens the Temperature Monitor window.

Components tab

Item: Description

Chassis ID: Chassis identifier of the storage system.

Chassis Type: Chassis type.

Temperature (degrees C): Temperature of the cluster.
• Cluster 1: Temperature of cluster 1.
• Cluster 2: Temperature of cluster 2.
A question mark (?) appears when the temperature information cannot be acquired because of a failure in the component or network.

Export: Opens a window where you can export the configuration information listed in the table to a file that can be used for multiple purposes, such as backup or reporting.

DKC: MP Blades tab

Use this window to view information about MP blades in the storage system.


Summary

Item: Description

Number of MP Blades: Number of MP blades assigned to this component.

MP Blades tab

Item: Description

MP Blade ID: Identifier of the MP blade.

MP Blade Name: Name of the MP blade.

Status: Status of the MP blade.
• Normal: Available.


• Warning: The MP blade is partially blocked.
• Blocked: The MP blade is blocked.
• Failed: The MP blade is in an abnormal status.

Cluster: Cluster number of the MP blade.

Auto Assignment: Indicates whether the MP blade is automatically assigned to resources.
• Enabled: The MP blade is automatically assigned to resources (logical devices, external volumes, and journal volumes).
• Disabled: The MP blade is not automatically assigned to resources.

Edit MP Blades: Opens the Edit MP Blades window.

Export: Opens a window where you can export the configuration information listed in the table to a file that can be used for multiple purposes, such as backup or reporting.

Assign MP Blade wizard

Use this wizard to assign an MP blade that will control the selected resources.

Assign MP Blade window

Use this window to select an MP blade to assign to an LDEV.


Item: Description

MP Blade: Change the MP blade assigned to the LDEV.
• MP-blade-ID: The selected MP blade is assigned to the LDEV.

Assign MP Blade confirmation window

Confirm the proposed settings, name the task, and then click Apply. The task will be added to the execution queue.

Item: Description

LDEV ID: LDEV identifier, which is the combination of LDKC, CU, and LDEV.

LDEV Name: LDEV name.

Parity Group ID: Parity group identifier.

Pool Name (ID): Pool name and pool identifier.

Emulation Type: Emulation type.

Capacity: LDEV capacity.

Provisioning Type: Provisioning type to be assigned to the LDEV.
• Basic: Internal volume.
• DP: DP-VOL.
• External: External volume.
• External MF: Migration volume.
• Snapshot: Thin Image volume.

Attribute: Attribute of the LDEV.
• Command Device: Command device.
• Remote Command Device: Remote command device.
• JNL VOL: Journal volume.
• Pool VOL: Pool volume. The number in parentheses shows the pool ID.
• Quorum Disk: Quorum disk for global-active device.
• ALU: LDEV with the ALU attribution.
• SLU: LDEV with the SLU attribution.
• Data Direct Mapping: LDEV with the data direct mapping attribute.
• Deduplication System Data: Deduplication system data volume.
• - (hyphen): Volume for which the attribute is not defined.

MP Blade ID: MP blade identifier to be set.

Edit MP Blades wizard

Use this wizard to enable or disable automatic assignment of resource loads to the selected MP blades.

Edit MP Blades window


Item: Description

Auto Assignment: Specify whether to automatically assign an MP blade to resources (logical devices, external volumes, and journal volumes).
• Enable: Resources will be automatically assigned to the specified MP blade.
• Disable: Resources will not be automatically assigned to the specified MP blade.

Edit MP Blades confirmation window

Confirm the proposed settings, name the task, and then click Apply. The task will be added to the execution queue.

Item: Description

MP Blade ID: MP blade identifier.

Cluster: Cluster number of the MP blade.

Auto Assignment: Indicates whether automatic assignment of MP blades is in use.
• Enabled: An MP blade is automatically assigned to resources (logical devices, external volumes, and journal volumes).
• Disabled: An MP blade is not automatically assigned to resources.


Viewing LDEVs with the ALU or SLU attribution

Use this procedure to view the ALUs or SLUs of the storage system. The procedure can also be performed on an ESXi host or vCenter Server.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Logical Devices window.

In Hitachi Command Suite:

a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.

b. Right-click Volumes, and then select System GUI.

In Device Manager - Storage Navigator:

a. Click Storage Systems, and then expand the Storage Systems tree.

b. Select Logical Devices.

2. In the LDEVs pane, click More Actions > View ALUs/SLUs.


5 Configuring thin provisioning

Thin provisioning technology allows you to allocate virtual storage capacity based on anticipated future capacity needs, using virtual volumes instead of physical disks. Thin provisioning is an optional provisioning strategy for your storage systems. Thin provisioning is implemented by creating one or more Dynamic Provisioning pools (DP pools) of physical storage space.

□ Dynamic Provisioning overview

□ Dynamic Tiering overview

□ Active flash overview

□ Thin provisioning requirements

□ Using Dynamic Provisioning or Dynamic Tiering or active flash with other software products

□ Dynamic Provisioning workflow

□ Dynamic Tiering and active flash

□ Thresholds

□ Working with pools

□ Working with DP-VOLs

□ Virtualizing storage capacity (DP/HDT)

□ Virtualizing storage tiers (HDT)

□ Monitoring capacity and performance


□ Working with SIMs

□ Enabling data direct mapping for external volumes, pools, and DP-VOLs


Dynamic Provisioning overview

Dynamic Provisioning is an advanced thin-provisioning software product that allows you to save money on storage purchases and reduce storage management expenses.

You can operate Dynamic Provisioning using both Hitachi Device Manager - Storage Navigator software and the Command Control Interface.

Dynamic Tiering overview

Dynamic Tiering is a software product that helps you reduce storage costs and increase storage performance by supporting a volume configured with storage media of different cost and performance capabilities. This support allows you to allocate data areas with heavy I/O loads to higher-speed media and data areas with low I/O loads to lower-speed media. In this way, you can make the best use of the capabilities of the installed storage media. Up to three storage tiers consisting of different types of data drives are supported in a single pool of storage.

Active flash overview

The active flash feature of Dynamic Tiering automatically promotes pages when their access frequency suddenly becomes high.

Based on the functions of Dynamic Tiering, active flash can promote pages to Tier 1 if their latest access frequency suddenly becomes high. The active flash feature can improve Tier 1 I/O performance by reallocating Tier 2 pages whose I/O loads have increased suddenly.

Thin provisioning requirements

License requirements

Before you use Dynamic Provisioning, Dynamic Provisioning must be installed on the storage system. For this, you will need to purchase the Storage Virtualization Operating System (SVOS) license.

Before you use the capacity saving function, Dynamic Provisioning and dedupe and compression must be installed on the storage system. For this, you will need to purchase the Storage Virtualization Operating System (SVOS) license and the dedupe and compression license.

Before you use Dynamic Tiering, Dynamic Provisioning and Dynamic Tiering must be installed on the storage system. For this, you will need to purchase the Storage Virtualization Operating System (SVOS) license and the Dynamic Tiering license.

You need the Dynamic Tiering license to access the total capacity of the pool with the tier function enabled.

For Dynamic Provisioning, Dynamic Tiering, active flash, Dynamic Provisioning for Mainframe, Dynamic Tiering for Mainframe, and active flash for mainframe, the same license capacity as the pool capacity is required. If you are using a pool comprised of pool volumes from an accelerated compression-enabled parity group, you must purchase the license for the physical capacity.

For Dynamic Tiering, active flash, Dynamic Tiering for Mainframe, and active flash for mainframe, the same license capacity as the pool capacity is required. If you are using a pool comprised of pool volumes from an accelerated compression-enabled parity group, you must purchase the license for the physical capacity.

For active flash and active flash for mainframe, the same license capacity as the pool capacity is required. If you are using a pool comprised of pool volumes from an accelerated compression-enabled parity group, you must purchase the license for the physical capacity.

Before you use active flash, the Dynamic Provisioning and Dynamic Tiering software must be installed on the storage system. For this, you will need to purchase the Storage Virtualization Operating System (SVOS) license and the Dynamic Tiering and active flash licenses. You will need the Dynamic Tiering and active flash licenses for the total capacity of the pool with the tier function enabled.

If the DP-VOLs of Dynamic Provisioning or Dynamic Tiering are used as the primary volumes and secondary volumes of ShadowImage, TrueCopy, Universal Replicator, Volume Migration, global-active device, or Thin Image, you will need the ShadowImage, TrueCopy, Universal Replicator, Volume Migration, global-active device, and Thin Image licenses for the total pool capacity in use. If you are using a pool comprised of pool volumes from an accelerated compression-enabled parity group, you must purchase the license for the physical capacity.

If you exceed the licensed capacity, you will be able to use the additional unlicensed capacity for 30 days. For more information about temporary license capacity, see the System Administrator Guide.

Pool requirements

A pool is a set of volumes reserved for storing Dynamic Provisioning write data.

Item: Requirement

Pool capacity: Calculate pool capacity using the following formula (a worked sketch follows this table):

Capacity of the pool (MB) = Total number of pages * 42 - 4200

4200 in the formula is the size of the management area of the pool-VOL with System Area.

Total number of pages = Σ(floor(floor(pool-VOL number of blocks ÷ 512) ÷ 168)) for each pool-VOL.

floor( ): Truncates the value calculated from the formula in parentheses after the decimal point.

The upper limit of the total capacity of all pools is 12.3 PB if shared memory is installed.

Maximum number of pool-VOLs: From 1 to 1,024 volumes (per pool). A volume can be registered as a pool-VOL to one pool only.

Maximum number of pools: Up to a total of 128 pools per storage system. The 128 pools include the following pool types:
• Dynamic Provisioning (including Dynamic Tiering)
• Dynamic Provisioning for Mainframe (including Dynamic Tiering for Mainframe)
• Thin Image
Pool IDs (0 to 127) are assigned as pool identifiers.

Increasing capacity: You can increase pool capacity dynamically. Best practice is to add pool-VOLs to increase capacity by one or more parity groups.

Reducing capacity: You can reduce pool capacity by removing pool-VOLs.

Deleting: You can delete pools that are not associated with any DP-VOLs.

Subscription limit: From 0 to 65534 (%). If you do not specify a value, the subscription is set to unlimited.

Thresholds:
• Warning Threshold: You can set the value between 1% and 100%, in 1% increments. The default is 70%.
• Depletion Threshold: You can set the value between the warning threshold and 100%, in 1% increments. The default is 80%.
• Thresholds cannot be defined for a pool with data direct mapping enabled.

Data allocation unit: 42 MB. The 42-MB page corresponds to a 42-MB continuous area of the DP-VOL. Pages are allocated in the pool only when data has been written to that area of the DP-VOL.

Tier (Dynamic Tiering and active flash): Defined based on the media type (see the data drive type for a Dynamic Tiering pool, below). Maximum of 3 tiers. If active flash is used, SSD must be assigned to the first tier.

Maximum capacity of each tier (Dynamic Tiering and active flash): 4.0 PB (the total capacity of the tiers must be within 4.0 PB).
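As referenced in the pool capacity row above, the formula can be checked with a few lines of Python. This is a minimal sketch (not a Hitachi tool; the function names are the author's own) that assumes each pool-VOL size is given in 512-byte blocks:

    def total_pages(pool_vol_blocks):
        # Sum floor(floor(blocks / 512) / 168) over all pool-VOLs.
        return sum((blocks // 512) // 168 for blocks in pool_vol_blocks)

    def pool_capacity_mb(pool_vol_blocks):
        # Pool capacity (MB) = total pages * 42 - 4200 (management area).
        return total_pages(pool_vol_blocks) * 42 - 4200

    # Example: a single 8-GB pool-VOL (8 GB = 16,777,216 blocks of 512 bytes)
    print(pool_capacity_mb([16 * 1024 * 1024]))  # 3990 (MB)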

Pool-VOL requirements

Pool-VOLs make up a DP pool.

Items Requirements

Volume type Logical volume (LDEV)

While pool-VOLs can coexist with other volumes in the same paritygroup, for best performance:• Pool-VOLs for a pool should not share a parity group with other

volumes.• Pool-VOLs should not be located on concatenated parity groups.

Pool-VOLs cannot be used for any other purpose. For instance, youcannot specify the following volumes as Dynamic Provisioning andDynamic Tiering pool-VOLs:• Volumes used by ShadowImage, Volume Migration, TrueCopy,

global-active device, or Universal Replicator• Volumes defined by Cache Residency Manager• Volumes already registered in Thin Image, Dynamic Provisioning, or

Dynamic Tiering pools• Volumes used as Thin Image P-VOLs or S-VOLs• Volumes reserved by Data Retention Utility• Data Retention Utility volumes with a Protect, Read Only, or S-VOL

Disable attribute• LDEVs whose status is other than Normal, Correction Access, or

Copying.• Command devices• Quorum disks used by global-active device

The following volume cannot be specified as a pool-VOL for DynamicTiering:• An external volume with the data direct mapping attribute enabled.

If pool-VOLs are LDEVs created from the parity group with acceleratedcompression enabled, these pool-VOLs must be applied to one pool.

Emulation type OPEN-V

RAID level for a DynamicProvisioning pool

All RAID levels of pool-VOLs can be added. Pool-VOLs of RAID 5, RAID6, RAID 1, and the external volume can coexist in the same pool. Forpool-VOLs in the same pool:• RAID 6 is the recommended RAID level for pool-VOLs, especially for

a pool where the recovery time of a pool failure due to a drivefailure is not acceptable.

• Pool-VOLs of the same drive type with different RAID levels cancoexist in the same pool. Use the following configuration:○ Set one RAID level for pool-VOLs.

132 Configuring thin provisioningHitachi Virtual Storage Platform G1000, G1500, and F1500 Provisioning Guide for Open Systems

Page 133: Provisioning Guide for Open Systems · Provisioning Guide for Open Systems Hitachi Virtual Storage Platform G1000 and G1500 Hitachi Virtual Storage Platform F1500 Hitachi Data Retention

Items Requirements

If you register pool-VOLs with multiple RAID levels, the I/Operformance depends on the RAID levels of pool-VOLs to beregistered. In that case, note the I/O performance of the disks.

RAID level for a Dynamic Tiering pool: All RAID levels of pool-VOLs can be added. Pool-VOLs of RAID 5, RAID 6, RAID 1, and external volumes can coexist in the same pool. For pool-VOLs in a pool:
• RAID 6 is the recommended RAID level for pool-VOLs, especially for a pool where the recovery time of a pool failure due to a drive failure is not acceptable.
• Pool-VOLs of the same drive type with different RAID levels can coexist in the same pool, but the recommended configuration is to set one RAID level for all pool-VOLs. If you register pool-VOLs with multiple RAID levels, the I/O performance depends on the RAID levels of the registered pool-VOLs. In that case, note the I/O performance of the disks.
• Because RAID 6 is slower than other RAID levels, tiers that use other RAID levels should not be placed under a tier that uses RAID 6.

Data drive type for a Dynamic Provisioning pool: SSD*, SAS15K, SAS10K, SAS7.2K, and external volumes can be used as the data drive type. These data drive types can coexist in the same pool.

Cautions:
• Best practice is for drives of different types not to coexist in the same pool. If multiple pool-VOLs with different drive types are registered in the same pool, the I/O performance depends on the drive type of the pool-VOL to which the page is assigned. Therefore, if different drive types are registered in the same pool, ensure that the required I/O performance is not degraded by the use of less desirable drive types.
• If multiple data drives coexist in the same pool, we recommend not using data drives of the same type with different capacities.

Data drive type for a Dynamic Tiering pool: SAS15K, SAS10K, SAS7.2K, SSD*, and external volumes can be used as the data drive type. These data drive types can coexist in the same pool. If active flash is used, SSD must be installed in advance.

Cautions:
• If multiple data drives coexist in the same pool, we recommend not using data drives of the same type with different capacities.

Volume capacity:
• Internal volume: from 8 GB to 2.9 TB
• External volume: from 8 GB to 4 TB
• External volume with the data direct mapping attribute: from 8 GB to 256 TB

LDEV format: The LDEV format operation can be performed on pool-VOLs if one of these conditions is satisfied:
• No DP-VOL is defined for the pool.
• All DP-VOLs defined for the pool are blocked.

Path definition: You cannot specify a volume with a path definition as a pool-VOL.

* Includes FMC, FMD, and MLC.


DP-VOL requirements

Volume type: DP-VOL (V-VOL). The LDEV number is handled in the same way as for normal volumes.

Emulation type: OPEN-V

Maximum number of DP-VOLs: Up to 63,232 per pool and up to 63,232 volumes per storage system. Any number of available DP-VOLs can be associated with a pool.
• For a pool with data direct mapping enabled: up to 1,023 per pool
• For a pool with Deduplication enabled: 32,632 is the maximum number of DP-VOLs whose Capacity Saving is set to Compression or Deduplication and Compression.
• For a pool with Deduplication disabled: 32,639 is the maximum number of DP-VOLs whose Capacity Saving is set to Compression.

If external volumes and V-VOLs are used, the total number of external volumes and V-VOLs must be 63,232 or fewer.

Volume capacity: From 46.87 MB to 256 TB per volume. For DP-VOLs with data direct mapping enabled, the capacity range is from 8 GB to 256 TB. (See the sketch after this table.)
• TB: 0.01 to 256 (for DP-VOLs with data direct mapping enabled: 0.01 TB to 256 TB)
• GB: 0.04 to 262,144 (for DP-VOLs with data direct mapping enabled: 8 GB to 262,144 GB)
• MB: 46.87 to 268,435,456 (for DP-VOLs with data direct mapping enabled: 8,192 MB to 268,435,456 MB)
• Blocks: 96,000 to 549,755,813,888 (for DP-VOLs with data direct mapping enabled: 16,777,216 blocks to 549,755,813,888 blocks)

Total maximum volume capacity of 12.3 PB per storage system.

Path definition: Available.

LDEV format: Available. Quick Format is not available.

System option mode (SOM) 867 ON: When you format an LDEV on a DP-VOL, the capacity mapped to the DP-VOL is released to the pool as free space.

When you format a DP-VOL, the storage system releases the allocated page area in the DP-VOL. The quick format operation cannot be performed. If the LDEV format is applied to V-VOLs that are enabled for full allocation, the used capacity of the pool is not changed before the LDEV format is applied.

Cautions:
• For a DP-VOL with deduplication and compression enabled, a deduplication system data volume whose LDEV status is Blocked cannot be formatted.
• For a DP-VOL with deduplication and compression enabled, a deduplication system data volume whose capacity saving status is Failed cannot be formatted.
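The block, MB, GB, and TB ranges in the volume capacity item above express the same limits in different units (1 block = 512 bytes, so 96,000 blocks = 46.875 MB and 549,755,813,888 blocks = 256 TB). The following Python sketch, with hypothetical helper names, checks a requested DP-VOL size against the documented range:

BLOCK_BYTES = 512

MIN_BLOCKS = 96_000                # 46.87 MB
MAX_BLOCKS = 549_755_813_888       # 256 TB
DDM_MIN_BLOCKS = 16_777_216        # 8 GB, data direct mapping enabled

def dpvol_blocks_valid(blocks, data_direct_mapping=False):
    """Check a requested DP-VOL capacity (in 512-byte blocks) against the range above."""
    lower = DDM_MIN_BLOCKS if data_direct_mapping else MIN_BLOCKS
    return lower <= blocks <= MAX_BLOCKS

print(MIN_BLOCKS * BLOCK_BYTES / 2**20)   # 46.875 -> the 46.87-MB minimum
print(MAX_BLOCKS * BLOCK_BYTES / 2**40)   # 256.0  -> the 256-TB maximum
print(dpvol_blocks_valid(96_000))         # True
print(dpvol_blocks_valid(95_999))         # False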

Deduplication system data volume requirements

When you enable deduplication on a pool, the deduplication system data volume (DSD volume) for the pool is created. The deduplication system data volume is used exclusively by the storage system to manage the data deduplication function. The deduplication system data volume for a pool is deleted automatically when you disable the Capacity Saving setting for the pool or delete the pool.

The following lists the requirements for the deduplication system data volume.

Volume type: DP-VOL (V-VOL)

Emulation type: OPEN-V

Number per pool: One deduplication system data volume per pool (fixed)

Volume capacity: 40 TB (fixed)

Path definition: Not available

LDEV format: Available.

Warning: Format a deduplication system data volume only when you want to delete all deduplication-enabled DP-VOLs in the associated pool. After the deduplication system data volume has been formatted, all deduplication-enabled DP-VOLs assigned to the pool are not usable and must be formatted and deleted.

When you format a deduplication system data volume, you must specify only one deduplication system data volume and no other volumes in the Format LDEVs window.

Resource group: A deduplication system data volume and its associated pool volumes must be in the same resource group.

Cache management devices: Each deduplication system data volume uses 14 cache management devices.

Requirements for increasing DP-VOL capacity

You can increase DP-VOL capacity up to 256 TB. To notify the host that the DP-VOL capacity has been increased, make sure host mode option 40 is enabled. Processing differs as follows, depending on the value of host mode option 40:

• When host mode option 40 is not enabled, the host is not notified that the DP-VOL capacity has been increased. Therefore, the DP-VOL data has to be read again by the host after the capacity is increased.
• When host mode option 40 is enabled, the host is notified that the DP-VOL capacity has increased. If the operating system cannot recognize the increased capacity, the DP-VOL data has to be read again by the host.

The following requirements are important when increasing the DP-VOL capacity:
• The DP-VOL to be increased is not shared with a software product that does not allow increasing the volume capacity (see Interoperability of DP-VOLs and pool-VOLs, below).
• The DP-VOL is not undergoing LDEV formatting.
• The DP-VOL does not have data direct mapping enabled.
• The DP-VOL is not a deduplication system data volume.
• The capacity to be added to the DP-VOL must be specified within the range indicated below LDEV Capacity in the Expand V-VOLs window.
• The pool related to the DP-VOLs to be increased is in any one of the following statuses:
○ Normal
○ Exceeding the subscription limit threshold
○ In progress of pool capacity shrinking

Caution: While increasing DP-VOL capacity, do not perform the following operations; likewise, while these operations are in progress, do not increase DP-VOL capacity.
• Operations using Virtual LUN
• Operations using Cache Residency Manager
• Creating DP-VOLs
• Restoring pools
• Deleting DP-VOLs
• Operations to increase the DP-VOL capacity in another instance of CCI
• Maintenance of your storage system

After increasing DP-VOL capacity, refresh the display and confirm that the DP-VOL capacity is increased. If the DP-VOL capacity is not increased, wait a while, refresh the display again, and confirm that the capacity is increased. If you perform an operation without making sure that the DP-VOL capacity is increased, operations from Device Manager - Storage Navigator might fail.

If one of the following operations is being performed, the DP-VOL capacity might not be increased:
• Volume Migration
• Configuration change of a journal used by Universal Replicator
• Quick Restore by ShadowImage


Estimating the required capacity of pool-VOLs with system area in a pool with data direct mapping enabled

If you want to expand a pool for which data direct mapping is enabled, you must free up space in the pool. Make sure that the estimated capacity of free space is available before expanding the pool.

Use the following formulas to estimate the capacity of free space required in the pool:

• Formula 1:
Required free space to add one external volume with the data direct mapping attribute [MB] = (ceil(pool-VOL capacity [MB] / 3,145,548 MB) × 4 pages × 42 MB) + ((ceil(pool-VOL capacity [MB] / 42 MB) - floor(pool-VOL capacity [MB] / 42 MB)) × 42 MB)

• Formula 2:
Required capacity of the pool-VOL with system area in one pool with the data direct mapping attribute [MB] = total of the values calculated by Formula 1 for each volume + management area (4,200 MB) + 42 MB

ceil( ) indicates that the enclosed value must be rounded up to the nearest whole number. floor( ) indicates that the enclosed value must be rounded down to the nearest whole number.

Note: A DP-VOL with the data direct mapping attribute uses the following capacities:
• Mapped capacity: the multiple of 42 MB within the capacity of the pool volume, plus the capacity for one page (the part of the capacity that is not a multiple of 42 MB).
• Control information: 168 MB is used per 3,145,548 MB.

The pool-VOL with the system area contains the one-page capacity and the control information.
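As a worked illustration of Formula 1 and Formula 2, the following Python sketch (hypothetical helper names) computes the required free space for a list of external volumes:

import math

PAGE_MB = 42
CONTROL_UNIT_MB = 3_145_548
MGMT_AREA_MB = 4_200

def formula1_free_space_mb(pool_vol_mb):
    """Formula 1: free space needed to add one external volume with the
    data direct mapping attribute as a pool-VOL."""
    control_pages = math.ceil(pool_vol_mb / CONTROL_UNIT_MB) * 4 * PAGE_MB
    # One extra page when the capacity is not an exact multiple of 42 MB:
    fractional_page = (math.ceil(pool_vol_mb / PAGE_MB)
                       - math.floor(pool_vol_mb / PAGE_MB)) * PAGE_MB
    return control_pages + fractional_page

def formula2_system_area_mb(pool_vol_capacities_mb):
    """Formula 2: required capacity of the pool-VOL with system area for one pool."""
    return (sum(formula1_free_space_mb(c) for c in pool_vol_capacities_mb)
            + MGMT_AREA_MB + PAGE_MB)

# Two external volumes of 1,000,000 MB and 500,000 MB:
print(formula2_system_area_mb([1_000_000, 500_000]))  # 4662 MB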

V-VOL page reservation requirement

V-VOL full allocation can be performed only within the range below the depletion threshold size of the pool. If the capacity of the V-VOLs is larger than the depletion threshold size, the full allocation operation is rejected.

Note: If the pool uses pool volumes assigned from parity groups with accelerated compression enabled, the V-VOL full allocation function cannot be defined.

The reserved page capacity for each pool can be calculated by the following formula. In the formula, the value enclosed in ceil( ) must be rounded up to the nearest whole number.

Reserved capacity for each pool [blocks] = ceil(CV capacity of V-VOL [blocks] / 86,016) × 86,016 + ceil(CV capacity of V-VOL [blocks] / 6,442,082,304) × 4 × 86,016 - used capacity of V-VOL [blocks]
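The following Python sketch (hypothetical helper name) is a direct transcription of this formula; capacities are in 512-byte blocks, and 86,016 blocks correspond to one 42-MB page:

import math

PAGE_BLOCKS = 86_016            # one 42-MB page = 86,016 blocks of 512 bytes
CONTROL_UNIT_BLOCKS = 6_442_082_304

def reserved_capacity_blocks(cv_capacity_blocks, used_capacity_blocks):
    """Reserved page capacity for full allocation of one V-VOL, in blocks."""
    pages = math.ceil(cv_capacity_blocks / PAGE_BLOCKS) * PAGE_BLOCKS
    control = math.ceil(cv_capacity_blocks / CONTROL_UNIT_BLOCKS) * 4 * PAGE_BLOCKS
    return pages + control - used_capacity_blocks

# A 100-GiB V-VOL (209,715,200 blocks) with no pages allocated yet:
print(reserved_capacity_blocks(209_715_200, 0))  # 210137088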

Operating system and file system capacity

When initializing a DP-VOL, operating systems and file systems consume some Dynamic Provisioning pool space. Some combinations initially take up little pool space, while other combinations take as much pool space as the virtual capacity of the DP-VOL.

The following table shows the effects of some combinations of operating system and file system. For more information, contact your service representative.

• Windows Server 2003 and Windows Server 2008*, NTFS: Writes metadata to the first block. (O) Pool capacity consumed: Small (one page). If file updates are repeated, the allocated capacity increases as files are updated (overwritten), so the effectiveness of reducing pool capacity consumption decreases.

• Linux, XFS: Writes metadata at Allocation Group Size intervals. (O) Pool capacity consumed depends on the allocation group size: approximately [DP-VOL size] × [42 MB / Allocation Group Size]. (See the example after this table.)

• Linux, Ext2 and Ext3: Writes metadata in 128-MB increments. (O) Pool capacity consumed: about 33% of the size of the DP-VOL. The default block size for these file systems is 4 KB, which results in 33% of the DP-VOL acquiring DP pool pages. If the file system block size is changed to 2 KB or less, DP-VOL page consumption becomes 100%.

• Solaris, UFS: Writes metadata in 52-MB increments. (X) Pool capacity consumed: size of the DP-VOL.

• Solaris, VxFS: Writes metadata to the first block. (O) Pool capacity consumed: Small (one page).

• AIX, JFS: Writes metadata in 8-MB increments. (X) Pool capacity consumed: size of the DP-VOL. If you change the Allocation Group Size setting when you create the file system, the metadata can be written at a maximum interval of 64 MB; approximately 65% of the pool is used at the higher group size setting.

• AIX, JFS2: Writes metadata to the first block. (O) Pool capacity consumed: Small (one page).

• AIX, VxFS: Writes metadata to the first block. (O) Pool capacity consumed: Small (one page).

• HP-UX, JFS (VxFS): Writes metadata to the first block. (O) Pool capacity consumed: Small (one page).

• HP-UX, HFS: Writes metadata in 10-MB increments. (X) Pool capacity consumed: size of the DP-VOL.

Explanatory notes:
O: Indicates an effective reduction of pool capacity.
X: Indicates no effective reduction of pool capacity.

* See Formatting LDEVs in a Windows environment on page 111.
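As a worked example of the XFS approximation above, the following Python sketch (hypothetical helper name) estimates the initial pool consumption from the DP-VOL size and the allocation group size:

PAGE_MB = 42

def xfs_initial_consumption_mb(dpvol_mb, allocation_group_mb):
    """Approximate pool space consumed when XFS initializes a DP-VOL: one 42-MB
    page is touched per allocation group, so consumption is roughly
    DP-VOL size * (42 MB / allocation group size)."""
    return dpvol_mb * (PAGE_MB / allocation_group_mb)

# A 1-TiB DP-VOL with 1-GiB allocation groups initially consumes about 42 GiB:
print(xfs_initial_consumption_mb(1_048_576, 1_024))  # 43008.0 MB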

Using Dynamic Provisioning, Dynamic Tiering, or active flash with other software products

Interoperability of DP-VOLs and pool-VOLs

DP-VOLs and pool-VOLs can be used in conjunction with other software products with certain limitations and restrictions. The following table lists the software products and indicates the operations that are permitted and not permitted for each product.

Cache Residency Manager (Performance Guide)
Permitted: Not applicable.
Not permitted: Performing operations on DP pool-VOLs or DP-VOLs.


Thin Image (Hitachi Thin Image User Guide)
Permitted: Using a V-VOL as a Thin Image primary volume (P-VOL) or secondary volume (S-VOL). The maximum total number of pools per storage system is 128; Thin Image pool limits are reduced by the number of Dynamic Provisioning pools and Dynamic Tiering pools.
Not permitted:
• Using a DP-VOL as a Thin Image pool-VOL.
• Using a Dynamic Provisioning or Dynamic Tiering pool-VOL as a Thin Image P-VOL, S-VOL, or pool-VOL.
• Increasing the capacity of a DP-VOL used by Thin Image. This applies when a DP-VOL of Dynamic Provisioning, Dynamic Tiering, or active flash is used as the P-VOL or S-VOL of a Thin Image pair.
• Reclaiming zero pages of a V-VOL used by Thin Image. This applies when a DP-VOL of Dynamic Provisioning, Dynamic Tiering, or active flash is used as the P-VOL or S-VOL of a Thin Image pair.
• Using an external volume with data direct mapping enabled as a Thin Image pair's P-VOL, S-VOL, or pool-VOL.
• Using an LDEV with accelerated compression enabled as a Thin Image pair's P-VOL or S-VOL.
• Using a deduplication system data volume as a Thin Image pair's P-VOL or S-VOL.

Data Retention Utility (Provisioning Guide)
Permitted: Performing operations on DP-VOLs.
Not permitted:
• Performing operations on DP pool-VOLs.
• Performing operations on an external volume with data direct mapping enabled.
• Performing operations on LDEVs with accelerated compression enabled.
• Performing operations on a deduplication system data volume.

global-active device (Global-Active Device User Guide)
Permitted: Using a DP-VOL as a global-active device primary volume (P-VOL) or secondary volume (S-VOL).
Not permitted:
• Using a DP-VOL as a quorum disk.
• Using a pool-VOL as a global-active device P-VOL or S-VOL.
• Increasing the capacity of a DP-VOL used by global-active device.
• Using an external volume with data direct mapping enabled as a quorum disk.
• Using a DP-VOL with data direct mapping enabled as a quorum disk.
• Using a deduplication system data volume as a global-active device P-VOL or S-VOL.


LUN Manager (Provisioning Guide), LUN management (Hitachi Command Suite User Guide and Provisioning Guide), and LUN Security (Provisioning Guide)
Permitted: Performing operations on DP-VOLs.
Not permitted:
• Performing operations on DP pool-VOLs.
• Performing operations on an external volume with data direct mapping enabled.
• Performing operations on LDEVs with accelerated compression enabled.
• Performing operations on a deduplication system data volume.

ShadowImage (Hitachi ShadowImage® User Guide)
Permitted: Using a DP-VOL as a ShadowImage primary volume (P-VOL) or secondary volume (S-VOL).
Not permitted:
• Using a pool-VOL as a ShadowImage P-VOL or S-VOL.
• Increasing the capacity of a DP-VOL used by ShadowImage.
• Reclaiming zero pages of a DP-VOL (whether this is possible is determined by the pair status; see ShadowImage pair status for reclaiming zero pages on page 142).
• Using a deduplication system data volume as a ShadowImage P-VOL or S-VOL.

TrueCopy (Hitachi TrueCopy® User Guide)
Permitted: Using a DP-VOL as a TrueCopy primary volume (P-VOL) or secondary volume (S-VOL).
Not permitted:
• Using a pool-VOL as a TrueCopy P-VOL or S-VOL.
• Increasing the capacity of a DP-VOL used as a P-VOL or S-VOL of a TrueCopy pair.
• Using a deduplication system data volume as a TrueCopy P-VOL or S-VOL.

Universal Replicator (Hitachi Universal Replicator User Guide)
Permitted: Using a DP-VOL as a Universal Replicator primary volume (P-VOL), secondary volume (S-VOL), or journal volume. However, the journal volume must be a DP-VOL that has the OPEN-V emulation type.
Not permitted:
• Using a DP-VOL as a journal volume that has a mainframe emulation type.
• Using a DP pool-VOL as a Universal Replicator P-VOL, S-VOL, or journal volume.
• Increasing the capacity of a DP-VOL used as a P-VOL or S-VOL of a Universal Replicator pair.
• Reclaiming zero pages of a DP-VOL used as a journal volume.
• Using an external volume with data direct mapping enabled as a journal volume.
• Using a DP-VOL with data direct mapping enabled as a journal volume.
• Using a DP-VOL with capacity saving enabled (DRD volume) as a journal volume.
• Using a deduplication system data volume (DSD volume) as a Universal Replicator P-VOL or S-VOL.

Universal Volume Manager (Hitachi Universal Volume Manager User Guide)
Permitted: Using Universal Volume Manager volumes as pool-VOLs.
Not permitted:
• Increasing the capacity of a DP-VOL that is mapped through Universal Volume Manager. If you try to increase the capacity of such a DP-VOL with the conventional LDEV operation, the capacity of the DP-VOL will not be changed. In this case, remove the mapping between the DP-VOL and Universal Volume Manager, increase the capacity of the external volume used as a pool-VOL, and then map the DP-VOL through Universal Volume Manager again.
• Setting the data direct mapping attribute on a DP-VOL with capacity saving enabled (DRD volume).
• Setting the data direct mapping attribute on a deduplication system data volume (DSD volume).
• Enabling capacity saving on a DP-VOL from a pool with Universal Volume Manager pool-VOLs.

Virtual LUN (Provisioning Guide)
Permitted: Registering Virtual LUN volumes in Dynamic Provisioning pools.
Not permitted:
• Performing Virtual LUN operations on volumes that are already registered in a DP pool.

Virtual Partition Manager (Performance Guide)
Permitted: Performing operations on DP-VOLs and pool-VOLs.
Not permitted: Not applicable.

Volume Shredder (Hitachi Volume Shredder User Guide)
Permitted: Use on DP-VOLs.
Not permitted:
• Using on pool-VOLs.
• Performing operations on an LDEV that has accelerated compression enabled.
• Increasing the capacity of a DP-VOL used by Volume Shredder.
• Reclaiming zero pages of a V-VOL used by Volume Shredder.
• Performing operations on a DP-VOL with capacity saving enabled (DRD volume).
• Performing operations on a deduplication system data volume.

ShadowImage pair status for reclaiming zero pages

You can use this table to determine whether reclaiming zero pages is possible for a particular pair status. The same settings apply whether the operation is performed from Hitachi Device Manager - Storage Navigator or from Command Control Interface.

Pair status: Reclaim zero pages
• SMPL (status of an unpaired volume): Enabled
• COPY(PD)/COPY: Disabled
• PAIR: Disabled
• COPY(SP): Disabled
• PSUS(SP)/PSUS: Disabled
• PSUS: Enabled
• COPY(RS)/COPY: Disabled
• COPY(RS-R)/RCPY: Disabled
• PSUE: Disabled

TrueCopy

You can use Dynamic Provisioning, Dynamic Tiering, or active flash in combination with TrueCopy to replicate V-VOLs.

The following table lists the supported combinations when the TrueCopy primary volume and secondary volume are also V-VOLs.

• TrueCopy P-VOL: DP-VOL; TrueCopy S-VOL: DP-VOL. Supported.
• TrueCopy P-VOL: DP-VOL; TrueCopy S-VOL: normal (ordinary) volume¹. Supported.
• TrueCopy P-VOL: normal (ordinary) volume¹; TrueCopy S-VOL: DP-VOL. Supported. Note, however, that this combination consumes the same amount of pool capacity as the original normal volume (primary volume).

Note:
1. Normal volumes include the internal volumes and the external volumes that are mapped to volumes of an external storage system using Universal Volume Manager. For more information on external volumes, see the Hitachi Universal Volume Manager User Guide.

You cannot specify a Dynamic Provisioning or Dynamic Tiering pool-VOL as a primary volume or secondary volume.

Universal Replicator

You can use Dynamic Provisioning, Dynamic Tiering, or active flash in combination with Universal Replicator to replicate DP-VOLs.

The following table lists the supported combinations of Universal Replicator volumes with Dynamic Provisioning and Dynamic Tiering volumes.

• UR P-VOL: DP-VOL; UR S-VOL: DP-VOL; UR journal volume: DP-VOL with the OPEN-V emulation type². Supported.
• UR P-VOL: DP-VOL; UR S-VOL: normal (ordinary) volume¹; UR journal volume: DP-VOL with the OPEN-V emulation type². Supported.
• UR P-VOL: normal (ordinary) volume¹; UR S-VOL: DP-VOL; UR journal volume: DP-VOL with the OPEN-V emulation type². Supported. Note, however, that this combination consumes the same amount of pool capacity as the original normal volume (primary volume).

Notes:
1. Normal volumes include the internal volumes and the external volumes that are mapped to volumes of an external storage system using Universal Volume Manager. For more information on external volumes, see the Hitachi Universal Volume Manager User Guide.
2. A DP-VOL that has a mainframe emulation type cannot be used.

You cannot specify a Dynamic Provisioning or Dynamic Tiering pool-VOL as a primary volume, secondary volume, or journal volume.

ShadowImage

You can use Dynamic Provisioning, Dynamic Tiering, or active flash in combination with ShadowImage to replicate DP-VOLs.

The following table lists the supported combinations when the ShadowImage primary volume and secondary volume are also DP-VOLs.

• ShadowImage P-VOL: DP-VOL; ShadowImage S-VOL: DP-VOL. Supported.
• ShadowImage P-VOL: DP-VOL; ShadowImage S-VOL: normal (ordinary) volume¹. Supported. The Quick Restore function is unavailable.
• ShadowImage P-VOL: normal (ordinary) volume¹; ShadowImage S-VOL: DP-VOL. Supported. Note, however, that this combination consumes the same amount of pool capacity as the normal volume. The Quick Restore function is unavailable.

Note:
1. Normal volumes include the internal volumes and the external volumes that are mapped to volumes of an external storage system using Universal Volume Manager. For more information on external volumes, see the Hitachi Universal Volume Manager User Guide.

You cannot specify a Dynamic Provisioning or Dynamic Tiering pool-VOL as a primary volume or secondary volume.

Thin Image

When using Dynamic Provisioning, Dynamic Tiering, or active flash together with Thin Image in a storage system, note the following:
• A pool for Thin Image cannot be used in conjunction with Dynamic Provisioning.
• Up to 128 pools in total can be used for Dynamic Provisioning (including Dynamic Tiering) and Thin Image.
• A pool-VOL cannot be shared between Dynamic Provisioning (including Dynamic Tiering) and Thin Image.


Virtual Partition Manager CLPR setting

If the DP-VOLs and pool-VOLs related to the same pool are assigned to CLPRs, we recommend assigning the DP-VOLs and pool-VOLs of the same pool to the same CLPR.

For detailed information about CLPRs, see the Performance Guide.

Volume Migration

For more information, see the Hitachi Volume Migration User Guide.

Resource Partition Manager

See Resource group rules, restrictions, and guidelines on page 60 for the conditions on resources that are necessary for the operation of other software products and the precautions required when using Resource Partition Manager.

Dynamic Provisioning workflow

The following diagram shows the workflow for setting up Dynamic Provisioning on the storage system.

Use Device Manager - Storage Navigator or Command Control Interface to create pools and DP-VOLs.

Caution: If you delete a pool, its pool-VOLs (LDEVs) will be blocked. Blocked volumes must be formatted before use.

Caution: If the V-VOL data is migrated through the host, unallocated areas of the volume might be copied as well. The used capacity of the pool increases after the data migration because the areas that were unallocated before the data migration become allocated areas as a result of the migration.

To migrate the V-VOL data:
1. Copy all data of the V-VOLs from the source to the target.
2. Perform the operation to reclaim zero pages.
Perform this procedure for each V-VOL. When data migration is done on a file-by-file basis, perform the operation to reclaim zero pages if necessary.

To restore the backup data:
1. Restore the V-VOL data.
2. Perform the operation to reclaim zero pages.
Perform this procedure for each V-VOL.


Migrating V-VOL data

Procedure

1. Copy all data of the V-VOLs from the source to the target.
2. Perform the operation to reclaim zero pages.

Perform this procedure for each V-VOL. When data migration is done on a file-by-file basis, perform the operation to reclaim zero pages if necessary. In the case of a volume copy or a physical copy, the operation to reclaim zero pages is unnecessary.

Restoring backup data

You can restore the V-VOL backup data. You must complete the procedure for each V-VOL.

Procedure

1. Restore the V-VOL data.
2. Perform the operation to reclaim zero pages.

Dynamic Tiering and active flash

About tiered storage

In a tiered storage environment, storage tiers can be configured to accommodate different categories of data. A tier is a group of storage media (pool volumes) in a DP pool. Each tier is determined by a single storage media type. A storage tier can be one type of data drive, including SSD, SAS, or external volumes. Media with high-speed performance make up the upper tiers; media with low-speed response make up the lower tiers. Up to three tiers can coexist in each Dynamic Tiering pool.

Categories of data may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Using different types of storage tiers helps reduce storage costs and improve performance.

Because assigning data to particular media can be an ongoing and complex activity, Dynamic Tiering software automatically manages the process based on user-defined policies.

As an example of the additional implementation of tiered storage, tier 1 data (such as mission-critical or recently accessed data) might be stored on expensive, high-quality media such as double-parity RAID (redundant array of independent disks). Tier 2 data (such as financial or seldom-used data) might be stored on less expensive storage media.


Tier monitoring and data relocation

Dynamic Tiering uses tiers to manage data storage. It classifies the specified drives in the pool into tiers (a storage hierarchy). Up to three tiers can be defined in a pool, depending on the processing capacity of the data drives. Tiering allocates more frequently accessed data to the upper tier and less frequently accessed data, stored for a long period of time, to the lower tier.

Multi-tier pool

With Dynamic Tiering, you can enable the Multi-Tier pool option for an existing pool. The default is to allow tier relocation for each DP-VOL. Only the DP-VOLs for which tier relocation is enabled are subject to calculation of the tier range value, and tier relocation is performed on them. If tier relocation is disabled for all DP-VOLs in a pool, tier relocation is not performed.

The following figure illustrates the relationship between a multi-tier pool and tier relocation.

Example of adding a tier

If the added pool-VOL is a different media type, a new tier is created in the pool. The tier is added at the appropriate position according to its performance. The following figure illustrates the process of adding a tier.

Example of deleting a tier

If a tier no longer has any pool-VOLs after you delete them, the tier is deleted from the pool. The following figure illustrates deleting a tier.


Tier monitoring and relocation cycles

Performance monitoring and tier relocation can be set to execute in one of two execution modes: Auto and Manual. You can set up execution modes, or switch between modes, by using either Hitachi Device Manager - Storage Navigator or Command Control Interface.

In Auto execution mode, monitoring and relocation are continuous and automatically scheduled. In Manual execution mode, the following operations are initiated manually:
• Start monitoring
• Stop monitoring and recalculate tier range values
• Start relocation
• Stop relocation

In both execution modes, relocation of data is automatically determined based on monitoring results. The settings for these execution modes can be changed nondisruptively while the pool is in use.

Auto execution mode

Auto execution mode performs monitoring and tier relocation based on information collected by monitoring at a specified constant frequency: every 0.5, 1, 2, 4, or 8 hours. All auto execution mode cycle frequencies have a starting point at midnight (00:00). For example, if you select a 1-hour monitoring period, the start times would be 00:00, 01:00, 02:00, 03:00, and so on.

As shown in the following table, the 24-hour monitoring cycle allows you to specify the times of day to start and stop performance monitoring. The 24-hour monitoring cycle does not have to start at midnight. Tier relocation begins at the end of each cycle.

Monitoring cycle (hours): Start times / Finish times
• 0.5: Every 0.5 hours from 00:00 (for example, 00:00, 00:30, and 01:00) / 0.5 hours after the start time
• 1: Every hour from 00:00 (for example, 00:00, 01:00, and 02:00) / 1 hour after the start time
• 2: Every 2 hours from 00:00 (for example, 00:00, 02:00, and 04:00) / 2 hours after the start time
• 4: Every 4 hours from 00:00 (for example, 00:00, 04:00, and 08:00) / 4 hours after the start time
• 8: Every 8 hours from 00:00 (for example, 00:00, 08:00, and 16:00) / 8 hours after the start time
• 24 (monitoring time period can be specified): Specified time / Specified time

If the setting of the monitoring cycle is changed, performance monitoring begins at the new start time. The collection of monitoring information and any tier relocation operations already in progress are not interrupted when the setting is changed.

Example 1: If the monitoring cycle is changed from 1 hour to 4 hours at 01:30 AM, the collection of monitoring information and the tier relocation in progress at 01:30 AM continue. At 02:00 AM and 03:00 AM, however, monitoring information is not collected and tier relocation is not performed. From 04:00 AM, the collection of monitoring information and tier relocation operations start again and are then performed at 4-hour intervals.

Example 2: If the monitoring cycle is changed from 4 hours to 1 hour at 01:30 AM, the collection of monitoring information and the tier relocation in progress at 01:30 AM continue. From 04:00 AM, the collection of monitoring information and tier relocation operations start again and are then performed at 1-hour intervals.
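Both examples are consistent with a midnight-anchored boundary rule: the collection in progress runs to the end of its old cycle, and collection resumes at the next boundary of the new cycle. The following Python sketch reproduces both examples; the helper names are hypothetical, and any scheduling behavior beyond these two documented examples is an assumption:

from datetime import datetime, timedelta

def resume_time(change_time, old_cycle_h, new_cycle_h):
    """When the monitoring-cycle setting changes, the collection already in
    progress runs to the end of its old cycle; collection then resumes at the
    next midnight-anchored boundary of the new cycle."""
    midnight = change_time.replace(hour=0, minute=0, second=0, microsecond=0)

    def next_boundary(after, cycle_h):
        elapsed = (after - midnight).total_seconds()
        n = int(elapsed // (cycle_h * 3600)) + 1
        return midnight + timedelta(hours=n * cycle_h)

    in_progress_end = next_boundary(change_time, old_cycle_h)
    # If the old cycle ends exactly on a new-cycle boundary, resume there:
    if (in_progress_end - midnight).total_seconds() % (new_cycle_h * 3600) == 0:
        return in_progress_end
    return next_boundary(in_progress_end, new_cycle_h)

change = datetime(2016, 10, 1, 1, 30)
print(resume_time(change, 1, 4))  # Example 1: 2016-10-01 04:00:00
print(resume_time(change, 4, 1))  # Example 2: 2016-10-01 04:00:00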

In auto execution mode, the collection of monitoring data and tier relocation operations are performed in parallel in the next cycle. Data from these parallel processes is stored in two separate fields:
• Data being collected while monitoring is in progress in the next cycle
• Fixed monitoring information used in the tier relocation

Manual execution mode

You can start and stop performance monitoring and tier relocation at any time. You should keep the duration of performance monitoring to less than 7 days (168 hours). If performance monitoring exceeds 7 days, monitoring stops automatically.

Manual execution mode starts and ends monitoring and relocation at the time the command is issued. You can use scripts, which provide the flexibility to control monitoring and relocation tasks based on a schedule for each day of the week.

In manual execution mode, the next monitoring cycle can be started with the collection of monitoring data and tier relocation operations performed in parallel. Data from these parallel processes is stored in two separate fields:
• Data being collected while monitoring is in progress in the next cycle
• Fixed monitoring information used in the tier relocation

The following figure illustrates the workflow from the collection of monitoring data to tier relocation in manual execution mode.

Case 1: If the second collection of monitoring information finishes during the first tier relocation, the latest monitoring information is the second collection. In that case, the first collection of monitoring information is referenced only until the first tier relocation has completed.


Case 2: When tier relocation is performed with the first collection of monitoring information, the second collection of monitoring information can be performed. However, the third collection cannot be started: because only two fields are available to store collected monitoring information, the third collection would overwrite data that is still in use.

In that case, the third collection of monitoring information starts after the first tier relocation is stopped or has completed.

The collection of monitoring information also does not start under these conditions:
• While the second tier relocation is performed, the fourth collection of monitoring information cannot be started.
• While the third tier relocation is performed, the fifth collection of monitoring information cannot be started.

Under such conditions, two cycles of monitoring information cannot be collected continuously while tier relocation is performed.

The following figure illustrates the third collection of monitoring information while tier relocation is performed.


Tier relocation workflow

The following shows the flow of allocating new pages and migrating them to the appropriate tier. The combination of determining the appropriate storage tier and migrating the pages to that tier is referred to as tier relocation.


Explanation of the relocation flow:

1. Allocate pages and map them to DP-VOLs.
Pages are allocated and mapped to DP-VOLs on an on-demand basis. Page allocation occurs when a write is performed to an area of any DP-VOL that does not already have a page mapped to that location. Normally, a free page is selected for allocation from an upper tier that has a free page. If the capacity of the upper tier is insufficient for the allocation, the pages are allocated to the nearest lower tier. A DP-VOL set to a tier policy is assigned a new page based on the tier policy setting. The relative tier for new page allocations can be specified during operations to create and edit LDEVs. If the capacity of all the tiers is insufficient, an error message is sent to the host.

2. Gather I/O load information for each page.
Performance monitoring gathers monitoring information for each page in a pool to determine the physical I/O load per page. I/Os associated with page relocation, however, are not counted.

3. Create the frequency distribution graph.
The frequency distribution graph, which shows the relationship between I/O counts (I/O load) and capacity (total number of pages), is created. You can use the View Tier Properties window to view this graph. The vertical scale of the graph indicates ranges of I/Os per hour, and the horizontal scale indicates the capacity that received that I/O level. Note that the horizontal scale is cumulative.

Caution: When the number of I/Os is counted, I/Os satisfied by cache hits are not counted. Therefore, the number of I/Os counted by performance monitoring differs from the number of I/Os from the host. The number of I/Os per hour is shown in the graph. If the monitoring time is less than an hour, the number of I/Os shown in the graph might be higher than the actual number of I/Os.

The Monitoring mode setting of Period or Continuous influences the values shown on the performance graph. Period mode reports the I/O data from the most recent completed monitoring cycle on the performance graph. Continuous mode reports a weighted average of I/O data that uses recent monitoring cycle data along with historical data.

4. Determine the tier range values.
Each page is allocated to the appropriate tier according to the performance monitoring information. The tier is determined as follows:

a. Determine the tier boundary.
The tier range value of a tier is calculated using the frequency distribution graph. This value acts as a boundary that separates tiers. The pages with higher I/O load are allocated to the upper tier in sequence. The tier range is defined as the lowest I/Os per hour (IOPH) value at which the total number of stored pages matches the capacity of the target tier (less some buffer percentage), or the IOPH value that reaches the maximum I/O load that the tier should process. The maximum I/O load that should be targeted to a tier is the limit performance value, and the ratio of I/O to the limit performance value of a tier is called the performance utilization percent. A performance utilization of 100% indicates that the target I/O load to a tier is beyond the forecasted limit performance value.

Caution: The limit performance value is proportional to the capacity of the pool volumes used in the tier. The total capacity of the parity group should be used for a pool to further improve the limit performance.

b. Determine the tier delta values.
The tier range values are set as the lower limit boundary of each tier. The delta values are set above and below the tier boundaries (+10 to 20%) to prevent pages from being migrated unnecessarily. If all pages subject to tier relocation can be contained in the upper tier, both the tier range value (lower limit) and the delta value will be zero.

c. Determine the target tier of a page for relocation.
The IOPH recorded for the page is compared against the tier range value to determine the tier to which the page moves.

5. Migrate the pages.
The pages are moved to the appropriate tier. After migration, the page usage rates are averaged out across all tiers. I/Os that occur during page migration are not monitored.
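The following Python sketch is a simplified illustration of steps 4a and 4c: pages are ranked by IOPH, each tier is filled to its capacity minus the relocation buffer, and the lowest IOPH stored in a tier becomes its range value. It deliberately ignores limit performance values and delta values, and the helper names are hypothetical:

def tier_range_values(page_ioph, tier_capacity_pages, buffer_pct=2):
    """Simplified: rank pages by IOPH, fill each tier to capacity minus the
    relocation buffer, and record the lowest IOPH stored in each tier as its
    range value (the lowest tier's lower limit is 0)."""
    pages = sorted(page_ioph, reverse=True)
    ranges, idx = [], 0
    for capacity in tier_capacity_pages:
        usable = int(capacity * (100 - buffer_pct) / 100)
        tier_pages = pages[idx:idx + usable]
        idx += len(tier_pages)
        ranges.append(tier_pages[-1] if tier_pages else 0)
    if ranges:
        ranges[-1] = 0  # the lowest tier accepts everything that remains
    return ranges

# Three tiers of 100/200/700 pages; 1,000 pages with IOPH 1000 down to 1:
loads = [1000 - i for i in range(1000)]
print(tier_range_values(loads, [100, 200, 700]))  # [903, 707, 0]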

Related references

• Monitoring modes on page 169

Tier relocation rules, restrictions, and guidelines

Rules
• Performance monitoring, in both Auto and Manual execution modes, observes the pages that were allocated to DP-VOLs prior to the start of the monitoring cycle as well as the new pages allocated during the monitoring cycle. Pages that are not allocated during performance monitoring are not candidates for tier relocation.
• Tier relocation can be performed concurrently on up to eight pools. If more than eight pools are specified, relocation of the ninth pool starts after relocation of any of the first eight pools has completed.
• If Auto execution mode is specified, performance monitoring might stop from about one minute before to one minute after the start time of the next monitoring cycle.
• The amount of relocation varies per cycle. In some cases, the cycle might end before all relocation can be handled. If tier relocation does not finish completely within the cycle, relocation of the remaining pages is executed in the next cycle.
• Calculation of the tier range values is influenced by the capacity allocated to DP-VOLs with relocation disabled and by the buffer reserve percentages.


• While a pool-VOL is being deleted, tier relocation is not performed. After the pool-VOL deletion is completed, tier relocation starts.
• Frequency distribution is unavailable when there is no data provided by performance monitoring.
• While the frequency distribution graph is being created or the tier range values are being calculated, the frequency distribution graph is not available. The time required to determine the tier range values varies depending on the number of DP-VOLs and the total capacity; the maximum time is about 20 minutes.
• To balance the usage levels of all parity groups, rebalancing might be performed after several tier relocation operations. If rebalancing is in progress, the next cycle of tier relocation might be delayed. For details on rebalancing, see Rebalancing the usage level among parity groups on page 191.

Performance monitoring or tier relocation conditions

The following table describes the status of data collection, of the fixed monitoring information, and of tier relocation operations under various conditions. The latest fixed monitoring information is referenced when tiers are relocated.

Condition: Unallocated pages.
• Data collection in progress: Pages are not monitored.
• Fixed monitoring information used in tier relocation: No monitoring information on the pages.
• Tier relocation operations: Tiers of the pages are not relocated.
• Solution: Unnecessary. After the pages are allocated, monitoring and relocation are performed automatically.

Condition: Zero data is discarded during data monitoring.
• Data collection in progress: Monitoring on the pages is reset.
• Fixed monitoring information: Only the monitoring information on the pages is invalid.
• Tier relocation operations: Tiers of the pages are not relocated.
• Solution: Unnecessary. After the pages are allocated, monitoring and relocation are performed automatically.

Condition: V-VOL settings do not allow tier relocation.
• Data collection in progress: The volume is monitored.
• Fixed monitoring information: Monitoring information on the volume is valid.
• Tier relocation operations: If the tier relocation setting is disabled at the performance monitoring finish time, tiers of the volume are not relocated.
• Solution: N/A

Condition: V-VOLs are deleted.
• Data collection in progress: The volume is not monitored.
• Fixed monitoring information: Only the monitoring information on the volume is invalid.
• Tier relocation operations: Tier relocation of the volume is suspended.
• Solution: N/A

Condition: The execution mode is changed from Auto to Manual or vice versa.
• Data collection in progress: Suspended.
• Fixed monitoring information: Monitoring information collected before suspension is valid.
• Tier relocation operations: Suspended.
• Solution: Collect the monitoring information again if necessary.¹

Condition: The storage system is powered ON or OFF.
• Data collection in progress: Monitoring is suspended by powering OFF and is not resumed even after powering ON.¹
• Fixed monitoring information: Monitoring information collected during the previous cycle remains valid.
• Tier relocation operations: Tier relocation is suspended by powering OFF and is resumed after powering ON.
• Solution: Collect the monitoring information again if necessary.¹

Condition: Volume Migration is performed, or Quick Restore of ShadowImage is performed.
• Data collection in progress: The monitoring information of the volume is not collected at the present moment. In the next monitoring period, the monitoring information will be collected.
• Fixed monitoring information: Monitoring information is invalid and the volumes need to be monitored again.
• Tier relocation operations: Tier relocation of the volumes is suspended.
• Solution: Collect the monitoring information again if necessary.¹

Condition: The volume is an S-VOL of TrueCopy, global-active device, or Universal Replicator and the initial copy operation is performed.
• Data collection in progress: Monitoring information is collected continuously, but the monitoring of the volumes is reset.²
• Fixed monitoring information: No effect. The monitoring information collected during the previous cycle continues to be valid.
• Tier relocation operations: Tier relocation of the volumes is suspended.
• Solution: Collect the monitoring information again if necessary.¹

Condition: The number of tiers increases by adding pool-VOLs, the pool-VOLs of the tiers are switched by adding pool-VOLs,³ or the tier rank of an external LDEV is changed.
• Data collection in progress: Continued.
• Fixed monitoring information: Invalid, because the monitoring information was discarded. If monitoring is set to Continuous mode, the weighted data calculated by using the monitoring information from past periods is also discarded.
• Tier relocation operations: Suspended.
• Solution: Relocate tiers again.¹

Condition: Pool-VOLs are deleted.
• Data collection in progress: Continued.
• Fixed monitoring information: Temporarily invalid. The monitoring information is calculated again after the deletion of the pool-VOLs.⁴
• Tier relocation operations: Deleting the pool-VOL stops the tier relocation. The process resumes after the pool-VOL is deleted.
• Solution: N/A

Condition: A cache is blocked.
• Data collection in progress: Continued.
• Fixed monitoring information: No effect. The monitoring information collected during the previous cycle continues to be valid.
• Tier relocation operations: Suspended.
• Solution: After recovering the faulty area, relocate tiers again.¹

Condition: An LDEV (pool-VOL or V-VOL) is blocked.
• Data collection in progress: Continued.
• Fixed monitoring information: No effect. The monitoring information collected during the previous cycle continues to be valid.
• Tier relocation operations: Suspended.
• Solution: After recovering the faulty area, relocate tiers again.¹

Condition: The depletion threshold of the pool is nearly exceeded during relocation.
• Data collection in progress: Continued.
• Fixed monitoring information: No effect. The monitoring information collected during the previous cycle continues to be valid.
• Tier relocation operations: Suspended.
• Solution: Add pool-VOLs, then collect monitoring information and relocate tiers again.¹

Condition: The execution mode is Auto and the execution cycle ends during tier relocation.
• Data collection in progress: At the end time of the execution cycle, data monitoring stops.
• Fixed monitoring information: The monitoring information collected before performance monitoring stops is valid.
• Tier relocation operations: Suspended.
• Solution: Unnecessary. The relocation is performed automatically in the next cycle.

Condition: The execution mode is Manual and 7 days elapse after monitoring starts.
• Data collection in progress: Suspended.
• Fixed monitoring information: The monitoring information collected before suspension is valid.
• Tier relocation operations: Continued.
• Solution: Collect the monitoring information again if necessary.¹

Notes:
1. If the execution mode is Auto, or a script is used in Manual execution mode, information is monitored again and tiers are relocated automatically.
2. All pages of the S-VOL become unallocated, and the monitoring information of the volume is reset. After pages are newly allocated, the monitoring information is collected.
3. Example: Pool-VOLs of SAS15K are added to the following Configuration 1:
• Configuration 1 (before the change): Tier 1 is SSD, Tier 2 is SAS10K, and Tier 3 is SAS7.2K.
• Configuration 2 (after the change): Tier 1 is SSD, Tier 2 is SAS15K, and Tier 3 is SAS10K and SAS7.2K.
4. The monitoring information status is changed from invalid (INV) to calculating (PND). After the calculation is complete, the monitoring information status is changed from calculating (PND) to valid (VAL).

Buffer area of a tierDynamic Tiering uses buffer percentages to reserve pages for new pageassignments and allow the tier relocation process. Areas necessary forprocessing these operations are distributed corresponding to settings used byDynamic Tiering. The following describes how processing takes place tohandle the buffer percentages.

Buffer space: The following table shows the default rates (rate to capacity ofa tier) of buffer space used for tier relocation and new page assignments,listed by drive type.

Drive type buffer area for tierrelocation

buffer area for newpage assignment Total

SSD 2% 0% 2%

Non-SSD 2% 8% 10%

New page assignment: New pages are assigned based on a number ofoptional settings. Pages are then assigned to the next lower tier, leaving abuffer area (2% per tier by default) for tier relocation. After 98% of capacityof all tiers is assigned, the remaining 2% of the buffer space is assigned fromthe upper tier. The buffer space for tier relocation is 2% in all tiers.

The following illustrates the workflow of a new page assignment.


For a pool comprised of pool volumes from parity groups with accelerated compression enabled, parity group capacity equivalent to 20% of the FMC tier is used as the compression buffer area. When no free space other than the FMC tier is available, pages are assigned to this buffer area just before the capacity is depleted.

Setting external volumes for each tier

If you use external volumes as pool-VOLs, you can put the external volumes in tiers by setting the External LDEV Tier Rank for the external volumes. The External LDEV Tier Rank consists of three levels: High, Middle, and Low. The following examples describe how tiers may be configured:

Example 1: Configuring tiers by using external volumes only

Tier 1: External volumes (High)

Tier 2: External volumes (Middle)

Tier 3: External volumes (Low)

Example 2: Configuring tiers by combining internal volumes and external volumes

Tier 1: Internal volumes (SSD)

Tier 2: External volumes (High)

Tier 3: External volumes (Low)

You can set the External LDEV Tier Rank when creating the pool, when changing the pool capacity, or in the Edit External LDEV Tier Rank window. The following table shows the performance priority (from the top) of data drives.


Priority | Data drive type
1 | SSD
2 | SAS 15K rpm
3 | SAS 10K rpm
4 | SAS 7.2K rpm
5 | External volume* (High)
6 | External volume* (Middle)
7 | External volume* (Low)

*Displays as External Storage in the Drive Type/RPM.

Reserved pages for relocation operations: A small percentage of pages, normally 2%, is reserved per tier to allow relocation to operate. These reserved pages are the buffer spaces for tier relocation.

Tier relocation workflow: Tier relocation takes advantage of the buffer space allocated for tier relocation, as mentioned previously. Tier relocation is also performed to secure the space reserved in each tier for new page assignments; this area is called the buffer space for new page assignments. When tier relocation is performed, Dynamic Tiering reserves the buffer spaces for relocation and new page assignment.

During relocation, a tier may temporarily be assigned over 98% of its capacity, or well under the allowance for the buffer areas.

Example of required Dynamic Tiering cache capacity

The following cache capacity is required when the total capacity is 128 TB:
• Recommended capacity of cache memory for data: 28 GB
• Required capacity of cache memory for control information for using Dynamic Provisioning: 8 GB
• Required capacity of cache memory for Dynamic Tiering: 4 GB

Therefore, in this example, 40 GB of cache memory capacity is required.

Note that cache memory is installed in pairs, so the actual installed capacity is twice the required capacity. For example, if the required capacity for control information is 8 GB, the actual installed capacity is 16 GB.

To decrease the capacity of the cache memory for Dynamic Tiering, you must remove Dynamic Tiering.
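The following short Python sketch is a worked version of the arithmetic in this example. The per-component figures are the ones given above for a 128 TB pool, not a general sizing formula.

    # Worked version of the cache-sizing example above (128 TB total capacity).
    # The per-component figures come from this example, not a general formula.

    data_cache_gb = 28         # recommended cache for data
    dp_control_gb = 8          # control information for Dynamic Provisioning
    dt_control_gb = 4          # control information for Dynamic Tiering

    required_gb = data_cache_gb + dp_control_gb + dt_control_gb
    print(f"Required cache capacity: {required_gb} GB")                  # 40 GB

    # Cache memory is installed in pairs, so the installed capacity is
    # twice the required capacity of each component.
    print(f"Installed capacity for DP control: {dp_control_gb * 2} GB")  # 16 GB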

Execution modes for tier relocation


Execution modes when using Hitachi Device Manager - Storage Navigator

Dynamic Tiering performs tier relocation using one of two execution modes: Auto and Manual. You can switch between modes by using Hitachi Device Manager - Storage Navigator.

Auto execution mode

In Auto execution mode, the system automatically and periodically collects monitoring data and performs tier relocation. You can select an Auto execution cycle of 0.5, 1, 2, 4, or 8 hours, or a specified time.

The following illustrates tier relocation processing in a 2-hour Auto execution mode:

Manual execution mode

In Manual execution mode, you can manually collect monitoring data and relocate a tier. You can issue commands to manually:
1. Start monitoring.
2. Stop monitoring.
3. Perform tier relocation.

The following illustrates tier relocation processing in Manual execution mode:


Notes on performing monitoring
• You can collect the monitoring data even while performing the relocation.
• After stopping the monitoring, the tier range is automatically calculated.
• The latest available monitoring information, which is collected just before the relocation is performed, is used for the relocation processing.
• When the relocation is performed, the status of the monitor information must be valid.

Viewing monitor and tier relocation information using HDvM - SN

Information is displayed for the following items in the GUI windows:

Field: Monitoring Status
Windows: Pools window, Pool Volumes tab, View Pool Management Status window
Details: Displays the status of pool monitoring.
• In Progress: The monitoring is being performed.
• During Computation: The calculation is being processed.
In other cases, a hyphen (-) is displayed.

Field: Recent Monitor Data
Windows: Pools window, Pool Volumes tab
Details: Displays the latest monitoring data.
• If the monitoring data exists, the monitoring period is displayed. Example: 2010/11/15 00:00 - 2010/11/15 23:59
• If the monitoring data is being obtained, only the starting time is displayed. Example: 2010/11/15 00:00 -
• If the latest monitoring data does not exist, a hyphen (-) is displayed.

Field: Pool Management Task
Windows: Pools window, Pool Volumes tab
Details: Displays the pool management task being performed on the pool.
• Waiting for Relocation: The tier relocation process is waiting.
• Relocating: The tier relocation process is being performed.
For details about the relocation progress rate, check the tier relocation log.

Field: Pool Management Task (Status/Progress)
Window: View Pool Management Status window
Details: Displays the status of the pool management task being performed, the progress ratio of each V-VOL in the pool, and its average.
• Waiting for Relocation: The tier relocation process is waiting.
• Relocating: The tier relocation process is being performed.
For details about the relocation progress rate, check the tier relocation log.

Field: Relocation Result
Windows: Pools window, Pool Volumes tab, View Pool Management Status window
Details: Displays the status of the tier relocation processing.
• In Progress: The status of Pool Management Task is Waiting for Relocation or Relocating.
• Completed: The tier relocation operation is not in progress, or the tier relocation is complete.
• Uncompleted (n% relocated): The tier relocation is suspended at the indicated percentage of progress.
• Hyphen (-): The pool is not a Dynamic Tiering or Dynamic Tiering for Mainframe pool.

Field: Relocation Speed
Windows: Pools window, View Pool Management Status window, Create Pools window, Edit Pools window, Start Tier Relocation window, Stop Tier Relocation window
Details: Displays the tier relocation speed settings: 1(Slowest), 2(Slower), 3(Standard), 4(Faster), 5(Fastest).

Field: Relocation Priority
Windows: Pool Volumes tab, View Pool Management Status window
Details: Displays the relocation priority.
• Prioritized: The priority is set for the V-VOL.
• Blank: The priority is not set for the V-VOL.
• Hyphen (-): The V-VOL is not a Dynamic Tiering V-VOL, or the tier relocation function is disabled.

Field: Performance Graph
Window: View Tier Properties window
Details: The performance graph for the available monitor information is displayed in the View Tier Properties window.

Execution modes when using Command Control Interface

Manual execution mode

In Manual execution mode, you can manually collect monitoring data and relocate a tier. You can execute commands to do the following:
1. Start monitoring.
2. Stop monitoring.
3. Perform tier relocation.

The following illustrates tier relocation processing when in Manual execution mode:

Notes on performing monitoring
• You can collect the monitoring data even while performing the relocation.
• After stopping the monitoring, the tier range is automatically calculated.
• The latest available monitoring information, which is collected just before the relocation is performed, is used for the relocation processing.
• When the relocation is performed, the status of the monitor information must be valid.

Viewing monitor and tier relocation information using CCI

If the raidcom get dp_pool command is executed with the -key opt option specified, the monitoring information and tier relocation information are displayed. For details about the raidcom get dp_pool command, see the Command Control Interface Command Reference. Items are displayed as follows:
• STS: Displays the operational status of the performance monitor and the tier relocation.
  ○ STP: The performance monitor and the tier relocation are stopped.
  ○ RLC: The performance monitor is stopped. The tier relocation is operating.
  ○ MON: The performance monitor is operating. The tier relocation is stopped.
  ○ RLM: The performance monitor and the tier relocation are operating.
• DAT: Displays the status of the monitor information.
  ○ VAL: Valid.
  ○ INV: Invalid.
  ○ PND: Being calculated.
• R(%): Displays the progress percentage of tier relocation.
  ○ 0 to 99: When the value of STS is RLC or RLM, relocation is in progress. When the value of STS is STP or MON, relocation is suspended at the indicated percentage of progress.
  ○ 100: The relocation operation is not in progress, or the relocation is complete.
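As a convenience, a script can run this command and translate the STS and DAT values into the meanings listed above. The following Python sketch does so; the exact column layout of the raidcom output depends on the CCI version, so the whitespace-delimited parsing here is an assumption to verify in your environment.

    # Minimal sketch: run "raidcom get dp_pool -key opt" (the command this
    # section describes) and interpret the STS/DAT columns. The parsing
    # assumes a whitespace-delimited header row containing STS and DAT;
    # verify the actual output layout for your CCI version.
    import subprocess

    STS_MEANING = {
        "STP": "monitoring stopped, relocation stopped",
        "RLC": "monitoring stopped, relocation operating",
        "MON": "monitoring operating, relocation stopped",
        "RLM": "monitoring operating, relocation operating",
    }
    DAT_MEANING = {"VAL": "valid", "INV": "invalid", "PND": "being calculated"}

    out = subprocess.run(["raidcom", "get", "dp_pool", "-key", "opt"],
                         capture_output=True, text=True, check=True).stdout
    lines = out.splitlines()
    hdr_i = next(i for i, l in enumerate(lines) if "STS" in l.split())
    header = lines[hdr_i].split()
    sts_i, dat_i = header.index("STS"), header.index("DAT")

    for row in lines[hdr_i + 1:]:
        cols = row.split()
        if len(cols) <= max(sts_i, dat_i):
            continue
        print(f"STS={cols[sts_i]} ({STS_MEANING.get(cols[sts_i], '?')}), "
              f"DAT={cols[dat_i]} ({DAT_MEANING.get(cols[dat_i], '?')})")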

Relocation speed

The page relocation speed can be set to 1(Slowest), 2(Slower), 3(Standard), 4(Faster), or 5(Fastest). The default is 3(Standard). If you want to perform tier relocation at high speed, use the 5(Fastest) setting. If you set a speed slower than 3(Standard), the load on the data drives is low when tier relocation is performed.

Based on the number of parity groups that constitute a pool, this function adjusts the number of V-VOLs for which tier relocation can be performed at one time. Tier relocation can be performed on as many as 32 V-VOLs in a storage system at once.

After changing the setting, the relocation speed and the data drive load may not change in the following cases:
• The number of parity groups is very small.
• The number of V-VOLs associated with the pool is very small.
• Tier relocations are being performed on multiple pools.

Monitoring modes

When you create or edit a pool, specify the Dynamic Tiering monitoring mode: Period mode or Continuous mode. If you change the mode while monitoring is being performed, the new setting takes effect when the next monitoring cycle starts.

Period mode

Period mode is the default setting. If Period mode is enabled, tier range values and page relocations are determined based solely on the monitoring data from the last complete cycle. Relocation is performed according to any changes in I/O load. However, if the I/O load varies greatly, relocation may not finish in one cycle.

Continuous mode

If Continuous mode is enabled, a weighted average efficiency is calculated by weighting the latest monitoring information against the monitoring information collected in past cycles. Because tier relocation is performed based on this weighted average efficiency, unnecessary relocation can be avoided even if a temporary increase or decrease in I/O load occurs.
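The guide does not publish the exact weighting formula used in Continuous mode, so the following Python sketch is only conceptual: it uses a simple exponential weighting that favors the latest cycle while retaining history, which is the behavior the text describes.

    # Conceptual sketch only: the storage system's actual Continuous-mode
    # weighting is internal. An exponentially weighted average damps a
    # one-cycle spike or dip instead of letting it dominate.

    def weighted_average(history: list[float], weight: float = 0.5) -> float:
        """Blend per-cycle I/O samples so a temporary load change
        does not dominate the tier-range calculation."""
        avg = history[0]
        for sample in history[1:]:
            avg = weight * sample + (1.0 - weight) * avg
        return avg

    iops_per_cycle = [120, 115, 900, 130]    # one temporary spike in cycle 3
    print(weighted_average(iops_per_cycle))  # spike is damped, not dominant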


Cautions when using monitoring modes
• If Continuous mode is used, best practice is to collect monitoring information using one of the following execution modes (a script sketch appears after these cautions):
  ○ Auto execution mode
  ○ Manual execution mode, collecting periodic monitoring information by defining a script using CCI

  If Manual execution mode is used without scripts, the Continuous monitoring mode can still be set. However, in this case, unexpected results may be calculated, because the weighted average efficiency is based on information obtained in past cycles over very different (short and long) durations.

• When the monitoring mode is set to Continuous, the frequency distributions displayed for each pool and V-VOL are calculated by using the monitor values on which the weighted calculation has been done.


These calculated values are the predictive values for the next cycle after all pages are successfully relocated. Therefore, these values may differ from an actual monitoring result when they appear.

In Performance Utilization of each tier, regardless of the monitoring mode setting, the monitor values already collected in the current cycle are displayed.

If you switch the monitoring mode from Period to Continuous or from Continuous to Period, the monitoring data being collected in the current cycle is not discarded. However, the data calculated from past monitoring cycles by the weighted calculation is reset.
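The following Python sketch shows the kind of periodic CCI script referenced in the cautions above, driving fixed-length monitoring cycles in Manual execution mode. The raidcom monitor and raidcom reallocate sub-commands and options shown here are assumptions based on the Command Control Interface Command Reference; verify the exact syntax for your CCI version before use.

    # Sketch of a periodic monitoring script for Manual execution mode.
    # The raidcom command forms below are assumptions to verify against
    # the CCI Command Reference for your microcode version.
    import subprocess, time

    POOL_ID = "0"             # hypothetical pool ID
    CYCLE_SECONDS = 8 * 3600  # a fixed cycle keeps past samples comparable

    def raidcom(*args: str) -> None:
        subprocess.run(["raidcom", *args], check=True)

    while True:
        raidcom("monitor", "pool", "-pool_id", POOL_ID, "-operation", "start")
        time.sleep(CYCLE_SECONDS)   # collect one fixed-length cycle
        raidcom("monitor", "pool", "-pool_id", POOL_ID, "-operation", "stop")
        # Relocate using the monitoring data fixed by the stop above.
        raidcom("reallocate", "pool", "-pool_id", POOL_ID, "-operation", "start")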

Notes on performing monitoring
• You can collect a new cycle of monitoring data while performing relocation.
• After monitoring stops, the tier range is automatically calculated.
• The latest available monitoring information, collected just before the relocation is performed, is used for relocation processing.
• When relocation is performed, the status of the monitor information must be valid (VAL).

Downloading the tier relocation log file

You can download the log file that contains the results of past tier relocations. See Tier relocation log file contents on page 172 for information about the contents of the log.

Note: For details on how to download the tier relocation log file using the raidinf command, see the System Administrator Guide.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Export Tier Relocation Log window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click DP Pools, and then select System GUI.
   c. Click More Actions > Export Tier Relocation Log.
   In Device Manager - Storage Navigator, use one of the following methods:
   • Select Pools in the Storage Systems tree. On the Pools tab, click More Actions and select Export Tier Relocation Log.
   • From the Actions menu, select Pool > Export Tier Relocation Log.


2. In the dialog box, specify a folder for the log file you download and click Save.
   If you change the file name from the default, make sure the file name has the .tsv extension before saving the file.

Tier relocation log file contents

In every cycle in which tier relocation is performed, information about each pool and V-VOL is exported to the tier relocation log. Incorporating the latest tier relocation results into the log may take approximately 30 minutes. The tier relocation log file is tab-delimited and contains the following information.

Each item below indicates whether it is output for each pool, whether it is output for each V-VOL, the type of information, and a description.

• Cycle ID (pool: Yes; V-VOL: Yes; Common): ID of each cycle of a tier relocation. A common ID is allocated to the pool logs and V-VOL logs collected in one cycle.
• Log Format Version (pool: Yes; V-VOL: Yes; Common): Version number of the tier relocation log format.
• DKC Serial Number (pool: Yes; V-VOL: Yes; Common): Serial number of the storage system.
• Log Type (pool: Yes; V-VOL: Yes; Common): POOL indicates log information of each pool; V-VOL indicates log information of each V-VOL.
• LDEV ID (pool: No; V-VOL: Yes; Common): LDEV ID of a V-VOL exported to a log.
• Pool ID (pool: Yes; V-VOL: Yes; Common): Pool ID of a pool exported to a log.
• Num of V-VOLs (pool: Yes; V-VOL: No; Common): Number of V-VOLs to be processed when tier relocation is performed.
• Tiering Policy (pool: No; V-VOL: Yes; Tier relocation result): Value of the tiering policy, from All(0) to Level31(31). From Level6(6) to Level31(31), the names of tiering policies can be changed; if these names have been changed, the new names appear.
• Tier1 Total, Tier2 Total, Tier3 Total (pool: Yes; V-VOL: No; Capacity information): Total pages of tier 1, 2, or 3.
• Tier1 Used, Tier2 Used, Tier3 Used (pool: Yes; V-VOL: Yes; Capacity information): Pages assigned to tier 1, 2, or 3 at the start of tier relocation.
• Start Relocation Date, Start Relocation Time (pool: Yes; V-VOL: Yes, note 1; Common): Starting date and time of the tier relocation.
• End Relocation Date, End Relocation Time (pool: Yes; V-VOL: Yes, note 1; Common): Ending date and time of the tier relocation.
• Result Status (pool: Yes; V-VOL: Yes, note 1; Tier relocation result): Status of a tier relocation.
  ○ Normal End: Tier relocation and optimization ended normally.
  ○ Normal End (Optimization remains): Tier relocation ended normally, but tier optimization terminated in the middle of processing (note 2).
  ○ Suspend: Tier relocation was suspended.
• Detail Status (pool: Yes; V-VOL: Yes, note 1; Tier relocation result): If the Result Status is Suspend, one of the following reasons is displayed.
  ○ Monitor discarded: Suspended due to the discarding of monitoring data (note 3).
  ○ End of cycle: Suspended due to incomplete tier relocation during a monitoring cycle.
  ○ Requested by user: Suspended due to a request by a user (note 2).
  ○ Threshold exceeded: Suspended because the used capacity of the pool reached a threshold due to tier relocation. When the used capacity of a pool reaches the depletion threshold, this reason is logged.
  ○ FMC threshold exceeded: Suspended because the used physical capacity in the accelerated compression-enabled FMC parity group pool reached its full capacity.
  ○ Cache blocked: Suspended because a cache memory is blocked.
  ○ Volume blocked: Suspended because an LDEV that is a pool-VOL or V-VOL is blocked.
  ○ The tier management changed (Auto/Manual): Suspended because the tier management mode was changed from Auto to Manual, or from Manual to Auto.
  ○ Other reasons: Suspended for reasons other than the above, such as a V-VOL being specified as the secondary volume of a TrueCopy, global-active device, or Universal Replicator pair when an initial copy operation was performed.
• Completed Rate (%) (pool: Yes; V-VOL: Yes; Tier relocation result): Progress percentage at the time tier relocation ends or is suspended.
• Remediation Rate (%) (pool: Yes; V-VOL: Yes; Tier relocation result): IOPH (I/O per hour) remediation rate at the time tier relocation ends or is suspended. The remediation rate = ((total IOPH of the pages after promotion) ÷ (total IOPH of all pages planned for promotion)) × 100, where promotion is page migration from a lower tier to a higher tier.
• Planned Tier1->Tier2, Tier1->Tier3, Tier2->Tier1, Tier2->Tier3, Tier3->Tier1, Tier3->Tier2 (pool: Yes; V-VOL: Yes; Tier relocation): Number of pages planned to move between the indicated tiers, for each of the six tier pairs.
• Moved Tier1->Tier2, Tier1->Tier3, Tier2->Tier1, Tier2->Tier3, Tier3->Tier1, Tier3->Tier2 (pool: Yes; V-VOL: Yes; Tier relocation): Number of pages actually moved between the indicated tiers, for each of the six tier pairs.
• IOPH (pool: Yes; V-VOL: Yes; Monitoring result): IOPH of all pools or V-VOLs.
• IOPH Tier1 (%), IOPH Tier2 (%), IOPH Tier3 (%) (pool: Yes; V-VOL: Yes; Monitoring result): Percentage of IOPH for tier 1, 2, or 3.
• Performance Util Tier1 (%), Tier2 (%), Tier3 (%) (pool: Yes; V-VOL: No; Monitoring result): Performance utilization of each tier, that is, the current I/O percentage based on the maximum performance of the tier.
• Tier1 Low Range, Tier2 High Range, Tier2 Low Range, Tier3 High Range (pool: No; V-VOL: Yes; Monitoring result): Lower or higher limit of the range for the indicated tier.
• Reclaim Zero Page Num (pool: Yes; V-VOL: Yes; Tier relocation): Number of pages processed in an operation to reclaim zero pages.
• Non Compliant Tiering Policy Number (pool: Yes; V-VOL: No; Monitoring result): Number of a tiering policy that does not conform to the current tier configuration. A non-compliant policy prevents tier relocation.
• Realtime Moved Tier2->Tier1 (Unplanned), Realtime Moved Tier3->Tier1 (Unplanned) (pool: Yes; V-VOL: Yes; Tier relocation): Number of pages moved from tier 2 or tier 3 to tier 1 by active flash while Dynamic Tiering was performing tier relocation, where the page migration was not planned by Dynamic Tiering.
• Realtime Moved Tier2->Tier1 (Planned), Realtime Moved Tier3->Tier1 (Planned) (pool: Yes; V-VOL: Yes; Tier relocation): Number of pages moved from tier 2 or tier 3 to tier 1 by active flash while Dynamic Tiering was performing tier relocation, where the page migration was planned by Dynamic Tiering.
• Realtime Moved Tier1->Tier2, Realtime Moved Tier1->Tier3 (pool: Yes; V-VOL: Yes; Tier relocation): Number of pages moved from tier 1 to tier 2 or tier 3 by active flash while Dynamic Tiering was performing tier relocation.
• Realtime Moved Tier2->Tier1 (Non Compliant), Tier3->Tier1 (Non Compliant), Tier1->Tier2 (Non Compliant), Tier1->Tier3 (Non Compliant) (pool: Yes; V-VOL: Yes; Tier relocation): Of the total pages moved between the indicated tiers by active flash, the number of migrated pages that do not conform to the Dynamic Tiering page migration plan.

Notes:
1. If the log file is lfv2 (Log Format Version 2) or later, the log information of each V-VOL appears. If the log file is lfv1, a hyphen appears.
2. If the log file is lfv5 (Log Format Version 5) or later, this information appears.
3. When pool-VOLs are deleted, the previously valid monitoring information is discarded, so the tier relocation is interrupted. After the pool-VOL deletion completes, the tier determination calculation is performed again, and valid monitoring information is re-created.
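Because the exported log is tab-delimited, it is straightforward to post-process. The following Python sketch summarizes per-pool results; it assumes the header spellings listed above (Log Type, Pool ID, Result Status, Completed Rate (%)) and a hypothetical file name, so check the actual header row of your export before relying on it.

    # Minimal sketch that summarizes an exported tier relocation log (.tsv).
    # Column names are taken from the item list above; verify them against
    # the header row of an actual export.
    import csv

    def summarize(path: str) -> None:
        with open(path, newline="") as f:
            for row in csv.DictReader(f, delimiter="\t"):
                if row.get("Log Type") != "POOL":   # skip per-V-VOL records
                    continue
                print(f"pool {row['Pool ID']}: "
                      f"{row['Result Status']} "
                      f"({row['Completed Rate (%)']}% completed)")

    summarize("relocation_log.tsv")   # hypothetical file name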

Tiering policy

The tiering policy function is used to assign a specific storage tier to a specific DP-VOL. A tiering policy specifies the subset of tiers that is available to a given set of DP-VOLs.

Tier relocation changes the location of previously stored data and is performed in conformance with the tiering policy. If a DP-VOL is initially allocated to a low-speed tier and the tiering policy is changed to a high-speed tier, relocation is performed in the next cycle.

For example, if you set the tiering policy level on a V-VOL (DP-VOL) to a tier with a high I/O speed, the data is always stored on the high-speed tier when relocating tiers. When you use that V-VOL (DP-VOL), regardless of the actual size of the I/O load, you always get high-speed responses. See Tiering policy levels on page 180.

When you create the DP-VOL, you can designate one of six existing tiering policies and define up to 26 new tiering policies. See Tiering policy levels on page 180 and Setting tiering policy on a DP-VOL on page 180.

Use the Edit LDEVs window to change the tiering policy settings. When tier relocation occurs, the tiering policy set for the DP-VOL is used to relocate data to the desired tier or tiers.

The tiering policy does not own pool capacity. Rather, pool capacity is shared among tiers. Pages are allocated in order of priority from upper to lower tiers in a tiering policy. When you specify a new allocation tier, pages are allocated starting from the tier that you specify.

The tier range, frequency distribution, and used capacity are displayed per tiering policy: existing tier levels All(0) and Level1(1) through Level5(5), and Level6(6) through Level31(31).

Custom policies

The settings of a tiering policy can be changed; tiering policies changed by a user are called custom policies. Custom policies can be defined for tiering policy IDs 6 to 31 (Level6(6) to Level31(31)). The following items can be set in a custom policy:
• Rename the custom policy
• Change the allocation thresholds


Custom policy name

A custom policy name can be changed arbitrarily. You can change the names of custom policies from Level6(6) to Level31(31). For example, if you change the name of Level6(6) to Policy06, Policy06(6) will then be displayed.

Allocation threshold

You can define allocation thresholds in new policies from Level6(6) to Level31(31).

For all DP-VOLs that have the tiering policy in a pool, Dynamic Tiering performs the relocation of pages to each tier based on the tiering policy setting.

Max(%) and Min(%) parameters: When a tiering policy is created, four parameters can be set: Tier1 Max, Tier1 Min, Tier3 Max, and Tier3 Min. Each parameter setting is a ratio of the total capacity of the allocated area of the DP-VOLs that have the same tiering policy set for a pool.

The Tier1 and Tier3 parameter settings can also limit the capacity for all volumes in a configuration that contains multiple DP-VOLs with the same intended use. These settings can prevent conditions such as the following (a validation sketch follows this list):
• Excess allocation of SSD capacity for unimportant applications.
• Degradation in average response time for high-performance operations.
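The following Python sketch checks a custom policy's Max/Min parameters. The constraints encoded here (Min not exceeding Max, and Tier1 Max + Tier3 Max not exceeding 100% so that the tier 2 share of 100% - Tier1 Max - Tier3 Max is non-negative) follow from how this guide derives the tier 2 share; treat them as illustrative rather than an official validation rule.

    # Illustrative sanity check for custom-policy Max/Min parameters.
    # The constraints follow from the tier 2 derivation used in this guide.

    def check_policy(t1_min: int, t1_max: int, t3_min: int, t3_max: int) -> None:
        assert 0 <= t1_min <= t1_max <= 100, "Tier1 Min must not exceed Tier1 Max"
        assert 0 <= t3_min <= t3_max <= 100, "Tier3 Min must not exceed Tier3 Max"
        assert t1_max + t3_max <= 100, "Tier1 Max + Tier3 Max must not exceed 100%"
        print(f"Tier 2 share: {100 - t1_max - t3_max}%")

    # Uses the same values as the example that follows (40/20/40/20).
    check_policy(t1_min=20, t1_max=40, t3_min=20, t3_max=40)   # Tier 2 share: 20%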

Tiering policy examples

The following figure shows the allocation threshold settings Tier1 Max=40%, Tier1 Min=20%, Tier3 Max=40%, and Tier3 Min=20% for a DP-VOL with a Level6(6) setting when the initially mapped capacity is 100 GB.


The following figure shows an example of data allocation when the default tiering policy level All(0) is specified. Pages in the DP-VOL are relocated to any tier.

The following figure shows an example of data allocation when the tiering policy is set to Level1(1) (see Level1(1) in Tiering policy levels on page 180). In this case, pages in the DP-VOL are relocated to tier 1 and are not relocated to other tiers.


Setting tiering policy on a DP-VOL

The setting of a tiering policy for a DP-VOL is optional. If one is not selected, the default is the All(0) tiering policy level. The available levels are listed in Tiering policy levels on page 180. DP-VOLs with different tiering policies can coexist in one pool. If you specify the level of the tiering policy, DP-VOLs with the same policy are grouped together.
• All(0) is the default policy. In this case, data is stored in all of the tiers.
• When a tier is added to the pool after the tiering policy is set on a DP-VOL, the DP-VOL is relocated according to the new tier lineup. For example, if you set the tiering policy to level 5, the data is always allocated to the tier with the lowest I/O speed. If the pool has two tiers, data is stored in tier 2. If a new tier is added, the number of tiers becomes three, and if the new tier is the lowest tier, relocation is performed to move data into tier 3.

For more information about tiering policy and grouping, see Tiering policy levels on page 180.

Tiering policy levels

Tiering policy | 1-tier pool | 2-tier pool | 3-tier pool | Note
All(0) | Single tier | Both tiers | All 3 tiers | Default tiering policy.
Level1(1) | Same as All(0) | Tier 1 | Tier 1 | Data is located in the top tier. Any overflow moves to the next lower tier.
Level2(2) | Same as All(0) | Same as All(0) | Tier 1 and Tier 2 (see note below) | Data is located in the top tier after Level1(1) assignments are processed. Any overflow moves to the next lower tier.
Level3(3) | Same as All(0) | Same as All(0) | Tier 2 (see note below) | Data is located in the middle tier. Any overflow moves to the top tier.
Level4(4) | Same as All(0) | Same as All(0) | Tier 2 and Tier 3 (see note below) | Data is located in the middle tier after Level3(3) assignments are processed. Any overflow moves to the next lower tier.
Level5(5) | Same as All(0) | Tier 2 | Tier 3 (see note below) | Data is located in the bottom tier. Any overflow moves to the next higher tier.
From Level6(6) to Level31(31) (note 1) | Same as All(0) | Depends on user setting | Depends on user setting |

Note: For example, if additional capacity is added to the pool and that capacity defines a new Tier 1 or a new Tier 2, DP-VOLs with a Level5(5) assignment will not physically move, but Level5(5) will be associated with Tier 3. If additional capacity is added to the pool and that capacity defines a new Tier 3, DP-VOLs with a Level5(5) assignment will physically move to the new Tier 3, and Level5(5) will be associated with Tier 3.

Note 1: If these names have been changed, the new names appear instead.

Viewing the tiering policy in the performance graph

You can view the frequency distribution graph of the pool by selecting either the level of the tiering policy or the entire pool in the performance graph in the View Tier Properties window.

The following table shows how the tiering policy appears in the performance graph. How the graph appears depends on the number of tiers set in a pool and the tiering policy level selected when viewing the performance graph.

Tiering policy selected with performance graph | V-VOL displayed in the performance graph
All(0) | Frequency distribution of a DP-VOL set to all tiers.
Level1(1) | Frequency distribution of a DP-VOL set to level 1.
Level2(2) | Frequency distribution of a DP-VOL set to level 2.
Level3(3) | Frequency distribution of a DP-VOL set to level 3.
Level4(4) | Frequency distribution of a DP-VOL set to level 4.
Level5(5) | Frequency distribution of a DP-VOL set to level 5.
From Level6(6) to Level31(31) (note 1) | Frequency distribution of a DP-VOL set to a custom policy.

Note 1: If these names have been changed, the new names appear instead.


Reserving tier capacity when setting a tiering policy

If you set the tiering policy of a DP-VOL, the DP-VOL used capacity and the I/O performance limitation are reserved from the tier. The reserved limit performance per page is calculated as follows:

Reserved limit performance per page = (performance limit of the tier) ÷ (number of pages in the tier)

A DP-VOL without a tiering policy setting uses the unreserved area in the pool.


Example of reserving tier capacity

The reservation priority depends on the level of the tiering policy. The following figure illustrates the reservation priority: tiers are reserved in order of priority from (1) to (7). If the pool-VOL capacity is insufficient when you reserve a tier, the tier nearest to your specified tier is allocated. If you specify two tiers, as in level 2 or level 4 of the tiering policy, the upper tier is reserved first. If the capacity of the pool-VOL assigned to the upper tier is insufficient at that point, the lower tier defined by the tiering policy is reserved automatically. For example, in the case of level 2 in the diagram below, tier 1 is reserved first. If the capacity of tier 1 is insufficient at this point, tier 2 is reserved automatically. For details, see Notes on tiering policy settings on page 185.

Tier reservation priority | Tiering policy | Reserved tier
1 | Level1(1) | Tier 1
2 | Level3(3) | Tier 2
3 | Level5(5) | Tier 3
4 to 29 | From Level6(6) to Level31(31) (note 1) | The custom policy with the smaller number is prioritized. Tier 1: for each policy, the Tier1 Min value is reserved. Tier 2: for each policy, the value obtained by deducting the total of Tier1 Max and Tier3 Max from 100(%) is reserved. Tier 3: for each policy, the Tier3 Min value is reserved.
30 | All(0) | All tiers.
30 | Level2(2) | Tier 1 and Tier 2.
30 | Level4(4) | Tier 2 and Tier 3.
30 | From Level6(6) to Level31(31) (note 1) | Tier 1: for each policy, the Tier1 Max value is reserved. Tier 3: for each policy, the Tier3 Max value is reserved.

Note 1: If these names have been changed, the new names appear instead.


Notes on tiering policy settings
• If Auto is set as the execution mode, tier relocation is performed based on the monitoring cycle. Therefore, when the tiering policy setting is changed, tier relocation automatically implements the tiering policy at the end of the current monitoring cycle. See Example 1 in Execution mode settings and tiering policy on page 192.
• If Manual is set as the execution mode, you must manually perform monitoring, issue a monitor stop, and then start relocation (see Example 2, Case 1, in Execution mode settings and tiering policy on page 192). If you change the tiering policy settings while obtaining monitoring data, the monitoring data is used for the next tier relocation (see Example 2, Case 2, in Execution mode settings and tiering policy on page 192). Therefore, you do not need to perform new monitoring.
• If a capacity shortage exists in the tier being set, a message may appear in the View Tier Property window indicating that the page allocation cannot be completed according to the tiering policy specified for the V-VOL. Should that occur, the page allocation in the entire pool, including the tier that defines the tiering policy, might not be optimized.


Note: The message that page allocation cannot be completed according to the tiering policy does not appear when these tiering policies are set:
• All(0)
• In a 2-tier configuration, Level2(2), Level3(3), or Level4(4), which is equivalent to All(0)

When a capacity shortage exists in a tier, you can revise the setting of the tiering policy or the configuration of tiers. If the capacity of one tier is fully exhausted, the migrating pages are assigned to the next tier according to the tiering policy:
○ Level1(1): When tier 1 is full, the remaining pages are allocated to tier 2. If tier 2 is full, the remaining pages are allocated to tier 3.
○ Level3(3): When tier 2 is full, the remaining pages are allocated to tier 1. If tier 1 is full, the remaining pages are allocated to tier 3.
○ Level5(5): When tier 3 is full, the remaining pages are allocated to tier 2. If tier 2 is full, the remaining pages are allocated to tier 1.
○ Level2(2), Level4(4), and from Level6(6) to Level31(31): When the specified tier is full, the unallocated pages are kept in the prior tier or are allocated to a tier that has free space. From Level6(6) to Level31(31), the names of tiering policies can be changed; if these names have been changed, the new names appear.

• If a performance shortage exists in the tier being set, pages may not be allocated in conformance with the tiering policy specified for the V-VOL. In that case, pages are allocated according to the performance ratio of each tier. As shown in the following table, allocation capacity considerations are based on the tiering policy.

Tiering policy | Allocation capacity considerations
All(0), Level2(2), or Level4(4) | Tier range and I/O performance.
Level1(1), Level3(3), or Level5(5) | Tier range.
From Level6(6) to Level31(31) (note 1) | First phase: tier range. Allocation capacities in each tier: Tier 1, the setting value (%) of Tier1 Min; Tier 2, the value obtained by deducting Tier1 Max (%) and Tier3 Max (%) from 100 (%); Tier 3, the setting value (%) of Tier3 Min. Second phase: tier range and I/O performance. The capacity that remains after the first-phase mapped capacities are deducted from the total used capacity is mapped to each tier.

Note 1: If these names have been changed, the new names appear instead.


New page assignment tier

If you set the new page assignment tier value, when a new page is needed by a DP-VOL, the page is taken from the tier indicated by the new page assignment tier value. You can set this function by using Hitachi Device Manager - Storage Navigator. The setting takes effect immediately. The following table lists the setting values:

Setting value | Description
High | The new page is assigned from the highest of the tiers set in the tiering policy.
Middle | The new page is assigned from the middle of the tiers set in the tiering policy.
Low | The new page is assigned from the lowest of the tiers set in the tiering policy.

The following tables show the tiers to which new pages are preferentially assigned.

New page assignment order in a 2-tier pool:

Tiering policy | When specifying High | When specifying Middle | When specifying Low | Note
All | From tier 1 to 2 | From tier 1 to 2 | From tier 2 to 1 | If you set Low, tier 2 is given priority over tier 1.
Level 1 | From tier 1 to 2 | From tier 1 to 2 | From tier 1 to 2 | The assignment sequences for High, Middle, and Low are the same.
Level 2 | From tier 1 to 2 | From tier 1 to 2 | From tier 2 to 1 | Every assignment sequence is the same as when All is specified as the tiering policy.
Level 3 | From tier 1 to 2 | From tier 1 to 2 | From tier 2 to 1 | Every assignment sequence is the same as when All is specified as the tiering policy.
Level 4 | From tier 1 to 2 | From tier 1 to 2 | From tier 2 to 1 | Every assignment sequence is the same as when All is specified as the tiering policy.
Level 5 | From tier 2 to 1 | From tier 2 to 1 | From tier 2 to 1 | The assignment sequences for High, Middle, and Low are the same.

For custom policies (Level6(6) to Level31(31)) in a 2-tier pool:

Number | Condition | Order of new page allocation
1 | T1 MIN = 100% | Same as Level1(1)
2 | T1 MAX = 0% | Same as Level5(5)
3 | T1 MAX > 0% | Same as All(0)

New page assignment order in a 3-tier pool:

Tiering policy | When specifying High | When specifying Middle | When specifying Low | Note
All | From tier 1, 2, to 3 | From tier 2, 3, to 1 | From tier 3, 2, to 1 | Specifying High, Middle, or Low changes the assignment sequence.
Level 1 | From tier 1, 2, to 3 | From tier 1, 2, to 3 | From tier 1, 2, to 3 | The assignment sequences for High, Middle, and Low are the same.
Level 2 | From tier 1, 2, to 3 | From tier 1, 2, to 3 | From tier 2, 1, to 3 | If you set Low, tier 2 is given priority over tier 1.
Level 3 | From tier 2, 3, to 1 | From tier 2, 3, to 1 | From tier 2, 3, to 1 | The assignment sequences for High, Middle, and Low are the same.
Level 4 | From tier 2, 3, to 1 | From tier 2, 3, to 1 | From tier 3, 2, to 1 | If you set Low, tier 3 is given priority over tier 2.
Level 5 | From tier 3, 2, to 1 | From tier 3, 2, to 1 | From tier 3, 2, to 1 | The assignment sequences for High, Middle, and Low are the same.

For custom policies (Level6(6) to Level31(31)) in a 3-tier pool:

Number | Condition | Order of new page allocation
1 | T1 MIN = 100% | Same as Level1(1)
2 | T3 MIN = 100% | Same as Level5(5)
3 | T1 MAX > 0% and T3 MAX = 0% | Same as Level2(2)
4 | T1 MAX = 0% and T3 MAX = 0% | Same as Level3(3)
5 | T1 MAX = 0% and T3 MAX > 0% | Same as Level4(4)
6 | T1 MAX > 0% and T3 MAX > 0% | Same as All(0)
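The 3-tier assignment order above is a pure lookup, so it can be transcribed directly. The following Python sketch encodes that table; it adds nothing beyond what the table states and is only a convenience for reasoning about configurations.

    # Table-lookup sketch of the preferred new-page assignment order for a
    # three-tier pool, transcribed from the table above. Keys are
    # (tiering policy level, new page assignment tier setting).

    ORDER_3TIER = {
        ("All",    "High"):   (1, 2, 3),
        ("All",    "Middle"): (2, 3, 1),
        ("All",    "Low"):    (3, 2, 1),
        ("Level1", "High"):   (1, 2, 3),
        ("Level1", "Middle"): (1, 2, 3),
        ("Level1", "Low"):    (1, 2, 3),
        ("Level2", "High"):   (1, 2, 3),
        ("Level2", "Middle"): (1, 2, 3),
        ("Level2", "Low"):    (2, 1, 3),
        ("Level3", "High"):   (2, 3, 1),
        ("Level3", "Middle"): (2, 3, 1),
        ("Level3", "Low"):    (2, 3, 1),
        ("Level4", "High"):   (2, 3, 1),
        ("Level4", "Middle"): (2, 3, 1),
        ("Level4", "Low"):    (3, 2, 1),
        ("Level5", "High"):   (3, 2, 1),
        ("Level5", "Middle"): (3, 2, 1),
        ("Level5", "Low"):    (3, 2, 1),
    }

    print(ORDER_3TIER[("Level4", "Low")])   # tiers are tried in order 3, 2, 1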

Relocation priority

If you use the relocation priority function, you can set the selection priority of a DP-VOL for relocation. With this setting, a prioritized DP-VOL can be relocated earlier during a relocation cycle. You can set this function by using Hitachi Device Manager - Storage Navigator. The function is activated after the monitoring data is collected.
• If no relocation priority is set on any DP-VOL, the general order of DP-VOL selection is to select the next DP-VOL, in LDEV number order, after the last DP-VOL that fully performed relocation. This selection order persists across relocation cycles.


• If one or more DP-VOLs are assigned a relocation priority, the prioritized DP-VOLs are operated on in the early portion of the relocation cycle, before the others in the general order of DP-VOL selection.

• If no V-VOL is given priority for relocation: For example, if the LDEVs with IDs LDEV#1 through LDEV#5 are not given priority for relocation, the LDEVs are relocated in the following sequence. In this example, three LDEVs are relocated in each cycle, but the number of LDEVs relocated may vary with the relocation cycle and the data size.

Relocating cycle | LDEV#1 | LDEV#2 | LDEV#3 | LDEV#4 | LDEV#5
T1 | 1st | 2nd | 3rd | Unperformed | Unperformed
T2 | 3rd | Unperformed | Unperformed | 1st | 2nd
T3 | Unperformed | 1st | 2nd | 3rd | Unperformed
T4 | 2nd | 3rd | Unperformed | Unperformed | 1st

(The columns show the relocating sequence of each LDEV in each cycle.)

• If V-VOLs are given priority for relocation: For example, if LDEV#3 and LDEV#4 are given priority for relocation among LDEV#1 through LDEV#5, the LDEVs are relocated in the following sequence (see the sketch after this table). In this example, three LDEVs are relocated in each cycle, but the number of LDEVs relocated may vary with the relocation cycle and the data size.

Relocating cycle | LDEV#1 | LDEV#2 | LDEV#3 | LDEV#4 | LDEV#5
T1 | 3rd | Unperformed | 1st | 2nd | Unperformed
T2 | Unperformed | 3rd | 1st | 2nd | Unperformed
T3 | Unperformed | Unperformed | 1st | 2nd | 3rd
T4 | 3rd | Unperformed | 1st | 2nd | Unperformed
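The following Python sketch is a toy model of the selection order described above: prioritized DP-VOLs are handled at the start of every cycle, while the rest continue round-robin from where the previous cycle left off. A cycle capacity of three LDEVs is only an example, as in the tables above.

    # Toy model of the relocation selection order described above. The
    # per-cycle capacity of 3 LDEVs matches the examples, not a real limit.

    def plan_cycles(ldevs, prioritized, per_cycle=3, cycles=4):
        others = [l for l in ldevs if l not in prioritized]
        cursor = 0                      # round-robin position, kept across cycles
        for t in range(1, cycles + 1):
            order = list(prioritized)   # prioritized LDEVs go first every cycle
            while len(order) < per_cycle:
                order.append(others[cursor % len(others)])
                cursor += 1
            print(f"T{t}: {order}")

    # Reproduces the prioritized example table (LDEV#3 and LDEV#4 first).
    plan_cycles(["LDEV#1", "LDEV#2", "LDEV#3", "LDEV#4", "LDEV#5"],
                prioritized=["LDEV#3", "LDEV#4"])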

Assignment tier when pool-VOLs are deleted

When you delete pool-VOLs, the pages allocated to the pool-VOLs are moved to other pool-VOLs. The following tables show the tiers to which pages are allocated before and after pool-VOLs are deleted. This operation does not depend on the tiering policy or the settings of newly assigned tiers. Relocate tiers after deleting pool-VOLs.

The following table describes page allocation in a 3-tier configuration.


Tier of deleted pool-VOLs | Order in which pages are allocated to tiers | Description
Tier 1 | Tier 1, Tier 2, and Tier 3 | If there is free space in Tier 1, pages are allocated to Tier 1. If there is no free space in Tier 1, pages are allocated to Tier 2. If there is no free space in Tier 1 or Tier 2, pages are allocated to Tier 3.
Tier 2 | Tier 2, Tier 1, and Tier 3 | If there is free space in Tier 2, pages are allocated to Tier 2. If there is no free space in Tier 2, pages are allocated to Tier 1. If there is no free space in Tier 1 or Tier 2, pages are allocated to Tier 3.
Tier 3 | Tier 3, Tier 2, and Tier 1 | If there is free space in Tier 3, pages are allocated to Tier 3. If there is no free space in Tier 3, pages are allocated to Tier 2. If there is no free space in Tier 2 or Tier 3, pages are allocated to Tier 1.

The following table describes page allocation in a 2-tier configuration.

Tier of deleted pool-VOLs | Order in which pages are allocated to tiers | Description
Tier 1 | Tier 1 and Tier 2 | If there is free space in Tier 1, pages are allocated to Tier 1. If there is no free space in Tier 1, pages are allocated to Tier 2.
Tier 2 | Tier 2 and Tier 1 | If there is free space in Tier 2, pages are allocated to Tier 2. If there is no free space in Tier 2, pages are allocated to Tier 1.

Formatted pool capacity

The formatted pool capacity is the capacity of the initialized free space and reserved capacity of a pool, not the capacity of all free space and reserved capacity of the pool. The free space of the pool is monitored by the storage system, and space is formatted automatically if needed. You can confirm the formatted pool capacity in the View Pool Management Status window. The format speed of the free space and reserved capacity of the pool is adjusted depending on the load of the storage system.


For a pool with pool-VOLs that have accelerated compression enabled, the formatted pool capacity is not the parity group capacity; it is the pool capacity.

New pages are allocated, then initialized, during data write operations to the V-VOL. If a significant number of new pages are allocated, initialization might be delayed as a result of conflicts between data write and new page initialization processes. Such conflicts can occur, for example, when you create a file system on new DP-VOLs from the host. You can initialize the free space of a pool in advance to prevent delays in data write operations.

If you want to change the method used to format the free space of a pool, contact customer support.

Rebalancing the usage level among parity groups

If multiple parity groups contain LDEVs used as pool volumes, rebalancing can correct biased usage rates among the parity groups. Rebalancing is performed as if each parity group were a single pool volume. After rebalancing, the usage rates of the LDEVs within a parity group may not be balanced, but the usage rate across the entire pool is balanced.

The usage level among parity groups is automatically rebalanced when these operations are in progress:
• Expanding pool capacity
• Shrinking pool capacity
• Reclaiming zero pages
• Reclaiming zero pages in a page release request issued by the host (with the Write Same command, for example)
• Performing tier relocations

Note: In pools comprised of pool volumes assigned from parity groups with accelerated compression enabled, the rebalancing operation takes the parity group's used capacity into consideration. Therefore, after the rebalancing operation, the capacity of the pool volume may not be reduced.

If you expand the pool capacity, Dynamic Provisioning moves data to the added space on a per-page basis. When the data is moved, the usage rates among the parity groups of the pool volumes are rebalanced.

Host I/O performance may decrease when data is moved. If you do not want the usage level of parity groups to be balanced automatically, contact customer support.

You can see the rebalancing progress of the usage level among parity groups in the View Pool Management Status window. Dynamic Provisioning automatically stops balancing the usage levels among parity groups if the cache memory is not redundant or if the pool usage rate reaches the threshold.

Execution mode settings and tiering policy

The following figure depicts how tier relocation is performed after changing the tiering policy setting while Auto execution mode is used.

The following figure depicts two cases of how tier relocation is performed after changing the tiering policy setting while Manual execution mode is used.


Functions overview for active flash and Dynamic Tiering

Tier management is performed by both active flash and Dynamic Tiering. The differences in supported functionality are included in the table below.

Category: Initial page allocation
• Assigning new pages to the write data of the host (active flash: Supported; Dynamic Tiering: Supported)

Category: Monitoring of performance
• Monitoring tiers based on the specified cycle time (active flash: Supported; Dynamic Tiering: N/A)

Category: Tier relocation
• Promoting pages to the tier determined by the scheduled performance monitoring (active flash: Supported; Dynamic Tiering: Supported)
• Promoting pages whose latest access frequency is suddenly high from tier 2 or tier 3 to tier 1 (active flash: Supported; Dynamic Tiering: N/A)
• Demoting pages whose latest access frequency is low from tier 1 to tier 2 or tier 3, to maintain capacity in tier 1 (active flash: Supported; Dynamic Tiering: N/A)

The following diagram shows the differences between the functions of Dynamic Provisioning, Dynamic Tiering, and active flash.


Relocating pages whose latest I/O frequency is suddenly high by active flash

The active flash feature identifies frequently accessed pages by counting the number of I/Os to specific pages. Pages that are accessed many times are promoted to tier 1. Pages whose latest access frequency is low are allocated to lower tiers.


Dynamic Tiering workflow

The following illustration shows the workflow for setting up Dynamic Tiering on the storage system.

As shown in the illustration, Hitachi Device Manager - Storage Navigator and Command Control Interface (CCI) have different workflows. This document describes how to set up Dynamic Tiering using Hitachi Device Manager - Storage Navigator. For details about how to set up Dynamic Tiering using CCI, see the Command Control Interface Command Reference and Command Control Interface User and Reference Guide. Use Hitachi Device Manager - Storage Navigator to create pools and DP-VOLs.


*Notes:
1. When you create a pool using CCI, you cannot enable the multi-tier pool option or register multiple media as pool-VOLs. Before making tiers, enable the multi-tier pool option.
2. Enabling the multi-tier pool option from CCI automatically sets Tier Management to Manual. You must use Hitachi Device Manager - Storage Navigator to change Tier Management to Auto.

Caution: When you delete a pool, its pool-VOLs (LDEVs) are blocked, and you must format the blocked LDEVs before using them.

Active flash workflow

The active flash feature of Dynamic Tiering can be set up using either Hitachi Device Manager - Storage Navigator or Command Control Interface.

The following illustration shows the workflow for a Storage Administrator to set up active flash on the storage system. As shown in the illustration, Hitachi Device Manager - Storage Navigator and Command Control Interface have different workflows. The details about how to set up active flash using Hitachi Device Manager - Storage Navigator are covered in subsequent topics. For details about how to set up active flash using Command Control Interface, see the Command Control Interface Command Reference and Command Control Interface User and Reference Guide. Use Hitachi Device Manager - Storage Navigator to create pools and DP-VOLs.


• In Command Control Interface, when creating a pool, you cannot enable Multi-Tier Pool and cannot register multiple media as pool-VOLs. Before making tiers, enable Multi-Tier Pool.
• Enabling Multi-Tier Pool from Command Control Interface automatically sets Tier Management to Manual. To change Tier Management to Auto, you must do this in Hitachi Device Manager - Storage Navigator.

Note: If you delete a pool, its pool-VOLs (LDEVs) will be blocked. If they are blocked, format them before using them.

Thresholds

Pool utilization thresholds

Dynamic Provisioning monitors pool capacity using thresholds. A threshold is the proportion (%) of the used capacity of the pool to the total capacity of the pool, or the proportion (%) of the physical used capacity of the pool to the total capacity reserved for writing of the pool. Each pool has its own pool threshold values.
• Warning Threshold: Set the value between 1% and 100%, in 1% increments. The default is 70%.
• Depletion Threshold: Set the value between 1% and 100%, in 1% increments. The default is 80%. The Depletion Threshold must be higher than the Warning Threshold.

If either threshold is exceeded by the used capacity of the pool, a warning is issued in the form of SIMs (Service Information Messages) to Hitachi Device Manager - Storage Navigator and SNMP (Simple Network Management Protocol) traps to the open-systems host. For more information on SNMP traps and the SNMP Manager, see the Hitachi SNMP Agent User Guide. See Working with SIMs on page 268 for more information about SIMs.

The following figure illustrates a total pool capacity of 1,000 GB, a Warning Threshold of 50%, and a Depletion Threshold of 80%. If the used capacity of the pool is larger than 50% (500 GB) of the total pool capacity, a SIM and an SNMP trap are reported. If the used capacity of the pool increases and exceeds the Depletion Threshold (80%), a SIM and an SNMP trap are reported again.


Note that in this scenario, if the actual pool usage percentage is 50.1%, only 50% appears in the Hitachi Device Manager - Storage Navigator window because the capacity amount is truncated after the decimal point. If the threshold is set to 50%, a SIM and an SNMP trap are reported even though the pool usage percentage appearing on the screen does not indicate an exceeded threshold.
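This behavior can be pictured with a short sketch (a hypothetical helper, not storage-system code): the displayed percentage is truncated, while the SIM/SNMP check compares the untruncated usage against the threshold.

    import math

    def check_pool_usage(used_gb, total_gb, warning_pct=70, depletion_pct=80):
        actual_pct = used_gb / total_gb * 100
        displayed_pct = math.floor(actual_pct)  # the window truncates decimals
        warning_exceeded = actual_pct > warning_pct
        depletion_exceeded = actual_pct > depletion_pct
        return displayed_pct, warning_exceeded, depletion_exceeded

    # 501 GB used in a 1,000 GB pool with a 50% Warning Threshold:
    # the window shows 50%, yet a SIM and an SNMP trap are reported.
    print(check_pool_usage(501, 1000, warning_pct=50))  # (50, True, False)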

Pool subscription limit

A subscription limit lets you manage the maximum amount of over-provisioning that is acceptable for a pool. By managing the pool subscription limit, you can control the potential demand for storing data that might exceed the pool capacity.

Note: If you are using a pool comprised of pool-VOLs assigned from accelerated compression-enabled parity groups, the pool subscription limit is defined with respect to the pool capacity, not the capacity reserved for writing. In this case, the free area of the pool must be monitored even if the subscription limit is set to 100%.

If you do not want to monitor the free area of the pool, specify a subscription limit conforming to the following formula:

Pool capacity ÷ Pool capacity reserved for data writing × Subscription limit = 100%

For example, if the pool capacity is 100 TB and 80 TB of the pool capacity is reserved for writing, specify 80% as the subscription limit.
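A minimal sketch of that rule (the function name and units are illustrative assumptions):

    def subscription_limit_pct(pool_capacity_tb, reserved_for_writing_tb):
        """Subscription limit (%) at which free pool space need not be monitored."""
        return 100 * reserved_for_writing_tb / pool_capacity_tb

    print(subscription_limit_pct(100, 80))  # 80.0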

The subscription limit is the ratio (%) of the total capacity of the configured DP-VOLs to the total capacity of the pool. When the subscription limit is set, you cannot configure another DP-VOL if the new DP-VOL capacity would cause the subscription limit to be exceeded.

The subscription limit includes the pages required to store user data and control information. The total capacity of the DP-VOLs that are created from the pool is therefore smaller than the subscription limit capacity. The formula used to calculate the required pages for one DP-VOL includes the control information. To determine the total pages required in a pool, multiply the number of calculated pages by the number of DP-VOLs. The value enclosed in ceil( ) must be rounded up to the nearest whole number. The number of pages for one DP-VOL, including the control information, equals:

Number of pages for the DP-VOL including the control information = ceil((One DP-VOL capacity (MB) + ceil(One DP-VOL capacity (MB) / 3,145,548 (MB)) × 4 (pages) × 42 (MB)) / 42 (MB))

For example, if the pool capacity is 100 GB and the subscription limit is 150%, you can configure up to a total of 150 GB of capacity for the DP-VOLs related to the pool.
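The page calculation can be transcribed directly into code. The sketch below uses the constants from the formula above; the helper names are illustrative assumptions:

    import math

    PAGE_MB = 42
    UNIT_MB = 3_145_548  # capacity unit in the control-information term

    def pages_per_dp_vol(capacity_mb):
        """Pages for one DP-VOL, including control information."""
        control_mb = math.ceil(capacity_mb / UNIT_MB) * 4 * PAGE_MB
        return math.ceil((capacity_mb + control_mb) / PAGE_MB)

    def pages_per_pool(dp_vol_capacity_mb, num_dp_vols):
        """Total pages required in a pool for identical DP-VOLs."""
        return pages_per_dp_vol(dp_vol_capacity_mb) * num_dp_vols

    print(pages_per_dp_vol(3072))  # 78 pages for a 3 GB (3,072 MB) DP-VOL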

The following figure depicts setting the subscription limit of pool capacity.

Monitoring total DP-VOL subscription for a pool

You can configure the subscription limit of total DP-VOL capacity to pool capacity. This prevents a new DP-VOL whose capacity would cause the configured subscription limit to be exceeded from being created and associated with the pool. If you specify more than 100% as the subscription limit, or if the subscription limit is not set, you must monitor the free capacity of the pool because writes to the DP-VOLs can exceed the pool capacity.

The value displayed in the Current cell of Subscription (%) is truncated after the decimal point of the calculated value. Therefore, the actual percentage of DP-VOL capacity assigned to the pool may be larger than the value displayed in the window. If you create a new DP-VOL of the same size as an existing DP-VOL, more subscription capacity than the value displayed in the Current cell is necessary.


For example, if a 3 GB V-VOL is related to an 11.89 GB pool, the subscription (%) is calculated as follows:

((ceil(3,072 (MB) / 3,145,548 (MB)) × 4 (pages) × 42 (MB) + 3,072 (MB)) / 12,175.36 (MB)) × 100 = 26.61... (%)

In this case, 26% is displayed in the Current cell of Subscription (%). If you create a new V-VOL of the same size as the existing V-VOL, 27% or more remaining subscription capacity is necessary.
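The same arithmetic reproduces this example (a sketch; 12,175.36 MB is the 11.89 GB pool expressed in MB):

    import math

    def subscription_pct(vol_mb, pool_mb):
        control_mb = math.ceil(vol_mb / 3_145_548) * 4 * 42
        return (vol_mb + control_mb) / pool_mb * 100

    pct = subscription_pct(3072, 12175.36)
    print(round(pct, 2), math.floor(pct))  # 26.61 26 (the window displays 26%)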

Working with pools

About pools

Dynamic Provisioning requires the use of pools. A pool consists of more than one pool-VOL. Pages in the pool are assigned to store user data and control information. Four pages on a DP-VOL are required for the control information.

A storage system supports up to 128 pools, each of which can contain up to 1,024 pool-VOLs and 63,232 DP-VOLs. The 128-pool maximum per storage system applies to the total number of Dynamic Provisioning pools and Dynamic Tiering pools. A pool for Dynamic Provisioning or Dynamic Tiering cannot be used in conjunction with other pools.

A pool number must be assigned to a pool. Multiple DP-VOLs can be related to one pool.

The total pool capacity combines the capacity of all the registered Dynamic Provisioning pool-VOLs assigned to the pool. Pool capacity is calculated using the following formulas:
• capacity of the pool (MB) = total number of pages × 42 - 4,200
  (4,200 in the formula is the management area size of the pool-VOL with System Area.)
• total number of pages = Σ(floor(floor(pool-VOL number of blocks ÷ 512) ÷ 168)) for each pool-VOL
where floor( ) means to truncate the part of the formula within the parentheses after the decimal point.
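These formulas translate directly into a short sketch (block counts are 512-byte blocks; the function name is an illustrative assumption):

    import math

    def pool_capacity_mb(pool_vol_block_counts):
        """Pool capacity (MB) from the block count of each pool-VOL."""
        total_pages = sum(
            math.floor(math.floor(blocks / 512) / 168)
            for blocks in pool_vol_block_counts
        )
        return total_pages * 42 - 4200  # minus the System Area management area

    # A single pool-VOL of 209,715,200 blocks (100 GiB):
    print(pool_capacity_mb([209_715_200]))  # 98196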

About pool-VOLs

Pool-VOLs are grouped together to create a pool. When a new pool is created, the available pool-VOLs are selected in the Select Pool VOLs window and added to the Selected Pool Volumes table. Every pool must have a pool-VOL with System Area.


When adding a volume to a pool for which Multi-Tier Pool is enabled, note the following:
• Up to three different drive types/RPM are allowed among all the pool-VOLs to be added.
• Volumes to be added to the same pool must have the same RAID level across all pool-VOLs of the same drive type/RPM. For example, you cannot add a volume whose drive type/RPM is SAS/15k and whose RAID level is 5 (3D+1P) when a volume whose drive type/RPM is also SAS/15k but whose RAID level is 5 (7D+1P) is already in the pool.
• Up to three values are allowed for Drive Type/RPM for the volume.

If you increase the pool capacity by adding a pool-VOL, a portion of the existing data in the pool automatically migrates from an older pool-VOL to the newly added pool-VOL, balancing the usage levels of all the pool-VOLs. If you do not want to automate balancing of the usage levels of pool-VOLs, call customer support for assistance.

Dynamic Provisioning does not automatically balance the usage levels among pool-VOLs if the cache memory is not redundant or if the pool usage reaches the threshold.

The pool-VOLs contained in a pool can be added or deleted. Removing a pool-VOL does not delete the pool or any related DP-VOLs. You must delete all DP-VOLs related to the pool before the pool can be deleted. When the pool is deleted, all data in the pool is also deleted.

Pool status

The following table describes the pool status that appears in Device Manager - Storage Navigator. The status indicates that a SIM code might have been issued that needs to be resolved. See SIM reference codes on page 268 for SIM code details.

The DP-VOL status remains normal even though the pool status might be something other than normal.

Status               Explanation                                                                         SIM code (Note 1)

Normal               Normal status.                                                                      None

Warning              A pool-VOL in the pool is blocked.                                                  If the pool-VOL is blocked, SIM code 627XXX is reported.

Exceeded Threshold   The pool usage level might exceed a pool threshold.                                 620XXX or 626XXX

Shrinking            The pool is being shrunk and the pool-VOLs are being deleted.                       None

Blocked              The pool is full or an error occurred in the pool; therefore, the pool is blocked.  622XXX or 623XXX

Note:
1. XXX in the SIM code indicates the hexadecimal pool number.

Creating pools

When you create a pool, you select the pool volumes (pool-VOLs) for the pool (manually or automatically) and set options such as the subscription limit and the warning and depletion thresholds for the pool. You can also enable options such as V-VOL protection and data deduplication.

The following procedures describe how to create pools for Dynamic Provisioning and Dynamic Tiering.
• Creating Dynamic Provisioning pools by selecting pool-VOLs manually on page 206
• Creating Dynamic Provisioning pools by selecting pool-VOLs automatically on page 209
• Creating Dynamic Tiering pools by selecting pool-VOLs manually on page 212
• Creating a Dynamic Tiering pool by automatically selecting pool-VOLs on page 216

Prerequisites for creating pools
• Before you can create pools, the proper amount of shared memory must be installed, and you must have a V-VOL management area in shared memory. When shared memory is added, the V-VOL management area is automatically created. To add shared memory, contact your service representative.
• One pool-VOL with System Area is defined for a pool. The priority of the pool-VOL with System Area is assigned according to the drive type. The management area capacity, which stores the management information of software that uses the pool, is deducted from the available capacity of the pool-VOL with System Area. If Dynamic Provisioning, Dynamic Tiering, or Thin Image is used on an open system, 4.2 GB is used as the management area in the pool-VOL with System Area.
• When a pool is created, a pool-VOL with System Area is assigned the priority shown in the following table. If multiple pool-VOLs of the same drive type exist, the priority of each is determined by the internal index of the storage system.

Priority    Drive type
1           SAS 7.2K
2           SAS 10K
3           SAS 15K
4           SSD
5           External volume


Creating Dynamic Provisioning pools by selecting pool-VOLs manually

You can use Storage Navigator to create a Dynamic Provisioning pool with manually selected pool-VOLs.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Create Pools window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click DP Pools and then select System GUI.
   c. In the Pools window, click Create Pools.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Pools.
   c. Click Create Pools.

2. From the Pool Type list, select Dynamic Provisioning.
3. From the System Type list, select Open.
4. From the Multi-Tier Pool field, select Disable.
   You cannot select Enable if the storage system has only external volumes with the Cache Mode set to Disable.

5. From the Pool Volume Selection field, select Manual.
6. Follow the steps below to select pool-VOLs:
   a. From the Drive Type/RPM list, select a data drive type and RPM.
   b. From the RAID Level list, select the RAID level.
      If you select External Storage from the Drive Type/RPM list, a hyphen (-) appears and you cannot select the RAID level.
   c. Click Select Pool VOLs.
      The Select Pool VOLs window appears.
   d. In the Available Pool Volumes table, select the pool-VOL row to be associated with a pool, and then click Add.
      You can select a value other than Middle from External LDEV Tier Rank and click Add to set another tier rank for an external volume.
      The selected pool-VOL is registered in the Selected Pool Volumes table. Up to 1,024 volumes can be added to a pool.
      If LDEVs in a parity group with accelerated compression enabled are used as pool volumes, these LDEVs can be assigned to only one pool. LDEVs in one parity group with accelerated compression enabled cannot be assigned to multiple pools as pool volumes. We recommend that the two types of LDEVs (those in accelerated compression-enabled parity groups and those in accelerated compression-disabled parity groups) do not coexist in a single pool.

Caution: For details about adding LDEVs to parity groups with accelerated compression enabled, see Guidelines for pools when accelerated compression is enabled on page 421.

Tip: Perform the following steps if necessary:
• Click Filter to open the menu, specify the filtering conditions, and click Apply.
• Click Select All Pages to select all pool-VOLs in the table. To cancel the selection, click Select All Pages again.
• Click Options to specify the volumes or the number of rows to be displayed.

   e. Click OK.
      The information in the Selected Pool Volumes table is applied to Total Selected Pool Volumes and Total Selected Capacity.

7. In Deduplication, select Enable or Disable.

Note: You cannot select Enable for Deduplication in the following cases:
• Enable is selected for Data Direct Mapping.
• The dedupe and compression license is not installed.
• Enable is selected for Multi-Tier Pool.
• Pool volumes are not selected.
• The number of available LDEV IDs is not enough.
• The number of available cache management devices is not enough.
• Mainframe is selected for System Type.

8. If you want to change the deduplication system data volume options, click Change Deduplication System Data Volume Options:
   a. To change LDEV Name, specify the prefix characters and the initial number for this LDEV.
   b. To change Initial LDEV ID, specify the number of the LDKC, CU, LDEV, and Interval. To confirm used LDEV IDs, click View LDEV IDs to view them in the View LDEV IDs window.
   c. To change Initial SSID, specify the 4-digit SSID as a hexadecimal number (0004 to FFFE). To confirm used SSIDs, click View SSIDs to view them in the View SSIDs window.


   d. Click OK to save your changes and return to the Create Pools window.

9. Enter the name in the Pool Name text box.
10. Click Options.
11. In the Initial Pool ID text box, type the number of the initial pool ID, from 0 to 127.
    When you specify a pool ID that was previously registered, the smallest available ID is displayed by default instead of the ID you specified. If no pool ID is available, no number is displayed.

12. In the Subscription Limit text box, enter an integer value from 0 to 65534 as the subscription rate (%) for the pool.
    If no number is entered, the subscription rate is set to unlimited.

Caution: If you are using a pool comprised of pool volumes assigned from accelerated compression-enabled parity groups, you can create a DP-VOL with a capacity larger than the pool capacity, and writing is assured, even if the subscription limit is set to 100% or less. In this case, the free area of the pool must be monitored.

If you do not want to monitor the free area of the pool, specify a pool subscription limit lower than the value calculated by the following formula:

100% × (Pool physical capacity / Pool capacity)
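For example (illustrative figures): if 80 TB of physical capacity backs a 100 TB pool, the formula gives 100% × (80 / 100) = 80%, so specify a subscription limit lower than 80%.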

13. In the Warning Threshold text box, enter an integer value from 1 to 100 as the rate (%) for the pool. The default value is 70%.
14. In the Depletion Threshold text box, enter an integer value from 1 to 100 as the rate (%) for the pool. The default value is 80%.
    Enter a value that is equal to or greater than the value of the Warning Threshold.

15. In Protect V-VOLs when I/O fails to Blocked Pool VOL, select Yes or No. If Yes is selected, when the pool-VOL is blocked, the DP-VOL is protected from read and write requests. At the same time, the access attribute of the DP-VOL is changed to the Protect attribute.
16. In Protect V-VOLs when I/O fails to Full Pool, select Yes or No. If Yes is selected, when the pool usage reaches the full size, the DP-VOL is protected from read and write requests. At the same time, the access attribute of the DP-VOL is changed to the Protect attribute.

17. Click Add.
    The created pool is added to the Selected Pools table. If invalid values are set, an error message appears.
    The Pool Type, Pool Volume Selection, and Pool Name must be set. If the required items are not entered or selected, you cannot click Add.


    If you select a row and click Detail, the Pool Properties window appears. If you select a row and click Remove, a message appears asking whether you want to remove the selected row or rows. If you want to remove the row, click OK.

18. Click Next.
    The Create LDEVs window appears.
    If Subscription Limit of the created pool is set to 0%, the Create LDEVs window does not appear.

19. Click Finish.
20. Check the settings in the Confirmation window, and then enter the task name in Task Name.
    If you select the pool radio button and click Details, the Pool Properties window appears.

21. Click Apply.
    The tasks are registered. If the Go to tasks window for status check box is selected, the Tasks window appears.

Creating Dynamic Provisioning pools by selecting pool-VOLs automatically

Use the following procedure to create a Dynamic Provisioning pool with automatically selected pool-VOLs.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Create Pools window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click DP Pools and then select System GUI.
   c. In the Pools window, click Create Pools.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Pools.
   c. Click Create Pools.

2. In the Create Pools window, select Dynamic Provisioning from the Pool Type list.
3. From the System Type list, select Open.
4. From the Multi-Tier Pool field, select Disable.
5. From the Pool Volume Selection field, select Auto.


Note: Select Manual if creating a pool that contains LDEVs in a parity group with accelerated compression enabled.

6. Select pool-VOLs as follows:
   a. From the Resource Group list, select the resource group name of the pool-VOL.
   b. From the Performance list, select the performance of the pool.
   c. In the Total Capacity list, specify the capacity of the pool.
      Values are displayed in Total Pool Volumes and Total Capacity. These values are greater than the specified value of the pool capacity. If you want to change the pool configuration, perform steps d, e, and f.

   d. Click Change Pool Configuration.
      The Change Pool Configuration Pattern window appears. You can change the pool configuration that is automatically selected.
   e. From the Pool Configuration Patterns table, select the pool configuration row. Then click Select.

Note:
• You can select the pool configuration on a parity group basis.
• The priority of the pool configuration is determined by these conditions:
  Priority 1: There is no free space in the parity group and one LDEV exists in the group.
  Priority 2: There is no free space in the parity group and multiple LDEVs exist in the group.
  Priority 3: There is free space in the parity group and multiple LDEVs exist in the group.
• The following items are not displayed in the Pool Configuration Patterns table:
  Parity groups with LDEVs that cannot be used as pool-VOLs.
  Pool configuration patterns that contain more than 1,024 LDEVs.

   f. Click OK.
      The information in the Pool Configuration Patterns table is applied to Total Pool Volumes and Total Capacity.

7. In Deduplication, select Enable or Disable.

Note: You cannot select Enable for Deduplication in the following cases:
• Enable is selected for Data Direct Mapping.
• The dedupe and compression license is not installed.
• Enable is selected for Multi-Tier Pool.
• Pool volumes are not selected.
• The number of available LDEV IDs is not enough.
• The number of available cache management devices is not enough.
• Mainframe is selected for System Type.

8. If you want to change the deduplication system data volume options, click Change Deduplication System Data Volume Options:
   a. To change LDEV Name, specify the prefix characters and the initial number for this LDEV.
   b. To change Initial LDEV ID, specify the number of the LDKC, CU, LDEV, and Interval. To confirm used LDEV IDs, click View LDEV IDs to view them in the View LDEV IDs window.
   c. To change Initial SSID, specify the 4-digit SSID as a hexadecimal number (0004 to FFFE). To confirm used SSIDs, click View SSIDs to view them in the View SSIDs window.

   d. Click OK to save your changes and return to the Create Pools window.

9. Enter the name in the Pool Name text box.
10. Click Options.
11. In the Initial Pool ID text box, type the number of the initial pool ID, from 0 to 127.
    When you specify a pool ID that was previously registered, the smallest available ID is displayed by default instead of the ID you specified. If no pool ID is available, no number is displayed.

12. In the Subscription Limit text box, enter an integer value from 0 to 65534 as the subscription rate (%) for the pool.
    If no number is entered, the subscription rate is set to unlimited.

13. In the Warning Threshold text box, enter an integer value from 1 to 100 as the rate (%) for the pool. The default value is 70%.
14. In the Depletion Threshold text box, enter an integer value from 1 to 100 as the rate (%) for the pool. The default value is 80%.
    Enter a value that is equal to or greater than the value of the Warning Threshold.

15. In Protect V-VOLs when I/O fails to Blocked Pool VOL, select Yes or No. If Yes is selected, when the pool-VOL is blocked, the DP-VOL is protected from read and write requests. At the same time, the access attribute of the DP-VOL is changed to the Protect attribute.
16. In Protect V-VOLs when I/O fails to Full Pool, select Yes or No. If Yes is selected, when the pool usage reaches the full size, the DP-VOL is protected from read and write requests. At the same time, the access attribute of the DP-VOL is changed to the Protect attribute.

17. Click Add.
    The created pool is added to the Selected Pools table. If invalid values are set, an error message appears.


    The Pool Type, Pool Volume Selection, and Pool Name must be set. If the required items are not entered or selected, you cannot click Add.
    If you select a row and click Detail, the Pool Properties window appears. If you select a row and click Remove, a message appears asking whether you want to remove the selected row or rows. If you want to remove the row, click OK.

18. Click Next.
    The Create LDEVs window appears.
    If Subscription Limit of the created pool is set to 0%, the Create LDEVs window does not appear.
    Click Finish, and the Confirmation window appears.
19. Check the settings in the Confirmation window, and then enter the task name in Task Name.
    If you select the pool radio button and click Details, the Pool Properties window appears.

20. Click Apply.
    The tasks are registered. If the Go to tasks window for status check box is selected, the Tasks window appears.

Creating Dynamic Tiering pools by selecting pool-VOLs manually

Use this procedure to create pools by selecting pool-VOLs manually. These pools can be used by Dynamic Tiering and by active flash.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• If you are creating a pool for active flash, LDEVs whose drive type is SSD must be created in advance.

Procedure

1. Open the Create Pools window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click DP Pools and then select System GUI.
   c. In the Pools window, click Create Pools.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Pools.
   c. Click Create Pools.


2. In the Create Pools window, select Dynamic Provisioning from the Pool Type list.
3. From the System Type list, select Open.
4. From the Multi-Tier Pool field, select Enable.
5. If the pool is to be used by active flash, select the check box for Active Flash.

Note: To use active flash, pool volumes whose drive type is SSD must be installed in advance. If there are no pool volumes whose drive type is SSD, this check box cannot be selected.

6. From the Pool Volume Selection field, select Manual.
7. Follow the steps below to select pool-VOLs:
   a. In the Drive Type/RPM list, make sure Mixable is selected.
   b. In the RAID Level list, make sure Mixable is selected.
   c. Click Select Pool VOLs.
      The Select Pool VOLs window appears.
   d. In the Available Pool Volumes table, select the pool-VOL row to be associated with a pool, and then click Add.
      The selected pool-VOL is registered in the Selected Pool Volumes table. Up to 1,024 volumes can be added to a pool.
      You can add volumes with the same Drive Type/RPM and different RAID levels. For example, you can add a volume that has an SAS/15K Drive Type/RPM and a 5(3D+1P) RAID level to the same pool with a volume that has an SAS/15K Drive Type/RPM and a 5(7D+1P) RAID level.
      If LDEVs in a parity group with accelerated compression enabled are used as pool-VOLs, these LDEVs can be assigned to only one pool. LDEVs in one parity group with accelerated compression enabled cannot be assigned to multiple pools as pool-VOLs. We recommend that the two types of LDEVs (those in accelerated compression-enabled parity groups and those in accelerated compression-disabled parity groups) do not coexist in one pool.

Note: You can select a value other than Middle from External LDEV Tier Rank and click Add to set another tier rank for an external volume.

Tip: Perform the following steps if necessary:
• Click Filter to open the menu, specify the filtering conditions, and then click Apply.
• Click Select All Pages to select all pool-VOLs in the table. To cancel the selection, click Select All Pages again.
• Click Options to specify the volumes or the number of rows to be displayed.

   e. Click OK.
      The information in the Selected Pool Volumes table is applied to Total Selected Pool Volumes and Total Selected Capacity.

8. Enter the name in the Pool Name text box.
9. Click Options.
10. In the Initial Pool ID text box, type the number of the initial pool ID, from 0 to 127.
    When you specify a pool ID that was previously registered, the smallest available ID is displayed by default instead of the ID you specified. If no pool ID is available, no number is displayed.

11. In the Subscription Limit text box, enter an integer value from 0 to 65534 as the subscription rate (%) for the pool.
    If no number is entered, the subscription rate is set to unlimited.

12. In the Warning Threshold text box, enter an integer value from 1 to 100 as the rate (%) for the pool. The default value is 70%.
13. In the Depletion Threshold text box, enter an integer value from 1 to 100 as the rate (%) for the pool. The default value is 80%.
    Enter a value that is equal to or greater than the value of the Warning Threshold.

14. In Protect V-VOLs when I/O fails to Blocked Pool VOL, select Yes or No. If Yes is selected, when the pool-VOL is blocked, the DP-VOL is protected from read and write requests. At the same time, the access attribute of the DP-VOL is changed to the Protect attribute.
15. In Protect V-VOLs when I/O fails to Full Pool, select Yes or No. If Yes is selected, when the pool usage reaches the full size, the DP-VOL is protected from read and write requests. At the same time, the access attribute of the DP-VOL is changed to the Protect attribute.

16. Configure Dynamic Tiering with the following steps:
    a. From the Tier Management option, select Auto or Manual.
       The selection is usually Auto, which allows performance monitoring and tier relocation to be performed automatically. If you select Manual, use the Command Control Interface or Hitachi Device Manager - Storage Navigator to manually perform performance monitoring and tier relocation.
    b. From the Cycle Time option, select the cycle for performance monitoring and tier relocation.

       When you select 24 Hours (default value):

       Performance monitoring and tier relocation is performed once a day.


       In the Monitoring Period field, set the time to start and end performance monitoring. The default value is 00:00 to 23:59.

       Set one or more hours between the starting and ending times. If you specify a starting time that is later than the ending time, the performance monitoring continues until the ending time on the next day (see the sketch after this description).

       You can view the information gathered by performance monitoring with Hitachi Device Manager - Storage Navigator or the Command Control Interface.

       When you select 0.5 Hours, 1 Hour, 2 Hours, 4 Hours, or 8 Hours:

       Performance monitoring is performed at the selected interval, starting at 00:00.

       You cannot set a specific time to start performance monitoring.
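The wrap-around rule can be illustrated with a small sketch (a hypothetical helper; times are minutes since midnight):

    def in_monitoring_period(now, start, end):
        """True if 'now' falls within the monitoring period."""
        if start <= end:
            return start <= now <= end
        return now >= start or now <= end  # period continues into the next day

    # A period of 22:00 to 02:00 spans midnight: 23:30 is inside, 12:00 is not.
    print(in_monitoring_period(23 * 60 + 30, 22 * 60, 2 * 60))  # True
    print(in_monitoring_period(12 * 60, 22 * 60, 2 * 60))       # False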

Caution: When Auto is set, all the V-VOL pages may not be completely migrated in one cycle. In the next cycle, migration starts by updating information for the last processed V-VOL. At that point, the collection of performance monitoring information is switched to the current cycle.

17. From the Monitoring Mode option, select Period Mode or Continuous Mode.
    If you perform tier relocation in a specified cycle, Period Mode is selected by default. If you want tier relocation weighted toward the monitoring results of past periods, select Continuous Mode.

18. From the Relocation Speed option, select the page relocation speed to use when performing relocation.
    You can set the speed to 1(Slowest), 2(Slower), 3(Standard), 4(Faster), or 5(Fastest). The default is 3(Standard). If you want to perform tier relocation at high speed, use the 5(Fastest) setting. If the specified speed is slower than 3(Standard), the load on the data drives is low when tier relocation is performed.

19. In the Buffer Space for New page assignment text box, enter an integer value from 0 to 50 as the percentage (%) for each tier.
    The default value depends on the data drive type of the pool-VOL in each tier: 0% for SSD and 8% for drive types other than SSD.

20. In the Buffer Space for Tier relocation text box, enter an integer value from 2 to 40 as the percentage (%) to set for each tier.
    The default value is 2%.

21. Click Add.
    The created pool is added to the Selected Pools table. If invalid values are set, an error message appears.
    The Pool Type, Multi-Tier Pool, Pool Volume Selection, and Pool Name must be set. If the required items are not registered, you cannot click Add.
    If you select a row and click Detail, the Pool Properties window appears. If you select a row and click Remove, a message appears asking whether you want to remove the selected row or rows. If you want to remove the row, click OK.

22. Click Next.
    The Create LDEVs window appears. For instructions, see Creating an LDEV on page 93.
    If the Subscription Limit of the created pool is set to 0%, the Create LDEVs window does not appear.
    Click Finish. The Confirmation window appears.
23. Check the settings in the Confirmation window, and then enter the task name in Task Name.
    If you select the pool radio button and click Details, the Pool Properties window appears.

24. Click Apply.
    The tasks are registered. If Go to tasks window for status is selected, the Tasks window opens automatically.

Creating a Dynamic Tiering pool by automatically selecting pool-VOLs

Use this procedure to create pools by selecting pool-VOLs automatically. These pools can be used by Dynamic Tiering and by active flash.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• If you are creating a pool for active flash, LDEVs whose drive type is SSD must be created in advance.

Procedure

1. Open the Create Pools window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click DP Pools and then select System GUI.
   c. In the Pools window, click Create Pools.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Pools.
   c. Click Create Pools.


2. From the Pool Type list, select Dynamic Provisioning.
3. From the System Type list, select Open.
4. From the Multi-Tier Pool field, select Enable.
5. If the pool is to be used by active flash, select the check box for Active Flash.

Note: To use active flash, pool volumes whose drive type is SSD must be installed in advance. If there are no pool volumes whose drive type is SSD, this check box cannot be selected.

6. From the Pool Volume Selection field, select Auto.

Note: Select Manual if creating a pool that contains LDEVs in a parity group with accelerated compression enabled.

7. Follow the steps below to select pool-VOLs:
   a. From the Resource Group list, select the resource group name of the pool.
   b. From the Performance list, select the performance of the pool.
   c. In the Total Capacity list, specify the capacity of the pool.
      Values are displayed in Total Pool Volumes and Total Capacity. These values are greater than the specified value of the pool capacity. If you want to change the pool configuration, perform steps d, e, and f.

   d. Click Change Pool Configuration.
      The Change Pool Configuration Pattern window appears. You can change the pool configuration that is automatically selected.
   e. From the Pool Configuration Patterns table, select the pool configuration row. Then click Select.

Note:
• You can select the pool configuration on a parity group basis.
• The priority of the pool configuration is determined by these conditions:
  Priority 1: There is no free space in the parity group and one LDEV exists in the group.
  Priority 2: There is no free space in the parity group and multiple LDEVs exist in the group.
  Priority 3: There is free space in the parity group and multiple LDEVs exist in the group.
• If the check box for Active Flash is selected, only the pool configurations that contain LDEVs created on SSDs are displayed.
• The following items are not displayed in the Pool Configuration Patterns table:
  Parity groups with LDEVs that cannot be used as pool-VOLs.
  Pool configuration patterns that contain more than 1,024 LDEVs.

   f. Click OK.
      The information in the Pool Configuration Patterns table is applied to Total Pool Volumes and Total Capacity.

8. Enter the name in the Pool Name text box.
9. Click Options.
10. In the Initial Pool ID text box, type the number of the initial pool ID, from 0 to 127.
    When you specify a pool ID that was previously registered, the smallest available ID is displayed by default instead of the ID you specified. If no pool ID is available, no number is displayed.

11. In the Subscription Limit text box, enter an integer value from 0 to 65534 as the subscription rate (%) for the pool.
    If no number is entered, the subscription rate is set to unlimited.

12. In the Warning Threshold text box, enter an integer value from 1 to 100 as the rate (%) for the pool. The default value is 70%.
13. In the Depletion Threshold text box, enter an integer value from 1 to 100 as the rate (%) for the pool. The default value is 80%.
    Enter a value that is equal to or greater than the value of the Warning Threshold.

14. In Protect V-VOLs when I/O fails to Blocked Pool VOL, select Yes or No. If Yes is selected, when the pool-VOL is blocked, the DP-VOL is protected from read and write requests. At the same time, the access attribute of the DP-VOL is changed to the Protect attribute.
15. In Protect V-VOLs when I/O fails to Full Pool, select Yes or No. If Yes is selected, when the pool usage reaches the full size, the DP-VOL is protected from read and write requests. At the same time, the access attribute of the DP-VOL is changed to the Protect attribute.

16. Configure Dynamic Tiering with the following steps:
    a. From the Tier Management option, select Auto or Manual.
       The selection is usually Auto, which allows performance monitoring and tier relocation to be performed automatically. If you select Manual, use the Command Control Interface or Hitachi Device Manager - Storage Navigator to manually perform performance monitoring and tier relocation.
    b. From the Cycle Time option, select the cycle for performance monitoring and tier relocation.

       When you select 24 Hours (default value):

       Performance monitoring and tier relocation is performed once a day.


       In the Monitoring Period field, set the time to start and end performance monitoring. The default value is 00:00 to 23:59.

       Set one or more hours between the starting and ending times. If you specify a starting time that is later than the ending time, the performance monitoring continues until the ending time on the next day.

       You can view the information gathered by performance monitoring with Hitachi Device Manager - Storage Navigator or the Command Control Interface.

       When you select 0.5 Hours, 1 Hour, 2 Hours, 4 Hours, or 8 Hours:

       Performance monitoring is performed at the selected interval, starting at 00:00.

       You cannot set a specific time to start performance monitoring.

Caution: When Auto is set, all the V-VOL pages may not be completely migrated in one cycle. In the next cycle, migration starts by updating information for the last processed V-VOL. At that point, the collection of performance monitoring information is switched to the current cycle.

17. From the Monitoring Mode option, select Period Mode or Continuous Mode.
    If you perform tier relocation in a specified cycle, or if you do not need to specify the Monitoring Mode option, select Period Mode. If you want tier relocation weighted toward the monitoring results of past periods, select Continuous Mode.

18. From the Relocation Speed option, select the page relocation speed to use when performing relocation.
    You can set the speed to 1(Slowest), 2(Slower), 3(Standard), 4(Faster), or 5(Fastest). The default is 3(Standard). If you want to perform tier relocation at high speed, use the 5(Fastest) setting. If the specified speed is slower than 3(Standard), the load on the data drives is low when tier relocation is performed.

19. In the Buffer Space for New page assignment text box, enter an integer value from 0 to 50 as the percentage (%) for each tier.
    The default value depends on the data drive type of the pool-VOL in each tier: 0% for SSD and 8% for drive types other than SSD.

20. In the Buffer Space for Tier relocation text box, enter an integer value from 2 to 40 as the percentage (%) to set for each tier.
    The default value is 2%.

21. Click Add.


    The created pool is added to the Selected Pools table. If invalid values are set, an error message appears.
    The Pool Type, Multi-Tier Pool, Pool Volume Selection, and Pool Name must be set. If the required items are not registered, you cannot click Add.
    If you select a row and click Detail, the Pool Properties window appears. If you select a row and click Remove, a message appears asking whether you want to remove the selected row or rows. If you want to remove the row, click OK.

22. Click Next.
    The Create LDEVs window appears. For instructions on creating LDEVs, see Creating DP-VOLs on page 227.
    If Subscription Limit of the created pool is set to 0%, the Create LDEVs window does not appear.
    Click Finish.
23. Check the settings in the Confirmation window, and then enter the task name in Task Name.
    If you select the pool radio button and click Details, the Pool Properties window appears.

24. Click Apply.
    The tasks are registered. If the Go to tasks window for status check box is selected, the Tasks window appears.

Enabling deduplication on an existing pool

Use the following procedure to enable the data deduplication function on an existing pool. When you enable deduplication on a pool, the deduplication system data volume (DSD volume) for the pool is created. This task must be performed before you can enable deduplication on DP-VOLs assigned to the pool.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• The dedupe and compression license must be installed.
• There must be enough available cache management devices to create the deduplication system data volume. Each deduplication system data volume uses 14 cache management devices.
• The status of the pool must be Normal.
• Data Direct Mapping must be disabled.
• Multi-tier pool must be disabled.
• There must be enough available LDEV IDs.


Procedure

1. Open the Pools window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click DP Pools, and then select System GUI.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Pools.

2. In the Pools table, select the pool for which you want to change the deduplication setting.

3. Click More Actions, and select Edit Pools.
4. In the Edit Pools window, click the Deduplication check box.
5. Select Enable.
6. If you want to edit the assigned deduplication system data volume, click Edit Deduplication System Data Volume.
   The Edit Deduplication System Data Volume window opens.

7. If you want to change the deduplication system data volume options, click Change Deduplication System Data Volume Options. In the Change Deduplication System Data Volume Options window:
   a. If you want to change the LDEV name or LDEV ID of a deduplication system data volume, select the volume (check the box for the row), and then click Change Deduplication System Data Volume Options.
      To change the LDEV Name, specify the prefix characters and the initial number for the selected LDEV, and click OK.
      To change the Initial LDEV ID, specify the number of the LDKC, CU, LDEV, and Interval, and click OK. To confirm used LDEV IDs, click View LDEV IDs to view them in the View LDEV IDs window.
   b. If you want to change the SSID of a deduplication system data volume, select the volume (check the box for the row), and then click Edit SSID. The Edit SSIDs window opens and displays the defined SSIDs and the SSIDs to be added.
      To change an SSID, select the SSID (check the box for the row), click Change SSIDs, specify the initial SSID, and click OK.
   c. When you are done changing the deduplication system data volume options, click OK on the Change Deduplication System Data Volume Options window.

8. Click Finish on the Edit Pools window. The Confirm window opens.

9. In the Task Name text box, type a unique name for the task or accept the default. You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.
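As a quick illustration of the task-name rule above, the following Python sketch validates a candidate name and builds a default in the "date-window name" style. The helper functions are hypothetical and not part of any Hitachi tool.

```python
import datetime

FORBIDDEN = set('\\/:,;*?"<>|')  # symbols the Task Name field rejects

def valid_task_name(name: str) -> bool:
    """True if the name is 1-32 ASCII characters with no forbidden symbols."""
    return (0 < len(name) <= 32
            and name.isascii()
            and not FORBIDDEN.intersection(name))

def default_task_name(window_name: str) -> str:
    """Build a default name in the 'date-window name' style used by the GUI."""
    date = datetime.date.today().strftime("%y%m%d")
    return f"{date}-{window_name}"[:32]  # stay within the 32-character limit

print(valid_task_name("enable-dedupe-pool07"))  # True
print(valid_task_name("bad:name"))              # False (':' is forbidden)
print(default_task_name("Edit Pools"))          # e.g. '161024-Edit Pools'
```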


10. Click Apply. If Go to tasks window for status is selected, the Tasks window opens automatically.

Configuring a Dynamic Tiering pool for use by active flash

Use this procedure to enable the active flash feature on a Dynamic Tiering pool that includes SSD drives.

You can change a Dynamic Tiering pool to use active flash. However, you cannot disable the Dynamic Tiering setting for the pool in the following cases:
• Tier relocation is being executed manually.
• Pool-VOLs are being deleted.
• Zero pages are being reclaimed.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• Pool volumes whose drive type is SSD are installed.

Procedure

1. Perform one of the following to display the Pools window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree. Right-click DP Pools of the target storage system, then select System GUI.
In Device Manager - Storage Navigator:
a. In the Storage Systems tree on the left pane of the main window, select Pools.
2. From the Pools table on the right, click the row of the pool you want to change.
3. Perform one of the following to display the Edit Pools window.
a. Click More Actions and select Edit Pools.
b. Click Actions > Pool > Edit Pools to open the window.
4. Select the Active Flash check box.

If there is no pool volume whose drive type is SSD, the check box cannot be selected.

5. Click Finish. The Confirm window appears.

6. In the Task Name text box, enter the task name. You can enter up to 32 ASCII characters and symbols in all, except for \ / : , ; * ? " < > |. The value "date-window name" is entered by default.


7. In the Confirm window, click Apply to register the setting in the task. If the Go to tasks window for status check box is selected, the Tasks window appears.

Deleting some capacity saving-enabled DP-VOLs in a pool

Use the following workflow to delete some (but not all) of the DP-VOLs for which the capacity saving setting is enabled (DRD volumes) in a pool.

If you want to delete all of the capacity saving-enabled DP-VOLs in a pool, see Deleting all capacity saving-enabled DP-VOLs in a pool on page 223 for instructions.
1. Block the capacity saving-enabled DP-VOLs that you want to delete using the Block LDEVs window. For instructions, see Blocking LDEVs on page 101.

2. Format the (blocked) capacity saving-enabled DP-VOLs using the Format LDEVs window. For instructions, see Formatting a specific LDEV on page 111.

Note: Formatting operations for capacity saving-enabled DP-VOLs might take a long time.

3. Delete the (formatted) capacity saving-enabled DP-VOLs using the Delete LDEVs window.
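The same block, format, and delete sequence can also be scripted against the CCI command line. The following Python sketch is illustrative only: it assumes a configured CCI/raidcom environment and an established session, and the option names shown may vary by microcode version, so verify them against the CCI reference for your system before use.

```python
# Illustrative sketch: drive the block -> format -> delete workflow for
# capacity saving-enabled DP-VOLs through CCI/raidcom. Verify the raidcom
# options against the CCI reference for your microcode before relying on this.
import subprocess

def raidcom(*args: str) -> None:
    """Run a raidcom command and raise if it fails."""
    subprocess.run(["raidcom", *args], check=True)

def delete_drd_volumes(ldev_ids: list[str]) -> None:
    """Block, format, then delete the given capacity saving-enabled DP-VOLs."""
    for ldev in ldev_ids:
        raidcom("modify", "ldev", "-ldev_id", ldev, "-status", "blk")
    for ldev in ldev_ids:
        # Formatting DRD volumes might take a long time (see the Note above).
        raidcom("initialize", "ldev", "-ldev_id", ldev, "-operation", "fmt")
    for ldev in ldev_ids:
        raidcom("delete", "ldev", "-ldev_id", ldev)

delete_drd_volumes(["0x1000", "0x1001"])
```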

Deleting all capacity saving-enabled DP-VOLs in a pool

Use the following workflow to delete all of the DP-VOLs for which the capacity saving setting is enabled (DRD volumes) in a pool.

If you want to delete some but not all of the capacity saving-enabled DP-VOLs in a pool, see Deleting some capacity saving-enabled DP-VOLs in a pool on page 223 for instructions.

Note: You cannot delete a deduplication system data volume (DSD volume). The deduplication system data volume for a pool is deleted automatically when you disable the Capacity Saving setting for the pool or delete the pool.

1. Block all of the following volumes that are allocated to the target pool using the Block LDEVs window. For instructions, see Blocking LDEVs on page 101.
• All of the DP-VOLs for which the capacity saving setting is enabled.
• The deduplication system data volume for the pool.

2. Format the (blocked) deduplication system data volume using the Format LDEVs window. Make sure to specify only the single deduplication system data volume for the target pool. For instructions, see Formatting a specific LDEV on page 111.


Caution: When you format the deduplication system data volume, the data in all capacity saving-enabled DP-VOLs that are allocated to the same pool is deleted.

3. Format all of the (blocked) capacity saving-enabled DP-VOLs in the pool using the Format LDEVs window. For instructions, see Formatting a specific LDEV on page 111.

Note: Formatting operations for capacity saving-enabled DP-VOLs might take a long time.

4. Delete all of the (formatted) capacity saving-enabled DP-VOLs in the pool.

Disabling deduplication on a pool

Use the following procedure to disable the deduplication function on a pool.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• DP-VOLs with Deduplication and Compression enabled must not be assigned to the target pool.
• For the target pool, the value of Deduplication or Saving Effect > Deduplication (%) must be 0%.

Procedure

1. Open the Pools window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Right-click DP Pools, and then select System GUI.
In Device Manager - Storage Navigator:
a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Pools.

2. In the Pools table, select the pool for which you want to change the deduplication setting.

3. Click More Actions, and select Edit Pools.
4. In the Edit Pools window, click the Deduplication check box.
5. Select Disable.
6. Click Finish.

The Confirm window appears.


7. In the Task Name text box, type a unique name for the task or accept the default. You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

8. Click Apply. If the Go to tasks window for status check box is selected, the Tasks window appears.

Deleting a pool

For a pool that owns DP-VOLs with the data direct mapping attribute disabled, if pool-VOLs are released after pool shrinking, the released pool-VOLs (LDEVs) are blocked.

If the pool-VOLs are blocked, they must be formatted before they can be reused. If the blocked pool-VOL is an external volume, select Normal Format when formatting the volume. You can delete a pool only when all of the DP-VOLs have been deleted.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• Prerequisites for deleting pools with deduplication enabled:
○ DP-VOLs with Deduplication and Compression enabled must not be assigned to the operation target pool.
○ For the operation target pool, the value of Deduplication or Saving Effect > Deduplication (%) must be 0%.

Procedure

1. Open the Pools window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Right-click DP Pools, and then select System GUI.
In Device Manager - Storage Navigator:
a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Pools.

2. From the Pools table, select the pool to be deleted.
3. Click More Actions, and then select Delete Pools. The Delete Pools window opens.
You cannot delete a pool if the pool usage is not 0%, or if DP-VOLs are assigned to the pool.

4. Click Finish. The Confirm window opens.


To continue with the shredding operation and delete volume data, click Next. For details about the shredding operation, see the Hitachi Volume Shredder User Guide. If the pool is blocked, you might not be able to perform shredding operations.

5. In the Task Name text box, type a unique name for the task or accept the default. You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

6. Click Apply. If the Go to tasks window for status check box is selected, the Tasks window appears.

Note: When the pool-VOLs of a pool are empty, the appropriate tier is deleted.

Working with DP-VOLs

About DP-VOLs

Dynamic Provisioning requires the use of DP-VOLs, which are virtual volumes with no physical memory space. In Dynamic Provisioning, multiple DP-VOLs can be created.

A DP-VOL is a volume in a thin provisioning storage system: a virtual volume created from a DP pool. Data in the DP pool is accessed through a DP-VOL. To hosts, a DP-VOL appears as a virtual LU.

On open systems, OPEN-V is the only supported emulation type for a DP-VOL. You can define multiple DP-VOLs and assign them to a Dynamic Provisioning pool.

Relationship between a pool and DP-VOLs

Before you can use Dynamic Provisioning, a DP-VOL and a pool are required. Dynamic Provisioning uses the pool volumes in a pool through the DP-VOLs.

The following figure shows the relationship between a pool and DP-VOLs.


Creating DP-VOLs

You can create a DP-VOL from any of the following tabs:
• The LDEVs tab, which appears when Logical Devices is selected.
• The Pools tab, which appears when Pools is selected.
• The Virtual Volumes tab, which appears when a pool in Pools is selected.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• If you are creating DP-VOLs for active flash, pool volumes whose drive type is SSD must be installed in advance.

Procedure

1. Click Create LDEVs. The Create LDEVs window appears.

2. From the Provisioning Type list, confirm that Dynamic Provisioning is selected. If not, select Dynamic Provisioning from the list.

3. In the System Type option, select a system type. To create open system volumes, select Open.
4. From the Emulation Type list, confirm that OPEN-V is selected.


5. To use the capacity saving function, in Capacity Saving, select Compression or Deduplication and Compression. If Deduplication is Not Available in the target pool, or if the LDEV status of the deduplication system data volume in the selected pool is other than Normal, Deduplication and Compression cannot be selected.

Capacity Saving is set to Disabled in the following cases:
• Data Direct Mapping is set to Enable.
• The dedupe and compression license is not installed.
• Multi-Tier Pool is set to Enable.

Caution: If you select Deduplication and Compression, you will not be able to change the setting to Compression later.

6. From the Multi-Tier Pool field, select Enable when you are creating the V-VOL for Dynamic Tiering, and select Disable when you are not. If no pool is set to Enable in Dynamic Tiering, Disable is fixed.

Note: You cannot specify the TSE Attribute option when selecting Open in the System Type option.

7. If the pool is to be used by active flash, select the Active Flash check box.

Note: To use active flash, pool volumes whose drive type is SSD must be installed in advance. If there are no pool volumes whose drive type is SSD, this check box cannot be selected.

8. Select the pool according to the following steps.
a. From the Drive Type/RPM list in Pool Selection, select the data drive type and RPM.
b. From the RAID level list, select the RAID level.
c. Click Select Pool. The Select Pool window appears.
d. In the Available Pools table, select a pool.

Note: You can specify a pool when creating DP-VOLs if the pool has one of the following statuses:
• Normal status
• Exceeded Threshold status
• In progress of pool capacity shrinking


You can select only one pool. When Enable is selected in step 6, the Dynamic Tiering pools appear, and when Disable is selected, only the non-Dynamic Tiering pools appear.

Perform the following if necessary:
• Click Filter to open the menu, specify the filtering, and then click Apply.
• Click Options to specify the units of pools or the number of rows to be displayed.

e. Click OK. The Select Pool window closes. The selected pool name appears in Selected Pool Name (ID), and the total capacity of the selected pool appears in Selected Pool Capacity.

9. If you want to offset the specified LDEV capacity by boundary, change the default Capacity Compatibility Mode (Offset boundary) from OFF to ON. If block is specified as the unit of the LDEV capacity, this option is disabled.

10. In the LDEV Capacity text box, enter the capacity of the DP-VOL to be created. You can enter a capacity within the range of figures displayed below the text box, with up to 2 digits after the decimal point. You can change the capacity unit from the list.

11. In the Number of LDEVs text box, enter the number of LDEVs to be created. You can enter a number within the range of figures displayed below the text box.

12. In the LDEV Name text box, enter the DP-VOL name.
In the Prefix text box, enter the alphanumeric characters that form the fixed beginning of the DP-VOL name. The characters are case-sensitive.

In the Initial Number text box, type the initial number that follows the prefix, which can be up to 9 digits. You can enter up to 32 characters in total, including the initial number. (A sketch of how names are generated from these values follows.)
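The following Python sketch shows one plausible reading of how the prefix and initial number combine into sequential DP-VOL names; the helper is hypothetical, not a Hitachi API.

```python
def ldev_names(prefix: str, initial_number: str, count: int) -> list[str]:
    """Generate sequential DP-VOL names from a prefix and an initial number.

    Assumes the GUI's rules: the initial number is 1-9 digits, and each
    whole name is at most 32 characters including the number.
    """
    if not (initial_number.isdigit() and 1 <= len(initial_number) <= 9):
        raise ValueError("the initial number must be 1 to 9 digits")
    width = len(initial_number)  # keep any leading-zero padding
    names = []
    for i in range(count):
        name = f"{prefix}{int(initial_number) + i:0{width}d}"
        if len(name) > 32:
            raise ValueError(f"{name!r} exceeds the 32-character limit")
        names.append(name)
    return names

print(ldev_names("ORA_DATA_", "001", 3))
# ['ORA_DATA_001', 'ORA_DATA_002', 'ORA_DATA_003']
```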

13. Click Option.
14. In the Initial LDEV ID field, make sure that the LDEV ID is set.

To confirm the used and unavailable numbers, click View LDEV IDs to display the View LDEV IDs window.

In the table, used LDEV numbers appear in blue, unavailable numbers appear in gray, and unused numbers appear in white. LDEV numbers that are unavailable may already be in use, or may be assigned to another emulation group (a group of 32 LDEV numbers).


15. In the Initial SSID text box, type a 4-digit hexadecimal SSID (0004 to FFFE). To confirm the created SSIDs, click View SSID to display the View SSID window.

16. From the Cache Partition list, select a CLPR.
17. From the MP Blade list, select the MP blade to be used by the LDEVs. To assign a specific MP blade, select its ID. To allow any MP blade to be assigned, click Auto.

18. From the Full Allocation field, select Enable or Disable. To reserve pages in the pool equal in size to the LDEV capacity, select Enable. If Compression or Deduplication and Compression is set for Capacity Saving, Full Allocation is set to Disable.

19. From the Tiering Policy field, select the tiering policy to be used by the LDEVs. To assign a specific tiering policy, select the desired level. All(0) is selected by default. You can change the level from Level1(1) to Level5(5) or from Level6(6) to Level31(31). You can specify this function when Multi-Tier Pool is enabled.

The names of tiering policies Level6(6) to Level31(31) can be changed. If these names have been changed, the new names appear.

20. From the New Page Assignment Tier list, select a new page assignment tier: High, Middle, or Low. You can specify this function when Multi-Tier Pool is enabled.

21. In the Relocation Priority option, select a priority: Default or Prioritize. To relocate the LDEV preferentially, set Prioritize. You can specify this function when Multi-Tier Pool is enabled.

22. In T10 PI, select Enable or Disable. The T10 PI attribute can be specified only when the emulation type is OPEN-V.

Caution: The T10 PI attribute can be defined only during the initial creation of LDEVs. The defined attribute cannot be removed from LDEVs on which it is already set.

23. If necessary, change the settings of the V-VOLs.
• Click Edit SSIDs to open the Edit SSIDs window.
• Click Change LDEV Settings to open the Change LDEV Settings window.

24. If necessary, delete a row from the Selected LDEVs table. Select the row to be deleted, then click Remove.

25. Click Add.


The created V-VOLs are added to the Selected LDEVs table on the right. If invalid values are set, an error message appears.

The Provisioning Type, System Type, Emulation Type, Pool Selection, Drive Type/RPM, RAID Level, LDEV Capacity, and Number of LDEVs fields must be set. If these required items are not registered, you cannot click Add.

26. Click Finish. The Confirm window appears.

To continue the operation to set LU paths and define LUNs, click Next.

27. In the Task Name text box, enter the task name. You can enter up to 32 ASCII characters and symbols in all, except for \ / : , ; * ? " < > |. "yymmdd-window name" is entered by default.

28. Click Apply. If the Go to tasks window for status check box is selected, the Tasks window appears.

Enabling and disabling the DP-VOL protection function options

Use this procedure to enable or disable the DP-VOL protection function options on an existing pool. The DP-VOL protection function options are:
• Protect V-VOLs when I/O fails to Full Pool: Enable this option to protect the DP-VOLs using the pool from read and write requests when the pool usage reaches the full size.
• Protect V-VOLs when I/O fails to Blocked Pool VOL: Enable this option to protect the DP-VOLs using the pool from read and write requests when a pool-VOL is blocked.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• The pool must meet all of the following conditions:
○ Data Retention Utility is installed.
○ The pool type is Dynamic Provisioning, Dynamic Tiering with Multi-Tier Pool enabled, or Active Flash.

Procedure

1. Open the Pools window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Right-click DP Pools, and then select System GUI.
In Device Manager - Storage Navigator:


a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Pools.
2. In the Pools table, select the pool for which you want to enable or disable the DP-VOL protection function options.
3. Click More Actions, and select Edit Pools.
4. In the Edit Pools window, select the desired options for Protect V-VOLs when I/O fails to Blocked Pool VOL and Protect V-VOLs when I/O fails to Full Pool.

5. Click Finish on the Edit Pools window. The Confirm window opens.

6. In the Task Name text box, type a unique name for the task or accept the default. You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

7. Click Apply. If Go to tasks window for status is selected, the Tasks window opens automatically.

Enabling capacity saving functions on DP-VOLs

Use the following procedure to enable the capacity saving function on DP-VOLs. A DP-VOL with capacity saving enabled is also referred to as a data reduction (DRD) volume.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• Prerequisites for enabling deduplication on DP-VOLs:
○ The dedupe and compression license is installed.
○ The status of the pool is other than Blocked.
○ The deduplication function is enabled on the pool.
○ There are enough available cache management devices.
○ The LDEV status is Normal.
○ The emulation type is OPEN-V.
○ Capacity Saving Status is other than Deleting Volume, Failed, or Rehydrating.
○ Data Direct Mapping is disabled.
○ Full Allocation is disabled.
○ The DP-VOLs are not used as Universal Replicator journal volumes.

Procedure

1. Open the Logical Devices window.
In Hitachi Command Suite:


a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Right-click Volumes, and then select System GUI.
In Device Manager - Storage Navigator:
a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Logical Devices.

2. In the LDEVs pane, select an LDEV ID, and click Edit LDEVs.
3. In the Edit LDEVs window, click Capacity Saving, and then select either Compression or Deduplication and Compression.

Caution: If you enable Deduplication and Compression on a DP-VOL, you will not be able to change the setting from Deduplication and Compression to Compression later.

Note:
• If Deduplication is Not Available in the pool, Deduplication and Compression cannot be selected.
• If the LDEV status of the deduplication system data volume in the pool is other than Normal, Deduplication and Compression cannot be selected.

4. Click Finish. The Confirm window appears.
5. In the Task Name text box, type a unique name for the task or accept the default. You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

6. Click Apply. If the Go to tasks window for status check box is selected, the Tasks window appears.

Disabling the capacity saving functions on DP-VOLs

Use the following procedure to disable the capacity saving functions on DP-VOLs.

When you disable the capacity saving setting, both the used capacity and the physical used capacity of the pool increase due to the process of expanding the data. Use the following formulas to calculate the used capacity and physical used capacity of the pool after the expanding process has finished (a worked example follows):


pool-used-capacity-size-after-expanding = used-pool-capacity-size + (used-DP-VOL-capacity × saving-ratio-of-the-pool-[%])

physical-pool-used-capacity-size-after-expanding = used-physical-pool-capacity-size* + (used-DP-VOL-capacity × saving-ratio-of-the-pool-[%])

* If accelerated compression is used, the physical used capacity of the pool must be confirmed.

This information is displayed as follows by Device Manager - Storage Navigator:
• used-pool-capacity-size: Displayed as Capacity - Used on the Pools window.
• used-physical-pool-capacity-size: Displayed as Physical Capacity - Used on the Pools window.
• saving-ratio-of-the-pool-[%]: Displayed as Pool Saving (Post Process Data) - Saving (%) on the Pools window.
• used-DP-VOL-capacity: Displayed as Capacity - Used on the Virtual Volumes tab of each pool window.
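The following Python sketch works through the formulas above with invented numbers, purely for illustration:

```python
# Worked example of the expansion formulas above (capacities in TB;
# all values are invented for illustration).
used_pool_capacity = 40.0           # Pools window: Capacity - Used
used_physical_pool_capacity = 30.0  # Pools window: Physical Capacity - Used
used_dp_vol_capacity = 20.0         # Virtual Volumes tab: Capacity - Used
pool_saving_ratio = 0.55            # Pool Saving (Post Process Data) - Saving (%)

pool_used_after_expanding = (
    used_pool_capacity + used_dp_vol_capacity * pool_saving_ratio)
physical_used_after_expanding = (
    used_physical_pool_capacity + used_dp_vol_capacity * pool_saving_ratio)

print(f"pool used capacity after expanding:     {pool_used_after_expanding} TB")
print(f"physical used capacity after expanding: {physical_used_after_expanding} TB")
# pool used capacity after expanding:     51.0 TB
# physical used capacity after expanding: 41.0 TB
```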

Caution: The expanding process stops when the used capacity or the physical used capacity of the pool reaches the depletion threshold. If this occurs, you must expand the capacity of the pool so that the expanding process can start again.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• Prerequisites for disabling deduplication on DP-VOLs:
○ The status of the pool is other than Blocked by pool failure.
○ Capacity Saving status is other than Deleting Volume or Failed.
○ The Disable retry of data updating setting is enabled on the Edit Advanced System Settings window. For instructions, see the System Administrator Guide.

Procedure

1. Open the Logical Devices window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Right-click Volumes, and then select System GUI.
In Device Manager - Storage Navigator:
a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Logical Devices.


2. In the LDEVs pane, select the desired LDEV ID, and click Edit LDEVs.
3. In the Edit LDEVs window, click Capacity Saving, and select Disabled.

Note: If the LDEV status of the deduplication system data volume in the pool is other than Normal, you cannot change the setting from Deduplication and Compression to Disabled.

4. Click Finish. The Confirm window appears.
5. In the Task Name text box, type a unique name for the task or accept the default. You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

6. Click Apply. If Go to tasks window for status is selected, the Tasks window opens automatically.

7. Verify that the Capacity Saving Status has changed from Rehydrating to Disabled, and then verify that Capacity Saving is Disabled. After that, set Disable retry of data updating to Disable on the Edit Advanced System Settings window. For instructions, see the System Administrator Guide.

Deleting a DP-VOL

Use this procedure to delete a DP-VOL.

Note: You cannot delete a deduplication system data volume (DSD volume). The deduplication system data volume for a pool is deleted automatically when you disable the Capacity Saving setting for the pool or delete the pool.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• You cannot delete a DP-VOL whose status is online.
• If you are deleting a DP-VOL with Deduplication and Compression enabled, you must first disable Deduplication and Compression on the DP-VOL before beginning this procedure.
• If you are deleting one or more DP-VOLs for which capacity saving is enabled, use one of the following workflows:
○ To delete all of the DP-VOLs for which capacity saving is enabled in a pool, see Deleting all capacity saving-enabled DP-VOLs in a pool on page 223 for instructions.
○ To delete some (but not all) of the DP-VOLs for which capacity saving is enabled in a pool, see Deleting some capacity saving-enabled DP-VOLs in a pool on page 223 for instructions.


Procedure

1. Open the Pools window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Right-click DP Pools, and then select System GUI.
In Device Manager - Storage Navigator:
a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Pools.

2. Select the row associated with the DP-VOL to be deleted.
3. Select the Virtual Volumes tab.
4. From the table, select the DP-VOL to be deleted.
Do the following, if necessary:
• In the Filter option, select ON to filter the rows.
• Click Select All Pages to select all DP-VOLs in the list.
• Click Options to specify the unit of volumes or the number of rows to view.
5. Click More Actions, and then select Delete LDEVs. The Delete LDEVs window opens.
6. Click Finish. The Confirm window appears.
7. In the Task Name text box, type a unique name for the task or accept the default. You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.
8. Click Apply. If the Go to tasks window for status check box is selected, the Tasks window appears.

Virtualizing storage capacity (DP/HDT)

This module describes how to virtualize storage capacity.

About virtualizing storage capacity

Dynamic Provisioning (DP) provides virtual volumes to a host and allocates the actual capacity from a DP pool when a host makes a write request. By using DP pools, you can allocate more capacity to a host than that allowed by the actual physical configuration of the storage system.

DP pools provide the following advantages:
• You can reduce system setup and operational costs.


• You can use resources more efficiently.
• You can distribute workload equally among volumes.

Dynamic Tiering (HDT) further improves DP pool storage performance by taking advantage of performance differences between hardware tiers.

Note: The terms "HDP" and "HDT" are referred to collectively as "DP".

You can create virtual volumes (DP volumes) from physical volumes that are grouped into DP pools. You can then allocate those virtual volumes to hosts.

In the illustration below, note that the volumes used to create the DP pool are called DP pool volumes. The DP pool is then used to provide capacity as needed to allocated DP volumes.

DP pools are created by Device Manager, which automatically selects volumes based on user-specified conditions. You can also directly specify parity groups to create a DP pool.

By enabling the Full Allocation setting, you can reserve pages of the specified capacity in advance (referred to as "reserved capacity"). The reserved capacity is included in the used capacity. You can also specify values for the usage rate threshold and reservation threshold.
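As a minimal sketch of the point above (that reserved capacity counts toward used capacity), consider the following Python fragment; the function and its field names are hypothetical, not part of HCS.

```python
# Illustrative only: Full Allocation reserves capacity up front, and that
# reserved capacity is counted as used when pool usage is evaluated.
def pool_usage_percent(written_capacity: float, reserved_capacity: float,
                       pool_capacity: float) -> float:
    """Pool usage including capacity reserved by Full Allocation."""
    return 100.0 * (written_capacity + reserved_capacity) / pool_capacity

usage = pool_usage_percent(written_capacity=12.0,   # TB actually written
                           reserved_capacity=8.0,   # TB reserved up front
                           pool_capacity=50.0)      # TB total pool capacity
print(f"{usage:.0f}% used")  # 40% used -> compare against the usage thresholds
```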

If you have registered a Tiered Storage Manager license, you can use the Mobility tab to evaluate and analyze the operation status related to a DP pool. When you delete an unnecessary DP pool, DP volumes created from that pool are also deleted. For this reason, a prerequisite to deleting a DP pool is that no DP volumes from that pool are allocated to a host.


Tip: For VSP G1000, VSP G1500, or VSP F1500 storage systems, you can restore DP pools, complete SIMs, and export tier relocation log files by using the windows available by clicking the System GUI link. To access the System GUI link, on the Resources tab, right-click DP Pools for the target storage system, and then select System GUI from the menu. Or, click DP Pools for the target storage system, and then click the System GUI link that appears in the application pane.

For the Unified Storage 100 family of storage systems, the replication data and replication management area, which are used by Copy-on-Write Snapshot or TrueCopy Extended Distance, are stored in the created DP pool.

Creating a DP pool

You can create a DP or HDT pool, which provides more efficient use of physical storage for virtual volumes that are allocated to hosts from the DP pool. DP pool performance can be improved if you use the entire capacity of a parity group for a single DP pool.

Before you begin
• Register the target storage system.
• When defining an external LDEV tier rank, externally connect a storage system that has multiple performance levels.
• Verify the following when using VSP G1000, VSP G1500, VSP F1500, Virtual Storage Platform, Hitachi Universal Storage Platform V/VM, or Hitachi Unified Storage VM (HUS VM) storage systems for DP pools:
○ Parity groups from which volumes have already been created can be added
○ Drive type characteristics (for example: drive type and drive speed) in parity groups and RAID level
• Verify the following when using HUS 100 and the Adaptable Modular Storage (AMS) 2000 family of storage systems for HDP pools:
○ Drive type and RPM (only drives in which a parity group has not been created can be targets)
○ Parity group RAID level and capacity
○ Number of parity groups

Note: In HDT pools, if different drive types and/or RAID levels are mixed in a single tier, they will all be considered equal for data placement regardless of page access frequency. As a result, I/O performance will depend on the drive type characteristics and RAID level on which any given page resides.

In DP pools, if different drive types and/or RAID levels are mixed in a DP pool, I/O performance will depend on the drive type characteristics and RAID level on which any given page resides.


Procedure

1. On the Resources tab, expand the storage system, list the existing HDP Pools, and click Create Pool.
2. In the Create Pool dialog box, specify a pool name, and optionally select Reflect this pool name to the storage system.
3. To configure a DP pool, select a Pool Type of DP and configure the following:
a. In the Additional Parity Groups table, click Add Parity Groups.
b. (Optional) Select 'Allow to mix different drive types/speeds, chip types, RAID levels or volume locations' to allow combining resources with different characteristics.
c. Select one or more parity groups, click Add to Pool, and then click Close. The Pool Summary information is updated.
d. (Optional) Click Advanced Options to configure Pool ID, Used Threshold, Subscription Thresholds, and DP volume protection options, as needed.
e. Go to step 7.
4. To configure an HDT pool, select a Pool Type of HDT, and then in the Additional Parity Groups table, choose Standard or Mixed mode to disallow or allow combining resources with different characteristics. For Mixed mode, go to step 6.
5. For a Standard mode HDT pool, do the following:
a. Click + to add a new tier.
b. In the Add New Tier dialog box, select a volume to configure Tier 1, and click Select. The Tier Configuration table in Pool Summary is updated.
c. Click Add Parity Groups, select the parity group, click Add to Pool, and click Close. Select an available parity group that best meets your performance or capacity needs (Tier 1 for best performance, Tier 2 for next best performance, and Tier 3 for capacity).
d. (Optional) Click + to add Tier 2 and Tier 3, and configure the tiers based on your performance and capacity needs using the choices in the Add New Tier dialog box. The Tier Configuration table in Pool Summary is updated.

Tip: To delete an existing tier, click X in the Tier tab.

e. (Optional) Click Advanced Options to configure Pool ID, Used Threshold, Subscription Thresholds, and DP volume protection options, as needed.
f. Click HDT Options, and configure the tier management options as needed.
g. Go to step 7.
6. For a Mixed mode HDT pool, do the following:
a. Click Add Parity Groups.


Note: There are two parity group tabs from which you can select: Internal Parity Groups and External Parity Groups (the Internal Parity Groups tab is set by default). If you select the External Parity Groups tab and select one or more parity groups, the External LDEV Tier Rank menu is enabled, from which you must choose a ranking for the tier.

b. For Mixed mode, in the Internal Parity Groups tab or the External Parity Groups tab, select the parity groups that you want to add to the HDT pool, click Add to Pool, and click Close. The Tier Configuration table in Pool Summary shows the new tier configuration status for each tier.
c. (Optional) Click Advanced Options to configure Pool ID, Used Threshold, Subscription Thresholds, and DP volume protection options, as needed.
d. Click HDT Options, and configure the tier management options as needed.

7. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.
8. (Optional) Update the task name and provide a description.
9. (Optional) Expand Schedule to specify the task schedule. You can schedule the task to run immediately or later. The default setting is Now.
10. Click Submit. If the task is set to run immediately, the task begins.
11. Check the progress and the result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result

Created pools are added to the target storage system's HDP Pools list.

Create Pool dialog box

Pools can be created for storage systems that support DP. In addition, HDT pools of differing performance levels can be used to improve application performance.

When you enter the minimum required information in this dialog box, the Show Plan button activates to allow you to review the plan. Click the Back button to modify the plan to meet your requirements.

The following table describes the dialog box fields, subfields, and field groups. A field group is a collection of fields that are related to a specific action or configuration. You can minimize and expand field groups by clicking the double-arrow symbol (>>).

As you enter information in a dialog box, if the information is incorrect, errors that include a description of the problem appear at the top of the box.

240 Configuring thin provisioningHitachi Virtual Storage Platform G1000, G1500, and F1500 Provisioning Guide for Open Systems

Page 241: Provisioning Guide for Open Systems · Provisioning Guide for Open Systems Hitachi Virtual Storage Platform G1000 and G1500 Hitachi Virtual Storage Platform F1500 Hitachi Data Retention

Table 2 Create Pool dialog box

Storage System: Displays the selected storage system name, or prompts you to select the storage system from a list.

Pool Name: Accept the default pool name, or enter a pool name. Do not confuse the pool name with the pool ID: the pool ID is an assigned number, while the pool name is a user-definable value.
Reflect a pool name to the storage system is selected by default, and provides naming consistency between HCS and the storage system. If it is not displayed, it does not apply to the selected storage system.

Pool Type (DP or HDT): If Pool Type with DP and HDT options is displayed, the selected storage system supports both DP and HDT. If Pool Type options are not displayed, the selected storage system supports only DP.
Select DP to create a pool using one or more parity groups.
Select HDT to create a pool using one or more tiers, each with one or more parity groups. The HDT pool type offers two mode options, Standard or Mixed, and HDT options can be minimized or expanded at the bottom of the dialog box.

Pool Summary: Pool Summary information is updated as parity groups are selected: the number of parity groups, DP pool capacity, used capacity % (for thresholds 1 and 2), and the subscription % values that trigger subscription warnings and limits.
When pools are configured, use Advanced Options to set Used Thresholds 1 and 2, and the Subscription thresholds for warnings and limits.
For DP, Pool Configuration displays the physical attributes of the pool, including volume location, drive type, drive speed, chip type, RAID level, number of parity groups, and parity group capacity.
For HDT, Tier Configuration displays the attributes of one or more tiers, including tier number (1-3), volume location (internal/external), drive type, drive speed, chip type, external LDEV tier rank, number of parity groups, and parity group capacity.

Additional Parity Groups: Information in the Additional Parity Groups table differs depending on the selected pool type.
For a DP pool, this table displays parity group, drive type, drive speed, chip type, RAID level, total capacity, unallocated capacity, free capacity, number of available volumes, external storage system, external storage model, and cache mode.
For a DP pool, you can select Allow mixing of different drive types/speeds, chip types, RAID levels, or volume locations to create pools using parity groups with different attributes.
Click Add Parity Groups to add one or more parity groups to a pool, and click Close to review the selected parity groups. Click Remove Parity Groups to remove any parity groups from the list.
For a DP pool, the Available Parity Groups table lists parity group, drive type, drive speed, chip type, RAID level, total capacity, unallocated capacity, free capacity, number of volumes, CLPR, cache mode, and resource group.


For an HDT pool, when the Internal Parity Groups tab is selected, the Available Parity Groups table lists parity group, drive type, drive speed, chip type, RAID level, total capacity, unallocated capacity, free capacity, number of volumes, CLPR, cache mode, and resource group. When the External Parity Groups tab is selected, the Available Parity Groups table lists parity group, total capacity, unallocated capacity, free capacity, number of volumes, CLPR, external storage system, external storage model, cache mode, and resource group. If the External Parity Groups tab is selected and you select one or more parity groups, the External LDEV Tier Rank menu is enabled, from which you must choose a ranking for the tier.
For an HDT pool, select from two mode options: Standard (single drive type/speed, chip type, RAID level, and volume location in a single tier) or Mixed (mixed drive types/speeds, chip types, RAID levels, and/or volume locations in a single tier). The selected mode affects the available parity groups you can add to pools in each tier to best meet your performance or capacity needs (Tier 1 for best performance, Tier 2 for next best performance, and Tier 3 for capacity).
(For Standard mode) Click + to display the Add New Tier dialog box, which displays volume location, drive type, drive speed, and external LDEV tier ranking. Select a volume to add to the tier (for example, Internal, SAS, 15000 RPM), and click Select to update Tier Configuration with the selected volume and drive information and create a Tier 1 tab. On the Tier 1 tab, use Add Parity Groups to add parity groups to this tier. Click + to create Tier 2, and again to create Tier 3.
(For Mixed mode) Click Add Parity Groups, and select parity groups from the Internal Parity Groups or External Parity Groups tab (for example: select the check box for the title row in Internal Parity Groups, click Add to Pool, and click Close). The Tier Configuration table in Pool Summary displays the new status for each configured tier.
Note the following important items regarding HDT configuration:
• HCS automatically arranges your tiers from highest performance (Tier 1) to lowest performance (Tier 3), regardless of the order used when creating tiers. For example, if an Internal, SAS, 15000 tier is created in any order but is the highest performance tier, it is displayed as Tier 1.
• When three tiers are defined, the + tab is no longer displayed because there is a three-tier limit. To delete a tier, click X in the tier tab. When the + tab displays, you can define a new tier. Create new tiers and delete existing tiers as required.

>> Advanced Options
Pool ID: Accept the default pool ID number, or choose from the options in the menu. Use Show Pool ID Usage to display the pool ID numbers and names that are currently in use.
Used Threshold 1: Set the level of physical capacity usage as a percentage (1-100 range) of the physical DP pool capacity. When this threshold is exceeded, it generates an alert, an email, or both. Alerts display on the Dashboard and the Tasks & Alerts tab.
Used Threshold 2: Same description as Used Threshold 1. Used Thresholds 1 and 2 display in the Pool Summary, in both graphic and numeric values.
Subscription Thresholds: (Optional) Select Enabled to set subscription warning and limit thresholds.

Warning: Exceeding the subscription warning threshold (as a percentage of DP pool capacity) generates an alert, an email, or both. Warnings are generated when volume allocation from a pool will exceed the warning level, but volume allocation is still allowed (this is a soft limit). Alerts display on the Dashboard and the Tasks & Alerts tab.

Limit: Exceeding the subscription limit threshold (as a percentage of DP pool capacity) generates an alert, an email, or both. However, if volume allocation would exceed the subscription limit threshold, the allocation is not allowed (this is a hard limit). You must either reduce the capacity of the volume you want to create, or increase the subscription limit. (A sketch of this soft/hard-limit logic appears after the Advanced Options entries.)

Protect HDP VOL when: For VSP G1000, VSP G1500, and VSP F1500 storage systems, this option is displayed when:
• The Data Retention Utility is licensed
• The storage system microcode version is 80-02-01-XX/XX or later

I/O fails to a full Pool: Select 'Yes' to prohibit host I/O to one or more DP volumes in a DP pool that is full, where additional free space cannot be assigned. The displayed default (yes or no) is determined by values that have been set for the storage system.
See DP pools to verify the protection settings of the DP pool. For DP volumes, see the volume information summary in volume details, and select the Guard link to verify that the protect attribute is assigned.
If host I/O to DP volumes is suspended, resolve the issue, and then change the volume access attribute to read/write (see related topics).

I/O fails to a blocked Pool VOL: Select 'Yes' to prohibit host I/O to one or more DP volumes when a DP pool volume is blocked. The displayed default (yes or no) is determined by values that have been set for the storage system.
See DP pool details to verify the protection settings of the DP pool. For DP volumes, see the volume information summary in volume details, and select the Guard link to verify that the protect attribute is assigned.
If host I/O to DP volumes is suspended, resolve the issue, and then change the volume access attribute to read/write (see related topics).
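To make the Warning (soft limit) versus Limit (hard limit) distinction above concrete, here is a hypothetical Python sketch of the decision logic; the names and figures are invented for illustration.

```python
# Illustrative sketch of the subscription Warning (soft) vs Limit (hard)
# thresholds described above; not an HCS API.
def check_allocation(subscribed: float, requested: float, pool_capacity: float,
                     warning_pct: float, limit_pct: float) -> str:
    """Decide whether a new volume allocation is allowed from this pool."""
    new_subscription = 100.0 * (subscribed + requested) / pool_capacity
    if new_subscription > limit_pct:
        return "denied: subscription limit exceeded (hard limit)"
    if new_subscription > warning_pct:
        return "allowed, but a warning alert is raised (soft limit)"
    return "allowed"

# Pool of 100 TB, 180 TB already subscribed, warning at 200%, limit at 300%.
print(check_allocation(180, 40, 100, warning_pct=200, limit_pct=300))
# allowed, but a warning alert is raised (soft limit)
```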

>> HDT Options (HDT options are displayed only when the pool type is HDT.)
Tier management - Auto: By selecting this option, the storage system automatically starts and stops performance monitoring and tier relocation based on the Cycle time and Monitoring period settings.
Cycle time values range from 30 minutes to 24 hours. Monitoring periods are user-configurable time periods (for example, 13:00 to 19:00 hours). Continuous and Periodic monitoring modes apply to both the Cycle time and Monitoring period settings (for details, see Continuous monitoring mode).

Tier management - Manual: Selecting this option lets you manually start and stop performance monitoring and tier relocation.
Tier management - Custom: Selecting this option lets you create performance monitoring and tier relocation plans using templates with different types of performance monitoring and tier relocation settings. You can create and delete templates by name, set up monitoring periods (up to 7 days a week), set up relocation start times (up to 7 days a week), and view these plans in the Schedule Summary table. You can modify templates to meet specific needs. In addition, click Pools using this template to assign a template to any pool that you select from the Pools table.

Monitoring mode - Continuous: Continuous monitoring mode uses weighted-average performance data gathered over several monitoring cycles so that tier relocation does not respond immediately to workload changes (see Cycle time). This mode prevents overreaction to workload changes that might otherwise result in unnecessary tier relocation I/O.

Monitoring mode - Periodic: Periodic monitoring mode uses performance data from the last monitoring cycle (see Cycle time). Tier relocation responds immediately to workload changes. This mode is a more aggressive tier relocation monitoring mode. (A toy comparison of the two monitoring modes follows this table.)

Relocation Speed: For VSP G1000, VSP G1500, or VSP F1500, you can select one of five speeds for the relocation of pages in a pool, in a unit of time, from 1 (slowest) to 5 (fastest). The default is 3 (standard). To reduce the load on the storage system, specify a slower page relocation speed. To give a higher priority to page relocations for the pool, specify a faster relocation speed.

Buffer Space for New page assignment: Sets the buffer space used for new page assignments to HDT tiers (using the default values is recommended).
Buffer Space for Tier relocation: Sets the buffer space used for tier page relocations between HDT tiers (using the default values is recommended).

Note: After configuring the pools, click Show Plan and perform all of the confirming, scheduling, and submitting tasks needed to create the pools.
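The following toy Python comparison illustrates the Continuous versus Periodic monitoring modes described in the table above. It is not Hitachi's actual algorithm: it simply uses an exponential moving average as one possible weighting to show how Continuous mode damps a transient spike that Periodic mode would react to immediately.

```python
# Toy illustration (not Hitachi's actual algorithm) of Periodic mode
# (last cycle only) vs Continuous mode (weighted average over cycles).
def periodic(cycle_iops: list[float]) -> float:
    """Periodic mode reacts to the most recent monitoring cycle only."""
    return cycle_iops[-1]

def continuous(cycle_iops: list[float], weight: float = 0.5) -> float:
    """Continuous mode smooths across cycles, damping short-lived spikes."""
    avg = cycle_iops[0]
    for x in cycle_iops[1:]:
        avg = weight * x + (1 - weight) * avg
    return avg

history = [100.0, 110.0, 105.0, 900.0]  # one cycle with a transient spike
print(periodic(history))    # 900.0  -> would trigger immediate relocation
print(continuous(history))  # 502.5  -> spike damped, less relocation I/O
```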

Verifying DP pool information

You can check the DP pools summary information to verify the current number of pools, the virtual volume capacity, the pool usage, and the total and used pool capacity.

Before you begin
• Create pools
• Discover (register) pools

Procedure

1. On the Resources tab, select HDP Pools under the target storage system. A list of DP pools (DP pool volumes and DP volumes) provides summary information that includes the number of pools, the virtual volume capacity, pool usage, and the total and used pool capacity.

2. Select a DP pool to display more detailed information about that pool. Use the detailed pool information to verify expected changes in capacity or performance. Configuration information for each DP volume is displayed by clicking the volume link from the list in the HDP Vols tab.

If you configured HDT pool tiers to be managed automatically using Element Manager, you can also verify the following:
• Monitoring Information
• Last Monitoring Start Date
• Last Monitoring End Date
• Monitoring Status


• Relocation Status
• Relocation Progress (%)

Expanding DP pools

Pool capacity can be increased by expanding DP pools or by converting DP pools into HDT pools.

Expand a DP pool by adding volumes to it, or expand an existing DP pool by converting it into an HDT pool, which changes the pool type and allows you to configure tiers that support a mix of drive types and RAID levels.

Before you begin
• Register the storage system.
• Verify the external connections for any storage system with multiple performance levels before defining an external LDEV tier rank.
• Verify the following when using VSP G1000, VSP G1500, VSP F1500, Virtual Storage Platform, Universal Storage Platform V/VM, or HUS VM storage systems for DP pools:
○ Parity groups from which volumes have already been created can be added
○ Drive type characteristics (for example: drive type and drive speed) in parity groups and RAID level
• Verify the following when using HUS 100 or the AMS 2000 family of storage systems for HDP pools:
○ Parity group capacity to add
○ Number of parity groups to add
○ There are drives in which a parity group has not been created

Note: In HDT pools, if different drive types and/or RAID levels are mixed in a single tier, they will all be considered equal for data placement regardless of page access frequency. As a result, I/O performance will depend on the drive type characteristics and RAID level on which any given page resides.

In DP pools, if different drive types and/or RAID levels are mixed in a DP pool, I/O performance will depend on the drive type characteristics and RAID level on which any given page resides.

Procedure

1. On the Resources tab, expand the storage systems, select a DP pool on the target storage system, and click Expand Pool.
2. To expand a DP pool:
a. In Additional Parity Groups, click Add Parity Groups.
b. Select one or more parity groups, click Add to Pool, and then click Close. The Pool Summary is updated.


Tip: To change an existing DP pool into an HDT pool, click Changes to an HDT pool.

3. To expand an HDT pool, in Additional Parity Groups, choose Standard or Mixed mode.
For Standard mode (single drive type/speed, chip type, RAID level, and volume location in a single tier):
• Tier 1 is the default. Click Add Parity Groups, select the parity group, click Add to Pool, and click Close. Select an available parity group that best meets your performance or capacity needs (Tier 1 for best performance, Tier 2 for next best performance, and Tier 3 for capacity).
• (Optional) Click + to add Tier 2 and Tier 3, and configure the tiers based on your performance or capacity needs from volume choices in the Add New Tier dialog box. The Tier Configuration table in Pool Summary is updated.

Tip: To delete an existing tier, click X in the Tier tab.

For Mixed mode (mixed drive types/speeds, chip types, RAID levels, and/or volume locations in a single tier):
• Click Add Parity Groups.

Note: There are two parity group tab choices from which you can select: Internal Parity Groups and External Parity Groups (the Internal Parity Groups tab is set by default). If you select the External Parity Groups tab and select one or more parity groups, this enables the External LDEV Tier Rank menu from which you must choose a ranking for the tier.

• For mixed mode in the Internal Parity Groups tab or the External Parity Groups tab, select parity groups that you want to add to the HDT pool, click Add to Pool, and click Close. The Tier Configuration table in Pool Summary shows the new tier configuration status for each tier.

4. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.

5. (Optional) Update the task name and provide a description.
6. (Optional) Expand Schedule to specify the task schedule. You can schedule the task to run immediately or later. The default setting is Now. If scheduled for Now, select View task status to monitor the task after it is submitted.

7. Click Submit. If the task is to run immediately, the task begins.


8. You can check the progress and the result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result

When the task is complete, the DP pools are expanded. You can verify the updated information on the HDP Pools list.
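Pool expansion can also be scripted with Command Control Interface (CCI). The following is a minimal sketch, not a definitive procedure: the instance number (-IH0), credentials, pool ID, and LDEV ID are hypothetical placeholders, and you should verify the exact raidcom syntax against the Command Control Interface User and Reference Guide for your microcode version.

    # Log in to the CCI instance (credentials are placeholders)
    raidcom -login <user> <password> -IH0
    # Add an existing LDEV as a pool volume to expand DP pool 5
    raidcom add dp_pool -pool 5 -ldev_id 1024 -IH0
    # Confirm the new pool capacity and usage rate
    raidcom get pool -IH0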

Shrinking a DP pool

Shrinking a DP pool allows the recovery of excess free capacity.

Note: When you shrink a DP pool, the volumes removed from the pool become basic volumes, and Hitachi Command Suite automatically formats them so that they can be used again: an immediate quick format when the removed DP pool volumes are internal volumes, and a basic format when they are external volumes. A best practice is to run this task when system activity is low and major system operations are not running.

Procedure

1. On the Resources tab, select Storage Systems in the navigation tree.
2. Expand the tree for the target storage system that includes the DP pool you want to shrink, and select it in HDP Pools.
3. In the HDP Pool Vols tab, select DP pool volumes, and then click Shrink Pool.
4. Review the plan and specify additional information, as appropriate:
• Verify the information that is displayed.
• Enter a name in Task Name.
• Specify when to execute the task.
5. Click Submit.
The plan is registered as a task.
6. On the Tasks & Alerts tab, confirm that the task completed.
7. In the Storage Systems tree, return to the target storage system, click HDP Pools, and confirm that the information is updated.

Result

Excess free capacity has been recovered from the DP pool.

Note: If a DP pool volume becomes blocked and processing fails during execution of the Shrink Pool task, execute the FormatLU command in the Device Manager CLI. For details about the FormatLU command, see the Hitachi Command Suite CLI Reference Guide.


Modifying DP pool settings

After setting up a DP pool, you can modify DP pool settings.

Procedure

1. On the Resources tab, select HDP Pools under the target storage system.

2. From the list of DP pools, select the pool you want to modify and click Edit Pool.

3. Modify the settings as appropriate and click Submit.
4. On the Tasks & Alerts tab, confirm that all tasks are completed.
5. In the Storage Systems tree, return to the target storage system and click HDP Pools to confirm that the information is updated.

Deleting DP pools

After unallocating the DP volumes belonging to a DP pool, you can delete the DP pool.

Note: When you remove an encrypted DP pool in HUS 150 systems, the removal cancels the encryption for all drives that comprise the target DP pool and releases the encryption on all DP pool volumes in the DP pool.

Before you begin

Before you delete DP pools, first unallocate the DP volumes that belong to the DP pools to be deleted.

Note: When you delete a DP pool, HCS automatically performs an immediate quick format only on those volumes that were DP pool volumes, but became basic volumes because the associated DP pools were deleted. This allows these basic volumes to be used again. A best practice for deleting a pool is to run this task when system activity is low and major system operations are not running.

Procedure

1. On the Resources tab, select Storage Systems.
2. Expand the tree, select the target storage system, and select HDP Pools.
3. From the list of pools, select one or more target DP pools to delete, and click Delete Pools.
4. In the Delete Pool dialog box, confirm that the information matches the DP pools to be deleted. Optionally, update the task name and provide a description.
5. Click Submit.
The delete DP pools process begins.


6. You can check the progress and the result of the delete DP pools task on the Tasks & Alerts tab. Verify the results for each task by viewing the details of the task.

Result

The deleted DP pool no longer appears in the DP pool list for the target storage system.

Expanding DP volumes

You can expand the size of a DP volume to increase its capacity.

Procedure

1. On the Resources tab, select Storage Systems in the navigation tree.
2. Expand the tree for the target storage system that includes the pool you want to modify, and select the DP pool in HDP Pools.
3. On the HDP Vols tab, select one or more volumes you want to expand and click Expand HDP Volume.
4. Specify the new capacity for the volume.
5. Click Submit.
6. On the Tasks & Alerts tab, confirm that all tasks are completed.
7. In the Storage Systems tree, return to the target storage system, click the target DP pool, and view the HDP Vols tab to confirm that the information is updated.
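DP-VOL expansion can also be performed from CCI. A minimal sketch follows; the LDEV ID, capacity, and instance number are hypothetical placeholders, so verify the raidcom extend ldev syntax against the Command Control Interface User and Reference Guide before use.

    # Expand DP-VOL (LDEV 200) by 10 GB; the ID and size are placeholders
    raidcom extend ldev -ldev_id 200 -capacity 10G -IH0
    # Verify the new capacity
    raidcom get ldev -ldev_id 200 -IH0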

Reclaiming zero pages

Reclaiming unused zero pages for a DP pool releases unused capacity.

Procedure

1. On the Resources tab, select Storage Systems in the navigation tree.
2. Expand the tree for the target storage system that includes the appropriate DP pool, and select the DP pool in HDP Pools.
3. On the HDP Vols tab, select one or more volumes and click Reclaim Zero Pages.
4. Click Submit.
5. In the task list, confirm that the task is completed.
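Zero page reclamation can also be started per volume from CCI. This is a sketch with a hypothetical LDEV ID and instance number; consult the Command Control Interface documentation for exact usage of the discard_zero_page operation.

    # Start zero page reclaim on DP-VOL (LDEV 200); the ID is a placeholder
    raidcom modify ldev -ldev_id 200 -status discard_zero_page -IH0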

Virtualizing storage tiers (HDT)

This module describes how to manage data relocation for HDT volumes.


About virtualizing storage tiers

Dynamic Tiering functionality lets you monitor the frequency at which data is accessed and relocate that data to a specific tier in a pool based on the results.

For example, you might create a pool that combines volumes having different cost performances, such as combining high-speed volumes (SSD, FMD, FMC, or SAS) with inexpensive low-speed volumes (SATA). The data in this pool is then automatically relocated among the volumes depending on the I/O load:
• High-load pages are allocated to high-speed volumes
• Low-load pages are allocated to low-speed volumes

By using Hitachi Command Suite, you can fine-tune settings related to the monitoring of HDT pools and data relocation depending on operating conditions.

You can configure a tiering policy in an HDT pool so that parity groups are assigned to specific tiers (Tier 1, Tier 2, and Tier 3) to take advantage of drive types, drive speeds, chip types, and RAID levels to optimize your performance and capacity needs. Tiering allows data in a host volume to be spread across multiple tiers (Tier 1 for high speed, Tier 2 for next highest speed, and Tier 3 for additional capacity independent of drive type or RAID level), which provides the flexibility to specify settings that address your business conditions or the characteristics of certain applications. Tiering allows you to optimize data placement, improve volume performance, and reduce costs through more effective use of your resources.

HDT settings that can be specified using HCS include:
• Monitoring HDT pools and relocating data
Monitoring and data relocation can be configured to run automatically by specifying a time or interval in advance, or you can manually monitor and relocate data as required. For VSP G1000, VSP G1500, and VSP F1500, data relocation speed can be set. For example, using a slower relocation speed will reduce the impact of relocation on other I/O requests.

• Specifying the buffer space for HDT pools
When HDT pools are created or edited, on each hardware tier you can specify a ratio corresponding to the buffer space for new page assignment (an area reserved for increasing used capacity). Similarly, you can specify a ratio corresponding to the buffer space for tier relocation (a working area reserved for the storage system to use when relocating data). However, be aware that changing the default values might degrade performance.

• Applying a tiering policy and setting priority to HDT volumes
You can regulate tiering to balance performance and cost, such as by preventing more data than is necessary from being placed in a fast hardware tier and thereby reducing management costs. After these settings are configured, regularly review the status of the settings of that tiering policy and the amount of capacity in use by each hardware tier to verify that resources are being appropriately allocated. Configuration settings can be changed any time if costs increase or performance is lower than expected.
In an HDT pool, data with a high frequency of I/O operations is automatically preferentially placed in a high-speed hardware tier. Configure the following settings to control data placement, ensuring that important data is placed in a fast hardware tier, according to your business requirements.
○ Apply a tiering policy
To ensure that data with high importance but with few I/O operations is placed in a hardware tier that maintains a certain speed at all times, you can specify the target hardware tier. You can also apply a tiering policy to HDT volumes that determines the capacity ratio of each hardware tier, by defining such a policy in advance.
○ Specify a new page assignment tier
When HDT volumes are created or allocated, specify which hardware tier the new page of an HDT volume will be assigned with priority. Among the hardware tiers defined by a tiering policy, specify High for an upper-level hardware tier, Middle for a medium-level hardware tier, and Low for a low-level hardware tier.
○ Set relocation priority
When HDT volumes are created or allocated, specify whether to prioritize relocation of the data of the target HDT volumes.
○ Real-time tier controls
If you use the Active Flash functionality, the page performance of tiered HDT volumes will be monitored, and data will be relocated to an appropriate tier if necessary due to changes in the I/O load across short intervals.
○ Optimizing tier allocation
By using IOPH propagation, you can enable optimized tier allocation by transmitting performance monitoring data to HDT pools that contain volumes that are inaccessible from a host. These operations are performed from the CLI.

• Editing a tier rank for an external HDT pool volume
When an external volume is included as one of the pool volumes making up an HDT pool, you can define the external LDEV tier rank (high, medium, or low) according to its performance.

A registered Tiered Storage Manager license enables the following:
• Evaluating and analyzing the operation status of HDT pools
Use the Mobility tab to evaluate and analyze the operation status that is related to an HDT pool.
• Setting a schedule for relocating data and monitoring HDT pools
Register the time of HDT pool monitoring and data relocation as a template schedule.


• Editing tier relocation for HDT volumes (preventing relocation by volume)
Specify whether data can be relocated for each HDT volume. Tier relocation can be controlled according to the characteristics and operating status of applications using HDT volumes, such as by preventing other applications from relocating data of the volumes they are using when there is an application for which data relocation takes a high priority.

• Restoring a data placement by applying a data placement profile of HDT volumes
Restore a previous data placement by saving data placements of optimized HDT volumes per page as profiles, and applying them according to operation. For example, if an HDT volume is being used for multiple operations that have different access characteristics (such as online and batch processing), you can create data placement profiles that fit the different processes and apply the appropriate profile before beginning processing. By doing so, you can restore a data placement that fits the characteristics of the target processing in advance, which prevents I/O performance from dropping. In addition, by setting up a schedule, you can update and apply profiles at regular intervals to suit the operation of applications. Profiles are applied only to pages placed in Tier 1 of HDT pools.

Tip: If you are using a VSP G1000, VSP G1500, or VSP F1500 storage system, some windows might not display special HDT pool data, such as hardware tiers, for Dynamic Tiering for Mainframe volumes.

Manually starting or stopping the monitoring of HDT pools

You can start or stop monitoring of an HDT pool manually.

Before you begin

A Tiered Storage Manager license is required to perform the operation from the Mobility tab.

After confirming that the HDT pools setting for Tier Management is set to Manual or Custom, you can start or stop monitoring of an HDT pool.

Procedure

1. From the tree view in the Resources tab, select Storage Systems (or use the Mobility tab).
2. Expand the tree and select HDP Pools under the target storage system.
3. Select one or more HDT pools and click either the Start Monitoring or Stop Monitoring button, both found in the More Actions menu.
4. Set the desired items and execute the task.
5. View the list of tasks to check execution results.
6. Click the link for the task name and check that monitoring of each HDT pool has started or stopped.
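Monitoring can also be started and stopped from CCI. The sketch below assumes HDT pool ID 5 and CCI instance 0, both hypothetical placeholders; check the raidcom monitor pool command in the Command Control Interface User and Reference Guide before use.

    # Manually start performance monitoring of HDT pool 5
    raidcom monitor pool -pool 5 -operation start -IH0
    # ...and stop it after the desired monitoring period
    raidcom monitor pool -pool 5 -operation stop -IH0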


Manually starting or stopping the tier relocation of an HDT pool

You can manually start or stop tier relocation of an HDT pool.

Before you begin

A Tiered Storage Manager license is required to perform the operation from the Mobility tab.

After confirming the following, you can start relocation of an HDT pool:
• Existence of two or more hardware tiers in the target HDT pools
• The HDT pools setting for Tier Management is Manual or Custom
To stop relocation of an HDT pool, confirm:
• The HDT pool setting for Tier Management is Manual or Custom

Procedure

1. From the tree view in the Resources tab, select Storage Systems (or use the Mobility tab).
2. Expand the tree and select HDP Pools under the target storage system.
3. Select one or more HDT pools and click either the Start Relocation or Stop Relocation button.
4. Set the desired items and execute the task.
5. View the list of tasks to make sure that all tasks have completed.
6. Click the link for the task name and check that tier relocation of each HDT pool has started or stopped.
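Tier relocation, like monitoring, has a CCI counterpart. This is a sketch with hypothetical pool and instance IDs; verify the raidcom reallocate pool syntax in the Command Control Interface User and Reference Guide.

    # Manually start tier relocation for HDT pool 5
    raidcom reallocate pool -pool 5 -operation start -IH0
    # ...or stop a relocation that is in progress
    raidcom reallocate pool -pool 5 -operation stop -IH0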

Scheduling monitoring and tier relocation of HDT pools

You can specify the schedule for monitoring and relocating HDT pools.

Before you begin

A Tiered Storage Manager license must be registered.

Procedure

1. From the Mobility tab, select HDP Pools.
2. Expand the tree view and select the target storage system. From the list of DP pools, select the HDT pool for which the schedule is to be set.
3. On the Summary panel, click Actions, and select Edit Pool.
4. In the Edit Pool dialog box, click HDT Settings. In Tier Management, select the Custom radio button. Click Select Template to use an existing schedule template, or click Create Template to create a new schedule template.
5. Specify the necessary items by following the instructions in the window and create and execute the plan.


6. View a list of tasks to make sure that all tasks completed.
7. In the Mobility tab, select the HDT pool. Then from Tier Management in Summary, confirm the template names.

Editing tier relocation for HDT volumes

There are conditions where it is useful to disable tier relocation of volumes to prevent unnecessary data movement. For example, you can preferentially perform tier relocation of volumes whose I/O activity varies greatly and suppress tier relocation of other volumes.

Before you begin

A Tiered Storage Manager license must be registered.

Procedure

1. Click the Mobility tab, and select Logical Groups in the navigation pane.
Logical Groups can contain both Public Logical Groups and Private Logical Groups. You can discover and register not only at the logical group level, but also at the host or HDT volume level.

Note: If you want to register by host, you can start this process by selecting the Resources tab.

2. Expand the tree to select the target logical group from Public Logical Groups or Private Logical Groups.

3. If you want to set all HDT volumes in the logical group at the same time, select Actions located in the corner of the application pane, then Edit Tier Relocation. If you want to specify the target volumes, select one or more HDT volumes from the list of volumes, and then click the Edit Tier Relocation button.

4. Select Enable to enable tier relocation for HDT volumes. Select Disable to disable tier relocation.

5. Set any required items in the window, and then execute the task.
6. Verify that the task completed.

Result

The applied tier relocation settings can be checked from Summary or the list of volumes, which are displayed by selecting a logical group from the Mobility tab.

Applying a tiering policy to HDT volumes

Before you begin

• Two or more hardware tiers must exist in an HDT pool.

254 Configuring thin provisioningHitachi Virtual Storage Platform G1000, G1500, and F1500 Provisioning Guide for Open Systems

Page 255: Provisioning Guide for Open Systems · Provisioning Guide for Open Systems Hitachi Virtual Storage Platform G1000 and G1500 Hitachi Virtual Storage Platform F1500 Hitachi Data Retention

• If selecting multiple HDT volumes from a logical group, all selected HDT volumes must belong to the same HDT pool.
• A Tiered Storage Manager license is required to perform operations from the Mobility tab.

Procedure

1. From the tree view in the Resources tab, select Storage Systems.
You can also perform this step from the Mobility tab. If editing from logical groups, perform this step from the Mobility tab.

2. Expand the tree and select the target HDT pools.
3. Select one or more HDT volumes from the list of volumes on the HDP Vols tab, and then click Edit Tiering Policy.
4. Select the tiering policy, and then execute the task.
You can schedule the task to be executed later.
5. View the list of tasks to make sure that all tasks are complete.

Result

The tiering policy is applied.

Tip: The capacity ratio of Tier 1 might exceed the value specified for the maximum allocation threshold if a data placement profile and a tiering policy are being used concurrently.

Customizing a tiering policy for HDT volumes

You can set the value for the allocation threshold of a hardware tier. When a tiering policy is specified for HDT volumes, make sure that changing the definition will not cause a problem.

Before you begin

• A Tiered Storage Manager license must be registered to perform this task from the Mobility tab.

• Verify the tiering policy to be changed.

Procedure

1. On the Resources tab, expand the Storage Systems tree.
Note that you can also update tiering policies on the Mobility tab.

2. For the applicable storage system, select DP Pools.
3. On the Tiering Policies tab, select a tiering policy to change, and click Customize Tiering Policy.
4. Specify the required items, and then submit the task.
5. After the task is complete, on the Resources tab, select DP Pools.


6. On the Tiering Policies tab, select the tiering policy you changed to verify the changes.

Result

The tiering policy is updated.

Changing a tiering policy name

Use this procedure to change a tiering policy name.

The names of tiering policies Level6(6) through Level31(31) can be changed. However, the names of tiering policies All(0) through Level5(5) cannot be changed.

Before you begin

The Storage Administrator (System Resource Management) role is required to perform this task.

Procedure

1. Open the Pools window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Right-click DP Pools, and then select System GUI.
In Device Manager - Storage Navigator:
a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Pools.

2. In the Pools window, click Edit Tiering Policies.
3. In the Edit Tiering Policies window, select the tiering policy that you want to change, and then click Change.
The Change Tiering Policy window appears.
4. Select the Change Tiering Policy check box.
5. Enter the name of the tiering policy.
You can enter up to 32 alphanumeric characters.
6. Click OK.
7. Return to the Edit Tiering Policies window.
8. Click Finish.

The Confirm window appears.
9. In the Task Name text box, type a unique name for the task or accept the default.
You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.
10. Click Apply.


If the Go to tasks window for status check box is selected, the Tasks window appears.

Notes on data placement profiles for HDT volumes

When using data placement profiles for HDT volumes, it is useful to understand the following:
• Data placement profiles are applied only to pages placed in Tier 1 of HDT pools.
• For pages to which both a data placement profile and tiering policy are set, the settings of the profile are prioritized.
• After a data placement profile has been applied and monitoring and data relocation of HDT pools has been performed, a data placement is restored. Therefore, users need to determine the timing at which to apply a profile by taking into account the period during which the monitoring and data relocation are performed.
• When performing a create, update, apply, or release of a data placement profile, or when searching for or repairing an inconsistency in a data placement profile, users cannot perform other operations for the target storage system.
• A task to create, update, apply, or release a data placement profile, or to search for or repair an inconsistency in a data placement profile, takes time to complete after it is executed. This time might affect other operations, so make a plan based on the displayed time estimate. Keep in mind that the larger the number of target volumes, the more likely that the estimated time and the actual execution time will differ greatly. You can also stop tasks that are being executed, and then restart them.
• Regardless of the allocated resource groups, data placement profiles created by another user can also be referenced in the list of profiles. However, operations that can be performed on such profiles are restricted based on the allocated resource group and role.
• If the number of managed data placement profiles exceeds 200, displaying the profile list might take some time. In this case, set the [Rows/page] of the profile list to no more than 100.
• When the operations listed below are performed, users need to perform a search inconsistency operation from the Refresh Storage Systems dialog because the profile that is actually applied to the volume in a storage system might not match the profile information that can be referenced from Hitachi Command Suite.
○ When the Hitachi Command Suite database was overwritten, due to an import or restoration, while the applied profile existed.
○ When the storage system that contains an applied profile is deleted, and the storage system is re-registered.

Tip: If inconsistencies are detected, a message is displayed in the Manage Data Placement Profiles dialog box. From the link shown in the message, perform the repair inconsistencies operation and release the profile. This can prevent unintended data from being fixed into a high-speed hardware tier by a profile that was applied in the past.

Creating a data placement profile for HDT volumes

To restore a data placement appropriate for application processing, the user creates a data placement profile when HDT volumes provide sufficient performance. A profile can be created for each logical group, and is managed with the corresponding logical group.

Before you begin
• A Tiered Storage Manager license must be registered
• Gather the name of the target logical group
• Specify settings for checking performance information, such as settings for linking to Hitachi Tuning Manager, or settings for the performance monitoring software of each application.

Tip: If linked to Hitachi Tuning Manager, you can view performance trend charts for the volumes in the logical group from the Manage Data Placement Profiles dialog box. For details on the linking method, see the Hitachi Command Suite Administrator Guide.

Procedure

1. From the Mobility tab, General Tasks pane, select Manage Data Placement Profiles.
2. Click the Logical Group View button, and specify the necessary items. Check the performance of the target logical group by using the performance trend charts or software for checking performance information, and then click the Create Profile button.
3. To create a plan, specify the necessary items by following the instructions in the window.
4. If there is no problem with the plan, execute it.
5. View the list of tasks to see the execution results.

Result

The created profile can be checked in the list of profiles displayed by clicking the Logical Group View button in the Manage Data Placement Profiles dialog box.

Tip: To delete data placement profiles, select the data placement profiles to be deleted, and click Delete Profiles.

Note: Regardless of the allocated resource groups, data placement profiles created by another user can also be referenced in the list of profiles. However, operations that can be performed on such profiles are restricted based on the allocated resource group and role. Therefore, specify profile names and descriptions to make it clear which operations are available.

Updating a data placement profile for HDT volumes

You can update an existing data placement profile to reflect the latest HDT volume data placement in the profile.

Tip: To change the target HDT volumes of a profile due to changes in the logical group configuration or other reasons, re-create the profile.

Before you begin
• Register a Tiered Storage Manager license.
• Determine the name of the target logical group.
• Specify settings for checking performance information, such as settings for linking to Hitachi Tuning Manager, or settings for the performance monitoring software of each application.

Tip: If linked to Hitachi Tuning Manager, you can view performance trend charts for the volumes in the logical group from the Manage Data Placement Profiles dialog box. For details on the linking method, see the Hitachi Command Suite Administrator Guide.

Procedure

1. From the Mobility tab, General Tasks pane, select Manage Data Placement Profiles.
2. Click the Logical Group View button, and specify the necessary items. Check the performance of the target logical group by using the performance trend charts or software for checking performance information, select the row of the profile to be updated (only one profile can be selected) from the list of data placement profiles, and then click the Update Profile button.
3. To create a plan, specify the necessary items by following the instructions in the window.
4. If there is no problem with the plan, execute it.
5. View the list of tasks to see the execution results.

Result

The updated profile information can be viewed from the profile operation history list, which is displayed by clicking the Logical Group View button in the Manage Data Placement Profiles dialog box.

Tip: When you want to periodically update the profile according to the intervals for monitoring HDT pools and relocating data, you can set a schedule for updates by using the Schedule Profiles button. Also, when an applied profile is updated, the profile as it was before the update will continue to be used until the updated profile is reapplied.

Editing a data placement profile for HDT volumes

You can change the name and description of an existing data placement profile.

Before you begin

A Tiered Storage Manager license must be registered.

Procedure

1. From the Mobility tab, General Tasks pane, select Manage Data Placement Profiles.
2. Click Overall Profile View or Logical Group View, select the row of the profile to be edited from the list of data placement profiles (only one profile can be selected), and then click Edit Profile.

3. Edit the necessary items and submit the task.

Result

When the task completes, the data placement profile is updated.

Applying a data placement profile for HDT volumes

Before beginning application processing, apply a data placement profile to restore a data placement that fits the characteristics of the processing. After confirming that the data placement is restored and performance is improved, release the applied profile to return to normal HDT operation.

Tip:
• To apply or release the profile periodically to match the application operations, click Schedule Profiles to schedule when to apply and release the profile. When you apply the profile, you can also specify a release schedule.
• The capacity ratio of Tier 1 might exceed the value specified for the maximum allocation threshold if a data placement profile and a tiering policy are being used concurrently.

Before you begin
• A Tiered Storage Manager license must be registered
• A data placement profile must be created
• Gather the name of the target logical group
• Specify settings for checking performance information, such as settings for linking to Hitachi Tuning Manager, or settings for the performance monitoring software of each application


Tip: If linked to Hitachi Tuning Manager, you can view performance trend charts for the volumes in the logical group from the Manage Data Placement Profiles dialog box. For details on the linking method, see the Hitachi Command Suite Administrator Guide.

Procedure

1. On the Mobility tab, General Tasks pane, select Manage Data Placement Profiles.
2. Click the Overall Profile View button or Logical Group View button to view the creation date, usage, and effect of applying profiles in the past, select the row of the profile to be applied (only one profile can be selected) from the list of data placement profiles, and then click the Apply Profile button.
3. To create a plan, specify the necessary items by following the instructions in the window.
4. If there is no problem with the plan, execute it.
5. View the list of tasks to see the execution results.

After the monitoring and data relocation of HDT pools finish, perform the following operations to check the effect of applying the profile, and to release the applied profile.
6. Using the software for checking the performance information, check the effects of applying the profiles.
If linked with Hitachi Tuning Manager, from the Manage Data Placement Profiles dialog box, click Logical Group View to check the performance trend chart of the target logical group.
7. To return to normal HDT pools operation, click Release Profile, specify the necessary items, and then release the applied profile.

Scheduling data placement profiles for HDT volumes

You can set weekly or monthly schedules for applying, releasing, and updating data placement profiles.

Tip: If multiple operation schedules are registered for a single logical group, the task list displays the operation that will be executed first. For example, if a schedule to apply a profile is set followed by a schedule to release the profile, then the application task is displayed until the profile is applied. After the profile is applied, the task to release the profile is displayed.

Before you begin
• Register a Tiered Storage Manager license.
• Create a data placement profile.
• Identify the name of the target logical group.


Procedure

1. On the Mobility tab, select General Tasks, then select Manage Data Placement Profiles.
2. Click the Overall Profile View button or Logical Group View button, then click the Schedule Profiles button.
3. To create a plan, specify the necessary items by following the instructions in the window.
4. If there is no problem with the plan, submit it.
5. View the list of tasks to make sure that the operation for which a schedule is set is registered as a task.

Editing an external LDEV tiering rank for an HDT pool

You can edit the external LDEV tiering rank (Low/Medium/High) for a pool volume.

Before you begin
• Virtualize sufficient storage.
• Define hardware tiers that consist of external volumes.
• Connect the external storage system.

Procedure

1. On the Resources tab, select Storage Systems, and then select the target HDP Pools.
2. Select the target HDT pool, and click the HDP Pool Vols tab.
3. Select the target HDT pool volume in the HDP Pool Vols list, and then click Edit External LDEV Tier Rank.
4. In the External LDEV Tier Rank menu, change the tier ranking (Low/Middle/High) to a different value.
5. (Optional) Update the task name and provide a description.
6. (Optional) Expand Schedule to specify the task schedule. You can schedule the task to run immediately or later. The default setting is Now. If scheduled for Now, select View task status to monitor the task after it is submitted.
7. Click Submit. If the task is to run immediately, the task begins.
8. You can check the progress and the result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result

The external LDEV tiering rank for the HDT pool volume has been changed on the target storage system.

Monitoring capacity and performance


Monitoring pool capacity

The storage system monitors the pool's free capacity in accordance with threshold values defined when you create pools. If the pool capacity reaches the threshold values, warnings are issued as SIMs and as SNMP traps to the open-systems host.

You can provision a larger virtual capacity beyond the pool capacity by using DP-VOLs of Dynamic Provisioning or Dynamic Tiering. However, when the pool's free capacity is depleted, you can lose access to DP-VOLs that require more pool capacity. For example, if the pool usage rate is 100% due to increased write operations, then I/O is not accepted and I/O will be stopped for a DP-VOL that failed to receive needed pool capacity. Therefore, you should carefully monitor the pool usage or pool free capacity, as well as the level of provisioned virtual capacity.

Monitoring pool usage levels

Several tools are available that show both the current pool usage rates and the changes over time for those usage rates. These tools help you monitor the pool free space and estimate when you will need to increase the pool capacity by adding pool volumes.

In the Device Manager - Storage Navigator Pool window, use the Virtual Volumes tab to view DP-VOL usage rates and pool usage rates.

If you have Hitachi Command Suite, you can monitor DP-VOL usage and pool usage rates using the time-variable graph.

Monitoring performance

You can monitor system performance using Performance Monitor. For more information, see the Performance Guide.

You can monitor information on pools and DP-VOLs using Command Control Interface (CCI). For more information, see the Command Control Interface User and Reference Guide.
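As a minimal CCI sketch (the instance number and LDEV ID are hypothetical placeholders; see the Command Control Interface documentation for the full option list), you can display pool and DP-VOL status as follows:

    # List pools with usage rates and thresholds
    raidcom get pool -IH0
    # Display DP pool details, including per-tier information for HDT pools
    raidcom get dp_pool -IH0
    # Display the attributes and capacity of a specific DP-VOL (LDEV 200)
    raidcom get ldev -ldev_id 200 -IH0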

The following activities help you to monitor and control performance of the DP-VOL. Collecting monitor information and subsequent tuning may increase throughput and the operating rates.
• Collecting monitor information.
Collecting the following monitor information helps you determine the pool load (including the access frequency and the access load upon data drives) and DP-VOL load (including the access frequency). You can then use this monitor information to tune the appropriate allocation.
○ Access frequency of DP-VOL, read hit rates, and write hit rates (using Performance Monitor)
○ Usage rates of parity groups of pools (using Performance Monitor)
○ Pool usage (using Hitachi Device Manager - Storage Navigator)
○ DP-VOL usage (using Hitachi Device Manager - Storage Navigator)
○ Dynamic Tiering performance monitoring of pool storage

• Possible tuning actions (without Dynamic Tiering).
The following techniques using ShadowImage or Hitachi Tiered Storage Manager will move a DP-VOL:
○ The DP-VOL is copied using ShadowImage from a pool with an I/O bottleneck. For more information, see the Hitachi ShadowImage® User Guide.
○ When normal volumes exist in the same parity group as the pool-VOL, Hitachi Tiered Storage Manager can be used to move the normal volume to another parity group that is not shared with a pool-VOL. For more information, see the Hitachi Command Suite User Guide (MK-90HC172).
○ ShadowImage copies a DP-VOL with a high I/O load to a pool with a lower access level to adjust the pool load.

Managing I/O usage rates example

The following figure illustrates an example of managing I/O usage rates.


Tuning with Dynamic Tiering

If Dynamic Tiering is active on your storage system, you can monitor access frequency and performance while Dynamic Tiering automatically relocates data to the most suitable data drive (tier). You can configure monitoring to be automatic or manual. In both cases, relocation of the data is automatically determined based on monitoring.

For details, see Dynamic Tiering and active flash on page 148.

Improving performance by monitoring pools

When the multi-tier pool is enabled and the performance of the pools and DP-VOLs is not as expected, use the following workflow to detect problems and improve performance.

1. Confirm the performance of pools and DP-VOLs

Using Performance Monitor, confirm the performance of pools and DP-VOLs. If the performance of pools and DP-VOLs is poor, go to Step 2.


2. Confirm the Dynamic Tiering setting

Using Hitachi Device Manager - Storage Navigator or Command Control Interface, confirm the Dynamic Tiering setting. If the values are set but do not conform to the design of pools or LDEVs, change the settings. If the values are set and conform to the design of pools or LDEVs, go to Step 3.

3. Confirm and improve the progress of tier relocation processing

Confirm the progress of tier relocation processing in Completed Rate (%) in the tier relocation log file. If the progress of the tier relocation process is low, there might be many pages where the page allocation is not optimized. In this case, change the Monitoring Mode or Cycle Time setting. The recommended values are as follows:

Monitoring Mode: If Period Mode is set, change to Continuous Mode.

Cycle Time: Set a longer period than the current setting.

If the recommended values are already set, or if the progress of tier relocation processing is still low even after the settings are changed, go to Step 4.

4. Confirm Performance Utilization of each tier

You can confirm the performance utilization of each tier in the View Tier Properties window or with the raidcom get dp_pool command. The performance utilization is the ratio (%) of the number of I/Os against the performance potential of the tier. For example, if the performance utilization is 90% or more, a workload greater than the processing capacity of the tier is being applied to the tier.

If Performance Utilization is 90% or more on one or more of the tiers, or if Performance Utilization is 60% on all tiers, add drives and expand the pool capacity.
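As an illustrative worked example (the numbers are hypothetical): if monitoring shows that a tier is processing an average of 450 IOPH against a performance potential of 500 IOPH, its performance utilization is 450 / 500 = 90%, which indicates the tier is at its processing limit and is a candidate for added drives.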

1. In the case that Performance Utilization is 90% or more on a tier:

Add drives to the tier where Performance Utilization is 90% or more and confirm the usage ratio of the capacity. The recommended pool volumes to add, by the drive type of the tier where Performance Utilization is 90% or more, are as follows:
• SSD: Add SSD pool volumes.
• SAS10K or SAS15K: If the performance is given greater priority than the bit-cost, add SSD pool volumes. If the bit-cost is given greater priority than the performance, add SAS10K or SAS15K pool volumes; however, add SSD pool volumes if the capacity utilization of the SAS tier (SAS10K or SAS15K) is low.
• SAS7.2K: If the performance is given greater priority than the bit-cost, add SAS (SAS10K or SAS15K) pool volumes. If the bit-cost is given greater priority than the performance, add SAS7.2K pool volumes; however, add SAS (SAS10K or SAS15K) pool volumes if the capacity utilization of the SAS 7.2K tier is low.

2. In the case that Performance Utilization is 90% or more on two or more tiers:

a. Collect the frequency distribution on the View Tier Properties window.

b. From the frequency distribution and the performance limit of each tier, determine the ratio of the most suitable tier capacity.

The performance limit of tier 2 is the maximum average IOPH on one page that the drive related to tier 2 can process. The performance limit of tier 3 is the maximum average IOPH on one page that the tier 3 drive can process. Based on these values, calculate the most suitable tier capacity for tier 1, tier 2, and tier 3.

The most suitable tier capacity for tier 1: the capacity from 0 GB to the capacity related to the performance limit of tier 2.

The most suitable tier capacity for tier 2: the capacity from the performance limit of tier 2 to the performance limit of tier 3.

The most suitable tier capacity for tier 3: the capacity from the performance limit of tier 3 to the maximum capacity of tier 3.

Then, based on the most suitable tier capacity for each tier, calculate the most suitable capacity ratio of tier 1, tier 2, and tier 3 as follows:

The most suitable tier capacity for tier 1 : the most suitable tier capacity for tier 2 : the most suitable tier capacity for tier 3
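As an illustrative worked example (all numbers hypothetical): suppose the performance limit of tier 2 is 10 IOPH per page and that of tier 3 is 1 IOPH per page. If the frequency distribution shows 20 TB of pages busier than 10 IOPH, 50 TB of pages between 1 and 10 IOPH, and 130 TB of pages below 1 IOPH, the most suitable tier capacities are 20 TB, 50 TB, and 130 TB, giving a most suitable capacity ratio of 2 : 5 : 13 for tier 1 : tier 2 : tier 3.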

c. Compare the ratio of the real tier capacity to the ratio of the most suitable tier capacity.
• If the ratios of the most suitable tier capacity and real tier capacity are different: add pool volumes to the tier that is lacking capacity.
• If the ratios of the most suitable tier capacity and real tier capacity are the same: if the performance is given greater priority than the bit-cost, add SSD or SAS (SAS10K or SAS15K) pool volumes. If the bit-cost is given greater priority than the performance, add SAS (SAS10K or SAS15K) pool volumes; however, add SSD pool volumes if the capacity utilization of the SAS tier (SAS10K or SAS15K) is low.

d. Add drives and expand the pool capacity.

3. In the case that Performance Utilization is 60% on all tiers:

Add drives in the upper tier and expand the pool capacity.

Working with SIMs

About SIMs

Dynamic Provisioning and Dynamic Tiering provide Service Information Messages (SIMs) to report the status of the DP-VOLs and pools. If an event associated with a pool occurs, a SIM and an SNMP trap are reported.

An example of a SIM condition: if the actual pool usage rate is 50.2%, only 50% is displayed because the capacity amount is truncated after the decimal point. If the threshold is set to 50%, a SIM and an SNMP trap are reported, even though the pool usage rate displayed on the GUI does not indicate that the threshold is exceeded.

SIM reference codes

The following information describes the SIM reference codes associated with Dynamic Provisioning or Dynamic Tiering. In each SIM code, xxx is the hexadecimal pool number. Each entry gives the SIM level, the event, the thresholds or values, whether the report is notified to the host, whether it needs completion operations from Hitachi Device Manager - Storage Navigator, and whether it needs operations by the operator.

• 620xxx (Moderate): Dynamic Provisioning pool usage level (Used (%)) exceeded the Warning Threshold. Thresholds or values: 1% to 100% (in 1% increments); default: 70%. Notified to the host: Yes. Needs completion operations from Device Manager - Storage Navigator: Yes. Needs operations by the operator: No.
• 622xxx (Moderate): Dynamic Provisioning pool is full. Value: 100%. Notified to the host: Yes. Needs completion operations: Yes. Needs operator operations: No.
• 623xxx (Moderate): Error occurred in the Dynamic Provisioning pool. Thresholds or values: not applicable. Notified to the host: Yes. Needs completion operations: No. Needs operator operations: Yes.
• 624000 (Moderate): No space in the shared memory. Thresholds or values: not applicable. Notified to the host: Yes. Needs completion operations: Yes. Needs operator operations: Yes.
• 625000 (Moderate): Dynamic Provisioning pool usage level (Used (%)) continues to exceed the highest pool threshold; SOM 734 must be enabled. Thresholds or values: highest pool threshold of Dynamic Provisioning. Notified to the host: Yes. Needs completion operations: Yes. Needs operator operations: No.
• 626xxx (Moderate): Dynamic Provisioning pool usage level (Used (%)) exceeded the Depletion Threshold. Thresholds or values: 1% to 100% (in 1% increments); default: 80%. Notified to the host: Yes. Needs completion operations: Yes. Needs operator operations: No.
• 627xxx (Moderate): Pool-VOL is blocked. Thresholds or values: not applicable. Notified to the host: Yes. Needs completion operations: No. Needs operator operations: Yes.
• 628000 (Service): Protect attribute of Data Retention Utility is set. Thresholds or values: not applicable. Notified to the host: Yes. Needs completion operations: Yes. Needs operator operations: Yes.
• 629xxx (Moderate): In the Dynamic Provisioning pool, the used capacity reserved for writing exceeded the Warning Threshold. This SIM is reported if the Dynamic Provisioning pool contains one or more LDEVs in a parity group with accelerated compression enabled. Thresholds or values: 1% to 100% (in 1% increments); default: 70%. Notified to the host: Yes. Needs completion operations: Yes. Needs operator operations: No.
• 62Axxx (Moderate): In the Dynamic Provisioning pool, the capacity reserved for writing is full. This SIM is reported if the Dynamic Provisioning pool contains one or more LDEVs in a parity group with accelerated compression enabled. Value: 100%. Notified to the host: Yes. Needs completion operations: Yes. Needs operator operations: No.
• 62B000 (Moderate): In the Dynamic Provisioning pool, the used capacity reserved for writing continues to exceed the highest pool threshold; SOM 734 must be enabled. This SIM is reported if the Dynamic Provisioning pool contains one or more LDEVs in a parity group with accelerated compression enabled. Thresholds or values: highest pool threshold of Dynamic Provisioning. Notified to the host: Yes. Needs completion operations: Yes. Needs operator operations: No.
• 62Cxxx (Moderate): In the Dynamic Provisioning pool, the used capacity reserved for writing exceeded the Depletion Threshold. This SIM is reported if the Dynamic Provisioning pool contains one or more LDEVs in a parity group with accelerated compression enabled. Thresholds or values: 1% to 100% (in 1% increments); default: 80%. Notified to the host: Yes. Needs completion operations: Yes. Needs operator operations: No.
• 62Dxxx (Moderate): In the Dynamic Provisioning pool, the used capacity reserved for writing exceeded the Prefixed Depletion Threshold. This SIM is reported if the Dynamic Provisioning pool contains one or more LDEVs in a parity group with accelerated compression enabled. Value: 90%. Notified to the host: Yes. Needs completion operations: Yes. Needs operator operations: No.

Automatic completion of a SIM

Some SIMs are completed automatically when you resolve the problem that caused the SIM. SOM 734 must be enabled for automatic completion of a SIM. Automatic completion of a SIM removes it from the system with no additional manual intervention. After the SIM is automatically completed, the status of the SIM changes to completed.

The following SIMs are automatically completed when you resolve theproblem causing the SIM.• SIMs 620xxx, 625000, 626xxx, 629xxx, 62B000, and 62Dxxx are

automatically completed if you increase pool capacity by adding pool-VOLsbecause the condition that caused the SIM removed.

• SIMs are automatically completed in the following cases:○ SIM 620xxx

Configuring thin provisioning 271Hitachi Virtual Storage Platform G1000, G1500, and F1500 Provisioning Guide for Open Systems

Page 272: Provisioning Guide for Open Systems · Provisioning Guide for Open Systems Hitachi Virtual Storage Platform G1000 and G1500 Hitachi Virtual Storage Platform F1500 Hitachi Data Retention

If the usage level (Used (%)) of DP pool number xxx falls below thewarning threshold, SIM is automatically completed.

○ SIM 625000If the usage level (Used (%)) of each DP pool in all pools of the storagesystem falls below the depletion threshold, SIM is automaticallycompleted.

○ SIM 626xxxIf the usage level (Used (%)) of DP pool number xxx falls below thedepletion threshold, SIM is automatically completed.

○ SIM 629xxxIf the physical capacity (Used (%)) of DP pool number xxx falls belowthe warning threshold, SIM is automatically completed.

○ SIM 62B000If the physical capacity (Used (%)) of each DP pool in all pools of thestorage system falls below the depletion threshold, SIM is automaticallycompleted.

○ SIM 62CxxxIf the physical capacity (Used (%)) of each DP pool in all pools of thestorage system falls below the depletion threshold, SIM is automaticallycompleted.

○ SIM 62DxxxIf the physical capacity (Used (%)) of DP pool number xxx falls belowthe prefixed depletion threshold which is fixed with 90 %, SIM isautomatically completed.
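If you manage pools with Command Control Interface (CCI), the same recovery can be scripted. The following is a minimal illustrative sketch, not taken from this guide: the instance number (-I0), pool ID, and LDEV ID are placeholder values, and the exact raidcom option names should be verified against the Command Control Interface Command Reference for your microcode version.

    # Check the usage level (Used %) of each Dynamic Provisioning pool
    raidcom get pool -I0

    # Add LDEV 300 to pool 1 as a pool-VOL. When the added capacity brings
    # the usage level back below the threshold, the corresponding SIM is
    # completed automatically (SOM 734 must be enabled).
    raidcom add dp_pool -pool_id 1 -ldev_id 300 -I0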

Manually completing a SIM

Some SIMs must be manually completed to clear them from the system. After the problem that caused the SIM is solved, you can manually complete the SIM; its status then changes to completed. If you complete the SIM before the underlying cause is solved, the SIM might recur.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• Perform the troubleshooting associated with the issued SIM. For details about troubleshooting, see Troubleshooting Dynamic Provisioning on page 406.

Procedure

1. Open the Complete SIMs window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click DP Pools, and then select System GUI.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Pools.
2. On the Pools tab, click More Actions, and then select Complete SIMs.
3. Click Finish. The Confirm window appears.
4. In the Task Name text box, type a unique name for the task or accept the default. You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.
5. Click Apply. If the Go to tasks window for status check box is selected, the Tasks window appears. You can check whether a SIM completes successfully in the Hitachi Device Manager - Storage Navigator main window.

Complete SIMs window

Item | Description
Task Name | Confirm the settings, type a unique task name or accept the default, and then click Apply. A task name is case-sensitive and can be up to 32 ASCII letters, numbers, and symbols. The default is <date>-<window-name>.

Enabling data direct mapping for external volumes, pools, and DP-VOLs

Creating external volumes with data direct mapping enabled

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Logical Devices window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click Volumes, and then select System GUI.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Logical Devices.
2. In the LDEVs pane of the Logical Devices window, click Create LDEVs.
3. In the Create LDEVs window, from the Provisioning Type list, select External as the provisioning type for the LDEV to be created.
4. In System Type, select Open to create open-system volumes.
5. In Data Direct Mapping, select Enable.
6. From the Emulation Type list, confirm that OPEN-V is selected.
7. Click Select Free Spaces.
8. In the Select Free Spaces window, in the Available Free Spaces table, select the free space to be assigned to the volumes. Do the following, if necessary:
   • To filter the free spaces shown, click Filter, specify the conditions, and then click Apply.
   • To specify the unit for capacity and the number of rows to view, click Options.
9. Click View Physical Location.
10. In the View Physical Location window, confirm where the selected free space is physically located, and then click Close.
11. In the Select Free Spaces window, if the selected free spaces have no issues, click OK.
12. In Number of LDEVs per External Volume, confirm that 1 is displayed.
13. In LDEV Name, specify a name for this LDEV.
    a. In Prefix, type the characters that will become the fixed characters at the beginning of the LDEV name. The characters are case-sensitive.
    b. In Initial Number, type the initial number that will follow the prefix.
14. In Format Type, select the format type for the LDEV from the list. For an external volume whose emulation type is an open-system type, select Normal Format or No Format. If the external volume can be used as it is, select No Format; the created LDEV can then be used without formatting. If the external volume needs to be formatted, select No Format and then format the volume with the external storage system, or select Normal Format.
15. Click Options to show more options.
16. In Initial LDEV ID, make sure that an LDEV ID is set. To confirm the used and unavailable numbers, click View LDEV IDs to open the View LDEV IDs window.
    a. In Initial LDEV ID in the Create LDEVs window, click View LDEV IDs. In the View LDEV IDs window, the vertical scale of the matrix represents the second-to-last digit of the LDEV number, and the horizontal scale represents the last digit. The LDEV IDs table shows the available, used, and disabled LDEV IDs: used LDEV numbers appear in blue, unavailable numbers in gray, and unused numbers in white. LDEV numbers that are unavailable might already be in use, or already assigned to another emulation group (a group of 32 LDEV numbers).
    b. Click Close.
17. In the Create LDEVs window, in SSID, type four hexadecimal digits (0004 to FEFF) for the SSID.
18. To confirm the created SSIDs, click View SSIDs to open the View SSIDs dialog box.
    a. In the Create LDEVs window, in Initial SSID, click View SSIDs. In the SSIDs window, the SSIDs table shows the used SSIDs.
    b. Click Close.
19. In the Create LDEVs window, from the MP Blade list, select an MP blade to be used by the LDEVs.
    • To assign a specific MP blade, select the ID of the MP blade.
    • If any MP blade can be assigned, click Auto.
20. Click Add. The created LDEVs are added to the Selected LDEVs table. If the required items are not registered, you cannot click Add.
21. If necessary, change the following LDEV settings:
    a. Click Edit SSIDs to open the SSIDs window. If the new LDEV is to be created in the CU, change the SSID to be allocated to the LDEV.
    b. Click Change LDEV Settings to open the Change LDEV Settings window.
22. If necessary, delete an LDEV from the Selected LDEVs table: select the LDEV to delete, and then click Remove.
23. Click Finish. The Confirm window appears. To continue the operation for setting the LU path and defining a logical unit, click Next.
24. In the Task Name text box, type a unique name for the task or accept the default. You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.
25. Click Apply. If the Go to tasks window for status check box is selected, the Tasks window appears.
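For scripted environments, an external volume can also be mapped with CCI. The following sketch is illustrative only: the path group, external group ID, port, external WWN, LUN, LDEV ID, and capacity are placeholders, and depending on the CCI version the data direct mapping setting itself might be available only from the GUI, so verify the options against the Command Control Interface Command Reference.

    # Map LUN 0 presented by the external storage (placeholder WWN) into
    # external volume group 1-1 through port CL5-A
    raidcom add external_grp -path_grp 1 -external_grp_id 1-1 \
        -port CL5-A -external_wwn 50060e80xxxxxxxx -lun_id 0 -I0

    # Create one LDEV (200) from the mapped external volume. For data
    # direct mapping, the LDEV must span the entire external LU; 10g is a
    # placeholder for that size.
    raidcom add ldev -external_grp_id 1-1 -ldev_id 200 -capacity 10g -I0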

Creating pools with data direct mapping enabled

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Create Pools window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click DP Pools, and then select System GUI.
   c. In the Pools window, click Create Pools.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Pools.
   c. Click Create Pools.
2. From the Pool Type list, select Dynamic Provisioning.
3. From the System Type list, select Open.
4. From the Multi-Tier Pool field, select Disable. You cannot select Enable if the storage system has only external volumes with the Cache Mode set to Disable.
5. From the Data Direct Mapping field, select Enable.
6. Follow the steps below to select pool-VOLs.
   a. From the Drive Type/RPM list, select a data drive type and RPM.
   b. From the RAID Level list, select a RAID level. If you select External Storage from the Drive Type/RPM list, a hyphen (-) appears and you cannot select the RAID level.
   c. Click Select Pool VOLs. The Select Pool VOLs window appears.
   d. In the Available Pool Volumes table, select the row of each pool-VOL to be associated with the pool, and then click Add. Select one or more volumes to use as pool-VOLs with system area; for a volume that can be used as a pool-VOL with system area, a hyphen (-) appears in the Attribute column. In addition, an external volume with the data direct mapping attribute can be selected as an option; for such a volume, Data Direct Mapping appears in the Attribute column. After creating the pool, you can also add external volumes with the data direct mapping attribute. You can select a value other than Middle from External LDEV Tier Rank and click Add to set another tier rank for an external volume. The selected pool-VOLs are registered in the Selected Pool Volumes table. Up to 1,024 volumes can be added to a pool. If LDEVs in an accelerated compression-enabled parity group are used as pool-VOLs, these LDEVs can be assigned to only one pool; they cannot be assigned to multiple pools as pool-VOLs.
      Tip: Perform the following steps if necessary:
      • Click Filter to open the menu, specify the filtering conditions, and click Apply.
      • Click Select All Pages to select all pool-VOLs in the table. To cancel the selection, click Select All Pages again.
      • Click Options to specify the volumes or the number of rows to be displayed.
   e. Click OK. The information in the Selected Pool Volumes table is applied to Total Selected Pool Volumes and Total Selected Capacity.
7. Enter the name in the Pool Name text box.
8. Click Options.
9. In the Initial Pool ID text box, type the number of the initial pool ID, from 0 to 127. If you specify a pool ID that is already registered, the smallest available ID is displayed by default instead of the value you entered. If no pool ID is available, no number is displayed.
10. In the Subscription Limit text box, enter an integer value from 0 to 65534 as the subscription rate (%) for the pool. If no number is entered, the subscription rate is set to unlimited.
11. In Protect V-VOLs when I/O fails to Blocked Pool VOL, select Yes or No. If Yes is selected, then when a pool-VOL is blocked, the DP-VOL is protected from read and write requests, and at the same time the access attribute of the DP-VOL is changed to the Protect attribute.
12. In Protect V-VOLs when I/O fails to Full Pool, select Yes or No. If Yes is selected, then when the pool usage reaches the full size, the DP-VOL is protected from read and write requests, and at the same time the access attribute of the DP-VOL is changed to the Protect attribute.
13. Click Add. The created pool is added to the Selected Pools table. If invalid values are set, an error message appears. The Pool Type, Pool Volume Selection, and Pool Name must be set; if the required items are not entered or selected, you cannot click Add. If you select a row and click Detail, the Pool Properties window appears. If you select a row and click Remove, a message appears asking whether you want to remove the selected row or rows; to remove them, click OK.
14. Click Next. The Create LDEVs window appears. If the Subscription Limit of the created pool is set to 0%, the Create LDEVs window does not appear.
15. Click Finish. The Confirmation window appears.
16. Check the settings in the Confirmation window, and then enter the task name in Task Name. If you select the pool radio button and then click Details, the Pool Properties window appears.
17. Click Apply. The tasks are registered. If the Go to tasks window for status check box is selected, the Tasks window appears.
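A Dynamic Provisioning pool can likewise be created from CCI. This is a hedged sketch with placeholder pool ID, pool name, and LDEV ID; the data direct mapping attribute of the pool may need to be set from the GUI, and the raidcom syntax should be checked against the Command Control Interface Command Reference.

    # Create pool 2 (named ddm_pool) with LDEV 200 as its first pool-VOL
    raidcom add dp_pool -pool_id 2 -pool_name ddm_pool -ldev_id 200 -I0

    # Verify that the pool was created and check its capacity and status
    raidcom get pool -I0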

Creating DP-VOLs with data direct mapping enabled

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Logical Devices window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click Volumes, and then select System GUI.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Logical Devices.
2. In the LDEVs pane of the Logical Devices window, click Create LDEVs.
3. In the Create LDEVs window, from the Provisioning Type list, select Dynamic Provisioning.
4. In the System Type option, select Open.
5. From the Data Direct Mapping field, select Enable.
6. From the Emulation Type list, confirm that OPEN-V is selected.
7. From the Multi-Tier Pool field, select Disable.
8. From the Available Volumes table, select an LDEV.
9. In the LDEV Name text box, enter the DP-VOL name. In the Prefix text box, enter the alphanumeric characters that are the fixed characters at the beginning of the DP-VOL name. The characters are case-sensitive. In the Initial Number text box, type the initial number that follows the prefix, which can be up to 9 digits. You can enter up to 32 characters, including the initial number.
10. Click Options.
11. In the Initial LDEV ID field, make sure that an LDEV ID is set. To confirm the used and unavailable numbers, click View LDEV IDs to display the View LDEV IDs window. In the table, used LDEV numbers appear in blue, unavailable numbers in gray, and unused numbers in white. LDEV numbers that are unavailable might already be in use, or already assigned to another emulation group (a group of 32 LDEV numbers).
12. In the Initial SSID text box, type the 4-digit hexadecimal SSID (0004 to FFFE). To confirm the created SSIDs, click View SSID to display the View SSID window.
13. From the Cache Partition list, select a CLPR.
14. From the MP Blade list, select an MP blade to be used by the LDEVs. To assign a specific MP blade, select the ID of the MP blade. If any MP blade can be assigned, click Auto.
15. If necessary, change the settings of the V-VOLs.
    • Click Edit SSIDs to open the Edit SSIDs window.
    • Click Change LDEV Settings to open the Change LDEV Settings window.
16. If necessary, delete a row from the Selected LDEVs table: select the row to be deleted, and then click Remove.
17. Click Add. The created V-VOLs are added to the Selected LDEVs table. If invalid values are set, an error message appears. The Provisioning Type, System Type, Emulation Type, Pool Selection, Drive Type/RPM, RAID Level, LDEV Capacity, and Number of LDEVs fields must be set; if these required items are not registered, you cannot click Add.
18. Click Finish. The Confirm window appears. To continue the operation for setting the LU path and defining the LUN, click Next.
19. In the Task Name text box, enter the task name. You can enter up to 32 ASCII characters and symbols in all, with the exception of: \ / : , ; * ? " < > |. The value "yymmdd-window name" is entered by default.
20. Click Apply. If the Go to tasks window for status check box is selected, the Tasks window appears.
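The equivalent DP-VOL creation can also be scripted with CCI. This minimal sketch uses placeholder values (pool 2, LDEV 400, 100 GB, host group 0 on port CL1-A); note that for a DP-VOL with data direct mapping the volume capacity is fixed to the size of the mapped external volume, and the exact options should be verified against the Command Control Interface Command Reference.

    # Create a 100 GB DP-VOL (LDEV 400) in pool 2
    raidcom add ldev -pool 2 -ldev_id 400 -capacity 100g -I0

    # raidcom add ldev is asynchronous; confirm that the task completed
    raidcom get command_status -I0

    # Define an LU path so hosts in host group 0 of port CL1-A see LUN 0
    raidcom add lun -port CL1-A-0 -ldev_id 400 -lun_id 0 -I0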

Editing the data direct mapping attribute for a pool

Before you begin
The Storage Administrator (Provisioning) role is required to perform this task.

Note: To change a pool with data direct mapping enabled to a Dynamic Tiering or active flash pool, do the following:
1. For the target pool, open the Edit Pools window, and then select Disable in the Data Direct Mapping field.
2. Apply the setting to the storage system.
3. For the target pool, open the Edit Pools window, and then select Enable in the Multi-Tier Pool field.
4. Apply the setting to the storage system.

Procedure

1. Open the Pools window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click DP Pools, and then select System GUI.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Pools.
2. In the Pools table, click the row of the pool whose data direct mapping attribute you want to change.
3. Do one of the following to display the Edit Pools window:
   • Click More Actions and select Edit Pools.
   • Click Actions > Pool > Edit Pools to open the window.
4. Select the Data Direct Mapping check box.
5. Select Enable or Disable.
6. Click Finish. The Confirm window opens.
7. In the Task Name text box, type a unique name for the task or accept the default. You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.


6  Configuring access attributes

After provisioning your system, you can assign access attributes to open-system volumes to protect the volumes against read, write, and copy operations and to prevent users from configuring LU paths and command devices. Data Retention Utility software is required to assign access attributes to volumes.

□ About access attributes

□ Working with access attributes


About access attributes

By default, open-systems volumes are subject to read and write operations by open-systems hosts. With open-systems volumes in this default condition, data might be damaged or lost if an open-systems host performs erroneous write operations. In addition, confidential data on open-systems volumes might be stolen if a malicious operator performs read operations on open-systems hosts.

Therefore, it is recommended that you change the default read and write conditions by assigning an access attribute to each logical volume. Access attributes can be set to read/write, read-only, or protect.

By assigning access attributes, you can:
• Protect a volume against both read and write operations from all hosts.
• Protect a volume against write operations from all hosts, but allow read operations.
• Protect a volume against erroneous copy operations, but allow other write operations.
• Prevent other users from configuring LU paths and command devices.

One of the following access attributes can be assigned to each logical volume:
• Read/write: If a logical volume has the read/write attribute, open-systems hosts can perform both read and write operations on the logical volume. You can use replication software to copy data to logical volumes that have the read/write attribute. However, if necessary, you can prevent copying data to logical volumes that have the read/write attribute. All open-systems volumes have the read/write attribute by default.
• Read-only: If a logical volume has the read-only access attribute, open-systems hosts can perform read operations but cannot perform write operations on the logical volume.
• Protect: If a logical volume has the protect access attribute, open-systems hosts cannot access the logical volume; they can perform neither read nor write operations on it.

Access attribute requirements

To assign access attributes, the Hitachi Data Retention Utility software must be installed.

The Hitachi Data Retention Utility software operates in the secondary window of Hitachi Device Manager - Storage Navigator. For details about the settings for the secondary window, see the System Administrator Guide.

Access attributes and permitted operations

Access attribute | Read operations from hosts | Write operations from hosts | Specified as P-VOL | Specified as S-VOL
Read/Write | Yes | Yes | Yes | Yes
Read-only | Yes | No | Depends on the replication software | No
Protect | No | No | Depends on the replication software | No
Read/Write and S-VOL disable | Yes | Yes | Yes | No

Access attribute restrictions

Some restrictions apply when you use the following VSP G1000 products or functions on a volume that has an access attribute assigned to it.

Virtual LUN
• You cannot convert volumes that do not have the read/write attribute into spaces.
• You cannot initialize customized volumes that do not have the read/write attribute.

Command Control Interface
• You can use Command Control Interface to make some Data Retention Utility settings. You can view some of the Command Control Interface settings in the Data Retention Utility user interface.
• While you are viewing the Data Retention window, another user might be using CCI to change an access attribute of a volume. If the CCI user changes an access attribute of a volume while you are viewing the Data Retention window, you will be unable to change the access attribute of that volume by using Data Retention Utility; if you attempt to do so, an error occurs. If the error occurs, refresh the display, and then retry changing the access attribute of the volume.

Automatically started software

If any software that can start automatically is enabled, you must do one of the following:
• Perform Data Retention Utility operations when the program is not running.
• Cancel the setting of the program start time.

Some software is likely to start automatically at the time specified by the user. For example, if a Volume Migration user or a Performance Monitoring user specifies the time for starting the monitor, the monitor will automatically start at the specified time.

Access attributes workflow

The access attribute workflow includes the following steps:
1. Changing an access attribute to read-only or protect on page 287
2. Changing an access attribute to read/write on page 288
3. Enabling or disabling the expiration lock on page 289
4. Disabling an S-VOL on page 290
5. Reserving volumes on page 291

Working with access attributes

Assigning an access attribute to a volume

If you want to protect volumes against both read and write operations from hosts, change the access attribute to protect. To protect volumes against write operations from hosts while allowing read operations, change the access attribute to read-only. In both cases, if you set the attribute on a volume using the GUI, S-VOL Disable is automatically set to prevent data in the volume from being overwritten by replication software. If you use Command Control Interface to set the attribute on a volume, you can select whether S-VOL Disable is set. If you set the Protect attribute on a volume when the Dynamic Provisioning pool is full, S-VOL Disable is not set on the volume.

After you change an access attribute to read-only or protect, the access attribute cannot be changed back to read/write for a certain period of time. You can specify the length of this period (called the retention term) when changing the access attribute to read-only or protect. The retention term can be extended but cannot be shortened.

During the retention term:
• Read-only access can be changed to protect, or protect can be changed to read-only.
• If you need to change an access attribute to read/write, you must ask maintenance personnel to do so.

After the retention term is over:
• The access attribute can be changed to read/write.
• The access attribute remains read-only or protect until it is changed back to read/write.

Changing an access attribute to read-only or protect

When changing an access attribute to read-only or protect, observe the following:
• Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. Select Modify from the Data Retention secondary window to set access attributes and prevent other users or programs from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.
• Do not assign an access attribute to a volume if any job is manipulating data on the volume. If you assign an access attribute to such a volume, the job might end abnormally.
• The emulation type of the volume must be one of the following: OPEN-3, OPEN-8, OPEN-9, OPEN-E, OPEN-K, OPEN-L, OPEN-V
• The volume must not be one of the following:
  ○ A volume that does not exist
  ○ A volume that is configured as a command device
  ○ A TrueCopy secondary volume*
  ○ A Universal Replicator secondary volume* or journal volume
  ○ A ShadowImage secondary volume*
  ○ A Thin Image secondary volume*
  ○ A pool volume
  ○ A Thin Image virtual volume
  ○ A volume assigned from an accelerated compression-enabled parity group
*Note: The access attribute of secondary volumes may be changed depending on the pair status.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Data Retention window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click the target storage system, and then select Other Functions.
   c. Click Actions > Other Function > Data Retention.
   In Device Manager - Storage Navigator: In the main window, click Actions > Other Function > Data Retention.
2. Click to change to Modify mode.
3. Select an LDKC number in the LDKC list, select the group that the CU belongs to in the CU Group list, and then click a CU in the tree.
4. Right-click a volume whose access attribute you want to change. You can select multiple volumes.
5. Click Attribute, and then select Read Only or Protect.
6. In the Term Setting dialog box, specify the retention term. During this period, the access attribute cannot be changed to read/write. You can enter the number of years and days, or select Unlimited. The retention term can be extended but cannot be shortened.
   • years: Specify the number of years within the range of 0 to 60. One year is counted as 365 days, whether or not the year is a leap year.
   • days: Specify the number of days within the range of 0 to 21900.
   For example, if 10 years 5 days or 0 years 3655 days is specified, the access attribute of the volume cannot be changed to read/write for the next 3,655 days.
7. Click OK to close the dialog box.
8. In the Data Retention window, click Apply to apply the setting. To extend the retention term later, open the Data Retention window, right-click the volume, and then select Retention Term.

Changing an access attribute to read/write

Before changing the access attribute from read-only or protect to read/write, consider the following:
• Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. Select Modify from the Data Retention secondary window to set access attributes and prevent other users or programs from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.
• Do not assign an access attribute to a volume if any job is manipulating data on the volume. If you assign an access attribute to such a volume, the job might end abnormally.
• Make sure that the retention term has expired. If it has expired, the Retention Term column in the Data Retention window shows 0. To change the access attribute to read/write within the retention term, contact customer support.
• Make sure that Expiration Lock indicates Disable > Enable. If it indicates Enable > Disable, changing to read/write is restricted by an administrator for some reason. Contact the administrator of your system to ask whether you can change the access attribute. For details, see the Provisioning Guide for your storage system.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Data Retention window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click the target storage system, and then select Other Functions.
   c. Click Actions > Other Function > Data Retention.
   In Device Manager - Storage Navigator: In the main window, click Actions > Other Function > Data Retention.
2. Click to change to Modify mode.
3. Select an LDKC number in the LDKC list, select the group that the CU belongs to in the CU Group list, and then click a CU in the tree.
4. Right-click a volume for which you want to change the access attribute (you can select multiple volumes), select Attribute, and then click Read/Write.
5. Click Apply to apply the setting.

Related tasks

• Enabling or disabling the expiration lock on page 289

Enabling or disabling the expiration lock

The expiration lock provides enhanced volume protection. Enabling the expiration lock ensures that read-only volumes and protect volumes cannot be changed to read/write volumes, even after the retention term ends. Disabling the expiration lock allows the access attribute to be changed to read/write after the retention term ends. This setting applies to all volumes in the storage system that have the read-only or protect attribute.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Data Retention secondary window to enable or disable the expiration lock, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Data Retention window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click the target storage system, and then select Other Functions.
   c. Click Actions > Other Function > Data Retention.
   In Device Manager - Storage Navigator: In the main window, click Actions > Other Function > Data Retention.
2. Click to change to Modify mode.
3. In the Data Retention window, verify which button appears beside Expiration Lock.
   • If Disable > Enable appears, go to the next step.
   • If Enable > Disable appears, the expiration lock is already enabled. You do not need to follow the rest of this procedure because attempts to change access attributes to read/write are already prohibited.
4. Click Disable > Enable. A confirmation message appears.
5. Click OK. The button changes to Enable > Disable, and the expiration lock is enabled. When the expiration lock is enabled, the access attributes of volumes cannot be changed to read/write even after the retention term ends. To disable the expiration lock, click Enable > Disable. The access attribute can then be changed to read/write after the retention term ends.

Disabling an S-VOL

Assigning a read-only or protect attribute is one way to prevent data in a volume from being overwritten by replication software. Volumes with the read-only or protect attribute are protected not only against these copy operations but also against any other form of write operation.

To protect a volume only from copy operations, ensure that the volume has the read/write attribute, and then assign the S-VOL Disable attribute to the volume. This setting prohibits the volume from being used as a secondary volume for copy operations.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Data Retention secondary window to disable an S-VOL, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• The volume must not be a volume assigned from an accelerated compression-enabled parity group.

Procedure

1. Open the Data Retention window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click the target storage system, and then select Other Functions.
   c. Click Actions > Other Function > Data Retention.
   In Device Manager - Storage Navigator: In the main window, click Actions > Other Function > Data Retention.
2. Click to change to Modify mode.
3. Select an LDKC number in the LDKC list, select the group that the CU belongs to in the CU Group list, and then click a CU in the tree.
4. Right-click a volume for which the S-VOL column shows Enable. You can select multiple volumes.
5. Select S-VOL > Disable.
6. Click Apply to apply the setting. To use a volume as an S-VOL, ensure that the volume has the read/write attribute, and then assign the S-VOL Enable attribute to the volume.

Reserving volumes

By default, all Hitachi Device Manager - Storage Navigator users with the proper permissions can make LU path settings and command device settings. If you perform the following procedure in Hitachi Device Manager - Storage Navigator, no user, including yourself, will be allowed to make LU path settings and command device settings on the specified volume. Command Control Interface users can still make LU path settings and command device settings on the volume.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Data Retention secondary window, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• The volume must not be a volume assigned from an accelerated compression-enabled parity group.

Procedure

1. Open the Data Retention window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click the target storage system, and then select Other Functions.
   c. Click Actions > Other Function > Data Retention.
   In Device Manager - Storage Navigator: In the main window, click Actions > Other Function > Data Retention.
2. Click to change to Modify mode.
3. In the Data Retention window, select an LDKC number in the LDKC list, select the group that the CU belongs to in the CU Group list, and then click a CU in the tree.
4. Select a volume where the Reserved column contains a hyphen. You can select multiple volumes.
5. Right-click the selected volume or volumes, and then select Reserved > Set.
6. Click Apply to apply the setting. To permit users to make LU path settings and command device settings on a volume, follow the steps above and select Reserved > Release. Then call customer support to ask for SVP settings.

Data Retention window

Use the Data Retention window to assign access attributes to open-system volumes.


Summary

Item | Description
LDKC | Select the LDKC that contains the desired CU groups.
CU Group | Select the CU group that contains the desired CUs from the following:
  • 00-3F: CUs from 00 to 3F appear in the tree.
  • 40-7F: CUs from 40 to 7F appear in the tree.
  • 80-BF: CUs from 80 to BF appear in the tree.
  • C0-FE: CUs from C0 to FE appear in the tree.
Tree | A list of CUs. Selecting a CU provides the selected CU information in the volume list on the right of the tree. The tree shows only the CUs that include volumes to which access attributes can actually be set.
Volume list | Lists information about the CU selected in the tree. See the table below for details.
Expiration Lock | Enables or disables enhanced volume protection.
  • Disable > Enable: Indicates the expiration lock is disabled. You can change an access attribute to read/write when the retention term is over.
  • Enable > Disable: Indicates the expiration lock is enabled. You cannot change an access attribute to read/write even when the retention term is over.
Apply | Applies settings to the storage system.
Cancel | Discards setting changes.

Volume list

The volume list provides information about the access attributes that are assigned to volumes.

Item | Description
LDEV | LDEV number. An icon beside the number indicates the access attribute: read/write, read-only, or protect. A symbol beside the LDEV number indicates:
  • #: an external volume
  • A: an LDEV with the ALU attribute
  • S: an LDEV with the SLU attribute
  • V: a virtual volume
  • D: a deduplication system data volume
  • X: a virtual volume used for Dynamic Provisioning
Attribute | Access attribute assigned to this volume. These attributes can also be assigned using Command Control Interface.
  • Read/Write: Both read and write operations are permitted on the logical volume.
  • Read-only: Read operations are permitted on the logical volume.
  • Protect: Neither read nor write operations are permitted.
Emulation | Volume emulation type.
Capacity | Capacity of each volume in GB, to two decimal places.
S-VOL | Indicates whether the volume can be specified as a secondary volume (S-VOL). You can also use CCI to specify whether each volume can be used as an S-VOL.
Reserved | Indicates the method that can be used to make LU path and command device settings.
  • Hyphen (-): Both CCI and Hitachi Device Manager - Storage Navigator can be used to make LU path and command device settings.
  • CCI: Only CCI can be used to make LU path and command device settings. Hitachi Device Manager - Storage Navigator cannot be used to do so.
Retention Term | Period (in days) during which you are prohibited from changing the access attribute to read/write. The retention term can be extended but cannot be shortened. During the retention term, you can change read-only to protect, or vice versa.
  • 500 days: Attempts to change the access attribute to read/write are prohibited for the next 500 days.
  • Unlimited: The retention term is extended with no limit.
  • 0 days: You can change the access attribute to read/write.
  Caution: In Data Retention Utility, you can increase the value of Retention Term, but you cannot decrease it.
Path | Number of LU paths.
Mode | Indicates the mode that the CCI user assigns to the volume. You cannot use Hitachi Device Manager - Storage Navigator to change modes; you must use CCI.
  • Zer: Zero Read Cap mode is assigned to the volume. If the Read Capacity command (a SCSI command) is issued to a volume in Zero Read Cap mode, the capacity of the volume is reported as zero.
  • Inv: Invisible mode is assigned to the volume. If the Inquiry command (a SCSI command) is issued to a volume in Invisible mode, the volume is reported as not existing; therefore, hosts cannot recognize the volume.
  • Zer/Inv: Both Zero Read Cap mode and Invisible mode are assigned to the volume.
  • Hyphen (-): No mode is assigned by CCI to the volume.
Operation | Target of the operation or the name of the operation. When no operation is being performed, No Operation appears.

Also shown are the volume icons and the total number of volumes with each access attribute.

Error Detail dialog box

If an error occurs with Data Retention Utility, the Error Detail dialog box appears, displaying error locations and error messages.

Item | Description
Location | Location where the error occurred. If the error relates to a volume, the LDKC number, CU number, and LDEV number (volume number) are shown.
Error Message | Provides the full text of the error message. For details about the solution, see Hitachi Device Manager - Storage Navigator Messages.
Close | Closes the Error Detail dialog box.

Related references

• Troubleshooting Data Retention Utility on page 411


7  Managing logical volumes

After provisioning your system, you can begin to manage open-system logical volumes. Managing logical volumes includes tasks such as configuring hosts and ports, configuring LU paths, setting LUN security on ports, and setting up Fibre Channel authentication. LUN Manager is required to manage logical volumes.

□ LUN Manager overview

□ Allocating and unallocating volumes

□ Managing logical units workflow

□ Configuring Fibre Channel ports

□ Overview for iSCSI

□ Managing hosts

□ Managing LUN paths

□ Releasing LUN reservation by host

□ Configuring LUN security

□ Setting Fibre Channel authentication


LUN Manager overview

LUN Manager function

LUN Manager provides Fibre security control and host groups (Fibre Channel interface), or iSCSI security control and iSCSI targets (iSCSI interface):
• The Fibre security control function controls access from specific hosts or specific commands.
• The host group function enables the storage system to make a suitable response to each connected host, even within the same port, by grouping the hosts connected to a port and setting the logical unit mapping and the host connection mode for each host group. Up to 255 host groups can be set per port.
• The iSCSI security function controls access from specific hosts or specific commands.
• The iSCSI target function enables the storage system to respond to each connected host, even within the same port, by grouping the connected hosts within a port and setting LUs and the host option mode for each group. Up to 255 iSCSI targets can be set for one port. Authentication can be performed for each target by using CHAP authentication concurrently.

LUN Manager operations

The VSP G1000, VSP G1500, and VSP F1500 storage systems can be connected to open-system server hosts of different platforms (for example, UNIX servers and PC servers). To configure your storage system for operation with open-system hosts, use LUN Manager to configure logical volumes and ports.

One of the important tasks when configuring logical volumes is to define I/O paths from hosts to logical volumes. When paths are defined, the hosts can send commands and data to the logical volumes and can receive data from them.

After the system begins operating, you might need to modify the system configuration. For example, if hosts or drives are added, you will need to add new I/O paths. You can modify the system configuration with LUN Manager while the system is running; you do not need to restart the system after modifying the configuration.

Fibre Channel operations

After open-system hosts and the storage system are physically connected by cables, hubs, and so on, use LUN Manager to establish I/O paths between the hosts and the logical volumes. This defines which host can access which logical volume. Logical volumes that can be accessed by open-system hosts are referred to as logical units (LUs). The paths between the open-system hosts and the LUs are referred to as LU paths.

Before defining LU paths, you must classify server hosts into host groups. For example, if Linux hosts and Windows hosts are connected to the storage system, you must create one host group for the Linux hosts and another host group for the Windows hosts. Then you must register the host bus adapters of the Linux hosts in the Linux host group, and the host bus adapters of the Windows hosts in the Windows host group.

A host group can contain only hosts that are connected to the same port; it cannot contain hosts that are connected to different ports. For example, if two Windows hosts are connected to port 1A and three Windows hosts are connected to port 1B, you cannot register all five Windows hosts in one host group. You must register the first two Windows hosts in one host group, and then register the remaining three Windows hosts in another host group.

After server hosts are classified into host groups, you associate the host groups with logical volumes. The following figure illustrates an LU path configuration in a Fibre Channel environment: host group hg-lnx is associated with three logical volumes (00:00:00, 00:00:01, and 00:00:02), and LU paths are defined between the two hosts in the hg-lnx group and the three logical volumes. A CCI sketch of this configuration follows.
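Host groups and LU paths can also be configured with CCI. The minimal sketch below mirrors the hg-lnx example above; the port, host-mode name, WWN, and LDEV numbers are placeholders, and the raidcom syntax (including the host mode operand) should be verified against the Command Control Interface Command Reference.

    # Create host group hg-lnx on port CL1-A and set a Linux host mode
    # (the host-mode name is an example; valid names vary by CCI version)
    raidcom add host_grp -port CL1-A -host_grp_name hg-lnx -I0
    raidcom modify host_grp -port CL1-A hg-lnx -host_mode LINUX -I0

    # Register the WWN of each Linux host bus adapter (placeholder WWN)
    raidcom add hba_wwn -port CL1-A hg-lnx -hba_wwn 210000e08b0256f8 -I0

    # Define LU paths from hg-lnx to LDEVs 0-2 (00:00:00 to 00:00:02)
    raidcom add lun -port CL1-A hg-lnx -ldev_id 0 -lun_id 0 -I0
    raidcom add lun -port CL1-A hg-lnx -ldev_id 1 -lun_id 1 -I0
    raidcom add lun -port CL1-A hg-lnx -ldev_id 2 -lun_id 2 -I0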


You can define paths between a single server host and multiple LUs. The figure shows that each of the two hosts in the host group hg-lnx can access the three LUs.

You can also define paths between multiple server hosts and a single LU. The figure shows that the LU identified by the LDKC:CU:LDEV number 00:00:00 is accessible from the two hosts that belong to the hg-lnx host group.

The figure also shows that the LUs associated with the hg-lnx host group are addressed by the numbers 0000 to 0002. The address number of an LU is referred to as a LUN (logical unit number). When software manipulates LUs, the software uses LUNs to specify the LUs to be manipulated.

You can add, change, and delete LU paths while the system is in operation. For example, if new disks or server hosts are added to your storage system, you can add new LU paths. If an existing server host is to be replaced, you can delete the LU paths that correspond to the host before replacing it. You do not need to restart the system when you add, change, or delete LU paths.

If a hardware failure (such as a CHA failure) occurs, some LU paths might be disabled and some I/O operations stopped. To avoid such a situation, you can define alternate LU paths; if one LU path fails, the alternate path takes over the host I/O, as the sketch below illustrates.
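Continuing the hg-lnx sketch above, and assuming the same host group also exists on a second port, an alternate path is simply a second LU path to the same LDEV through a different channel adapter; host multipath software then fails over between the two. The values are again placeholders, to be checked against the Command Control Interface Command Reference.

    # Same LDEV (0), same LUN (0), two different ports = an alternate path
    raidcom add lun -port CL1-A hg-lnx -ldev_id 0 -lun_id 0 -I0
    raidcom add lun -port CL2-A hg-lnx -ldev_id 0 -lun_id 0 -I0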


Rules, restrictions, and guidelines for managing LUs

Rules• In a Fibre Channel environment, up to 2,048 LU paths can be defined for

one host group, and up to 2,048 LU paths can be defined for one port.• In an iSCSI environment, up to 2,048 LU paths can be defined for one

iSCSI target, and up to 2,048 LU paths can be defined for one port.• Up to 255 host groups can be created for one Fibre Channel port.• Up to 255 iSCSI targets can be created for one iSCSI port.• For an LDEV with the ALU attribute, you can define the LU path to only one

host group.• For an LDEV with the ALU attribute, you can define the LU path to only one

iSCSI target.

Restrictions• You cannot define an LU path to the following types of volumes:

○ Journal volumes○ Pool volumes○ External volumes with the data direct mapping attribute○ LDEVs created from an accelerated compression-enabled parity group○ Deduplication system data volumes

• When using iSCSI, you cannot define an LU path to multi-platformvolumes.

• When defining LU paths, you must not use Command Control Interfaceand Hitachi Device Manager - Storage Navigator at the same time. If bothprograms are used simultaneously, operations might not be performed inthe expected order, and the storage configuration might be definedincorrectly.

• If an LDEV of the ALU attribute is binding to LDEVs with the SLU attribute,the LU path cannot be removed.

• To define an LU path between a port and an LDEV that has the T10 PIattribute enabled, the port must have T10 PI mode enabled.

Guidelines
• Queue depth: To ensure smooth processing at the ports and the best average performance, the recommended queue depth setting (max tag count) for the storage systems is 2,048 per port and 32 per LDEV. Other queue depth settings, higher or lower than these recommended values, can provide improved performance for certain workload conditions. (A short arithmetic sketch of these limits follows this list.)

Caution: Higher queue depth settings (greater than 2,048 per port) can impact host response times or cause failures such as job abend. Exercise caution when modifying the recommended queue depth settings.


• If you attempt to apply many settings in the LUN Manager windows, the SVP might be unable to continue processing. Therefore, you should make no more than approximately 1,000 settings. Note that many settings are likely to be made when defining alternate paths, even though only two commands are required for defining alternate paths.

• Do not perform the following when host I/O is in progress and hosts are in reserved status (mounted):
○ Remove LU paths
○ Disable LUN security on a port (see Disabling LUN security on a port on page 376)
○ Change the data transfer speed for Fibre Channel ports
○ Change AL-PAs or loop IDs
○ Change settings of fabric switches
○ Change the topology
○ Change the host modes
○ Remove host groups
○ Remove iSCSI targets
○ Set command devices
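As promised above, a short arithmetic sketch of the recommended queue depth limits. The helper is illustrative only; the constants are the recommendations from the guideline.

```python
# Hypothetical sketch: sizing host queue depth against the recommended
# limits (2,048 tags per port, 32 per LDEV). With the per-LDEV
# recommendation, one port sustains full queues on 2048 / 32 = 64 LDEVs.
PORT_MAX_TAGS = 2048
LDEV_QUEUE_DEPTH = 32

def port_tag_budget(num_ldevs: int, per_ldev_depth: int = LDEV_QUEUE_DEPTH) -> int:
    """Total outstanding tags if every LDEV on the port runs at full depth."""
    return num_ldevs * per_ldev_depth

for n in (32, 64, 100):
    tags = port_tag_budget(n)
    status = "OK" if tags <= PORT_MAX_TAGS else "exceeds port recommendation"
    print(f"{n} LDEVs x {LDEV_QUEUE_DEPTH} = {tags} tags ({status})")
```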

Allocating and unallocating volumes

This module describes volume allocation, provides information about volume allocation requirements, describes multiple ways in which you can allocate volumes, and provides related procedures for allocating volumes and editing volume or host access.

About allocating volumes

Volume allocation makes storage capacity available to host applications and file servers. Hosts and file servers must already be registered before volume allocation.

Depending on your registered storage systems, volumes can be allocated using basic volumes, pool volumes, or volumes from a tier. Basic volumes are volumes from a parity group. Any storage system can provide basic volumes. Allocating DP pool volumes involves grouping resources on storage systems that support this feature. DP pools must exist prior to volume allocation from a DP pool. To allocate volumes from a tier, a Tiered Storage Manager license is required, and tier policies must be established for storage systems that support tier policy configuration.

The following rules and behaviors apply when allocating volumes:
• Settings that are assigned when allocating volumes to a host become the default settings for the next time you allocate a volume to the same host. You can change these settings during volume allocation.


• If a variety of volumes with different characteristics have been allocated to a host, when you allocate a new volume, you can select an existing volume to set the volume allocation criteria.

• When allocating a volume, if no volumes match the specified requirements, new volumes are created from unused capacity and allocated to the host. When a basic volume is created, the volume is also formatted at the same time.

• You can use keyword or criteria-based searches to find existing unallocated volumes that meet your requirements.

• When you allocate volumes to a host, LUN paths are assigned automatically. LUN paths are storage port to host port mappings that provide host access to volumes. The host port is a host bus adapter (HBA) or an iSCSI name. WWN nicknames can be displayed to confirm target HBAs for a host while editing LUN paths during, or after, volume allocation.

• Volumes can be allocated on ports where LUN security is not enabled. All hosts with access to the port can access the volume.

• Volume allocation to a clustered host should be done by using logical groups. Using logical groups ensures that the same volumes are allocated to all hosts in the cluster.

• Volume allocation is not complete until you approve the volume allocation plan and submit the volume allocation task.

• When volumes are allocated to a host OS, you must create a file system on them and mount them before they can be used by host applications.

• When you are linking with NAS Platform v11.3 or later and using VSP G400, G600, G800, and VSP F400, F600, F800 with NAS modules, and volumes are allocated for creating or expanding storage pools, it is recommended that you allocate volumes using the Create Storage Pool or Expand Storage Pool dialog boxes. Device Manager can automatically specify a volume and path configuration for allocating volumes that follows the best practices for configuring storage pools.

• When volumes are allocated to a NAS Platform F or Data Ingestor, and Hitachi File Services Manager v3.2 is installed, you can create a file system and allocate the volume by using the Create File System dialog box.

Note: Before allocating volumes, review the available volume allocation methods. Understanding the available starting points for volume allocation will enable you to perform volume allocation in a way that best suits your requirements, and will help you understand the Allocate Volumes dialog box and the Define Clustered-Host Storage dialog box.

Volume allocation methods

When you allocate volumes, you can use the Allocate Volumes dialog box or the Define Clustered-Host Storage dialog box.


Begin by selecting resources, such as volumes or hosts, and then click Allocate Volumes, which opens the Allocate Volumes dialog box.

Alternatively, click Allocate Volumes without first selecting a resource, such as a volume or host.

There are multiple ways to begin allocating volumes. The visible fields in the Allocate Volumes dialog box vary depending on the starting point and whether you have selected a resource to begin the procedure. For example, if you select resources first, the Allocate Volumes dialog box prompts you for less information.

If you are linking with NAS Platform v11.3 or later and allocating volumes to a file server when creating or expanding storage pools, the following is recommended:
• Use the Create Storage Pool dialog box or the Expand Storage Pool dialog box instead of the Allocate Volumes dialog box to create or expand storage pools and allocate volumes at the same time.

Device Manager can automatically specify a volume and path configuration for allocating volumes that follows the best practices for configuring storage pools.

When using the Allocate Volumes dialog box, the following conditions apply when you begin volume allocation using these available methods and starting points:
• From the General tasks pane
On the Resources or Mobility tab, from General Tasks, click Allocate Volumes. Because no resource was selected first, you must specify host, volume, and other volume allocation requirements.
• Selecting hosts or file servers
From the Resources tab, select one or more hosts or file servers, and click Allocate Volumes. The dialog will display the selected host or file server names instead of prompting.
• Selecting volumes
From the Resources tab, select one or more volumes, and click Allocate Volumes. The dialog will prompt for the host or file server name, but will not prompt for volume criteria.
• Selecting clustered hosts
Configuring clustered hosts in a logical group helps ensure that volume allocations are consistent for all members of the cluster. From the Resources tab, select a logical group, select all host members of the group, and then click Allocate Volumes. If clustered hosts are not in a logical group, from General Tasks or Actions, click Allocate Volumes and select multiple hosts. You can also use the Resources tab to locate clustered hosts, and then click Allocate Volumes. The use of logical groups is recommended to avoid host selection errors.

• Searching for volumes


You can search for volumes that meet specific criteria (such as storage system, specific allocation status, volume type, drive type, drive speed, or chip type), select one or more volumes from the search results, and click Allocate Volumes. You can also search using full or partial text keywords such as storage system name or host name, and then allocate volumes to the selected resource. Searching eliminates the need to manually navigate to the resource.

• Using existing volume settings
From the Resources tab, select an allocated volume that has the desired attributes and criteria for the new volume allocation, and click Allocate Like Volumes.

When you use the Define Clustered-Host Storage dialog box for Fibre Channel and Fibre Channel over Ethernet (FCoE) connections to create a cluster configuration using existing hosts, or to add a new host to an existing host group or cluster, ensure that you allocate the same volumes that are assigned to the existing hosts in the host group to the new host.

Prerequisites for allocating volumes

Before allocating volumes to a host or file server, you must verify that the host, file server, and storage systems are registered in Hitachi Command Suite.

If you want to select volumes from a tier, a Tiered Storage Manager license must be enabled.

In addition, determine:
• The target host or file server
• The volume type, count, capacity, and performance characteristics
• An appropriate storage system
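The following sketch shows one way to capture the decisions above before you start. The AllocationRequest record and its field names are invented for illustration and are not part of Hitachi Command Suite.

```python
# Hypothetical sketch: capturing allocation decisions as a simple request
# record before opening the Allocate Volumes dialog box.
from dataclasses import dataclass

@dataclass
class AllocationRequest:
    target_host: str      # registered host or file server
    volume_type: str      # "Basic Volume", "Dynamic Provisioning", or "Dynamic Tiering"
    count: int            # number of volumes to allocate
    capacity_gb: int      # capacity per volume
    storage_system: str   # a specific system, or "Any"

req = AllocationRequest("host1", "Dynamic Provisioning", 3, 100, "VSP G1500")
total = req.count * req.capacity_gb   # the dialog shows No. of Volumes * Capacity
print(f"{req.count} x {req.capacity_gb} GB = {total} GB total from {req.storage_system}")
```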

Allocating volumes from general tasks

The General Tasks list is conveniently located for volume allocation (and other frequent tasks). Because you have not selected a specific resource (for example, a volume or a host), the dialog will prompt for all necessary information.

Procedure

1. On the Resources or Mobility tab, select Allocate Volumes.

Tip: If you do not see Allocate Volumes listed, click more... to see all menu items.


2. On the Allocate Volumes dialog box, specify volume allocation requirements.

3. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.

4. (Optional) Update the task name and provide a description.
5. (Optional) Expand Schedule to specify the task schedule.

You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.

6. Click Submit.
If the task is scheduled to run immediately, the process begins.

7. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result
A completed task indicates a successful volume allocation.
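This procedure, like the ones that follow, is a submit-then-monitor workflow. The sketch below models that pattern only; submit_task and task_status are invented names, not an HCS API.

```python
# Hypothetical sketch of the submit-then-monitor pattern in this procedure.
def submit_task(name: str, schedule: str = "Now") -> dict:
    """Stand-in for submitting the allocation task from the reviewed plan."""
    return {"name": name, "schedule": schedule, "status": "In Progress"}

def task_status(task: dict) -> str:
    """Stand-in for checking the task on the Tasks & Alerts tab."""
    task["status"] = "Completed"   # simulated completion
    return task["status"]

task = submit_task("AllocateVolumes-01")
if task["schedule"] == "Now" and task_status(task) == "Completed":
    print(f"Task {task['name']}: volume allocation succeeded")
```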

Allocating volumes to selected hosts

When one or more hosts are selected, the hosts are displayed in the Allocate Volumes dialog box. The dialog box prompts for all additional information.

Procedure

1. On the Resources tab, select Hosts.
2. Hosts are grouped by operating system. Click the operating system for the target hosts.

Tip: If you do not know the host OS, searching for the host may be the better method for locating the host and allocating volumes.

3. Select one or more hosts, and click Allocate Volumes.
4. In the Allocate Volumes dialog box, specify volume allocation requirements.
5. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.
6. (Optional) Update the task name and provide a description.
7. (Optional) Expand Schedule to specify the task schedule.

You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.

8. Click Submit.
If the task is scheduled to run immediately, the process begins.

9. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.


Result
A completed task indicates a successful volume allocation.

Allocating volumes to selected file servers

When a file server or file server cluster is selected, the file server is displayed in the volume allocation dialog. The dialog prompts for all additional information.

To allocate volumes to a file server or file server cluster:

Procedure

1. On the Resources tab, select File Servers, then select All File Servers.
2. In the Servers/Clusters list, select the row of the target file server or cluster (only one row can be selected), and click Allocate Volumes.
The Allocate Volumes dialog box will launch.

3. Specify your volume allocation requirements.
4. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.
5. (Optional) Update the task name and provide a description.
6. (Optional) Expand Schedule to specify the task schedule.

You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.

7. Click Submit.
If the task is scheduled to run immediately, the process begins.

8. You can check the progress and the result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Tip: If you are linking with NAS Platform v11.3 or later and allocating volumes to a file server when creating or expanding storage pools, the following is recommended:

Use the Create Storage Pool dialog box or the Expand Storage Pool dialog box instead of the Allocate Volumes dialog box to create or expand storage pools and allocate volumes at the same time.

Device Manager can automatically specify a volume and path configuration for allocating volumes that follows the best practices for configuring storage pools.


Tip: For the NAS Platform family, in cases such as node additions, if you want to allocate volumes to file server nodes that are in clusters, use the Define Clustered-Host Storage dialog box.

A completed task indicates a successful volume allocation.
9. To verify the volume allocation, select the file server or cluster to which the volumes were allocated, and then select the System Drives tab for the NAS Platform family, or the Volumes tab for NAS Platform F or Data Ingestor, to confirm that the volumes were allocated successfully.

Result
Volumes have been allocated to a file server or file server cluster, and verified.

Allocating selected volumes to hosts

When volumes are selected, they are displayed in the Allocate Volumes dialog box, which prompts you for all additional information to allocate selected volumes to one or more hosts.

Procedure

1. On the Resources tab, select Storage Systems.
2. Expand All Storage Systems, and select a specific storage system.
Links for volume groups are listed in the Detailed Information list on the application pane. Listed groups represent available pools, parity groups, and volumes.
3. Select a group from the Resources tree, or click a link from the Detailed Information list.
Individual pools, parity groups, and volumes are listed.
4. Select an appropriate resource. For example, select a pool or parity group with available capacity, or select an open-unallocated volume.

5. Click Allocate Volumes.

Tip: Open-reserved volumes cannot be allocated. Open-allocated volumes can be used to allocate like volumes.

6. On the Allocate Volumes dialog box, specify volume allocation requirements.
7. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.
8. (Optional) Update the task name and provide a description.
9. (Optional) Expand Schedule to specify the task schedule.

You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.


10. Click Submit.
If the task is scheduled to run immediately, the process begins.

11. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result
A completed task indicates a successful volume allocation.

Allocating volumes to clustered hosts

To prevent possible errors with cluster host selection, group hosts that belong to a cluster into a public logical group or a private logical group. Alternatively, you can select multiple hosts or file servers in the Allocate Volumes dialog box.

Tip: When you create a cluster configuration using existing hosts or when you add a new host to an existing host group or cluster, use the Define Clustered-Host Storage dialog box to allocate volumes to the new host. When using the Define Clustered-Host Storage dialog box, you can add the WWN of the new host to the same host group as the WWN of the existing hosts.

Before you begin

• Add clustered hosts into the desired logical group type.
• Confirm the names of the hosts that belong to the target cluster (if you have not already added clustered hosts into a logical group).

Procedure

1. On the Resources tab, select Logical Groups.
2. Expand the Public Logical Groups or Private Logical Groups root folder, and locate an appropriate logical group.
3. Locate the Hosts and Volumes tabs under the summary pane, and select all hosts for the cluster.

Note: You can allocate volumes to cluster hosts that are not in a logical group. However, you must ensure that your host selection is correct. Logical groups are strongly recommended.

4. Click Allocate Volumes.

Tip: To allocate volumes to cluster hosts, verify that all cluster host names are displayed in the Allocate Volumes dialog box to prevent an incorrect allocation.


5. On the Allocate Volumes dialog box, specify volume allocation requirements.
6. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.
7. (Optional) Update the task name and provide a description.
8. (Optional) Expand Schedule to specify the task schedule.

You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.

9. Click Submit.
If the task is scheduled to run immediately, the process begins.

10. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result
A completed task indicates a successful volume allocation.

The same volumes have been allocated to the individual hosts that belong to a cluster.

Allocating volumes by using a keyword search

A keyword search is a full or partial text search. For example, you can search for storage systems, hosts, volumes, parity groups, DP pools, and logical groups by entering a full or partial name. Note that file servers are not subject to search. Using a keyword search provides an alternate method of navigating to a resource to allocate volumes.

Procedure

1. Enter a value in the search box and press Enter on your keyboard. All Resources is the default selection criteria. You can limit the scope of searched resources by using the drop-down menu.

Note: As the number of searchable objects grows, identify a specific resource to search, for example Logical Groups, to decrease search time.

2. Click the link for the appropriate resource. For example, assuming you searched for hosts, from your search results, click the host name link. The appropriate location in the Resources tab is displayed. The host is effectively selected.

3. Click Actions > Allocate Volumes.
4. On the Allocate Volumes dialog box, specify your volume allocation requirements.
5. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.


6. (Optional) Update the task name and provide a description.
7. (Optional) Expand Schedule to specify the task schedule.

You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.

8. Click Submit.
If the task is scheduled to run immediately, the process begins.

9. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result
A completed task indicates a successful volume allocation.

Allocating volumes by using a criteria search

A criteria search lets you find volumes with specific attributes, such as drive type or RAID level, and allocate them.

Procedure

1. From the search box drop-down menu, select More Searches.
2. Specify the search criteria and execute your search. There are Basic and Advanced tabs presenting criteria options. Basic criteria for volume status, type, drive performance, RAID level, or capacity requirement should meet most needs. Advanced criteria are more specialized.

Tip: Searches can be saved for re-use, and can be saved as private or public. Note that saving or sharing a search requires the Tiered Storage Manager license.

3. From the search results, select the volumes and click Allocate Volumes.
4. On the Allocate Volumes dialog box, specify your host, and other volume allocation requirements.
5. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.
6. (Optional) Update the task name and provide a description.
7. (Optional) Expand Schedule to specify the task schedule.

You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.

8. Click Submit.
If the task is scheduled to run immediately, the process begins.

9. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.


Result
A completed task indicates a successful volume allocation.
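Conceptually, a criteria search is a filter over unallocated volumes. The sketch below is a toy model; the volume records and search helper are invented for illustration and are not Device Manager objects.

```python
# Hypothetical sketch: filtering unallocated volumes on attributes such as
# drive type and RAID level, as a criteria search does conceptually.
volumes = [
    {"ldev": "00:01:00", "drive": "SAS", "raid": "RAID5", "allocated": False},
    {"ldev": "00:01:01", "drive": "SSD", "raid": "RAID6", "allocated": False},
    {"ldev": "00:01:02", "drive": "SSD", "raid": "RAID6", "allocated": True},
]

def search(vols, **criteria):
    """Return unallocated volumes matching every given attribute."""
    return [v for v in vols
            if not v["allocated"]
            and all(v.get(k) == want for k, want in criteria.items())]

print(search(volumes, drive="SSD", raid="RAID6"))  # -> only the 00:01:01 volume
```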

Allocating volumes by using existing volume settings

If you have allocated at least one volume to a host, you can allocate new volumes for the host by using the attributes of an existing volume as the default for the Allocate Volumes dialog box. Your volume allocation can be exactly like, or similar to, the selected existing volume.

Procedure

1. On the Resources tab, select Hosts or Logical Groups.
2. Expand the tree for the host or logical group type (public logical groups or private logical groups), and select the desired host or logical group to display a list of existing volumes.
3. Select an existing volume that matches or is similar to your requirement, and click Allocate Like Volumes.
4. The selected host or logical group name displays, and you are prompted for other volume allocation requirements.
You can retain the existing settings, or change them as needed.

5. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.
6. (Optional) Update the task name and provide a description.
7. (Optional) Expand Schedule to specify the task schedule.

You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.

8. Click Submit.
If the task is scheduled to run immediately, the process begins.

9. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result
A completed task indicates a successful volume allocation.

Allocate Volumes dialog box

Allocating volumes is the process for making storage capacity available to host applications and file servers. Hosts and file servers must already be registered prior to volume allocation.

When you enter the minimum required information in this dialog box, the Show Plan button activates to allow you to review the plan. Click the Back button to modify the plan to meet your requirements.

The following table describes the dialog box fields, subfields, and field groups. A field group is a collection of fields that are related to a specific


action or configuration. You can minimize and expand field groups by clicking the double-arrow symbol (>>).

As you enter information in a dialog box, if the information is incorrect, errors that include a description of the problem appear at the top of the box.

Table 3 Allocate Volumes dialog box

Field Subfield Description

Host - If you select one or more hosts (including virtualization servers, or file servers) prior to clicking Allocate Volumes, the host names are displayed. If you did not select hosts, you are prompted to do so. The host drop-down list displays a single host, by name. Additionally, Select Hosts allows you to select multiple hosts, and displays related details such as the OS and WWN. For example, you can select cluster hosts for volume allocation.

The host drop-down list provides a special selection called Allocate without LUN security. When you enter adequate volume information, LUN Path Options expand to a list of storage ports with LUN security disabled. Select one or more ports, and click Add to add them to the Selected Storage Ports list as part of this dialog box. All hosts with LUN paths on the selected storage port have access to the volumes on the ports.

Allocation Type - To allocate global-active device pairs, select Global-Active Device, which will display Primary/Secondary tabs for configuring paired volumes. For all other volume allocations, select Standard. Global-Active Device is not displayed if a selected host is a file server.

No. of Volumes - Specify the number of volumes to allocate to the selected hosts.

For global-active device, this is the number of volumes to become paired volumes and allocated to the selected hosts.

Volume Capacity - Specify the volume size and select the unit of measure (for example, GB represents gigabytes).

The total capacity to be allocated (No. of Volumes * Capacity) is displayed.

Note: You can create a new volume when volume criteria, such as volume capacity, cannot be satisfied by existing volumes. This is indicated in Advanced Options > Creating Volume Settings > Advanced Options by formatting and stripe size options for the new volume.

Storage system - If you select a storage system prior to clicking Allocate Volumes, its name is displayed. Otherwise, select the storage system that is providing the volumes. Any storage system can provide basic volumes.

Volumes to be allocated from DP pools must be provided by a storage system that supports this feature. Allocating volumes from a tier requires configured tier policies for the storage system.

The Any option means that the volume can be sourced from any storage system that meets the criteria of the dialog box.


For global-active device, from the primary/secondary tabs, you must select primary and secondary storage systems that have been previously identified in the global-active device setup.

Virtual Storage Machine - This displays a list of one or more virtual storage machines from which you can choose during volume allocation. The virtual storage machines can also be viewed on the Administration tab by selecting Virtual Storage Machine.

When allocating global-active device paired volumes, from the primary/secondary tabs, you must select the virtual storage machine to be used by both the primary and secondary storage for access to virtualized resources. If the virtual storage machine does not exist, it can be created during the global-active device setup.

Volume type - Select the volume type (for example, Basic Volume, Dynamic Provisioning, or Dynamic Tiering). The displayed volume types are determined based on your selected storage system. If you do not see an expected volume type, check that you have selected the correct storage system.

The selected volume type affects the Advanced Options fields described below. For example, selecting a Basic Volume will populate Advanced Options > Volume Selection > Automatic with a default drive type, drive speed, chip type, RAID level, and parity group. These specifications may be altered. For example, if you change the drive speed, the parity group may change automatically. These changes are determined by available resources. You may also select an available tier from which to allocate the volume, or manually locate available unallocated volumes.

If you select the Dynamic Provisioning volume type, Advanced Options > Volume Selection > Automatic is populated with a DP pool instead of a parity group (and the specifications may be altered).

If you select Dynamic Tiering, the Volume Selection field displays. Volume selection is either automatic or manual (see the following section on the volume selection field).

The Any option means that the volume can be sourced from any available volume type that meets the capacity criteria, and is available from the storage system.

For NAS Platform versions earlier than 12.1, basic volumes are displayed by default. If either HDP (Dynamic Provisioning) or HDT (Dynamic Tiering) is selected, you must manually select the volume to assign (see the following Volume Selection, Manual option).

Volume location - Select Internal if you want your volume to come directly from your selected storage system, or External if volumes are mapped from external storage systems (virtualized volumes) and are adequate for your needs. The external option can appear for basic volumes and DP pools. When viewing a list of DP pools, the volume location column indicates whether it is internal or external.


Note: This field is replaced by Volume Selection when Dynamic Tiering is selected (see the previous Volume type section).

Volume Selection - This field displays only if Dynamic Tiering is selected (see Volume type above).

Note: This field is not the same field as the Volume Selection field defined in the following Advanced Options section.

Select Automatic to allow the system to configure volume allocation space for you from available DP pools.

Select Automatic and click Select Pool to list the available DP pools. Make an appropriate selection that corresponds to your capacity requirements, and click OK. Verify that the name is correctly displayed. When displayed under Advanced Options, you can tailor the Tiering Policy Setting (see the following section). When you submit the task, a volume is created for the host.

Select Manual to display the selected volumes list, select Add Volumes, and select Unallocated Volumes or Allocated Volumes. Choose an appropriate volume, click Add to update Selected Volumes, and click OK to return to volume allocation.

>> Advanced Options (See the following twelve fields for details) When you set the volume type to Basic Volume or Dynamic Provisioning, this displays the Tier, Automatic, and Manual options. When the volume type is Dynamic Tiering, the fields display as explained in Volume type.

The fields can be displayed or hidden when you click Advanced Options. These fields support explicit volume allocation (Tier or Manual) or volume allocation based on criteria such as drive type, drive speed, chip type, or RAID level.

Tiering Policy Setting - Displays only if Dynamic Tiering is selected as the volume type, and an HDT pool has been selected with Select Pool (see the previous Volume Selection section). You can select a specific tier policy for the volume to be allocated, or select All.

New Page Assignment Tier - For VSP G1000, VSP G1500, VSP F1500, VSP, or HUS VM storage systems, selecting this option specifies to which hardware tier the new page of an HDT volume is to be assigned with a specified priority. Within the hardware tiers for which the tiering policy is set, specify High for an upper-level hardware tier, Middle for a medium-level hardware tier, and Low for a low-level hardware tier.

Relocation Priority - For VSP G1000, VSP G1500, VSP F1500, VSP, or HUS VM storage systems, selecting this option specifies whether you want to prioritize the relocation of the data in HDT volumes.

Volume Selection Tier Note: This field is not the same field as the Volume Selection field that is defined when Volume Type is set to Dynamic Tiering.

If your selected storage system is configured with storage tiers, you can Select a Tier for volume allocation. If your selected storage system was Any, you can see which storage systems have tiers available using Select a Tier, and make a selection.

Automatic Device Manager automatically selects volumes based on the specified volume criteria.


For a Basic Volume, Device Manager selects a parity group based on the volume criteria. The selected parity group changes with changes in criteria. If desired, click Select Parity Groups to make your selection.

For Dynamic Provisioning volumes, the system selects a DP pool based on volume criteria. The selected DP pool changes with the changes in criteria. New volumes may be created or existing volumes may be used. If desired, click Select Pool to make your pool selection.

Drive type, speed, chip type, RAID level, and parity group or pool can be adjusted as desired, but the primary purpose of this field relies on the storage system to decide which resources to use for volume allocation.

Manual For Basic Volume and Dynamic Provisioning volumes, clicking Manual displays the Selected Volumes dialog box. Click Add Volumes to select basic volumes or Dynamic Provisioning volumes.

Volume Criteria Drive Type Indicates the type of drive (FMD, FMC, SSD, SAS, or SATA). The drive type selection can affect volume performance, and the drive type setting can affect the parity group or DP pool type choices.

Drive Speed Indicates the rotational speed (in RPM) of the drive type. This option is not displayed when an FMD, FMC, or SSD drive is selected in Drive Type. The drive speed selection can affect the parity group or DP pool type choices.

Chip Type Indicates the flash memory chip type of the physical drives. However, this option is only displayed if VSP G1000, VSP G1500, VSP F1500, Virtual Storage Platform, or HUS VM is the selected storage system and SSD is the selected drive type. If these two conditions are met, you can select one of three options: Any, SLC, or MLC.

RAID Level Changing the RAID level can change the parity group or DP pool.

The RAID levels and parity group configuration can vary depending on the selected storage system. For example, RAID6 may be supported on one storage system, but not on another. In addition, RAID6 is displayed only if RAID6 parity groups have been configured on the selected storage system.

Parity Groups Changing the volume criteria can change the system-selected parity group. If desired, click Select Parity Groups to make your selection.

Pool Click Select Pool to select the DP pool from which to allocate volumes. Changing the volume criteria can change the system-selected DP pool.

Creating Volume Settings LDEV ID Creating Volume Settings fields are only displayed when the entered volume criteria require that a new volume be created. See the previous definition of the Volume Capacity field for an example.

LDEV ID options for volume creation are displayed if VSP G1000, VSP G1500, VSP F1500, VSP, or HUS VM is the selected storage system, and the volume criteria are such that new


volumes need to be created. Select auto or manual to locate or identify an Initial LDEV ID for volume creation.

Format Type This field displays formatting methods that are available for the volume to be created, and available for basic volumes on the specified storage system. For example, you might see options for setting either a quick or a basic format.

Note that during a quick format, the load might become concentrated on some components, lowering the I/O performance of all hosts that are running in the target storage system.

Stripe Size This field displays stripe size options for the volume to be created, as supported by the specified storage system (for example, 64 KB).

Resource Group - This field is displayed only when allocating volumes for created users (those not associated with built-in accounts), and allows them to source the volume or volumes from multiple resource groups to which they have access rights, as configured for their account type.

Full Allocation - • For VSP G1000, VSP G1500, or VSP F1500, select Use existing settings to use the full allocation settings of the existing DP volumes. For newly created DP volumes, the full allocation settings are disabled by default.
• Select Enable to reserve pages of the specified capacity when allocating volumes.
• Select Disable to disable the full allocation settings.

Specify a new label - Volume labels are searchable, and therefore recommended as a way to find volumes.

Select the check box to add an LDEV label. If the target volume is an existing volume with a current label, the new label is applied to the volume.

Initial value - The smallest number in a sequence of numbers. The Initial value is not required, but can be useful for differentiation when allocating multiple volumes. For each volume, the number is assigned in ascending order of LDEV ID.

Reflect these labels to the storage system - Reflect these labels to the storage system is checked by default so that naming is consistent between HCS and the storage system itself. If the selected storage system does not support label setting, this item is not displayed.

CLPR for HDP volume - Select cache logical partition (CLPR) values from the list and assign this value to DP volumes when two conditions are met: the target storage system supports CLPR, and DP volumes are selected.

Select either of the following values: Automatically, which assigns the default value if it is a new volume (if it is an existing volume, the CLPR is unchanged), or CLPR, which assigns a CLPR number (0-9...) to the unallocated volume.

If you are using a VSP G400, VSP G600, or VSP G800 storage system with a NAS module and the firmware version is earlier than 83-03-25-XX/XX, do not use the CLPR named "NASSystemCLPR". If you use this CLPR, system performance might be severely affected.


Command Device - When allocating a volume intended to be a command device for a pair management server, select the Command Device check box and select Enabled. You may also enable or disable the following:
• Command Device Security
• User Authentication
• Device Group Definition

ALUA - To enable ALUA, do the following:

• Configure the ALUA volume attribute.
To enable the ALUA volume attribute, select Enable.
• Configure the Asymmetric Access State of the host group.
To set path priorities for asymmetric access to the host group, click Host Group Settings in Detail, and then from the drop-down list, select Active/Optimized for primary preferred paths, or Active/Non-Optimized for secondary, less preferred paths. In general, optimized paths are given preference over non-optimized paths.

Use secondary volume of global-active device - When primary and secondary storage systems are not discovered by a single HCS management server, select this check box to reserve the secondary volume to prevent access until the global-active device pair can be created using Replication Manager.

>> LUN Path Options (See the following two fields for details) Click LUN Path Options to display the fields and buttons for configuring LUN paths (the storage port to host port mappings that connect volumes to hosts).

No. of LUN paths per Volume - Specify the number of LUN paths to allocate per host. Changing the path count may cause the system to suggest a new path for you automatically.

Click Edit LUN Paths to assign or change LUN paths. LUN paths can be displayed in either graphical (Topological Graph) or tabular (Selection Table) formats. In both views, use these links to display WWN nickname information to confirm the target host bus adapter (HBA).
• In the graphical view, click on a storage port row to add it to the LUN Path Editor panel. Connect the line to the target HBA. When another line is displayed, you can connect it to another HBA or discard it with a click.
• In the tabular view, select a storage port, select a host port row, and then click Add to move the mapping to the Selected Host Ports panel.

When editing the LUN path of a global-active device paired volume, you can specify the settings while referencing the LUN paths of the other volume.

To delete default or incorrect mappings, click the connector line in the graphical view, or click Remove in the tabular view.

For the NAS Platform family, the path redundancy setting is recommended. By default, the number of LUN paths displayed is equal to the number of NAS Platform ports or storage system ports, whichever is lower.


LUN Security Disabled Storage Ports, Selected Storage Ports - These related screens display under LUN Path Options only when Allocate without LUN security is selected from the host drop-down list. For more information, see the previous Host field description.

Unsecured ports are listed in LUN Security Disabled Storage Ports. Select the ports on which to make the volume available, and click Add to move the ports to Selected Storage Ports. You can review your plan and click Submit for this allocation task. All hosts that can access the selected ports can access the volume.

>> Host Group and LUN Settings (See the following two fields for details) Volume allocations using Fibre Channel or FCoE prompt for such items as host group name, host mode, host mode options, and LUN number.

To display or hide the following fields, click Host Group and LUN Settings.

Host Groups Shared by All Hosts This option configures all hosts in a single host group when volumes are allocated to multiple hosts simultaneously.

Separate for Each Host This option configures a separate host group for each host when volumes are allocated to multiple hosts simultaneously.

Name Displays host group information for hosts that are in a host group. For hosts that are not in a host group, a host group can be created by entering a name for a host group as needed.

Host Group Settings in Detail This button displays a list of host groups that can be used during volume allocation, or that were previously created for prior volume allocations.

Host mode Select the host mode that supports the host type for which you are allocating volumes.

Host mode options Select one or more host mode options for supporting special requirements for specific applications.

LU Number auto or manual selection buttons Logical unit (LU) numbers can be assigned automatically or manually for the volumes being allocated.

To automatically assign LU numbers to the allocated volumes, select auto and enter a starting number (or use the default). LU numbers are set in ascending order while avoiding existing numbers.

To manually assign LU numbers to the allocated volumes, select manual and click Select LU Number to choose a starting LU number.

>> Virtual ID Settings (See the next field for details) Virtual ID Settings for global-active device are displayed only when options other than the default virtual storage machine are used on the primary storage.

An option for manually specifying the starting virtual LDEV ID to use for allocated volumes. Volumes for which virtual IDs have not been specified are assigned a new virtual LDEV ID when they are allocated to a host that belongs to resource groups used in data migrations that use virtual IDs.


Virtual LDEV IDs are assigned automatically by default, with unused IDs assigned to volumes in ascending order. If a user manually specifies an ID, volumes receive the lowest available ID that is equal to or greater than the specified value (this assignment rule is illustrated in the sketch after this table).

For global-active device, accept the default starting virtual LDEV ID for the global-active device paired volumes being allocated, or manually select a starting virtual LDEV ID. The displayed value is the minimum value that can be specified as an LDEV ID. The virtual LDEV IDs provide a single ID for the global-active device paired volumes being accessed by hosts.

Starting virtual LDEV ID Targets LDKC Logical disk controller (LDKC) number that forms part of the starting virtual LDEV ID.

CU Control unit (CU) number that forms part of the starting virtual LDEV ID.

DEV Device (DEV) number that forms part of the starting virtual LDEV ID.

>> Pair management Server Settings (See the following two fields for details) These selections are made on the Primary and Secondary tabs.

Pair management server - Displayed hosts will be pair management servers configured during global-active device setup.

Instance ID New Create a new configuration definition file (with a new instance ID). The entered instance ID is validated to prevent duplicates.

You can also specify a specific UDP port number for communications. Available UDP port numbers are:
• 0 - 899
• 999 - 4093

Existing Use an existing configuration definition file. The existing instance IDs are listed.

>> Pair Settings (See the following three fields for details) For global-active device, pair settings fields are used to finalize pair information.

Quorum Disk - The quorum disk number is configured for the primary and secondary storage systems during the initial global-active device setup. You can accept the default value or select a value.

CTG ID - When you enable this option, you can select the CTG ID to be assigned for the global-active device pair.

To add the GAD 3DC delta resync configuration to allocated global-active device pair volumes, CTG IDs need to be assigned.

Copy Group New Used to create a new copy group by name for the global-active device pairs being allocated. Valid copy group names must be unique in a configuration definition file (horcmN.conf) and follow these rules:
• 1 - 31 characters


• A - Z, a - z, 0 - 9, (dash -), (underscore _ ), (period .), @
• '-' is not permitted as the first character

Existing Places global-active device pairs into an existing copy group. Existing is disabled if no copy group exists.

Pair name Automatic A pair name consisting of a prefix and start sequence no. is created automatically by the system.

Manual A pair name consisting of a prefix and start sequence no. is created manually by the user. Valid pair names must be unique in a copy group, and follow these rules:
• 1 - 26 characters
• A - Z, a - z, 0 - 9, (dash -), (underscore _ ), (period .), @
• '-' is not permitted as the first character
• Start sequence no. should be 0 - 99999
• The prefix and start sequence no. should not be blank

>> iSCSI Target and LUN Settings (See the following two fields for details) Volume allocations using iSCSI will prompt for items such as host mode, host mode options, name of the iSCSI target, and LU number.

To display or hide the following fields, click iSCSI Target and LUN Settings.

iSCSI Targets Shared by All Hosts Select this option to indicate that all hosts in the volume allocation reference the same iSCSI targets.

Separate for each Host Select this option to indicate that each host in the volume allocation references different iSCSI targets.

Selecting Method Select Use an existing iSCSI Target to indicate that you want to use existing iSCSI targets. Select Create an iSCSI Target to indicate that you want to create new iSCSI targets.

Name Prompts you to create a new iSCSI target, or displays the existing iSCSI target name.

iSCSI Target Settings in Detail Displays a list of iSCSI targets that can be used during allocation. This also displays iSCSI targets that were set after the task for allocating volumes was executed.

Host mode Select the host mode that supports the host type for which you are allocating volumes.

Note: When volumes are allocated to multiple hosts whose host modes differ, Mixed is displayed and indicates an error.

Host mode options Select one or more host mode options to support special requirements for specific applications.

LU Number - This is the same as described in Host Groups.

>> Host Group and LUN Settings for Mid-range Storage (See the following field for details) For mid-range storage systems, the Host Group and iSCSI target dialog boxes can include additional fields for enhancing the management of mid-range storage systems.

The following fields are specific to mid-range storage systems only. The Host Mode and LU Number fields remain part of the mid-range storage dialog box and are documented in the previous descriptions.

Options Platform Selects the host platform type (for example, Windows) to assist in setting host mode.


Middleware Selects the installed software on the platform to assist in setting host mode.

Alternate Path Indicates the multipath management software used by the host to assist in setting host mode.

Failover Indicates the cluster software used by the host to assist in setting host mode.

Additional Parameters Click Select to display additional specific usage options to assist in setting host mode.
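Two of the behaviors in the table above are easy to illustrate: lowest-available-ID assignment (as described under Virtual ID Settings; LU number auto-assignment behaves similarly) and the copy group naming rules (under Pair Settings). The following sketch is hypothetical, not HCS code; only the rules themselves come from the table.

```python
# Hypothetical sketch: ID assignment and copy group name validation,
# following the rules described in the Allocate Volumes dialog box table.
import re

def next_available_id(used: set, start: int = 0) -> int:
    """Lowest available ID that is equal to or greater than the requested start."""
    candidate = start
    while candidate in used:
        candidate += 1
    return candidate

# Copy group names: 1-31 chars of A-Z a-z 0-9 - _ . @, not starting with '-'.
COPY_GROUP_RE = re.compile(r"^[A-Za-z0-9_.@][A-Za-z0-9_.@-]{0,30}$")

def valid_copy_group_name(name: str) -> bool:
    return bool(COPY_GROUP_RE.match(name))

used_ids = {0x00, 0x01, 0x03}
print(hex(next_available_id(used_ids, start=0x00)))  # 0x2: lowest unused >= 0
print(valid_copy_group_name("GAD_group.01"))         # True
print(valid_copy_group_name("-bad"))                 # False: '-' not first
```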

About clustered-host storage

Clustered-host storage is a storage configuration that is created when volumes are allocated to a new host (or file server) that is added to a host group (also known as a host cluster).

When creating clustered-host storage, you add the WWN of the newly added host to the host group to which the WWN of an existing host belongs, and you set LUN paths from the newly added host to the same volumes as those for an existing host.

For example, to better manage and distribute the load on your applications and resources, update the existing host group by creating clustered-host storage: allocate existing volumes to a new host in the host group.

Newly allocated volumes represent additional storage resources for a new host. Clustered-host storage supports the reallocation of existing volumes within the host group to meet specific needs.
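Conceptually, creating clustered-host storage makes two related updates, sketched below with invented data structures (this is not a Device Manager API): the new host's WWN joins the host group, and it thereby shares the group's LUN paths.

```python
# Hypothetical sketch: the updates that create clustered-host storage.
# A host group holds member WWNs and a LUN-to-LDEV map shared by all members.
host_group = {
    "name": "hg-cluster",
    "wwns": {"10:00:00:00:c9:aa:aa:aa"},          # existing host
    "luns": {0: "00:00:00", 1: "00:00:01"},       # shared volumes
}

def add_host_to_cluster(group: dict, new_wwn: str) -> None:
    """Add the new host's WWN; it inherits LUN paths to the same volumes."""
    group["wwns"].add(new_wwn)

add_host_to_cluster(host_group, "10:00:00:00:c9:bb:bb:bb")
# Every member WWN now reaches the same LUN 0 and LUN 1.
for wwn in sorted(host_group["wwns"]):
    print(wwn, "->", host_group["luns"])
```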

The following figure illustrates the process of adding a host to create clustered-host storage in a system.


Creating clustered-host storage

You create clustered-host storage by allocating volumes to a new host within an existing host group (also known as a host cluster).

Creating clustered-host storage involves allocating new or additional volumes to a new host that has been added to an existing host group, setting new LUN paths to the new host, and adding the WWN of the new host to an existing host group.

Before you begin
• Discover (and register) new hosts.
• Allocate volumes to existing hosts.
• Verify that the host connections are Fibre Channel or Fibre Channel over Ethernet (FCoE).

Procedure

1. On the Resources tab, click Hosts, and select All Hosts to list thehosts by OS type in the Application pane.

2. From the list of hosts, select one or more hosts (or from the list of volumes, select one or more volumes allocated to these hosts).

3. Click More Actions, and select Define Clustered-Host Storage.
4. Select secondary hosts to add to the cluster:

• To add one host, select the host name in the Select a host/file server list.


• To add multiple hosts, click Select Hosts/File Servers, highlight each host name, and click Add.

5. From the Selected Storage System list, select the storage system to associate with the selected host.
If two or more storage systems are allocated to the selected host, select only one storage system.

6. In the Add WWN list, select the WWN of the new host to add to an available host group.
View the host groups to which the new host can be added in the Available Host Groups list.

7. In the Available Host Groups list, select a host group, and click Add.
You can see the volumes that are associated with the selected host group in A list of affected volumes, and also verify the added LUN paths for each entry in the Available LUN Path list.

Note: To add more new hosts and associated WWNs, repeat steps 4 through 7 as needed. If you need to modify host information, expand the Host Group and LUN Settings to set host group information.

8. Click Show Plan and confirm that the information in the plan summary is correct.
If changes are required, click Back.

9. (Optional) Update the task name and provide a description.
10. (Optional) Expand Schedule to specify the task schedule.

You can schedule the task to run immediately or later. The default setting is Now. If scheduled for Now, select View task status to monitor the task after it is submitted.

11. Click Submit.
If the task is scheduled to run immediately, the task begins.

12. You can check the progress and the result of the task on the Tasks & Alerts tab.
Click the task name to view details of the task.

Result

The new host is added to the designated host group, which creates clustered-host storage by virtue of the following:
• LUN paths are created between the new host and the host group.
• The WWN of the new host is added to the host group.

Tip: You can also use the Edit LUN Paths dialog box to confirm that the WWN of the new host is successfully added to the host group.

About unallocating volumes

You can unallocate volumes from hosts or file servers.


You can also unallocate global-active device paired volumes from hosts.

Unallocated volumes can be:
• Re-allocated (with existing data) to a host or file server that can recognize the existing data (file system).
• Used for other storage requirements.

Unallocating a volume deletes all LUN paths that connect the volume to the host or file server.

Unallocating volumes does not delete existing volume data by default. However, during unallocation there is an option to delete the volume (volume data is lost) and return it to unused capacity, or you can delete the unallocated volume later. As a precaution, to retain volume data, back up volumes before unallocating them (for example, for volumes to be re-allocated to a new host).

Unallocating volumes from hosts

Unallocated volumes can be re-allocated (with existing data) or can be made available for other storage requirements.

Before you begin
• Identify the name of the target host, and the volumes to unallocate.
• If necessary, back up data on the target volumes.
• Unmount all allocated volumes that you plan to unallocate. An IT administrator might have to perform this task.

Procedure

1. On the Resources tab, you can unallocate volumes from several locations:
• From General Tasks, select Unallocate Volumes.
• Select a host OS, select one or more target hosts, and click Unallocate Volumes.
• Select a host OS, click a target host name to display volumes, select one or more volumes, and click Unallocate Volumes.
• Search for a host, click the host name to go directly to the volume list, and click Unallocate Volumes.
The Unallocate Volumes dialog box opens.
2. Select the host and host volumes to unallocate. Note that if the host and host volumes were selected prior to launching the Unallocate Volumes dialog box, you will go directly to the plan summary mentioned in the next step.

3. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.

4. (Optional) Update the task name and provide a description.
5. (Optional) Expand Schedule to specify the task schedule.


You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.

6. Click Submit.
If the task is scheduled to run immediately, the process begins.

7. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result

Unallocated volumes are added back to the storage system's Open-Unallocated volume list.

Unallocating volumes from file servers

Unallocated volumes can be re-allocated (with existing data) or can be made available for other storage requirements.

Before you begin
• Identify the name of the target cluster or file server and the volumes you want to unallocate.
• If necessary, back up data on the target volumes.
• Unmount all allocated volumes that you plan to unallocate. An IT administrator might have to perform this task.

Procedure

1. On the Resources tab, select File Servers, then select All File Servers.
• From the Servers/Clusters list, select the row of the target cluster or file server (only one row can be selected), and then click Unallocate Volumes.
• To unallocate individual volumes, select the target cluster or file server in the tree, and then select the target volumes from the System Drives tab for NAS Platform, or the Volumes tab for NAS Platform F or Data Ingestor, and then click Unallocate Volumes.

2. Specify the appropriate settings for creating a plan.
3. Verify the plan and click Submit.
4. In the task list, confirm that the task is completed.

Tip: For the NAS Platform family, if you want to unallocate volumes from individual file server nodes that are in clusters (for reasons such as node deletion), click the link of the cluster name and select the Physical View tab. From the list, select the row of the target file server (node) and click the Unallocate Volumes button.


Unallocate volumes dialog box

Successfully unallocated volumes are placed in the Open-Unallocated folder of the storage system from which they originated. Unallocated volumes can be reallocated to another host, with data intact.

Volumes can be deleted using this dialog box and returned to unused capacity if eligible, or previously unallocated volumes can be reallocated to another host.

When you enter the minimum required information in this dialog box, the Show Plan button activates to allow you to review the plan. Click the Back button to modify the plan to meet your requirements.

The following table describes the dialog box fields, subfields, and field groups. A field group is a collection of fields that are related to a specific action or configuration. You can minimize and expand field groups by clicking the double-arrow symbol (>>).

If you enter incorrect information in a dialog box, errors that include a description of the problem appear at the top of the box.

Table 4 Unallocate volumes dialog box

Field Subfield Description

Host - Note: It is recommended that volumes be unmounted from hosts prior to unallocating the volumes.

The Unallocate Volumes dialog box works slightly differently depending on where you launch it. For example:

If you select Unallocate Volumes from General Tasks, you will be prompted to select a host. Selecting the host will display the host volumes. Select one or more volumes and click Show Plan to display the fields and options below.

If you select Unallocate Volumes by selecting a host row from the Hosts panel, you will not be prompted to select a host. Host volumes are displayed. Select one or more volumes and click Show Plan to display the fields and options below.

If you click the host name to list host volumes, you can identify and select volumes using details such as host group or volume attribute. Select volumes and click Unallocate Volumes to display the fields and options below. In this case, Show Plan is not displayed because the host volumes were known prior to launching the dialog box.

Unallocate global-active device pair simultaneously - When a global-active device primary volume (P-VOL) or secondary volume (S-VOL) is selected to be unallocated and this check box is selected, the global-active device pair will be simultaneously unallocated.


Plan Summary - For one or more volumes, the volume ID, storage system, volume type, drive type, and host for each volume are displayed.

When a global-active device paired volume is selected, Copy Info (P-VOL) and Copy Info (S-VOL) are displayed.

>> Plan Details - Volume Information - There is a Deletable column with a yes or no value to indicate whether the volume can be deleted. For example, volume conditions preventing deletion include:
• The volume is in use by another host
• The volume is a command device
• The volume is a pair volume

LUN Path Information - For one or more volumes, the volume ID, storage system, storage port, port type, host port, WWN/iSCSI name, LU number, host group/iSCSI target, host mode, and 'other options' for each volume are displayed. This provides complete storage and host LUN path information.

Global-active device pair information - When unallocating global-active device pairs, copy pair names, pair management servers, and copy groups appear as information of the pairs that are released at the same time that global-active device pair volumes are unallocated.

Pair Management Server Information - When unallocating global-active device pairs, information such as the names and instance IDs of the copy groups that are deleted at the same time that pairs are released appears.

Virtual LDEV ID information - When unallocating volumes such as global-active device S-VOLs, virtual LDEV ID information is displayed.

>> Advanced Options - Host Group/iSCSI Target Delete - If you select all volumes for a host group/iSCSI target, a host group/iSCSI target deletion option is displayed. Deleting a host group/iSCSI target is only done under very specific circumstances, such as the target server having been replaced with a new server. Do not select this option unless you know the exact status of the target server and the server volumes (data), and are confident this option is appropriate.

Only unallocate volumes - This default selection unallocates volumes only, without deleting volume data or the volume. Unallocated volumes can be re-allocated to another host, for example a newer and faster server.

Release LUSE volumes - A LUSE volume is created by aggregating (combining) multiple smaller LDEVs into a larger volume that is allocated to a host. If one or more LUSE volumes are selected, this option is activated and can be selected to unallocate LUSE volumes and release the component LDEVs of the LUSE volume.

Delete volumes - For volumes where Plan Details > Volume Information indicates the volume can be deleted (Deletable=yes), eligible volumes will be deleted and returned to unused capacity. To delete a selected volume when unallocating volumes, select Delete Volumes, and click Submit.

Delete virtual ID information assigned to volumes - Select Delete Virtual Information from volumes to delete virtual ID information. This option should only be used by an administrator knowledgeable about the status of migrated volumes with virtual ID information, or the status of global-active device pairs.

LDEV IDs are moved to the resource pool of the default virtual storage machine - Select LDEV IDs are moved to the resource pool of the default virtual storage machine to release LDEV IDs.

Managing logical units workflow

1. Configure Fibre Channel ports.
2. Configure hosts.
3. Configure LU paths.
4. Enable LUN security.
5. Set Fibre Channel authentication.
6. Manage hosts.

Configuring Fibre Channel ports

Setting the data transfer speed on a Fibre Channel port

As system operation continues, you might notice that a large amount of data is transferred at some ports, but a small amount of data is transferred at other ports. You can optimize system performance on a Fibre Channel port by setting a faster data transfer speed on ports where a large amount of data is transferred, and setting a slower data transfer speed on ports where a smaller amount of data is transferred.

Note: In Fibre Channel over Ethernet (FCoE) networks, the port speed is fixed at 10 Gbps and cannot be changed.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Ports/Host Groups window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Click Ports/Host Groups/iSCSI Targets.
In Device Manager - Storage Navigator:


a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Ports/Host Groups/iSCSI Targets.
2. In the Ports/Host Groups/iSCSI Targets window, click the Ports tab.
3. Select the desired port, and then click Edit Ports.
4. In the Edit Ports window, select the Port Speed check box, and then select the desired port speed.
Select the speed of the Fibre Channel port in Gbps (gigabits per second). If Auto is selected, the storage system automatically sets the speed to 2, 4, 8, or 16 Gbps.

Caution: Observe the following cautions when setting the speed on a Fibre Channel port:
• If the host bus adapters (HBAs) and switches support 2 Gbps, use the fixed speed of 2 Gbps for the channel adapter for Fibre Channel (CHF) or channel board for Fibre Channel (CHB(FC)) port speed. If they support 4, 8, or 16 Gbps, use 4, 8, or 16 Gbps for the CHF port speed, respectively.
• If the Auto Negotiation setting is required, some links might not be up when the server is restarted. Check the channel lamp. If it is flashing, disconnect the cable, and then reconnect it to recover from the link-down state.
• If the CHF port speed is set to Auto, some equipment might not be able to transfer data at the maximum speed.
• When you start a storage system, HBA, or switch, check the host speed appearing in the Port list. If the transfer speed is different from the maximum speed, select the maximum speed from the list on the right, or disconnect, and then reconnect the cable.
• The available port speed specified in Port Speed is limited by the combination of the type of the Fibre Channel port and the topology specified in Connection Type.

5. Click Finish.
The Confirm window appears.

6. In the Task Name text box, type a unique name for the task or accept the default.
You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value <date>-<window name> is entered by default.

7. Click Apply.
If the Go to tasks window for status check box is selected, the Tasks window appears.
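The steps above use the management GUI. If you administer the array with Hitachi Command Control Interface (CCI), the port speed can typically also be changed from the command line. The following is a minimal sketch, not taken from this guide: it assumes a configured CCI/HORCM instance (instance 0 here) and that your CCI version supports the -port_speed option of raidcom modify port; check the CCI Command Reference for your microcode level.

# Log in to the storage system through CCI instance 0 (user name is an example).
raidcom -login maintenance <password> -I0

# Set port CL1-A to a fixed speed of 8 Gbps (a value of 0 typically selects Auto).
raidcom modify port -port CL1-A -port_speed 8 -I0

# Verify the port attributes after the change.
raidcom get port -port CL1-A -I0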


Related references

• Combination of data-transfer speed and connection type on page 331

Combination of data-transfer speed and connection type

The available port speed specified in Port Speed is limited by the combination of the type of Fibre Channel port and the topology specified in Connection Type. For the possible combinations, see the following tables.

For the 8-Gbps Fibre Channel ports, the combinations of data-transfer speeds and connection types are as follows:

Connection Type    2 Gbps       4 Gbps       8 Gbps       16 Gbps         Auto
FC-AL              Available    Available    Available    Not Available   Available (Default)
P-to-P             Available    Available    Available    Not Available   Available

For the 16-Gbps Fibre Channel ports, the combinations of data-transfer speeds and connection types are as follows:

Connection Type    2 Gbps          4 Gbps       8 Gbps       16 Gbps         Auto
FC-AL              Not Available   Available    Available    Not Available   Available *1
P-to-P             Not Available   Available    Available    Available       Available (Default *2)

*1: If this combination is specified, the maximum transfer speed that is automatically specified is 8 Gbps.

*2: If this default value is set, Fabric is set to ON automatically.

Setting the Fibre Channel port address

When configuring your storage system, set addresses for Fibre Channel ports. When addressing Fibre Channel ports, use AL-PA (arbitrated-loop physical address) or loop IDs as the addresses.

Note: In FCoE networks, you do not need to set the address of a Fibre Channel port.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.


Procedure

1. Open the Ports/Host Groups window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Click Ports/Host Groups/iSCSI Targets.
In Device Manager - Storage Navigator:
a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Ports/Host Groups/iSCSI Targets.
2. In the Ports/Host Groups/iSCSI Targets window, click the Ports tab.
3. Select the desired port, and then click Edit Ports.
4. In the Edit Ports window, select the Address (Loop ID) check box, and then select the address.
5. Click Finish.
The Confirm window appears.
6. In the Task Name text box, type a unique name for the task or accept the default.
You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value <date>-<window name> is entered by default.
7. Click Apply.
If the Go to tasks window for status check box is selected, the Tasks window appears.
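As with the port speed, the port address can typically also be set through CCI. A hedged sketch, assuming the -loop_id option of raidcom modify port is available in your CCI version (whether the value is given as the AL-PA or as the decimal loop ID depends on the CCI version; check the Command Reference):

# Set the address of port CL1-A to AL-PA EF (loop ID 0), then verify.
raidcom modify port -port CL1-A -loop_id 0xEF -I0
raidcom get port -port CL1-A -I0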

Addresses for Fibre Channel ports

The following addresses are available for setting Fibre Channel ports.

AL-PA  Loop ID (0~29)   AL-PA  Loop ID (30~59)   AL-PA  Loop ID (60~89)   AL-PA  Loop ID (90~119)   AL-PA  Loop ID (120~125)

EF 0 B4 30 76 60 49 90 10 120

E8 1 B3 31 75 61 47 91 0F 121

E4 2 B2 32 74 62 46 92 08 122

E2 3 B1 33 73 63 45 93 04 123

E1 4 AE 34 72 64 43 94 02 124

E0 5 AD 35 71 65 3C 95 01 125

DC 6 AC 36 6E 66 3A 96 - -

DA 7 AB 37 6D 67 39 97 - -

D9 8 AA 38 6C 68 36 98 - -

D6 9 A9 39 6B 69 35 99 - -

D5 10 A7 40 6A 70 34 100 - -

D4 11 A6 41 69 71 33 101 - -


D3 12 A5 42 67 72 32 102 - -

D2 13 A3 43 66 73 31 103 - -

D1 14 9F 44 65 74 2E 104 - -

CE 15 9E 45 63 75 2D 105 - -

CD 16 9D 46 5C 76 2C 106 - -

CC 17 9B 47 5A 77 2B 107 - -

CB 18 98 48 59 78 2A 108 - -

CA 19 97 49 56 79 29 109 - -

C9 20 90 50 55 80 27 110 - -

C7 21 8F 51 54 81 26 111 - -

C6 22 88 52 53 82 25 112 - -

C5 23 84 53 52 83 23 113 - -

C3 24 82 54 51 84 1F 114 - -

BC 25 81 55 4E 85 1E 115 - -

BA 26 80 56 4D 86 1D 116 - -

B9 27 7C 57 4C 87 1B 117 - -

B6 28 7A 58 4B 88 18 118 - -

B5 29 79 59 4A 89 17 119 - -

Setting the fabric switch

When you configure your storage system, specify whether the hosts and the storage system are connected via a fabric switch.

Note: In FCoE networks, Fabric is fixed to ON. Therefore, you do not need to set Fabric.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Ports/Host Groups window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Click Ports/Host Groups/iSCSI Targets.
In Device Manager - Storage Navigator:
a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Ports/Host Groups/iSCSI Targets.
2. In the Ports/Host Groups/iSCSI Targets window, click the Ports tab.
3. Select the desired port, and then click Edit Ports.
4. Select the Fabric check box, and select ON if a fabric switch is used. If no fabric switch is used, select OFF.
5. Click Finish.
The Confirm window appears.
6. In the Task Name text box, type a unique name for the task or accept the default.
You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value <date>-<window name> is entered by default.
7. Click Apply.
If the Go to tasks window for status check box is selected, the Tasks window appears.

Fibre Channel topology

The term Fibre Channel topology indicates how devices are connected to each other. Fibre Channel provides the following types of topology:
• Fabric: Uses a fabric switch to connect a large number of devices (up to 16 million) together.
• FC-AL (Fibre Channel-Arbitrated Loop): A shared interface that can connect up to 126 devices (AL-ports) together.
• Point-to-point: The simplest Fibre Channel topology; connects two devices directly together.

When configuring your storage system, use the LUN Manager window to specify whether the hosts and the storage system are connected using a fabric switch.

If a fabric switch is used, specify FC-AL or point-to-point in the LUN Manager window; consult the documentation for the fabric switch to learn whether FC-AL or point-to-point should be used. Some fabric switches require you to specify point-to-point to get the system running.

If no fabric switch is used, specify FC-AL.

The combination of the topology specified in Connection Type and the port speed specified in Port Speed is restricted. For details, see Combination of data-transfer speed and connection type on page 331.

In FCoE networks, Connection Type is fixed to P-to-P. Therefore, you do not need to set Connection Type.


Example of FC-AL and point-to-point topology

Setting the Fibre Channel topology

Procedure

1. Open the Ports/Host Groups window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Click Ports/Host Groups/iSCSI Targets.
In Device Manager - Storage Navigator:
a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Ports/Host Groups/iSCSI Targets.
2. In the Ports/Host Groups/iSCSI Targets window, click the Ports tab.
3. Select the desired port, and then click Edit Ports.
4. Under Connection Type, select FC-AL or P-to-P.
5. Click Finish.
The Confirm window opens.
6. In the Task Name text box, type a unique name for the task or accept the default.
You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value <date>-<window name> is entered by default.
7. Click Apply.
If the Go to tasks window for status check box is selected, the Tasks window appears.


Overview for iSCSI

iSCSI (Internet SCSI) is a protocol for sending and receiving SCSI commands over an IP network. iSCSI transfers data in block units. An IP-SAN that uses an existing Ethernet network can be constructed by using iSCSI. In the network for iSCSI, LUN Manager manages access paths between hosts and volumes for each port in your storage system. LUN Manager has the following features:
• Connecting multiple hosts to a port
With LUN Manager, you can connect more than one host to a port on your storage system. When setting up host connections in LUN Manager, for each host you specify the settings for host mode, volume, and iSCSI target. Each host can access a volume as if through a dedicated port, even if that host shares the port with other hosts.
• Mapping volumes to hosts
With LUN Manager, you can map or assign volumes to the hosts on your network. You have complete flexibility to share or restrict volume access among the hosts.
• Network security
With LUN Manager, you can enable or disable CHAP (Challenge Handshake Authentication Protocol), a security protocol that requires users to enter a secret for access.
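The storage-side CHAP settings are made per iSCSI target. On the host side, configuration depends on the initiator software; as an illustration only (not a procedure from this guide), with the Linux open-iscsi initiator the CHAP method and credentials are stored per target node record. The target IQN, portal address, and credentials below are placeholders:

# Configure CHAP for one target node record (Linux open-iscsi).
iscsiadm -m node -T iqn.2016-10.com.example:target0 -p 192.0.2.10 \
  -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2016-10.com.example:target0 -p 192.0.2.10 \
  -o update -n node.session.auth.username -v chapuser
iscsiadm -m node -T iqn.2016-10.com.example:target0 -p 192.0.2.10 \
  -o update -n node.session.auth.password -v chapsecret

# Log in to the target using the new settings.
iscsiadm -m node -T iqn.2016-10.com.example:target0 -p 192.0.2.10 --login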

Network configuration for iSCSI

An iSCSI connection makes it possible to construct an IP-SAN by connecting many hosts and storage systems at a low cost. However, iSCSI greatly increases the I/O workload of the network and the storage system. When using iSCSI, it is very important that you configure the network so that the workload among the network, port, controller, and drive is properly distributed.

Even though the LAN switches and NICs are the same, there are some important differences when you use iSCSI, particularly regarding the LAN connection. Pay particular attention to the following:
• At least 16 GB of shared memory must be installed in advance. For information about installing shared memory, contact customer support.

• iSCSI consumes almost all of the available Ethernet bandwidth, unlike a conventional LAN connection. This can significantly degrade the performance of both the iSCSI traffic and the LAN. Therefore, it is very important that you separate the iSCSI IP-SAN and the office LAN.


When packet loss occurs on the network in an IP-SAN, iSCSI transfer performance deteriorates greatly due to TCP congestion control. Although packet loss and congestion control are unavoidable by nature of the network, review the network design for potential mitigating solutions, such as separating segments, to minimize the effect of packet loss in the IP-SAN.

• Host I/O load will affect the iSCSI response time. In general, the greater the I/O traffic, the lower the iSCSI performance.

• You need a failover path between the host and the iSCSI port so that you can update the firmware without stopping the system.

• If the Delayed ACK setting of the host is enabled in the iSCSI connection configuration, host I/O delays occur and might significantly affect performance. To avoid host I/O delays, disable Delayed ACK on the host, as in the example below.
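How Delayed ACK is disabled depends on the host OS and initiator; the following sketch is an assumption about a Windows software-initiator host, not a setting documented in this guide. On Windows, the per-interface TcpAckFrequency registry value disables delayed ACK on the NIC used for iSCSI (a reboot is required; replace <Interface-GUID> with the GUID of that NIC):

REM Disable delayed ACK on the iSCSI-facing interface (Windows).
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Interface-GUID>" ^
  /v TcpAckFrequency /t REG_DWORD /d 1 /f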

• Since network devices are less expensive than FC devices, an IP-SAN can be constructed inexpensively, but system reliability depends on the nature and quality of each device. Select devices carefully.

• When setting the iSCSI user name and secret for CHAP authentication, check that the values are correct. If the settings are incorrect, the storage system does not operate normally, for the following reasons:
• Login is impossible for an initiator (user) whose login should be allowed.
• Login is possible for an initiator (user) whose login should not be allowed.

• In an environment where CHAP authentication is used, the CHAP authentication settings must be changed when the HBA of a connected host is replaced. Be sure to change the CHAP authentication settings after replacing the HBA. When a NIC is used, however, no CHAP authentication setting change is necessary, because the iSCSI software initiator settings do not change when the NIC is replaced.

• When changing the MTU size from the default, you must change the port settings of the storage system, the switch, and all host devices to match; see the example below.
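For example, on a Linux host the interface MTU must be set to the same value as the storage port and the switch; the interface name eth1 below is a placeholder:

# Set a 9000-byte (jumbo frame) MTU on the iSCSI interface, then verify.
ip link set dev eth1 mtu 9000
ip link show dev eth1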

• When a converged network adapter (CNA) is used, both iSCSI Function and NIC Function exist in the setting mode, but only NIC Function is supported.

• Ping transmission/receipt
When performing a ping test from an iSCSI port to an unreachable address *1, I/O processing can be delayed or time out. We strongly recommend performing the ping test while host I/O processing is not running. Furthermore, do not perform the ping test from two or more iSCSI ports at the same time.


Note:
*1: An unreachable address is an address that cannot be reached from the ping transmission source, either physically or logically. The result of the ping test is a time-out because no response is received.

• Switch
Among the physical ports of the network switch, when Spanning Tree is enabled on a port that connects directly to an iSCSI port of the host or the storage system, communication might be blocked. Turn off the Spanning Tree protocol function on such ports (see the manual of the switch for how to check and set this); one common approach is shown below.
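Switch syntax varies by vendor; the following sketch assumes a Cisco IOS switch and is not taken from this guide. Rather than disabling spanning tree globally, the ports that connect directly to iSCSI endpoints are usually configured as edge ports so that they begin forwarding immediately:

! Cisco IOS example: treat the iSCSI-facing port as an edge port.
interface GigabitEthernet0/1
 spanning-tree portfast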

• iSCSI Port setting
When an iSCSI Port setting is changed while the host is connected, the connection is temporarily disconnected and then reconnected by the host. Wait for one minute or more after setting the iSCSI Port, and then check that the host has reconnected.

• When IPv6 Mode is set to Enabled on the iSCSI port and the IPv6 global address is set to automatic, the address is determined by acquiring a prefix from an IPv6 router. If no IPv6 router exists in the network, the address cannot be determined and, as a result, the iSCSI connection might be delayed. Therefore, when IPv6 Mode is set to Enabled on the iSCSI port, confirm that an IPv6 router is connected to the same network, and then set the IPv6 global address automatically.

The following figure shows an LU path configuration in an iSCSI environment. The figure shows the iSCSI target 00 associated with three logical volumes (00:00:00, 00:00:01, and 00:00:02). LU paths are defined between the two hosts in the iSCSI target 00 and the three logical volumes.


You can define paths between a single server host and multiple LUs. The figure shows that each of the two hosts in the iSCSI target 00 can access the three LUs.

You can also define paths between multiple server hosts and a single LU. The figure shows that the LU identified by the LDKC:CU:LDEV number 00:00:00 is accessible from the two hosts that belong to the iSCSI target 00.

Managing hosts

Configure hosts workflow

1. Determine the host modes and host mode options you will use.
2. Determine the WWN of the host bus adapters that you will use.
3. Create host groups.
4. Register host groups.

Host modes for host groups

The following table lists the host modes that are available for use on your storage system. Carefully review and determine which host modes you will need to use when configuring your system, and observe the cautions concerning certain host modes. Host modes and host mode options must be set on the port before the host is connected. If you change host modes or host mode options after the host is connected, the host (server) will not recognize the change.

Host mode When to select this mode

00 Standard When registering Red Hat Linux server hosts or IRIX server hosts in the host group

01 VMware When registering VMware server hosts in the host group (Note 1)

03 HP When registering HP-UX server hosts in the host group

05 OpenVMS When registering OpenVMS server hosts in the host group

07 Tru64 When registering Tru64 server hosts in the host group

09 Solaris When registering Solaris server hosts in the host group

0A NetWare When registering NetWare server hosts in the host group

0C Windows When registering Windows server hosts in the host group (Note 2)

0F AIX When registering AIX server hosts in the host group

21 VMware Extension When registering VMware server hosts in the host group. If the virtual host on VMware recognizes LUs by the Raw Device Mapping (RDM) method, set the host mode that corresponds to the OS of the virtual host.

2C Windows Extension When registering Windows server hosts in the host group.

Notes:
1. There are no functional differences between host modes 01 and 21. When you first connect a host, it is recommended that you set host mode 21.
2. There are no functional differences between host modes 0C and 2C. When you first connect a host, it is recommended that you set host mode 2C.

Host mode options

The following table lists host mode options that are available for configuring hosts on your storage system.

No. Host mode options When to select this option

2 VERITAS Database Edition/Advanced Cluster When VERITAS Database Edition/Advanced Cluster for Oracle Real Application Clusters or VERITAS Cluster Server 4.0 or later (I/O fencing function) is used.

6 TPRLO When all of the following conditions are satisfied:
• The host mode 0C Windows or 2C Windows Extension is used.
• The Emulex host bus adapter is used.
• The mini-port driver is used.
• TPRLO=2 is specified for the mini-port driver parameter of the host bus adapter.

7 Automatic recognition function of LUN When all of the following conditions are satisfied:
• The host mode 00 Standard or 09 Solaris is used.
• SUN StorEdge SAN Foundation Software Version 4.2 or higher is used.
• You want to automate recognition of increase and decrease of devices when a genuine SUN HBA is connected.


12 No display for ghost LUN When all of the following conditions are satisfied:
• The host mode 03 HP is used.
• You want to suppress creation of device files for devices to which paths are not defined.

13 SIM report at link failure When you want to be informed by SIM (service information message) that the number of link failures detected between ports exceeds the threshold.

14 HP TruCluster with TrueCopy function When all of the following conditions are satisfied:
• The host mode 07 Tru64 is used.
• You want to use TruCluster to set a cluster for each of the primary volume and secondary volume for TrueCopy or Universal Replicator.

15 HACMP When all of the following conditions are satisfied:
• The host mode 0F AIX is used.
• HACMP 5.1 Version 5.1.0.4 or later, HACMP 4.5 Version 4.5.0.13 or later, or HACMP 5.2 or later is used.

22 Veritas Cluster Server When Veritas Cluster Server is used.

25 Support SPC-3 behavior on Persistent Reservation When one of the following conditions is satisfied:
• Using Windows Server Failover Clustering (WSFC)
• Using Microsoft Failover Cluster (MSFC)
• Using Symantec Cluster Server, also known as Veritas Cluster Server (VCS)
• Using a configuration other than the above with the PERSISTENT RESERVE OUT (Service Action=REGISTER AND IGNORE EXISTING KEY) command, to change the status response from Reservation-Conflict to Good-Status when there is no registered key to be deleted

33 Set/Report Device Identifier enable When all of the following conditions are satisfied:
• Host mode 03 HP or 05 OpenVMS (Note 1) is used.
• You want to enable commands to assign a nickname of the device.
• You want to set a UUID to identify a logical volume from the host.

39 Change the nexus specified in the SCSI Target Reset When you want to control the following ranges per host group when receiving Target Reset:
• Range of job resetting.
• Range of UAs (Unit Attentions) defined.

40 V-VOL expansion When all of the following conditions are satisfied:
• The host mode 0C Windows or 2C Windows Extension is used.
• You want to automate recognition of the DP-VOL capacity after increasing the DP-VOL capacity.

41 Prioritized device recognition command When you want to execute commands to recognize the device preferentially.

43 Queue Full Response When the command queue is full in your storage system connected to an HP-UX host, and you want the storage system to respond to the host with Queue Full instead of Busy.

49 BB Credit Set Up Option1 When you want to adjust the number of buffer-to-buffer credits (BBCs) to control the transfer data size over Fibre Channel, for example when the distance between the MCU and RCU of the TrueCopy (or global-active device) pair is long (approximately 100 kilometers) and the Point-to-Point topology is used.
Use this host mode option in combination with host mode option 50.

50 BB Credit Set Up Option2 When you want to adjust the number of buffer-to-buffer credits (BBCs) to control the transfer data size over Fibre Channel, for example when the distance between the MCU and RCU of the TrueCopy (or global-active device) pair is long (approximately 100 kilometers) and the Point-to-Point topology is used.
Use this host mode option in combination with host mode option 49.

51 Round Trip Set Up Option (Notes 2, 3) When you want to adjust the response time of host I/O, for example when the distance between the MCU and RCU of the TrueCopy (or global-active device) pair is long (approximately 100 kilometers) and the Point-to-Point topology is used.
Use this host mode option in combination with host mode option 65.

54 (VAAI) Support Option for the EXTENDED COPY command When the VAAI (vStorage API for Array Integration) function of VMware ESX/ESXi 4.1 or later is used.

63 (VAAI) Support Option for vStorage APIs based on T10 standards When you connect the storage system to VMware ESXi 5.0 or later and use the VAAI function for T10. Use this host mode option in combination with host mode option 54.

68 Support Page Reclamation for Linux When using the Page Reclamation function in an environment connected to a Linux host.

71 Change the Unit Attention for Blocked Pool-VOLs When you want to change the unit attention (UA) from NOT READY to MEDIUM ERROR while pool-VOLs are blocked.

72 AIX GPFS Support When using General Parallel File System (GPFS) with the storage system connected to an AIX host.

73 Support Option for WS2012 When using the following functions provided by Windows Server 2012 (WS2012) in an environment connected to WS2012:
• Dynamic Provisioning function
• Offload Data Transfer (ODX) function

78 The non-preferred path option When all of the following conditions are satisfied:
• Global-active device is used in a configuration with two data centers (metro configuration).
• Hitachi Dynamic Link Manager is used as the alternate path software.
• The host group is on the non-optimized path of Hitachi Dynamic Link Manager.
• Performance deterioration of I/O responses can be avoided by not issuing I/O on the non-optimized path of Hitachi Dynamic Link Manager.

80 Multi Text OFF When, using the iSCSI interface, the storage system connects with a host whose OS does not support the Multi Text function. For instance, connecting the storage system to a RHEL 5.0 host, which does not support the Multi Text function.

81 NOP-In Suppress Mode In an iSCSI connection environment, the reply delay caused by the Delayed Acknowledgment function in the upper layer is restrained by sending NOP-In when sense commands such as Inquiry, Test Unit Ready, or Mode Sense are executed. Select this option when connecting the storage system to a host that does not need the NOP-In sending.
For instance:
• When connecting the storage system to Open Enterprise Server from Novell, Inc.
• When connecting the storage system to winBoot/i from emBoot, Inc.

82 Discovery CHAP Mode Select this option when CHAP authentication is performed at the time of discovery login in an iSCSI connection environment.
For instance: when CHAP authentication is performed at the time of discovery login in an iSCSI environment with a VMware host and the storage system.

83 Report iSCSI Full Portal List Mode Apply this host mode option when all of the following conditions are met:
• Alternate paths are being configured in an environment connecting a VMware host and the storage system.
• You are waiting for the target information to be returned from ports other than the ports used for discovery login.

88 Nondisruptive migration with HP-UX hosts When converging multiple host-target ports used in the migration source storage system onto the migration target storage system, to enable LUN path definition from a host group belonging to a virtual storage machine to an LDEV defined in a different virtual storage machine.
• ON: LUN path definition is enabled.
• OFF: LUN path definition is disabled.
Note:
1. Apply this host mode option when all of the following conditions are met:
- You are using the nondisruptive migration function to migrate volumes in multiple old storage models that use the same server.
- You need to reduce the number of Target ports used on the migration target storage system.
- The host is an HP-UX server.
2. Applying this option to a server other than HP-UX can cause the following:
- Path addition from the server to the migration target storage system might fail.
- Display of devices that the server recognizes might be invalid.


3. If a LUN path is defined to an LDEV defined in a virtual storage machine different from the one to which the host group belongs, this option cannot be set to OFF.

96 Change the nexus specified in the SCSI Logical Unit Reset When you want to control the following ranges per host group when receiving LU Reset:
• Range of job resetting.
• Range of UAs (Unit Attentions) defined.

97 Proprietary ANCHOR command support When connecting to Hitachi NAS Platform.

102 (GAD) Standard Inquiry Expansion for HCS When all of the following conditions are satisfied:
• The OS of the host is Windows (host mode 0C Windows or 2C Windows Extension) or AIX (host mode 0F AIX), and the MPIO function is used.
• Global-active device (GAD) or nondisruptive migration (NDM) is used.
• The Hitachi Device Manager (HDvM) agent is used.

105 Task Set Full response in the event of I/O overload When all of the following conditions are satisfied:
• The host mode 0C Windows or 2C Windows Extension is used.
• You want to return a Task Set Full response from the storage system to the host when an overload of I/Os occurs on the storage system.

Notes:
1. Set the UUID when you set host mode option 33 and host mode 05 OpenVMS is used.
2. Set host mode option 51 for both ports on the local and remote storage systems.
3. This host mode option does not support the 8FC16 and 16FE10 channel packages. If these channel packages are used, do not set host mode option 51.

Find WWN of the host bus adapter

Before physically attaching the storage system to hosts, some preparation work needs to be performed. When configuring a Fibre Channel environment, first verify that the Fibre Channel adapters and the Fibre Channel device drivers are installed on the open-system hosts. Next, find the World Wide Name (WWN) of the host bus adapter that is used in each open-system host.

The WWN is a unique identifier for a host bus adapter in an open-system host, consisting of 16 hexadecimal digits. The following topics describe how to find the WWN of a host on different operating systems. It is best to make a record of the WWNs of the hosts in your storage system, because you will need to enter these WWNs in LUN Manager dialog boxes to specify the hosts used in your storage system.
• Finding a WWN on Windows on page 345
• Finding a WWN on Oracle® Solaris on page 345
• Finding a WWN on AIX, IRIX, or Sequent on page 346
• Finding WWN for HP-UX on page 346


Finding a WWN on Windows

Hitachi Data Systems supports the Emulex Fibre Channel adapter in a Windows environment, and will support other adapters in the future. For further information on Fibre Channel adapter support, or when using a Fibre Channel adapter other than Emulex, contact customer support for instructions on finding the WWN.

Before attempting to acquire the WWN of the Emulex adapter, confirm whether the driver installed in the Windows 2000 or Windows Server 2003 environment is an Emulex port driver or an Emulex mini-port driver, and then follow the driver instructions.

Procedure

1. Verify that the Fibre Channel adapters and the Fibre Channel device drivers are installed.
2. Log on to the Windows 2000 host with administrator access.
3. Go to the LightPulse Utility to open the LightPulse Utility window. If you do not have a shortcut to the utility:
a. Go to the Start menu, select Find, and choose the Files and Folders option.
b. On the Find dialog box, in Named, type lputilnt.exe, and from the Look in list, choose the data drive that contains the Emulex mini-port driver.
c. Choose Find Now to search for the LightPulse utility. If you still cannot find the LightPulse utility, contact Emulex technical support.
d. Select lputilnt.exe from the Find: Files named list, and then press Enter.
4. On the LightPulse Utility window, verify that any installed adapters appear in the tree.
5. In the Category list, choose the Configuration Data option.
6. In the Region list, choose the 16 World-Wide Name option. The WWN of the selected adapter appears in the list on the right of the window.
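On current Windows versions (Windows Server 2012 and later, which postdate the environments described above), the WWN can usually be read directly with PowerShell instead of the adapter vendor's utility; this is a general Windows facility, not a procedure from this guide:

# List Fibre Channel initiator ports; PortAddress is the WWPN (PowerShell).
Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "Fibre Channel" } |
  Format-Table NodeAddress, PortAddress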

Finding a WWN on Oracle® Solaris

Hitachi Data Systems supports the JNI Fibre Channel adapter in an Oracle Solaris environment. This document will be updated as needed to cover future adapter-specific information as those adapters are supported. For further information on Fibre Channel adapter support, or if using a Fibre Channel adapter other than JNI, contact customer support for instructions on finding the WWN.


Procedure

1. Verify that the Fibre Channel adapters and the Fibre Channel device drivers are installed.
2. Log on to the Oracle Solaris host with root access.
3. Type dmesg | grep Fibre to list the installed Fibre Channel devices and their WWNs.
4. Verify that the Fibre Channel adapters listed are correct, and record the listed WWNs.
The following is an example of finding a WWN on Oracle Solaris.

# dmesg | grep Fibre                               <- Enter the dmesg command.
  :
  fcaw1: JNI Fibre Channel Adapter model FCW
  fcaw1: Fibre Channel WWN: 200000e0694011a4       <- Record the WWN.
  fcaw2: JNI Fibre Channel Adapter model FCW
  fcaw2: Fibre Channel WWN: 200000e06940121e       <- Record the WWN.
#
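On Solaris 10 and later with the native Fibre Channel (Leadville) driver stack, rather than the JNI driver shown above, the fcinfo utility reports the WWN directly (a general Solaris facility, not a procedure from this guide):

# List HBA ports and their port WWNs on Solaris 10 or later.
fcinfo hba-port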

Finding a WWN on AIX, IRIX, or Sequent

To find the WWN in an IBM AIX, SGI IRIX, or Sequent environment, use the fabric switch that is connected to the host. The method of finding the WWN of the connected server on each port using the fabric switch depends on the type of switch. For instructions on finding the WWN, see the manual of the corresponding switch.
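On AIX specifically, the WWN can usually also be read on the host itself instead of on the switch (a general AIX facility, not a procedure from this guide); the adapter name fcs0 is a placeholder:

# Show the vital product data of Fibre Channel adapter fcs0;
# the Network Address field is the WWPN.
lscfg -vl fcs0 | grep "Network Address"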

Finding WWN for HP-UX

You can find the WWN in an HP-UX environment.

Procedure

1. Verify that the Fibre Channel adapters and the Fibre Channel device drivers are installed.
2. Log in to the HP-UX host with root access.
3. At the command line prompt, type:
/usr/sbin/ioscan -fnC lan
This will list the attached Fibre Channel devices and their device file names. Record the Fibre Channel device file name (for example, /dev/fcms0).

Note: When the A5158 Fibre Channel adapter is used, at the command line prompt, enter /usr/sbin/ioscan -fnC fc for the device name.

4. Use the fcmsutil command along with the Fibre Channel device name to list the WWN for that Fibre Channel device. For example, to list the WWN for the device with the device file name /dev/fcms0, type:
/opt/fcms/bin/fcmsutil /dev/fcms0


Record the Fibre Channel device file name (for example, /dev/td0).

Note: When using the A5158 Fibre Channel adapter, list the WWN for the device with the device file name as follows: /opt/fcms/bin/fcmsutil <device file name>

5. Record the WWN and repeat the above steps for each Fibre Channel device that you want to use.

Result

# /usr/sbin/ioscan -fnC lan                                          <- 1
Class  I  H/W Path   Driver     S/W State  H/W Type   Description
==============================================================
lan    0  8/0.5      fcT1_cntl  CLAIMED    INTERFACE  HP Fibre Channel Mass Storage Cntl
                     /dev/fcms0                                      <- 2
lan    4  8/4.5      fcT1_cntl  CLAIMED    INTERFACE  HP Fibre Channel Mass Storage Cntl
                     /dev/fcms4                                      <- 2
lan    5  8/8.5      fcT1_cntl  CLAIMED    INTERFACE  HP Fibre Channel Mass Storage Cntl
                     /dev/fcms5                                      <- 2
lan    6  8/12.5     fcT1_cntl  CLAIMED    INTERFACE  HP Fibre Channel Mass Storage Cntl
                     /dev/fcms6                                      <- 2
lan    1  10/8/1/0   btlan4     CLAIMED    INTERFACE  PCI(10110009) -- Built-in #1
lan    2  10/8/2/0   btlan4     CLAIMED    INTERFACE  PCI(10110009) -- Built-in #2
lan    3  10/12/6    lan2       CLAIMED    INTERFACE  Built-in LAN
                     /dev/diag/lan3 /dev/ether3 /dev/lan3
#
# fcmsutil /dev/fcms0                                                <- 3
Local N_Port_ID is                  = 0x000001
N_Port Node World Wide Name         = 0x10000060B0C08294
N_Port Port World Wide Name         = 0x10000060B0C08294             <- 4
Topology                            = IN_LOOP
Speed                               = 1062500000 (bps)
HPA of card                         = 0xFFB40000
EIM of card                         = 0xFFFA000D
Driver state                        = READY
Number of EDB's in use              = 0
Number of OIB's in use              = 0
Number of Active Outbound Exchanges = 1
Number of Active Login Sessions     = 2
#
1: Enter the ioscan command.
2: Device name.
3: Enter the fcmsutil command.
4: Record the WWN.

Changing settings for a manually registered host

You can use Edit Hosts to update host information registered by specifying WWNs/iSCSI names, or registered by using the host detection function.


Procedure

1. On the Administration tab, select Managed Resources.
2. On the Hosts tab, select the host to change, and click Edit Hosts.
3. Specify the required items, and then click OK.

Result

The host information is updated.

Changing settings for a host registered by using Device Manager agent

When the Device Manager agent is used, host information is periodically sent from the Device Manager agent. To update displayed host information, use the Refresh Hosts button. Additionally, after changing settings on the Edit Hosts dialog box, depending on which settings you changed, it may be necessary to use Refresh Hosts to update host information.

To determine how to change or update each host information item, see the following table:

Table 5 Updating host information using Device Manager agent

Item Refresh Hosts Edit Hosts

Host Name N Y (Note 1)

OS Type Y N

IP Address Y N

Port Type Y2 Y2

WWN/iSCSI name Y2 Y2,3

WWN Nickname N4 N

Legend:
• Y: Can be edited or refreshed
• N: Cannot be edited or refreshed

Notes:
1. To change the host name:
   a. On the host, change the host name.
   b. (Optional) Restart the host.
   c. Click Edit Hosts, then change the host name to the same name that you used at step a.
   d. Restart the Device Manager agent service. Wait until the host has finished starting up before restarting the Device Manager agent service.
   If both the old and new host names are displayed, delete the old host.
2. Added WWNs/iSCSI names and their port types are reflected by the Refresh Hosts button, but deleted WWNs/iSCSI names and their port types are not. To delete such WWNs/iSCSI names, click Edit Hosts.
3. If you want to allocate volumes to an FCoE port, you need to manually add a WWN.
4. To update WWN nicknames that have been specified by using other storage system management tools, such as Storage Navigator, refresh the storage system information. When several WWN nicknames are assigned to a single HBA, only one of the nicknames is displayed for that HBA.


Procedure

1. From the Administration tab, select Managed Resources.
2. On the Hosts tab, select the host to update, then click either Refresh Hosts or Edit Hosts, depending on the items that you want to update.
3. Modify the settings and click OK.
   The task is registered on the Data Collection Tasks tab. If you changed settings in the Edit Hosts dialog box, refresh the hosts using Refresh Hosts after the task successfully completes.

Result

The host list is updated.

Note: After you change a host name, both host names (before and after the change) might display on the Resources tab. In this case, delete the host before making the change. When copy pairs are managed by the host where the Device Manager agent is installed, in addition to deleting the host, the storage system also needs to be updated.

Editing the host mode and host mode options

Manage your host group information by editing the host mode and host mode options after the volumes have been allocated.

You can edit host group information (host group name, host mode, and host mode options) for an existing host group by editing its LUN paths when a host, to which volumes are allocated, has been added to a host cluster. You can verify host mode option IDs when host group information is being edited or when new volumes are allocated.

Tip: In addition to when you are editing LUN paths, you can also edit host group information when allocating volumes, allocating like volumes, and defining clustered-host storage.

Before you begin
• Allocate volumes to hosts.
• Verify that the host connections are Fibre Channel or Fibre Channel over Ethernet (FCoE).

Procedure

1. On the Resources tab, choose one of the following options to expand the tree for storage systems, hosts, file servers, or logical groups to display volumes.
   • Storage Systems > All Storage Systems
   • Hosts > All Hosts
   • File Servers > All File Servers


   • Logical Groups > Public Logical Groups or Private Logical Groups

2. Select a volume that belongs to the target host group (if selecting multiple volumes, they must belong to the same configuration host group).

3. Click More Actions, and select Edit LUN Paths.
4. In the Edit LUN Paths dialog box, click Host Group and LUN Settings, and modify the host group name, host mode, or host mode options as needed.

5. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.

6. (Optional) Update the task name and provide a description.
7. (Optional) Expand Schedule to specify the task schedule.
   You can schedule the task to run immediately or later. The default setting is Now. If scheduled for Now, select View task status to monitor the task after it is submitted.

8. Click Submit.
   If the task is scheduled to run immediately, the task begins.

9. (Optional) You can check the progress and the result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

10. (Optional) In the Host Group and LUN Settings dialog box, click Edit LUN Paths to verify the host mode or host mode options.

Editing a WWN nickname

You can specify a new WWN nickname, or change or delete an existing nickname, for storage HBA WWNs that are registered in a host group.

Before you begin

Register HBA WWNs for which nicknames will be specified in the host group.

Procedure

1. On the Resources tab, select Hosts.
2. After selecting the target operating system, select a target host, and then click More Actions > Edit WWN Nicknames.
3. From the Storage system drop-down list, select a storage system.

   • All Storage Systems: Displays a list of WWNs that belong to the host group related to the selected host.
   • Specific storage system: Displays a list of WWNs that belong to the host group related to the selected host and storage system.

4. From Edit option, select one of the following:
   • Edit WWN nickname for all related host groups: Applies the WWN nickname setting to all storage host groups.


   • Edit WWN nickname for individual host groups: Applies the WWN nickname to individual host groups in the selected storage system.

5. In the Nickname column, enter a WWN nickname or modify the existing WWN nickname. If All Storage Systems and Edit WWN nickname for all related host groups are selected, the smallest character limit among the storage systems is applied to all of them.

Tip: To delete a WWN nickname, clear the text.

6. (Optional) Update the task name and provide a description.
7. (Optional) Expand Schedule to specify the task schedule.
   You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.

8. Click Submit.
   If the task is scheduled to run immediately, the process begins.

9. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result

The WWN nickname edits are applied to the WWNs registered in the host group.

Changing the HBA iSCSI name or nickname of a host bus adapter

In iSCSI environments, host bus adapters can be identified by HBA iSCSI names or nicknames.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Edit Host Groups window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Select the Hosts tab, and then click the Port ID of the HBA iSCSI Name or Host Name you want to change.

3. Click Edit Host.


   a. To change the HBA iSCSI name, select the HBA iSCSI Name check box, and then type a new iSCSI name.
   b. To change the nickname, select a Host Name check box, and then type a new nickname.
   If you check Apply same settings to the HBA iSCSI Name in all ports, the new settings affect other ports. For example, if the same host bus adapter (the same iSCSI name) is located below ports CL1-A and CL2-A in the tree, when you select the host bus adapter (or the iSCSI name) from below one of the ports and change the nickname to hba1, the host bus adapter below the other port is also renamed hba1.

However, the new settings will not affect a port if:
• The resulting nickname is already used as the nickname of a host bus adapter connected to the port.
• The resulting iSCSI name already exists in the port.

4. Click Finish.
   The Confirm window appears.

5. In the Task Name text box, enter the task name.
   You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

6. Click Apply.
   If Apply same settings to the HBA iSCSI Name in all ports is checked, a dialog box opens listing the host bus adapters to be changed. Confirm the changes and click OK. Otherwise, click Cancel.
   If the Go to tasks window for status check box is selected, the Tasks window appears.

Changing iSCSI target settings

Use LUN Manager to change the name or host mode of an iSCSI target. You can change only the host mode option of the host group for the initiator port. You cannot use this procedure on the host group for the external port.

Caution:
• Before changing the host mode of an iSCSI target, back up the data on the port to which the iSCSI target belongs. Setting the host mode should not be destructive, but data integrity cannot be guaranteed without a backup.
• When the secret is changed two or more times in succession for the same iSCSI target, wait for the task that has already been applied to complete before making the next change. If the secret is changed without waiting for the applied task to complete, the user name might not be changed as you expect.


Before you begin

To perform this task, the following roles are required:
• Storage Administrator (Provisioning) role
• Security Administrator (View and Modify) role

Procedure

1. Open the Edit Host Groups window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. On the iSCSI Targets tab, select the Port ID of the iSCSI target you want to change.

3. Display the Edit iSCSI Targets window by performing one of the following:
   • Click Edit iSCSI Targets.
   • From the Actions menu, select Ports/Host Groups, iSCSI, and then Edit iSCSI Targets.

4. In the Edit iSCSI Targets window, select ON and specify the values. The following values can be modified:
   • iSCSI Target Alias: Specifies the alias of the iSCSI target.
   • iSCSI Target Name: Selects the format from iqn or eui, and specifies the name of the iSCSI target (see the example name formats after this list).
   • Host Mode: Selects the host mode and the host mode option. For detailed information about host mode options, see Host mode options on page 340.
   • Authentication Method: Selects the CHAP authentication mode from Comply with Host Setting, CHAP, or None.
   • Mutual CHAP: Selects Enable or Disable. If Enable is selected, the mutual authentication mode is used. If Disable is selected, the unidirectional authentication mode is used.
   • User Name: Specifies the user name. You can use case-sensitive alphanumeric characters, spaces, and the following symbols: . - + @ _ = : [ ] , ~
   • Secret: Specifies the password. You can use alphanumeric characters, spaces, and the following symbols in a secret: . - + @ _ = : [ ] , ~
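For reference, the two iSCSI name formats follow the conventions of RFC 3720; the names below are illustrative examples, not defaults of this storage system:

    iqn.2016-10.com.example:storage.target01    (iqn. + year-month + reversed domain name + : + identifier)
    eui.0123456789ABCDEF                        (eui. + 16 hexadecimal digits)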

5. Click Finish.
   If OK is clicked, either the Edit iSCSI Targets window or the Confirm window appears. If the Confirm window appears, proceed to the next step. If the Edit iSCSI Targets window appears, go to step 3 and edit the settings again.


6. In the Task Name text box, enter the task name.
   You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

7. Click Apply.
   If the Go to tasks window for status check box is selected, the Tasks window appears.

Removing hosts from iSCSI targets

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Edit Host Groups window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Select hosts in the Hosts tab.
3. Display the Remove Hosts window by performing one of the following:
   • Click Remove Hosts.
   • Click Remove Hosts (iSCSI).
   • Click More Actions, then select Remove Hosts (iSCSI).
   • From the Actions menu, select Ports/Host Groups/iSCSI, then Remove Hosts.
4. Click Finish.
   The Confirm window appears.
5. In the Task Name text box, enter the task name.
   You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

6. Click Apply.
   If the Go to tasks window for status check box is selected, the Tasks window appears.


Deleting an iSCSI target

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Caution: This task cannot be performed if either of the following applies:
• Host I/O processing is being performed.
• Hosts are reserved (mounted) in the iSCSI target.

Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Expand the target host group and click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Select the iSCSI target that you want to delete.
3. Display the Delete iSCSI Targets window by performing one of the following:
   • Click More Actions, then select Delete iSCSI Targets.
   • From the Actions menu, select Ports/Host Groups/iSCSI, then Delete iSCSI Targets.
4. In the Delete iSCSI Targets window, confirm the settings, and in Task Name, type a unique name for this task or accept the default, then click Apply.
   If Go to tasks window for status is checked, the Tasks window opens.
5. Click OK to close the message.

Deleting login iSCSI names

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Note: If you disconnect a host that has been connected through a cable to your storage system, the iSCSI name for the host remains on the Login WWNs/iSCSI Names tab. Use the Delete Login iSCSI Names window to delete, from the Login WWNs/iSCSI Names tab, a login iSCSI name for a host that is no longer connected to your storage system.


Procedure

1. Open the Edit Host Groups window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Select the Login WWNs/iSCSI Names tab. To confirm the statuses of iSCSI names, click View Login iSCSI Name Statuses.

3. Select the iSCSI names you want to delete.
4. Display the Delete Login iSCSI Names window by performing one of the following:
   • Click Delete Login iSCSI Names.
   • From the Actions menu, select Ports/Host Groups/iSCSI, then Delete Login iSCSI Names.
5. In the Delete Login iSCSI Names window, confirm the settings, and in Task Name, type a unique name for this task or accept the default, then click Apply.

6. Click OK to close the message.

Adding a selected host to a host group

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Expand the target host group and click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Select the Hosts tab, or select a port from the tree and then select the Hosts tab.

3. Select a host that you want to add.
4. Select Add to Host groups.


5. Select the desired host groups from the Available Host Groups table, and then click Add.
   Selected host groups are listed in the Selected Host Groups table.
   If you select a row and click Detail, the Host Group Properties window appears.

6. Click Finish.
   The Confirm window appears.

7. In the Add to Host groups window, confirm the settings, and in Task Name type a unique name for this task or accept the default, and then click Apply.
   If Go to tasks window for status is checked, the Tasks window opens.

8. Click OK to close the message.

Adding a host to the selected iSCSI target

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Expand the target host group and click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Select the Hosts tab.
3. Select the iSCSI target for the host you want to add.
4. Select Add to Hosts.
5. Select the desired host from the Available Hosts table, and then click Add.
   Selected hosts are listed in the Selected Hosts table.

If the desired host has never been connected with a cable to any port in the storage system, perform the following steps:
   a. Click Add New Host under the Available Hosts list.
      The Add New Host dialog box opens.
   b. Select the format from iqn or eui. Enter the desired HBA iSCSI name in the HBA iSCSI Name box.


   c. If necessary, enter a nickname for the host bus adapter in the Host Name box.
   d. Click OK to close the Add New Host dialog box.
   e. Select the desired host bus adapter from the Available Hosts list.

6. Click Finish.
   The Confirm window appears.

7. In the Add to Hosts window, confirm the settings, and in Task Name type a unique name for this task or accept the default, and then click Apply.
   If Go to tasks window for status is checked, the Tasks window opens.

8. Click OK to close the message.
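After the host is registered in the iSCSI target and LUN paths are in place, you can confirm the configuration from the host side. The following is a minimal sketch for a Linux initiator using open-iscsi; the portal IP address and target name are placeholders, and the exact commands on your host OS may differ:

    # Discover targets exposed by the storage port (portal IP is an example)
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

    # Log in to a discovered target (target name is an example)
    iscsiadm -m node -T iqn.2016-10.com.example:storage.target01 -p 192.0.2.10:3260 --login

    # Verify the active session
    iscsiadm -m session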

Setting the T10 PI mode on a port

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.
• Access to the Fibre Channel board port is required.
• The port speed must be 16 Gbps.

Caution: If you change the T10 PI mode of one port, the T10 PI mode of the other port paired with the changed port also needs to be changed. You must verify the mode of each port in the pair before changing the T10 PI mode, and make sure the ports in each pair are in the same resource group. The following are the pairs of port IDs. If you change the setting on one of the ports in a pair, the setting on the other port in the pair is also changed:
• Port IDs 1x and 3x (where x is a letter from A to R). For example, 1A and 3A are paired with each other.
• Port IDs 2x and 4x (where x is a letter from A to R). For example, 2B and 4B are paired with each other.
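To make the pairing rule concrete, the following is a small illustrative shell helper (hypothetical, not part of any Hitachi tool) that prints the partner port for a given port ID under the rule above:

    # pair_port: print the T10 PI partner of a port ID such as 1A or 4B
    # (illustrative helper based on the 1x<->3x and 2x<->4x rule above)
    pair_port() {
        num=${1%?}     # leading cluster number, for example 1
        letter=${1#?}  # trailing letter, for example A
        case "$num" in
            1) echo "3$letter" ;;
            3) echo "1$letter" ;;
            2) echo "4$letter" ;;
            4) echo "2$letter" ;;
            *) echo "unknown port: $1" >&2; return 1 ;;
        esac
    }

    pair_port 1A   # prints 3A
    pair_port 2B   # prints 4B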

Note: If the T10 PI mode is enabled between the path of the target port and an LDEV, you cannot disable the T10 PI mode of the port.

Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Expand the target host group and click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:


   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.
2. Select the Ports tab.
3. Select the desired port.

   To change the T10 PI mode of multiple ports at once, do not mix ports whose T10 PI modes are enabled with ports whose modes are disabled.

4. Click Edit Ports.
5. Click OK on the message window.
6. Select Enable or Disable on the Edit T10 PI Mode window.
7. Click Finish. The Confirm window appears.
8. In the Confirm window, confirm the settings. In the Task Name, type a unique name for this task or accept the default, then click Apply. If Go to tasks window for status is checked, the Tasks window opens.

Deleting logical groups

You can delete logical groups when they are no longer required.

Procedure

1. On the Resources tab, select Logical Groups.
2. Depending on the type of logical group you want to delete (public or private), do one of the following:
   • Click Public Logical Groups in the navigation pane, and then select a logical group from the list of logical groups in the application pane.
   • Expand Private Logical Groups in the navigation pane, select a logical group folder from the navigation pane, and then select a logical group from the list of logical groups in the application pane.

3. Click Delete Logical Groups.

Tip: Deleting a user group deletes the private logical group folder and all the logical groups in that folder. You cannot delete a top-level folder for each user group under the Private Logical Groups folder.

4. Click OK to confirm and delete the logical group.

Result

The logical group you deleted no longer appears in the list of logical groups.

Creating iSCSI targets and registering hosts in an iSCSI target

Before you begin
• The Storage Administrator (Provisioning) role is required to perform this task.


• The Security Administrator (View and Modify) role is required to perform this task.
• The installed shared memory capacity must be 16 GB or more. If additional shared memory is required, contact customer support.

Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Expand the target host group and click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Enter the iSCSI target alias in the iSCSI Target Alias box. If the Use Default Name check box is selected, the iSCSI target alias is entered by default.

3. Enter the iSCSI target name in the iSCSI Target Name box. Select the format from iqn or eui. If the Use Default Name check box is selected, the iSCSI target name is entered by default.

4. Select the resource group in which an iSCSI target is created. If you select Any, the ports to which you can add iSCSI targets, within all ports assigned to the user, are displayed in the Available Ports list. If you select other than Any, the ports to which you can add iSCSI targets, within the ports assigned to the selected resource group, are displayed in the Available Ports list.

5. Select a host mode from the Host Mode list. When selecting a host mode, you must consider the platform and some other factors.

6. If necessary, click Host Mode Options and select host mode options.
   When you click Host Mode Options, the dialog box expands to display the list of host mode options. The Mode No. column indicates option numbers. Select an option you want to specify and click Enable.

7. Select the hosts to be registered in the iSCSI target. If the desired host has ever been connected with a cable to another port in the storage system, select the desired host bus adapter from the Available Hosts list. If there is no host to be registered, skip this step and move to the next step; otherwise, an iSCSI target with no host would be created. If the desired host has never been connected through a cable to any port in the storage system, perform the following steps:
   a. Click Add New Host under the Available Hosts list. The Add New Host dialog box opens.
   b. Select the format from iqn or eui.
   c. Enter the desired HBA iSCSI name in the HBA iSCSI Name box.
   d. If necessary, enter a nickname for the host bus adapter in the Host Name box.


   e. Click OK to close the Add New Host dialog box.
   f. Select the desired host bus adapter from the Available Hosts list.

8. Select the port to which you want to add the iSCSI target. If you select multiple ports, you can add the same iSCSI target to multiple ports in one operation.

9. Select CHAP, None, or Comply with Host Setting in the Authentication Method list. If CHAP is selected, specify the following (a host-side configuration sketch follows this procedure):
   • Mutual CHAP: Select Enable or Disable. If Enable is selected, the mutual authentication mode is used. If Disable is selected, the unidirectional authentication mode is used.
   • User Name: If Disable is selected in Mutual CHAP, this item is optional. If Enable is selected in Mutual CHAP, this item must be specified.
   • Secret and Re-enter Secret: If Disable is selected in Mutual CHAP, these items are optional. If Enable is selected in Mutual CHAP, they must be specified.

10. Select the CHAP users to be registered in the iSCSI target. If the CHAP user has ever been connected with a cable to another port in the storage system, select the desired CHAP user from the Available CHAP Users list. If there is no CHAP user to be registered, skip this step and move to step 11; otherwise, an iSCSI target with no CHAP user would be created. If the desired CHAP user has never been connected through a cable to any port in the storage system, perform the following steps:
   a. Click Add New CHAP User under the Available CHAP Users list. The Add New CHAP User dialog box opens.
   b. Specify a user name and secret.
   c. Click OK to close the Add New CHAP User dialog box.
   d. Select the desired CHAP user from the Available CHAP Users list.

11. Click Add to add the iSCSI target. By repeating steps 2 to 10, you can create multiple iSCSI targets. If you select a row and click Detail, the iSCSI Target Properties window appears. If you select a row and click Remove, a message appears asking whether you want to remove the selected row or rows. To remove the row, click OK.

12. Click Finish to display the Confirm window. To continue to add LUN paths, click Next.

13. Confirm the settings and enter the task name in the Task Name box. A task name can consist of up to 32 ASCII characters (letters, numerals, and symbols). Task names are case-sensitive. (date) - (task name) is entered by default. If you select a row and click Detail, the iSCSI Target Properties window appears.

14. Click Apply in the Confirm window. If the Go to tasks window for status check box is selected, the Tasks window appears.
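As referenced in step 9, the CHAP user name and secret registered on the iSCSI target must match what the host initiator presents. The following is a minimal host-side sketch for a Linux initiator using open-iscsi (settings in /etc/iscsi/iscsid.conf); the user name and secret are placeholders, and the *_in settings apply only when Mutual CHAP is enabled:

    # /etc/iscsi/iscsid.conf (excerpt) -- must match the values set on the iSCSI target
    node.session.auth.authmethod = CHAP
    node.session.auth.username = chapuser01
    node.session.auth.password = example-secret-1234

    # Only for Mutual CHAP (the target authenticates itself to the host):
    node.session.auth.username_in = targetuser01
    node.session.auth.password_in = example-secret-5678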


Editing port settings

Before you begin

• The Storage Administrator (System Resource Management and Provisioning) role is required to perform this task.
• The Security Administrator (View and Modify) role is required to perform this task.

Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Expand the target host group and click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Select the Ports tab.
3. Select the desired port.
4. Display the Edit Ports window by performing one of the following:
   • Click Edit Ports.
   • From the Actions menu, select Ports/Host Groups/iSCSI Targets, then Edit Ports.
5. Select the check box for each option to change, and specify the values. The following items can be changed:
   • IPv4 Settings: Specifies the IP Address, Subnet Mask, or Default Gateway.
   • IPv6 Mode: Specifies whether this mode is enabled or disabled.
   • IPv6 Settings: Specifies the Link Local Address, Global Address, Global Address 2, or Default Gateway if IPv6 Mode is set to Enable.
   • Port Security: Specifies enable or disable.
   • Port Speed: Specifies the data transfer speed.
   • TCP Port Number: Specifies the TCP port number.
   • Selective ACK: Specifies enable or disable.
   • Delayed ACK: Specifies enable or disable.
   • Maximum Window Size: Specifies the size of the maximum window.
   • Ethernet MTU Size: Specifies the MTU size.
   • Keep Alive Timer: Specifies the keep alive timer.
   • VLAN Tagging Mode: Specifies enable or disable.


   • iSNS Server: Specifies enable or disable. If this option is set to Enable, specify the IP Address and TCP Port Number.
   • CHAP User Name: Specifies the CHAP user name.
   • Secret and Re-enter Secret: Specifies the secret that is used for host authentication.
6. Click Finish. A message appears, confirming whether to switch the LUN security. Clicking OK opens the Confirm window.
7. In the Confirm window, confirm the settings, and in Task Name type a unique name for this task or accept the default, then click Apply. If Go to tasks window for status is checked, the Tasks window opens.

Adding CHAP users

Before you begin

• The Storage Administrator (Provisioning) role is required to perform this task.
• The Security Administrator (View and Modify) role is required to perform this task.

Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Expand the target host group and click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Select the iSCSI target in which to register CHAP users.
3. Display the Add CHAP Users window by performing one of the following:
   • Click More Actions, then select Add CHAP Users.
   • From the Actions menu, select Ports/Host Groups/iSCSI, Authentication, then Add CHAP Users.
4. In the Available CHAP Users table, select the CHAP user row, and click Add. The selected CHAP user is registered in the Selected CHAP Users table. If the CHAP user does not exist, perform the following steps to register a new CHAP user:
   a. Click Add New CHAP User under the Available CHAP Users table. The Add New CHAP User dialog box opens.
   b. Specify User Name and Secret.
   c. Click OK to close the Add New CHAP User dialog box.

5. Click Finish to display the Confirm window.


6. Confirm the settings and enter the task name in the Task Name box. A task name can consist of up to 32 ASCII characters (letters, numerals, and symbols). Task names are case-sensitive. (date) - (task name) is entered by default.
7. Click Apply in the Confirm window. If the Go to tasks window for status check box is selected, the Tasks window appears.

Editing CHAP users

Before you begin

• The Storage Administrator (Provisioning) role is required to perform this task.
• The Security Administrator (View and Modify) role is required to perform this task.

Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Expand the target host group and click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Select the CHAP users.
3. Display the Edit CHAP Users window by performing one of the following:
   • Click Edit CHAP Users.
   • From the Actions menu, select Ports/Host Groups/iSCSI, Authentication, then Edit CHAP Users.
4. Specify User Name and Secret.
5. Click Finish to display the Confirm window.
6. Confirm the settings and enter the task name in the Task Name box. A task name can consist of up to 32 ASCII characters (letters, numerals, and symbols). Task names are case-sensitive. (date) - (task name) is entered by default.

7. Click Apply in the Confirm window. If the Go to tasks window for status check box is selected, the Tasks window appears.


Removing CHAP users

Before you begin

• The Storage Administrator (Provisioning) role is required to perform this task.
• The Security Administrator (View and Modify) role is required to perform this task.

Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Expand the target host group and click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Select the CHAP users.
3. Display the Remove CHAP Users window by performing one of the following:
   • Click Remove CHAP Users.
   • From the Actions menu, select Ports/Host Groups/iSCSI, Authentication, then Remove CHAP Users.
4. Specify User Name and Secret.
5. Click Finish to display the Confirm window.
6. Confirm the settings and enter the task name in the Task Name box. A task name can consist of up to 32 ASCII characters (letters, numerals, and symbols). Task names are case-sensitive. (date) - (task name) is entered by default.

7. Click Apply in the Confirm window. If the Go to tasks window for status check box is selected, the Tasks window appears.

Removing target CHAP users

Before you begin

• The Storage Administrator (Provisioning) role is required to perform this task.
• The Security Administrator (View and Modify) role is required to perform this task.


Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Expand the target host group and click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Select the iSCSI target.
3. Display the Remove Target CHAP Users window by performing one of the following:
   • Click More Actions > Remove Target CHAP Users.
   • From the Actions menu, select Ports/Host Groups/iSCSI, Authentication, then Remove Target CHAP Users.
4. Confirm the settings and enter the task name in the Task Name box. A task name can consist of up to 32 ASCII characters (letters, numerals, and symbols). Task names are case-sensitive. (date) - (task name) is entered by default.

5. Click Apply in the Confirm window. If the Go to tasks window for status check box is selected, the Tasks window appears.

Managing LUN paths

This module describes path management: how to specify LUN paths, edit host modes and host mode options, and specify LUN paths after replacing or exchanging a host bus adapter.

About LUN path management

When you allocate a volume to a host, Hitachi Command Suite allows you to assign or edit LUN paths between one or more volumes and one or more hosts.

LUN paths provide volume access for the host by pairing storage ports and host ports. For example, one or more storage ports can be mapped to one or more host ports or iSCSI targets.

After volumes have been allocated to a host, you can edit existing LUN path information to add new paths or delete existing paths from a list of allocated volumes in the storage systems tree, hosts tree, logical groups tree, or file servers tree, and from a list of volumes returned from a search. However, you cannot edit the LUN paths between NAS modules and User LUs.


To set a new LUN path between a specific volume and a host or file server, perform the allocate volume operation; to delete all the LUN path settings between a specific volume and a host or file server, perform the unallocate volume operation. In contrast, when editing LUN paths, you can change the number of LUN paths and the connection-destination ports as your operations require. In particular, edit LUN paths to achieve the following:
• Improvement of I/O performance
  If the usage frequency of an application increases, you can add LUN paths to increase the data transmission speed and improve the I/O performance. If the usage frequency decreases, you can delete the LUN paths.
• Enhancement of system redundancy
  To prepare for an error in storage system or host ports, you can enhance the system redundancy by configuring multiple LUN paths, with each using a different port.
• Response to failures
  When a port is disabled because of an error, you can configure LUN paths that temporarily use alternate ports to continue system operation.

Because allocated volumes belong to a specific host group or iSCSI target, the same target host port is set for all volumes in any given host group or iSCSI target. Therefore, when adding or deleting a host port for one or more volumes, you must select all of the volumes that belong to that host group or iSCSI target to maintain consistent LUN path assignments for all volumes.

When an FC or FCoE connection is used, you can change host modes and host mode options depending on the situation, for example, when an application is added to the host or the operating system is upgraded.

You can add or exchange HBAs to meet performance and throughput requirements. To edit LUN paths when replacing a failed HBA or performing a planned HBA replacement, use any of the following options, depending on your task purpose:
• Add HBA
• Exchange HBA
• Remove HBA

You can reuse the LUN paths that were set for the old HBA for the new HBA, and delete LUN paths set for an HBA that is no longer necessary. You can also edit LUN paths for multiple HBAs collectively.

Editing LUN paths

You can manage the LUN paths between storage systems and hosts by adding or deleting them as needed.

You manage LUN paths on your storage system by controlling the connections between the storage systems and hosts. This allows you to better adapt to changing storage system and network conditions.


As conditions change, you can create new paths or delete selected paths, on a LUN path basis, that are established between multiple host bus adapters (HBAs) of a host within a host group. For example, some HBAs may become unnecessary when the related applications are unneeded or infrequently used, and you can delete the selected LUN paths in such cases.

Before you begin

Allocate volumes to the existing hosts.

Procedure

1. On the Resources tab, expand the tree for storage systems, hosts, file servers, or logical groups to display volumes.

2. Select one or more volumes for which you want to edit LUN paths. If selecting multiple volumes, they must belong to the same configuration host group.

3. In the selected volume, click More Actions, and select Edit LUN Paths.
4. In the Edit LUN Paths dialog box, use the topographical graph or selection table view to map storage ports to host ports. In both views, you can use links to view WWN nickname information to confirm the target HBA.

Tip: When editing the LUN path of a global-active device paired volume, you can specify the settings while referencing the LUN paths of the other volume.

   a. In Topological Graph (or graph view), click on a storage port row to add it to the LUN Path Editor panel. Connect the line to the target HBA. Another line is displayed, which you can connect to another HBA or discard with a click.
   b. In Selection Table (or table view), first select a storage port, then select a host port row, and click Add to move the mapping to the Selected Host Ports list.

If you add an incorrect mapping and want to delete it, click the connector line in graph view, use the remove button in table view, or click Cancel to close the dialog box and start over.

5. To delete an existing path, which is indicated by a green line in graph view, or by the 'In Use' state in the Selected Host Ports list in Selection Table, do the following:

Tip: To delete all the LUN paths between a specific volume and a host or file server, so that there are no paths left, delete the LUN path settings by unallocating the volumes.

   a. In graph view, click the green line. The line turns thin and gray, which indicates that the LUN path will be removed.


   b. In table view, select the mapping row and click Remove to change the state from 'In Use' to 'Remove.'

6. Change any other required settings.
7. Click Show Plan and confirm that the information in the plan summary is correct.
   Optionally, update the task name and provide a description.

8. Expand Schedule to specify the task schedule.
   The task can be run immediately or scheduled for later. The default setting is Now.

9. Click Submit.
   If you selected Now, the editing of LUN paths begins.

10. You can check the progress and result of the edit LUN paths task on the Tasks & Alerts tab.
    Verify the results for each task by viewing the details of the task.

Result

The LUN path settings you edited are displayed correctly.

Editing LUN paths when exchanging a failed HBA

Exchange a failed HBA with a new HBA and restore the LUN path settings to the new HBA.

Before you begin
• Identify the new WWN for the HBA that is being added.
• Identify the WWN from which to model paths.
• Verify that the new HBA is physically connected.

Procedure

1. On the Resources tab, select Hosts.
2. After selecting the target operating system, select the target host row, and click More Actions > Exchange HBAs.
3. Enter the New WWN or select a WWN from the table.
4. Enter the WWN from which to model paths or select a WWN from the list.
   The selected WWN will be removed from the host.

5. Click Add.
6. In the WWN Pairs list, verify that the listed HBA WWN combinations before and after the replacement are paired correctly.

Tip:
• If the WWN information is updated when the host is refreshed, the target WWN might not be displayed in the list. In this case, you need to manually enter the WWN of the failed HBA.


• To edit a WWN nickname from the list of WWN Pairs, click Edit WWN Nicknames.

7. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.

8. (Optional) Update the task name and provide a description.
9. (Optional) Expand Schedule to specify the task schedule.
   You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.

10. Click Submit.
    If the task is scheduled to run immediately, the process begins.

11. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result

When the task completes, the LUN path settings are restored to the new HBA and the original WWN is removed from the host.

Editing LUN paths when adding or exchanging an HBA

You can add or exchange HBAs to meet performance and throughput requirements. When adding an HBA, specify the WWN of the new HBA and then select a WWN of an existing HBA from which to model paths.

Before you begin
• Identify the new WWN for the HBA that is being added.
• Identify the WWN from which to model paths.
• Verify that the new HBA is physically connected.

Procedure

1. On the Resources tab, select Hosts.
2. After selecting the target operating system, select the target host row, and click More Actions > Add HBAs.
3. Enter the New WWN or select a WWN from the list.
4. Enter the WWN from which to model paths or select a WWN from the list.
5. Click Add.
6. In the WWN Pairs list, verify that the listed HBA WWN combinations before and after the replacement are paired correctly.

Tip:
• If the WWN information is updated when the host is refreshed, the target WWN might not be displayed in the list. In this case,


you need to manually enter the WWN of the HBA you are adding.
• To edit a WWN nickname from the list of WWN Pairs, click Edit WWN Nicknames.

7. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.

8. (Optional) Update the task name and provide a description.
9. (Optional) Expand Schedule to specify the task schedule.
   You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.

10. Click Submit.
    If the task is scheduled to run immediately, the process begins.

11. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result

When the task completes, the new WWN is added and the related LUN path settings are restored to the host.

Next steps

If you are performing a planned HBA replacement, remove any unnecessary WWNs and LUN path settings.

Removing LUN paths after adding an HBA

Remove a WWN from the host and also delete the related LUN paths.

Before you begin
• Identify the WWN of the HBA you are removing.

Procedure

1. On the Resources tab, select Hosts.
2. After selecting the target operating system, select the target host row, and click More Actions > Remove HBAs.
3. Enter the WWN to be removed from the host or select a WWN from the list.
4. (Optional) Select the Delete Host Group check box to delete the selected host group. By default, the check box is clear.
5. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.
6. (Optional) Update the task name and provide a description.
7. (Optional) Expand Schedule to specify the task schedule.


   You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.

8. Click Submit.
   If the task is scheduled to run immediately, the process begins.

9. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.

Result

When the task completes, the WWN and related LUN path settings are removed from the host.

Releasing a LUN reservation by a host

The following explains how to forcibly release a LUN reservation by a host.

Before you begin

You must have the Storage Administrator (System Resource Management) role to perform this task.

Caution: If you release a LUN reservation by a host, the host that is connected to the LDEV by the LUN path is affected.

Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Expand the target host group and click Ports/Host Groups/iSCSI Targets.
   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Click Ports/Host Groups/iSCSI Targets.

2. Click the link of a Host Group Name.
3. Select the LUNs tab.
4. Click More Actions, and then click View Host-Reserved LUNs.
5. In the Host-Reserved LUNs window, select a LUN, and then click Release Host-Reserved LUNs.
6. Confirm the settings and enter a unique Task Name.
   A task name can consist of up to 32 ASCII characters (letters, numerals, and symbols). Task names are case-sensitive. (date) - (task name) is entered by default.


7. Click Apply.
   If Go to tasks window for status is checked, the Tasks window opens.

Configuring LUN security

LUN security on ports

To protect mission-critical data in your storage system from illegal access, apply security policies to logical volumes. Use LUN Manager to enable LUN security on ports to safeguard LUs from illegal access.

If LUN security is enabled on ports, host groups affect which host can access which LUs. Hosts can access only the LUs associated with the host group to which the hosts belong. Hosts cannot access LUs associated with other host groups. For example, hosts in the hp-ux host group cannot access LUs associated with the windows host group. Also, hosts in the windows host group cannot access LUs associated with the hp-ux host group.

Examples of enabling and disabling LUN security on ports

Enabling LUN security

In the following example, LUN security is enabled on port CL1-A. The two hosts in the hg-lnx host group can access only three LUs (00:00:00, 00:00:01, and 00:00:02). The two hosts in the hg-hpux host group can access only two LUs (00:02:01 and 00:02:02). The two hosts in the hg-solar host group can access only two LUs (00:01:05 and 00:01:06).
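In outline, the example maps host groups to LUs as follows:

    Port CL1-A (LUN security: enabled)
      hg-lnx   (2 hosts) -> 00:00:00, 00:00:01, 00:00:02
      hg-hpux  (2 hosts) -> 00:02:01, 00:02:02
      hg-solar (2 hosts) -> 00:01:05, 00:01:06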


Disabling LUN security

Typically, you do not need to disable LUN security on ports. If LUN security is disabled on a port, the connected hosts can access only the LUs associated with host group 0, and cannot access LUs associated with any other host group.

Host group 0 is the only host group reserved, by default, for each port. If you use the LUN Manager window to view a list of host groups in a port, host group 0, indicated by the number 00, usually appears at the top of the list.

The default name of host group 0 consists of the port name, a hyphen, and the number 00. For example, the default name of host group 0 for port 1A is 1A-G00. However, you can change the default name of host group 0.


LUN security is disabled, by default, on each port. When you configure your storage system, you must enable LUN security on each port to which hosts are connected.

Enabling LUN security on a port

Before you begin

One of the following roles is required to perform this task:
• Storage Administrator (System Resource Management)
• Storage Administrator (Provisioning)

To protect mission-critical data in your storage system from illegal access, secure the logical volumes in the storage system. Use LUN Manager to secure LUs from illegal access by enabling LUN security on ports.

By default, LUN security is disabled on each port. When registering hosts in multiple host groups, you must enable LUN security (set the switch to enabled). When you change LUN security from disabled to enabled, you must specify the WWN of the host bus adapter.

Caution: It is best to enable LUN security on each port when configuring your storage system. Although you can enable LUN security on a port while host I/O is in progress, I/O is rejected by the security guard after LUN security is enabled.

Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Expand the target host group and click Ports/Host Groups/iSCSI Targets.
In Device Manager - Storage Navigator:
a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Ports/Host Groups/iSCSI Targets.

2. In the Ports/Host Groups/iSCSI Targets window, click the Ports tab.
3. Select the desired port, and then click Edit Ports.
4. Select the Port Security check box, and then select Enable.
5. Click Finish. A message appears, confirming whether to switch the LUN security. Clicking OK opens the Confirm window.
6. In the Confirm window, confirm the settings, in Task Name type a unique name for this task or accept the default, and then click Apply.
If Go to tasks window for status is checked, the Tasks window opens.


Disabling LUN security on a port

Before you begin

One of the following roles is required to perform this task:
• Storage Administrator (System Resource Management)
• Storage Administrator (Provisioning)

Caution: Do not disable LUN security on a port when host I/O is in progress.

Procedure

1. Open the Ports/Host Groups/iSCSI Targets window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Click Ports/Host Groups/iSCSI Targets.
In Device Manager - Storage Navigator:
a. Click Storage Systems, and then expand the Storage Systems tree.
b. Click Ports/Host Groups/iSCSI Targets.

2. In the Ports/Host Groups/iSCSI Targets window, click the Ports tab.
3. Select the desired port, and then click Edit Ports.
4. Select the Port Security check box, and then select Disable.
5. Click Finish. If disabling LUN security, a message appears, indicating that only host group 0 (the group whose number is 00) is to be enabled. Clicking OK opens the Confirm window.
6. In the Confirm window, confirm the settings, in Task Name type a unique name for this task or accept the default, and then click Apply.
If Go to tasks window for status is checked, the Tasks window opens.

Setting Fibre Channel authentication

When configuring a Fibre Channel environment, use the Authentication window to set user authentication on host groups, Fibre Channel ports, and fabric switches of the storage system.

Note: Authentication operations are performed in a Device Manager - Storage Navigator secondary window. For more information about enabling and using secondary windows, see the System Administrator Guide.

The hosts to be connected must be configured for authentication by host groups (and for authentication of host groups by the host, if required). For details on how to configure the host for CHAP authentication, see the documentation of the operating system and Fibre Channel driver in your environment.

Note: In FCoE networks, user authentication is not supported.

The following topics provide information for managing user authentication on host groups, Fibre Channel ports, and fabric switches:
• User authentication on page 377
• Fibre Channel authentication on page 385
• Fibre channel port authentication on page 391
• Setting Fibre Channel port authentication on page 391
• Registering user information on a Fibre Channel port on page 392
• Registering user information on a fabric switch on page 393
• Clearing fabric switch user information on page 394
• Setting the fabric switch authentication mode on page 395
• Enabling or disabling fabric switch authentication on page 396

User authentication

When configuring a Fibre Channel environment, use LUN Manager to set user authentication for ports between your storage system and hosts. In a Fibre Channel environment, the ports and hosts use null DH-CHAP, that is, CHAP (Challenge Handshake Authentication Protocol) with a null Diffie-Hellman algorithm, as the authentication method.
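For reference, CHAP itself (RFC 1994) is a challenge-response exchange: the authenticator sends a random challenge, and the peer proves knowledge of the shared secret by returning the MD5 hash of the identifier, the secret, and the challenge. The following Python sketch shows only that calculation; it is a protocol illustration, not an interface exposed by the storage system.

    import hashlib
    import os

    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        # CHAP (RFC 1994): response = MD5(identifier || secret || challenge)
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    # The authenticator (for example, a host group) issues a random challenge...
    challenge = os.urandom(16)
    # ...and the peer (the host) answers with a hash proving it knows the secret.
    response = chap_response(1, b"my-chap-secret-12", challenge)
    # The authenticator recomputes the hash with its copy of the secret and compares.
    assert response == chap_response(1, b"my-chap-secret-12", challenge)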

User authentication is performed in a Fibre Channel environment in three phases:
1. A host group of the storage system authenticates a host that attempts to connect (authentication of hosts).
2. The host authenticates the connection-target host group of the storage system (authentication of host groups).
Caution: Because host bus adapters do not currently support this function, this authentication phase is unusable in the Fibre Channel environment.
3. A target port of the storage system authenticates a fabric switch that attempts to connect (authentication of fabric switches).

The storage system performs user authentication by host groups. Therefore, the host groups and hosts need to have their own user information for performing user authentication.

When a host attempts to connect to the storage system, the authentication of hosts phase starts. In this phase, first it is determined whether the host group requires authentication of the host. If it does not, the host connects to the storage system without authentication. If it does, authentication is performed for the host, and when the host is authenticated successfully, processing goes on to the next phase.

After successful authentication of the host, if the host requires user authentication for the host group that is the connection target, the authentication of host groups phase starts. In this way, the host groups and hosts authenticate each other, that is, mutual authentication. In the authentication of host groups phase, if the host does not require user authentication for the host group, the host connects to the storage system without authentication of the host group.
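The two phases can be read as a short decision sequence. The sketch below models it; the function and flag names are illustrative assumptions, not LUN Manager terminology.

    def try_connect(hg_authenticates_host: bool,
                    host_auth_ok: bool,
                    host_requires_mutual: bool,
                    hg_auth_ok: bool) -> bool:
        # Phase 1: the host group authenticates the host, if it requires that.
        if hg_authenticates_host and not host_auth_ok:
            return False          # host rejected by the host group
        # Phase 2: only if the host demands mutual authentication does it
        # authenticate the host group in return.
        if host_requires_mutual and not hg_auth_ok:
            return False          # host group rejected by the host
        return True               # connection established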

The settings for authentication of host groups are needed only when you want to perform mutual authentication. The following topics explain the settings required for user authentication.
• Settings for authentication of hosts on page 378
• Settings for authentication of ports (required if performing mutual authentication) on page 378

Settings for authentication of hosts

On the storage system, use LUN Manager to specify whether to authenticate hosts on each host group.

On a host group that performs authentication, register user information (group name, user name, and secret) of the hosts that are allowed to connect to the host group. A secret is a password used in CHAP authentication. When registering user information, you can also specify whether to enable or disable authentication on a host basis.

On hosts, configure the operating system and Fibre Channel host bus adapter driver for authentication by host groups with CHAP. You need to specify the user name and secret of the host used for CHAP. For details, see the documentation of the operating system and Fibre Channel host bus adapter driver in your environment.

Settings for authentication of ports (required if performing mutual authentication)

On the storage system, use LUN Manager to specify user information (user name and secret) of each host group.

On hosts, configure the operating system and Fibre Channel host bus adapter driver for authenticating host groups with CHAP. You need to specify the user name and secret of the host group that is the connection target. For details, see the documentation of the operating system and Fibre Channel host bus adapter driver in your environment.


Host and host group authentication

When a host attempts to connect to the storage system, the connection result of the authentication of the host differs depending on the host group settings. The following diagram illustrates the flow of authentication of hosts in a Fibre Channel environment. The connection use cases (Cases A, B, and C) are described below the diagram.

Authenticating hosts (Cases A, B, and C)

The following cases describe examples of performing authentication of hosts.

Case A - The user information of the host is registered on the host group, and authentication of the host is enabled.

The host group authenticates the user information sent from the host. If authentication of the host is successful, either of the following occurs:


• When the host is configured for mutual authentication, authentication of the host group is performed.
• When the host is not configured for mutual authentication, the host connects to the storage system.
If the host is not configured for authentication by host groups with CHAP, the authentication fails and the host cannot connect to the storage system.

Case B - The user information of the host is registered on the host group, but authentication of the host is disabled.

The host group does not perform authentication of the host. The host will connect to the storage system without authentication regardless of whether the host is configured for authentication by host groups with CHAP.

Case C - The user information of the host is not registered on the host group.

Regardless of the setting on the host, the host group performs authentication of the host, but this results in failure. The host cannot connect to the storage system.

Not authenticating hosts (Case D)

Case D is an example of connecting via a host group that does not perform authentication of hosts. The host will connect to the storage system without authentication of the host regardless of whether the host is configured for authentication by host groups with CHAP. In this case, though you do not need to register user information of the host on the host group, you can register it.

You should register user information of all hosts to be connected to a host group that performs authentication of hosts. To allow a specific host to connect to such a host group without authentication, configure the host group and the host as follows.

On the host group: Register the user information of the host you want to allow to connect without authentication, and then disable the authentication setting of the host.

Example of authenticating hosts in a Fibre Channel environment

Following is an example of authentication of hosts in a Fibre Channel environment. In this figure, WWNs of host bus adapters (HBAs) are abbreviated, such as A, B, and so on.


In the example, host group 1 performs authentication of hosts, and host group 2 does not.

The user information of host A is registered on host group 1, and the authentication setting is enabled. Therefore, if the authentication of the host is successful, host A can connect to the storage system (or the processing goes on to the authentication of the host group). As a precondition of successful authentication, host A should be configured for authentication by host groups with CHAP.

The user information of host B is also registered on host group 1, but the authentication setting is disabled. Therefore, host B can connect to the storage system without authentication.

The user information of host C is not registered on host group 1. Therefore, when host C tries to connect to the storage system, the authentication fails and the connection request is denied regardless of the setting on host C.

Host D is attached to host group 2, which does not perform authentication of hosts. Therefore, host D can connect to the storage system without authentication.


During authentication of hosts, the connection is determined depending on the combination of the following host group settings:

• Setting of the host group in the Port tree: enable or disable

• Whether the user information of the host that attempts to connect is registered on the host group

Host group settings and connection results

The following table shows the relationships between host group settings and the connection results in authentication of hosts. Unless otherwise noted, connection results are as described regardless of whether the host is configured for authentication by host groups with CHAP.

Authentication at host group | User information of host | Host settings | Connection results
Enabled | Registered | Registered | Connected if the authentication of the host succeeded.
Enabled | Registered | Not registered | Failed to be authenticated and cannot be connected.
Enabled | Not registered | Registered | Failed to be authenticated and cannot be connected.
Disabled | --- | --- | Connected without authentication of the host. If a host is configured for authentication by host groups with CHAP, authentication of the host will fail. To allow such a host to connect to the port without authentication, do not configure it for authentication by host groups with CHAP.

---: This item does not affect the connection results, or cannot be specified.
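The table can also be encoded as a small function, which is convenient when reasoning about a planned configuration. In this sketch the argument names are illustrative assumptions, and the "---" cells are modeled by simply ignoring the corresponding argument.

    def connection_result(auth_at_host_group: bool,
                          host_user_info_registered: bool,
                          host_configured_for_chap: bool) -> str:
        # Illustrative encoding of the table above (not a Hitachi API).
        if not auth_at_host_group:
            # Last row: no authentication of the host is performed, but a host
            # that is itself configured for CHAP fails its authentication attempt.
            return "connected" if not host_configured_for_chap else "authentication fails"
        if host_user_info_registered and host_configured_for_chap:
            return "connected if authentication succeeds"   # first row
        return "cannot be connected"                        # second and third rows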

Fabric switch authentication

When a host attempts to connect to the storage system, the connection results of the authentication of the fabric switch differ depending on the fabric switch setting related to each port.

The following figure illustrates the flow of authentication of fabric switches and the resulting connections. The setting of fabric switch authentication is independent of the setting of host authentication. The connection use cases are detailed below the diagram.


Authenticating fabric switches by ports (Cases A, B, and C)

• If the user information of the fabric switch is registered on the port, and authentication of the fabric switch is enabled (Case A)
Each port authenticates the fabric switch. If the authentication of the fabric switch ends successfully, either of the following actions occurs:
○ When the fabric switch is configured for mutual authentication, processing continues to authentication of the port.
○ When the fabric switch is not configured for mutual authentication, the fabric switch connects to the storage system.
If the fabric switch is not configured for authentication with CHAP, the authentication fails and the fabric switch cannot connect to the storage system.

• If the user information of the fabric switch is registered on the port, but authentication of the fabric switch is disabled (Case B)


Each port does not perform authentication of the fabric switch. The fabric switch connects to the storage system without authentication regardless of whether the fabric switch is configured for authentication with CHAP.

• If the user information of the fabric switch is not registered on the port (Case C)
Regardless of the setting on the fabric switch, the port performs authentication of the fabric switch, but this results in failure. The fabric switch cannot connect to the storage system.

Not authenticating fabric switches by ports (Case D)

The fabric switch connects to the storage system without authentication regardless of whether the fabric switch is configured for authentication with CHAP. In this case, though you need not register the user information of the fabric switch on the port, you can register it.

During authentication of fabric switches, the connection result is determined depending on the combination of the following port settings:

• Setting of the port in the Port tree: enable or disable

• Whether the user information of the fabric switch that attempts to connect is registered on the port

Fabric switch settings and connection results

The following table shows the relationship between the combinations of port settings and the connection results in authentication of fabric switches. Unless otherwise noted, connection results are as described regardless of whether the fabric switch is configured for authentication by ports with CHAP.

Authentication at fabric switch | User information of fabric switch | Fabric switch settings | Connection results
Enabled | Registered | Registered | Connected if the authentication of the fabric switch succeeded.
Enabled | Registered | Not registered | Failed to be authenticated and cannot be connected.
Enabled | Not registered | Registered | Failed to be authenticated and cannot be connected.
Disabled | --- | --- | Connected without authentication of the fabric switch. If a fabric switch is configured for authentication by ports with CHAP, authentication of the fabric switch will fail. To allow such a fabric switch to connect to the port without authentication, do not configure it for authentication by ports with CHAP.

---: This item does not affect the connection results, or cannot be specified.

Mutual authentication of ports

If mutual authentication is required, when authentication of a host is successful, the host in return authenticates the port. In authentication of ports, when the user information (user name and secret) specified on the port side matches that stored on the host, the host allows the host group to connect.

Fibre Channel authentication

Enabling or disabling host authentication on a host group

You can specify whether to authenticate hosts on each host group. Change the user authentication settings of host groups to enable or disable authentication of hosts. By default, user authentication is disabled.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Authentication secondary window to enable or disable host authentication, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.

Procedure

1. Open the Authentication window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree, right-click the target storage system, and then select Other Functions.
b. On the menu bar, select Actions, Port/Host Group, and then Authentication.
In Device Manager - Storage Navigator:
On the menu bar, select Actions, Port/Host Group, and then Authentication.

2. In the Authentication window, click the icon to change to Modify mode.
3. In the Port tree, double-click the Storage System folder.

If the storage system contains any Fibre Channel adapters, the Fibre folder appears below the Storage System folder.


4. Double-click the Fibre folder and the Fibre Channel port icons under the Fibre folder.
When you double-click the Fibre folder, the Fibre Channel ports contained in the storage system appear as icons. If you double-click the Fibre Channel ports, host groups appear as icons. The host group name appears to the right of each icon.

The enable icon indicates that the host group authenticates hosts.
The disable icon indicates that the host group does not authenticate hosts. This is the default.

5. Right-click a host group that appears with the disable icon and select Authentication:Disable > Enable. The host group icon changes to the enable icon, and the port name appears in blue.
6. Click Apply in the Authentication window. A message appears asking whether to apply the settings to the storage system.
7. Click OK to close the message. The settings are applied to the storage system.

To return the host group setting to disabled, perform the same operation, except select the Authentication:Enable > Disable menu in step 5.

Registering host user information

On a host group that performs authentication of hosts, register user information of all hosts that you allow to connect.

You should register user information of all the hosts to be connected to a host group that performs authentication of hosts. To allow a specific host to connect to such a host group without authentication, configure the host group and the host as follows.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Authentication secondary window to register host user information, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.

On the host: It does not matter whether or not you configure the host for authentication by host groups with CHAP.

Procedure

1. Open the Authentication window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree, right-click the target storage system, and then select Other Functions.
b. On the menu bar, select Actions, Port/Host Group, and then Authentication.
In Device Manager - Storage Navigator:
On the menu bar, select Actions, Port/Host Group, and then Authentication.

2. In the Authentication window, click the icon to change to Modify mode.
3. In the Port tree, select a port or host group on which you want to register user information of a host.
The user information of hosts currently registered on the selected port or host group appears in the Authentication Information (Host) list below the Authentication Information (Target) list.

You can register user information of a host even if the port status is disabled. In this case, however, the registered user information of the host is ignored.

4. Right-click any point in the Authentication Information (Host) list and select Add New User Information. The Add New User Information (Host) dialog box opens.

5. In this dialog box, specify the following user information of the host you want to allow to connect.
• Group Name: Specify the group name of the host bus adapter. Select one from the list. The list shows all the group names of host bus adapters connected to the selected port by the cable.
• User Name: Specify the WWN of the host bus adapter with 16 characters. You can use hexadecimal characters in a user name.
• Secret: Specify the secret (that is, a password used in CHAP authentication) with 12 to 32 characters. You can use alphanumeric characters, spaces, and the following symbols in a secret: . - + @ _ = : [ ] , ~
• Re-enter Secret: Specify the secret again, for confirmation.
• Protocol: The protocol used in the user authentication. This protocol is fixed to CHAP.
6. Click OK to close the Add New User Information (Host) dialog box.

The specified user information of the host is added in blue to the Authentication Information (Host) list of the Authentication window.

7. Click Apply in the Authentication window. A message appears asking whether to apply the settings to the storage system.
8. Click OK to close the message. The settings are applied to the storage system.
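The constraints from step 5 (a 16-character hexadecimal WWN for the user name, and a 12-to-32-character secret drawn from a fixed character set) can be pre-checked before they are typed into the dialog box. A minimal Python sketch, assuming exactly the character rules stated in step 5; the sample values are hypothetical.

    import string

    ALLOWED_SECRET_CHARS = set(string.ascii_letters + string.digits + " .-+@_=:[],~")

    def valid_host_user_name(wwn: str) -> bool:
        # User Name: the HBA WWN, 16 hexadecimal characters (per step 5).
        return len(wwn) == 16 and all(c in string.hexdigits for c in wwn)

    def valid_secret(secret: str) -> bool:
        # Secret: 12 to 32 characters; alphanumerics, spaces, and . - + @ _ = : [ ] , ~
        return 12 <= len(secret) <= 32 and all(c in ALLOWED_SECRET_CHARS for c in secret)

    assert valid_host_user_name("50060e8005fa0f36")   # a hypothetical WWN
    assert valid_secret("chap.secret-01")
    assert not valid_secret("too short")              # fewer than 12 characters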

Changing host user information registered on a host group

You can change the registered user name or secret of a host, and enable and disable authentication settings after registration.


Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Authentication secondary window to change host user information, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.

You cannot change the WWN when you change user information.

Procedure

1. Open the Authentication window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree, right-click the target storage system, and then select Other Functions.
b. On the menu bar, select Actions, Port/Host Group, and then Authentication.
In Device Manager - Storage Navigator:
On the menu bar, select Actions, Port/Host Group, and then Authentication.

2. In the Authentication window, click the icon to change to Modify mode.
3. In the Port tree, expand the Fibre folder and select the port or host group on which the user information you want to change is registered.
All the user information of the hosts registered on the selected port or host group appears in the Authentication Information (Host) list below the Authentication Information (Target) list.

4. In the Authentication Information (Host) list, right-click the user information item that you want to change and select Change User Information. The Change User Information (Host) dialog box opens.

5. Change the user information of the host in the Change User Information (Host) dialog box. You can change the specifications of User Name and Secret.

6. Click OK to close the Change User Information (Host) dialog box. The changed user information of the host appears in blue in the Authentication Information (Host) list of the Authentication window.
7. Click Apply in the Authentication window. A message appears asking whether to apply the settings to the storage system.
8. Click OK to close the message. The settings are applied to the storage system.

Deleting host user information

You can delete registered user information from a host group.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Authentication secondary window to delete host user information, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.

Procedure

1. Open the Authentication window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree, right-click the target storage system, and then select Other Functions.
b. On the menu bar, select Actions, Port/Host Group, and then Authentication.
In Device Manager - Storage Navigator:
On the menu bar, select Actions, Port/Host Group, and then Authentication.

2. In the Authentication window, click the icon to change to Modify mode.
3. In the Port tree, expand the Fibre folder and select the port or host group on which the user information you want to delete is registered.
The user information of hosts currently registered on the selected port or host group appears in the Authentication Information (Host) list below the Authentication Information (Target) list.

4. In the Authentication Information (Host) list, right-click the user information item that you want to delete.

5. Select Delete User Information. The Delete Authentication Information dialog box opens, asking whether to delete the selected host user information.

6. Click OK to close the message.
7. Click Apply in the Authentication window. A message appears asking whether to apply the setting to the storage system.
8. Click OK to close the message. The setting is applied to the storage system.

Registering user information for a host group (for mutual authentication)

You can perform mutual authentication by specifying user information for host groups on the storage system ports. Specify unique user information for each host group. You can change the specified user information for host groups in the same way you initially specify it.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Authentication secondary window to register user information, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.


Procedure

1. Open the Authentication window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree, right-click the target storage system, and then select Other Functions.
b. On the menu bar, select Actions, Port/Host Group, and then Authentication.
In Device Manager - Storage Navigator:
On the menu bar, select Actions, Port/Host Group, and then Authentication.

2. In the Authentication window, click the icon to change to Modify mode.
3. In the Port tree, select the port or host group whose user information you want to specify.
The currently registered user information of the selected port or host group appears in the Authentication Information (Target) list.

4. Right-click any point in the Authentication Information (Target) list and select Specify Authentication Information.

5. In the Specify Authentication Information dialog box, specify the user information of the port or host group selected in the Port tree.
• Port Name: The port name of the selected port appears. You cannot change the port name.
• User Name: Specify the user name of the host group with 16 characters. You can use hexadecimal characters in a user name. User names are not case-sensitive.
• Secret: Specify the secret (that is, a password used in CHAP authentication) with 12 to 32 characters. You can use alphanumeric characters, spaces, and the following symbols in a secret: . - + @ _ = : / [ ] , ~
• Re-enter Secret: Specify the secret again, for confirmation.
6. Click OK to close the Specify Authentication Information dialog box.

The specified user information of the port appears in blue in the Authentication Information (Target) list of the Authentication window.

7. Click Apply in the Authentication window. A message appears asking whether to apply the settings to the storage system.
8. Click OK to close the message. The settings are applied to the storage system.

Clearing user information from a host group

You can clear user information from a host group.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Authentication secondary window to clear user information from a host group, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.

Procedure

1. Open the Authentication window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree, right-click the target storage system, and then select Other Functions.
b. On the menu bar, select Actions, Port/Host Group, and then Authentication.
In Device Manager - Storage Navigator:
On the menu bar, select Actions, Port/Host Group, and then Authentication.

2. In the Authentication window, click the icon to change to Modify mode.
3. In the Port tree, expand the Fibre folder and select the port or host group whose user information you want to clear.
The currently registered user information of the port or host group appears in the Authentication Information (Target) list.

4. Right-click any point in the Authentication Information (Target) list and select Clear Authentication Information. The Clear Authentication Information dialog box opens, asking whether to clear the user information of the selected host group.

5. Click OK to close the Clear Authentication Information dialog box. The user information of the selected host group disappears from the Authentication Information (Target) list.

6. Click Apply in the Authentication window. A message appears asking whether to apply the setting to the storage system.
7. Click OK to close the message. The setting is applied to the storage system.

Fibre channel port authentication

Setting Fibre Channel port authentication

You can perform user authentication in a Fibre Channel environment by specifying authentication information on the Fibre Channel ports of the storage system.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Authentication secondary window to set port information, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.


Procedure

1. Open the Authentication window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree, right-click the target storage system, and then select Other Functions.
b. On the menu bar, select Actions, Port/Host Group, and then Authentication.
In Device Manager - Storage Navigator:
On the menu bar, select Actions, Port/Host Group, and then Authentication.

2. In the Authentication window, click the icon to change to Modify mode.
3. In the Port tree, double-click the Storage System folder.
If the storage system contains any Fibre Channel adapters, the Fibre folder appears below the Storage System folder. Information about the port appears in the Port Information list of the Authentication window.

4. Right-click any point in the Port Information list and select Set Port Information.

5. In the Set Port Information dialog box, specify the port information.
• Time out: Specify the period of time from when authentication fails until the next authentication session is ended. This period of time is from 15 to 60 seconds. The initial value of Time out is 45 seconds.
• Refusal Interval: Specify the interval from when connection to a port fails until the next authentication session starts, up to 60 minutes. The initial value of Refusal Interval is 3 minutes.
• Refusal Frequency: Specify the number of authentication attempts allowed for connection to a port, up to 10 times. The initial value of Refusal Frequency is 3 times.

6. Click OK to close the Set Port Information dialog box.
7. Click Apply in the Authentication window. A message appears asking whether to apply the settings to the storage system.
8. Click OK to close the message. The settings are applied to the storage system.
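The three values in step 5 have fixed ranges and defaults: Time out 15 to 60 seconds (default 45), Refusal Interval up to 60 minutes (default 3), and Refusal Frequency up to 10 times (default 3). The following sketch validates candidate values against those ranges before you enter them; the class is illustrative only, and the lower bounds of 1 are assumed where the text states only an upper limit.

    from dataclasses import dataclass

    @dataclass
    class PortAuthSettings:
        # Ranges and defaults as stated in the Set Port Information step.
        timeout_seconds: int = 45       # 15 to 60 seconds
        refusal_interval_min: int = 3   # up to 60 minutes (lower bound assumed)
        refusal_frequency: int = 3      # up to 10 attempts (lower bound assumed)

        def validate(self) -> None:
            if not 15 <= self.timeout_seconds <= 60:
                raise ValueError("Time out must be 15 to 60 seconds")
            if not 1 <= self.refusal_interval_min <= 60:
                raise ValueError("Refusal Interval must be at most 60 minutes")
            if not 1 <= self.refusal_frequency <= 10:
                raise ValueError("Refusal Frequency must be at most 10 times")

    PortAuthSettings().validate()   # the defaults are valid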

Registering user information on a Fibre Channel port

You can perform user authentication in a Fibre Channel environment by registering user information on the Fibre Channel ports of the storage system.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Authentication secondary window to register user information, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.

Procedure

1. Open the Authentication window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree, right-click the target storage system, and then select Other Functions.
b. On the menu bar, select Actions, Port/Host Group, and then Authentication.
In Device Manager - Storage Navigator:
On the menu bar, select Actions, Port/Host Group, and then Authentication.

2. In the Authentication window, click the icon to change to Modify mode.
3. In the Port tree, double-click the Storage System folder.
If the storage system contains any Fibre Channel adapters, the Fibre folder appears below the Storage System folder.

4. In the Port tree, double-click the Fibre folder. Information about the port appears in the tree of the Authentication window.

5. Right-click any port icon in the Port tree and select Default Setting (User Name/Secret).

6. In the Default Setting (User Name/Secret) dialog box, specify the user information.
• User Name: Specify the user name of the Fibre Channel port with up to 16 characters. You can use hexadecimal characters in a user name. User names are not case-sensitive.
• Secret: Specify the secret (that is, a password used in CHAP authentication) with 12 to 32 characters. You can use alphanumeric characters, spaces, and the following symbols in a secret: . - + @ _ = : / [ ] , ~
• Re-enter Secret: Specify the secret again, for confirmation.
7. Click OK to close the Default Setting (User Name/Secret) dialog box.
8. Click Apply in the Authentication window. A message appears asking whether to apply the setting to the storage system.
9. Click OK to close the message. The setting is applied to the storage system.

Registering user information on a fabric switch

You can perform user authentication in a Fibre Channel environment by registering user information on the fabric switch of the storage system.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Authentication secondary window to specify authentication information, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.

Procedure

1. Open the Authentication window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree, right-click the target storage system, and then select Other Functions.
b. On the menu bar, select Actions, Port/Host Group, and then Authentication.
In Device Manager - Storage Navigator:
On the menu bar, select Actions, Port/Host Group, and then Authentication.

2. In the Authentication window, click the icon to change to Modify mode.
3. In the Port tree, double-click the Storage System folder.
If the storage system contains any Fibre Channel adapters, the Fibre folder appears below the Storage System folder.

4. In the Port tree, double-click the Fibre folder. Information about the fabric switch appears in the Fabric Switch Information list below the Port Information list.

5. Right-click any point in the Fabric Switch Information list and select Specify User Information.

6. In the Specify Authentication Information dialog box, specify the user information of the fabric switch you want to allow to connect.
• User Name: Specify the user name of the fabric switch with up to 16 characters. You can use hexadecimal characters in a user name. User names are not case-sensitive.
• Secret: Specify the secret (that is, a password used in CHAP authentication) with 12 to 32 characters. You can use alphanumeric characters, spaces, and the following symbols in a secret: . - + @ _ = : / [ ] , ~
• Re-enter Secret: Specify the secret again, for confirmation.
7. Click OK to close the Specify Authentication Information dialog box.
8. Click Apply in the Authentication window. A message appears asking whether to apply the settings to the storage system.
9. Click OK to close the message. The settings are applied to the storage system.

Clearing fabric switch user information

You can clear the specified user information of a fabric switch from the storage system.


Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Authentication secondary window to clear authentication information, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.

Procedure

1. Open the Authentication window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree, right-click the target storage system, and then select Other Functions.
b. On the menu bar, select Actions, Port/Host Group, and then Authentication.
In Device Manager - Storage Navigator:
On the menu bar, select Actions, Port/Host Group, and then Authentication.

2. In the Authentication window, click the icon to change to Modify mode.
3. In the Port tree, double-click the Storage System folder.
If the storage system contains any Fibre Channel adapters, the Fibre folder appears below the Storage System folder.

4. In the Port tree, double-click the Fibre folder. Information about the fabric switch appears in the Fabric Switch Information list below the Port Information list.

5. Right-click any point in the Fabric Switch Information list and select Clear Authentication Information. The Clear Authentication Information dialog box opens, asking whether to clear the user information of the selected fabric switch.

6. Click OK to close the Clear Authentication Information dialog box.
7. Click Apply in the Authentication window. A message appears asking whether to apply the settings to the storage system.
8. Click OK to close the message. The settings are applied to the storage system.

Setting the fabric switch authentication mode

You can specify the authentication mode of a fabric switch.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Authentication secondary window to set the fabric switch authentication mode, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.


Procedure

1. Open the Authentication window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree, right-click the target storage system, and then select Other Functions.
b. On the menu bar, select Actions, Port/Host Group, and then Authentication.
In Device Manager - Storage Navigator:
On the menu bar, select Actions, Port/Host Group, and then Authentication.

2. In the Authentication window, click the icon to change to Modify mode.
3. In the Port tree, double-click the Storage System folder.
If the storage system contains any Fibre Channel adapters, the Fibre folder appears below the Storage System folder.

4. In the Port tree, double-click the Fibre folder. Information about the fabric switch appears in the Fabric Switch Information list below the Port Information list.

5. Right-click any point in the Fabric Switch Information list and select Authentication Mode: unidirectional > bi-directional.

6. Click Apply in the Authentication window. A message appears asking whether to apply the settings to the storage system.
7. Click OK to close the message. The settings are applied to the storage system.
8. To return the authentication mode setting, perform the same operation, except that you must select the Authentication Mode: bi-directional > unidirectional menu in step 5.

Enabling or disabling fabric switch authentication

By default, the fabric switch authentication is disabled. To enable fabric switches to authenticate hosts, enable the user authentication settings of fabric switches.

Hitachi Device Manager - Storage Navigator secondary windows must be defined for use in advance. When you select Modify from the Authentication secondary window to enable or disable fabric switch authentication, other users or programs are prevented from changing storage system settings. When you close the secondary window, Modify mode is released. For more information on Hitachi Device Manager - Storage Navigator secondary windows and Modify mode, see the System Administrator Guide.

Procedure

1. Open the Authentication window.
In Hitachi Command Suite:
a. On the Resources tab, expand the Storage Systems tree, right-click the target storage system, and then select Other Functions.
b. On the menu bar, select Actions, Port/Host Group, and then Authentication.
In Device Manager - Storage Navigator:
On the menu bar, select Actions, Port/Host Group, and then Authentication.

2. In the Authentication window, click the icon to change to Modify mode.
3. In the Port tree, double-click the Storage System folder.
If the storage system contains any Fibre Channel adapters, the Fibre folder appears below the Storage System folder.

4. In the Port tree, double-click the Fibre folder. Information about the fabric switch appears in the Fabric Switch Information list below the Port Information list.

5. Right-click any point in the Fabric Switch Information list and select Authentication:Disable > Enable.

6. Click Apply in the Authentication window. A message appears asking whether to apply the settings to the storage system.
7. Click OK to close the message. The settings are applied to the storage system.
To return the fabric switch setting so that the switch cannot authenticate hosts, perform the same operation, except select the Authentication:Enable > Disable menu in step 5.


8 Configuring VASA integrated storage systems

Storage systems can be integrated with an ESXi host or VMware® vCenter Server by Hitachi Storage Provider for VMware vCenter, which is called the VASA provider. Snapshot and replication functions are used in the storage systems that are configured with the procedures in this section.

Do not operate LDEVs with an SLU or ALU attribute from Device Manager - Storage Navigator or Command Control Interface. If you must operate LDEVs with the SLU or ALU attribute, contact customer support.

If, in accordance with instructions from customer support, you must perform a configuration change operation for LDEVs with the SLU or ALU attribute from Storage Navigator or CCI, shut down the associated virtual machines in advance. When a virtual machine is shut down, the LDEVs with the SLU or ALU attribute related to that virtual machine are unbound automatically.

If the virtual machines cannot be shut down, use Storage Navigator to unbind the LDEVs with the SLU attribute from the LDEVs with the ALU attribute related to the virtual machines, and then perform the configuration change operation. For the unbinding procedure, see Unbinding LDEVs of SLUs attribution on page 403. If you complete the configuration change operation while the virtual machine is running and without unbinding the LDEVs with the SLU attribute, contact the storage administrator.

For information on setting up and operating VMware virtualization servers, see the Hitachi Command Suite Administrator Guide.

For information on installing, deploying, and configuring Hitachi Storage Provider for VMware vCenter using the vSphere APIs for Storage Awareness (VASA), see the Hitachi Storage Provider for VMware vCenter Deployment Guide.

□ Creating LDEVs of ALU attribution

□ Viewing LDEVs of ALUs or SLU attribution


□ Unbinding LDEVs of SLUs attribution


Creating LDEVs of ALU attribution

Use this procedure to create LDEVs of ALU attribution.

Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Create LDEVs window.
In Hitachi Command Suite:
a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
b. Right-click Parity Groups, select System GUI, and then click Create LDEVs.
In Device Manager - Storage Navigator:
a. Click Actions > Logical Device > Create LDEVs.

2. In the Create LDEVs window, from the Provisioning Type list, select ALU.

3. In Number of LDEVs, type the number of LDEVs to be created.
4. In LDEV Name, specify a name for this LDEV.
a. In Prefix, type the characters that will become the fixed characters for the beginning of the LDEV name. The characters are case-sensitive.
b. In Initial Number, type the initial number that will follow the prefix name.

5. Click Options to show more options.
6. In Initial LDEV ID, make sure that an LDEV ID is set. To confirm the used and unavailable numbers, click View LDEV IDs to open the View LDEV IDs window.
a. In Initial LDEV ID in the Create LDEVs window, click View LDEV IDs.
In the View LDEV IDs window, the matrix vertical scale represents the second-to-last digit of the LDEV number, and the horizontal scale represents the last digit of the LDEV number. The LDEV IDs table shows the available, used, and disabled LDEV IDs.
In the table, used LDEV numbers appear in blue, unavailable numbers appear in gray, and unused numbers appear in white. LDEV numbers that are unavailable may be already in use, or already assigned to another emulation group (a group of 32 LDEV numbers).
b. Click Close.
7. In SSID, type four digits, in hexadecimal format (0004 to FEFF), for the SSID.
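The View LDEV IDs matrix in step 6 indexes LDEV numbers by their last two hexadecimal digits: the vertical scale is the second-to-last digit and the horizontal scale is the last digit. The following small helper computes a matrix cell from an LDEV ID; it is illustrative only, and the LDKC:CU:DEV notation (for example, 00:00:1A) is an assumption for the sketch.

    def matrix_cell(ldev_id: str) -> tuple:
        # Row is the second-to-last hex digit of the LDEV number,
        # column is the last hex digit (per the View LDEV IDs window).
        dev = ldev_id.split(":")[-1]    # final component, e.g. "1A"
        return int(dev[-2], 16), int(dev[-1], 16)

    assert matrix_cell("00:00:1A") == (1, 10)   # row 1, column A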


8. To confirm the created SSID, click View SSIDs to open the View SSIDs window.
   a. In the Create LDEVs window, in Initial SSID, click View SSIDs. In the View SSIDs window, the SSIDs table shows the used SSIDs.
   b. Click Close.
      The Create LDEVs window appears.
9. In the CLPR list, select the CLPR ID.

10. From the MP Blade list, select an MP blade to be used by the LDEVs.
    • To assign a specific MP blade, select the ID of the MP blade.
    • If any MP blade can be assigned, click Auto.

11. Click Add.

The created LDEVs are added to the Selected LDEVs table.

    The Provisioning Type and Number of LDEVs must be set. If these required items are not registered, you cannot click Add.

12. If necessary, change the following LDEV settings:
    • Click Edit SSIDs to open the SSIDs window. If the new LDEV is to be created in the CU, change the SSID to be allocated to the LDEV.
    • Click Change LDEV Settings to open the Change LDEV Settings window.

13. If necessary, delete an LDEV from the Selected LDEVs table.
    Select the LDEV to delete, and then click Remove.
14. Click Finish.

The Confirm window opens.

    To continue the operation for setting the LU path and defining a logical unit, click Next. For details about how to set the LU path, see the Hitachi Command Suite User Guide.

15. In the Task Name text box, type a unique name for the task or accept the default.

    You can enter up to 32 ASCII characters and symbols, with the exception of: \ / : , ; * ? " < > |. The value "date-window name" is entered by default.

16. Click Apply.

    If the Go to tasks window for status check box is selected, the Tasks window appears.

Viewing LDEVs of the ALU or SLU attribute

Use this procedure to view the ALUs or SLUs of the storage system. The procedure can also be performed on an ESXi host or vCenter Server.


Before you begin

The Storage Administrator (Provisioning) role is required to perform this task.

Procedure

1. Open the Logical Devices window.

   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click Volumes, and then select System GUI.

   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Select Logical Devices.

2. In the LDEVs pane, click More Actions > View ALUs/SLUs.

Unbinding LDEVs of the SLU attribute

Use this procedure to unbind SLUs from ALUs. The procedure can also be performed on an ESXi host or vCenter Server.

Procedure

1. Open the Logical Devices window.

   In Hitachi Command Suite:
   a. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
   b. Right-click Volumes, and then select System GUI.

   In Device Manager - Storage Navigator:
   a. Click Storage Systems, and then expand the Storage Systems tree.
   b. Select Logical Devices.

2. In the LDEVs pane, select LDEV IDs of the ALU provisioning type.
3. Click More Actions > Unbind SLUs.

   If the Go to tasks window for status check box is selected, the Tasks window appears.


9 Troubleshooting

Troubleshooting for provisioning operations involves identifying the cause of the error and resolving the problem. If you are unable to solve a problem, please contact customer support.

If a failure occurs and a message appears, see Hitachi Device Manager - Storage Navigator Messages for further instructions.

For problems and solutions related to using Hitachi Command Suite, see the Hitachi Command Suite User Guide.

□ Troubleshooting Virtual LUN

□ Troubleshooting Dynamic Provisioning

□ Troubleshooting Data Retention Utility

□ Troubleshooting provisioning while using Command Control Interface

□ Calling customer support


Troubleshooting Virtual LUN

If a failure occurs during Virtual LUN operations, see Hitachi Device Manager - Storage Navigator Messages.

For problems and solutions regarding Hitachi Device Manager - Storage Navigator, see the Hitachi Command Suite User Guide.

Troubleshooting Dynamic Provisioning

The following table provides troubleshooting information for Dynamic Provisioning.

If you are unable to solve a problem, or if you encounter a problem not listed, please contact customer support.

When an error occurs during operations, the error code and error message are displayed in the error message dialog box. For details about error messages, see Hitachi Device Manager - Storage Navigator Messages.

Problem: Cannot create a DP-VOL.
Causes:
• Usage of the pool has reached 100%.
• Something in the storage system is blocked.
• The available capacity of the DP-VOL is restricted due to the Subscription Limit value set for the pool.
Solutions:
• Add pool-VOLs to the pool.
• Reclaim zero pages to release pages in which zero data is stored.
• Adjust the Subscription Limit value for the pool.
• Ask customer support to solve the problem.

Problem: Cannot add a pool-VOL.
Causes:
• 1,024 pool-VOLs are already defined in the pool.
• The pool-VOL does not meet the requirements for a pool-VOL.
• Something in the storage system is blocked.
Solution:
• Change the setting of the LDEV to satisfy the pool-VOL requirements. See Pool-VOL requirements on page 132.

Problem: A pool-VOL is blocked. SIM code 627xxx is reported.
Causes:
• A failure occurred in more data drives than the parity group redundancy can tolerate. The redundancy of the parity group depends on the number of blocked PDEVs (data drives). For example:
  ○ When the parity group configuration is 3D+1P and failures occur in two or more drives, the failures are considered to have occurred in data drives beyond the parity group redundancy.
  ○ When the parity group configuration is 6D+2P and failures occur in three or more drives, the failures are considered to have occurred in data drives beyond the parity group redundancy.
Solutions:
• Ask customer support to solve the problem.

Problem: A pool is blocked.
Solutions:
• Ask customer support to solve the problem.

Problem: A pool cannot be restored.
Causes:
• Processing takes time because something in the storage system is blocked.
• Usage of the pool has reached 100%.
Solutions:
• Wait for a while, refresh the display, and then check the pool status.
• Add pool-VOLs to the pool to increase the capacity of the pool.
• Reclaim zero pages to release pages in which zero data is stored.
• Ask customer support to solve the problem.

Problem: A pool cannot be deleted.
Causes:
• The pool usage is not 0.
• External volumes were removed from the pool before you deleted the pool.
• DP-VOLs have not been deleted.
Solutions:
• Confirm that the pool usage is 0 after the DP-VOLs are deleted, and then delete the pool.
• Ask customer support to solve the problem.

Problem: A failure occurs in the application for monitoring the volumes installed on a host.
Causes:
• Free space in the pool is insufficient.
• Some areas in the storage system are blocked.
Solutions:
• Check the free space of the pool and increase the capacity of the pool.
• Reclaim zero pages to release pages in which zero data is stored.
• Ask customer support to solve the problem.

Problem: When the host computer tries to access the port, an error occurs and the host cannot access the port.
Causes:
• Free space in the pool is insufficient.
• Some areas in the storage system are blocked.
Solutions:
• Check the free space of the pool and increase the capacity of the pool.
• Reclaim zero pages to release pages in which zero data is stored.
• Ask customer support to solve the problem.

Problem: When you are operating Hitachi Device Manager - Storage Navigator, a timeout occurs frequently.
Causes:
• The load on the Hitachi Device Manager - Storage Navigator computer is too heavy, so it cannot respond to the SVP.
• The timeout period is set too short.
Solutions:
• Wait for a while, and then try the operation again.
• Verify the setting of the Hitachi Device Manager - Storage Navigator RMI time-out period environment parameter. For information about how to set the RMI time-out period, see the System Administrator Guide.

Problem: DP-VOL capacity cannot be increased.
See Troubleshooting provisioning while using Command Control Interface on page 411 and identify the cause.
Solutions:
• After refreshing the display, confirm whether the processing for increasing DP-VOL capacity meets the conditions described in Requirements for increasing DP-VOL capacity on page 135.
• Retry the operation after 10 minutes or so.
• Ask customer support to solve the problem.

Problem: Cannot reclaim zero pages in a DP-VOL.
Causes:
• Zero pages in the DP-VOL cannot be reclaimed from Device Manager - Storage Navigator because the DP-VOL does not meet the conditions for releasing pages in a DP-VOL.
Solutions:
• Make sure that the DP-VOL meets the conditions for releasing pages.

Problem: Pages of the DP-VOL cannot be released when the process to reclaim zero pages in the DP-VOL is interrupted.
Causes:
• Pages of the DP-VOL are not released because the process of reclaiming zero pages was interrupted.
Solutions:
• Make sure that the DP-VOL meets the conditions for releasing pages.

Problem: Cannot release the Protect attribute of the DP-VOLs.
Causes:
• The pool is full.
• The pool-VOL is blocked.
• The pool-VOL that is an external volume is blocked.
Solutions:
• Add pool-VOLs to the pool to increase the free space in the pool.
• Reclaim zero pages to release pages in which zero data is stored.
• Contact customer support to restore the pool-VOL.
• If the blocked pool-VOL is an external volume, verify the status of the path blockade and the external storage system.
• After performing the above solutions, release the Protect attribute (Data Retention Utility) of the DP-VOL. For information about Data Retention Utility, see the Provisioning Guide for Open Systems.

Problem: SIM code 624000 was issued.
Causes:
• A configuration of pools and DP-VOLs whose size exceeds the supported capacity was created.
Solutions:
• Remove pools that are not used.
• Remove DP-VOLs that are not used.
• Remove Thin Image pairs that are not used.
• Shrink pool capacities.

Problem: Formatted pool capacity displayed in the View Pool Management Status window does not increase.
Causes:
• Another pool is being formatted.
• The pool usage level has reached the threshold.
• The pool is blocked.
• I/O loads on the storage system are high.
• The cache memory is blocked.
• Pool-VOLs are blocked.
• Pool-VOLs that are external volumes are blocked.
Solutions:
• Wait for a while, and then check the display again.
• Add pool-VOLs to the pool to increase the free space in the pool.
• Reclaim zero pages to release pages in which zero data is stored.
• Check the display again after decreasing the I/O loads on the storage system.
• Contact customer support to restore the cache memory.
• Contact customer support to restore the pool-VOL.
• If the blocked pool-VOL is an external volume, confirm the following:
  ○ Path blockage
  ○ Status of the storage system

Problem: The Assign Deduplication System Data Volume option is set to Yes when creating a pool, but the deduplication system data volume is not created.
Cause:
• After the pool is created, errors occur when the deduplication system data volume is created, and the processing aborts.
Solution:
• Resolve the causes of the errors, and then assign a deduplication system data volume to the pool by using the Edit Pools window.

Problem: A Deduplication-Available pool cannot be deleted, and after the failed pool deletion, the Deduplication setting is changed to Not Available.
Cause:
• After the deduplication system data volume is deleted, errors occur when the pool is deleted, and the processing aborts.
Solution:
• Resolve the causes of the errors, and then delete the pool by using the Delete Pools window.

Problem: DP-VOLs whose capacity saving setting is Compression or Deduplication and Compression are created, but the capacity saving setting of the DP-VOLs is set to Disabled.
Cause:
• After the DP-VOLs are created, errors occur when the capacity saving setting changes to Compression or Deduplication and Compression, and the processing aborts.
Solution:
• Resolve the causes of the errors, and then change the capacity saving setting to Compression or Deduplication and Compression by using the Edit LDEVs window.

Problem: The processing stops when Capacity Saving Status is Enabling, Rehydrating, or Deleting Volume.
Cause:
• Errors occur while the capacity saving status is changing, and the processing aborts. After recovery from the errors, the resumed status change process fails.
Solution:
• For the DP-VOLs, change Capacity Saving to Disabled, and then retry the operation by using the Edit LDEVs window.

Problem: The processing when the capacity saving setting is changed to Disabled fails in the Tasks window. Message ID: 00002 065740 (W)
Causes:
• The Disable retry of data updating setting is Disabled.
Solutions:
• For the target LDEVs, verify that Capacity Saving Status has changed from Rehydrating to Disabled. If Capacity Saving of the LDEVs displays Disabled, the operation has succeeded.
• For the target LDEVs, if Capacity Saving Status does not change from Rehydrating to Disabled, verify the processing progress of Capacity Saving Status in the LDEV Properties window. If the processing progress does not increase, perform the troubleshooting for: The processing stops when Capacity Saving Status is Enabling, Rehydrating, or Deleting Volume.

Problem: The capacity saving status of DP-VOLs changes to the Failed status.
Causes:
• The shared memory is volatilized, and then the storage system is started again.
• The pool is initialized.
• The pool volumes are formatted.
Solution:
1. Back up all of the capacity-saving-enabled DP-VOLs assigned to the pool.
2. Block all of the capacity-saving-enabled DP-VOLs.
3. If you are using Device Manager - Storage Navigator, perform the Format LDEVs operation on the deduplication system data volume. If you are using CCI, specify the deduplication system data volume and execute the raidcom initialize pool command.
4. Perform the Format LDEVs operation for all blocked LDEVs with capacity saving enabled.
5. Restore the backed-up data.


Troubleshooting Data Retention Utility

If an error occurs with Data Retention Utility, the Error Detail dialog box appears. The Error Detail dialog box displays error locations and error messages.

The Error Detail dialog box does not display Hitachi Device Manager - Storage Navigator error messages. To find information about Hitachi Device Manager - Storage Navigator errors and solutions, see Hitachi Device Manager - Storage Navigator Messages.

Data Retention Utility troubleshooting instructions

The following table provides troubleshooting instructions for Data Retention Utility.

Problem: The Disable/Enable or Enable/Disable button on the Data Retention window is unavailable, and nothing happens when you click the button.
Probable cause and solution: You have been making changes in the Data Retention window, but the changes have not been applied to the storage system. Apply the changes first, and then perform the extension lock operation.
You can find the changes by:
• Scrolling the current list up and down.
• Selecting another CU from the tree and then scrolling the list up and down.

Problem: Open-systems hosts cannot read from or write to a volume.
Probable causes:
• The volume is protected by the read-only attribute. Write failure is reported as an error message.
• The volume is protected by the Protect attribute. Read (or write) failure is reported as an error message.

Problem: Mainframe hosts cannot read from or write to a volume.
Probable causes:
• The volume is protected by the read-only attribute. Write failure is reported as a Write Inhibit condition.
• The volume is protected by the Protect attribute. Read (or write) failure is reported as a cc=3 condition.

Problem: The number of days in Retention Term does not decrease.
Probable cause: The number of days in Retention Term is calculated based on the operating time of the storage system. Therefore, the number of days in Retention Term might not decrease.

Troubleshooting provisioning while using Command Control Interface

If an error occurs while operating Data Retention Utility or Dynamic Provisioning using CCI, you might be able to identify the cause of the error by referring to the log displayed in the CCI window or to the CCI operation log file.

The CCI operation log file is stored in the following directory:
/HORCM/log*/curlog/horcmlog_HOST/horcm.log


where:
• * is the instance number.
• HOST is the host name.

Figure: Example of a log entry in the CCI window.

Errors when operating CCI (Dynamic Provisioning, SSB1: 0x2e31/0xb96d/0xb980)

Error code (SSB2): 0x0b27
Error: The command cannot be executed because the virtual LDEV is not defined.
Solution: Define the virtual LDEV, and then execute the command.

Error code (SSB2): 0x2c3a
Error: Because the data direct mapping attribute is enabled for the specified volume, the operation was rejected.
Solution: Specify a volume for which the data direct mapping attribute is disabled.

Error code (SSB2): 0x2c77
Error: Because the specified DP-VOL is a deduplication system data volume, the operation was rejected.
Solution: Specify a DP-VOL that is not a deduplication system data volume.

Error code (SSB2): 0x9100
Error: The command cannot be executed because user authentication is not performed.
Solution: Perform user authentication.

Error code (SSB2): 0xb900/0xb901/0xaf28
Error: An error occurred during the operation to increase DP-VOL capacity.
Solution: Ask customer support to solve the problem.

Error code (SSB2): 0xb902
Error: The operation was rejected because the configuration was being changed by the SVP or Hitachi Device Manager - Storage Navigator, or because the DP-VOL capacity was going to be increased by another instance of CCI.
Solution: Increase the DP-VOL capacity after finishing operations on your storage system, such as the Virtual LUN operation or a maintenance operation. See the Caution in Requirements for increasing DP-VOL capacity on page 135.

Error code (SSB2): 0xb903
Error: The operation cannot be performed because the specified resource is contained in NAS_Platform_System_RSG.
Solution: Move the specified resource to a resource group other than NAS_Platform_System_RSG.

Error code (SSB2): 0xaf22
Error: The operation was rejected because the specified volume is placed online with an OS that does not support EAV (Extended Address Volume).
Solution: Increase the DP-VOL capacity after the specified volume is placed online with an OS that supports EAV.

Error code (SSB2): 0xaf24
Error: The operation was rejected because the total DP-VOL capacity would exceed the pool reservation rate after the capacity was increased.
Solution: Specify a capacity so that the pool reservation rate will not be exceeded.

Error code (SSB2): 0xaf25
Error: The operation to increase capacity cannot be performed on the specified DP-VOL.
Solution: Check the emulation type of the specified DP-VOL.

Error code (SSB2): 0xaf26
Error: The operation was rejected because of a lack of cache management devices due to the increased capacity.
Solution: Specify a capacity so that the maximum number of cache management devices will not be exceeded.

Error code (SSB2): 0xaf29
Error: Because the specified volume is not a DP-VOL, the operation was rejected.
Solution: Make sure that the volume is a DP-VOL.

Error code (SSB2): 0xaf2a
Error: Because the specified capacities are invalid or exceed the value immediately below LDEV Capacity in the Expand Virtual Volumes window, the operation was rejected.
Solution: To increase capacity, specify a correct capacity that does not exceed the value immediately below LDEV Capacity in the Expand Virtual Volumes window. See the conditions for increasing DP-VOL capacity in Requirements for increasing DP-VOL capacity on page 135.

Error code (SSB2): 0xaf2b
Error: Because the specified volume operation was not finished, the operation was rejected.
Solution: Re-execute the operation after a brief interval.

Error code (SSB2): 0xaf2c
Error: Because the shared memory capacity is not enough to increase the specified capacity, the operation was rejected.
Solution: Confirm the value immediately below LDEV Capacity in the Expand Virtual Volumes window.

Error code (SSB2): 0xaf2e
Error: Because the specified DP-VOL is used by other software or is being formatted, the operation was rejected.
Solution: Wait until formatting of the specified volume is finished, or see Using Dynamic Provisioning or Dynamic Tiering or active flash with other software products on page 139 and confirm whether the DP-VOL is used with software for which the DP-VOL capacity cannot be increased.

Error code (SSB2): 0xaf2f
Error: Because the configuration of journal volumes is being changed, the specified DP-VOL capacity cannot be expanded.
Solution: Re-execute the operation after the journal volume configuration is changed.

Error code (SSB2): 0x0b2b
Error: Because the raidcom extend ldev command was executed with the -cylinder option specified for an open-systems DP-VOL, the operation was rejected.
Solution: Re-execute the raidcom extend ldev command without specifying the -cylinder option.

Error code (SSB2): 0xaf60
Error: The operation was rejected because the page capacity would exceed the depletion threshold if the specified page capacity were reserved in the pool.
Solution: Increase the pool capacity, and then perform the operation again.


Errors when operating CCI (Data Retention Utility, SSB1: 2E31/B9BF/B9BD)

Error code (SSB2): 9100
Description: The command cannot be executed because user authentication is not performed.

Error code (SSB2): B9BD
Description: The setting failed because the specified volume does not exist.

Error code (SSB2): B9C2
Description: The specified volume is a command device.

Error code (SSB2): B9C4
Description: The command was rejected due to one of the following reasons:
• The specified volume is a virtual volume.
• The specified volume is a pool volume.
• The specified volume is a secondary volume of Universal Replicator.
• The specified volume is a journal volume.
• The specified volume is a primary volume or secondary volume of ShadowImage.
• The consumed capacity exceeded the licensed capacity.
• The access attribute cannot be changed because the data retention term is set.
• The specified volume is a command device.
• The specified volume is in the PAIR or COPY status.
• The specified volume does not exist.
• The S-VOL Disable attribute is set to the specified volume.
• The reserve function cannot be canceled using CCI.
• The specified volume is a quorum disk for global-active device, so the requested setting of Data Retention Utility cannot be performed.
• The specified volume is in an accelerated compression-enabled parity group.
• The specified volume is a deduplication system data volume.

Error code (SSB2): B9C7
Description: Data Retention Utility is not installed.

Error code (SSB2): B9C9
Description: The consumed capacity exceeded the licensed capacity.

Error code (SSB2): B9CA
Description: The command was rejected due to one of the following reasons:
• Fewer days are set as the data retention term.
• More than 60 years are set as the data retention term.
• An interface other than Java® updated the settings while Data Retention Utility was in the process of changing them. A conflict occurred between Java and the other interface.

Error code (SSB2): B9CB
Description: The retention term cannot be set because the access attribute is read/write.

Calling customer support

If you need to call customer support, make sure you provide as much information about the problem as possible, including the following:
• The circumstances surrounding the error or failure.
• The exact content of any error messages displayed on the host systems.
• The exact content of any error messages displayed by Hitachi Command Suite or Hitachi Device Manager - Storage Navigator.

414 TroubleshootingHitachi Virtual Storage Platform G1000, G1500, and F1500 Provisioning Guide for Open Systems

Page 415: Provisioning Guide for Open Systems · Provisioning Guide for Open Systems Hitachi Virtual Storage Platform G1000 and G1500 Hitachi Virtual Storage Platform F1500 Hitachi Data Retention

• The Hitachi Device Manager - Storage Navigator configuration information (use the Dump Tool).
• The service information messages (SIMs), including reference codes and severity levels, displayed by Hitachi Command Suite or Hitachi Device Manager - Storage Navigator.

The customer support staff is available 24 hours a day, seven days a week. If you need technical support, log on to the Hitachi Data Systems Portal for contact information: https://support.hds.com/en_us/contact-us.html


A CCI command reference

This appendix provides information on Hitachi Device Manager - Storage Navigator tasks and the corresponding Command Control Interface commands used in provisioning.

□ Hitachi Device Manager - Storage Navigator tasks and CCI command list


Hitachi Device Manager - Storage Navigator tasks and CCI command list

The following lists the actions (tasks) that can be performed in the Hitachi Device Manager - Storage Navigator GUI and the corresponding commands that can be issued in CCI.

Logical Device
• Create LDEVs: raidcom add ldev
• Delete LDEVs: raidcom delete ldev
• Edit LDEVs: raidcom modify ldev
• Format LDEVs: raidcom initialize ldev
• Block LDEVs: raidcom modify ldev
• Restore LDEVs: raidcom modify ldev
• Assign MP Blade: raidcom modify ldev
• Add LUN Paths: raidcom add lun
• Delete LUN Paths: raidcom delete lun
• Expand V-VOLs: raidcom extend ldev
• Reclaim Zero Pages: raidcom modify ldev
• Shredding: raidcom initialize ldev

Port/Host Group/iSCSI Target (Fibre Channel)
• Create Host Groups: raidcom add host_grp
• Delete Host Groups: raidcom delete host_grp
• Edit Host Groups: raidcom modify host_grp
• Add Hosts: raidcom add hba_wwn
• Add to Host Groups: raidcom add hba_wwn
• Remove Hosts: raidcom delete hba_wwn
• Edit Host: raidcom add hba_wwn
• Create Alternate LUN Paths: raidcom add lun
• Edit Ports: raidcom modify port

Pool
• Create Pools: raidcom add dp_pool
• Expand Pool: raidcom add dp_pool
• Shrink Pools: raidcom delete pool
• Delete Pools: raidcom delete pool
• Edit Pools: raidcom modify pool
• Monitor Pools: raidcom monitor pool
• Stop Monitoring Pools: raidcom monitor pool
• Start Tier Relocation: raidcom reallocate pool
• Stop Tier Relocation: raidcom reallocate pool
• Restore Pools: raidcom modify pool
• View Tier Properties: raidcom get dp_pool


External Storage
• Disconnect External Volumes: raidcom disconnect external_grp
• Reconnect External Volumes: raidcom check_ext_storage

Port/Host Group/iSCSI (iSCSI)
• Create iSCSI Targets: raidcom add host_grp
• Delete iSCSI Targets: raidcom delete host_grp
• Edit iSCSI Targets: raidcom modify host_grp
• Add Hosts: raidcom add hba_iscsi
• Remove Hosts: raidcom delete hba_iscsi
• Edit Host: raidcom set hba_iscsi
• Add CHAP Users: raidcom add chap_user
• Remove CHAP Users: raidcom delete chap_user
• Edit CHAP User: raidcom set chap_user
• Create Alternate LUN Paths: raidcom add lun
• Edit Ports: raidcom modify port


B Guidelines for pools when accelerated compression is enabled

You must follow specific guidelines for sizing, creating, and maintaining a pool that uses LDEVs carved from parity groups with accelerated compression enabled.

□ Checking whether accelerated compression can be enabled

□ Estimating required FMC capacity

□ Workflow for creating parity groups, LDEVs, and pools with accelerated compression

□ Monitoring the pool capacity

□ Estimating FMC capacity when pool capacity is insufficient

□ Disabling accelerated compression on a parity group


Checking whether accelerated compression can be enabled

Before enabling accelerated compression on a parity group, check whether it can be used with the parity group. Accelerated compression cannot be used on an FMC parity group if the parity group meets any of the following conditions:
• Encryption is enabled on the parity group.
• The LDEV carved from the parity group is not used as a pool volume.
• The LDEVs carved from the parity group are used as pool volumes in multiple pools.
• The full allocation function is enabled for all or any single DP-VOL associated with the pool containing the LDEV that is carved from the parity group.

If the DP-VOL that is associated with the pool containing the LDEV created from the FMC parity group is used as a journal volume of a Universal Replicator pair, we do not recommend using accelerated compression on this parity group.

Estimating required FMC capacity

To create or expand a pool that uses LDEVs carved from accelerated compression-enabled parity groups, you must first estimate the required FMC capacity. The following sections describe how to estimate the amount of FMC capacity to install for a new pool or when expanding an existing pool:
• Hitachi Accelerated Flash Compression Estimator Tool on page 422
• Estimating FMC capacity for a new pool on page 423
• Estimating FMC capacity to expand an existing pool on page 426

Hitachi Accelerated Flash Compression Estimator Tool

The Hitachi Accelerated Flash Compression Estimator Tool (hafdc2_estimator.exe) samples existing data and estimates a compression ratio.

By using the Compression Estimator Tool before storing data to an FMC, you can estimate the saving percent and the compressibility of your data. The tool samples data that is in a file or volume that you specify and calculates a compression ratio using the same compression algorithm as the storage system and FMC. By calculating a saving percent for the actual data, you can confirm the effect of accelerated compression with high precision. The Compression Estimator Tool must be installed on the server host that has access to the data you want to sample. For details about how to get or use the Compression Estimator Tool, contact customer support.


The following is an example of executing the Compression Estimator Tool.

Best practice is to use the value of Saving % (Except Zero data) in the formula for estimating FMC capacity. The following explains the items in the example:
• SAMPLE: The number of data samples. In this example, the value is 3,000. To shorten the time required for the estimate, the Compression Estimator Tool extracts random samples from the target data. Therefore, the result output by the tool is an estimated value. The actual savings might vary depending on the situation.
• Uncompressed Bytes: The data size before compression. The value is approximately 24 MB in the example.
• Compressed Bytes: The data size after compression. The value is approximately 16 MB in the example.
• Saving %: The data-size saving rate after compression. The value is 34.32% in the example.
• Saving % (Except Zero data): The data-size saving rate with all zero pages excluded from the data before compression. The value is 32.57% in the example.
• Compression Ratio: The ratio of data-size compression. In the example, the data size after compression is assumed to be 1, and the ratio of data-size compression is 1.5.
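These quantities follow from simple arithmetic on the reported byte counts. The following Python sketch (a minimal illustration; the function names and the rounded 24 MB / 16 MB figures are assumptions, not part of the tool) reproduces the Saving % and Compression Ratio calculations:

# Minimal sketch of the arithmetic behind the estimator output.
# Function names and the sample byte counts are illustrative only.
def saving_percent(uncompressed_bytes: float, compressed_bytes: float) -> float:
    # Data-size saving rate after compression, as a percentage.
    return (1.0 - compressed_bytes / uncompressed_bytes) * 100.0

def compression_ratio(uncompressed_bytes: float, compressed_bytes: float) -> float:
    # Ratio of the data size before compression to the size after compression.
    return uncompressed_bytes / compressed_bytes

MB = 1024 ** 2
uncompressed = 24 * MB  # approximately 24 MB, as in the example
compressed = 16 * MB    # approximately 16 MB, as in the example
print(f"Saving %: {saving_percent(uncompressed, compressed):.2f}")              # ~33.33
print(f"Compression Ratio: {compression_ratio(uncompressed, compressed):.1f}")  # 1.5

Because the example's byte counts are rounded, the computed Saving % differs slightly from the 34.32% that the tool reports from the exact counts.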

Estimating FMC capacity for a new pool

When you need to install FMC drives to create a new pool, use the following workflow to estimate the required capacity.


Note: To estimate the FMC capacity to be used for an accelerated compression-enabled parity group, estimate a buffer capacity in addition to the main capacity for storing data. Add approximately 20% of the required FMC capacity as buffer capacity. Buffer capacity refers to the total expected increase in FMC capacity, which includes the following:
• Expected increase in capacity used to store management information of the storage system
• Expected increase in capacity to offset degradation of the Saving % compared with estimated values

1. Estimate the required pool capacity.
   Estimate the pool capacity required for user data in the same way you estimate capacity when creating a pool.


2. Estimate the Saving % using one of the following methods:
   • If data will be migrated to the FMC, use the Compression Estimator Tool to estimate the Saving %. The tool reports a Saving % that can be used to determine the capacity needed. If the estimate is less than 20%, best practice is to use a parity group with accelerated compression disabled, and to estimate the pool capacity by the conventional method.
   • If new data will be stored on the FMC, or if the Compression Estimator Tool cannot be executed in the environment, consider setting the Saving % to 0% and using the conventional method to size capacity. With accelerated compression disabled for the parity group, you can monitor the Saving % through the management software and then decide to enable accelerated compression at a later time.

   Note: If the 14-TB FMC is used and the Saving % that is estimated in this step exceeds 75%, apply 75% for "Saving %" in the formula of the next step.

3. Estimate the required FMC capacity to be purchased.
   If there is data to migrate to the FMC, use the following formula to calculate the required FMC capacity:
   Required FMC capacity to be purchased = Required pool capacity × (100% - (Saving % - 10%)) × 110%
   The buffers in the above formula are as follows:
   • - 10%: Buffer representing the expected increase in capacity because of degradation in the Saving %
   • × 110%: Buffer representing the expected increase in capacity because of additional space required to store management information of the storage system

   Then, enable accelerated compression and create parity groups, LDEVs, and pools.

   Note: When using Dynamic Tiering or active flash, if Tier 1 is configured of FMC drives, use 1.2 times the calculated required pool capacity. Use the following formula to calculate the value:
   Required pool capacity = Required pool capacity estimated in step 1 × 120%
   To prevent the depletion of capacity assured for writing due to tier relocation, Dynamic Tiering or active flash uses a 20% buffer when calculating the number of pages that can fit in the FMC tier. If accelerated compression is disabled, Dynamic Tiering or active flash does not use the 20% buffer.


Therefore, if the same FMC (Tier 1) capacity is applied, the amount of data that can be stored with accelerated compression enabled is smaller than the amount of data that can be stored with accelerated compression disabled.
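As a sanity check on steps 1 through 3, the following Python sketch (all names and defaults are illustrative assumptions, not part of any Hitachi tool) applies the purchase formula, the 75% cap on Saving % for the 14-TB FMC, and the optional 120% Dynamic Tiering buffer described in the notes above:

# Minimal sketch of the new-pool sizing formula:
#   Required FMC capacity = required pool capacity
#                           x (100% - (Saving % - 10%)) x 110%
def required_fmc_capacity_tb(pool_capacity_tb: float,
                             saving_percent: float,
                             uses_14tb_fmc: bool = False,
                             fmc_is_hdt_tier1: bool = False) -> float:
    if uses_14tb_fmc:
        saving_percent = min(saving_percent, 75.0)  # cap per the 14-TB FMC note
    if fmc_is_hdt_tier1:
        pool_capacity_tb *= 1.2  # 20% Dynamic Tiering / active flash buffer
    saving = (saving_percent - 10.0) / 100.0        # -10%: degradation buffer
    return pool_capacity_tb * (1.0 - saving) * 1.1  # x110%: management-info buffer

# Example: a 100-TB pool with an estimated Saving % of 40
print(f"{required_fmc_capacity_tb(100.0, 40.0):.1f} TB")  # 77.0 TB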

Estimating FMC capacity to expand an existing pool

When you need to install additional FMC drives to expand an existing pool, use the following workflow to estimate the required capacity.

1. Estimate the pool capacity to be added.
   Estimate the pool capacity required for user data in the same way you estimated capacity when creating a pool. If you are expanding a pool, also estimate the additional capacity.
2. Check the Saving %.
   The Saving % is displayed in the Pools window. View the Saving % by clicking Pools > FMC Pool Volumes Capacity > Saving (%).


Note: If the 14-TB FMC is used and the Saving % confirmed in this step exceeds 75%, apply 75% for "Saving %" in the formula of the next step.

3. Estimate the required FMC capacity.
   Use the following formula to calculate the required FMC capacity:
   Required FMC capacity = Required pool capacity × [100% - (Saving % - 10%)]
   The buffer in the above formula is as follows:
   • - 10%: Buffer representing the expected increase in capacity because of degradation in the Saving %
   Estimate the additional FMC capacity to be purchased according to the required FMC capacity calculated using the above formula and the free capacity assured for writing.
4. Check the free capacity assured for writing.
   In the Pools tab of the Pools window, check the value of FMC Pool Volumes Capacity. The free capacity assured for writing is the difference between the Total and Used values.
5. Estimate the additional FMC capacity to be purchased.
   If the free capacity determined in step 4 is sufficient, this step is unnecessary. If it is insufficient, use the following formula to calculate the required FMC capacity:
   Required FMC capacity to be purchased = (Required FMC capacity - free capacity assured for writing in step 4) × 110%

Workflow for creating parity groups, LDEVs, and pools with accelerated compression

When you have confirmed that the Saving % on used pool capacity is sufficient, use the following workflow to create accelerated compression-enabled parity groups, LDEVs, and pools.


1. Check whether FMC parity groups are already used.
   If yes, go to step 2.
   If no, go to step 3.
2. Enable the accelerated compression function for an existing parity group.
   Use the Edit Parity Groups window to enable accelerated compression.
3. Use the new FMC capacity to create accelerated compression-enabled parity groups. Use the Create Parity Groups window to create parity groups.
4. Create LDEVs to be used as pool-VOLs. Use the Create LDEVs window to create LDEVs.
   Best practice is to create 2.99-TB LDEVs because this is the maximum capacity of a pool-VOL. Use the following formula to calculate the recommended value of the total LDEV capacity to be defined for one parity group:
   Total LDEV capacity = FMC capacity of the parity group ÷ [100% - (Saving % - 10%)] ÷ 110%
   The buffers in the formula are as follows:
   • - 10%: Buffer representing the expected increase in capacity used because of degradation in the Saving %
   • ÷ 110%: Buffer representing the expected increase in capacity used to store management information of the storage system

   Note: If the 14-TB FMC is used and the Saving % exceeds 75%, apply 75% for "Saving %" in the formula for total LDEV capacity.

   For example, when the 3.2-TB FMC is used in a 3D+1P configuration and the estimated Saving % is 40%, the number of required LDEVs is calculated as follows:
   • Calculate the total capacity of the LDEVs to be created:
     Total LDEV capacity = 9.6 TB ÷ (100% - (40% - 10%)) ÷ 110% = 12.5 TB
   • Calculate the number of LDEVs as follows. The value enclosed in ceil( ) must be rounded up to the nearest whole number:
     ceil(12.5 TB ÷ 2.99 TB) = 5
     If the capacity of each LDEV is 2.99 TB, 5 LDEVs are required.

   Note: If you use multiple parity groups, best practice is to configure the same basic usable capacity expansion rate for each parity group. Use the following formula to calculate the basic usable capacity expansion rate of the parity groups:
   Basic usable capacity expansion rate of the parity groups = Total capacity of the LDEVs created from the parity groups ÷ FMC capacity of the parity groups

5. Create or expand the pool, and then add all of the created LDEVs to the pool.

   Note:
   • Add all of the LDEVs that were created from a single parity group to the same pool.
   • If LDEVs cannot be added to the pool, the cause might be that data restoration for the drive failed. Make sure that all LDEVs are registered to the pool. If an LDEV cannot be added to the pool, in the event of a drive failure, the LDEV will be included in the restoration targets when data is restored to a replacement FMC. In this situation, because the data size of the LDEV to be restored is larger than the size before restoration, the FMC capacity to be used for copying data might be depleted, causing data restoration to fail.
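The step 4 sizing calculation can be verified with a few lines of Python. This sketch (names are illustrative; nothing here is a Hitachi API) recomputes the 3.2-TB FMC, 3D+1P, 40% Saving % example above:

import math

MAX_POOL_VOL_TB = 2.99  # maximum capacity of a pool-VOL

def total_ldev_capacity_tb(fmc_capacity_tb: float, saving_percent: float) -> float:
    # Total LDEV capacity = FMC capacity / [100% - (Saving % - 10%)] / 110%
    saving_percent = min(saving_percent, 75.0)  # 14-TB FMC cap per the note
    return fmc_capacity_tb / (1.0 - (saving_percent - 10.0) / 100.0) / 1.1

fmc_tb = 3.2 * 3  # 3D+1P: three 3.2-TB data drives give 9.6 TB of usable FMC
total = total_ldev_capacity_tb(fmc_tb, 40.0)
num_ldevs = math.ceil(total / MAX_POOL_VOL_TB)
print(f"Total LDEV capacity: {total:.1f} TB")    # ~12.5 TB
print(f"Number of 2.99-TB LDEVs: {num_ldevs}")   # 5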

Monitoring the pool capacity

If you are regularly monitoring pool capacity and notice that a pool has insufficient space, or if insufficient space is reported in a related SIM report, you need to estimate the capacity to be added.

Estimating FMC capacity when pool capacity is insufficient

If the pool capacity or physical pool capacity is insufficient, use the following workflow to estimate the capacity to be added.

Note: If the 14-TB FMC is used and the Saving % exceeds 75%, apply 75% for "Saving %" when calculating the required FMC capacity (third task in the workflow).


Disabling accelerated compression on a parity group

Use the following workflow to disable accelerated compression on a parity group.


1. For the targeted pool, use the following formulas to determine whether the shrink pool operation can be performed:
   • Pool capacity after shrinking = (pool-capacity-before-shrinking) - (total-capacity-of-pool-VOLs-with-Expanded-Space-Used=Yes)
   • Decision formula: (used-pool-capacity) < (pool-capacity-after-shrinking) × (depletion-threshold)
   If the condition of the decision formula is met, you can delete the pool-VOLs with Expanded Space Used = Yes. Go to step 3.
   If the condition of the decision formula is not met, you cannot delete the pool-VOLs with Expanded Space Used = Yes. Go to step 2.
2. Expand the pool.
   For the LDEVs to be added as pool-VOLs, use LDEVs with Expanded Space Used = No. Add capacity that is larger than the total capacity of the pool-VOLs with Expanded Space Used = Yes.

3. Shrink the pool so that all of the pool-VOLs with Expanded Space Used = Yes are deleted from the pool.
4. Format the LDEVs with Expanded Space Used = Yes.


5. Delete the LDEVs with Expanded Space Used = Yes.
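The step 1 decision formula is easy to script. The following Python sketch (all names and sample values are illustrative; the depletion threshold is expressed as a fraction and would be the value configured for your pool) shows the check:

# Minimal sketch of the step 1 decision formula for disabling
# accelerated compression. Names and sample values are assumptions.
def can_shrink_pool(pool_capacity_tb: float,
                    expanded_space_pool_vols_tb: float,
                    used_pool_capacity_tb: float,
                    depletion_threshold: float) -> bool:
    # Pool capacity after shrinking = capacity before shrinking minus the
    # total capacity of pool-VOLs with Expanded Space Used = Yes.
    capacity_after_shrinking = pool_capacity_tb - expanded_space_pool_vols_tb
    # Decision: used capacity < capacity after shrinking x depletion threshold
    return used_pool_capacity_tb < capacity_after_shrinking * depletion_threshold

# Example: 100-TB pool, 30 TB from expanded-space pool-VOLs, 50 TB used,
# depletion threshold of 80%
if can_shrink_pool(100.0, 30.0, 50.0, 0.8):
    print("Delete the expanded-space pool-VOLs (go to step 3).")
else:
    print("Expand the pool first (go to step 2).")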


C Notices

This software product includes the following redistributable software:

□ LZ4 Library


LZ4 Library

This software product includes LZ4 Library.

LZ4 Library

Copyright © 2011-2014, Yann Collet

All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
• Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


Glossary

A

allocated volume

A logical device (LDEV) for which one or more host LUN paths are defined.

C

cache logical partition (CLPR)

Virtual cache memory that is set up to be allocated to hosts that are in contention for cache memory. CLPRs can be used to segment storage system cache that is assigned to parity groups.

CLI

command line interface

CLPR

See cache logical partition.

concatenated parity group

The concatenation of two or more parity groups into one group. Using a concatenated parity group reduces the time that is required to access data (especially sequential data).

control unit (CU)

Created in an enterprise-class storage system. Also called a CU image. The LDEVs created in a storage system are connected to a single CU, and a number is assigned to each CU for identifying its LDEVs. Therefore, volumes (LDEVs) in a storage system are specified by the CU number (CU#) and LDEV number.


copy pair

A primary and secondary volume pair linked by the volume replication functionality of a storage system. The primary volume contains original data, and the secondary volume contains the copy of the original.

Copy operations can be synchronous or asynchronous, and the volumes of the copy pair can be located in the same storage system (local copy) or in different storage systems (remote copy).

CSV

comma-separated values

D

data pool

One or more logical volumes designated to temporarily store original data. When a snapshot is taken of a primary volume, the data pool is used if a data block in the primary volume is to be updated. The original snapshot of the volume is maintained by storing the changeable data blocks in the data pool.

device (dev or DEV)

A physical or logical unit with a specific function.

discovery

A process that finds and identifies network objects. For example, discovery may find and identify all hosts within a specified IP address range.

DP pool

The area where DP pool volumes (actual volumes) are registered. When a DP volume (virtual volume) receives a write operation from a host, that data is stored in a DP pool volume.

When Dynamic Provisioning and Dynamic Tiering must be differentiated, this document uses the terms DP pool and HDT pool.

DP pool volume

An actual volume that is one of the volumes making up a DP pool.

When Dynamic Provisioning and Dynamic Tiering need to be distinguished, this manual uses the terms DP pool volume or HDT pool volume.


DP volume

A virtual volume that is created from a Dynamic Provisioning (DP) pool (that is, it is associated with a DP pool).

When Dynamic Provisioning and Dynamic Tiering must be differentiated, this document uses the terms DP volume and HDT volume.

E

external path

A path from a storage port of a storage system to a volume on a connected external storage system.

external volume

A logical volume whose data resides on drives that are physically located in an externally connected storage system.

F

flash module drive (FMD)

A storage device, developed by Hitachi, that uses flash memory.

FMD

See flash module drive.

H

HDP

Hitachi Dynamic Provisioning. See Dynamic Provisioning.

HDT

See Hitachi Dynamic Tiering.

Hitachi Dynamic Provisioning (HDP)

Functionality that allocates virtual volumes to a host and uses the physical capacity that is necessary according to the data write request.

Hitachi Dynamic Tiering (HDT)

Functionality that is used with Hitachi Dynamic Provisioning that places data in a hardware tier according to the I/O load. For example, a data area that has a high I/O load is placed in a high-speed hardware tier, and a data area that has a low I/O load is placed in a low-speed hardware tier.

host group

Custom grouping of hosts that segregates hosts in a meaningful way, for example, a group of hosts that is segregated by operating system. A host group can be shared with another virtual port or another physical port for alternate path support.

HSD

Host storage domain. A group used to strengthen the security of volumes in storage systems. By associating and grouping hosts and volumes by storage system port, host storage domains can be used to restrict access from hosts to volumes.

Device Manager defines the host groups set up with the storage system LUN security function as host storage domains. Host storage domains for storage systems that do not have host groups are defined in the same manner as if they had been set with the LUN security function.

I

I/O

input/output

internal volume

A logical volume whose data resides on drives that are physically located within the storage system.

IOPS

I/Os per second

iSCSI

Internet Small Computer Systems Interface

L

LDKC

logical disk controller

logical device (LDEV)

A volume created in a storage system. See also LU.

logical group

A user-defined collection of managed resources (hosts and volumes) that are grouped according to business operations, geographic locations, or other organizational divisions. Logical groups can be public or private:
• Public logical groups are accessible by any HCS user.
• Private logical groups are accessible only by HCS users who belong to user groups that are associated with the logical group.

logical unit (LU)

A volume, or LDEV, created in an open storage system, or configured for use by an open-systems host, for example, OPEN-V.

logical unit number (LUN)

A unique management number that identifies a logical unit (LU) in a storage system. A logical unit can be an end user, a file, a disk drive, a port, a host group that is assigned to a port, an application, or virtual partitions (or volumes) of a RAID set.

Logical unit numbers (LUNs) are used in SCSI protocols to differentiate disk drives in a common SCSI target device, such as a storage system. An open-systems host uses a LUN to access a particular LU.
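
As a hedged aside (assuming the Linux-style H:C:T:L SCSI address format, which this guide does not itself specify), a short Python sketch showing where the LUN sits in such an address:

```python
# Assumption: a SCSI address written "host:channel:target:lun", as on Linux;
# the final field is the logical unit number (LUN).

def lun_from_scsi_address(address: str) -> int:
    """Extract the LUN from an address like '2:0:1:7' (returns 7)."""
    host, channel, target, lun = (int(part) for part in address.split(":"))
    return lun

print(lun_from_scsi_address("2:0:1:7"))  # 7
```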

LU

See logical unit.

LUN

See logical unit number.

M

main control unit (MCU)

A storage system at a primary, or main, site that contains primary volumes of remote replication pairs. The main control unit (MCU) is configured to send remote I/O instructions to one or more storage systems at the secondary, or remote, site, called remote control units (RCUs). RCUs contain the secondary volumes of the remote replication pairs. See also remote control unit (RCU).

master journal (M-JNL)

The primary, or main, journal volume. A master journal holds differential data on the primary replication system until the data is copied to the restore journal (R-JNL) on the secondary system. See also restore journal.

P

pair status

Indicates the condition of a copy pair. A pair must have a specific status for specific operations. When a pair operation completes, the status of the pair changes to a different status determined by the type of operation.

pool volume (pool-VOL)

A logical volume that is reserved for storing Copy-on-Write Snapshot data or Dynamic Provisioning write data.

primary volume (P-VOL)

In a volume pair, the source volume that is copied to another volume using the volume replication functionality of a storage system. The data on the P-VOL is duplicated synchronously or asynchronously on the secondary volume (S-VOL).

R

remote control unit (RCU)

A storage system at a secondary, or remote, site that is configured to receive remote I/O instructions from one or more storage systems at the primary, or main, site. See also main control unit.

resource group

A collection of resources that are grouped by one or more system resource types.

resource pool

A type of resource group to which resources of a virtual storage machine in a VSP G series or VSP F series storage system belong, if those resources have not been added to another resource group. There are two types of resource pools: the resource pool on the default virtual storage machine and the resource pool that is created automatically for each virtual storage machine that you create on the storage system.

restore journal (R-JNL)

The secondary, or remote, journal volume. A restore journal holds differential data on the secondary replication system until the data is copied to the secondary volume (S-VOL). See also master journal (M-JNL).

role

Permissions that are assigned to users in a user group to control access to resources in a resource group. Resource groups can be assigned to different user groups with different roles.

S

secondary volume (S-VOL)

After a backup, the volume in a copy pair that is the copy of the original data on the primary volume (P-VOL). Recurring differential data updates keep the data in the S-VOL consistent with the data in the P-VOL.

storage pool

A collection of system drives or the logical container of a file system. Storage pools are created in the NAS Platform family of products. Storage pools are also called spans.

system drive

The basic (logical) storage element that is managed by the Hitachi NAS Platform family of products. A system drive is equivalent to a storage system volume.

T

tiered storage

A layered structure of performance levels, or tiers, that matches data access requirements with the appropriate performance tiers.

U

unallocated volume

A volume (LDEV) for which no host paths are assigned.

user group

A collection of users who have access to the same resources and have the same permissions for those resources. Permissions for users are determined by the user groups to which they belong. Users and resource groups can be assigned to multiple user groups.

V

virtual storage machine

A virtual storage system that you create on a VSP G series or VSP F series storage system that allows multiple resource groups to be treated as a single device.

volume (vol or VOL)

A name for a logical device (LDEV), a logical unit (LU), or concatenated LDEVs created in a storage system and defined to one or more hosts as a single data storage unit.

W

web client

An application that is used on a client machine to access a server on which management software is installed. A web client contains two parts: dynamic web pages and the web browser.

WWN nickname

World wide name nickname. A name that is set for the WWN of an HBA to identify which HBA to operate on. The WWN nickname makes it easy to identify accessible HBAs that are registered in a host group. You can display a list of WWN nicknames to confirm target HBAs for a host while you edit LUN paths during or after volume allocation or when you replace an HBA.

Index

A

accelerated compression 31, 33, 422
  about pools 33
  disabling 84, 431
  enabling 82
  guidelines for pools 421
  workflow for creating parity groups, LDEVs, and pools 427

access attribute
  changing to read/write 288

access attributes
  assigning to a volume 286
  changing to read-only or protect 287
  configuring 283
  expiration lock 289
  overview 284
  permitted operations 285
  requirements 284
  reserving volumes 291
  restrictions 285
  retention term 289
  workflow 286

active flash 38, 129, 193, 195, 198, 222
active flash pool

  creating by selecting pool-VOLs automatically 216
  creating by selecting pool-VOLs manually 212

adding 357
Adding CHAP users 363
alternate LU paths 298
ALUA 117
attribute 137, 274, 280
Attribute command 287
authentication

  configuring on fibre channel ports 391
  configuring on fibre channels 385
  fabric switch 382
  fibre channel 376
  host settings 378
  hosts and host groups 379
  hosts, enabling fibre channel switch 396
  mutual 377
  mutual of ports 385
  port settings 378
  users 377

B
basic provisioning

  overview 21
  workflow 25

boundary values 81

C
capacity 34

  pool-VOLs 137
capacity expansion 31

  about pools 33
  disabling 84
  enabling 82

capacity of a slot 81
capacity saving 31, 32
Changing 117
CHAP authentication 376, 378–380, 382, 384, 386, 389, 392, 393
checking 422
clustered-host storage

  (overview) 322
  creating 323

Command Control Interface
  access attributes restrictions 285

command device 287
compression 32

  disabling on a DP-VOL 233
configuration 336
creating

  active flash pool by selecting pool-VOLs automatically 216
  active flash pool by selecting pool-VOLs manually 212
  DP pool by selecting pool-VOLs automatically 209
  DP pool by selecting pool-VOLs manually 206
  DP-VOL with data direct mapping enabled 278
  Dynamic Tiering pool by selecting pool-VOLs automatically 216
  Dynamic Tiering pool by selecting pool-VOLs manually 212
  external volume with data direct mapping enabled 274

  iSCSI target 359
  LDEVs
    overview 93
    procedure 93
  parity groups, LDEVs, and pools with accelerated compression 427
  pool 205
  pool with data direct mapping enabled 276
  resource group 72

Creating 274
custom policies 177
custom-sized provisioning

  disadvantages 23
  overview 23
  when to use 23

D
data direct mapping 137, 274, 280
data retention strategies 40
Data Retention window 292
data transfer speed

  Fibre Channel ports 329
data-size 422
data-transfer speed and connection type 331
deduplication 32

  disabling on a DP-VOL 233
  disabling on a pool 224
  enabling on a new pool 205
  enabling on an existing pool 220

deduplication system data volume 32
  requirements 135

deleting 355
  all DP-VOLs with capacity saving enabled in a pool 223
  DP-VOL 235
  some DP-VOLs with capacity saving enabled in a pool 223

Deleting 355
disabling accelerated compression 431
disabling compression on DP-VOLs 233
disabling deduplication on a pool 224
disabling deduplication on DP-VOLs 233
DKA encryption 61
DP pool

  creating by selecting pool-VOLs automatically 209
  creating by selecting pool-VOLs manually 206

DP-VOL
  active flash 227
  attribute 27
  creating 227
  data direct mapping 27
  deleting 235
  disabling compression 233
  disabling deduplication 233
  enabling capacity saving function 232
  enabling compression 232
  enabling deduplication 232

  protection function 231
DP-VOL with capacity saving enabled

  deleting all in a pool 223
  deleting some in a pool 223

DP-VOL with data direct mapping enabled
  creating 278

DP-VOLs
  interoperability 139
  requirements 134

Dynamic Tiering pool
  creating by selecting pool-VOLs automatically 216
  creating by selecting pool-VOLs manually 212

E
Editing 49, 280
Editing CHAP users 364
Editing port settings 362
enabled 422
enabling capacity saving functions on DP-VOLs 232
enabling compression on DP-VOLs 232
enabling deduplication on a new pool 205
enabling deduplication on an existing pool 220
enabling deduplication on DP-VOLs 232
Error Detail dialog box 411
estimating 422
Estimating 137
estimating FMC capacity 422

  for a new pool 423
  to expand an existing pool 426
  when pool capacity is insufficient 430

expiration lock 293
  enabling/disabling 289

external volume with data direct mapping enabled
  creating 274

F
fabric switch 382
fabric topology 334
FC-AL (Fibre Channel-Arbitrated Loop) topology 334
Fibre Channel 298
fibre channel authentication

  setting 376
fibre channel ports

  configuring 329
  configuring authentication 385, 391
  registering user information 392
  setting port information 391

Fibre Channel ports
  addresses 332
  configuring 333
  data transfer speed 329

fibre channel switch
  authentication settings and connection results 384
  clearing user information 394
  enabling or disabling authentication 396
  registering user information 393
  setting authentication mode 395

Fibre Channel topology
  overview 334

file server
  adding volumes 307
  unallocating volumes 326

finding WWN
  AIX 346
  HP-UX 346
  IRIX 346
  Oracle Solaris 345
  Sequent 346
  Windows 345

fixed-sized provisioning
  disadvantages 22
  overview 22
  when to use 22

FMC capacity
  estimating 422
  estimating for a new pool 423
  estimating to expand an existing pool 426
  estimating when pool capacity is insufficient 430

G
global-active device 66

H
hafdc2_estimator 422
HBA

  remove 370, 371
  replace, exchange 369

host 357
  registering in iSCSI target 359

host authentication
  disabling in a host group 385
  enabling in a host group 385

host bus adapters
  changing HBA iSCSI name 351

host group 0 373
host groups 298

  adding 356
  authentication 379

host mode options
  listed and defined 340

host modes
  listed and defined 339

hosts 354
  authentication 379
  change manually registered 347
  changing host user information 387

  configuring workflow 339
  deleting host user information 388
  device manager updates 348
  managing 339
  registering host group user information 389
  registering host user information 386

I
important terms 20
interoperability of DP-VOLs and pool-VOLs 139
iSCSI 298, 336

  changing 351
iSCSI target 355, 357

  creating 359
iSCSI targets 354
iSCSI Targets

  changing iSCSI target setting 352
  changing name 352

K
key terms 20

L
LDEV 117
LDEVs

  blocking 101, 102
  changing settings 100
  confirming SSID 98
  editing SSID 99
  formatting 108, 111
  formatting in a parity group 112
  removing from registering task 101
  restoring if blocked 105

LDEVs of ALU attribution 401
LDEVs of ALUs or SLU attribution 125, 402
leap year 287
license information 435

  LZ4 436
logical groups

  deleting 359
logical units 298
logical volumes

  managing 297
  workflow for managing 329

login iSCSI names 355
LU paths 298

  configuring on Fibre Channel 298
  rules, restrictions, and guidelines 301

LUN
  defined 298

LUN Manager Function 298
LUN paths

  editing host group information 349
  editing paths in host clusters 367
  editing WWN nicknames 349, 367

  managing WWN nicknames 366
LUN security

  enabling on ports 375
  example of disabling 373
  example of enabling 373
  settings for authentication of hosts 378
  settings for authentication of ports 378

LUN security on ports 373
  disabling 376

LZ4 license information 436

M
management 49
management area capacity

  open-systems volume 81
managing

  hosts 339
  virtualized resources 45

managing logical volumes
  rules, restrictions, and guidelines 301

meta_resource 55
mode setting 117
monitoring 430
monitoring capacity 262
monitoring pools 265
mutual authentication 377

  ports 385, 389

N
Network 336
Network configuration for iSCSI 336
nicknames

  changing 351

O
operating system and file system capacity 138
Overview for iSCSI 336

P
pair operations

  for virtual storage machine pairs 47
parity groups 33

  configuring volumes 82
path management 366
performance

  optimizing by setting data transfer speed for a Fibre Channel port 329

Performance Monitor
  automatic starting considerations 285

point-to-point topology 334
pool 280, 430

  accelerated compression-enabled 33
  creating 205

  creating active flash pool by selecting pool-VOLs automatically 216
  creating active flash pool by selecting pool-VOLs manually 212
  creating DP pool by selecting pool-VOLs automatically 209
  creating DP pool by selecting pool-VOLs manually 206
  creating Dynamic Tiering pool by selecting pool-VOLs automatically 216
  creating Dynamic Tiering pool by selecting pool-VOLs manually 212
  disabling deduplication 224
  enabling deduplication 205, 220
  prerequisites for creating 205

pool with data direct mapping enabled
  creating 276

pool-VOLs
  interoperability 139
  requirements 132

pools 34
  deleting 225, 248
  threshold, modifying 248
  creating 238, 247
  creating, dialog box 240
  expanding or converting 245
  expanding volumes 249
  overview 236
  requirements 130
  verifying 244

ports
  mutual authentication 385
  rules, restrictions, and guidelines 301

protection function for DP-VOLs
  enabling and disabling 231

provisioning operations
  for virtual storage machine resources 47

Q
queue depth 301
Quick Format function 108

R
reclaiming zero pages 249
Removing 354
Removing CHAP users 365
Removing target CHAP users 365
requirements

  shared memory 41
reserving volumes with access attributes 291
resource group

  creating 72
resource groups

  adding resources to 73
  assignments 55
  deleting 74

  example not sharing a port 58
  example sharing a port 56
  meta_resource 55
  overview 53
  resource lock 55
  rules, restrictions, and guidelines 60
  strategies 54
  system configuration 54
  user groups 56

resource lock 55
retention term

  changing access attributes 289

S
S-VOL disable attribute 290
saving rate 422
secret

  in CHAP authentication 378
setting

  T10 PI mode on a port 358
settings 49
ShadowImage 144
shared memory

  requirements 41
SIM reference codes

  listed 268
SIMs

  completing automatically 271
  completing manually 272
  overview 268

slot capacity 81
software license information 435

  LZ4 436
software operations

  for virtual storage machine resources 49
specifications

  Virtual LUN volumes 76
SSID

  confirming 98
  editing 99
  requirements 86

storage
  virtualization with pools 236
  virtualized tiers 250

storage machines
  managing virtualized resources 45

Support Center 414
System Area 137
system requirements for provisioning 41

T
T10 PI mode

  setting on a port 358
terms 20
thin provisioning

  advantages 26
  configuring 127

  example 31
  overview 129
  workflow 31, 146

thin provisioning requirements 129
tier capacity

  reserving 182
  reserving example 183

tier relocation
  rules, restrictions, and guidelines 157

tiering
  workflow 196

tiering policy 180
  changing execution modes example 192
  notes on using 185
  overview 177
  relationship with graphs 181
  relationship with tiers 180
  reserving tier capacity 182
  setting on a V-VOL 180

tool 422
topology 334

  example of FC-AL and point-to-point 335
troubleshooting 405

  Dynamic Provisioning 406
  provisioning while using CCI 411

TrueCopy 143

U
unbinding LDEVs of SLUs attribution 403
Universal Replicator 144
user authentication 377

V
V-VOL

  active flash 227
  creating 227

V-VOL page reservation requirement 137
V-VOLs

  requirements for increasing capacity 135
VASA integrated storage systems 399
Virtual LUN

  size calculations 77
  specifications 76

Virtual LUN volume specifications 76
Virtual Partition Manager 146
virtual storage machine pairs

  pair operations for 47
virtual storage machine resources

  provisioning operations for 47
virtual storage machines

  about 46
  managing resources 45
  pair operations 47
  resource provisioning operations for 47
  software operations for virtualized resources 49

virtualization 49
virtualized resources

  about 46
virtualizing tiers

  overview 250
volume

  allocation 310, 312
Volume Migration

  automatic starting considerations 285
volumes

  allocating 305, 306, 308, 309, 311, 323
  allocating (overview) 91, 302
  allocating, dialog box 312
  allocating, prerequisites 303, 305
  boundary values 81
  creating 88, 92
  creating (overview) 86, 92
  creating, dialog box 89
  data placement profile, applying 260
  data placement profile, creating 258
  data placement profile, editing 260
  data placement profile, scheduling 261
  data placement profile, updating 259
  data placement, notes 257
  migrating data in, prerequisites 262
  quick format 87
  shredding 91
  tier relocation, editing 254
  tier relocation, monitoring 252
  tier relocation, scheduling 253
  tier relocation, starting 253
  tiering policy, applying 254
  tiering policy, customizing 255
  unallocating 325
  unallocating (overview) 324
  unallocating, dialog box 327
  unallocating, prerequisites 325

W
World Wide Name 344
WWN 344

  finding on AIX, IRIX, or Sequent 346
  finding on different operating systems 344
  finding on Oracle Solaris 345
  finding on Windows 345
  nicknames 312, 366

WWN nickname
  editing 350

Z
Zero Read Cap mode 295

Hitachi Data Systems

Corporate Headquarters
2845 Lafayette Street
Santa Clara, California 95050-2639
U.S.A.
www.hds.com

Regional Contact Information

Americas
+1 408 970 [email protected]

Europe, Middle East, and Africa
+44 (0) 1753 [email protected]

Asia Pacific
+852 3189 [email protected]

Contact Us
www.hds.com/en-us/contact.html

MK-92RD8014-11

October 2016