
White paper FUJITSU Storage ETERNUS CS8000 in IBM® z/OS® environments

Radically simplifying backup and archiving.

Content

1 Mainframe – reliable and powerful
  1.1 History of Virtual Tape in the IBM z/OS environment
  1.2 History of ETERNUS CS8000 in IBM z/OS environments
  1.3 Why use ETERNUS CS8000?
    1.3.1 True Tape Virtualization
    1.3.2 Highest levels of data availability
    1.3.3 Investment protection
    1.3.4 Consolidation of mainframe and open systems
  1.4 Objective of this White Paper
2 ETERNUS CS8000 - Short System Overview
  2.1 Dual-site / Split-site Configuration
  2.2 Cascaded Configuration / Long Distance Replication
3 ETERNUS CS8000 in IBM z/OS environments
  3.1 Configuration of Logical Drives
  3.2 ETERNUS CS LINK for IBM z/OS Concept and Description
    3.2.1 Concept
    3.2.2 Description
    3.2.3 Installation and Customizing Process
    3.2.4 DFSMS Considerations
    3.2.5 ISMF Considerations
  3.3 Optimize your Backup / Restore Production Business
  3.4 Migration from a foreign Virtual Tape System
  3.5 The Volume Group Concept of ETERNUS CS8000
  3.6 ETERNUS CS8000 FICON Performance
    3.6.1 Performance Measurement Environment
    3.6.2 Main results
    3.6.3 Basic Single Job Performance
    3.6.4 Maximum Performance of one FICON channel to one ICP Port
    3.6.5 Performance Measurement of 1 ICP
    3.6.6 Performance History of 1 ICP since 2008
4 Appendix
  4.1 The ETERNUS CS8000 client software family
  4.2 Models of the ETERNUS CS8000 Series


1 Mainframe – reliable and powerful

For decades, mainframes have stood for maximum data availability. Furthermore, mainframes can handle large quantities of data and support numerous end users. Mainframe computers thus play an important role in hosting business-critical processes, for example in banking, insurance, government and many other enterprises; they run large, complex programs and databases with thousands of users, holding data such as customer lists, payroll and accounting information. The importance of the data managed by mainframes not only requires a reliable and powerful host system but also an equally reliable and powerful data protection solution. After all, backup is your last line of defense against data loss.

1.1 History of Virtual Tape in the IBM z/OS environment

In 1996 IBM launched the first virtual tape system, the VTS. In 1998, StorageTek (today Oracle StorageTek) followed with its VSM. Both solutions were built to support only the IBM z/OS platform. Several other IBM z/OS ISVs and hardware providers also built virtual tape solutions for this platform, but were not able to gain a significant share of the market.

In 1999 Fujitsu Siemens Computers (today FUJITSU) launched its CentricStor virtual tape solution, which became the most open and flexible virtual tape library, supporting a great range of existing mainframe and major Unix and Windows operating systems as well as major tape libraries. Today CentricStor is known under the name ETERNUS CS8000. It was conceptually designed as a data protection appliance to be used with all types of tape management and backup software, servers, tape libraries and tape technology. It was initially released to support the BS2000/OSD and MVS mainframe platforms and ADIC tape libraries. Over the years, however, it has evolved into a highly versatile data protection appliance which can be deployed in almost any environment while protecting your previous investments in tape automation. Today, IBM mainframe customers use ETERNUS CS8000 on a global scale as their virtual tape appliance, either just for z/ platforms or in conjunction with open systems platforms (e.g. UNIX, Linux, Windows).

1.2 History of ETERNUS CS8000 in IBM z/OS environments

ETERNUS CS8000, the data protection appliance from FUJITSU, has had a record of success in the mainframe environment since 1999. The appliance is used around the world by the largest data centers in very data-intensive segments, such as financial services, insurance, telecommunications, public administration, industry and IT service provisioning. Its ongoing development has continued for many years in close collaboration with a well-established user group. This all translates into optimal support and a keen awareness of the demands arising from daily system use – all based on real-world best practices.

1.3 Why use ETERNUS CS8000?

1.3.1 True Tape Virtualization

FUJITSU Storage ETERNUS CS8000 makes the cross-media mix of disk and tape technology extremely efficient and thus benefits from the best of both worlds. To the mainframe host the appliance appears as a virtual tape library (VTL) – but that’s not all: the unified ETERNUS CS8000 platform provides intelligent management of the cross-media mix, known as automated Information Lifecycle Management (ILM). The appliance combines the strengths of disk and tape with rule-based management, providing seamless and automated backup-to-disk-to-tape (B2D2T) processes.

ETERNUS CS8000 is not only the ideal target for mainframe backup; it also offers the highest degree of flexibility in the market. It supports mainframe and open systems backup and archiving as well as disk-only, B2D2T, and VTL with deduplication functionality in the same appliance.

1.3.2 Highest levels of data availability

ETERNUS CS8000 provides the highest level of data protection possible, thus ensuring the availability of business-critical data from the mainframe environment.

■ Highly available appliance ensures backup processes:

To eliminate complex, time-consuming and error-prone tape management, ETERNUS CS8000 handles the management of all target systems, providing a highly available virtualized tape robot. The redundant architecture enables reception of the backup data even if a component fails: if a physical tape fails, the data is automatically directed to another tape drive or tape library to ensure that the data is safe.


■ Tape reorganization and self-healing:

The tape reorganization function makes effective use of the available capacity. Furthermore, regular reorganization of tape media refreshes the magnetization. The appliance automatically runs quality checks on duplicate copies. If an error occurs, the self-healing functionality recovers the inaccessible information from the other copy in order to ensure the availability of data even over the very long term. Self-healing also plays an important role if a copy is requested for a restore but is not available for some reason: the recall request is automatically redirected to the second copy, the data is provided, and the backup software’s job completes. The automated self-healing also recovers the failed copy, so the level of redundancy is restored.

■ Flexible copy management: ETERNUS CS8000 manages different levels of data availability. Data can be mirrored between two sites, replicated over very long distances and saved as multiple copies. This function is user-defined and policy-based. There is almost zero management effort after setup.

■ Automated continuation with cache mirror: The core element of the most disaster-resilient architecture is one logical ETERNUS CS8000 system which is deployed over two geographically separate sites, the so-called “split-site” configuration with “cache mirror”. The internal infrastructure is thereby extended to a second site. The host still sees one logical setup without being aware of the geographical location. The result is a system with no single point of failure which continues to run even after a complete site failure.

1.3.3 Investment protection

ETERNUS CS8000 is based on a modular architecture. Independent building blocks provide a genuine scale-out platform in both capacity and performance. Tape libraries are decoupled from the backup software. A wide range of libraries and tape drives is supported. Even if the libraries and drives are different, they can be connected in parallel. This makes it very easy to introduce a new generation of tape libraries or tape drives; ETERNUS CS8000 handles the data migration from the old library to the new one.

1.3.4 Consolidation of mainframe and open systems

ETERNUS CS8000 is a unified data protection appliance for all applications and the complete physical and virtual server environment, comprising mainframe and open systems. Flexible VTL and NAS support integrates backup, archiving, compliant archiving and second-tier file storage, while supporting the complete consolidation of heterogeneous environments - resources are leveraged and costs are dramatically reduced.

1.4 Objective of this White Paper

The next section provides basic architecture information in order to show the differences between ETERNUS CS8000 and other VTL (Virtual Tape Library) appliances, along with technical IBM z/OS-related aspects. The main part of this White Paper focuses on the IBM z/OS environment and how to get ETERNUS CS8000

■ Installed
■ Customized
■ Operated
■ Optimized
■ Performance-tuned

A general overview with more details about principles, architecture and the advantages of ETERNUS CS8000 can be found in the White Paper “Technical Concepts of ETERNUS CS8000” (available at http://fujitsu.com/eternus_cs).


2 ETERNUS CS8000 - Short System Overview

From a hardware viewpoint the ETERNUS CS8000 appliance operates as a modular system, built up from appliance processor nodes (front-end and backend) and internal disk storage which can consist of a number of RAID systems. The internal disk storage can act as cache storage for data or as the target disk store. With the optional deduplication function, the internal disk store can also be used as a highly efficient deduplication disk storage pool.

The front-end processor nodes (ICP – Integrated Channel Processor) act as the interface to the hosts. From an outside-in viewpoint, ETERNUS CS8000 is seen as a huge backup repository leveraging the advantages of a tape infrastructure while eliminating the disadvantages via an embedded virtualization layer. The backend processor nodes (IDP – Integrated Device Processor) operate the HSM (Hierarchical Storage Management) services with interfaces to the attached target storage systems with a tape interface, i.e. tape drives and tape libraries, WORM tape or disk libraries. A disk-based backend target can also be another ETERNUS CS8000.

The third type of processor node, the Virtual Library Processor (VLP), performs the appliance management services. Some of the services are used for configuring, operating and monitoring the appliance. Other services are specific to the subsystems. For the VTL subsystem the VLP also performs the inventory services for logical volumes, physical volumes and the tape volume cache (TVC) management. The VLP also provides TCP/IP-based control services for the virtual library emulation at the front-end and the physical tape libraries at the backend.

The possible number of processor nodes with front-end/backend operation is currently a maximum of 20. The possible number of RAID systems is a maximum of 16, providing a usable capacity of up to 22 PB. The scale-out cluster architecture allows seamless future extensions beyond the current limits.

2.1 Dual-site / Split-site Configuration

IBM z/OS mainframes are designed to support the most critical applications of a business, such as banking, insurance, automotive and reservation systems. They have to support high volumes of transactions and very fast response times, as well as be up and running all the time with a minimum of outage. These mission-critical applications also require the highest degree of security, availability and systems management.

ETERNUS CS8000 models for these highest availability demands are built with redundant processor nodes, redundant RAID systems, internal network redundancies, and failover control and management software. Such redundancies make it possible to distribute an ETERNUS CS8000 system configuration physically over two geographically distributed locations A and B while the appliance cluster framework still operates as one single logical entity. The internal communication and data transfer infrastructure of the appliance is extended in such a way that it maintains the logical structure of the internal SAN and LAN. The distance between the two locations greatly depends on the long-distance network infrastructure which has to be provided in order to extend the internal SAN. This architecture concept makes it easy to implement synchronous cache mirroring between locations A and B. Even if the cluster is split into two parts, the clustered nodes on each site are able to work with the part of the cluster file system (CAFS) which resides at their site.

Figure 1: Architecture of synchronous cache mirror

Please note the third location, Site C, and the Tie-Breaker Processor (TBP).


The Tie-Breaker Processor (TBP) works as an arbitration entity in case the SAN and LAN connection between locations A and B breaks down and the CAFS cluster is split into two parts. The TBP helps to decide which side of the ETERNUS CS8000 system is to continue working with IBM z/OS and which side is to be deactivated. The definition of such quorums is essential to ensure that a functioning and consistent file system remains available and that no split-brain situation occurs when a location fails or when connections between locations are interrupted. The Tie-Breaker should ideally reside at a third location with a separate power supply line. If there is no third location, the Tie-Breaker can be installed either at location A or location B, based on the requirements of the DR set-up.

For more detailed information about the split-site configuration please read sections 4.2 and 5.4 in the White Paper “ETERNUS CS8000: Disaster-resilient architectures” (see http://fujitsu.com/eternus_cs) or sections 3.4 and 4.4 in the White Paper “Technical Concepts of ETERNUS CS8000”.

2.2 Cascaded Configuration / Long Distance Replication

ETERNUS CS8000 can also be configured and used as a typical “grid” environment. As shown in the configuration below, a split-site system can copy the data asynchronously into another split-site system or a single-site system, comparable to a Global Mirror environment, and vice versa.

Figure 2: Long distance volume replication with split-site source system

A split-site ETERNUS CS8000 configuration operating as a source system already maintains two volume copies (Dual Save). A second ETERNUS CS8000 system at the remote site is connected via a long-distance network. This target system can also be a split-site system or just a single-site system. The target system receives the third volume copy via Triple Save, as defined by the source system. The second ETERNUS CS8000 VTL target system, which can be very far away from the split-site source locations, is suitable for disaster recovery. In the event of a major disaster that affects the two locations covered by the split-site configuration, the target system can take over the work from the source location.


3 ETERNUS CS8000 in IBM z/OS environments

An ETERNUS CS8000 data protection appliance behaves towards IBM z/OS systems like physical IBM 3490E 36-track tape technology and can communicate with more than one LPAR or SYSPLEX in parallel. The virtual tape drives (devices) offered by ETERNUS CS8000 are configured like physically connected devices within the IBM z/OS HCD dialog. The physical connection between ETERNUS CS8000 and the IBM host is implemented with direct FICON channels or via a FICON director. From an implementation viewpoint the connection is established by the host software for an ATL (Automatic Tape Library) which runs as a subsystem in IBM z/OS. The library communication from the mainframe started task runs over a TCP/IP connection. The Virtual Library Service (library emulation) is set to VAMU in the ETERNUS CS8000 installation and customizing dialog; the type of physical library attached to the backend of the ETERNUS CS8000 is not visible to the host. Depending on the system type and the number of FICON ports in the front-end processor nodes, ETERNUS CS8000 can provide a corresponding number of virtual drives towards the host.

Figure 3: ETERNUS CS LINK for IBM z/OS

3.1 Configuration of Logical Drives

From the host viewpoint each virtual tape drive is connected like a physical tape drive. The ETERNUS CS8000 can be controlled in several ways (additional information about the control software is in the appendix). This White Paper focuses on the ETERNUS CS LINK for IBM z/OS library management software, which integrates ETERNUS CS8000 as an MTL (Manual Tape Library, a construct introduced with IBM z/OS 1.5) into an IBM z/OS environment.

The MTL definitions are performed via the 'Define Device Parameters / Features' panel of HCD. An MTL-resident device is defined by specifying YES for MTL and valid values for the LIBRARY-ID and LIBPORT-ID fields. The LIBRARY-ID identifies a library within a SYSPLEX. A LIBPORT-ID corresponds to a control unit within a defined library. The values for LIBRARY-ID and LIBPORT-ID are arbitrary because MTL libraries are purely logical entities.

One ICP (Integrated Channel Processor = front-end node of an ETERNUS CS8000) can be licensed with 16, 32, 64, or 128 virtual tape drives. As a maximum of 10 ICPs is offered, the overall number of virtual tape drives can be up to 1280. Each ICP can be attached with a maximum of 4 physical FICON channels.

Please note: For a heterogeneous environment the ICPs can be shared between mainframe and open systems. In such a situation, the front-end processor is equipped with a FICON and a Fibre Channel card, each with 2 ports. The distribution of the drives can be symmetric, e.g. 64 + 64, as well as asymmetric, e.g. 32 + 96, depending on the expected system load. The granularity of drive definitions should not be less than 8.


The following HCD examples describe a configuration for 64 tape drives. For IBM z/OS, an ICP represents up to 8 x 3490E Control Units (CU). A single 3490E controller supports 16 devices. On FICON channels, four logical control units are defined to represent all 64 drive addresses. The IOCP parameter CUADD on the CNTLUNIT statement (together with UNITADD) is used to define the logical control units (0-7). In a configuration of 64 logical drives per ICP, only CUADD=0, CUADD=1, CUADD=2 and CUADD=3 are required. The total of 1280 drives per ETERNUS CS8000 cannot be exceeded. Every ICP is equipped with at least two FICON ports which should be used for redundancy. Two FICON cards can be installed in the ICP so that a maximum of 4 physical FICON channels can be used for each ICP. Long Wave and Short Wave FICON modes are supported. Both a direct connection and a switched configuration via a FICON director (switch) are possible.
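For illustration, a minimal IOCDS fragment for the first 16 of the 64 drives could look like the sketch below. All numbers (CHPIDs 50/51, control unit number, device numbers 0D00-0D3F) are assumed placeholders, not values prescribed by the product; real values come from the site's I/O configuration plan, and continuation rules of the IOCP source format are omitted here for readability.

   * One logical 3490E control unit per CUADD, 16 devices each (sketch)
   CNTLUNIT CUNUMBR=0D00,PATH=((CSS(0),50,51)),UNIT=3490,CUADD=0,UNITADD=((00,16))
   IODEVICE ADDRESS=(0D00,16),CUNUMBR=(0D00),UNIT=3490,UNITADD=00
   * ... repeated analogously with CUADD=1/2/3 for devices 0D10-0D3F

The MTL attributes themselves (MTL=YES, LIBRARY-ID, LIBPORT-ID) are not IOCP parameters; they are set in the HCD 'Define Device Parameters / Features' panel as shown in the following figures.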

Figure 4: HCD Definition: Control Unit and IO Device Definition with Path Relation

Figure 5: HCD Drive Definition

Please note: The values for LIBRARY-ID, LIBPORT-ID, and the MTL=YES definition are relevant and required for the MTL library configuration and for the installation and usage of the ETERNUS CS LINK for IBM z/OS communication and control software.

The LIBRARY-ID will be referenced (and is mandatory) in the SMS environment and must be specified again in the ISMF Library Definition dialog. The LIBRARY-ID is an arbitrary (but non-zero) 5-digit hexadecimal value; it must be unique within the SYSPLEX in order to identify this library among all the MTLs and ATLs within the SYSPLEX.

The LIBPORT-ID is an arbitrary (but non-zero) 2-digit hexadecimal value. Although the value is arbitrary, it is recommended that the values begin with X'01' and increase accordingly. All devices physically attached to the same control unit and defined within the same MTL must have the same LIBPORT-ID value. The number of devices with the same LIBPORT-ID value in the same library cannot exceed 16.


Figure 6: HCD 3490 MTL Tape Drives Device Parameter


3.2 ETERNUS CS LINK for IBM z/OS Concept and Description

3.2.1 Concept

ETERNUS CS LINK for IBM z/OS is the library management software that integrates ETERNUS CS8000 into an IBM z/OS environment. The concept is:

■ the use of standard IBM z/OS mechanisms for tape allocation and mount influence
■ coexistence with all other available software for this purpose (e.g. NCS, ELS)
■ support of tape management systems (CA-1, TLMS and DFSMSrmm)

Note: Other tape management systems may be supported on a case-by-case basis.

ETERNUS CS LINK for IBM z/OS provides the following main features in IBM z/OS:

■ Allocation via standard IBM DFSMS functions
■ Mount interpretation via EMCS console and scratch selection
■ TCP/IP connection to ETERNUS CS8000 via LAN
■ Utilities to meet DFSMS and TMS requirements

The basic goal for this control software was to reduce manual operator intervention and replace it with automatic functions wherever required. All tape-related activities for mount/dismount and error handling are processed by integrated recovery functions. ETERNUS CS LINK for IBM z/OS also uses the IBM z/OS and tape management recovery functions. The major basis is the IBM z/OS Extended Management Console Service and SMS-MTL for selecting and allocating drives.

3.2.2 Description

Allocation: Device allocation works via the IBM z/OS DFSMS MTL function. Allocations that result in unspecific (scratch) mounts are interpreted via DFSMS ACS routines; by choosing the related storage group, the corresponding logical volume group of ETERNUS CS8000 is assigned. Allocations resulting in specific mounts are monitored by IBM z/OS and are directed to the library that is recorded in the volume entry of the TCDB. Workload balancing is achieved by standard IBM z/OS algorithms.

Mount interpretation via EMCS console and scratch selection: The interpretation of all tape-related messages is done via an EMCS console task. The EMCS console task uses a message table that is customized according to the installed tape management system. All actions are pre-filtered from the global message traffic and stored within a WTO table. The message queue analyzer then builds up the action queue for the VAMU communicator. For unspecific mounts the pool name is extracted from the mount message. For DFSMSrmm tape management the available API is used to search for a suitable scratch volume. For all other tape management systems the catalog search interface (CSI) is used to select scratch tapes from the TCDB (VOLCAT), which is always kept in sync with the tape management database if the OAM address space is active. Multiple volume pools based on TMS functions can be used to organize tape data processing. The EMCS task uses the esoteric name that is generated for the ETERNUS CS logical drives to select the unit addresses that it manages.

VAMU configuration and TCP/IP: ETERNUS CS LINK for IBM z/OS communicates with ETERNUS CS8000 via the VAMU interface feature, which emulates a tape library. ETERNUS CS8000 can run multiple VAMU instances which use dedicated pairs of TCP/IP ports. Each LPAR can operate a separate ETERNUS CS LINK for IBM z/OS task. In SYSPLEX environments either one ELINKZ instance runs in global mode and handles the entire SYSPLEX, or multiple ETERNUS CS LINK for IBM z/OS tasks run in LPAR mode. The parameter SCOPE=SPEC makes it possible to support more than one CS-ELINKZ task in a SYSPLEX; a maximum of 4 CS-ELINKZ tasks is supported in one SYSPLEX, each managing 1 to a maximum of 8 LPARs. Up to 99 independent IBM z/OS systems or LPARs can be connected with ETERNUS CS8000 Version 6.0 or higher. The definition of an IPv4 Internet address and two port numbers (default values 9055, 9056) is needed as primary and alternate ports for the connection to the ETERNUS CS8000 appliance. These port numbers must correspond to the ports defined in the ETERNUS CS8000 VAMU definition. ETERNUS CS LINK for IBM z/OS calls the Internet address and the defined port to establish the connection. When different TCP/IP stack names are in use, the TCP name to be used can be specified in the ELINKZ PARMLIB; the default name is TCPIP. The VAMU started task user needs an OMVS segment assigned.


3.2.3 Installation and Customizing Process

Note: The entire installation process is described in the ETERNUS CS LINK for IBM z/OS User Guide.

The installation and version/SP/patch update process is straightforward and does not need an SMP/E environment. The installation file on a CD-ROM contains all ELINKZ software functions and sample jobs. All IBM z/OS related files are delivered in IBM z/OS XMIT format. The media includes the following files:

■ Load library with all executable modules
■ Sample library with all JCL to install ETERNUS CS LINK for IBM z/OS
■ Sample parmlib with an ETERNUS CS8000 VAMU configuration for 256 drives
■ Documentation as PDF file
■ IBM z/OS RECEIVE job in text format with allocations for the XMIT files

An installation CD-ROM is delivered with each new service pack or version. Only the LOADLIB has to be replaced with the new one. New parameters or required changes in the PARMLIB member are documented in the enhancement news.

3.2.4 DFSMS Considerations

The ETERNUS CS LINK for IBM z/OS/DFSMS interface invokes the Storage Management Subsystem automatic class selection routines (referred to as DFSMS ACS routines) and uses the returned constructs as follows:

■ to alter the UNIT parameter for static or dynamic allocation requests for new data sets
■ to influence ETERNUS CS LINK for IBM z/OS allocation and ETERNUS CS8000 selection

The DFSMS interface is involved in the following phases of request processing:

■ During JCL interpretation or dynamic allocation, the interface is used to perform unit name substitution for the requested UNIT parameter value.
■ During ETERNUS CS LINK for IBM z/OS common allocation, the interface provides the means for ELINKZ device separation and directed allocation to modify the eligible device list (EDL), which ensures that a device is allocated matching the characteristics of the requested data set. At this time ELINKZ may perform scratch subpool specification, etc. The interface also ensures that the appropriate media type (logical size) is selected to satisfy the request.

Unit name substitution uses the DFSMS storage group name returned by the DFSMS storage group ACS routine to override the unit name for both static (JCL DD statements) and dynamic (SVC 99) allocation requests before IBM z/OS builds the eligible device list (EDL) for the request. Media type and recording technique requirements for a request can be specified in the DFSMS data class instead of using a different tape technique (virtual, real) with a different capacity.

Storage groups, storage classes, and data classes can be defined and are used by ELINKZ in the following ways (a hedged ACS routine sketch follows this list):

■ ELINKZ uses the storage group name to perform unit name substitution as described above.
■ ELINKZ does not use the DFSMS storage class construct, but the ACS routines require a storage class to perform SMS allocation and to eventually assign a storage group. A default storage class thus has to be defined.
■ ETERNUS CS LINK does not use the data class for the selection of volume subpools. A default data class may be specified when defining the library entry to avoid unnecessary warning messages during the startup of OAM, or if there is a need to specify different recording techniques.
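To make the storage class / storage group interplay concrete, here is a minimal sketch of the two ACS routines. The names SCETCS (storage class), SGETCS (storage group) and the esoteric ETCSVTL are assumed placeholders, not values prescribed by the product; real routines would typically add further filtering on data set names, job names or &DATACLAS.

   PROC STORCLAS
     /* Assign a default storage class so that SMS allocation        */
     /* proceeds and the storage group routine below is driven.      */
     FILTLIST CSUNIT INCLUDE('ETCSVTL')
     IF &UNIT = &CSUNIT THEN
       SET &STORCLAS = 'SCETCS'
   END

   PROC STORGRP
     /* Route the request to the MTL storage group; ELINKZ uses the  */
     /* storage group name for unit name substitution.               */
     IF &STORCLAS = 'SCETCS' THEN
       SET &STORGRP = 'SGETCS'
   END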


3.2.5 ISMF Considerations

ISMF Storage Administrator Mode is required to configure a new library.

■ Select Option 10 “Library Management” in the ISMF PRIMARY OPTION MENU.
■ Select Option 3 “Tape Library” in the LIBRARY MANAGEMENT SELECTION MENU.

Figure 7: The new ETERNUS CS8000 is specified as a library in the active control data set

Use the following example to define the new ETERNUS CS8000 library. The library ID which has been defined in the HCD process needs to be specified. The defined library needs a Default Entry Class; best practice is to use a name like ETCS. No further definitions are required on page 2 of the dialog.


Figure 8: The system is specified in which the newly created library should be ONLINE

The tape library is now defined.

The next steps are to define:

■ Default Data Class (ISMF Dialog Option 4)
■ Storage Groups representing the scratch subpools (ISMF Dialog Option 6)
■ Storage Class (ISMF Dialog Option 5)
■ Default Management Class (ISMF Dialog Option 3)

Details are provided in the following four figures.


Figure 9: Definition of the Default Data Class (DCETCS), ISMF Dialog Option 4

For the next 6 panels the default values can be used.


Figure 10: Definition of the Storage Groups representing the scratch subpools, ISMF Dialog Option 6

The Storage Groups represent the scratch subpools (with a non-RMM TMS only one storage group is needed). Use the ISMF dialog again to start the definitions (Option 6). This scratch subpool is mapped to an ETERNUS CS logical volume group (LVG). The Storage Group type is TAPE.

The description and the library name previously specified in the ISMF Library dialog need to be entered in the next panel. Remember that for non-RMM only one storage group is needed to represent the library. The storage group is enabled by entering “Y” in the “DEFINE SMS Storage Group Status” field.


Figure 11: Definition of the Storage Class, ISMF Dialog Option 5

A storage class needs to be defined for SMS tape processing. The storage class is only used as a reference to ensure that the storage group can be assigned later. All values can be kept at their default values.


Figure 12: Definition of the default Management Class, ISMF Dialog Option 3

The default values can be used for the next 6 panels within this dialog.


3.3 Optimize your Backup / Restore Production Business

As described in the previous sections, the ETERNUS CS8000 system in conjunction with the ETERNUS CS LINK for IBM z/OS software appears as a Manual Tape Library with up to 1280 tape drives for mainframe backup and archiving applications. For investment protection the existing tape environment may be reused at the backend of ETERNUS CS8000.

3.3.1 Physical Tape Environment

If the existing tape environment is based on physical tape drives, the backup jobs are quite often consolidated to fill the physical tapes to their maximum in order to reduce the number of tapes required. This optimizes the physical tape usage but reduces the possibility of parallel work, and the overall throughput is not optimized either. With a virtual tape solution such as ETERNUS CS8000, more jobs can run in parallel, up to the maximum number of available virtual tape drives. Thanks to this parallel execution the required backup time is reduced and backup windows can be better met.

Due to the Logical Volume Group concept, each application's special requirements on logical volume size and grouping can be satisfied. Example: If full volume dumps of 3390-9 volumes are to be created with DFSMSdss or Innovation FDR, it is sensible to use a logical volume size of 5 GB (see the JCL sketch at the end of this section). With a compression factor of 3, each 3390-9 volume can be dumped to one logical tape volume. Space is not wasted, neither in the ETERNUS CS disk cache nor on the physical tape: only the data which was written, in compressed form, is stored in the cache. The gap between the actual allocation of, for example, 3 GB and the maximum of 5 GB is not allocated and is thus available for the next allocation. Smaller backups from the TSO environment can easily be satisfied with a logical volume size group of 900 MB up to 2 GB. DB2 log files from large storage pools or 3390-54 volumes can be grouped together into larger logical volume sizes such as 20 GB or 50 GB. HSM backup and archiving data can be grouped together as well, depending on the respective business needs.

There are additional advantages to introducing ETERNUS CS into a mainframe backup environment: the heavy start/stop operation of a plain physical tape environment is eliminated, and the physical tape drives at the backend operate in a healthy streaming mode. In a physical environment the allocation and re-allocation of drives forces applications to synchronize so as not to block each other; with the high number of available virtual tape drives, concurrent data streams can coexist without mutual blocking.

3.3.2 Virtual Tape Environment

Even a migration from an existing virtual tape system to ETERNUS CS8000 is easy. As described above, the main advantage is a platform-independent virtual tape solution which can be shared even with an open systems backup environment. This dramatically reduces costs and ensures that the resources within ETERNUS CS8000 (e.g. cache, front-end processors) can be shared without impacting backup requirements. Another advantage of ETERNUS CS8000 is the scalability of up to 1280 virtual tape drives in a single system - even if that system is physically split across two sites: each block written is secured in real time by the cache mirror on both cache sides. With long-distance replication or the cascading function, work in a grid environment is possible, with more than 2,000 virtual tape drives available in total.

The third copy function makes it possible to have three copies of data on tape, or even to create four copies. You can also choose which physical tape drives and robotics are used at the backend. Several customers use different hardware (disk, tape) to have a real media break at the backend.
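To illustrate the full-volume dump example from section 3.3.1, here is a minimal JCL sketch. The data set name, the source volume serial PRD001 and the esoteric unit name ETCSVTL are hypothetical placeholders; in an SMS-managed setup the ACS routines would assign the storage group instead of a UNIT override.

   //FULLDUMP JOB (ACCT),'DSS FULL DUMP',CLASS=A,MSGCLASS=X
   //* Dump one 3390-9 volume to one 5 GB logical tape volume
   //DUMP     EXEC PGM=ADRDSSU
   //SYSPRINT DD SYSOUT=*
   //DASD     DD UNIT=3390,VOL=SER=PRD001,DISP=SHR
   //TAPE     DD DSN=BACKUP.PRD001.DUMP,UNIT=ETCSVTL,
   //            DISP=(NEW,CATLG),LABEL=(1,SL)
   //SYSIN    DD *
     DUMP FULL INDDNAME(DASD) OUTDDNAME(TAPE)
   /*

With a 3:1 compression ratio, the roughly 8.5 GB of a full 3390-9 fits comfortably on a single 5 GB logical volume, as calculated above.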


3.4 Migration from a foreign Virtual Tape System

Migrating a virtual tape system to ETERNUS CS8000 is straightforward if the factors influencing the migration are considered:

■ The different tape management systems
■ Backup environments
■ Storage management systems and their policies
■ Different robotic control environments

Which migration software should be preferred, and how can Fujitsu support such a migration?
Dedicated migration software is recommended for a very heterogeneous tape environment where several tools, programs and applications are used. It is then possible to copy all standard application files as well as many proprietary file types, such as DB2, DFSMSdss, DFSMShsm, CA-Disk (DMS), FDR, Mobius Infopac, SAR (CA-View) and SAS. All this data can be copied without any changes in the catalog environment (for example, last-used or last-modification date, creation job name etc.). With a streamlined backup and tape environment (such as IBM DFSMShsm or CA-Disk only, or Innovation FDR), the on-board tools of the respective programs are recommended for migrating their own data from the old tape environment to ETERNUS CS8000 (see the hedged DFSMShsm sketch below). A lot of data may not have to be migrated at all, based on the defined retention period (for instance 40 days). It is always recommended to run the old and the new environment in parallel at the beginning of the new production phase: new backups are written to ETERNUS CS8000 while the old backup data expires on the old virtual tape system. Long-term backups and archives should be migrated as listed above.

Which tape management systems are supported, and which automated tape library interfaces or virtual tape systems can be migrated?
IBM DFSMSrmm, CA-1, CA-TLMS and BMC Control-T are supported as tape management systems - others may be supported on a case-by-case basis. If the existing tape environment is stored on standard IBM z/OS tape formats, there should be no hurdles migrating this data to ETERNUS CS8000.

How long does a migration from a (virtual) tape system to ETERNUS CS8000 take?
In general, this depends on the number of logical tapes, the physical devices at the backend of the old VTS, the usage of the tapes, and the average mount time. The following rough estimate is possible:

■ IBM VTS B10 (4 x 3590 backend devices): 15,000 (800 MB) tapes in 30 calendar days
■ IBM VTS B20 (8 x 3590 backend devices): 25,000 (800 MB) tapes in 30 calendar days

How does the migration software work?
The data on the old (virtual) tape library is copied on a block-level basis without altering the characteristics of the data sets. The input data set characteristics (BLKSIZE, RECFM, LRECL, creating job name, creation date, etc.) are copied to the output data set without any change. This ensures that the output data set is an exact replica of the input data set.
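As a hedged illustration of the "on-board tools" approach for DFSMShsm-owned data (the esoteric name ETCSVTL and the thresholds are assumptions, not a prescribed procedure): once the SMS/ELINKZ definitions direct new tape output to the ETERNUS CS8000 library, the standard RECYCLE function can move valid data off the old volumes.

   /* Direct recycle output to the new library's esoteric, then     */
   /* recycle old volumes regardless of how much valid data they    */
   /* still hold (PERCENTVALID(100) makes every volume eligible).   */
   SETSYS RECYCLEOUTPUT(BACKUP(ETCSVTL) MIGRATION(ETCSVTL))
   RECYCLE ALL EXECUTE PERCENTVALID(100)

The same pattern applies to other products' native copy or recycle utilities; the right commands and thresholds depend on the installed product and release.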


3.5 The Volume Group Concept of ETERNUS CS8000

Prior to defining logical volumes in the ETERNUS CS8000, consider the total number of logical volumes required, the volume serial ranges to define, and the number of volumes within each range. The VOLSERs have to be unique throughout a SYSPLEX. The number of logical volumes required for handling the workload also needs to be determined. Also consider the size of logical volumes, the number of scratch volumes needed on a daily basis, the time needed for volumes to return to scratch, and how often scratch processing is performed.

Size of logical volumes: If a compression rate of 3:1 can be achieved, the effective maximum capacity of a 4 GB logical volume is 12 GB.

ETERNUS CS8000 allows the formation of volume groups, which are named entities. LVs (Logical Volumes) or PVs (Physical Volumes) with identical attributes can be grouped, and their attributes can be managed by set operations, i.e. only one command is required for all elements of the group. As a high overall number of volumes is typical of many installations, this grouping facilitates the management of LVs and PVs. It is worth grouping HSM data with a logical volume size of 900 MB - 2 GB or 4 GB together with other small backup files. Large backup files with the same retention period should also be kept together.

Grouping is effective for LVs via LVGs (Logical Volume Groups) as well as for PVs via PVGs (Physical Volume Groups). Each LVG is associated with at least one PVG. Each PVG is uniquely associated with a tape library. If a tape library has tape drives of different types, each PVG can only be associated with tape drives of one single type within the library. At least two PVGs are thus required in order to operate a tape library with two different tape technologies. It is possible to configure:

■ Up to 3 million LVs
■ Up to 513 LVGs
■ Up to 101 PVGs

Figure 13: Volume Group Concept


As hosts are not aware of the volume group concept and address volumes preferably by “name” (depending on the backup software: Volser, VSN, Label), sets of LVs are most conveniently formed by grouping LVs by their name. Grouping cartridges into a PVG is also handled by the administrator, in consideration of the LVGs assigned to the PVG. PVGs are one of several means for the administrator to influence ETERNUS CS8000 operations. Virtual tape libraries of different type and capacity can thus be provided for the hosts on the basis of a single, centrally managed physical data store.

For each LV, as seen and addressed by the host, the LVG determines uniformly for all its LVs:

■ The properties of the LVs, such as format, type of media and maximum size.
■ The PVs on which the data of the LVs is permanently stored, by assigning the LVG to one or more PVGs. For example, with an LVG assigned to two/three PVGs, all the LVs of the LVG are saved redundantly on two/three PVGs (Dual Save/Triple Save).

The assignment of an LVG to a PVG is limited by the size and capacity of the LVs in the LVG and the capacity of the cartridges in the PVG. An LV of maximum size must be stored as a whole on one single physical cartridge; it cannot be split into parts and continued on a second physical cartridge.


3.5.1 Example of a Mainframe and Open Systems LVG Definition

Two different systems (a mainframe host and a UNIX system) may use ETERNUS CS8000 in conjunction with a common library system. By grouping volumes, mainframe data and UNIX data can be stored on different PVs. The LVs of the mainframe are assigned to LVG1, while those of the UNIX system are assigned to LVG2. These LVGs are assigned to different PVGs. As a result of these assignments, mainframe data is now stored on PVs PV0001 through PV0300, while UNIX data is stored on PVs PV0501 through PV0600.

Figure 14: Example of a Mainframe and Open Systems LVG Definition

Before defining LVs within ETERNUS CS8000, the total number of LVs required must be considered, and the volume serial ranges and the number of volumes within each range have to be defined. The VSNs of LVs and PVs must be unique. The number of LVs required depends on the planned ETERNUS CS8000 workload. The number of PVs required depends on the workload and the accommodated capacity at the backend of ETERNUS CS8000. The following needs to be considered (a small worked example follows this list):

■ The number of LVs/LVGs
■ The average amount of data on a volume
■ The average compression ratio achieved for the data
■ Whether the Multiple Save function is to be used
■ Whether the VAULT function is to be used
■ The number of PVs/scratch PVs
■ The number of PVGs
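As a rough worked example, with all numbers assumed purely for illustration: if a site consumes about 400 scratch LVs per day and volumes return to scratch after 30 days, then roughly 400 x 30 = 12,000 LVs are in use at any one time. Adding headroom for peaks and for delays in scratch processing, defining some 15,000 to 20,000 LVs in that LVG would be a sensible starting point. On the PV side, if the average LV carries about 3 GB after compression and Dual Save is used, those 12,000 LVs represent roughly 2 x 36 TB of backend tape capacity, plus scratch PVs for reorganization.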


3.6 ETERNUS CS8000 FICON Performance

This section presents an extract of the FICON performance measurements of one ICP of an ETERNUS CS8000. Performance measurements for various scenarios are regularly performed by Fujitsu in order to ensure a well-balanced system, highest end-to-end performance, and accurate sizing in customer projects. For the complete report, please contact your local Sales or Pre-Sales Consultant.

3.6.1 Performance Measurement Environment

The measurements were performed on an ETERNUS CS8400 V6.0A equipped with RX300-S8 ISPs; the TVC consists of two DX90-S2 RAID systems. The host is an IBM z114 2818-M05-K04 with 10 LPARs running IBM z/OS 1.13. The connection to the ICP is switched, with up to 4 x FICON8 channels (8 Gbit/s). Each measurement was carried out under laboratory conditions using an EXCP-based measurement program. Varying results may be observed depending on the application type and version, disk performance, I/O configuration and general workload. Fujitsu uses similar environments for its performance measurements in order to ensure comparability across these measurements over time.

All tests have been carried out with a TFS without mirror (noCMF). VTL and base DX90-S2 performance, including the impact of using CMF (mirror), is documented in separate measurement reports. The tests have been carried out using an upgraded version of the ETERNUS configuration used for the V5.1A measurements. Performance data for V5.1A is taken from the V5.1A FICON performance report. The RAID configuration used for the V6.0A measurements is the same as that used for the V5.1A measurements.

Measurements have been made for the following aspects:

■ Single job performance for different types of programs
■ Maximum throughput, sustained performance
■ Example workloads with 32, 64 and 128 jobs on one ICP
■ Number of FICON channel paths
■ Comparison with version V5.1A
■ Workload on a DEDUP ICP

3.6.2 Main results

■ The maximum throughput for writing to one ICP is 2280 MiB/s (4 x FICON8).
■ The maximum throughput for reading from one ICP is 2500 MiB/s (4 x FICON8).
■ Sustained performance with medium-compressible data (HTS8) and a high workload of 64 mixed write/read jobs is 1500 MiB/s (32 KB block size), 2000 MiB/s (64 KB block size) and 1900 MiB/s (256 KB block size).
■ z114 channel utilization can reach more than 90% when using 1 or 2 channel paths under high workload, but 4 x FICON8 channels to one ICP are never exhausted.
■ The throughput is limited by the FICON channel when using only one or two channel paths with high workload and low compression; more FICON channels can increase the data throughput substantially.
■ Performance greatly depends on the type of workload running on the host; bigger block sizes and CCW chaining are the key factors for improving performance on the host side.

Recommendations:

■ Use 4 channel paths with high workload (64 and 128 drives).
■ Use more than one TFS, each TFS with several LUNs.
■ Use large block sizes and CCW chaining on the host (QSAM BUFNO parameter); see the JCL sketch below.
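As a hedged illustration of the host-side recommendation (data set names and the esoteric unit name ETCSVTL are assumed placeholders), a simple QSAM copy step could request more buffers and a larger block size like this:

   //COPYSTEP EXEC PGM=IEBGENER
   //SYSPRINT DD SYSOUT=*
   //SYSUT1   DD DSN=PROD.INPUT.DATA,DISP=SHR
   //* BUFNO raises the number of QSAM buffers so that more data can
   //* be moved per I/O; the large BLKSIZE reduces per-block overhead
   //* on the channel.
   //SYSUT2   DD DSN=BACKUP.OUTPUT.DATA,UNIT=ETCSVTL,
   //            DISP=(NEW,CATLG),
   //            DCB=(RECFM=FB,LRECL=4096,BLKSIZE=28672,BUFNO=20)
   //SYSIN    DD DUMMY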


3.6.3 Basic Single Job Performance

Measurement methods: The performance of a single FICON job mainly depends on the block size and on how the CCWs are coded. In general, moderately long CCW chains and bigger block sizes yield better performance. The measurements were carried out using an assembler (EXCP) program that uses different block sizes and different numbers of blocks written with one SSCH (CCW chaining). This is specified by a type parameter. One reason why this program was used to measure FICON performance, rather than system utilities such as DFSORT, IEBGENER and ADRDSSU, is to avoid performance degradation due to reading data from disk: all IBM z/OS disks in the measurement configuration are connected with FICON4 (and not zHPF) and are a possible bottleneck. The measurement program writes and reads the data to/from memory. A great variety of block sizes and CCW chaining is covered by the measurement program. The performance is computed by the measurement program itself; the start time is taken after the tape open, i.e. mount-wait times are not counted.

Figure 15: Mapping table of program type parameter, block size and SSCH data transfer size

   Type      Block size x count per CCW chain   Data per SSCH
   8         8 bytes x 8                        64 bytes
   10        1 KB x 8                           8 KB
   16        16 KB x 4                          64 KB
   32        32 KB x 8                          256 KB
   33        32 KB x 1                          32 KB
   64        64 KB x 1                          64 KB
   65        64 KB x 21                         1344 KB
   66        64 KB x 5                          320 KB
   256       256 KB x 1                         256 KB
   257       256 KB x 2                         512 KB
   258       (256 KB - 1) x 1                   256 KB - 1
   259       256 KB x 4                         1024 KB
   99-NOOP   64 KB x 1                          64 KB (with NOOP commands)

Results:

■ Type 65 shows the best single-job performance and behaves similarly to the IBM utility DFSORT; the utility ADRDSSU is similar to type 64.
■ As shown later, the program with the best single-job performance is not always the best when running multiple jobs, but these programs are always in the top ranking.
■ The measurement was carried out with just one channel path, as more channel paths cause overhead and do not improve the performance of a single job.
■ Single job performance mainly depends on the CPU power and the power control settings in the ISP.
■ A single job does not reach the maximum performance of one FICON8 channel.
■ All subsequent throughput measurements are done with types 65, 256 and 32 to obtain a representative mix.


Figure 16: Basic single job FICON performance - write and read throughput in MiB/s per program type (FICON8, 1 path, single job, pattern HTS8), comparing ICP type and CS version: RX300S8 V6.0A, RX300S7 V6.0A, RX300S6 V5.1A. (Chart data not reproduced here.)


3.6.4 Maximum Performance of one FICON channel to one ICP Port
The following provides information about the maximum performance that can be measured from one FICON channel (FICON8 or FICON4) to one ICP port.

Measurement Methods
Several write, read or mixed jobs transfer data in parallel to one ICP over one channel path using pattern HTS0. The measurement was carried out on a PRIMERGY RX300S8-based ICP. The FICON performance is measured with the FICONSTATS performance utility.

Figure 16: Maximum performance of one FICON channel to one ICP port

Results
■ The maximum is already reached with a few (6-14) jobs.
■ If a job does not have good single-stream performance, the maximum performance cannot be achieved even with many parallel jobs.
■ The FICON channel utilization is about 65-70% when the maximum performance is reached.
■ If more jobs are scheduled, the total performance drops slightly. With many jobs on one ICP (64 to 128) the performance drop is significant (more than 30%), and 2 or (even better) 4 channel paths should be used.
■ The maximum performance reported by IBM with an I/O driver benchmark is similar; see the FICON Express4 (4 Gbps) and FICON Express8 (8 Gbps) columns in its second table (not zHPF; zHPF is only used for IBM (disk) storage systems such as DS8000).

[Bar chart: maximum performance of one FICON channel on a z114, for job type 65 (BLK 64K, FICON8) and job type 32 (BLK 32K, FICON8), each with write, read and mix workloads; all six results lie between 610 and 665 MB/s.]
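The interplay of single-stream rate and job count in these results can be pictured with a first-order model: aggregate throughput grows linearly with the number of jobs until the channel ceiling is reached. The sketch below is a simplification, not the measurement method used here; the 665 MB/s ceiling is the measured value from Figure 16, while the per-job rates are hypothetical examples:

    # First-order saturation model of the observed behaviour.
    def aggregate_mb_s(single_stream_mb_s: float, jobs: int,
                       channel_max_mb_s: float = 665.0) -> float:
        """Parallel jobs scale linearly until the channel ceiling is hit.
        The ceiling is the measured maximum from Figure 16; the slight
        drop beyond ~64 jobs seen in practice is ignored here."""
        return min(single_stream_mb_s * jobs, channel_max_mb_s)

    # Hypothetical single-stream rates: a strong stream (50 MB/s)
    # saturates the channel with about 14 jobs, consistent with the
    # observed 6-14; a weak stream (5 MB/s) never gets close within a
    # realistic job count.
    for rate in (50.0, 5.0):
        print(rate, [aggregate_mb_s(rate, n) for n in (1, 8, 16, 64)])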


3.6.5 Performance Measurement of 1 ICP

Measurement Methods
This chapter outlines the performance results for a FICON job mix directed to one ICP. Different workloads are measured with respect to:
■ Compression pattern
■ Medium load (32 jobs), write / read / mixed
■ High and maximum load (64 and 128 jobs), write / read / mixed
The number of TFSs is set to 2, and four channel paths are used for the measurement. The FICON performance is measured by the measurement program and with GSTAT and/or FICONSTATS. The z114 channel (and CPU) utilization is taken from the activity panel on the HMC or from RMF; resource utilization of the ICP is measured with mpstat and cafs_iostat.
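As an aside, utilization samples from these different sources can be condensed to identify the limiting component of a run. The following Python sketch is a hypothetical illustration of that bookkeeping, not part of the measurement tooling; all component names and values are made up:

    from statistics import mean

    # Hypothetical per-interval utilization samples (percent) for one run.
    samples = {
        "z114_cpu":   [22.0, 35.0, 41.0, 38.0],   # HMC activity panel / RMF
        "ficon8_ch":  [55.0, 60.0, 63.0, 58.0],   # channel utilization
        "icp_cpu":    [80.0, 92.0, 95.0, 90.0],   # mpstat on the ICP
        "cache_luns": [88.0, 97.0, 100.0, 99.0],  # cafs_iostat
    }

    def bottleneck(samples: dict[str, list[float]]) -> str:
        """Name the component with the highest average utilization; it is
        the most plausible throughput limiter for the run."""
        return max(samples, key=lambda name: mean(samples[name]))

    print(bottleneck(samples))  # -> cache_luns for this made-up run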

Figure 17: Performance measurement of 1 ICP - Throughput with medium workload (32 jobs)

[Two bar charts: throughput in MiB/s on the RX300-S8 with 32 jobs and 4 x FICON8, comparing compression patterns (HTSZ, HTS8, HTS0, HTSP, NOCOMP) for write, read and mix; one chart for job type 65, one for job type 256. Results range from roughly 700 to 2400 MiB/s depending on pattern and direction.]


Figure 18: Performance measurement of 1 ICP - Throughput with high workload (64 jobs)

[Two bar charts: throughput in MiB/s on the RX300-S8 with 64 jobs and 4 x FICON8, comparing compression patterns (HTSZ, HTS8, HTS0, HTSP, NOCOMP) for write, read and mix; one chart for job type 65, one for job type 256. Results range from roughly 770 to 2560 MiB/s depending on pattern and direction.]


Figure 19: Performance measurement of 1 ICP - Throughput with maximum workload (128 jobs)

[Two bar charts: throughput in MiB/s on the RX300-S8 with 128 jobs and 4 x FICON8, comparing compression patterns (HTSZ, HTS8, HTS0, HTSP, NOCOMP) for write, read and mix; one chart for job type 65, one for job type 256. Results spread widely, from roughly 450 to just over 2500 MiB/s, with a few combinations recording no result.]


Results

Summary of medium workload (32 jobs)
■ z114 CPU utilization is between 20% and 50% during all tests.
■ The FICON8 channels were always below 65% utilization.
■ Utilization of the cache LUNs reaches up to 100%, depending on the pattern.
■ Utilization of the ICP CPU reaches up to 95%.
■ V6.0A on the same hardware (RX300S7) outperforms the V5.1A results because the FICON drivers have been performance-tuned.

Summary of high workload (64 jobs)
■ z114 CPU utilization is below 55% during all tests.
■ The FICON8 channels were always below 70% utilization.
■ Utilization of the cache LUNs reaches up to 100%.
■ Utilization of the ICP CPU reaches up to 99%.
■ In most cases, high load yields a slightly better result than medium load; no performance sag is observed.

Summary of maximum workload (128 jobs)
■ There may be a significant performance drop when using 128 drives compared to 64 or 32.
■ FICON channel usage ranges from 35% to over 90%; CPU usage of the LPARs ranges from 35% to 100%.
■ CPU usage in an ICP stays below 100%.
■ Using 128 drives on one ICP provides flexibility, but performance at maximum load depends greatly on the block size and CCW chaining.

3.6.6 Performance History of 1 ICP since 2008

Measurement Methods
Mixed read/write high load with 64 parallel job streams and 64 KB BLKSIZE (in 2008: 32 KB and 16 jobs).

Figure 20: Performance history of 1 ICP since 2008

Year | Average sustained R/W throughput per ICP (MiB/s)
2008 | 285
2010 | 875
2012 | 1116
2014 | 2000
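Expressed as an average annual growth rate, the step from 285 MiB/s (2008) to 2000 MiB/s (2014) works out to roughly 38% per year, although the 2008 value was measured with smaller blocks and fewer jobs and is therefore only roughly comparable. A one-line check in Python:

    # Figure 20: 285 MiB/s in 2008 grew to 2000 MiB/s in 2014.
    start, end, years = 285.0, 2000.0, 6
    cagr = (end / start) ** (1 / years) - 1
    print(f"{cagr:.0%}")  # prints 38%, i.e. ~38% average annual growth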


4 Appendix

4.1 The ETERNUS CS8000 client software family
The following figure shows the control software offered for the different mainframe operating system platforms to communicate with ETERNUS CS8000.

Figure 21: Overview - ETERNUS CS8000 client software family

The ETERNUS CS LINK for IBM z/OS is part of the ETERNUS CS8000 client software family:

■ ETERNUS CS LINK for IBM z/OS is the new base for integrating ETERNUS CS8000 into large-scale IBM z/OS environments. Its key feature is the use of standard IBM z/OS mechanisms for tape device allocation and mount processing.

■ ETERNUS CS LINK for IBM System i is the new base for integrating ETERNUS CS8000 into IBM System i environments. Its key feature is the use of standard AS/400 BRMS mechanisms for tape allocation and for influencing mount processing.

■ Oracle's ELS/NCS software can be used and delivered to integrate ETERNUS CS8000 into a heterogeneous or existing virtual tape environment.

■ MTC CI (for VM/VSE and i-Series) integrates ETERNUS CS8000 into VM/VSE or i-Series environments. A hardware gateway (HW Gateway) is required and is part of the offering.

Please see the corresponding data sheets for more information:
■ ETERNUS CS Link for IBM z/OS
■ ETERNUS CS Link for IBM System i


4.2 Models of the ETERNUS CS8000 Series
The core components of the ETERNUS CS8000 systems are based on rack-mountable Fujitsu PRIMERGY RX industry-standard servers and the Fujitsu ETERNUS DX RAID storage system. An ETERNUS DX RAID system consists of a base unit and RAID expansion shelves. The appliance supports up to 16 RAID systems for use as Tape Volume Cache (TVC), ViNS Online Store (VOS) and deduplication BLOCKPOOL. The RAID systems are configured with RAID 5 for SAS-disk-based configurations and RAID 6 for NL-SAS disks.

The components are installed in Fujitsu standard racks. All racks are 42 HU high and 1100 mm deep, with an asymmetric construction for optimized cable management and enhanced service access. These standard building blocks mean that each system can be configured and seamlessly scaled according to individual needs. Each system is manufactured in Fujitsu's own factories and has to pass the most stringent quality assurance processes before delivery.

Models at a glance
Each model configuration can be tailored to almost any individual need, with the option of further specific extensions. The storage subsystems CS VTL and CS ViNS can coexist within a single appliance, but depending on the specific customer requirements, some functions may be limited or excluded. Tailored configurations differ basically in the degree of component redundancy and in data throughput. Extensive throughput requirements can be met through flexible scalability in the number of interfaces and the RAID storage capacity. The basic functions of the embedded appliance operating software can easily be expanded with optional functions by activating the respective license keys.

Figure 22: Overview – Models of the ETERNUS CS8000 series

Models                   | CS8200               | CS8400                   | CS8800
Front-end                | VTL / ViNS           | VTL / ViNS               | VTL / ViNS
Data reduction           | VTL compression / VTL deduplication (all models)
Scalability              | RAID system scale-up | Full scale-out           | Full scale-out
Scalability nodes        | 2 nodes (standard)   | Up to 20 nodes           | Up to 20 nodes
Scalability RAID systems | 1 RAID system        | Up to 16 RAID systems    | Up to 16 RAID systems
Site location            | Single-site          | Single-site              | Split-site
System failover          | Single-site failover | Single-site failover     | Split-site failover
Synchronous replication  | None                 | Single-site cache mirror | Split-site cache mirror
Asynchronous replication | VTL cascading / ViNS replication (all models)
Backend tape replication | VTL dual save / VTL multiple save / ViNS dual save (all models)


ETERNUS CS8200
The ICP, IDP and VLP processors are designed to run together on a single physical server, referred to as an Integrated Universal Processor (IUP). The model CS8200 is built with two IUPs and one dedicated RAID system. The two IUP nodes of type ICP_IDP combine the attributes of ICP, IDP and VLP in a single ISP. An ICP_IDP has both a FC host adapter and a SCSI or FC controller to operate physical tape drives and tape libraries. This model delivers the entire CS VTL functionality and is an excellent fit for cascaded environments. As a result of the two VLPs, the two ICP_IDP nodes provide automatic failover between each other. Together with the internal RAID system, the ETERNUS CS8200 model provides basic redundancy and high availability for tape backup.

ETERNUS CS8400 / ETERNUS CS8800
Within the models ETERNUS CS8400 and ETERNUS CS8800, the ICP, IDP and VLP processors are distributed among several appliance nodes, each running dedicated ICP, IDP or VLP services. This provides reliable and scalable operation at the front-end and backend. With the dual-site model ETERNUS CS8800, the appliance can be spanned over two sites. A dual-site configuration providing cache mirroring and automatic failover is the ideal choice for fastest disaster recovery.

Maximum configurations
■ Max. 16 RAID storage base units per system
■ Max. 22 ISPs (10 ICPs, 10 IDPs, 2 VLPs) and one TBP
■ Max. 10 dedicated ICPNs for ViNS at the front-end
■ Max. 8 (4 active + 4 hot stand-by) dedicated IDPNs for the ViNS backend
■ No mixture of VTL and ViNS in one ICP or IDP
■ Max. 4 interface ports per ICP / 6 per IDP

Figure 23: Overview – Maximum configuration

Logical Scalability
Overview of maximum logical volumes and volume groups for ViNS and VTL / ViNS mixed mode (the number of PVs in native mode is the same as the number of LVs).

Model           | RAID systems | LV        | LVG | PV 1)  | PVG | TVPs/TVPGs
CS8200          | 1            | 300,000   | 340 | 50,000 | 100 | -
CS8400 - CS8800 | 16           | 3,000,000 | 512 | 50,000 | 100 | 100

For more information, please see the ETERNUS CS8000 datasheet.

Published by FUJITSU Copyright © 2015 Fujitsu Limited www.fujitsu.com/eternus 2015-01-28 WW-EN

© Copyright 2015 Fujitsu Technology Solutions GmbH. Fujitsu and the Fujitsu logo are trademarks or registered trademarks of Fujitsu Limited in Japan and other countries. IBM and z/OS are registered trademarks of International Business Machines Corporation in the United States and other jurisdictions worldwide. Other company, product and service names may be trademarks or registered trademarks of their respective owners. Technical data subject to modification and delivery subject to availability. Any liability that the data and illustrations are complete, actual or correct is excluded. Designations may be trademarks and/or copyrights of the respective manufacturer, the use of which by third parties for their own purposes may infringe the rights of such owner.