
SYBASE ON EMC STORAGE SYSTEMS

Version 2.1

Corporate Information

Corporate Headquarters

Hopkinton, MA 01748-9103

(508) 435-1000

http://www.EMC.com

Kelly (KJ) Bedard

EMC Database Solutions Team


Copyright © 2006 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

Trademark Information

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

Sybase on EMC Storage Systems

Version 2.1

Solutions Guide

P/N 300-003-937 Rev A01 H2345

Contents

Preface

Chapter 1  Sybase ASE (Adaptive Server Enterprise) on UNIX

1.1 Adaptive Server devices and system databases
    1.1.1 The master device
    1.1.2 Client/server communication
    1.1.3 Backup Server

Chapter 2  EMC Foundation Products
2.1 EMC Symmetrix DMX
2.2 EMC Solutions Enabler base management
2.3 Change Tracker
2.4 EMC Symmetrix Remote Data Facility
    2.4.1 SRDF benefits
    2.4.2 SRDF modes of operation
    2.4.3 SRDF device and composite groups
    2.4.4 SRDF consistency groups
    2.4.5 SRDF terminology
    2.4.6 SRDF control operations
    2.4.7 Failover and failback operations
    2.4.8 SRDF/A operations
    2.4.9 EMC SRDF/Cluster Enabler solutions
2.5 EMC TimeFinder
    2.5.1 TimeFinder/Mirror establish operations
    2.5.2 TimeFinder split operations
    2.5.3 TimeFinder restore operations
    2.5.4 TimeFinder consistent split
    2.5.5 TimeFinder reverse split
    2.5.6 TimeFinder/Clone operations
    2.5.7 TimeFinder/Snap operations
2.6 EMC Storage Resource Management

2.7 EMC ControlCenter
2.8 EMC PowerPath
2.9 EMC Replication Manager

Chapter 3  Cloning Sybase Databases
3.1 Cloning with EMC TimeFinder and Sybase shutdown
3.2 Cloning with EMC TimeFinder and Sybase quiesce
3.3 Cloning with EMC TimeFinder consistent split
3.4 Cloning with EMC SRDF consistency groups
    3.4.1 Populating consistency group definitions
    3.4.2 Propagating consistency group definitions
    3.4.3 Creating a consistency group explicit trip
    3.4.4 Cloning considerations for ConGroup
3.5 Summary of Sybase cloning techniques

Chapter 4  Backup Considerations for Sybase Environments
4.1 Backup using Sybase Backup Server
4.2 Backup using Standby Access
4.3 Backup using quiesce for external dump

Chapter 5  Sybase Recovery Procedures
5.1 Restoring with Sybase Backup Server
5.2 Restoring with standby access
5.3 Restoring with quiesce for external dump
5.4 Summary

Chapter 6  Understanding Disaster Restart and Disaster Recovery
6.1 Definitions
    6.1.1 Dependent-write consistency
    6.1.2 Database restart
    6.1.3 Database recovery
    6.1.4 Roll-forward recovery
6.2 Design considerations for disaster recovery and disaster restart
    6.2.1 Recovery Point Objective
    6.2.2 Recovery Time Objective
    6.2.3 Operational complexity
    6.2.4 Source server activity
    6.2.5 Production impact
    6.2.6 Target server activity
    6.2.7 Number of copies of data
    6.2.8 Distance for solution
    6.2.9 Bandwidth requirements
    6.2.10 Federated consistency
    6.2.11 Testing the solution
    6.2.12 Cost
6.3 Tape-based solutions
    6.3.1 Tape-based disaster recovery

    6.3.2 Tape-based disaster restart
6.4 Remote replication challenges
    6.4.1 Propagation delay
    6.4.2 Bandwidth requirements
    6.4.3 Network infrastructure
    6.4.4 Method of instantiation
    6.4.5 Method of reinstantiation
    6.4.6 Change rate at the source site
    6.4.7 Locality of reference
    6.4.8 Expected data loss
    6.4.9 Failback operations
6.5 Array-based remote replication
6.6 Planning for array-based replication
6.7 SRDF/S single Symmetrix to single Symmetrix
    6.7.1 How to restart in the event of a disaster
6.8 SRDF/S and consistency groups
    6.8.1 Rolling disaster
    6.8.2 Protection against a rolling disaster
    6.8.3 SRDF/S with multiple source Symmetrix arrays
6.9 SRDF/A
    6.9.1 SRDF/A using a single source Symmetrix array
    6.9.2 SRDF/A multiple source Symmetrix arrays
    6.9.3 How to restart in the event of a disaster
6.10 SRDF/AR single hop
    6.10.1 How to restart in the event of a disaster
6.11 SRDF/AR multihop
    6.11.1 How to restart in the event of a disaster
6.12 Database log shipping solutions
    6.12.1 Overview of log shipping
    6.12.2 Log shipping considerations
    6.12.3 Log shipping and standby access database
    6.12.4 Log shipping and quiesce for external dump
6.13 Running database solutions
    6.13.1 Mirror Activator

Chapter 7  Performance Considerations
7.1 Introduction
    7.1.1 The performance stack
7.2 Traditional Sybase layout recommendations
7.3 Symmetrix DMX performance guidelines
    7.3.1 Front-end connectivity
    7.3.2 Symmetrix cache
    7.3.3 Back-end considerations
7.4 RAID considerations
    7.4.1 Types of RAID
    7.4.2 RAID recommendations
    7.4.3 Symmetrix metavolumes
7.5 Host- versus array-based striping
    7.5.1 Host-based striping
    7.5.2 Symmetrix-based striping (metavolumes)
    7.5.3 Striping recommendations
7.6 Data placement considerations
    7.6.1 Disk performance considerations
    7.6.2 Hypervolume contention
    7.6.3 Maximizing data spread across the back end
    7.6.4 Minimizing disk head movement
7.7 SRDF and Sybase Bulk Copy Program (bcp)
    7.7.1 Overview of bcp
    7.7.2 bcp data format
    7.7.3 bcp speed
    7.7.4 Batch size
    7.7.5 Packet size
    7.7.6 Partitioned table bulk copy
    7.7.7 Buffer caches and I/O block size
    7.7.8 Log I/O size
7.8 Improving slow bcp performance

Chapter 8  EMC ControlCenter and Sybase
8.1 Storage Allocation
8.2 Monitoring
8.3 Performance Management
8.4 ECC Administration
8.5 Data Protection

Chapter 9  EMC SRDF and Sybase Replication Server
9.1 EMC SRDF overview
9.2 Sybase Replication Server overview
    9.2.1 Distributed primary fragments
    9.2.2 Corporate rollup
    9.2.3 Redistributed corporate rollup
    9.2.4 Warm Standby
    9.2.5 Materialization
9.3 Sybase Mirror Activator overview
9.4 Implementing Mirror Activator
    9.4.1 Implementation guidelines for Mirror Activator with SRDF/S
    9.4.2 Choosing an implementation method
    9.4.3 Pros and cons of the two implementation methods
    9.4.4 Implementation using concurrent SRDF
    9.4.5 Implementation using the Enterprise Restart consistency group solution
    9.4.6 Sybase Mirror Activator with SRDF/A
    9.4.7 Implementation guidelines for Mirror Activator with SRDF/A
    9.4.8 Implementation using SRDF/A
9.5 EMC SRDF and Sybase Replication Server
    9.5.1 Synchronous operations
    9.5.2 Asynchronous operations
    9.5.3 Automatic switching
    9.5.4 Redundancy
    9.5.5 Data loss
    9.5.6 Transactional consistency
    9.5.7 Immediate recovery
    9.5.8 Administration/maintenance
    9.5.9 CPU intensive
    9.5.10 Database restrictions
    9.5.11 Feature summary

Chapter 10  Symmetrix Storage Considerations for Sybase IQ-Multiplex
10.1 IQ-Multiplex capability
10.2 IQ-Multiplex architecture
    10.2.1 Write and query servers
    10.2.2 Tools for system administration
10.3 IQ-Multiplex indexing
10.4 Backup and data recovery
10.5 Qualifying IQ-Multiplex on Symmetrix systems
    10.5.1 Loading the database
10.6 Integrating IQ-Multiplex with TimeFinder
    10.6.1 Combining the technologies

Appendix A  Related Documents
A.1 Related documents

Appendix B  Sample SYMCLI Group Creation Commands
B.1 Sample SYMCLI group creation commands

Appendix C  Using Sybase Standby Access Method with TimeFinder
C.1 Required steps

Appendix D  Using Sybase quiesce for external dump with TimeFinder
D.1 Required steps

Appendix E  Using TimeFinder Consistent Split for Sybase
E.1 Examples and output

Appendix F  Recovering an IQ-Multiplex Write Server with TimeFinder
F.1 Recovery

Appendix G  Configuring Sybase ASE for Mirror Activator
G.1 Configuring the primary ASE
    G.1.1 Build the primary ASE
    G.1.2 Build the standby ASE
    G.1.3 Configure the primary ASE
    G.1.4 Configure the standby ASE
    G.1.5 Configuring Mirror Activator
G.2 Initialization output
    G.2.1 Initialize the primary database
    G.2.2 Initialize Replication Server
    G.2.3 Materialize the standby database (execute from MRA)
    G.2.4 Resume replication

Figures

Figure 1-1 Architectural representation of the ASE server environment
Figure 2-1 Basic synchronous SRDF configuration
Figure 2-2 SRDF consistency group
Figure 2-3 SRDF establish and restore control operations
Figure 2-4 SRDF failover and failback control operations
Figure 2-5 Geographically distributed four-node EMC SRDF/CE clusters
Figure 2-6 EMC Symmetrix configured with standard volumes and BCVs
Figure 2-7 ECA consistent split across multiple database associated hosts
Figure 2-8 ECA consistent split on a local Symmetrix system
Figure 2-9 Creating a copy session using the symclone command
Figure 2-10 TimeFinder/Snap copy of a standard device to a VDEV
Figure 2-11 SRM commands
Figure 2-12 ControlCenter family overview
Figure 3-1 SYMCLI mapping component facility
Figure 3-2 Cloning with EMC TimeFinder and Sybase shutdown
Figure 3-3 Cloning with EMC TimeFinder and Sybase quiesce
Figure 3-4 Cloning Sybase with EMC TimeFinder consistent split
Figure 3-5 Cloning Sybase with EMC SRDF consistency groups
Figure 6-1 Database components for Sybase
Figure 6-2 Synchronous replication internals
Figure 6-3 Rolling disaster with multiple production Symmetrix arrays
Figure 6-4 Rolling disaster with SRDF consistency group protection
Figure 6-5 SRDF with multiple source Symmetrix arrays and ConGroup protection
Figure 6-6 SRDF/A replication internals
Figure 6-7 SRDF/AR single-hop replication internals
Figure 6-8 SRDF/AR multihop replication internals
Figure 6-9 Log shipping via dump and load database
Figure 6-10 Log shipping and standby access with quiesce for external dump
Figure 6-11 Sybase Mirror Activator in an SRDF/S environment
Figure 7-1 The performance stack
Figure 7-2 Relationship between host blocksize and IOPS/throughput
Figure 7-3 Output from symstat indicating we have reached the write-pending limit
Figure 7-4 Write-pending count versus write-pending limit
Figure 7-5 RAID 5 (3+1) layout detail
Figure 7-6 Anatomy of a RAID 5 random write
Figure 7-7 Disk performance factors
Figure 8-1 Main ControlCenter Storage Allocation screen
Figure 8-2 Databases on the Sybase Server named losan070
Figure 8-3 Free space allocation for the ControlCenter database
Figure 8-4 Performance View for selected storage devices
Figure 8-5 ECC Administration menu
Figure 8-6 Data Protection menu
Figure 9-1 Sybase Replication Server components
Figure 9-2 Database materialization process
Figure 9-3 Sybase Mirror Activator in an EMC SRDF/S environment
Figure 9-4 Mirror Activator implementation using concurrent SRDF
Figure 9-5 Mirror Activator implementation with an Enterprise Restart consistency group
Figure 9-6 SRDF device configuration for DEV002
Figure 9-7 List the eligible dynamic devices
Figure 9-8 Mirror Activator in SRDF/A configuration
Figure 9-9 Output of symcg show MAER command
Figure 9-10 Status of remote devices
Figure 9-11 Output of symrdf query command
Figure 10-1 Sybase IQ-Multiplex architecture
Figure E-1 Listing of sybhome and c20dg device groups
Figure E-2 Consistent split of all devices in splitfile
Figure E-3 Consistent split with -rdb option

Tables

Table 2-1 SYMCLI base commands
Table 2-2 TimeFinder device type summary
Table 2-3 Data object SRM commands
Table 2-4 Data object mapping commands
Table 2-5 File system SRM command
Table 2-6 File system SRM commands
Table 2-7 SRM statistics command
Table 3-1 Sybase and EMC cloning summary
Table 4-1 Recovery times of disaster recovery and restart technologies
Table 4-2 Backup operation using the -q option
Table 7-1 Large I/O log size bcp test results
Table 9-1 Minimum software requirements
Table 9-2 The Symmetrix device configuration for ConGroup
Table 9-3 EMC SRDF and Sybase Software Matrix
Table F-1 Role of IQ-Multiplex Server for TimeFinder testing


Preface

As part of an effort to improve and enhance the performance and capabilities of its product line, EMC from time to time releases revisions of its hardware and software. Therefore, some functions described in this guide may not be supported by all revisions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

This Solutions Guide documents the EMC and Sybase products and software technology that are used by our joint customer base, and describes how these products can be used together in various computing environments. It serves as a single reference for all Sybase information related to integration, performance, and restart/recovery within an EMC Symmetrix hardware and software environment.

This Solutions Guide does not provide detailed product installation procedures. Please refer to the specific product documentation where necessary. If a product does not function properly or does not function as described in this guide, please contact your EMC representative.

Audience

This Solutions Guide is intended for database and system administrators, systems integrators, and members of EMC Technical Global Services. The content that applies to Sybase in this guide applies to UNIX platforms.


Chapter 1 Sybase ASE (Adaptive Server Enterprise) on UNIX

This chapter presents the following topic:

1.1 Adaptive Server devices and system databases


Sybase Adaptive Server Enterprise (ASE) is designed to support transaction-intensive, mission-critical OLTP, decision support, and mixed-load applications. ASE performs data management and transaction functions independent of client applications and user interface functions. Adaptive Server also manages multiple databases and users, keeps track of the data’s location on disks, maintains the mapping of the logical data description to the physical data storage, and maintains data and procedure caches in memory. Adaptive Server uses auxiliary programs to perform dedicated tasks, such as Backup Server, which manages database load, dump, backup, and restoration activities. ASE Historical Server obtains performance data from Monitor Server and saves the data in files for later use. XP Server stores the extended stored procedures (ESPs) that allow Adaptive Server to run operating system and user-defined commands.

Figure 1-1 Architectural representation of the ASE server environment

Figure 1-1 represents the architecture of a Sybase ASE server environment. In Figure 1-1, the Open Client connection is usually managed by ODBC and provides access to every database that resides on the Sybase server.

1.1 Adaptive Server devices and system databases

Devices are files or portions of a disk that are used to store databases and database objects. Users can initialize devices using raw disk partitions or file system devices (a brief example follows the lists below). When Adaptive Server is installed, it automatically creates these system databases:

♦ The master database; maintains all metadata for a Sybase instance


♦ The model database; used as a template for new user databases

♦ The system stored procedure database, sybsystemprocs

♦ The temporary database, tempdb

Optional components are:

♦ The auditing database, sybsecurity

♦ The two-phase commit transaction database, sybsystemdb

♦ The sample and testing databases, pubs2 and pubs3

♦ The database used for consistency checking, dbccdb
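As noted above, user-defined devices are initialized with the disk init command before databases are created on them. The following is a minimal, hypothetical sketch; the logical name, physical path, and sizes are illustrative, and the quoted size-unit syntax assumes ASE 12.5 or later:

disk init name = "data_dev1",
    physname = "/dev/rdsk/c1t0d0s4",
    size = "1G"

create database testdb on data_dev1 = "500M"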

1.1.1 The master device

The Sybase master device is configured on a separate logical volume and contains the following databases: master, model, tempdb.

♦ Master — Controls the operation of Adaptive Server as a whole and stores information about all users, user databases, devices, objects, and system table entries. The master database is contained entirely on the master device and cannot be expanded onto any other device.

♦ Model — Provides a template for new user databases. The model database contains required system tables, which are copied into a new user database with the create database command.

♦ Tempdb — Serves as the work area for Adaptive Server. Each time Adaptive Server is started, the tempdb database is cleared and rebuilt from the model database.

1.1.2 Client/server communication

Adaptive Server communicates with other Adaptive Servers, Open Server applications (such as the Backup Server), and client software on the network. Clients can talk to one or more servers, and servers can communicate with other servers by remote procedure calls.

For Sybase products to interact with one another, each product needs to know where the others reside on the network. The names and addresses of every known server are listed in a directory service. This information can be stored in two different ways: in an interfaces file, named interfaces on UNIX platforms and located in the $SYBASE installation directory, or on an LDAP server.
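For illustration, a typical interfaces file entry on a UNIX platform has the following form; the server name, host name, and port number are hypothetical, and the exact entry format varies by platform (Solaris historically uses tli-style entries):

SYBPROD
    master tcp ether dbhost01 5000
    query tcp ether dbhost01 5000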

The Sybase, Inc. product documentation Sybase Configuration Guide provides more information on topics related to Adaptive Server, such as configuring an Adaptive Server, starting and stopping servers, and configuring the operating system.


1.1.3 Backup Server

The Sybase ASE Backup Server utility ships with the standard software but runs and operates as a separate component, with its own installation process. Backing up and restoring a database through Backup Server uses the dump and load commands. The Backup Server commands are performed by an Open Server program (sybackup.exe), which runs on the same machine as Adaptive Server. Backups can be performed locally, or over the network using a Sybase Backup Server on a remote computer and another on the local computer.

Some features of Sybase Backup Server include:

♦ Creating and loading from striped dumps. Dump striping allows up to 32 backup devices in parallel. This splits the database into approximately equal portions and backs up each portion to a separate device.

♦ Performing dumps and loads over the network to a backup server running on another machine.

♦ Dumping several databases or transaction logs onto a single tape.

♦ Loading a single file from a tape that contains many databases or log dumps, and platform-specific tape handling options.

A dump database command makes a backup copy of the entire database, including the transaction log, in a form that can be restored with load database. The dump transaction command makes a copy of a transaction log and removes the inactive portion. The inactive portion of the database log file contains transactions that have been committed to disk. The load database command loads a backup copy of a user database, including its transaction log. Finally, load transaction loads a backup copy of the transaction log.

An example of a full database backup and restore using the Sybase Backup Server is as follows:

dump database database_name to dump_device_1

load database database_name from dump_device_1

An example of an incremental database backup and restore using the Sybase Backup Server is as follows:

dump transaction database_name to dumptran_device_1

load transaction database_name from dumptran_device_1
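Dump striping, noted in the feature list above, splits a dump across multiple devices in parallel. A hypothetical three-way striped dump and its corresponding load (device names are illustrative):

dump database database_name to dump_device_1
    stripe on dump_device_2
    stripe on dump_device_3

load database database_name from dump_device_1
    stripe on dump_device_2
    stripe on dump_device_3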

The use of Sybase Backup Server for backup and recovery is discussed in detail in Chapter 4 and Chapter 5 of this document.


Chapter 2 EMC Foundation Products

This chapter presents these topics:

2.1 EMC Symmetrix DMX
2.2 EMC Solutions Enabler base management
2.3 Change Tracker
2.4 EMC Symmetrix Remote Data Facility
2.5 EMC TimeFinder
2.6 EMC Storage Resource Management
2.7 EMC ControlCenter
2.8 EMC PowerPath
2.9 EMC Replication Manager


EMC provides many hardware and software products that support application environments on Symmetrix arrays. This chapter provides a technical overview of the EMC products that were used and/or tested with the Sybase products discussed in this Solutions Guide:

♦ EMC Symmetrix — EMC offers an extensive product line of high-end storage solutions targeted to meet the requirements of customers’ mission-critical databases and applications. The Symmetrix product line includes the DMX Direct Matrix Architecture® series, and the 8000, 5000, and 3000 series family. EMC Symmetrix is a fully redundant, high-availability storage processor, providing nondisruptive component replacements and code upgrades. The Symmetrix system features high levels of performance, data integrity, reliability, and availability.

♦ EMC Solutions Enabler — Solutions Enabler is a package that contains the SYMAPI runtime libraries and the SYMCLI command line interface. SYMAPI provides the interface to the Symmetrix operating system. SYMCLI commands can be invoked via the command line or within scripts. These commands can be used to monitor device configuration and status, and to perform control operations on devices and data objects within a storage complex. The target storage environments are typically Symmetrix-based; however, CLARiiON® arrays can also be managed via the SYMCLI SRM component.

♦ EMC Symmetrix Remote Data Facility (SRDF®) — SRDF is a business continuity software solution that replicates and maintains a mirror image of data at the storage block level in a remote Symmetrix® system. The SRDF component extends the basic SYMCLI command set of Solutions Enabler to include commands that specifically manage SRDF.

• EMC SRDF consistency groups — An SRDF consistency group is a collection of related Symmetrix devices that are configured to act in unison to maintain data integrity. The devices in consistency groups can be spread across multiple Symmetrix systems.

♦ EMC TimeFinder® — TimeFinder is a family of products that enable LUN-based replication within a single Symmetrix array. Data is copied from Symmetrix devices using array-based resources without using host CPU or I/O. The source Symmetrix devices remain online for regular I/O operations while the copies are created. The TimeFinder family has three separate and distinct software products, TimeFinder/Mirror, TimeFinder/Clone, and TimeFinder/Snap:

• TimeFinder/Mirror allows users to configure special devices, called business continuance volumes (BCVs), to create a mirror image of Symmetrix standard devices. Using BCVs, TimeFinder creates a point-in-time copy of data that can be repurposed. The TimeFinder/Mirror component extends the basic SYMCLI command set of Solutions Enabler to include commands that specifically manage Symmetrix BCVs and standard devices (a brief command sketch follows this product list).

• TimeFinder/Clone allows users to make copies of data simultaneously on multiple target devices from a single source device. The data is available to a target’s host immediately upon activation, even if the copy process has not completed. Data may be copied from a single source device to as many as 16 target devices. A source device can be either a Symmetrix standard device or a TimeFinder BCV device. A target device can be any volume in the array of equal or greater size than the source device with which it is paired.

• TimeFinder/Snap allows users to configure special devices in the Symmetrix DMX array called virtual devices (VDEVs) and save area devices (SAVDEVs). These devices can be used to make pointer-based, space-saving copies of data simultaneously on multiple target devices from a single source device. The data is available to a target’s host immediately upon creation. Data may be copied from a single source device to as many as 15 VDEVs. A source device can be either a Symmetrix standard device or a TimeFinder BCV device. A target device is a VDEV. A SAVDEV is a special device without a host address that is used to hold the changing contents of the source or target device.

♦ EMC Open Replicator — Open Replicator provides block-level data replication and data migration between a Symmetrix DMX™ and secondary heterogeneous storage environments over a SAN. Open Replicator is array-based replication software that runs exclusively on a Symmetrix DMX that enables copy operations independent of the host, operating system, and data type.

♦ EMC Change Tracker — EMC Symmetrix Change Tracker software measures changes to data on a Symmetrix volume or group of volumes. Change Tracker software is often used as a planning tool in the analysis and design of configurations that use the EMC TimeFinder or SRDF components to store data at remote sites.

♦ Solutions Enabler Storage Resource Management (SRM) Component — The SRM component extends the basic SYMCLI command set of Solutions Enabler to include commands that allow users to systematically find and examine attributes of various objects on the host, within a specified relational database, or in the EMC enterprise storage. The SRM commands provide mapping support for relational databases, file systems, logical volumes and volume groups, as well as performance statistics.

♦ EMC ControlCenter® — EMC ControlCenter is an integrated family of software products that enables users to discover, monitor, automate, provision, and report on storage area networks, host resources, and storage resources across the entire information environment.

♦ EMC PowerPath® — PowerPath is host-based software that provides I/O path management. PowerPath operates with several storage systems and several enterprise operating systems, and provides failover and load balancing that are transparent to the host application and database.

♦ EMC SRDF/Cluster Enabler — SRDF/Cluster Enabler (SRDF/CE) for MSCS is one in a family of automated business restart solutions that integrates EMC SRDF with cluster technology. SRDF/CE for MSCS provides disaster recovery protection in geographically distributed clusters.

♦ Connectrix® — Connectrix is a Fibre Channel director or switch that moves information throughout the SAN environment, enabling the networked storage solution.

♦ EMC Replication Manager — EMC Replication Manager is software that creates replicas of mission-critical databases on disk arrays rather than traditional tape media. Replication Manager can create a disk replica of data simply, quickly, and automatically. It automates all tasks and procedures related to data replication, and reduces the amount of time, resources, and expertise involved with integrating and managing disk-based replication technologies.
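As a preview of the TimeFinder/Mirror workflow sketched in the product list above (TimeFinder is described in detail in section 2.5), the following minimal, hypothetical sequence fully synchronizes the BCVs in a device group and then splits off a point-in-time copy; the device group name is illustrative:

symmir -g sybprod_dg establish -full -noprompt
symmir -g sybprod_dg verify -synched -i 60
symmir -g sybprod_dg split -noprompt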

2.1 EMC Symmetrix DMX

All Symmetrix systems provide advanced data replication capabilities, full mainframe and open systems support, and flexible connectivity options, including Fibre Channel, FICON, ESCON, Gigabit Ethernet, and iSCSI.

Interoperability between Symmetrix storage systems enables current customers to migrate their storage solutions from one generation to the next, protecting their investment even as their storage demands expand.

Symmetrix DMX enhanced cache director technology allows configuration of up to 256 GB of cache. The cache can be logically divided into 32 independent regions, providing up to 32 concurrent 500 MB/s data transfers.

The Symmetrix on-board data integrity features include:

♦ Continuous cache and on-disk data integrity checking and error detection/correction.

♦ Fault isolation.

♦ Nondisruptive hardware and software upgrades.

♦ Automatic diagnostics and phone-home capabilities.

In addition to the models listed previously, for environments that require ultra-high performance, EMC provides DMX1000-P and DMX2000-P systems. These two storage systems are built for extra speed to operate in extreme performance-intensive environments such as decision support, data warehousing, and other high-volume, back-end sequential processing applications.

At the software level, advanced integrity features ensure information is always protected and available. By choosing a mix of RAID 1 (mirroring), RAID 1/0, and high performance RAID 5 (3+1 and 7+1) protection, users have the flexibility to choose the protection level most appropriate to the value and performance requirements of their information. The Symmetrix DMX is EMC’s latest generation of high-end storage solutions.

From the perspective of the host operating system, a Symmetrix system appears to be multiple physical devices connected through one or more I/O controllers. The host operating system addresses each of these devices using a physical device name. Each physical device includes attributes such as vendor ID, product ID, revision level, and serial ID. The host physical device maps to a Symmetrix device. In turn, the Symmetrix device is a virtual representation of a section of the physical disk called a hypervolume.
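From the host, these attributes and the host-to-Symmetrix device mapping can be viewed with the SYMCLI inquiry commands; a minimal sketch (output format varies by platform and Solutions Enabler release):

syminq
sympd list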

2.2 EMC Solutions Enabler base management

The EMC Solutions Enabler kit contains all the base management software that provides a host with SYMAPI-shared libraries and the basic Symmetrix command line interface (SYMCLI). Other optional subcomponents in the Solutions Enabler (SYMCLI) series enable users to extend functionality of the Symmetrix systems. Three principal subcomponents are:

♦ Solutions Enabler SYMCLI SRDF, SRDF/CG, and SRDF/A.

♦ Solutions Enabler SYMCLI TimeFinder/Mirror, TimeFinder/CG, TimeFinder/Snap, TimeFinder/Clone.

♦ Solutions Enabler SYMCLI Storage Resource Management (SRM).

These components are discussed later in this chapter.

SYMCLI resides on a host system to monitor and perform control operations on Symmetrix storage units. SYMCLI commands are invoked from the host operating system command line or via scripts. SYMCLI commands invoke low-level channel commands to specialized devices (called gatekeepers) on the Symmetrix system. Gatekeepers are very small devices carved from disks in the Symmetrix that act as SCSI targets for the SYMCLI commands.

SYMCLI commands are used in single command line entries or in scripts to monitor and perform control operations on devices and data objects as part of managing the storage complex. It also monitors device configuration and status of devices that make up the storage environment. To reduce the number of inquiries from the host to the Symmetrix units, configuration and status information is maintained in a host database file.
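For example, the host database file is built, and later refreshed, by scanning for attached Symmetrix units; a minimal sketch:

symcfg discover
symcfg list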

SYMCLI base commands discussed in this document are listed in Table 2-1.

Table 2-1 SYMCLI base commands

symdg    Performs operations on a device group (dg)
  create     Creates an empty device group
  delete     Deletes a device group
  rename     Renames a device group
  release    Releases a device external lock associated with all devices in a device group
  list       Displays a list of all device groups known to this host
  show       Shows detailed information about a device group and any gatekeeper or BCV devices associated with the device group

symcg    Performs operations on a composite group (cg)
  create     Creates an empty composite group
  add        Adds a device to a composite group
  remove     Removes a device from a composite group
  delete     Deletes a composite group
  rename     Renames a composite group
  release    Releases a device external lock associated with all devices in a composite group
  hold       Holds devices in a composite group
  unhold     Unholds devices in a composite group
  list       Displays a list of all composite groups known to this host
  show       Shows detailed information about a composite group, and any gatekeeper or BCV devices associated with the composite group

symld    Performs operations on a device in a device group (dg)
  add        Adds devices to a device group and assigns the device a logical name
  list       Lists all devices in a device group and any associated BCV devices
  remove     Removes a device from a device group
  rename     Renames a device in the device group
  show       Shows detailed information about a device in a device group

symbcv   Performs support operations on BCV pairs
  list                Lists BCV devices
  associate           Associates BCV devices to a device group; required to perform operations on the BCV device
  disassociate        Disassociates BCV devices from a device group
  associate -rdf      Associates remotely attached BCV devices to an RDF device group
  disassociate -rdf   Disassociates remotely attached BCV devices from an RDF device group

2.3 Change Tracker

The EMC Symmetrix Change Tracker software is also part of the base Solutions Enabler SYMCLI management offering. Change Tracker commands are used to measure changes to data on a Symmetrix volume or group of volumes. Change Tracker functionality is often used as a planning tool in the analysis and design of configurations that use the EMC SRDF and TimeFinder components to create copies of production data.

The Change Tracker command (symchg) is used to monitor the amount of changes to a group of hypervolumes. The command timestamps and marks specific volumes for tracking and maintains a bitmap to record which tracks have changed on those volumes. The bitmap can be interrogated to gain an understanding of how the data on the volume changes over time and to assess the locality of reference of applications.
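A typical planning workflow is to mark the volumes in a group, let the application run for a representative period, and then view the accumulated change statistics. The sketch below assumes an existing device group named device_group; verify the exact symchg syntax against the Solutions Enabler documentation for the installed version:

symchg -g device_group mark
symchg -g device_group view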

2.4 EMC Symmetrix Remote Data Facility

The Symmetrix Remote Data Facility (SRDF) component of EMC Solutions Enabler extends the basic SYMCLI command set to enable users to manage SRDF. SRDF is a business continuity solution that provides a host-independent, mirrored data storage solution for duplicating production site data to one or more physically separated target Symmetrix systems. In basic terms, SRDF is a configuration of multiple Symmetrix units whose purpose is to maintain multiple copies of logical volume data in more than one location.

SRDF replicates production or primary (source) site data to a secondary (target) site transparently to users, applications, databases, and host processors. The local SRDF device, known as the source (R1) device, is configured in a partner relationship with a remote target (R2) device, forming an SRDF pair. While the R2 device is mirrored with the R1 device, the R2 device is write-disabled to the remote host. After the R2 device synchronizes with its R1 device, the R2 device can be split from the R1 device at any time, making the R2 device fully accessible again to its host. After the split, the target (R2) device contains valid data and is available for performing business continuity tasks through its original device address.

SRDF requires configuration of specific source Symmetrix volumes (R1) to be mirrored to target Symmetrix volumes (R2). If the primary site is no longer able to continue processing when SRDF is operating in synchronous mode, data at the secondary site is current up to the last I/O transaction. When primary systems are down, SRDF enables fast failover to the secondary copy of the data so that critical information becomes available in minutes. Business operations and related applications may resume full functionality with minimal interruption.

Figure 2-1 on page 2-7 illustrates a basic SRDF configuration. In the figure, connectivity between the two Symmetrix systems is provided using ESCON, Fibre Channel, or Gigabit Ethernet. The connection between the R1 and R2 volumes is through a logical grouping of devices called an RA group. The RA group is independent of the device and composite groups defined and discussed in section 2.4.3.

Figure 2-1 Basic synchronous SRDF configuration

2.4.1 SRDF benefits

SRDF offers the following features and benefits:

♦ High data availability

♦ High performance

♦ Flexible configurations

♦ Host and application software transparency

♦ Automatic recovery from a component or link failure

♦ Significantly reduced recovery time after a disaster

♦ Increased integrity of recovery procedures

♦ Reduced backup and recovery costs

♦ Reduced disaster recovery complexity, planning, testing, and such

♦ Business continuance across and between multiple databases on multiple servers and Symmetrix systems

2.4.2 SRDF modes of operation

SRDF currently supports the following modes of operation:

♦ Synchronous mode (SRDF/S) provides realtime mirroring of data between the source Symmetrix system(s) and the target Symmetrix system(s). Data is written simultaneously to the cache of both systems in real time before the application I/O is completed, thus ensuring the highest possible data availability. Data must be successfully stored in both the local and remote Symmetrix units before an acknowledgment is sent to the local host. This mode is used mainly for metropolitan area network distances less than 200 km.

♦ Asynchronous mode (SRDF/A) maintains a dependent-write consistent copy of data at all times across any distance with no host application impact. Customers wanting to replicate data across long distances historically have had limited options. SRDF/A delivers high-performance, extended-distance replication and reduced telecommunication costs while leveraging existing management capabilities with no host performance impact.

♦ Adaptive copy mode transfers data from source devices to target devices regardless of order or consistency, and without host performance impact. This is especially useful when transferring large amounts of data during data center migrations, consolidations, and in data mobility environments. Adaptive copy mode is the data movement mechanism of the Symmetrix Automated Replication solution.

2.4.3 SRDF device and composite groups

Applications running on Symmetrix arrays normally involve a number of Symmetrix devices. Therefore, any Symmetrix operation must ensure all related devices are operated upon as a logical group. Defining device or composite groups achieves this.

A device group is a user-defined group of devices that SYMCLI commands can execute upon atomically. Device groups are limited to a single Symmetrix array and RA group. A composite group, on the other hand, can span multiple Symmetrix arrays and RA groups. The device or composite group type may contain R1 or R2 devices and may contain various device lists for standard, BCV, virtual, and remote BCV devices. The symdg and symcg commands are used to create and manage device and composite groups, respectively.
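As an illustrative sketch, the following commands create an RDF1 device group and add a device to it; the group name and device number are example values only:

symdg create device_group -type RDF1
symld -g device_group add dev 0040

A composite group spanning multiple arrays is built analogously with the symcg create and add arguments listed in Table 2-1.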

2.4.4 SRDF consistency groups

An SRDF consistency group is a collection of devices defined by a composite group enabled for consistency. Its purpose is to protect data integrity for applications that span multiple RA groups and/or multiple Symmetrix arrays. The protected applications may comprise multiple heterogeneous data resource managers across multiple host operating systems.

An SRDF consistency group uses PowerPath support to provide synchronous disaster restart with zero data loss. Disaster restart solutions that use consistency groups provide remote restart with short recovery time objectives. Zero data loss means that all transactions completed before the start of the disaster are available at the target.
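As a sketch, a consistency-protected composite group might be created and enabled as follows; the group name is illustrative, and the -rdf_consistency option and enable action should be verified against the Solutions Enabler documentation for the installed version:

symcg create prod_cg -type RDF1 -rdf_consistency
symcg -cg prod_cg enable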

When the amount of data for an application becomes very large, the time and resources required for host-based software to protect, back up, or run decision-support queries on these databases become critical factors. The time required to quiesce or shut down the application for offline backup is no longer acceptable. SRDF consistency groups allow users to remotely mirror the largest data environments and automatically split off dependent-write consistent, restartable copies of applications in seconds without interruption to online service.

A consistency group is a composite group of SRDF devices (R1 or R2) that act in unison to maintain the integrity of applications distributed across multiple Symmetrix units or multiple RA groups within a single Symmetrix. If a source (R1) device in the consistency group cannot propagate data to its corresponding target (R2) device, EMC software suspends data propagation from all R1 devices in the consistency group, halting all data flow to the R2 targets. This suspension, called tripping the consistency group, ensures that a dependent-write consistent R2 copy of the database exists as of the point in time that the consistency group tripped.

Tripping a consistency group can occur either automatically or manually. Scenarios in which an automatic trip would occur include:

♦ One or more R1 devices cannot propagate changes to their corresponding R2 devices.

♦ The R2 device fails.

♦ The SRDF directors on the R1 side or R2 side fail.

In an automatic trip, the Symmetrix unit completes the write to the R1 device, but indicates that the write did not propagate to the R2 device. EMC software intercepts the I/O and instructs the Symmetrix to suspend all R1 source devices in the consistency group from propagating any further writes to the R2 side. Once the suspension is complete, writes to all of the R1 devices in the consistency group continue normally, but they are not propagated to the target side until normal SRDF mirroring resumes.

An explicit trip occurs when the command symrdf -cg suspend or split is invoked. Suspending or splitting the consistency group creates an on-demand, restartable copy of the database at the R2 target site. BCV devices that are synchronized with the R2 devices are then split after the consistency group is tripped, creating a second dependent-write consistent copy of the data. During the explicit trip, SYMCLI issues the command to create the dependent-write consistent copy, but may require assistance from PowerPath if I/O is received on one or more R1 devices, or if the SYMCLI commands issued are abnormally terminated before the explicit trip.
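For example, an explicit trip of a consistency group named composite_group (an illustrative name) can be initiated with:

symrdf -cg composite_group suspend -noprompt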

An EMC consistency group maintains consistency within applications spread across multiple Symmetrix units in an SRDF configuration, by monitoring data propagation from the source (R1) devices in a consistency group to their corresponding target (R2) devices as shown in Figure 2-2 on page 2-10. Consistency groups provide data integrity protection during a rolling disaster.

Figure 2-2 on page 2-10 depicts a dependent-write I/O sequence where a predecessor log write happens before a page flush from a database buffer pool. The log device and data device are on different Symmetrix arrays with different replication paths. The figure demonstrates how rolling disasters can be prevented using EMC consistency group technology.

Figure 2-2 SRDF consistency group

Consistency group protection is defined containing volumes X, Y, and Z on the source Symmetrix. The consistency group definition must contain all of the devices that need to maintain dependent-write consistency, and it must reside on all participating hosts involved in issuing I/O to these devices. A mix of CKD (mainframe) and FBA (UNIX/Windows) devices can be logically grouped together. In some cases, the entire processing environment may be defined in a consistency group to ensure dependent-write consistency.

The rolling disaster described previously begins, preventing the replication of changes from volume Z to the remote site.

Since the predecessor log write to volume Z cannot be propagated to the remote Symmetrix system, a consistency group trip occurs.

A ConGroup trip holds the write that could not be replicated, along with all of the writes to the logically grouped devices. The writes are held by PowerPath on UNIX and Windows hosts, and by IOS on mainframe hosts, long enough to issue two I/Os to all of the Symmetrix arrays involved in the consistency group. The first I/O changes the state of the devices to a suspend-pending state.

The second I/O performs the suspend actions on the R1/R2 relationships for the logically grouped devices, immediately disabling all replication to the remote site. This allows other devices outside of the group to continue replicating, provided the communication links are available. After the relationship is suspended, the completion of the predecessor write is acknowledged back to the issuing host. Furthermore, all I/Os that were held during the consistency group trip operation are released.

After the second I/O per Symmetrix completes, the I/O is released, allowing the predecessor log write to complete to the host. The dependent data write is issued by the DBMS and arrives at X but is not replicated to the R2(X).

When a complete failure occurs from this rolling disaster, the dependent-write consistency at the remote site is preserved. If a complete disaster does not occur and the failed links are activated again, consistency group replication can be resumed. It is recommended to create a copy of the dependent-write consistent image while the resume takes place. Once the SRDF process reaches synchronization, a dependent-write consistent copy again exists at the remote site.

2.4.5 SRDF terminology

This section describes various terms related to SRDF operations.

2.4.5.1 Suspend and resume operations

Practical uses of suspend and resume operations usually involve unplanned situations in which an immediate suspension of I/O between the R1 and R2 devices over the SRDF links is desired. In this way, data propagation problems can be stopped. When suspend is used with consistency groups, immediate backups can be performed off the R2s without affecting I/O from the local host application. I/O can then be resumed between the R1 and R2 and return to normal operation.

2.4.5.2 Establish and split operations

The establish and split operations are normally used in planned situations in which use of the R2 copy of the data is desired without interfering with normal write operations to the R1 device. Splitting a point-in-time copy of data allows access to the data on the R2 device for various business continuity tasks. The ease of splitting SRDF pairs to provide exact database copies makes it convenient to perform scheduled backup operations, reporting operations, or new application testing from the target Symmetrix data while normal processing continues on the source Symmetrix system.

The R2 copy can also be used to test disaster recovery plans without manually intensive recovery drills, complex procedures, and application service interruptions. Upgrades to new versions can be tested or changes to actual code can be made without affecting the online production server. For example, modified server code can be run on the R2 copy of the database until the upgraded code runs with no errors before upgrading the production server.

In cases where an absolute realtime copy of the production data is not essential, users may choose to split the SRDF pair periodically and use the R2 copy for queries and report generation. The SRDF pair can be reestablished periodically to provide incremental updating of data on the R2 device. The ability to refresh the R2 device periodically provides the latest information for data processing and reporting.

2.4.5.3 Failover and failback operations

Practical uses of failover and failback operations usually involve the need to switch business operations from the production site to a remote site (failover) or the opposite (failback). Once failover occurs, normal operations continue using the remote (R2) copy of synchronized application data. Scheduled maintenance at the production site is one example of where failover to the R2 site might be needed.

Testing of disaster recovery plans is the primary reason to temporarily fail over to a remote site. Traditional disaster recovery routines involve customized software and complex procedures. Offsite media must be either electronically transmitted or physically shipped to the recovery site. Time-consuming restores and the application of logs usually follow. SRDF failover/failback operations significantly reduce the recovery time by incrementally updating only the specific tracks that have changed; this accomplishes in minutes what might take hours for a complete load from dumped database volumes.

2.4.5.4 Update operation

The update operation allows users to resynchronize the R1s after a failover while continuing to run application and database services on the R2s. This function helps reduce the amount of time that a failback to the R1 side takes. The update operation is a subset of the failover/failback functionality. Practical uses of the R1 update operation usually involve situations in which the R1 becomes almost synchronized with the R2 data before a failback, while the R2 side is still online to its host. The -until option, when used with update, specifies the target number of invalid tracks that are allowed to be out of sync before resynchronization to the R1 completes.
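As a sketch, the following command (the group name and track count are illustrative) updates the R1 devices until fewer than 1,000 invalid tracks remain out of sync:

symrdf -g device_group update -until 1000 -noprompt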

2.4.5.5 Concurrent SRDF

Concurrent SRDF means having two target R2 devices configured as concurrent mirrors of one source R1 device. Using a concurrent SRDF pair allows the creation of two copies of the same data at two remote locations. When the two R2 devices are split from their source R1 device, each target site copy of the application can be accessed independently.

2.4.5.6 R1/R2 swap

Swapping R1/R2 devices of an SRDF pair causes the source R1 device to become a target R2 device and vice versa. Swapping SRDF devices allows the R2 site to take over operations while retaining a remote mirror on the original source site. Swapping is especially useful after failing over an application from the R1 site to the R2 site. SRDF swapping is available with Enginuity™ version 5567 or later.

2.4.5.7 Data mobility

Data mobility is an SRDF configuration that restricts SRDF devices to operating only in adaptive copy mode. This is a lower-cost licensing option that is typically used for data migrations. It allows data to be transferred asynchronously from source to target, and is not designed as a solution for DR requirements unless used in combination with TimeFinder.

2.4.5.8 Dynamic SRDF

Dynamic SRDF allows the creation of SRDF pairs from non-SRDF devices while the Symmetrix unit is in operation. Historically, source and target SRDF device pairing has been static and changes required assistance from EMC personnel. However, beginning with the SRDF component of EMC Solutions Enabler version 5.0 running on Symmetrix units using Enginuity version 5568, users can use SRDF-capable, non-SRDF devices in creating and synchronizing SRDF pairs. This feature provides greater flexibility in deciding where to copy protected data.

Beginning with Solutions Enabler version 5.2 running on Symmetrix units using Enginuity version 5669, dynamic RA groups can be created in an SRDF switched fabric environment. An RA group represents a logical connection between two Symmetrix units. Historically, RA groups were limited to those static RA groups defined at configuration time. However, RA groups can now be created, modified, and deleted while the Symmetrix unit is in operation. This feature provides greater flexibility in forming SRDF-pair-associated links.
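As a sketch, a dynamic SRDF pair might be created from a device pairing file, where each line of the file names a local device and its remote partner; the file name, Symmetrix ID, and RA group number are illustrative values:

symrdf createpair -file pairs.txt -sid 1234 -rdfg 10 -type RDF1 -establish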

2.4.6 SRDF control operations

This section describes typical control operations that can be performed by the Solutions Enabler symrdf command.

Solutions Enabler SYMCLI SRDF commands perform the following basic control operations on SRDF devices:

♦ Establish synchronizes an SRDF pair by initiating a data copy from the source (R1) side to the target (R2) side. This operation can be a full or incremental establish. Changes on the R2 volumes are discarded by this process.

♦ Restore resynchronizes a data copy from the target (R2) side to the source (R1) side. This operation can be a full or incremental restore. Changes on the R1 volumes are discarded by this process.

♦ Split stops mirroring for the SRDF pair(s) in a device group and write-enables the R2 devices.

♦ Swap exchanges the source (R1) and target (R2) designations on the source and target volumes.

♦ Failover switches data processing from the source (R1) side to the target (R2) side. The source side volumes (R1), if still available, are write-disabled.

♦ Failback switches data processing from the target (R2) side to the source (R1) side. The target side volumes (R2), if still available, are write-disabled.

2.4.6.1 Establishing an SRDF pair

Establishing an SRDF pair initiates remote mirroring—the copying of data from the source (R1) device to the target (R2) device. SRDF pairs come into existence in two different ways:

♦ At configuration time through the pairing of SRDF devices. This is a static pairing configuration discussed earlier.

♦ At any time after configuration, through dynamic pairing, in which SRDF pairs are created on demand.

A full establish (symrdf establish –full) is typically performed after an SRDF pair is initially configured and connected via the SRDF links. After the first full establish, users can perform an incremental establish, where the R1 device copies to the R2 device only the new data that was updated while the relationship was split or suspended.

To initiate an establish operation on all SRDF pairs in a device or composite group, all pairs must be in the split or suspended state. The symrdf query command is used to check the state of SRDF pairs in a device or composite group.
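For example, the following command displays the pair states of all SRDF pairs in the device group named device_group:

symrdf -g device_group query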

When the establish operation is initiated, the system write-disables the R2 device to its host and merges the track tables. The merge creates a bitmap of the tracks that need to be copied to the target volumes, discarding the changes on the target volumes. When the establish operation is complete, the SRDF pair is in the synchronized state. The R1 device and R2 device contain identical data, and continue to do so until interrupted by administrative command or unplanned disruption. Figure 2-3 on page 2-15 shows SRDF establish and restore operations.

Figure 2-3 SRDF establish and restore control operations

The establish operation may be initiated by any host connected to either Symmetrix array, provided that an appropriate device group has been built on that host. The following command initiates an incremental establish operation for all SRDF pairs in the device group named device_group:

symrdf -g device_group establish -noprompt

2.4.6.2 Splitting an SRDF pair

When read/write access to a target (R2) device is necessary, the SRDF pair can be split. When the split completes, the target host can access the R2 device for write operations. The R2 device contains valid data and is available for business continuity tasks or restoring data to the R1 device if there is a loss of data on that device.

While an SRDF pair is in the split state, local I/O to the R1 device can still occur. These updates are not propagated to the R2 device immediately. Changes on each Symmetrix system are tracked through bitmaps and are reconciled when normal SRDF mirroring operations are resumed. To initiate a split, an SRDF pair must already be in one of the following states:

♦ Synchronized

♦ Suspended

♦ R1 updated

♦ SyncInProg (if the -symforce option is specified for the split)

The split operation may be initiated from either host. The following command initiates a split operation on all SRDF pairs in the device group named device_group:

symrdf -g device_group split -noprompt

The symrdf split command provides exactly the same functionality as the symrdf suspend and symrdf rw_enable R2 commands together. Furthermore, the split and suspend operations have exactly the same consistency characteristics as SRDF consistency groups. Therefore, when SRDF pairs are in a single device group, users can split the SRDF pairs in the device group as shown previously and have restartable copies on the R2 devices. If the application data spans multiple Symmetrix units or multiple RA groups, include SRDF pairs in a consistency group to achieve the same results.

2.4.6.3 Restoring an SRDF pair

When the target (R2) data must be copied back to the source (R1) device, the SRDF restore command is used (Figure 2-3 on page 2-15). After an SRDF pair is split, the R2 device contains valid data and is available for business continuance tasks (such as running a new application) or restoring data to the R1 device. Moreover, if the results of running a new application on the R2 device need to be preserved, moving the changed data and new application to the R1 device is another option.

Users may perform a full or incremental restore. A full restore operation copies the entire contents of the R2 device to the R1 device. An incremental restore operation is much quicker because it copies only new data that was updated on the R2 device while the SRDF pair was split. Any tracks on the R1 device that changed while the SRDF pair was split are replaced with the corresponding tracks on the R2 device.

To initiate a restore, an SRDF pair must already be in the split state. The restore operation can be initiated from either host. The following command initiates an incremental restore operation on all SRDF pairs in the device group named device_group (add the –full option for a full restore):

symrdf -g device_group restore -noprompt

The restore operation is complete when the R1 and R2 devices contain identical data. The SRDF pair is then in a synchronized state and may be reestablished by initiating the symrdf establish command.

2.4.7 Failover and failback operations

Having a synchronized SRDF pair allows users to switch data processing operations from the source site to the target site if operations at the source site are disrupted or if downtime must be scheduled for maintenance. This switchover from source to target is called failover. When operations at the source site are back to normal, a failback operation is used to reestablish I/O communications links between source and target, resynchronize the data between the sites, and resume normal operations on the R1 devices as shown in Figure 2-4 on page 2-17.

Figure 2-4 shows the failover and failback operations. The failover and failback operations relocate the processing from the source site to the target site or vice versa. This may or may not imply movement of data.

Figure 2-4 SRDF failover and failback control operations

2.4.7.1 Failover

Scheduled maintenance or storage system problems can disrupt access to production data at the source site. In this case, a failover operation can be initiated from either host to make the R2 device read/write-enabled to its host. Before issuing the failover, all application services on the R1 volumes must be stopped, because the failover operation will make the R1 volumes read-only. The following command initiates a failover on all SRDF pairs in the device group device_group:

symrdf -g device_group failover -noprompt

To initiate a failover, the SRDF pair must already be in one of the following states:

♦ Synchronized

♦ Suspended

♦ R1 updated

♦ Partitioned (when invoking this operation at the target site)

The failover operation:

♦ Suspends data traffic on the SRDF links.

♦ Write-disables the R1 devices.

♦ Write-enables the R2 volumes.

2.4.7.2 Failback

To resume normal operations on the R1 side, a failback (R1 device takeover) operation is initiated. This means read/write operations on the R2 device must be stopped, and read/write operations on the R1 device must be started. When the failback command is initiated, the R2 becomes read-only to its host, while the R1 becomes read/write-enabled to its host. The following command performs a failback operation on all SRDF pairs in the device group device_group:

symrdf -g device_group failback -noprompt

The SRDF pair must already be in one of the following states for the failback operation to succeed:

♦ Failed over

♦ Suspended and write-disabled at the source

♦ Suspended and not ready at the source

♦ R1 Updated

♦ R1 UpdInProg

The failback operation:

♦ Write-enables the R1 devices.

♦ Performs a track table merge to discard changes on the R1s.

♦ Transfers the changes on the R2s.

♦ Resumes traffic on the SRDF links.

♦ Write-disables the R2 volumes.

2.4.8 SRDF/A operations

SRDF Asynchronous (SRDF/A) operations are supported with EMC Solutions Enabler SYMCLI 5.3 and later. Symmetrix DMX arrays running Enginuity 5670 code and later support SRDF/A mode for SRDF devices. Asynchronous mode provides a dependent-write consistent, point-in-time image on the target (R2) devices, which are only slightly behind the source (R1) device. SRDF/A session data is transferred to the remote Symmetrix array in predefined timed cycles or delta sets. This functionality requires an SRDF/A license.

The default configuration for SRDF/A is for the target volumes to be at most two cycles behind the data state of the source volumes. With the default cycle time of 30 seconds, this equates to at most 60 seconds.

SRDF/A provides a long-distance replication solution with minimal impact on performance that preserves data consistency within application data in the specified business process. This level of protection is intended for DR environments that always need a DBMS restartable copy of data at the R2 site. Dependent-write consistency is guaranteed on a delta set boundary. In the event of a disaster at the R1 site or if SRDF links are lost during data transfer, a partial delta set of data will be discarded. This preserves consistency on the R2 with a maximum data loss of two SRDF/A cycles.

2.4.8.1 Enabling SRDF/A

To set the remotely mirrored pairs in the device group device_group to asynchronous mode (SRDF/A), enter:

symrdf -g device_group set mode async -noprompt

2.4.8.2 SRDF/A benefits

SRDF/A mode provides the following features and benefits:

♦ Lowers operational costs for long-distance data replication with application consistency

♦ Promotes efficient link utilization, resulting in lower link bandwidth

♦ Maintains a dependent-write consistent, point-in-time image on the R2 devices at all times

♦ Supports all current SRDF topologies, including point-to-point and switched fabric

♦ Operates at any given distance without adding response time to the R1 host

♦ Includes all hosts and data emulation types supported by the Symmetrix array (such as FBA, CKD, and AS/400)

♦ Minimizes the impact imposed on the back-end directors

♦ Provides an application response time equivalent to writing to local non-SRDF devices

♦ Allows restore, failover, and failback capability between the R1 and R2 sites

2.4.8.3 SRDF/A-capable devices

The following command lists SRDF/A-capable devices:

symrdf list -rdfa

2.4.8.4 SRDF/A session status

When asynchronous mode is set for a group of devices, the SRDF/A capable devices in the group are considered part of the SRDF/A session. To check SRDF/A session status, use either of the following commands:

symdg show device_group
symcg show composite_group

The session status is displayed as active or inactive:

♦ Inactive — The SRDF/A devices are either ready or not ready on the link.

♦ Active — The SRDF/A mode is activated and the SRDF/A session data is currently being transmitted in operational cycles to the R2.
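The SRDF/A state of a device group can also be examined with a query; the sketch below assumes the -rdfa option is available in the installed Solutions Enabler version:

symrdf -g device_group query -rdfa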

2.4.9 EMC SRDF/Cluster Enabler solutions

EMC SRDF/Cluster Enabler (SRDF/CE) for MSCS is an integrated solution that combines SRDF and clustering protection over distance. EMC SRDF/CE provides disaster-tolerant capabilities that enable a cluster to span geographically separated Symmetrix systems. It operates as a software extension (MMC snap-in) to the Microsoft Cluster Service (MSCS).

SRDF/CE achieves this capability by exploiting SRDF disaster restart capabilities. SRDF allows the MSCS cluster to have two identical sets of application data in two different locations. When cluster services are failed over or failed back, SRDF/CE is invoked automatically to perform the SRDF functions necessary to enable the requested operation.

Figure 2-5 on page 2-21 shows the hardware configuration of two four-node, geographically distributed EMC SRDF/CE clusters using bidirectional SRDF.

More information on the EMC SRDF/CE offering is provided in the Integrating EMC SRDF and Cluster Technology for High Availability and Disaster Restart Solutions Guide.

Figure 2-5 Geographically distributed four-node EMC SRDF/CE clusters

2.5 EMC TimeFinder

The SYMCLI TimeFinder component extends the basic SYMCLI command set to include TimeFinder or business continuity commands that allow control operations on device pairs within a local replication environment. This section specifically describes the functionality of the following:

♦ TimeFinder/Mirror — General monitor and control operations for business continuance volumes (BCV)

♦ TimeFinder/CG — Consistency groups

♦ TimeFinder/Clone — Clone copy sessions

♦ TimeFinder/Snap — Snap copy sessions

Commands such as symmir and symbcv perform a wide spectrum of monitor and control operations on standard/BCV device pairs within a TimeFinder/Mirror environment. The TimeFinder/Clone command, symclone, creates a point-in-time copy of a source device on nonstandard device pairs (such as standard/standard, BCV/BCV). The TimeFinder/Snap command, symsnap, creates virtual device copy sessions between a source device and multiple virtual target devices. These virtual devices only store pointers to changed data blocks from the source device, rather than a full copy of the data. Each product requires a specific license for monitoring and control operations.

Configuring and controlling remote BCV pairs requires EMC SRDF business continuity software discussed in section 2.4. The combination of TimeFinder with SRDF provides for multiple local and remote copies of production data.

Figure 2-6 on page 2-22 illustrates application usage for a TimeFinder/Mirror configuration in a Symmetrix system.

Figure 2-6 EMC Symmetrix configured with standard volumes and BCVs

2.5.1 TimeFinder/Mirror establish operations

A BCV device can be fully or incrementally established. After configuration and initialization of a Symmetrix unit, BCV devices contain no data. BCV devices, like standard devices, can have unique host addresses and can be online and ready to the host(s) to which they are connected. A full establish must be used the first time the standard devices are paired with the BCV devices. An incremental establish of a BCV device can be performed to resynchronize any data that has changed on the standard since the last establish.

When BCVs are established, they are inaccessible to any host.

Symmetrix arrays allow up to four mirrors for each logical volume. The mirror positions are commonly designated M1, M2, M3, and M4. An unprotected BCV can be the second, third, or fourth mirror position of the standard device. A host, however, logically views the Symmetrix M1/M2 mirrored devices as a single device.

To assign a BCV as a mirror of a standard Symmetrix device, the symmir establish command is used. One method of establishing a BCV pair is to allow the standard/BCV device-pairing algorithm to arbitrarily create BCV pairs from multiple devices within a device group:

symmir -g device_group establish -full -noprompt

With this method, TimeFinder/Mirror first checks for any attach assignments (specifying a preferred BCV match from among multiple BCVs in a device group). TimeFinder/Mirror then checks if there are any pairing relationships among the devices. If either of these previous conditions exists, TimeFinder/Mirror uses these assignments.
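After initiating the establish, a query verifies which standard/BCV pairs were formed and reports their synchronization state, for example:

symmir -g device_group query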

2.5.2 TimeFinder split operations

Splitting a BCV pair is a TimeFinder/Mirror action that detaches the BCV from its standard device and makes the BCV ready for host access. When splitting a BCV, the system must perform housekeeping tasks that may require a few milliseconds on a busy Symmetrix array. These tasks involve a series of steps that result in separation of the BCV from its paired standard:

1. I/O is suspended briefly to the standard device.

2. Write-pending tracks for the standard device that have not yet been written out to the BCV are duplicated in cache to be written to the BCV.

3. The BCV is split from the standard device.

4. The BCV device status is changed to ready.

2.5.2.1 Regular split

A regular split is the type of split that has existed for TimeFinder/Mirror since its inception. With a regular split (before Enginuity version 5568), I/O activity from the production hosts to a standard volume was not accepted until it was split from its BCV pair. Therefore, applications attempting to access the standard or the BCV would experience a short wait during a regular split. Once the split was complete, no further overhead was incurred.

Beginning with Enginuity version 5568, any split operation is an instant split. A regular split is still valid for earlier versions and for current applications that perform regular split operations. However, current applications that perform regular splits with Enginuity version 5568 actually perform an instant split.

By specifying the –instant flag on the command line, an instant split with Enginuity 5x66 and 5x67 can be performed. Since 5568, this option is no longer required because instant split mode has become the default behavior.

2.5.2.2 Instant split

An instant split shortens the wait period during a split by dividing the process into a foreground split and a background split. During an instant split, the system executes the foreground split almost instantaneously and returns a successful status to the host. This instantaneous execution allows minimal I/O disruptions to the production volumes. Furthermore, the BCVs are accessible to the hosts as soon as the foreground process is complete. The background split continues to split the BCV pair until it is complete. When the -instant option is included or defaulted, SYMCLI returns immediately after the foreground split, allowing other operations while the BCV pair is splitting in the background.

The following operation performs an instant split on all BCV pairs in device_group, and allows SYMCLI to return to the server process while the background split is in progress.

symmir -g device_group split -instant -noprompt

The following symmir query command example checks the progress of a split on composite_group. The –bg option is provided to query the status of the background split:

symmir -cg composite_group query -bg

2.5.3 TimeFinder restore operations

A BCV device can be used to fully or incrementally restore data on the standard volume. Like the full establish operation, a full restore operation copies the entire contents of the BCV devices to the standard devices. Optionally, users may specify devices in a device group, composite group, or device file as in the following examples:

symmir -g device_group -full restore -noprompt
symmir -cg composite_group -full restore -noprompt
symmir -f[ile] filename -full -sid ### restore -noprompt

The incremental restore process accomplishes the same thing as the full restore process with a major time-saving exception. The BCV copies to the standard device only new data that was updated on the BCV device while the BCV pair was split. The data on the corresponding tracks of the BCV device also overwrites any changed tracks on the standard device. This maximizes the efficiency of the resynchronization process. This process is useful, for example, if, after testing or validating an updated version of a database or a new application on the BCV device is completed, a user wants to migrate and utilize a copy of the tested data or application on the production standard device.

An incremental restore of a BCV volume to a standard volume is only possible when the two volumes have an existing TimeFinder relationship.
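An incremental restore uses the same command without the -full option, for example:

symmir -g device_group restore -noprompt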

2.5.4 TimeFinder consistent split

TimeFinder consistent split allows users to split off a dependent-write consistent, restartable image of an application without interrupting online services. Consistent split helps to avoid inconsistencies and restart problems that can occur when splitting an application-related BCV without first quiescing or halting the application. Consistent split is implemented using the Enginuity Consistency Assist (ECA) feature. This functionality requires a TimeFinder/CG license.

2.5.4.1 Enginuity consistency assist

The Enginuity Consistency Assist feature of the Symmetrix operating environment can be used to perform consistent split operations across multiple heterogeneous environments. This functionality requires a TimeFinder/CG license and uses the –consistent option of the symmir command.

To use ECA to consistently split BCV devices from the standards, a control host with no database or a database host with a dedicated channel to gatekeeper devices must be available. The dedicated channel cannot be used for servicing other devices or to freeze I/O. For example, to split a device group, execute:

symmir -g device_group split -consistent -noprompt

Figure 2-7 on page 2-25 illustrates an Enginuity Consistency Assist split across three database hosts that access devices on a Symmetrix array.

Figure 2-7 ECA consistent split across multiple database associated hosts

Symmetrix device or composite groups must be created on the controlling host for the target application to be consistently split. Device groups can be created to include all of the required devices for maintaining business continuity. For example, if a device group is defined that includes all of the devices being accessed by Hosts A, B, and C (see Figure 2-7 on page 2-25), then all of the BCV pairs related to those hosts can be consistently split with a single command.

However, if a device group is defined that includes only the devices accessed by Host A, then the BCV pairs related to Host A can be split without affecting the other hosts. The solid vertical line in Figure 2-7 represents the ECA holding of I/Os during an instant split process, creating a dependent-write consistent image in the BCVs.

Figure 2-8 on page 2-26 illustrates the use of local consistent split with a database management system (DBMS).

Figure 2-8 ECA consistent split on a local Symmetrix system

When a split command is issued with ECA from the production host, a consistent database image is created using the following sequence of events (shown in Figure 2-8 on page 2-26):

1. The Symmetrix API (SYMAPI) identifies the standard devices that hold the database.

2. SYMAPI validates that all identified BCV pairs can be split.

3. SYMAPI sends a suspend write message to ECA.

4. ECA suspends writes to the standard devices that hold the database. The DBMS cannot write to the devices and subsequently waits for these devices to become available before resuming any further write activity. Read activity to the device is not affected unless attempting to read from a device that has a write queued against it.

5. SYMAPI sends an instant split request to all BCV pairs in the specified device group and waits for the Symmetrix to acknowledge that the foreground split has occurred. SYMAPI then sends a resume I/O message to ECA.

6. The application resumes writing to the production devices.

The BCV devices now contain a restartable copy of the production data that is consistent up until the time of the instant split. The production application is unaware that the split or suspend/resume operation occurred. When the application on the secondary host is started using the BCVs, there is no record of a successful shutdown. Therefore, the secondary application instance views the BCV copy as a crashed instance and proceeds to perform the normal crash recovery sequence to restart.

When performing a consistent split, it is a good practice to issue host-based commands that will commit any data that has not been written to disk before the split. For example on UNIX systems, the sync command can be run. From a database perspective, a checkpoint or equivalent should be executed.
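A minimal sketch of this practice on a UNIX host, assuming the database has been checkpointed and its file systems reside on the devices in device_group, is:

sync
symmir -g device_group split -consistent -noprompt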

2.5.5 TimeFinder reverse split

BCVs can be mirrored to guard against data loss through physical drive failures. A reverse split is applicable for a BCV that is configured to have two local mirrors. It is generally used to recover from an unsuccessful restore operation. When data is restored from the BCV to the standard device, any writes that occur while the standard is being restored alter the original copy of data on the BCV’s primary mirror. If the original copy of BCV data is needed again at a later time, it can be restored to the BCV’s primary mirror from the BCV’s secondary mirror using a reverse split. For example, whenever logical corruption is reintroduced to a database during a recovery process (following a BCV restore), both the standard device and the primary BCV mirror are left with corrupted data. In this case, a reverse split can restore the original BCV data from a BCV’s secondary mirror to its primary mirror.

This is particularly useful when performing a restore and immediately restarting processing on the standard devices when the process may have to be restarted many times.

Reverse split is not available when protected restore is used to return the data from the BCVs to the standards.

2.5.6 TimeFinder/Clone operations

Symmetrix TimeFinder/Clone operations using SYMCLI have been available since Enginuity version 5568. TimeFinder/Clone can create up to 16 copies from a source device onto target devices. Unlike TimeFinder/Mirror, TimeFinder/Clone does not require the traditional standard to BCV device pairing. Instead, TimeFinder/Clone allows any combination of source and target devices. For example, a BCV can be used as the source device, while another BCV can be used as the target device. Any combination of source and target devices can be used. Additionally, TimeFinder/Clone does not use the traditional mirror positions the way that TimeFinder/Mirror does. Because of this, TimeFinder/Clone is a useful option when more than three copies of a source device are desired.

Normally, one of the three copies is used to protect the data against hardware failure.

The source and target devices must be the same emulation type (FBA or CKD). The target device must be equal to or greater in size than the source. Clone copies of striped or concatenated metavolumes can also be created, provided the source and target metavolumes are identical in configuration. Once activated, the target device can be instantly accessed by a target's host, even before the data is fully copied to the target device.

TimeFinder/Clone copies are appropriate in situations where multiple copies of production data are needed for testing, backups, or report generation. Clone copies can also be used to reduce disk contention and improve data access speed by assigning users to copies of data rather than to the one production copy. A single source device may maintain as many as 16 relationships that can be a combination of BCVs, clones, and snaps. When using the -copy option of TimeFinder/Clone, up to four full data copies can be made simultaneously, without disruption to database production activity.

2.5.6.1 Clone copy sessions

TimeFinder/Clone functionality is controlled via copy sessions, which pair the source and target devices. Sessions are maintained on the Symmetrix array and can be queried to verify the current state of the device pairs. A copy session must first be created to define and set up the TimeFinder/Clone devices. The session is then activated, enabling the target device to be accessed by its host. When the information is no longer needed, the session can be terminated. TimeFinder/Clone operations are controlled from the host by using the symclone command to create, activate, and terminate the copy sessions.

Figure 2-9 on page 2-28 illustrates a copy session where the controlling host creates a TimeFinder/Clone copy of standard device DEV001 on target device DEV005, using the symclone command.

Figure 2-9 Creating a copy session using the symclone command

The symclone command is used to enable cloning operations. The cloning operation happens in two phases: creation and activation. The creation phase builds bitmaps of the source and target that are later used during the activation or copy phase. The creation of a clone pairing does not start copying the source volume to the target volume. To create clone sessions on all the standards and BCVs in the device group device_group, use the following command:

symclone -g device_group create -noprompt

The activation of a clone enables the copying of the data. The data may start copying immediately if the –copy keyword is used. If the –copy keyword is not used, tracks are only copied when they are accessed from the target volume or when they are changed on the source volume.

Activation of the clone session established in the previous create command can be accomplished using the following command:

symclone -g device_group activate -noprompt
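The session state can then be monitored and, once the copy is no longer needed, the session terminated. A sketch using the same illustrative group name:

symclone -g device_group query
symclone -g device_group terminate -noprompt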

2.5.7 TimeFinder/Snap operations

Symmetrix DMX arrays running Enginuity 5669 or later provide another technique to create copies of application data. The functionality, called TimeFinder/Snap, allows users to make pointer-based, space-saving copies of data simultaneously on multiple target devices from a single source device. The data is available for access instantly. TimeFinder/Snap allows data to be copied from a single source device to as many as 15 target devices (a sixteenth Snap session is reserved for restoring data from one of the targets). A source device can be either a Symmetrix standard device or a BCV device controlled by TimeFinder/Mirror. The target device is a Symmetrix virtual device (VDEV) that consumes negligible physical storage through the use of pointers to track changed data.

A VDEV is a host-addressable Symmetrix device with special attributes created when the Symmetrix DMX system is configured. However, unlike a BCV, which contains a full volume of data, a VDEV is a logical-image device that offers a space-saving way to create instant, point-in-time copies of volumes. Any update to a source device after its activation with a virtual device causes the pre-update image of the changed tracks to be copied to a save device. The virtual device’s indirect pointer is then updated to point to the original track data on the save device, preserving a point-in-time image of the volume. TimeFinder/Snap uses this copy-on-first-write technique to conserve disk space, since only changes to tracks on the source cause any incremental storage to be consumed.

The symsnap create and symsnap activate commands are used to create a source/target Snap pair.

Table 2-2 on page 2-30 summarizes some of the differences between devices used in TimeFinder/Snap operations.

Table 2-2 TimeFinder device type summary

Device          Description
Virtual device  A logical-image device that saves disk space through the use of pointers to track data that is immediately accessible after activation. Snapping data to a virtual device uses a copy-on-first-write technique.
Save device     A device that is not host-accessible but is accessed only through the virtual devices that point to it. Save devices provide a pool of physical space to store snap copy data to which virtual devices point.
BCV             A full-volume mirror that has valid data after fully synchronizing with its source device. It is accessible only when split from the source device that it is mirroring.

2.5.7.1 Snap copy sessions

TimeFinder/Snap functionality is managed via copy sessions, which pair the source and target devices. Sessions are maintained on the Symmetrix array and can be queried to verify the current state of the devices. A copy session must first be created; this process defines the Snap devices in the operation. On subsequent activation, the target virtual devices become accessible to their host. Unless the data is changed by the host accessing the virtual device, the virtual device always presents a frozen point-in-time copy of the source device at the point of activation. When the information is no longer needed, the session can be terminated.

TimeFinder/Snap operations are controlled from the host by using the symsnap command to create, activate, terminate, and restore the TimeFinder/Snap copy sessions. The TimeFinder/Snap operations described in this section explain how to manage the devices participating in a copy session through SYMCLI.

Figure 2-10 on page 2-31 illustrates a virtual copy session where the controlling host creates a copy of standard device DEV001 on target device VDEV005.

Figure 2-10 TimeFinder/Snap copy of a standard device to a VDEV

The symsnap command is used to enable TimeFinder/Snap operations. The Snap operation happens in two phases: creation and activation. The creation phase builds bitmaps of the source and target that are later used to manage the changes on the source and target. The creation of a snap pairing does not copy the data from the source volume to the target volume.

To create snap sessions on all the standards and BCVs in the device group device_group, use the following command:

symsnap -g device_group create -noprompt

The activation of a snap enables the protection of the source data tracks. When protected tracks are changed on the source volume, they are first copied into the save pool and the VDEV pointers are updated to point to the changed tracks in the save pool. When tracks are changed on the VDEV, the data is written directly to the save pool and the VDEV pointers are updated in the same way.

Use the following command to activate the snap session created in the previous create command:

symsnap -g device_group activate -noprompt
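
The same symsnap command verifies, restores, and tears down snap sessions. A brief sketch, again assuming the device group from the examples above; the restore consumes the reserved sixteenth session mentioned earlier:

symsnap -g device_group query                  # check the state of the snap pairs
symsnap -g device_group restore -noprompt      # copy the point-in-time image back to the source
symsnap -g device_group terminate -noprompt    # end the session and release the pointers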

2.6 EMC Storage Resource Management

The Storage Resource Management (SRM) component of EMC Solutions Enabler extends the basic SYMCLI command set to include SRM commands that allow users to discover and examine attributes of various objects on a host or in the EMC storage enterprise.

SYMCLI commands support SRM in the following areas:

♦ Data objects and files

♦ Relational databases

♦ File systems

♦ Logical volumes and volume groups

♦ Performance statistics

SRM allows users to examine the mapping of storage devices and the characteristics of data files and objects. These commands allow the examination of relationships between extents and data files or data objects, and how they are mapped on storage devices. Frequently, SRM commands are used with TimeFinder and SRDF to create point-in-time copies for backup and restart.

Figure 2-11 on page 2-32 outlines the process of how SRM commands are used with TimeFinder in a database environment.

Figure 2-11 SRM commands

EMC Solutions Enabler with a valid license for TimeFinder and SRM is installed on the host. In addition, the host must also have PowerPath or use ECA, and must be used with a supported DBMS system. As discussed in section 2.5.2, when splitting a BCV, the system must perform housekeeping tasks that may require a few seconds on a busy Symmetrix array. These tasks involve a series of steps (shown in Figure 2-11 on page 2-32) that result in the separation of the BCV from its paired standard:

1. Using the SRM base mapping commands, first query the Symmetrix unit to display the logical-to-physical mapping information about any physical device, logical volume, file, directory, and/or file system.

2. Using the database mapping command, query the Symmetrix to display physical and logical database information.

3. Use the database mapping command to translate:

• The devices of a specified database into a device group or a consistency group, or

• The devices of a specified tablespace into a device group or a consistency group.

4. Split the BCV from the standard device.
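
As a hedged illustration of steps 1 through 4, the following sequence maps a device, translates a Sybase database into a device group, and splits the BCVs. The database name c20Test and group name sybdg are examples only, and the symrdb commands assume that database connection information has already been configured for the Sybase server:

symrslv pd /dev/rdsk/c2t0d64s2                  # step 1: map one physical device (example path)
symrdb list -type Sybase                        # step 2: list the databases known to the instance
symrdb -type Sybase -db c20Test rdb2dg sybdg    # step 3: translate the database devices into a device group
symmir -g sybdg split -instant                  # step 4: split the BCVs in that group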

Table 2-3 on page 2-33 lists the SYMCLI commands that can be used to examine the mapping of data objects.

Table 2-3 Data object SRM commands

Command  Argument  Action
symrslv  pd        Displays logical-to-physical mapping information about any physical device.
         lv        Displays logical-to-physical mapping information about a logical volume.
         file      Displays logical-to-physical mapping information about a file.
         dir       Displays logical-to-physical mapping information about a directory.
         fs        Displays logical-to-physical mapping information about a file system.

SRM commands allow users to examine the host database mapping and the characteristics of a database. The commands provide listings and attributes that describe various databases, their structures, files, tablespaces, and user schemas. Typically, the database commands work with Oracle, Informix, SQL Server, Sybase, Microsoft Exchange, SharePoint Portal Server, and DB2 LUW database applications.

Table 2-4 on page 2-35 lists the SYMCLI commands that can be used to examine the mapping of database objects.

Table 2-4 Data object mapping commands

Command  Argument  Action
symrdb   list      Lists various physical and logical database objects: current relational database instances available; tablespaces, tables, files, or schemas of a database; files, segments, or tables of a database tablespace or schema.
         show      Shows information about a database object: a tablespace, tables, file, or schema of a database; a file, segment, or a table of a specified tablespace or schema.
         rdb2dg    Translates the devices of a specified database into a device group.
         rdb2cg    Translates the devices of a specified database into a composite group or a consistency group.
         tbs2cg    Translates the devices of a specified tablespace into a composite group. Only data database files are translated.
         tbs2dg    Translates the devices of a specified tablespace into a device group. Only data database files are translated.
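
For example, the list and show arguments might be used as follows to inspect a Sybase instance before translating it. This is a sketch; the database name c20Test is illustrative, the argument forms mirror the table above, and the output depends on the configured database connection:

symrdb list -type Sybase                 # list the available Sybase databases
symrdb show -type Sybase -db c20Test     # show the objects of one database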

The SYMCLI file system SRM command allows users to investigate the file systems that are in use on the operating system. The command provides listings and attributes that describe file systems, directories, and files, and their mapping to physical devices and extents.

Table 2-5 on page 2-35 lists the SYMCLI command that can be used to examine the file system mapping.

Table 2-5 File system SRM command

Command    Argument  Action
symhostfs  list      Displays a list of file systems, files, or directories.
           show      Displays more detailed information about a file system or file system object.
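
A short example, with an illustrative mount point:

symhostfs list                 # list the file systems visible to this host
symhostfs show /sybase/data    # detailed view of one file system (example mount point)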

SYMCLI logical volume SRM commands allow users to map logical volumes to display a detailed view of the underlying storage devices. Logical volume architecture defined by a Logical Volume Manager (LVM) is a means for advanced applications to improve performance by the strategic placement of data.

Table 2-6 on page 2-36 lists the SYMCLI commands that can be used to examine the logical volume mapping.

Table 2-6 Logical volume SRM commands

Command  Argument  Action
symvg    deport    Deports a specified volume group so it can be imported later.
         import    Imports a specified volume group.
         list      Displays a list of volume groups defined on the host system by the logical volume manager.
         rescan    Rescans all the volume groups.
         show      Displays more detailed information about a volume group.
         vg2cg     Translates volume groups to composite groups.
         vg2dg     Translates volume groups to device groups.
symlv    list      Displays a list of logical volumes on a specified volume group.
         show      Displays detailed information (including extent data) about a logical volume.
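
A sketch of typical usage, assuming a host volume group named datavg; the translation form mirrors the symvg syntax used later in this guide:

symvg list                  # list volume groups defined by the host LVM
symvg show datavg           # detailed view of one volume group (example name)
symvg vg2dg datavg sybdg    # translate the volume group into a device group named sybdg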

SRM performance statistics commands allow users to retrieve statistics about a host’s CPU, disk, and memory.

Table 2-7 on page 2-36 lists the statistics commands.

Table 2-7 SRM statistics command

Command  Argument  Action
symhost  show      Displays host configuration information.
         stats     Displays performance statistics.
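
For example, assuming the usual SYMCLI interval and count flags apply to this command:

symhost show                # host configuration summary
symhost stats -i 30 -c 10   # sample performance statistics 10 times at 30-second intervals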

2.7 EMC ControlCenter

EMC ControlCenter is an integrated family of products that enables users to discover, monitor, automate, provision, and report on networks, host resources, and storage resources across the entire information environment. Figure 2-12 on page 2-37 presents the ControlCenter product family and the broad scope of manageability that ControlCenter provides in managing the storage enterprise.

Figure 2-12 ControlCenter family overview

The ControlCenter suite of products provides end-to-end management of storage networks, storage devices, and other storage resources. From the Web Console, ControlCenter can manage, monitor, configure and report on the following components:

♦ Storage components for Symmetrix, CLARiiON, Celerra®, HDS, HP StorageWorks, IBM ESS, and NetApp filers

♦ Host components, such as logical volume managers, file systems, databases, backup applications, and host processes

♦ Connectivity components, such as Fibre Channel switches and hubs

♦ Databases, such as Oracle, Sybase, and Microsoft SQL Server

Every physical and logical element that ControlCenter manages is known as a managed object. From a console anywhere on the network, ControlCenter shows an integrated and consolidated view of the storage environment, enabling the storage administrator to monitor, report, manage, and configure each managed object.

ControlCenter is designed for use in a heterogeneous environment where information is distributed in multivendor storage networks with disparate computing, connectivity, and storage arrays. The ability to manage host applications and their associated storage needs across heterogeneous environments from a single interface simplifies storage management and makes it possible to implement cross-platform storage-wide strategies.

At the heart of ControlCenter is a common foundation and infrastructure, which provides scalability, usability, and information sharing across all ControlCenter applications. This design enables applications to span storage array, storage network, and host management functions. The core components of ControlCenter are known as the Open Integration Components. These provide the infrastructure (ECC Server, Repository, and Store), common services, and a common user interface (both a web-based console for monitoring and a Java-based application console for full management functionality). The common services provide the following features:

♦ Discover storage arrays, connectivity devices, and hosts.

♦ Map the storage topology.

♦ Map the relationship of storage structures from databases and file systems to their logical and physical location within the storage array.

♦ Display the properties of any object.

♦ Monitor the real-time or historical performance of objects.

♦ Monitor the status of objects and issue alerts.

The ControlCenter applications include the following:

♦ SAN Manager™ — provides integrated network discovery, topology, and alert capabilities. It actively controls SAN management functions such as zoning and LUN masking.

♦ Automated Resource Manager™ (ARM) — simplifies and automates storage resource management (SRM) across the enterprise. It provides policy-based storage provisioning for file systems and volume groups from a host perspective. This includes determining the location of a free storage pool and automating the allocation of that storage to the host. ARM manages backup and restore operations from within ControlCenter, and increases the availability of storage environments.

♦ StorageScope™ — provides a variety of SRM reports to help users assess their current storage environment and determine future storage requirements. StorageScope users can create custom report layouts, export reports into CSV or XML format, and integrate StorageScope with third-party applications such as billing or customer service.

♦ Performance Manager — collects and presents statistics for performance analysis. It collects metrics for Windows, UNIX, and MVS hosts, file systems, Oracle databases, Fibre Channel switches, and Symmetrix and CLARiiON storage arrays. Performance Manager also allows users to generate web-based reports at scheduled intervals: hourly, daily, or weekly.

♦ Symmetrix Manager — monitors, automates operations, and provisions storage for Symmetrix arrays.

♦ Symmetrix Optimizer — automatically tunes Symmetrix arrays for optimal performance based on business requirements.

♦ SRDF-TimeFinder Manager — monitors, provisions, and automates SRDF and TimeFinder in the storage environment.

♦ Common Array Manager — simplifies the management of multivendor storage environments by providing monitoring and reporting capabilities for HP StorageWorks, Hitachi/HP/Sun, and IBM ESS storage arrays.

♦ SAN Architect — online, subscription-based product that provides users with an easy-to-use SAN design, validation, and modeling solution. Users can model changes to existing SANs or design new topologies with immediate validation against EMC internal knowledge bases. SAN Architect guides users through the entire design process and automatically validates array, switch, and topology choices.

♦ CLARiiON, Celerra, and Network Appliance management — ControlCenter can also be used with other storage element managers like Navisphere® Manager, Connectrix Manager, and non-EMC array and switch managers. Many of these can be launched from the console to deliver extended functionality or capabilities.

2.8 EMC PowerPath

EMC PowerPath is host-based software that works with networked storage systems to intelligently manage I/O paths. PowerPath manages multiple paths to a storage array. Supporting multiple paths enables recovery from path failure because PowerPath automatically detects path failures and redirects I/O to other available paths. PowerPath also uses sophisticated algorithms to provide dynamic load balancing for several kinds of path management policies that the user can set. With the help of PowerPath, systems administrators are able to ensure that applications on the host have highly available access to storage and perform optimally at all times.

A key feature of path management in PowerPath is dynamic, multipath load balancing. Without PowerPath, an administrator must statically load balance paths to logical devices to improve performance. For example, based on current usage, the administrator might configure 3 heavily used logical devices on one path, 7 moderately used logical devices on a second path, and 20 lightly used logical devices on a third path. As I/O patterns change, these statically configured paths may become unbalanced, causing performance to suffer. The administrator must then reconfigure the paths, and continue to reconfigure them as I/O traffic between the host and the storage system shifts in response to use changes.

Designed to use all paths concurrently, PowerPath distributes I/O requests to a logical device across all available paths, rather than requiring a single path to bear the entire I/O burden. PowerPath can distribute the I/O for all logical devices over all paths shared by those logical devices, so that all paths are equally burdened. PowerPath load balances I/O on a host-by-host basis, and maintains statistics on all I/O for all paths. For each I/O request, PowerPath intelligently chooses the least-burdened available path, depending on the load-balancing and failover policy in effect. In addition to improving I/O performance, dynamic load balancing reduces management time and downtime because administrators no longer need to manage paths across logical devices. With PowerPath, configurations of paths and policies for an individual device can be changed dynamically, taking effect immediately, without any disruption to the applications.
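
As a hedged illustration (PowerPath's own powermt CLI rather than SYMCLI), path state and the load-balancing policy can be inspected and adjusted from the host; the policy value shown is only an example:

powermt display dev=all          # show all managed devices and their paths
powermt set policy=so dev=all    # example: apply the Symmetrix-optimized policy
powermt restore                  # test failed paths and restore any that have recovered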

PowerPath provides the following features and benefits:

♦ Multiple paths, for higher availability and performance — PowerPath supports multiple paths between a logical device and a host bus adapter (HBA, a device through which a host can issue I/O requests). Having multiple paths enables the host to access a logical device even if a specific path is unavailable. Also, multiple paths can share the I/O workload to a given logical device.

♦ Dynamic multipath load balancing — Through continuous I/O balancing, PowerPath improves a host’s ability to manage heavy I/O loads. PowerPath dynamically tunes paths for performance as workloads change, eliminating the need for repeated static reconfigurations.

♦ Proactive I/O path testing and automatic path recovery — PowerPath periodically tests failed paths to determine if they are available. A path is restored automatically when available, and PowerPath resumes sending I/O to it. PowerPath also periodically tests available but unused paths, to ensure they are operational.

♦ Automatic path failover — PowerPath automatically redirects data from a failed I/O path to an alternate path. This eliminates application downtime; failovers are transparent and nondisruptive to applications.

♦ Enhanced high-availability cluster support — PowerPath is particularly beneficial in cluster environments, as it can prevent interruptions to operations and costly downtime. PowerPath’s path failover capability avoids node failover, maintaining uninterrupted application support on the active node in the event of a path disconnect (as long as another path is available).

♦ Consistent split — PowerPath allows users to perform TimeFinder consistent splits by suspending device writes at the host level for a fraction of a second while the foreground split occurs. PowerPath software provides suspend-and-resume capability that avoids inconsistencies and restart problems that can occur if a database-related BCV is split without first quiescing the database.

♦ Consistency groups — A consistency group is a composite group of Symmetrix devices specially configured to act in unison to maintain the integrity of a database distributed across multiple SRDF units controlled by an open systems host computer.

2.9 EMC Replication Manager

EMC Replication Manager is an EMC software application that dramatically simplifies the management and use of disk-based replications to improve the availability of users' mission-critical data and to enable rapid recovery of that data in case of corruption.

Replication Manager helps users manage replicas as if they were tape cartridges in a tape library unit. Replicas may be scheduled or created on demand, with predefined expiration periods and automatic mounting to alternate hosts for backups or scripted processing. Individual user accounts with different levels of access ensure system and replica integrity. In addition to these features, Replication Manager is fully integrated with many critical applications such as DB2 LUW, Oracle, and Microsoft Exchange.

Replication Manager makes it easy to create point-in-time, disk-based replicas of applications, file systems, or logical volumes residing on existing storage arrays. It can create replicas of information stored in the following environments:

♦ Oracle databases

♦ DB2 LUW databases

♦ Microsoft SQL Server databases

♦ Microsoft Exchange databases

♦ UNIX file systems

♦ Windows file systems

The software utilizes a Java-based client/server architecture. Replication Manager can:

♦ Create point-in-time replicas of production data in seconds.

♦ Facilitate quick, frequent, and nondestructive backups from replicas.

♦ Mount replicas to alternate hosts to facilitate offline processing (for example, decision-support services, integrity checking, and offline reporting).

♦ Restore deleted or damaged information quickly and easily from a disk replica.

♦ Set the retention period for replicas so that storage is made available automatically.

Replication Manager has a generic storage technology interface that allows it to connect and invoke replication methodologies available on the following:

♦ EMC Symmetrix arrays

♦ EMC CLARiiON arrays

♦ HP StorageWorks arrays

Replication Manager uses Symmetrix API (SYMAPI) Solutions Enabler software and interfaces to the storage array’s native software to manipulate the supported disk arrays. Replication Manager automatically controls the complexities associated with creating, mounting, restoring, and expiring replicas of data. Replication Manager performs all of these tasks and offers a logical view of the production data and corresponding replicas. Replicas are managed and controlled with the easy-to-use Replication Manager console.

Chapter 3 Cloning Sybase Databases

This chapter presents these topics:

3.1 Cloning with EMC TimeFinder and Sybase shutdown 3-4
3.2 Cloning with EMC TimeFinder and Sybase quiesce 3-6
3.3 Cloning with EMC TimeFinder consistent split 3-8
3.4 Cloning with EMC SRDF consistency groups 3-11
3.5 Summary of Sybase cloning techniques 3-15

There are various ways to clone an entire Sybase ASE database instance (also known as a Sybase server) or a single Sybase database. This chapter primarily focuses on cloning methods for a Sybase database only.

Cloning a Sybase instance is possible, but several precautions and certain criteria must be met to successfully clone an entire Sybase instance. A Sybase ASE database instance is made up of the master device, the Sybase system procedures database (default name sybsystemprocs), and any databases the user has defined. To clone an entire Sybase instance, the following criteria must be met:

♦ The master device on the target ASE server must be identical in every way to the primary.

♦ The target ASE server must contain the same databases as the primary.

♦ The target devices on the ASE server must have exactly the same absolute pathnames as the primary (for example, /dev/rdsk/c2t0d64s2) unless symbolic links are used as a workaround.

♦ Care must be taken to ensure that each storage device (LUN) contains one and only one Sybase database device (data file or log file).

Cloning a Sybase instance or database accomplishes several things, but duplication of data is the primary requirement. In a DBMS environment, cloning a database is typically one of the most time-intensive processes. Using TimeFinder or SRDF to clone a database reduces the processing time from hours to minutes. The main objective for creating a clone is to have a restartable copy of the database, which can then be used for web content refresh, backups, decision support, data warehousing, application testing, or third-party software updates/upgrades.

This section explains four different cloning techniques for Sybase databases:

♦ Cloning Sybase with shutdown

♦ Cloning Sybase with quiesce

♦ Cloning Sybase with TimeFinder consistent split

♦ Cloning Sybase with consistency groups

EMC provides users with the software technology to duplicate databases on Symmetrix arrays. This section focuses on the use of TimeFinder and SRDF to clone a Sybase database on a Symmetrix array.

The base component of the SYMCLI provides the host operating system command line with a set of commands that obtain device configuration information, provide configuration control, and retrieve status and performance data on attached Symmetrix units. Either SYMCLI or EMC ControlCenter can be used to create device pair relationships and to establish, split, restore, and reestablish BCV device states.

The Solutions Enabler Mapping Component (also known as Storage Resource Management — SRM) extends the basic SYMCLI command set to include mapping commands that allow users to systematically find and examine attributes of various objects on the host, within a specific relational database, or in the EMC storage area network.

The SYMCLI mapping commands support mapping to the following areas:

♦ Data objects and files

♦ Relational databases

♦ File systems

♦ Logical volumes and volume groups

The data object mapping commands allow users to perform operations on all devices associated with a single database. The data object mapping facility allows users to specify a database type along with the name of a database. SYMCLI will invoke the API mapping facility, log in to the Sybase server, detect all devices associated with a specified database, and perform a consistent split on the database devices.

Figure 3-1 on page 3-3 depicts the SYMCLI data object mapping facility and how this feature maps the Sybase database and Symmetrix devices.

Figure 3-1 SYMCLI mapping component facility

The following describes the sequence of events in Figure 3-1:

1. The SYMCLI split command syntax to invoke mapping looks like this:

symmir -g c20dg split -instant -rdb -dbtype Sybase -db c20Test

Where:

-g c20dg specifies the device group containing the devices that will be involved in the split operation.

-rdb invokes data object mapping and specifies that all devices associated with -dbtype and -db must be selected. In the case of Sybase, the master database table (named sysdevices) is analyzed for the appropriate disk device information.

-dbtype specifies the database type (in this example, Sybase).

-db specifies the name of the Sybase database (in this example, c20Test).

2. The SYMCLI database APIs (application programming interfaces) are invoked. The APIs access Sybase’s Master database (metadata) to identify all the devices associated with the c20Test database.

3. The Sybase database devices are mapped to Symmetrix LUNs. This process is part of the internals of the Solutions Enabler mapping facility.

4. The example command in step 1 issues a consistent split. The TimeFinder consistent split occurs on all of the devices that are associated with the Sybase database named c20Test.

3.1 Cloning with EMC TimeFinder and Sybase shutdown

The first method for cloning Sybase databases requires shutting down the Sybase instance. Users may log in to a Sybase instance (server) and issue the shutdown command, which will, by default, perform a checkpoint on all databases and wait for all transactions to complete before the databases are brought offline and the server shuts down. To perform this type of shutdown, use the following command:

isql> shutdown
isql> go

This is the safest shutdown method and ensures that a database will go through its normal restart and recovery procedures on the secondary or target host, and should come up clean.

Users can also issue the shutdown with nowait command, but this is an immediate shutdown and is not recommended for cloning purposes. This means that any in-flight transactions that have not been committed to disk will be rolled back when the server is restarted. The user has no way of ensuring that a restartable image of the database has been cloned. A checkpoint is not performed, and there is no guarantee that the database will recover properly on the secondary or target host.

Figure 3-2 on page 3-5 depicts the steps to shut down the Sybase instance, clone a database using TimeFinder, and restart a secondary Sybase instance using the BCV.

Figure 3-2 Cloning with EMC TimeFinder and Sybase shutdown

The following describes the sequence of events in Figure 3-2.

1. Establish the STD and BCV device(s). This synchronizes the pairs and the device track tables. Any changes to the STD device are copied to the BCV device, essentially resynchronizing the mirrored pair. During the establish process, the BCV cannot be mounted to any host.

symmir -g device-group establish

Where:

device-group is the name of the device group containing the devices to be established.

2. Shut down the Sybase server via ISQL, allowing all transactions to complete.

isql> shutdown
isql> go

3. Issue the TimeFinder split command to break the relationship between the STD and BCV pairs. TimeFinder instant split would be the recommended method to minimize split times. Once the BCV is split, it is available for mounting to a target (secondary) host.

symmir -g device-group split

Where:

device-group is the name of the device group containing the devices to be split.

Note that the -instant option is not specified on the command line. Since Enginuity 5x68, this option is no longer required because instant split has become the default behavior. Chapter 2 provides more detail on TimeFinder instant split functionality.

4. Mount the BCV devices to a target or secondary host. If raw partitions are in use, simply use the UNIX import and activate commands. If using an LVM, such as VxVM, then import, activate, and mount commands are required.

5. Restart the Sybase database server (on the target host) using the preconfigured “run server” file for Sybase. This file is configured with all the necessary information to restart the Sybase server.

startserver -f RUN_serverfile
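
Condensed into a minimal sketch, with an illustrative device group sybdg, server name PRODSRV, and run file RUN_SYBTEST:

symmir -g sybdg establish -noprompt    # step 1: synchronize the STD/BCV pairs
symmir -g sybdg verify                 # confirm the pairs report Synchronized
isql -Usa -SPRODSRV                    # step 2: connect to the production server
isql> shutdown
isql> go
symmir -g sybdg split -noprompt        # step 3: instant split (the default since 5x68)
# step 4: import/activate/mount the BCV volumes on the target host
startserver -f RUN_SYBTEST             # step 5: restart ASE on the target host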

3.2 Cloning with EMC TimeFinder and Sybase quiesce

The quiesce database feature was introduced in ASE 12.0 and is a Sybase Transact-SQL proprietary command. The quiesce database command, used with the hold keyword, suspends all updates to a user-specified list of one or more databases. Transactions are then prevented from updating the persistent store of data in suspended databases, and background tasks such as the checkpoint and housekeeper processes will skip all databases that are in the suspended state. All writes to any databases in the quiesce state are blocked within ASE until quiesce database is subsequently invoked with the release keyword. Then, updates resume on databases that were previously suspended.

The benefit of this feature is that during a BCV split, the production database server can be kept up, active, and all databases remain online. For databases not in the quiesced state, write activity continues. Any transactions blocked by a quiesce hold command will resume and run to completion after the quiesce release command.

For any ASE 12.x version prior to 12.0.0.1, there is one caveat with Sybase quiesce, which involves nonlogged transactions on a Sybase database. If a nonlogged transaction occurs, the user will not be allowed to quiesce that database again unless a full database dump is performed. The database dump resets the quiesce marker and thus allows a future quiesce database command.

While a database is quiesced, users may choose to split the BCVs for a single database, or for multiple databases, creating a cloned copy of the quiesced database. This cloned copy can be synchronized with the production instance at any user-defined interval.

The syntax for quiesce database is:

quiesce database tag_name hold database_name [, database_name…]

quiesce database tag_name release

Where:

tag_name is a user-defined label for the list of databases to hold or release.

database_name is the name of a database for which updates are suspended. A user may quiesce up to 12 databases at one time.

Figure 3-3 on page 3-7 and the following steps describe cloning a Sybase instance using the quiesce option.

Figure 3-3 Cloning with EMC TimeFinder and Sybase quiesce

1. Establish the STD and BCV device(s). This synchronizes the pairs and the device track tables. Any changes to the STD device are being copied to the BCV device, essentially resynchronizing the mirrored pair. During the establish process, the BCV is not available to any host.

symmir -g device-group establish

Where:

device-group is the name of the device group containing the devices to be established.

2. Issue the Sybase quiesce command to hold database device I/O. The Sybase quiesce database hold command will freeze all I/O to the specified database, allowing read access only, and suspending all write activity.

quiesce database tagname hold database1, database2,…

Where:

tagname is a user-defined label containing the list of databases to hold.

database1, database2,… are the specific Sybase databases that will be affected by the quiesce.

3. Issue the TimeFinder split command to break the relationship between the STD and BCV pairs. TimeFinder instant split would be the recommended method to minimize split times. Once the BCV is split, it is available for mounting to a target (secondary) host.

symmir -g device-group split -instant

Where:

device-group is the name of the device group containing the devices to be split.

-instant in this case improves the performance of a typical split operation by performing a quick foreground split.

4. Issue the Sybase command to release database device I/O, releasing the suspend state and resuming write activity.

quiesce database tagname release

Where:

tagname is a user-defined label containing the list of databases to release from the suspend state and resume write activity.

5. Mount the BCV devices to a target or secondary host. If raw partitions are in use, simply use the UNIX mount command. If using an LVM, such as VxVM, then import and mount commands are required.

6. Restart the Sybase database server using the preconfigured Sybase RUN_server file.

startserver -f RUN_serverfile
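
As a sketch, the hold/split/release sequence reduces to a few commands; the tag tag1, databases salesdb and hrdb, and device group sybdg are examples:

isql> quiesce database tag1 hold salesdb, hrdb
isql> go
symmir -g sybdg split -instant -noprompt   # split while writes are suspended
isql> quiesce database tag1 release
isql> go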

Users on versions prior to 12.0.0.1 should avoid using Sybase quiesce. This is due to an issue that requires a full database dump after any nonlogged transaction. If a nonlogged transaction occurred on the database, a quiesce would not be allowed unless a full database dump had been performed.

3.3 Cloning with EMC TimeFinder consistent split

EMC Solutions Enabler for TimeFinder (Version 5.x) can create a dependent-write consistent copy of a Sybase server or database with both consistent split and SRDF consistency groups. This EMC software technology allows restart recoverability of the Sybase DBMS server and all databases that were involved in the TimeFinder or SRDF procedures.

EMC TimeFinder allows a user to create a relationship between disk devices within a single Symmetrix unit that will maintain dependent-write consistency when split. Maintaining consistency between these devices during a BCV split is critical to ensuring a dependent-write consistent state and one that is transactionally consistent upon restart.

TimeFinder implements consistent split as an extension of instant split. Consistent split creates a DBMS-restartable BCV copy of the database without having to quiesce or shut down the database first. The impact on a production environment is negligible. TimeFinder can use ECA, with EMC Solutions Enabler Version 5.1 or later, to perform a consistent split across multiple, heterogeneous hosts without PowerPath support.

Figure 3-4 on page 3-10 depicts cloning a Sybase database using TimeFinder consistent split. The line associated with step 3 (beside the host) shows that I/Os are held from propagating to the standard volumes while the consistent split command is issued.

The consistent split, by default, performs an instant split while the actual disk split occurs in the background. Symmetrix Enginuity maintains the tracks that need to be destaged to the BCV. The last step of the consistent split is to release the I/Os to the standard volumes. The applications, databases, and users are unaware that a consistent split process has occurred. Dependent-write consistency is inherent in all database management systems with logging capabilities. DBMSs use this logic to maintain the dependent-write I/O concept, which states that a write I/O will not be issued by an application until a prior related write has been logged and physically resides on disk.

The BCVs contain a dependent-write consistent copy of the data, also known as a DBMS restartable copy. This means that a target Sybase server can be started using the BCVs. All databases will recover during restart operations, and any in-flight transactions will be rolled back. All transactions that were committed on the primary Sybase server will be recovered when Sybase is restarted using the dependent-write consistent copy. Figure 3-4 on page 3-10 and the following steps describe cloning a Sybase instance using TimeFinder consistent split.

Figure 3-4 Cloning Sybase with EMC TimeFinder consistent split

1. Establish the STD and BCV device(s). This synchronizes the STD/BCV pairs and the device track tables. Any changes to the STD device are being copied to the BCV device, essentially resynchronizing the mirrored pair. During the establish process, the BCV is not available to another host.

symmir -g device-group establish

Where:

device-group is the name of the device group containing the devices to be established.

2. Notice that the Sybase production server is up and running without impact to the production environment.

3. Issue the TimeFinder consistent split command. The consistent split command will automatically perform three functions in this order:

• Freeze all specified device I/O

• Perform TimeFinder instant split

• Release all specified device I/O

symmir -g device-group split -instant -rdb -dbtype Sybase -db device_group

Where:

device-group specifies the device group containing the devices that will be involved in the consistent split.

-rdb specifies that all devices belonging to the specified database will be frozen just before the instant split is performed, and thawed as soon as the foreground split completes.

-dbtype specifies that this is a Sybase database.

-db specifies the name of the database (device_group in this case).

Appendix E provides more examples on TimeFinder consistent split command usage, and sample screenshots from proof-of-concept testing.

4. Mount the BCV devices to a target or secondary host. If raw partitions are in use, simply use the UNIX mount command. If using an LVM, such as VxVM, then import and mount commands are required.

5. Restart the Sybase database server using the preconfigured Sybase RUN_server file. Sybase will go through its normal DBMS restart procedures, bringing each database online. Any in-flight transactions at the time of the consistent split will be rolled back. All committed transactions will be rolled forward upon server restart.

startserver -f RUN_serverfile
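
As an aside, when the device group itself already scopes all of the database devices, the split in step 3 can be requested as a consistent split directly on the group, without the database-mapping options; a sketch, assuming ECA or PowerPath is available to hold I/O:

symmir -g device_group split -instant -consistent -noprompt   # hold I/O, split, release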

3.4 Cloning with EMC SRDF consistency groups

The EMC SRDF consistency group is a facility for maintaining dependent-write consistency during remote replication processes. Symmetrix SRDF devices are specially configured to maintain data integrity even when the devices are spread across multiple Symmetrix systems. When creating a Sybase server instance or database clone using a consistency group (or ConGroup), the process is very similar to using TimeFinder consistent split. The difference is that multiple hosts and multiple Symmetrix units may be involved and configured in the consistency group.

A consistency group allows a user to define a relationship between devices that span multiple platforms. ConGroup environments can span open systems as well as mainframe environments. Sybase ASE will perform its normal DBMS recovery procedures when the target server is brought up from the R2 devices. In remotely mirrored environments that use SRDF and ConGroups, an explicit ConGroup trip can be used to create a consistent image of the Sybase environment on the R2 devices. The ConGroup can then be immediately resumed using the remote split option to trigger an instant split of the R2s and BCVs before resuming the ConGroup. Figure 3-5 on page 3-12 depicts how a Sybase clone is created using a ConGroup.

Figure 3-5 Cloning Sybase with EMC SRDF consistency groups

The ellipse in the figure depicts the consistency group relationship. The solid line on the (R2) remote Symmetrix unit depicts a split state of the devices.

1. Establish the R2 devices to the remote BCVs. This synchronizes the R2/BCV pairs and the device track tables.

symrdf -cg mycongroup establish

Where:

mycongroup is the name of the Consistency Group containing the devices to be established.

2. Perform a ConGroup suspend.

symrdf -cg mycongroup suspend

This command suspends the consistency group performing an explicit ConGroup trip. Only the I/O for the devices in the ConGroup is held. I/O for all other devices defined on that RDF link is unaffected.

3. Issue a remote split command. This will split the R2s and BCVs on the remote Symmetrix system. If a consistency group configuration includes BCV devices that are already synchronized with the R2 target devices using symmir establish commands, you can trip the consistency group manually from a local host using a suspend operation, and then split all BCV pairs on each Symmetrix unit at the target site.

symmir -g RemoteDeviceGroup split -rdf

Where:

-g specifies the device group on the remote Symmetrix. The -rdf option tells SYMCLI to split the remote BCV pairs.

4. Immediately resume the RDF links between the SRDF pairs in the consistency group and I/O traffic between the R1 devices and their paired R2 devices. Use the following command:

symrdf -cg mycongroup resume

3.4.1 Populating consistency group definitions

Creating and populating the ConGroup is simple. The following commands and syntax explain the procedure:

Create the consistency group by first defining its name:

symcg create mycongroup

Where:

symcg is the Solutions Enabler ConGroup command.

mycongroup is the name that the user is assigning (or creating) for the ConGroup.

Next, add devices to the ConGroup. The device being added in the following command is a PowerPath power device:

symcg -cg mycongroup add pd /dev/emcpower61c

Devices can also be added to the ConGroup in the following format specifying the physical device number:

symcg -cg mycongroup add dev 00C -sid 2603

Devices can be added by identifying the SID (Symmetrix ID number). In the following example, all devices defined to the host on SID 6033 will be added to the ConGroup.

symcg -cg mycongroup addall -sid 012000426033

Translating device groups allows a user to quickly assign an existing Symmetrix device group to a consistency group:

symdg dg2cg existingdevicegroup mycongroup

Where:

existingdevicegroup is the name of a device group assigned on the host and recognized by symdg.

mycongroup is the name that the user assigned (or created) for the ConGroup.

EMC Solutions Enabler mapping functionality allows a user to define a ConGroup containing every device that is defined for the specified database.

symrdb -type DbType -db DbName rdb2cg mycongroup -cgtype RDF1

Where:

DbType is Sybase.

DbName is the name of the Sybase database for which the user wants to create the ConGroup.

-cgtype assigns this ConGroup to be of type RDF1.

To translate a VERITAS volume group:

symvg vg2cg logicalvolgroup mycongroup -cgtype RDF1

Where:

logicalvolgroup is the name of the VERITAS volume group whose devices will be translated.

mycongroup is the name of the ConGroup that receives the translated devices.

3.4.2 Propagating consistency group definitions

Implementing a ConGroup requires that all of the associated device groups be defined exactly the same across host platforms. Propagating the ConGroup definitions helps make the creation of device groups across platforms simple and quick.

symcg export mycongroup -f filename

The -f option exports all the devices defined in the ConGroup named mycongroup to a text file named filename.

The following command imports all devices defined in the file named filename to mycongroup.

symcg import mycongroup -f filename

3.4.3 Creating a consistency group explicit trip

Suspend the ConGroup for an explicit trip:

symrdf -cg mycongroup suspend

Display information about mycongroup:

symrdf -cg mycongroup query

Perform an instant split on all R2s and BCV devices in mycongroup:

symmir -g devicegroup split -rdf

Immediately resume the RDF link to continue I/O propagation to the devices in the ConGroup.

symrdf -cg mycongroup resume

Verify that mycongroup is in a synchronized state.

symrdf -cg mycongroup verify

3.4.4 Cloning considerations for ConGroup

The Sybase ASE database server must be restarted at the target (or remote) site after the ConGroup is tripped or suspended.

All volumes needing consistency must be included in the ConGroup definition, whether a single DBMS or multiple DBMSs.

3.5 Summary of Sybase cloning techniques

Table 3-1 on page 3-15 lists Sybase ASE versions along with the functionality provided by EMC TimeFinder and SRDF in support of cloning a Sybase instance and/or database.

Table 3-1 Sybase and EMC cloning summary

Sybase ASE version  Cloning via Sybase shutdown (Symmetrix)  Cloning via Sybase quiesce  Cloning via TimeFinder consistent split  Cloning via EMC consistency groups
11.9.x              YES                                      NO                          YES                                      YES
12.0.1              YES                                      YES                         YES                                      YES
12.5                YES                                      YES                         YES                                      YES

Chapter 4 Backup Considerations for Sybase Environments

This chapter presents these topics:

4.1 Backup using Sybase Backup Server 4-2
4.2 Backup using Standby Access 4-3
4.3 Backup using quiesce for external dump 4-4

This chapter outlines technical detail on the various mechanisms and software functionality for creating a backup copy of a Sybase database using the Sybase Backup Server utility. Sybase databases can be backed up either to intelligent disk or to tape media.

Backup considerations start with the creation of a business continuance strategy and deployment of the appropriate disaster recovery and disaster restart technologies to support that strategy. Disaster recovery and restart technologies differ in terms of recovery times, as shown in the following table.

Table 4-1 Recovery times of disaster recovery and restart technologies

Available technology           Application recovery time  Disaster restart/recovery
Recovery from backup tapes     Days                       Recovery
Recovery from backup disks     Hours                      Recovery
Recovery from mirrored disks   Minutes                    Restart
Mirroring and HA applications  Instantaneous              Restart

4.1 Backup using Sybase Backup Server

A database backup is only useful if it can be restored from the backup media, and a DBMS-restartable or recoverable copy of the database can be created. Sybase users commonly use the Sybase Backup Server utility or third-party software products such as BMC's SQL-BackTrack to create backup copies of Sybase databases.

Basic features and functionality of Sybase Backup Server were discussed in section 1.1.3.

A Sybase dump database command makes a backup copy of the entire database, including the transaction log, in a form that can be restored with load database. Dumps and loads are performed through the Backup Server utility. The dump transaction command makes a copy of a transaction log and removes the inactive portion. The inactive portion contains transactions from the database log file that have been committed to disk.

The load database command loads a backup copy of a user database, including its transaction log. The load transaction command loads a backup copy of the transaction log; this is used for incremental backups. Users are encouraged by Sybase to create a backup schedule combining database and transaction dumps in order to ensure a full database recovery, in the event that a database must be rebuilt using the Backup Server utility.

The following is an example of a common Sybase 12.x application environment backup strategy using the Sybase Backup Server utility.


12:00 A.M. (midnight): Perform a full database backup.

dump database database_name to dump_device_1

Where:

database_name is an existing database. All tables, indexes, and database structures have been defined and reside on the ASE server in the Master database table sysdatabases.

dump_device_1 has been defined, the device is initialized, and the device information resides in the Master database table sysdevices.

6:00 A.M.: Perform an incremental transaction log dump to capture all overnight activity (for example, batch job updates).

dump transaction database_name to dumptran_device_1

This command makes a copy of the transaction log, capturing all incremental activity since the full database dump at midnight, and then truncates the inactive portion of the log. It is safe to remove committed transactions from the database log file because they are now recorded in this transaction dump.

12:00 P.M. (noon): Repeat the incremental transaction log dump.

dump transaction database_name to dumptran_device_2

This is a repeat of the dump transaction that was performed at 6:00 A.M. This ensures that all DBMS activity (such as morning OLTP input) has been captured and can be restored in the event of a database disaster.

6:00 P.M.: Repeat the incremental transaction log dump.

dump transaction database_name to dumptran_device_3

This final transaction dump of the day captures all incremental transactions since noon.
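
This schedule lends itself to automation with cron. The following is a minimal sketch; the wrapper scripts are hypothetical and are assumed to issue the dump commands shown above through isql:

# Hypothetical crontab entries implementing the schedule above.
# Each script is assumed to wrap an isql call to the appropriate
# dump database or dump transaction command.
0 0  * * * /usr/Sybase/scripts/full_dump.sh     # midnight: full database dump
0 6  * * * /usr/Sybase/scripts/tran_dump.sh 1   # 6:00 A.M.: transaction dump
0 12 * * * /usr/Sybase/scripts/tran_dump.sh 2   # noon: transaction dump
0 18 * * * /usr/Sybase/scripts/tran_dump.sh 3   # 6:00 P.M.: transaction dump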

Restoring a Sybase database or instance is discussed in detail in Chapter 5, “Sybase Recovery Procedures.”

4.2 Backup using Standby Access

The Standby Access feature allows a user to create a copy of the production database on a target server, and continually apply transaction logs to keep the database current. This target database may be used for decision support (reporting) as a read-only database. At any time, should it become necessary to use the database for failover, issue the online database command to make it a read/write production database. Furthermore, in the event of a production database disaster, the BCV copy of the database may be restored to the primary server using TimeFinder, and then enabled for read/write production use.

Dump transaction with standby_access allows a user to keep a copy of a production database current by continuously applying transaction logs using load transaction.


After each execution of the load transaction command, the target database can be brought online using the for standby_access option so that it may be used for decision support (reporting) as a read-only database. At any time, should it become necessary to use the database for failover, issue the ordinary online database command without the for standby_access syntax to enable read/write production use.

Dump transaction with standby_access dumps only completed transactions to the specified dump device. It dumps the transaction log up to its most recent location where two conditions exist:

♦ A transaction has just completed.

♦ There are no other active transactions.

After load transaction, these conditions will allow online database for standby_access to roll forward the events in the transaction log without writing the new log records that would have been required to roll back incomplete transactions. Because no new log records are generated in the target instance of the database, a subsequent load transaction is still allowed.

The first load transaction in a load sequence requires a preceding load database. A clone of a database produced by a BCV cannot receive a load transaction without first receiving a load database.
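
Putting these pieces together, the following is a minimal sketch of one cycle of the standby access method; the database name (proddb) and dump device (dumptran_dev) are hypothetical:

-- On the primary server: dump only completed transactions
dump transaction proddb with standby_access to dumptran_dev

-- On the standby server: apply the log, then bring the copy online read-only
load transaction proddb from dumptran_dev
online database proddb for standby_access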

The required steps for using Sybase Standby Access method with EMC TimeFinder are described in detail in Appendix C.

4.3 Backup using quiesce for external dump

The quiesce database hold for external dump command allows the user to quiesce databases to a consistent state, split point-in-time clones of production databases, and continuously apply transactions to the secondary server to achieve warm standby databases.

These database clones can be used as a substitute for backing up the database on the primary server (via dump database), and loading the database on the secondary server (via load database). Sybase ASE (version 12.5 and later) allows users to continuously apply transaction logs from the primary to the secondary server under the following conditions:

♦ On the primary server, use quiesce database hold for external dump

♦ On the secondary server, the database server must be started with the –q option

When the secondary server is started with –q, any user database that has been split off in a quiescent state using the external dump command will be recovered in the same way as with the load database command. Once recovery is complete, the databases on the secondary server remain offline, and subsequent transaction logs are permitted to roll forward.
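
The following minimal sketch shows how the command sequence and a TimeFinder split fit together; the tag (tag1), database (proddb), and device group (device_group) names are hypothetical:

-- On the primary server: suspend writes and mark the copy as an external dump
quiesce database tag1 hold proddb for external dump

-- While the database is quiesced, split the BCVs from the host shell:
--     symmir -g device_group split -noprompt

-- On the primary server: resume normal write activity
quiesce database tag1 release

-- Start the secondary ASE server with -q; the BCV copy is then recovered
-- as if by load database and left offline, ready for load transaction.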


In this way, databases on the secondary server can act as warm standby databases, periodically refreshed from the production host. The secondary clones do not require server rebooting and the copies can be kept up to date at all times.

Table 4-2 on page 4-5 summarizes the use of the –q option combined with the quiesce database hold for external dump command.

Table 4-2 Backup operation using the –q option

Operation Option Result

quiesce database hold for external dump With –q Can achieve warm standby databases

quiesce database hold for external dump Without –q Cannot apply a transaction log to the secondary (BCV) host

quiesce database hold (no external dump option) With –q Cannot apply a transaction log to the secondary (BCV) host

quiesce database hold (no external dump option) Without –q Cannot apply a transaction log to the secondary (BCV) host

The required steps for using –q with the for external dump option and EMC TimeFinder are described in detail in Appendix D, “Using Sybase quiesce for external dump with TimeFinder.”


Chapter 5 Sybase Recovery Procedures

This chapter presents these topics:

5.1 Restoring with Sybase Backup Server................................................................5-2
5.2 Restoring with standby access ............................................................................5-2
5.3 Restoring with quiesce for external dump ..........................................................5-3
5.4 Summary.............................................................................................................5-4


Database recovery is the process of rebuilding a database from a backup image, and then explicitly applying subsequent logs to roll the data state forward to a designated point of consistency.

This section covers the techniques and technical details surrounding the use of the Sybase Backup Server for restoring a Sybase database.

Chapter 6 discusses disaster restart and disaster recovery in more detail. It describes the various technical methods and procedures for restarting or restoring a Sybase database using various EMC software and technologies.

5.1 Restoring with Sybase Backup Server

Building on the backup procedure described in the previous chapter, in the event of a database disaster the dump database and dump transaction procedures have captured up to the last 18 hours of data processing. The database and transaction log dumps can be restored using the Backup Server. The following steps restore a Sybase database using the Backup Server load process.

1. Restore all data that was captured at the time of the full database dump (it will be loaded to the database from the dump device):

load database database_name from dump_device_1

2. Restore all transactions that were backed up from the log file at the time of the transaction log dump (from midnight until 6:00 A.M.):

load transaction database_name from dumptran_device_1

3. Restore all transactions that were backed up from the log file at the time of the transaction log dump (from 6:00 A.M. to 12:00 noon):

load transaction database_name from dumptran_device_2

4. Restore all transactions that were backed up from the log file at the time of the transaction log dump (from 12:00 noon to 6:00 P.M.):

load transaction database_name from dumptran_device_3

When the load process completes, the database has effectively been restored and recovered. At midnight, a full database dump is taken again, starting the cycle over for the next day.
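
One detail worth noting: a database loaded in this way remains offline until it is explicitly brought online. Once the final transaction dump has been applied, and assuming no further dumps are to be loaded, issue:

online database database_name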

5.2 Restoring with standby access

As discussed in section 6.12.3, the standby access feature allows a user to create a copy of the production database on a target server, and continually apply transaction logs to keep the database current.

The dump transaction with standby_access command allows a user to keep a copy of a production database current by continually applying transaction logs using the load transaction command.


After each execution of the load transaction, the target database can be brought online using the for standby_access option so that it may be used as a read-only database. At any time, should it become necessary to use the standby database, issue the command online database without the for standby_access syntax to enable read/write production use.

Examples:

online database dbname for standby_access

This command brings the database online and in a read-only state. The database is expecting that more transaction logs will be loaded into it.

online database dbname

This command brings the database online permanently and in a read/write state. This database will no longer accept transaction logs from the original primary database. If this database is truly a new primary, then a full backup using the dump database command must be performed to create a backup copy.

The required steps for using the Sybase Standby Access method with EMC SRDF are described in detail in section 6.12.3, “Log shipping and standby access.”

Note that the standby access feature can be used with either TimeFinder or SRDF. However, with TimeFinder the database backup resides on the local host, that is, the same host as the primary database. If that host becomes unavailable for any reason, the backup copy is inaccessible as well. For either restartability or recoverability of the database, SRDF is considered the best practice.

5.3 Restoring with quiesce for external dump

The quiesce database hold for external dump command allows the user to quiesce databases to a consistent state, split point-in-time replicas of production databases, and continuously apply transactions to the secondary server to achieve warm standby databases.

When the secondary server is started with –q, any database that has been split off in a quiescent state using the external dump command will be recovered in the same way as with the load database command. Once recovery is complete, the database remains offline on the secondary server, and subsequent transaction logs are permitted to roll forward.
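
As a minimal sketch (the server name and paths are hypothetical), the RUN file for the secondary server might start ASE as follows:

# RUN_SYB_SECONDARY: start the secondary ASE server in quiesce-recovery mode.
# -d names the master device, -e the error log, -s the server name; -q tells
# ASE to recover databases split off "for external dump" and leave them offline.
/usr/Sybase/ASE-12_5/bin/dataserver -d /usr/Sybase/data/master.dat \
    -e /usr/Sybase/ASE-12_5/install/errorlog -s SYB_SECONDARY -q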

In this way, databases on the secondary server can act as warm standby databases, and be periodically refreshed from the production host. The secondary replicas do not require server rebooting and the copies can be kept up to date at all times.

The required steps for using –q with the for external dump option and SRDF are described in detail in section 6.12.4, “Log shipping and quiesce for external dump.”


Note that this feature can be used with either TimeFinder or SRDF. However, with TimeFinder the database backup resides on the local host, that is, the same host as the primary database. If that host becomes unavailable for any reason, the backup copy is inaccessible as well. For either restartability or recoverability of the database, SRDF is considered the best practice.

5.4 Summary

The Sybase standby access and quiesce for external dump methods can easily be confused. The main difference is that quiesce for external dump does not require a full database dump to start the process. Transaction log dumps may be taken on the primary database and applied to the target, as long as the target Sybase instance has been started with the –q flag.

With the Sybase standby access feature, the user continually applies the transaction log dumps to the standby database. When it becomes necessary to promote the target database to primary database, the user must perform a full database dump on the new primary in order to begin (or reinitialize) the backup process.


Chapter 6 Understanding Disaster Restart and Disaster Recovery

This chapter presents these topics:

6.1 Definitions ..........................................................................................................6-3
6.2 Design considerations for disaster recovery and disaster restart ........................6-4
6.3 Tape-based solutions ..........................................................................................6-8
6.4 Remote replication challenges ............................................................................6-9
6.5 Array-based remote replication ........................................................................6-12
6.6 Planning for array-based replication.................................................................6-13
6.7 SRDF/S single Symmetrix to single Symmetrix ..............................................6-14
6.8 SRDF/S and consistency groups .......................................................................6-17
6.9 SRDF/A ............................................................................................................6-21
6.10 SRDF/AR single hop ........................................................................................6-25
6.11 SRDF/AR multihop ..........................................................................................6-27
6.12 Database log shipping solutions .......................................................................6-29
6.13 Running database solutions ..............................................................................6-35


A critical part of managing a database is planning for unexpected loss of data. The loss can occur from a disaster such as a fire or flood, or it can come from hardware or software failures. It can even result from human error or malicious intent. In each instance, the database must be restored to some usable point before application services can be resumed.

The effectiveness of any plan for restart or recovery involves answering the following questions:

♦ How much downtime is acceptable to the business?

♦ How much data loss is acceptable to the business?

♦ How complex is the solution?

♦ Does the solution accommodate the data architecture?

♦ How much does the solution cost?

♦ What disasters does the solution protect against?

♦ Is there protection against logical corruption?

♦ Is there protection against physical corruption?

♦ Is the database restartable or recoverable?

♦ Can the solution be tested?

♦ If failover happens, will failback work?

All restart and recovery plans include a replication component. In its simplest form, the replication process may be as easy as making a tape copy of the database and application. In a more sophisticated form, it could be real-time replication of all changed data to some remote location. Remote replication of data has its own challenges centered around:

♦ Distance

♦ Propagation delay (latency)

♦ Network infrastructure

♦ Data loss

This section provides an introduction to the spectrum of disaster recovery and disaster restart solutions for Sybase databases on EMC Symmetrix arrays.


6.1 Definitions

In the following sections, the terms dependent-write consistency, database restart, database recovery, and roll-forward recovery are used. A clear definition of these terms is required to understand the context of this section.

6.1.1 Dependent-write consistency

A dependent-write I/O is one that cannot be issued until a related predecessor I/O has completed. Dependent-write consistency is a data state where data integrity is guaranteed by dependent-write I/Os embedded in application logic. Database management systems are good examples of the practice of dependent-write consistency.

Database management systems must devise protection against abnormal termination in order to successfully recover from one. The most common technique used is to guarantee that a dependent write cannot be issued until the predecessor write has completed. Typically, the dependent write is a data or index write, while the predecessor write is a write to the log. Because the write to the log must complete before the dependent write is issued, the application thread is synchronous to the log write; that is, it waits for that write to complete before continuing. The result of this strategy is a dependent-write consistent database.

6.1.2 Database restart

Database restart is the implicit application of database logs during the database’s normal initialization process to ensure a transactionally consistent data state.

If a database is shut down normally, the process of getting to a point of consistency during restart requires minimal work. If the database abnormally terminates, then the restart process will take longer depending on the number and size of in-flight transactions at the time of termination. An image of the database created by using EMC consistency technology while it is running, without conditioning the database, will be in a dependent-write consistent data state, which is similar to that created by a local power failure. This is also known as a DBMS restartable image. The restart of this image transforms it to a transactionally consistent data state by completing committed transactions and rolling back uncommitted transactions during the normal database initialization process.

6.1.3 Database recovery

Database recovery is the process of rebuilding a database from a backup image, and then explicitly applying subsequent logs to roll the data state forward to a designated point of consistency. Database recovery is only possible with databases configured with archive logging.

A recoverable Sybase database copy can be taken in one of three ways:

♦ With the database shut down and copying the database components using external tools

♦ With the database running using Sybase backup tools


♦ With the database in a “quiesced state” and copying the database using external tools.

6.1.4 Roll-forward recovery

With some databases, it may be possible to take a DBMS restartable image of the database, and apply subsequent archive logs, to roll forward the database to a point-in-time after the image was created. This means that the image created can be used in a backup strategy in combination with archive logs. A DBMS restartable image of Sybase can use subsequent logs to roll forward transactions, only if the Sybase quiesce for external dump functionality is properly invoked. The quiesce database hold for external dump command allows the user to quiesce databases to a consistent state, split point-in-time clones of production databases, and continuously apply transactions to the secondary server to achieve warm standby databases. Section 5.3 provides specific details regarding this functionality.

6.2 Design considerations for disaster recovery and disaster restart

Loss of data or loss of application availability has a varying impact from one business type to another. For instance, the loss of transactions for a bank could cost millions, whereas system downtime may not have a major fiscal impact. On the other hand, businesses that are primarily web-based may require 100 percent application availability simply to survive. These two factors, loss of data and loss of uptime, are the business drivers that form the baseline requirements for a DR solution. When quantified, they are more formally known as Recovery Point Objective (RPO) and Recovery Time Objective (RTO), respectively.

When evaluating a solution, the RPO and RTO requirements of the business need to be met. In addition, the solution must take into consideration operational complexity, cost, and the ability to return the whole business to a point of consistency. Each of these aspects is discussed in the following sections.

6.2.1 Recovery Point Objective

The RPO is a point of consistency to which a user wants to recover or restart. It is measured in the amount of time from when the point of consistency was created or captured to the time the disaster occurred. This time equates to the acceptable amount of data loss. Zero data loss (no loss of committed transactions from the time of the disaster) is the ideal goal but the high cost of implementing such a solution must be weighed against the business impact and cost of a controlled data loss.

Some organizations, like banks, have zero data loss requirements. The database transactions entered at one location must be replicated immediately to another location. This can have an impact on application performance when the two locations are far apart. On the other hand, keeping the two locations close to one another might not protect against a regional disaster like the Northeast power outage or the hurricanes in Florida.

Defining the required RPO is usually a compromise between the needs of the business, the cost of the solution, and the risk of a particular event happening.


6.2.2 Recovery Time Objective

The RTO is the maximum amount of time allowed for recovery or restart to a specified point of consistency. This involves many factors, including the time taken to:

♦ Provision power, utilities, and such.

♦ Provision servers with the application and database software.

♦ Configure the network.

♦ Restore the data at the new site.

♦ Roll forward the data to a known point of consistency.

♦ Validate the data.

Some delays can be reduced or eliminated by choosing certain DR options like having a hot site where servers are preconfigured and on standby. Also, if storage-based replication is used, the time taken to restore the data to a usable state is completely eliminated.

As with RPO, each solution for RTO will have a different cost profile. Defining the RTO is usually a compromise between the cost of the solution and the cost to the business when database and applications are unavailable.

6.2.3 Operational complexity

The operational complexity of a DR solution may be the most critical factor in determining the success or failure of a DR activity. The complexity of a DR solution can be considered as three separate phases:

1. Initial setup of the implementation

2. Maintenance and management of the running solution

3. Execution of the DR plan in the event of a disaster

While initial configuration complexity and running complexity can be a demand on human resources, the third phase, execution of the plan, is where automation and simplicity must be the focus. When a disaster is declared, key personnel may be unavailable, in addition to the loss of servers, storage, networks, buildings, and so on. If the complexity of the DR solution is such that skilled personnel with an intimate knowledge of all systems involved are required to restore, recover, and validate application and database services, the solution has a high probability of failure.

Multiple database environments grow organically over time into complex federated database architectures. In these federated database environments, reducing the complexity of DR is absolutely critical. Validation of transactional consistency within a complex database architecture is time consuming, costly, and requires application and database familiarity. One reason for this is the heterogeneous mix of databases and operating systems in these federated environments. Across multiple heterogeneous


platforms it is hard to establish a common clock, and therefore hard to determine a business point of consistency across all platforms. This business point of consistency has to be created from intimate knowledge of the transactions and data flows.

6.2.4 Source server activity

DR solutions may or may not require additional processing activity on the source servers. The extent of that activity can impact both response time and throughput of the production application. This effect should be understood and quantified for any given solution to ensure the impact to the business is minimized. The effect for some solutions is continuous while the production application is running; for other solutions, the impact is sporadic, where bursts of write activity are followed by periods of inactivity.

6.2.5 Production impact

Some DR solutions delay the host activity while taking actions to propagate the changed data to another location. This delay affects only write activity and, although it may be on the order of only a few milliseconds, it can impact response time in a high-write environment. Synchronous solutions introduce delay into write transactions at the source site; asynchronous solutions do not.

6.2.6 Target server activity

Some DR solutions require a target server at the remote location to perform DR operations. The server has both software and hardware costs and needs personnel with physical access to it for basic operational functions like power on and power off. Ideally, this server could have some usage like running development or test databases and applications. Some DR solutions require more target server activity and some require none.

6.2.7 Number of copies of data

DR solutions require replication of data in one form or another. Replication of a database and associated files can be as simple as making a tape backup and shipping the tapes to a DR site or as sophisticated as asynchronous array-based replication. Some solutions require multiple copies of the data to support DR functions. More copies of the data may be required to perform testing of the DR solution in addition to those that support the DR process.

6.2.8 Distance for solution

Disasters, when they occur, have differing ranges of impact. For instance, a fire may take out a building, an earthquake may destroy a city, or a tidal wave may devastate a region. The level of protection for a DR solution should address the probable disasters for a given location. For example, when protecting against an earthquake, the DR site should not be in the same locale as the production site. For regional protection, the two sites need to be in two different regions. The distance associated with the DR solution affects the kind of DR solution that can be implemented.


6.2.9 Bandwidth requirements

One of the largest costs for DR is in provisioning bandwidth for the solution. Bandwidth costs are an operational expense; this makes solutions with reduced bandwidth requirements very attractive to customers. It is important to recognize the bandwidth consumption of a given solution in order to anticipate the running costs. Incorrect provisioning of bandwidth for DR solutions can have an adverse effect on production performance and can invalidate the overall solution.

6.2.10 Federated consistency

Databases are rarely isolated islands of information with no interaction or integration with other applications or databases. Most commonly, databases are loosely and/or tightly coupled to other databases using triggers, database links, and stored procedures. Some databases provide information downstream for other databases using information distribution middleware; other databases receive feeds and inbound data from message queues and EDI transactions. The result can be a complex interwoven architecture with multiple interrelationships. This is referred to as a federated database architecture.

With federated database architectures, making a DR copy of a single database without regard to other components invites consistency issues and creates logical data integrity problems. All components in a federated architecture need to be recovered or restarted to the same dependent-write consistent point of time to avoid these problems.

With this in mind, it is possible that point database solutions for DR, such as log shipping, do not provide the required business point of consistency in a federated database architecture. Federated consistency solutions guarantee that all components, databases, applications, middleware, flat files, and such, are recovered or restarted to the same dependent-write consistent point in time.

6.2.11 Testing the solution

Tested, proven, and documented procedures are also required for a DR solution. Often, the DR test procedures are operationally different from the set of procedures used in a true disaster. Operational procedures need to be clearly documented. In the best-case scenario, companies should periodically execute the actual set of DR procedures. This can be costly to the business because of the application downtime required to perform such a test, but it is necessary to ensure the validity of the DR solution.

6.2.12 Cost

The cost of doing DR can be justified by comparing it to the cost of not doing it. What does it cost the business when the database and application systems are unavailable to users? For some companies this is easily measurable, and revenue loss can be calculated per hour of downtime or per hour of data loss.


Whatever the business, the DR cost is going to be an extra expense item and, in many cases, with little in return. The costs include, but are not limited to, the following:

♦ Hardware (storage, servers and maintenance)

♦ Software licenses and maintenance

♦ Facility leasing/purchase

♦ Utilities

♦ Network infrastructure

♦ Personnel

6.3 Tape-based solutions

6.3.1 Tape-based disaster recovery

Traditionally, the most common form of disaster recovery was to make a copy of the database onto tape and, using PTAM (the Pickup Truck Access Method), take the tapes offsite to a hardened facility. In most cases, the database and application needed to be available to users during the backup process. Taking a backup of a running database created a “fuzzy” image of the database on tape, one that required database recovery after the image had been restored. Recovery usually involved applying the logs that were active while the backup was in process. These logs had to be archived and kept with the backup image to ensure successful recovery.

The rapid growth of data over the last two decades has meant that this method has become unmanageable. Making a “hot” copy of the database is now the standard, but this method has its own challenges. How can a consistent copy of the database and supporting files be made when they are changing throughout the duration of the backup? What exactly is the content of the tape backup at completion? The reality is that the tape data is a “fuzzy image” of the disk data, and considerable expertise is required to restore the database back to a database point of consistency.

In addition, the challenge of returning the data to a business point of consistency, where a particular database must be recovered to the same point as other databases or applications, is making this solution less viable.

6.3.2 Tape-based disaster restart

Tape-based disaster restart is a recent development in disaster recovery strategies and is used to avoid the “fuzziness” of a backup taken while the database and application are running. A “restart” copy of the system data is created by locally mirroring the disks that contain the production data, and splitting off the mirrors to create a dependent-write consistent point-in-time image of the disks. This image is a DBMS restartable image as described earlier. Thus, if this image was restored and the database brought up, the database would perform an implicit recovery to attain transactional consistency. Roll-forward recovery using archived logs from this database image is not possible with


Sybase ASE without conditioning the database prior to the consistent split. This conditioning process is described in section 3.3.

The restartable image on the disks can be backed up to tape and moved offsite to a secondary facility. If this image is created and shipped offsite on a daily basis, the maximum amount of data loss is 24 hours.

The time taken to restore the database is a factor to consider since reading from tape is typically slow. Consequently, this solution can be effective for customers with relaxed RTOs.

6.4 Remote replication challenges

Replicating database information over long distances for the purpose of disaster recovery is challenging. Synchronous replication over distances greater than 200 km may not be feasible because propagation delay degrades write performance; some form of asynchronous replication must be adopted. The considerations in this section apply to all forms of remote replication technology, whether array-based, host-based, or managed by the database.

Remote replication solutions usually start with initially copying a full database image to the remote location. This is called instantiation of the database. There are a variety of ways to perform this. After instantiation, only the changes from the source site are replicated to the target site in an effort to keep the target up to date. Some methodologies may not send all of the changes (certain log-shipping techniques for instance), by omission rather than design. These methodologies may require periodic reinstantiation of the database at the remote site.

The following considerations apply to remote replication of databases:

♦ Propagation delay (latency due to distance)

♦ Bandwidth requirements

♦ Network infrastructure

♦ Method of instantiation

♦ Method of re-instantiation

♦ Change rate at the source site

♦ Locality of reference

♦ Expected data loss

♦ Failback operations

6.4.1 Propagation delay

Signals in electronic and optical networks propagate at, or near, the speed of light. The speed of light in a vacuum is 186,000 miles per second. The speed of light through glass (in the case of fiber-optic


media) is less, approximately 115,000 miles per second. In other words, in an optical network like SONET for instance, it takes 1 millisecond to send a data packet 125 miles or 8 milliseconds for 1000 miles. All remote replication solutions need to be designed with a clear understanding of the propagation delay impact.
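
As a rough worked example (ignoring protocol overhead and equipment latency), a synchronous write must make a round trip before it is acknowledged:

round-trip delay ≈ (2 × distance) / 115,000 miles per second

2 × 125 miles / 115,000 mi/s ≈ 2.2 ms added to each write
2 × 1,000 miles / 115,000 mi/s ≈ 17 ms added to each write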

6.4.2 Bandwidth requirements

All remote replication solutions have some bandwidth requirements because the changes from the source site must be propagated to the target site. The more changes there are, the greater the bandwidth that is needed. It is the change rate and replication methodology that determine the bandwidth requirement, not necessarily the size of the database.

Data compression can help reduce the quantity of data transmitted and therefore the size of the “pipe” required. Certain network devices, like switches and routers, provide native compression, some by software and some by hardware. GigE directors provide native compression in a DMX to DMX SRDF pairing. The amount of compression achieved depends on the type of data that is being compressed. Typical character and numeric database data compresses at about a 2 to 1 ratio. A good way to estimate how the data will compress is to assess how much tape space is required to store the database during a full backup process. Tape drives perform hardware compression on the data prior to writing it. For instance, if a 300 GB database takes 200 GB of space on tape, the compression ratio is 1.5 to 1.

For most customers, a major consideration in the disaster recovery design is cost. It is important to recognize that some components of the end solution represent a capital expenditure and some an operational expenditure. Bandwidth costs are operational expenses and thus any reduction in this area, even at the cost of some capital expense, is highly desirable.

6.4.3 Network infrastructure

The choice of channel extension equipment, network protocols, switches, routers, and such, ultimately determines the operational characteristics of the solution. EMC has a proprietary “BC Design Tool” to assist customers in analysis of the source systems and to determine the required network infrastructure to support a remote replication solution.

6.4.4 Method of instantiation

In all remote replication solutions, a common requirement is for an initial, consistent copy of the complete database to be replicated to the remote site. The initial copy from source to target is called instantiation of the database at the remote site. Following instantiation, only the changes made at the source site are replicated. For large databases, sending only the changes after the initial copy is the only practical and cost-effective solution for remote database replication.

In some solutions, instantiation of the database at the remote site uses a process that is similar to the one that replicates the changes. Some solutions do not even provide for instantiation at the remote site (log shipping for instance). In all cases, it is critical to understand the pros and cons of the complete solution.


6.4.5 Method of reinstantiation

Some methods of remote replication require periodic refreshing of the remote system with a full copy of the database. This is called reinstantiation. Technologies such as log shipping frequently require this since not all activity on the production database may be represented in the log. In these cases, the disaster recovery plan must account for reinstantiation and also for the fact there may be a disaster during the refresh. The business objectives of RPO and RTO must likewise be met under those circumstances.

6.4.6 Change rate at the source site

After instantiation of the database at the remote site, only changes to the database are replicated remotely. There are many methods of replicating to the remote site, each with differing operational characteristics; examples include log shipping and hardware or software mirroring. Before designing a solution with remote replication, it is important to quantify the average change rate. It is also important to quantify the change rate during periods of burst write activity. These periods might correspond to end of month/quarter/year processing, billing, or payroll cycles. The solution needs to be designed to allow for peak write workloads.

6.4.7 Locality of reference

Locality of reference is a factor that needs to be measured to understand if there will be a reduction of bandwidth consumption when any form of asynchronous transmission is used. Locality of reference is a measurement of how much write activity on the source is skewed. For instance, a high locality of reference application may make many updates to a few tables in the database, whereas a low locality of reference application rarely updates the same rows in the same tables during a given period of time.

It is important to understand that while the activity on the tables may have a low locality of reference, the write activity into an index might be clustered when inserted rows have the same or similar index column values, rendering a high locality of reference on the index components.

In some asynchronous replication solutions, updates are “batched” into periods of time, and sent to the remote site to be applied. In a given batch, only the last image of a given row/block is replicated to the remote site. So, for highly skewed application writes, this results in bandwidth savings. Generally, the greater the time period of batched updates, the greater the savings on bandwidth.

Log-shipping technologies do not take locality of reference into account. For example, a row updated 100 times is transmitted 100 times to the remote site, whether the solution is synchronous or asynchronous.

6.4.8 Expected data loss

Synchronous DR solutions are zero-data-loss solutions, that is to say, there is no loss of committed transactions from the time of the disaster. Synchronous solutions may also be impacted by a rolling disaster in which case work completed at the source site after the rolling disaster started may be lost. Rolling disasters are discussed in detail in a later section.


Nonsynchronous DR solutions have the potential for data loss. How much data is lost depends on many factors, most of which were defined earlier. The quantity of data loss that is expected for a given solution is the RPO. For asynchronous replication, where updates are batched and sent to the remote site, the maximum amount of data lost is two cycles' (two batches') worth: the cycle currently being captured on the source site and the one currently being transmitted to the remote site. With inadequate network bandwidth, data loss could increase due to the increase in transmission time.

6.4.9 Failback operations

If there is the slightest chance that failover to the DR site may be required, then there is a 100 percent chance that failback to the primary site will also be required, unless the primary site is lost permanently. The DR architecture should be designed in such a way as to make failback simple, efficient, and low risk. If failback is not planned for, there may be no reasonable or acceptable way to move the processing from the DR site, where the applications may be running on tier 2 servers and tier 2 networks, and such, back to the production site.

In a perfect world, the DR process should be tested once a quarter, with database and application services fully failed over to the DR site. The integrity of the application and database needs to be verified at the remote site to ensure that all required data was copied successfully. Ideally, production services are brought up at the DR site as the ultimate test. This means that production data would be maintained on the DR site, requiring a failback when the DR test completed. While this is not always possible, it is the ultimate test of a DR solution. It not only validates the DR process, but also trains the staff on managing the DR process should a catastrophic failure ever occur. The downside for this approach is that duplicate sets of servers and storage need to be present in order to make an effective and meaningful test. This tends to be an expensive proposition.

6.5 Array-based remote replication

Customers can use the capabilities of a Symmetrix storage array to replicate the database from the production location to a secondary location. No host CPU cycles are used for this, leaving the host dedicated to running the production application and database. In addition, no host I/O is required to facilitate the replication: the array takes care of it, and no hosts are required at the target location to manage the target array.

EMC provides multiple solutions for remote replication of databases:

♦ SRDF/S: Synchronous SRDF

♦ SRDF/A: Asynchronous SRDF

♦ SRDF/AR: SRDF Automated Replication

Each of these solutions is discussed in detail in the following sections. To use any of the array-based solutions, be sure to coordinate the disk layout of the databases with this kind of replication in mind.


6.6 Planning for array-based replication

All Symmetrix solutions replicating data from one array to another are disk-based. This allows the Symmetrix to be agnostic to the volume manager, file system, database system, and so forth. However, this does not mean that file system and volume manager concerns can be ignored. Effectively, the smallest level of granularity for disk-based replication is a volume group, in the case of UNIX. On Windows, the smallest unit could be a disk, a volume set, or a disk group, depending on how the disks are set up in Disk Manager.

In addition, if a database is to be replicated independently of other databases, it should have its own dedicated disks. That is, the disks used by a database should not be shared with other applications or databases.

In many cases, when a database is being restored, it is desired that only the data devices be restored and not the logs. An array-based restore copies the whole host volume, so if the current logs need to be preserved then they should be placed on separate volumes from the data devices. Logically the database can be divided into recovery structures and data. The recovery structures are those components that assist in recovery of the database after the restore of the data components.

Figure 6-1 on page 6-13 depicts the separation of all structures and components for a Sybase server in preparation for a TimeFinder implementation. This separation is useful for restoring the data and then applying the log to some known point of consistency. This is usually for local replication and recovery purposes but can be used for solutions that combine database and array-based replication solutions.

Figure 6-1 Database components for Sybase


When a set of volumes has been defined for a database for remote replication, care must be taken to ensure that the disks hold everything that is needed to restart the database at the remote site. Simply replicating a single database is not sufficient. As a minimum, the master database must be replicated, as well as the device containing the Sybase and Backup Server libraries and executables. Typically, the default path for the database home directory is /usr/Sybase. The sysdevices table in the master database contains all information relevant to each database and the devices on which they reside. For example, database device_group may reside on /dev/prod_data1, /dev/prod_data2 and /dev/prod_log. These device names must also reside on the remote host or the Sybase server will not be able to bring the database online upon restart. The best way to ensure that devices on the remote server are mapped with the same names as the primary is by creating symbolic links. For example:

ln -s /dev/rdsk/c4t0d12s2 prod_data1
ln -s /dev/rdsk/c4t0d13s2 prod_data2
ln -s /dev/rdsk/c4t0d14s2 prod_log

Where:

/dev/rdsk/c#t#d#s# is a device residing on the target host.

prod_* is the name of the database device (defined in master.sysdevices) for device_group.

The Sybase server is expecting to restart device_group on these devices; therefore, they must logically or physically reside on the host.
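
To confirm which device names and physical paths a server expects, the sysdevices table can be queried directly. A sketch follows; the isql login details are assumptions:

isql -Usa -Ppassword -SPROD_SERVER
1> select name, phyname from master..sysdevices
2> go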

6.7 SRDF/S single Symmetrix to single Symmetrix

Synchronous SRDF, or SRDF/S, is a method of replicating production data changes between locations that are no more than 200 km apart. Synchronous replication takes writes that are inbound to the source Symmetrix array and copies them to the target Symmetrix array. The write operation is not acknowledged as complete to the host until both Symmetrix arrays have the data in cache. It is important to realize that while the following examples involve Symmetrix, the fundamentals of synchronous replication described here hold true for all synchronous replication solutions. Figure 6-2 on page 6-15 depicts the process.


Figure 6-2 Synchronous replication internals

The following is an explanation of the process in Figure 6-2 on page 6-15:

1. A write is received into the source Symmetrix cache. At this time, the host has not received acknowledgement that the write is complete.

2. The source Symmetrix array uses SRDF/S to push the write to the target Symmetrix array.

3. The target Symmetrix array sends an acknowledgement back to the source that the write was received.

4. Ending status of the write is presented to the host.

These four steps cause a delay in the processing of writes as perceived by the database on the source server. The amount of delay depends on the exact configuration of the network, the storage, the write block size, and the distance between the two locations. Reads to the source Symmetrix are not affected by the replication.

The following steps outline the process of setting up synchronous replication using Solutions Enabler (SYMCLI) commands.

1. Before the synchronous mode of SRDF can be established, initial instantiation of the database must have taken place. In other words, a baseline full copy of all the volumes that are going to participate in the synchronous replication must be executed first. This is usually accomplished using the adaptive copy mode of SRDF. The following command creates a group named device_group:

symdg create device_group -type rdf1

2. Add devices to the group as described in Appendix B.
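
For illustration only (the Symmetrix device number 0042 is hypothetical), a device is added to the group with a command of the form:

symld -g device_group add dev 0042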


3. The following command puts the group device_group into adaptive copy mode:

symrdf -g device_group set mode acp_disk -noprompt

4. The following command causes the source Symmetrix to send all the tracks on the source site to the target site using the current mode:

symrdf -g device_group establish -full -noprompt

The adaptive copy mode of SRDF has no impact to host application performance. It transmits tracks to the remote site that have never been sent before or that have changed since the last time the track was sent. It does not preserve write order or dependent-write consistency.

5. When both sides are synchronized, SRDF can then be put into synchronous mode. The following command puts the device group into synchronous mode:

symrdf -g device_group set mode sync -noprompt
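
Before relying on the synchronous relationship, its state can be confirmed with standard SYMCLI query and verify operations, for example:

# Report the RDF pair state for every device in the group
symrdf -g device_group query

# Succeeds only when all pairs are in the Synchronized state
symrdf -g device_group verify -synchronized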

There is no requirement for a host at the remote site during the synchronous replication. The target Symmetrix array manages the in-bound writes and updates the appropriate volumes in the array.

Dependent-write consistency is inherent in a synchronous relationship as the target R2 volumes are at all times equal to the source provided that a single RA group is used. If multiple RA groups are used or if multiple Symmetrix arrays are used on the source site, SRDF Consistency Groups (SRDF/CG) must be used to guarantee consistency. SRDF/CG is described in section 6.8.

6.7.1 How to restart in the event of a disaster

In the event of a disaster where the primary source Symmetrix array is lost, it becomes necessary to run database and application services from the DR site. A host at the DR site is required for this. The first requirement is to write-enable the R2 devices. If the device group is not yet built on the remote host, it must be created using the R2 devices that were mirrors of the R1 devices on the source Symmetrix array. Group Name Services (GNS) can be used to propagate the device group to the remote site if a host is running there. The Solutions Enabler Symmetrix Base Management CLI Product Guide provides more details on GNS.

The following command write-enables the R2s in group device_group:

symld –g device_group rw_enable –noprompt

At this point, the host can issue the necessary commands to access the disks. For instance, on a UNIX host, import the volume group, activate the logical volumes, use fsck to check the file systems, and then mount them.
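As an illustration, on a host using HP-UX-style LVM the sequence might look like the following (the volume group, device, and mount point names are hypothetical):

vgimport /dev/vg_sybase /dev/dsk/c2t0d0 /dev/dsk/c2t0d1
vgchange -a y /dev/vg_sybase
fsck -y /dev/vg_sybase/rlvol_data
mount /dev/vg_sybase/lvol_data /sybase/data

The exact commands differ by operating system and volume manager; the point is that the R2 devices are presented to the DR host like any other disks.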

Once the data is available to the host, the database can be restarted. The database will perform an implicit recovery once the Sybase server is restarted. Transactions that were committed but not completed are rolled forward. Transactions that have updates applied to the database but were not committed are rolled back. The result is a transactionally consistent database.
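Starting the server uses the normal Sybase mechanism; for example (the RUN file name and path are illustrative):

startserver -f $SYBASE/$SYBASE_ASE/install/RUN_SYBASE

Implicit recovery runs automatically as each database is brought online.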

6.8 SRDF/S and consistency groups

Zero-data-loss disaster recovery techniques tend to use straightforward database and application restart procedures. These procedures work well if all processing and data mirroring stop at the same instant at the production site when a disaster happens, as is the case during a site power failure.

However, in most cases it is unlikely that all data processing ceases at an instant in time. Computing operations can be measured in nanoseconds, and even if a disaster takes only a millisecond to complete, many such operations could complete between the start of the disaster and the moment all data processing ceases. This gives rise to the notion of a rolling disaster: a series of events occurring over a period of time that together comprise a true disaster. The period of time that makes up a rolling disaster could be milliseconds (in the case of an explosion) or minutes (in the case of a fire). In both cases, the DR site must be protected against data inconsistency.

6.8.1 Rolling disaster

Protection against a rolling disaster is required when the data for a database resides on more than one Symmetrix array or multiple RA groups. Figure 6-3 on page 6-17 depicts a dependent-write I/O sequence where a predecessor log write is happening prior to a page flush from a database buffer pool. The log device and data device are on different Symmetrix arrays with different replication paths. Figure 6-3 on page 6-17 demonstrates how rolling disasters can affect this dependent-write sequence.

Figure 6-3 Rolling disaster with multiple production Symmetrix arrays

1. This example of a rolling disaster starts with a loss of the synchronous links between the bottom source Symmetrix and the target Symmetrix. This prevents remote replication of data from the bottom source Symmetrix.

2. The Symmetrix array, which is now no longer replicating, receives a predecessor log write of a dependent-write I/O sequence. The local I/O completes; however, it is not replicated to the remote Symmetrix, and the tracks are marked as owed to the target Symmetrix. Nothing prevents the predecessor log write from completing its acknowledgement to the host.

3. Now that the predecessor log write has completed, the dependent data write is issued. This write is received on both the source Symmetrix and the target Symmetrix because the rolling disaster has not yet affected those communication links.

4. If the rolling disaster ended in a complete disaster, the data at the remote site is left in a “data ahead of log” condition, which is an inconsistent state for a database. The severity of the situation is that when the database is restarted and performs its implicit recovery, it may not detect the inconsistencies. A person extremely familiar with the transactions running at the time of the rolling disaster might be able to detect them, and database utilities could be run to detect some of them.

A rolling disaster can happen in such a manner that data links providing remote mirroring support are disabled in a staggered fashion, while application and database processing continues at the production site. The sustained replication during the time when some Symmetrix units are communicating with their remote partners through their respective links while other Symmetrix units are not (due to link failures) can cause data integrity exposure at the recovery site. Some data integrity problems caused by the rolling disaster cannot be resolved through normal database restart processing and may require a full database recovery using appropriate backups, journals, and logs. A full database recovery elongates overall application restart time at the recovery site.

6.8.2 Protection against a rolling disaster

SRDF consistency group (SRDF/CG) technology provides protection against rolling disasters. A consistency group is a set of Symmetrix volumes spanning multiple RA groups and/or multiple Symmetrix frames that replicate as a logical group to other Symmetrix arrays using synchronous SRDF. It is not a requirement to span multiple RA groups and/or Symmetrix frames when using consistency groups. Consistency group technology guarantees that if a single source volume is unable to replicate to its partner for any reason, then all the volumes in the group stop replicating. This ensures that the image of the data on the target Symmetrix is consistent from a dependent write perspective.

Figure 6-4 on page 6-19 depicts a dependent-write I/O sequence where a predecessor log write is happening prior to a page flush from a database buffer pool. The log device and data device are on different Symmetrix arrays with different replication paths. Figure 6-4 on page 6-19 demonstrates how rolling disasters can be prevented using EMC consistency group technology.

Figure 6-4 Rolling disaster with SRDF consistency group protection

1. Consistency group protection is defined containing volumes X, Y, and Z on the source Symmetrix. This consistency group definition must contain all of the devices that need to maintain dependent-write consistency, and the definition must reside on all participating hosts that issue I/O to these devices. A mix of CKD (mainframe) and FBA (UNIX/Windows) devices can be logically grouped together. In some cases, the entire processing environment may be defined in a consistency group to ensure dependent-write consistency.

2. The rolling disaster just described begins preventing the replication of changes from volume Z to the remote site.

3. The predecessor log write occurs to volume Z, causing a consistency group (ConGroup) trip.

4. A ConGroup trip holds the I/O that could not be replicated, along with all of the I/O to the logically grouped devices. The I/O is held by PowerPath on UNIX or Windows hosts, and by IOS on the mainframe host. It is held long enough to issue two (2) I/Os per Symmetrix. The first I/O puts the devices in a suspend-pending state.

5. The second I/O performs the suspend of the R1/R2 relationship for the logically grouped devices, which immediately disables all replication to the remote site. This allows other devices outside of the group to continue replicating provided the communication links are available.

6. After the R1/R2 relationship is suspended, all deferred write I/Os are released, allowing the predecessor log write to complete to the host. The dependent data write is issued by the DBMS and arrives at X but is not replicated to the R2(X).

7. If a complete failure occurred from this rolling disaster, dependent-write consistency at the remote site is preserved. If a complete disaster did not occur and the failed links are activated again, consistency group replication can be resumed. It is recommended to create a copy of the dependent-write consistent image while the resume takes place. Once SRDF reaches a synchronized state, a dependent-write consistent image again exists at the remote site.

6.8.3 SRDF/S with multiple source Symmetrix arrays

The implications of spreading a database across multiple Symmetrix frames or across multiple RA groups and replicating in synchronous mode were discussed in previous sections. The challenge in this type of scenario is to protect against a rolling disaster. SRDF consistency groups can be used to avoid data corruption in a rolling disaster situation.

Consider the architecture depicted in Figure 6-5 on page 6-20.

Figure 6-5 SRDF with multiple source Symmetrix arrays and ConGroup protection

To protect against a rolling disaster, a consistency group can be created that encompasses all the volumes on all Symmetrix arrays participating in replication as shown by the dotted oval.

The following steps outline the process of setting up synchronous replication with consistency groups using Solutions Enabler (SYMCLI) commands.

1. Create a consistency group for the source side of the synchronous relationship (the R1 side):

symcg create device_group –type rdf1 -ppath

2. Add devices to the group as described in Appendix B.

3. Before the synchronous mode of SRDF can be established, the initial instantiation of the database has to have taken place. In other words, the baseline full copy of all the volumes that are going to participate in the synchronous replication must be executed first. This is usually accomplished using adaptive copy mode of SRDF.

4. Put the group device_group into adaptive copy mode:

symrdf –cg device_group set mode acp_disk –nop

5. Instruct the source Symmetrix to send all tracks at the source site to the target site using the current mode:

symrdf –cg device_group establish –full -nop

Adaptive copy mode has no host impact. It transmits tracks to the remote site that have never been sent before or that have changed since the last time the track was sent. It does not preserve order or consistency. When both sides are synchronized, SRDF can be put into synchronous mode.

6. Put the device group device_group into synchronous mode:

symrdf –cg device_group set mode sync –nop

7. Enable consistency protection:

symcg –cg device_group enable –nop
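As with device groups, the state of the consistency group can be confirmed before declaring the configuration protected (a sketch; output formats vary):

symcg show device_group

symrdf -cg device_group query

The show action displays the composition of the consistency group and whether consistency protection is enabled; the query action reports the pair states across all participating Symmetrix arrays.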

There is no requirement for a host at the remote site during the synchronous replication. The target Symmetrix manages the inbound writes and updates the appropriate disks in the array.

6.9 SRDF/A

SRDF/A, or asynchronous SRDF, is a method of replicating production data changes from one Symmetrix array to another using delta set technology. Delta sets are collections of changed blocks grouped together by a time interval that can be configured at the source site. The default time interval is 30 seconds. The delta sets are then transmitted from the source site to the target site in the order they were created. SRDF/A preserves the dependent-write consistency of the database at all times at the remote site.

The distance between the source and target Symmetrix is unlimited and there is no host impact. Writes are acknowledged immediately when they hit the cache of the source Symmetrix array. SRDF/A is only available on the DMX family of Symmetrix. Figure 6-6 on page 6-22 depicts the process.

Figure 6-6 SRDF/A replication internals

1. Writes are received into the source Symmetrix cache. The host receives immediate acknowledgement that the write is complete. Writes are gathered into the capture delta set for 30 seconds.

2. A delta set switch occurs and the current capture delta set becomes the transmit delta set by changing a pointer in cache. A new empty capture delta set is created.

3. SRDF/A sends the changed blocks that are in the transmit delta set to the remote Symmetrix. The changes collect in the receive delta set at the target site. When the replication of the transmit delta set is complete, another delta set switch occurs and a new empty capture delta set is created with the current capture delta set becoming the new transmit delta set. The receive delta set becomes the apply delta set.

4. The apply delta set marks all the changes in the delta set against the appropriate volumes as invalid tracks and begins destaging the blocks to disk.

5. The cycle repeats continuously.

With sufficient bandwidth for the source database write activity, SRDF/A transmits all changed data within the default 30-second interval. This means that the maximum time the target data will be behind the source is 60 seconds (two replication cycles). At times of high write activity, it may not be possible to transmit all the changes that occur during a 30-second interval, and the target Symmetrix will fall behind the source by more than 60 seconds. Careful design of the SRDF/A infrastructure and a thorough understanding of write activity at the source site are necessary to design a solution that meets the RPO requirements of the business at all times.
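As an illustrative example (the numbers are assumed for the sketch): a database writing an average of 20 MB/s generates roughly 600 MB of changed data per 30-second capture cycle. A link that sustains 25 MB/s drains such a delta set in about 24 seconds, so the session keeps pace; a link that sustains only 15 MB/s moves about 450 MB per cycle, so delta sets grow, cycle times elongate, and the R2 image falls progressively further behind until the write burst subsides.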

Consistency is maintained throughout the replication process on a delta set boundary. The Symmetrix will not apply a partial delta set which would invalidate consistency. Dependent-write consistency is preserved by placing a dependent-write in either the same delta set as the write it depends on or a subsequent delta set.

There is no requirement for a host at the remote site during asynchronous replication. The target Symmetrix manages in-bound writes and updates the appropriate disks in the array.

Different command sets are used to enable SRDF/A depending on whether the SRDF/A group of devices is contained within a single Symmetrix or is spread across multiple Symmetrix arrays.

6.9.1 SRDF/A using a single source Symmetrix array

Before the asynchronous mode of SRDF can be established, initial instantiation of the database has to have taken place. In other words, a baseline full copy of all the volumes that are going to participate in the asynchronous replication must be executed first. This is usually accomplished using the adaptive copy mode of SRDF.

The following steps outline the process of setting up asynchronous replication using Solutions Enabler (SYMCLI) commands.

1. Create an SRDF disk group for the source side of the asynchronous relationship (the R1 side):

symdg create device_group –type rdf1

2. Add devices to the group as described in Appendix B.

3. Put the group device_group into adaptive copy mode:

symrdf –g device_group set mode acp_disk –nop

4. Instruct the source Symmetrix to send all the tracks at the source site to the target site using the current mode:

symrdf –g device_group establish –full -nop

The adaptive copy mode of SRDF has no impact to host application performance. It transmits tracks to the remote site that have never been sent before or that have changed since the last time the track was sent. It does not preserve write order or consistency. When both sides are synchronized, SRDF can be put into asynchronous mode.

5. Put the device group device_group into asynchronous mode:

symrdf –g device_group set mode async –nop
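The state of the asynchronous session can then be confirmed (a sketch; the -rdfa option, where supported by the installed Solutions Enabler version, adds SRDF/A session details such as the cycle number and cycle time):

symrdf -g device_group query -rdfa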

There is no requirement for a host at the remote site during the asynchronous replication. The target Symmetrix manages the inbound writes and updates the appropriate disks in the array.

6.9.2 SRDF/A with multiple source Symmetrix arrays

When a database is spread across multiple Symmetrix arrays and SRDF/A is used for long-distance replication, separate software must be used to manage the coordination of the delta set boundaries between the participating Symmetrix arrays and to stop replication if any of the volumes in the group cannot replicate for any reason. The software must ensure that all delta set boundaries on every participating Symmetrix in the configuration are coordinated to give a dependent-write consistent point-in-time image of the database.

SRDF/A multisession consistency (MSC) provides consistency across multiple RA groups and/or multiple Symmetrix arrays. MSC is available on 5671 microcode and above with Solutions Enabler V6.0 and later. SRDF/A with MSC is supported by an SRDF process daemon that performs cycle-switching and cache recovery operations across all SRDF/A sessions in the group. This ensures that a dependent-write consistent R2 copy of the database exists at the remote site at all times. A composite group must be created using the SRDF consistency protection option (-rdf_consistency) and must be enabled using the symcg enable command before the RDF daemon begins monitoring and managing the MSC consistency group. The RDF process daemon must be running on all hosts that can write to the set of SRDF/A volumes being protected. At the time of an interruption (SRDF link failure, for instance), MSC analyzes the status of all SRDF/A sessions and either commits the last cycle of data to the R2 target or discards it.

The following steps outline the process of setting up asynchronous replication with MSC consistency protection using Solutions Enabler (SYMCLI) commands.

1. Create the replication composite group for the SRDF/A devices:

symcg create device_group -rdf_consistency -type rdf1

The –rdf_consistency option indicates that the volumes that will be in the group are to be protected by MSC.

2. Add devices to the group as described in Appendix B.

Before the asynchronous mode of SRDF can be established, the initial instantiation of the database has to have taken place. In other words, the baseline full copy of all the volumes that are going to participate in the asynchronous replication must be executed first. This is usually accomplished using the adaptive copy mode of SRDF.

3. Put the group device_group into adaptive copy disk mode:

symrdf –g device_group set mode acp_disk –noprompt

4. Instruct the source Symmetrix to send all the tracks at the source site to the target site using the current mode:

symrdf –g device_group establish –full -noprompt

The adaptive copy mode of SRDF has no impact on host application performance. It transmits tracks to the remote site that have never been sent before or that have changed since the last time the track was sent. It does not preserve write order or consistency. When both sides are synchronized, SRDF can be put into asynchronous mode.

5. Put the device group device_group into asynchronous mode:

symrdf –g device_group set mode async –noprompt

6. Enable multisession consistency for the group:

symcg –cg device_group enable
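Because MSC depends on the RDF daemon described earlier, it is worth confirming that the daemon is running on each host that can write to the protected volumes. A sketch using the Solutions Enabler daemon manager:

stordaemon list

stordaemon start storrdfd

The list action shows the daemons active on the local host; the start action launches the RDF daemon (storrdfd) if it is not already running.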

There is no requirement for a host at the remote site during the asynchronous replication. The target Symmetrix itself manages the in-bound writes and updates the appropriate disks in the array.

6.9.3 How to restart in the event of a disaster

In the event of a disaster when the primary source Symmetrix is lost, database and application services must be run from the DR site. A host at the DR site is required for this. If the device group is not built yet on the remote host, it must first be created using the R2 devices that were mirrors of the R1 devices on the source Symmetrix. The first thing that must be done is to write-enable the R2 devices.

symld –g device_group rw_enable –noprompt (R2s on a single Symmetrix)

symcg –cg composite_group rw_enable –noprompt (R2s on multiple Symmetrix arrays)

At this point, the host can issue the necessary commands to access the disks. For instance, on a UNIX host, import the volume group, activate the logical volumes, use fsck to check the file systems, and then mount them.

Once the data is available to the host, the database can be restarted. The database will perform recovery when the Sybase server is restarted. Transactions that were committed but not completed are rolled forward and completed using the information in the log file. Transactions that have updates applied to the database but not committed are rolled back. The result is a transactionally consistent database.

6.10 SRDF/AR single hop

SRDF Automated Replication (SRDF/AR) is a continuous movement of dependent-write consistent data to a remote site using SRDF adaptive copy mode and TimeFinder consistent split technology. TimeFinder BCVs are used to create a dependent-write consistent point-in-time image of the data to be replicated. The BCVs also have an R1 personality, which means that SRDF in adaptive copy mode can be used to replicate the data from the BCVs to the target site. Since the BCVs are not changing, replication completes in a finite length of time. The length of time for replication depends on the size of the network “pipe” between the two locations, the distance between the two locations, the quantity of changed data tracks, and the locality of reference of the changed tracks.

On the remote Symmetrix, another BCV copy of the data is made using the data on the R2s. This is necessary because the next SRDF/AR iteration replaces the R2 image in a nonordered fashion, and if a disaster were to occur while the R2s were synchronizing, there would not be a valid copy of the data at the DR site. The BCV copy of the data in the remote Symmetrix is commonly called the gold copy of the data. The whole process then repeats.

With SRDF/AR, there is no host impact. Writes are acknowledged immediately when they hit the cache of the source Symmetrix. Figure 6-7 on page 6-26 depicts the process.

Figure 6-7 SRDF/AR single-hop replication internals

1. Writes are received into the source Symmetrix cache and are acknowledged immediately. The BCVs are already synchronized with the STDs at this point. A consistent split is executed against the STD-BCV pairing to create a point-in-time image of the data on the BCVs.

2. SRDF transmits the data on the BCV/R1s to the R2s in the remote Symmetrix.

3. When the BCV/R1 volumes are synchronized with the R2 volumes, they are re-established with the standards in the source Symmetrix. This causes the SRDF links to be suspended. At the same time, an incremental establish is performed on the target Symmetrix to create a “gold” copy on the BCVs in that frame.

4. When the BCVs in the remote Symmetrix are fully synchronized with the R2s, they are split and the configuration is ready to begin another cycle.

5. The cycle repeats based on configuration parameters. The parameters can specify the cycles to begin at specific times, at specific intervals, or to run back to back.

It should be noted that cycle times for SRDF/AR are usually in the minutes-to-hours range. The RPO is double the cycle time in a worst-case scenario. This may be a good fit for customers with relaxed RPOs.

The added benefit of having a longer cycle time is that the locality of reference will likely increase. This is because there is a much greater chance of a track being updated more than once in a 1-hour interval than in, say, a 30-second interval. The increase in locality of reference shows up as reduced bandwidth requirements for the final solution.

Before SRDF/AR can be started, instantiation of the database has to have taken place. In other words, a baseline full copy of all the volumes that are going to participate in the SRDF/AR replication must be executed first. This requires a full establish to the BCVs in the source array, a full SRDF establish of the BCV/R1s to the R2s, and a full establish of the R2s to the BCVs in the target array. There is an option to automate the initial setup of the relationship.
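SRDF/AR cycles are normally driven by the symreplicate utility rather than issued by hand. A minimal sketch of starting a single-hop session follows (the options file name is hypothetical, and the keywords it contains are documented in the SRDF/AR Solutions Guide rather than shown here):

symreplicate -g device_group start -options srdfar.opt

The options file defines the session behavior, such as cycle timing and whether cycles run at fixed times, at fixed intervals, or back to back.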

As with other SRDF solutions, SRDF/AR does not require a host at the DR site. The commands to update the R2s and manage the synchronization of the BCVs in the remote site are all managed in-band from the production site.

The SRDF/AR Solutions Guide on Powerlink™ provides more details.

6.10.1 How to restart in the event of a disaster

In the event of a disaster, it is necessary to determine whether the most current copy of the data is located on the BCVs or on the R2s at the remote site. Depending on when in the replication cycle the disaster occurs, the most current version could be on either set of disks. This determination is simple and is described in the SRDF/AR Solutions Guide.

6.11 SRDF/AR multihop

SRDF Automated Replication multihop (SRDF/AR multihop) is an architecture that allows long-distance replication with zero data loss through the use of a bunker Symmetrix. Production data is replicated synchronously to the bunker Symmetrix, which is within 200 km of the production Symmetrix (close enough for synchronous replication, yet far enough away that a disaster at the primary site may not affect it). Typically, the bunker Symmetrix is placed in a hardened computing facility.

BCVs in the bunker frame are periodically synchronized to the R2s and consistently split to provide a dependent-write consistent point-in-time image of the data. These bunker BCVs also have an R1 personality, which means that SRDF in adaptive copy mode can be used to replicate the data from the bunker array to the target site. Since the BCVs are not changing, the replication can be completed in a finite length of time. The length of time for the replication depends on the size of the “pipe” between the bunker location and the DR location, the distance between the two locations, the quantity of changed data, and the locality of reference of the changed data. On the remote Symmetrix, another BCV copy of the data is made using the R2s. This is because the next SRDF/AR iteration replaces the R2 image in a nonordered fashion, and if a disaster were to occur while the R2s were synchronizing, there would not be a valid copy of the data at the DR site. The BCV copy of the data in the remote Symmetrix is commonly called the gold copy of the data. The whole process then repeats.

With SRDF/AR multihop, there is minimal host impact. Writes are only acknowledged when they hit the cache of the bunker Symmetrix and a positive acknowledgment is returned to the source Symmetrix. Figure 6-8 on page 6-28 depicts the process.

Figure 6-8 SRDF/AR multihop replication internals

1. BCVs are synchronized and consistently split against the R2s in the bunker Symmetrix. The write activity is momentarily suspended on the source Symmetrix to get a dependent-write consistent point-in-time image on the R2s in the bunker Symmetrix, which creates a dependent-write consistent point-in-time copy of the data on the BCVs.

2. SRDF transmits the data on the bunker BCV/R1s to the R2s in the DR Symmetrix.

3. When the BCV/R1 volumes are synchronized with the R2 volumes in the target Symmetrix, the bunker BCV/R1s are established again with the R2s in the bunker Symmetrix. This causes the SRDF links to be suspended between the bunker Symmetrix and the DR Symmetrix. At the same time an incremental establish is performed on the DR Symmetrix to create a gold copy on the BCVs in that frame.

4. When the BCVs in the DR Symmetrix are fully synchronized with the R2s, they are split and the configuration is ready to begin another cycle.

5. The cycle repeats based on configuration parameters. The parameters can specify the cycles to begin at specific times, specific intervals, or to run immediately after the previous cycle completes.

Note that even though cycle times for SRDF/AR multihop are usually in the minutes-to-hours range, the most current data is always in the bunker Symmetrix. Unless there is a regional disaster that destroys both the primary site and the bunker site, the bunker Symmetrix will transmit all data to the remote DR site. This means zero data loss as of the start of the rolling disaster, that is, an RPO of 0 seconds. This solution is a good fit for customers who require zero data loss and long-distance DR.

An added benefit of a longer cycle time is that the locality of reference will likely increase, because there is a much greater chance of a track being updated more than once in a 1-hour interval than in, say, a 30-second interval. The increase in locality of reference shows up as reduced bandwidth requirements for the network segment between the bunker Symmetrix and the DR Symmetrix.

Before SRDF/AR can be initiated, initial instantiation of the database has to have taken place. In other words, a baseline full copy of all the volumes that are going to participate in the SRDF/AR replication must be executed first. This means a full establish of the R1s in the source location to the R2s in the bunker Symmetrix; the R1s and R2s need to be synchronized continuously. Then a full establish from the R2s to the BCVs in the bunker Symmetrix, a full SRDF establish of the BCV/R1s to the R2s in the DR Symmetrix, and a full establish of the R2s to the BCVs in the DR Symmetrix are performed. There is an option to automate this process of instantiation.

The SRDF/AR Solutions Guide on Powerlink provides more details.

6.11.1 How to restart in the event of a disaster

In the event of a disaster, it is necessary to determine whether the most current copy of the data is on the R2s at the remote site or on the BCV/R1s in the bunker Symmetrix. Depending on when the disaster occurs, the most current version could be on either set of disks. This determination is simple and is outlined in the SRDF/AR Solutions Guide.

6.12 Database log shipping solutions

Log shipping is a strategy that some companies employ for disaster recovery. The process only works for databases using archive logging. The essence of log shipping is that changes to the database at the source site that are reflected in the log are propagated to the target site. These logs are then applied to a standby database at the target site to maintain a consistent image of the database that can be used for DR purposes.

6.12.1 Overview of log shipping

The change activity on the source database generates log information that is eventually copied from the active logs to the archive logs to free up active log space. A process external to the database takes the archived logs and transmits them (usually over IP) to a remote DR location. This location has a database in standby mode. A server at the standby location receives the archive logs and uses them to roll forward changes to the standby database.

If a disaster were to happen at the primary site, the standby database could be brought online and made available to users, albeit with some loss of data.

6.12.2 Log shipping considerations

When considering a log shipping strategy, it is important to understand:

♦ What log shipping covers.

♦ What log shipping doesn’t cover.

♦ Server requirements.

♦ How to instantiate and reinstantiate the database.

♦ How failback works.

♦ Federated consistency requirements.

♦ How much data will be lost in the event of a disaster.

♦ Manageability of the solution.

♦ Scalability of the solution.

6.12.2.1 Log shipping limitations

Log shipping transfers only the changes to the database that are written into the active log and subsequently copied to the archive log. Consequently, operations in the database that are not written to the log do not get shipped to the remote site. Here are some examples of activities that can happen to the source database that are not externalized to the log:

♦ Nonlogged transactions

♦ Fast bulk copy (bcp) into a table with no triggers or indexes

♦ Truncate table

♦ Rows inserted by the select into statement

When deploying a log shipping architecture, all of the above database components must be considered and managed.

Log shipping is a database-centric strategy. It is completely agnostic of, and does not address, changes that occur outside of the database. These changes include, but are not limited to, the following:

♦ Application files and binaries

♦ Database configuration files

♦ Database binaries

♦ OS changes

♦ Flat files

In order to sustain a working environment at the DR site, procedures must be executed to keep these objects up to date.

6.12.2.2 Server requirements

Log shipping requires a server at the remote DR site to receive and apply the logs to the standby database. It may be possible to offset this cost by using the server for other functions when it is not being used for DR. Database licensing fees for the standby database may also apply.

6.12.2.3 How to instantiate and reinstantiate the database

Log shipping architectures need to be supported by a method of instantiating the database at the remote site. The method needs to be manageable and timely. For example, shipping 200 tapes from the primary site to the DR site may not be an adequate approach, considering the transfer time and database restore time.

Reinstantiation must also be managed. Some operations, mentioned earlier, do not carry over into the standby database. Periodically, it may be necessary to reinstantiate the database at the DR site. The process should be easily managed but also should provide continuous DR protection. That is to say, there must be a contingency plan for a disaster during reinstantiation.

6.12.2.4 How failback works

An important component of any DR solution is designing a failback procedure. If the DR setup is tested with any frequency, this method should be simple and risk free. Log shipping can be done in reverse and works well when the primary site is still available. In the case of a disaster where the primary site data is lost, the database has to be reinstantiated at the production site.

6.12.2.5 Federated consistency requirements

Most databases are not isolated islands of information. They frequently have upstream inputs and downstream outputs, triggers and stored procedures that reference other databases. There may also be a workflow management system like MQ Series, Lotus Notes, or TIBCO managing queues containing work to be performed. This entire environment is a federated structure that needs to be recovered to the same point in time to get a transactionally consistent disaster restart point.

Log shipping solutions are single-database-centric and are not adequate solutions in federated database environments.

6.12.2.6 Data loss expectations

If sufficient bandwidth is provisioned for the solution, the amount of data lost in a disaster is going to be approximately two logs worth of information. In terms of time, it would be approximately twice the length of time it takes to create an archive log. This time will most likely vary during the course of the day due to fluctuations in write activity.

6.12.2.7 Manageability of the solution

The manageability of a DR solution is a key to its success. Log shipping solutions have many components to manage including servers, databases, and external objects as noted above. Some of the questions that need to be answered to make a clear determination of the manageability of a log shipping solution are:

♦ How much effort does it take to set up log shipping?

♦ How much effort is needed to keep it running on an on-going basis?

♦ What is the risk if something required at the target site is missed?

♦ If ftp is being used to ship the log files, what kind of monitoring is needed to guarantee success?

6.12.2.8 Scalability of the solution

The scalability of a solution is directly linked to its complexity. In order to successfully scale the DR solution, the following questions must be addressed:

♦ How much more effort does it take to add more databases?

♦ How easy is the solution to manage when the database grows much larger?

♦ What happens if the quantity of updates increases dramatically?

6.12.3 Log shipping and standby access database

A remote standby database is a Sybase database with the same metadata as the production database. It can be created by restoring a backup of the production database or by using a storage hardware mirroring technology. However, it is important to understand that using these methods to restore the database does not allow roll forward recovery. In other words, subsequent log files cannot be applied to this database without using the Sybase roll-forward log recovery method, which is quiesce for external dump. The quiesce for external dump method is described in detail in the next section.

Figure 6-9 on page 6-33 depicts a standby database created from taking a backup of the production database.

Figure 6-9 Log shipping via dump and load database

6.12.4 Log shipping and quiesce for external dump

The database on the secondary server can remain offline while subsequent transaction logs are applied to the target in order to keep the standby database current. This is very similar to the standby access feature mentioned above; however, quiesce for external dump allows roll-forward log recovery using a transaction log and does not require that a full database dump be taken first.

In this way, a database on the secondary server can act as a warm standby and be periodically refreshed from the production host. The secondary database does not require a server reboot, and the copy can be kept up to date at all times. This roll-forward log recovery method is accomplished by using the Sybase quiesce for external dump feature. Whenever it becomes necessary to use the standby database for reporting or production purposes, it may be brought into an online state. At that point, the standby becomes completely independent of the original primary database and must be treated as such. For example, this new primary must have all operational features and controls invoked.

In the event of a disaster the standby database needs to be activated for production availability. The following command can be used to activate the standby database:

online database dbname

Figure 6-10 on page 6-34 depicts the process of log shipping and standby access using the quiesce for external dump method.

Figure 6-10 Log shipping and standby access with quiesce for external dump

1. On the primary, issue the quiesce database hold for external dump command. This will hold write I/O on the primary, and set a special flag in the log indicating that the database will be dumped using the standby access method.

2. Split the R2 device(s) containing the database.

3. On the primary, issue the quiesce database release command to once again allow write I/O and continue normal processing.

4. On the target host, start the Sybase server with the –q flag. This indicator allows a database to be loaded from a dump that was taken via quiesce for external dump; without the –q flag, the database load command will fail. (Example syntax for this and the following steps is sketched after this list.)

5. On the primary, dump the transaction log using the with standby_access syntax.

6. Split the R2 device(s) containing the database backup that was created in the previous step.

7. On the target server, load the database from the backup device.

8. Repeat steps 5, 6, and 7, continually dumping the transaction log and loading it onto the target server, until it becomes necessary to fail over to the target database, making it the new primary.

9. On the target server, issue the online database command to make the target database the new primary.
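The key commands in steps 4, 5, and 7 might look like the following sketch (the database name, device paths, and RUN file entry are hypothetical). The -q flag is simply appended to the dataserver line in the target server's RUN file:

dataserver -d/sybase/data/master.dat -sSYBASE -q

dump transaction proddb to "/backups/proddb.trn" with standby_access

load transaction proddb from "/backups/proddb.trn"

online database proddb for standby_access

The final command is optional between loads; it brings the standby online for read-only access while still allowing subsequent transaction logs to be applied.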

The Sybase standby access and the quiesce for external dump methods can easily be confused. The main difference lies in the fact that the Sybase quiesce for external dump feature does not require a full database dump to start the process. Transaction log dumps may be taken on the primary database and applied to the target as long as the target Sybase instance has been started with the –q flag.

With the Sybase standby access feature, the user continually applies the transaction log dumps to the standby database. When it becomes necessary to promote the target database to the primary database, the user must perform a full database dump on the new primary to begin (or reinitialize) the backup process.

Both quiesce for external dump and the standby access feature are discussed in detail in Chapter 4 and Chapter 5.

6.13 Running database solutions

Running database solutions attempt to use DR solutions in an active fashion. Instead of having the database and server sit idle waiting for a disaster to occur, the idea of having the database running and serving a useful purpose at the DR site is an attractive one. The problem is that storage and host-based replication solutions typically require exclusive access to the database, not allowing users to access the target database. The solutions presented in this section perform replication at the database layer and therefore allow user access even while the database is being updated by the replication process.

6.13.1 Mirror Activator

The Sybase Mirror Activator solution is a combination of the Sybase Mirror Replication Agent (MRA), Sybase Replication Server, and a block replication product such as EMC SRDF.

The Mirror Replication Agent reads a primary database transaction log device in order to replicate transactions. It is specifically designed to read a remotely mirrored log device that resides on a separate host from the primary data server. The Mirror Replication Agent acquires transactions from the primary database transaction log and sends them to the Replication Server. The Replication Server then distributes the transactions to the standby database. The Mirror Replication Agent requires only read access to the mirror log device.

Figure 6-11 Sybase Mirror Activator in an SRDF/S environment

1. Transactions on the primary database are synchronously replicated via SRDF to the target database log device R2(C).

The MA log device (R2(C)) is sometimes referred to as the PDB log mirror.

2. The Mirror Activator Agent (MA) reads the SRDF device (R2(C)) and asynchronously applies data to the Replication Server (RS).

3. The Replication Server (RS) converts the data to SQL and updates the standby database.

With this running database solution, traditional restore and restart methods are not necessary. In the event of a disaster, the process of switching (or failing over) from the primary site to the target site requires that the disk devices, the database, and the user application are switched from the hardware and software resources on the primary site to those on the target site.

How is this accomplished? First of all, an overall failover plan and process should be put in place and tested ahead of time. This was discussed earlier in this chapter. Failing over SRDF devices is done with a simple command:

symrdf –g rdf1grp failover
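When the primary site is repaired and the data on the R1 devices is intact, production can be moved back with the complementary command (a sketch; see the failover/failback procedures referenced below):

symrdf -g rdf1grp failback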

For complete details on SRDF failover/failback procedures, refer to section 2.4.5.3. The Sybase database and application failover is typically handled by another Sybase product called OpenSwitch. The OpenSwitch procedures would also be configured ahead of time to handle the failover of the database users and application.

Sybase OpenSwitch sits between client connections and two or more Sybase Adaptive Servers to provide end users continuous application availability in the event of unplanned outages or planned downtime. It eliminates the need for users in such circumstances to reestablish their connections or reconfigure their client machines. By default, OpenSwitch automatically attempts to transfer client connections to the designated target server when they fail. OpenSwitch provides a simple application programming interface to enable users to develop a Coordination Module (CM). The CM provides the intelligence in a transparent connection-failover process so that connections are not blindly transferred to the target server. It does this by constantly "pinging" the target server to assess its status and make sure the target server is up and ready for the connection transfer. The CM can be configured to transfer the user connections automatically or reconnect when prompted by a privileged user. More information regarding Sybase OpenSwitch is available at: http://www.sybase.com/products/informationmanagement/openswitch.


Chapter 7 Performance Considerations

This chapter presents these topics:

7.1 Introduction ........................................................................................ 7-2
7.2 Traditional Sybase layout recommendations ..................................... 7-3
7.3 Symmetrix DMX performance guidelines ......................................... 7-5
7.4 RAID considerations .......................................................................... 7-11
7.5 Host- versus array-based striping ...................................................... 7-15
7.6 Data placement considerations .......................................................... 7-18
7.7 SRDF and Sybase Bulk Copy Program (bcp) .................................... 7-22
7.8 Improving slow bcp performance ...................................................... 7-26


What is the best way to configure Sybase on EMC Symmetrix DMX storage? This is a frequently asked question from customers. However, before recommendations can be made, a detailed understanding of the configuration and requirements of the database, host(s), and storage environment is needed. The principal goal when optimizing any layout on the Symmetrix DMX is to maximize the spread of I/O across the components of the array, reducing or eliminating any potential bottlenecks in the system. The following sections examine the trade-offs between optimizing storage performance and manageability for Sybase. They also discuss recommendations for laying out a Sybase database on EMC Symmetrix DMX arrays.

7.1 Introduction

In theory, an ideal database environment is one in which most I/Os are satisfied from memory rather than going to disk to retrieve the required data. In practice, however, this is not realistic; careful consideration of the disk I/O subsystem is necessary. Optimizing performance of a Sybase database on an EMC Symmetrix DMX involves a detailed evaluation of the I/O requirements of the proposed application or environment. A thorough understanding of the performance characteristics and best practices of the Symmetrix, including the underlying storage components (disks, directors, and such), is also needed. Additionally, knowledge of complementary software products such as EMC SRDF, EMC TimeFinder, EMC Symmetrix Optimizer, and backup software, along with how using these products will affect the database, is important for maximizing performance. Ensuring optimal configuration for the Sybase database requires a holistic approach to application, host, and storage configuration planning. Configuration considerations for host- and application-specific parameters are beyond the scope of this document.

Monitoring and managing database performance should be a continuous process in most Sybase environments. Establishing baselines and then collecting database performance statistics for comparison against them is important to monitor performance trends and maintain a system that runs efficiently. The following section discusses the performance stack and how database performance should be managed in general, while in subsequent sections, Symmetrix DMX-specific layout and configuration issues are discussed.

7.1.1 The performance stack

Performance tuning involves the identification and elimination of bottlenecks in the various resources that make up the system. Resources include the application, the code (SQL) that drives the application, the database, the host, and the storage. Tuning performance involves the following:

♦ Analyzing each of these individual components that make up an application

♦ Identifying bottlenecks or potential optimizations

♦ Implementing changes that eliminate the bottlenecks

♦ Verifying that the change has improved overall performance

This is an iterative process and is performed until the benefits to be gained by continued tuning are outweighed by the effort required to tune the system.

Figure 7-1 on page 7-3 shows the various “layers” that need to be examined as a part of any performance analysis. The potential benefits achieved by analyzing and tuning a particular layer of the performance stack are not equal, however. In general, tuning the upper layers of the performance stack (the application and SQL statements) provides a much better return on investment than tuning the lower layers, such as the host or storage layers. For example, implementing a new index on a heavily used table that changes logical access from a full table scan to index lookup with individual row selection can vastly improve database performance if the statement is run many times (thousands or millions) a day.

When tuning a Sybase database application, developers, DBAs, system administrators, and storage administrators need to work together to monitor and manage the process. Efforts should begin at the top of the stack and address application and SQL statement tuning before moving down into the database and host based tuning parameters. After all of these have been addressed, storage-related tuning efforts should then be performed.

Figure 7-1 The performance stack

7.2 Traditional Sybase layout recommendations

Sybase’s best practices for optimally laying out a database focus on identifying potential sources of contention for storage-related resources. Eliminating contention involves understanding how the database manages the data flow process and ensuring that concurrent or near-concurrent storage resource requests are separated onto different physical spindles.

Analysis of the Sybase database server environment must be performed to determine if any of the following conditions exist that would contribute to poor performance:

♦ Insufficient memory allocated to the Sybase server

♦ Improperly indexed database (poor index structure)

♦ Poor object placement (which includes placement of databases, tables, and indexes across the physical storage devices)

Use Sybase utilities such as sp_sysmon to determine whether data placement across physical devices is causing performance problems. Check the entire sp_sysmon output during tuning to verify how the changes affect all performance categories.
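For example, server activity could be sampled for a ten-minute window, either for the full report or limited to the disk I/O section (the interval and section name shown follow the documented sp_sysmon syntax):

sp_sysmon "00:10:00"

sp_sysmon "00:10:00", diskio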

Some guidelines for ensuring optimal object placement are as follows:

♦ Adaptive Server allows users to control the placement of databases, tables, and indexes across the physical storage devices. Performance can be improved by balancing read and write activity to disks that are spread across many devices and controllers.

♦ Place database data segments on specific devices, and assign the database's log to a separate physical device so that reads and writes to the log do not interfere with data access (see the example following this list).

♦ Spread large, heavily used tables across several disk devices.

♦ Place specific tables or nonclustered indexes on specific devices. For example, place a table on a segment that spans several devices and its nonclustered indexes on a separate segment.

♦ Place the text and image page chain for a table on a separate device from the table itself. The table stores a pointer to the actual data value in the separate database structure, so each access to a text or image column requires at least two I/Os.

♦ Distribute tables evenly across partitions on separate physical disks to provide optimum parallel query performance.
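
As an illustration of the data and log separation guideline above, the following sketch initializes two devices on different physical volumes and creates a database whose log is isolated from its data. The device names, paths, and sizes are hypothetical, and the unit-suffix size syntax assumes ASE 12.5 or later (earlier releases require sizes in 2 KB pages):

disk init name = "data_dev1", physname = "/dev/rdsk/emc_data1", size = "4G"
disk init name = "log_dev1", physname = "/dev/rdsk/emc_log1", size = "1G"
create database testdb on data_dev1 = "4G" log on log_dev1 = "1G"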

More information about using sp_sysmon is available in Chapter 8, “Monitoring Performance with sp_sysmon,” in the Sybase Performance and Tuning Guide.


7.3 Symmetrix DMX performance guidelines

Optimizing performance for Sybase in an EMC Symmetrix DMX environment is very similar to optimizing performance for all applications on the storage array. To maximize performance, a clear understanding of the I/O requirements of the applications accessing storage is required. The overall goal when laying out an application on disk devices in the back end of the Symmetrix DMX is to reduce or eliminate bottlenecks in the storage system by spreading out the I/O across all of the array's resources. Inside a Symmetrix DMX array, there are a number of areas to consider:

♦ Front-end connections into the Symmetrix DMX – this includes the number of connections from the host to the Symmetrix DMX that are required, and whether front-end Fibre Channel ports will be directly connected or a SAN will be deployed to share ports between hosts.

♦ Memory cache in the Symmetrix DMX – all host I/Os pass through memory cache on the Symmetrix DMX. I/O can be adversely affected if insufficient memory cache is configured in the Symmetrix DMX for the environment. Also, writes to individual hypervolumes or to the array as a whole may be throttled when a threshold known as the write-pending limit is reached.

♦ Back-end considerations – There are two sources of possible contention in the back end of the Symmetrix: the back-end directors and the physical spindles. Proper layout of the data on the disks is needed to ensure satisfactory performance.

7.3.1 Front-end connectivity

Optimizing front-end connectivity requires an understanding of the number and size of I/Os, both reads and writes, that will be sent between the hosts and the Symmetrix DMX. There are limitations to the amount of I/O that each front-end director port, each front-end director processor, and each front-end director board can handle. Additionally, SAN fan-out counts (the number of hosts that can be attached through a Fibre Channel switch to a single front-end port) must be carefully managed.

A key concern when optimizing front-end performance is determining which of the following I/O characteristics is more important in the customer’s environment:

♦ Input/output operations per second (IOPS)

♦ Throughput (MB/s)

♦ A combination of IOPS and throughput

In OLTP database applications, where I/Os are typically small and random, IOPS is the more important factor. In DSS applications, where transactions in general require large sequential table or index scans, throughput is the more critical factor. Some databases require a combination of OLTP-like and DSS-like I/O. Optimizing performance in each type of environment requires tuning the host I/O size.
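
A simple way to see the trade-off is to note that throughput is the product of IOPS and I/O size; the figures below are illustrative arithmetic, not measured DMX limits:

10,000 IOPS x 4 KB per I/O = about 40 MB/s (OLTP profile: high IOPS, modest throughput)
500 IOPS x 256 KB per I/O = about 125 MB/s (DSS profile: modest IOPS, high throughput)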

Figure 7-2 depicts the relationship between the block size of a random read request from the host and both the IOPS and the throughput needed to fulfill that request from the Symmetrix DMX. From this we can see that the maximum number of IOPS is achieved using smaller block sizes such as 4 KB (4096 bytes). For OLTP applications, where the typical Sybase block size is 4 KB (but configurable to 8 KB or 16 KB), the Symmetrix DMX provides higher IOPS but decreased throughput. The opposite is true for DSS applications: tuning the host to send larger I/O sizes can increase the overall throughput (MB/s) from the front-end directors on the DMX. Database block sizes are generally larger (16 KB or even 32 KB) for DSS applications.

Currently, each Fibre Channel port on the Symmetrix DMX is theoretically capable of 200 MB/s of throughput. In practice, however, the throughput available per port is significantly less and depends on the I/O size and on the shared utilization of the port and processor on the director. Increasing the size of the I/O from the host perspective decreases the number of IOPS that can be performed, but increases the overall throughput (MB/s) of the port. As such, increasing the I/O block size on the host is beneficial for overall performance in a DSS environment. Limiting planned throughput to a fraction of the theoretical maximum (100-120 MB/s per port is a good rule of thumb) will ensure that enough bandwidth is available for connectivity between the Symmetrix DMX and the host.

Figure 7-2 Relationship between host blocksize and IOPS/throughput

7.3.2 Symmetrix cache

Another important performance consideration is to ensure that an appropriate amount of memory cache is installed in the Symmetrix DMX. All I/O requests from hosts attached to the array are serviced from the Symmetrix DMX memory cache. As such, a lack of memory cache can seriously degrade both read and write performance. With newly purchased arrays, the sales team appropriately sizes the cache based on the number and size of physical spindles, configuration (including number and type of volumes), replication requirements (SRDF, for example), and customer requirements. As additional physical drives are added, or as configuration or cache requirements change, it is important to monitor and verify that the cache size meets the needs of the changing storage environment.

Memory cache usage can be monitored through a number of Symmetrix DMX monitoring tools. Primary among these is ControlCenter Performance Manager (formerly known as Workload Analyzer). Performance Manager contains a number of views that analyze memory cache utilization at both the hypervolume and overall system level. Views provide detailed information on specific component utilizations, including disks, directors (front end and back end), and cache.

Symmetrix cache plays a key role in host I/O read and write performance. Read performance can be improved through prefetching by the Symmetrix DMX if the reads are sequential in nature: Enginuity algorithms detect sequential read activity and prestage reads from disk into cache before the data is requested. Write performance is greatly enhanced because all writes are acknowledged back to the host when they reach Symmetrix DMX memory cache, rather than when they are written to disk. While reads from a specific hypervolume can use as much cache as is required to satisfy host requests, assuming free cache slots are available, the DMX limits the number of pending writes that can be held in cache for a single volume. Understanding how the Symmetrix DMX limits writes to memory cache is critical when planning for optimal performance.

The limit imposed by the Symmetrix DMX for writes by an individual hypervolume is called the write-pending limit. The write-pending limit is used to prevent high write rates to a single hypervolume from consuming all of the storage array memory cache for its use, at the expense of performance for reads or writes to other volumes. The write-pending limit for each hypervolume is determined at system startup and depends on the number and type of volumes configured and the amount of cache available. The limit is not dependent on the actual size of each volume. The more cache available, the more write requests that can be serviced in cache by each individual volume. While some sharing of unused memory cache may take place (although this is not guaranteed), an upper limit of three times the initial write-pending limit assigned to a volume is the maximum amount of memory cache any hypervolume can acquire for changed tracks. If the maximum write-pending limit is reached, destaging to disk must take place before new writes can come in. This forced destaging to disk before a new write can be received into cache limits writes to that particular volume to physical disk write speeds. Forced destage of writes can significantly reduce performance to a hypervolume should the write-pending limit be reached. If performance problems to a particular volume are identified, an initial step in determining the source of the problem should include verification of the number of writes and the write-pending limit for that volume.

In addition to limits imposed at the hypervolume level, there are additional write-pending limits imposed at the system level. Two key memory cache utilization points for the DMX are reached when 40 percent and 80 percent of the cache is used for pending writes. Under normal operating conditions, satisfying read requests from a host has greater priority than satisfying write requests. However, when pending writes consume 40 percent of memory cache, the Symmetrix DMX prioritizes reads and writes equally. This reprioritization can have a profound effect on database performance. The degradation is even more pronounced if cache utilization for writes reaches 80 percent. At that point, the DMX begins a forced destage of writes to disk, with discernible performance degradation to both writes and reads. If this threshold is reached, it is a clear indicator that reexamination of both the cache and the total I/O on the array is needed.

Write-pending limits are also established for Symmetrix metavolumes. Metavolumes are created by combining two or more individual hypervolumes into a single logical device that is presented to a host as a single logical unit (LUN). Metavolumes can be created as concatenated or striped. Striped metavolumes use a stripe size of 960 KB. Concatenated metavolumes write data to the first hyper in the metavolume (the meta head) and fill it before beginning to write to the next member of the meta. Write-pending limits for a metavolume are calculated on a member-by-member (hypervolume) basis.

Determining the write-pending limit and the current number of writes pending per hypervolume can be done simply using SYMCLI commands.

♦ The following SYMCLI command displays the write-pending limit for hypervolumes in a Symmetrix:

symcfg -v list | grep Pending

Max # of system write pending slots: 162901
Max # of DA write pending slots: 81450
Max # of device write pending slots: 4719

Depending on cache availability, the maximum number of write pending slots an individual hypervolume can use is up to three times the maximum number of device write-pending slots listed (3 * 4,719 = 14,157 write-pending tracks).

♦ The number of write-pending slots that a host's devices use can be found using the SYMCLI command:

symstat -i 5

Figure 7-3 shows the output from the symstat command.

Figure 7-3 Output from symstat indicating we have reached the write-pending limit


From this, we can see that device 14D has reached the device write-pending limit of 14,157. Further analysis should be made to determine the cause of the excessive writes and to identify methods of alleviating this performance bottleneck on the device.

Alternatively, Performance Manager may be used to determine the device write-pending limit, and whether device limits are being reached.

Figure 7-4 is a Performance Manager view displaying both the device write-pending limits and the device write-pending counts.

Figure 7-4 Write-pending count versus write-pending limit

Note that the number of memory cache boards can also have a minor effect on performance. When comparing Symmetrix DMX arrays that have the same total amount of memory cache, increasing the number of boards (for example, four memory cache boards with 16 GB each as opposed to two memory cache boards with 32 GB each) has a small positive effect on performance in DSS applications. This is due to the increased number of paths between the front-end directors and cache, which improves overall throughput. However, configuring additional boards is only helpful in high-throughput environments such as DSS applications. For OLTP workloads, where IOPS are more critical, additional cache directors provide no added performance benefit, because the number of IOPS per port or director is limited by the processing power of the CPUs on each board.

7.3.3 Back-end considerations

Back-end considerations are typically the most important part of optimizing performance on the Symmetrix DMX. Advances in disk technologies have not kept up with performance increases in other parts of the storage array, such as director and bandwidth (Direct Matrix versus bus) performance. Disk access speeds have increased by a factor of three to seven in the past eight years, while other components have easily increased one to three orders of magnitude. As such, most performance bottlenecks in the Symmetrix DMX are attributable to physical spindle limitations.

An important consideration for back-end performance is the number of physical spindles available to handle the anticipated I/O load. Each disk is capable of a limited number of operations. Algorithms in the Symmetrix DMX Enginuity operating environment optimize I/Os to the disks. Although this helps to reduce the number of reads and writes to disk, access to disk, particularly for random reads, is still a requirement. If an insufficient number of physical disks are available to handle the anticipated I/O workload, performance will suffer. It is critical to determine the number of spindles required for a Sybase database implementation based upon I/O performance requirements, and not solely on the physical space considerations.

To reduce or eliminate Symmetrix DMX back-end performance issues, spread access to the disks across as many back-end directors and physical spindles as possible. EMC has long recommended that application data placement “go wide before going deep”: performance is improved by spreading data across the back-end directors and disks, rather than confining specific applications to specific physical spindles. Significant attention should be given to balancing the I/O on the physical spindles. Understanding the I/O characteristics of each data file and separating high application I/O volumes onto separate physical disks will minimize contention and improve performance. Implementing Symmetrix Optimizer may also help to reduce I/O contention between hypervolumes on a physical spindle. Symmetrix Optimizer identifies I/O contention on individual hypervolumes and nondisruptively moves one of the hypers to a new location on another disk, making it an invaluable tool for reducing contention on physical spindles should workload requirements change in an environment.

Placement of data on the disks is another performance consideration. Due to the rotational properties of disk platters, tracks on the outer parts of the disk perform better than inner tracks. While the Symmetrix DMX Enginuity algorithms smooth out much of this variation, small (perhaps 15 percent) performance increases can be achieved by placing high I/O objects on the outer parts of the disk. Of more importance, however, is minimizing the seek times associated with the disk head moving between hypervolumes on a spindle. Physically locating higher I/O devices together on the disks can significantly improve performance. Disk head movement across the platters (seek time) is a large source of latency in I/O performance. By placing higher I/O devices contiguously, disk head movement may be reduced, increasing I/O performance of that physical spindle.


7.4 RAID considerations

7.4.1 Types of RAID

The following RAID configurations are available on the Symmetrix DMX:

♦ Unprotected – This configuration is not typically used in a Symmetrix DMX environment for production volumes. BCVs and occasionally R2 devices (used as target devices for SRDF) can be configured as unprotected volumes.

♦ RAID 1 – These are mirrored devices and are the most common RAID type in a Symmetrix DMX. Mirrored devices require writes to both physical spindles. However, intelligent algorithms in the Enginuity operating environment can use both copies of the data to satisfy read requests that are not already in the cache of the Symmetrix DMX. RAID 1 offers optimal availability and performance but at an increased cost over other RAID protection options.

♦ RAID 5 – A relatively recent addition to Symmetrix data protection (Enginuity 5670+), RAID 5 stripes parity information across all volumes in the RAID group. RAID 5 offers good performance and availability at a decreased cost. Data is striped using a stripe width of four tracks (128 KB). RAID 5 is configured in either RAID 5 (3+1) (75 percent usable) or RAID 5 (7+1) (87.5 percent usable) configurations. Figure 7-5 shows the configuration for RAID 5 (3+1), while Figure 7-6 shows how a random write is performed in a RAID 5 environment.

♦ RAID-S – A proprietary EMC RAID configuration with parity information on a single hypervolume. RAID-S functionality was optimized and renamed Parity RAID in the Symmetrix DMX.

♦ Parity RAID – Prior to the availability of RAID 5, Parity RAID was implemented in storage environments that required a lower cost and did not have high performance requirements. Parity RAID utilizes a proprietary RAID protection scheme with parity information being written on a single hypervolume. Parity RAID is configured in 3+1 (75 percent usable) and 7+1 (87.5 percent usable) configurations. Parity RAID is not recommended in current DMX configurations, where RAID 5 should be used instead.

♦ RAID 10 – These are striped and mirrored devices. This native configuration is used only in mainframe environments; in open systems environments, an equivalent can be achieved by creating striped metavolumes from RAID 1 devices, as described next.


Figure 7-5 RAID 5 (3+1) layout detail

Figure 7-6 Anatomy of a RAID 5 random write

The following describes the process of a random write to a RAID 5 volume:

1. A random write is received from the host and is placed into a data slot in memory cache to be destaged to disk.


2. The write is destaged from cache to the physical spindle. When the drive receives the write, new parity information is calculated in memory cache on the drive by reading the old data and performing an exclusive OR (XOR) calculation with the new data.

3. The new parity information is written back to Symmetrix memory cache.

4. The new parity information is written to the appropriate parity location on another physical spindle.

Determining the appropriate level of RAID to configure in an environment depends on the availability and performance requirements of the applications that will use the Symmetrix DMX. Combinations of RAID types are configurable in the Symmetrix DMX with some exceptions. For example, storage may be configured as a combination of RAID 1 and RAID 5 (3+1) devices. Combinations of 3+1 and 7+1 Parity RAID or RAID 5 are currently not allowed in the same Symmetrix DMX. Likewise, mixing any types of RAID 5 and Parity RAID in the same frame is not allowed.

Until recently, RAID 1 was the predominant choice for RAID protection in Symmetrix storage environments. RAID 1 provides maximum availability and enhanced performance over other available RAID protections. In addition, performance optimizations such as Symmetrix Optimizer, which reduces contention on the physical spindles by nondisruptively migrating hypervolumes, and Dynamic Mirror Service Policy, which improves read performance by optimizing reads from both mirrors, were only available with mirrored volumes, not with Parity RAID devices. While mirrored storage is still the recommended choice for RAID configurations in the Symmetrix DMX, the relatively recent addition of RAID 5 storage protection provides customers with a reliable, economical alternative for their production storage needs.

RAID 5 storage protection became available with 5670 and later releases of the Enginuity operating environment. RAID 5 storage protection provides economic advantages over using RAID 1, while at the same time providing high availability and performance. RAID 5 implements the standard data striping and rotating parity across all members of the RAID group (either 3+1 or 7+1). Additionally, Symmetrix Optimizer functionality is available with RAID 5 in order to reduce spindle contention. RAID 5 provides customers with a flexible data protection option for dealing with varying workloads and service level requirements. With the advent of RAID 5 protection, using Parity RAID in Symmetrix DMX systems is not recommended.

7.4.2 RAID recommendations

In the Symmetrix DMX, Sybase databases can be deployed on RAID 5 protected disks for all but the most I/O performance-intensive applications. Databases used for test, development, QA, or reporting are likely candidates for RAID 5 protected volumes.

DSS applications are another potential candidate for deployment on RAID 5 storage. In many DSS environments, read performance greatly outweighs the need for rapid writes, because data warehouses typically perform loads off-hours or infrequently (once a week or month); read performance, in the form of database user queries, is significantly more important. Since RAID 5 affects only write performance, not reads, these types of applications are generally good candidates for RAID 5 storage deployments.


Conversely, production OLTP applications typically require small random writes to the database, and as such, are typically more suited to RAID 1 storage.

An important consideration when deploying RAID 5 is disk failure. When disks containing RAID 5 members fail, two primary issues arise: performance and data availability. Performance is affected while the RAID group operates in degraded mode, because the missing data must be reconstructed using parity and data information from the other members of the RAID group. Performance is also affected when the disk rebuild process is initiated after the failed drive is replaced or a hot spare disk is activated. Potential data loss is the other important consideration when using RAID 5. Multiple drive failures that cause the loss of multiple members of a single RAID group result in loss of data. While the probability of such an event is small, it is much higher in a 7+1 RAID 5 environment than with RAID 1. As such, the effects of data loss due to the loss of multiple members of a RAID 5 group should be carefully weighed against the benefits of using RAID 5.

The bottom line in choosing a RAID type is ensuring that the configuration meets the needs of the customer’s environment. Considerations include read and write performance, balancing the I/O across the spindles and back end of the Symmetrix, tolerance for reduced application performance when a drive fails, and the consequences of losing data in the event of multiple disk failures. In general, EMC recommends RAID 1 for all types of customer data including Sybase databases. However, RAID 5 configurations may be beneficial for many applications and should be considered.

7.4.3 Symmetrix metavolumes

Individual Symmetrix hypervolumes of the same RAID type (RAID 1, RAID 5) may be combined to form a virtualized device called a Symmetrix metavolume. Metavolumes are created for a number of reasons, including:

♦ A desire to create devices that are greater than the largest hypervolume available (in 5670 and 5671 Enginuity operating environments, this is currently just under 31 GB per hypervolume).

♦ To reduce the number of volumes presented down a front-end director or to an HBA. A metavolume presented to an HBA only counts as a single LUN even though the device may consist of a large number of individual hypers.

♦ To increase performance to a LUN by spreading I/O across more physical spindles.

There are two types of metavolumes: concatenated and striped. With concatenated metavolumes, the individual hypers are combined to form a single volume such that data is written to the first hypervolume sequentially before moving to the next. Writes to the metavolume start with the meta head and proceed on that physical volume until it is full, then move on to the next hypervolume. Striped metavolumes, on the other hand, write data across all members of the device. The stripe size is set at two cylinders, or 960 KB.

In nearly all cases, striped metavolumes are recommended over concatenated volumes in Sybase database environments. The exception to this general rule occurs in specific DSS environments where RAID 5 metavolumes may interrupt Enginuity prefetching algorithms, as discussed in the next section.
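
For reference, a striped metavolume might be formed with a symconfigure command file along the following lines. The Symmetrix ID, device numbers, and file name are hypothetical, and the exact syntax should be verified against the Solutions Enabler documentation for the Enginuity level in use:

form meta from dev 00C0, config=striped;
add dev 00C1:00C3 to meta 00C0;

symconfigure -sid 1234 -f form_meta.cmd commit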


7.5 Host- versus array-based striping

Another commonly disputed issue when configuring a storage environment for maximum performance is whether to use host-based or array-based striping in Sybase environments. Striping of data across the physical disks is critically important to database performance because it allows the I/O to be distributed across multiple spindles. Although disk drive size and speeds have increased dramatically in recent years, spindle technologies have not kept pace with host CPU and memory improvements. Performance bottlenecks in the disk subsystem can develop if careful attention is not paid to the data storage requirements and configuration.

Oracle has recommended the SAME (Stripe and Mirror Everything) configuration for many years. However, Oracle has never specified where the striping should take place, and Sybase has made no recommendation in either case. In general, the discussion concerns trade-offs between performance and manageability of the storage components. The following section presents the trade-offs of host-based and array-based striping.


7.5.1 Host-based striping

Host-based striping is configured through the Logical Volume Manager utilized on most open systems hosts. For example, in an HP-UX environment, striping is configured when logical volumes are created in a volume group as follows:

lvcreate -i 4 -I 64KB -L 1024 -n stripevol redovg

In this case, the striped volume is named stripevol (the -n flag), is created in the volume group redovg, has a size of 1 GB (-L 1024), uses a stripe size of 64 KB (-I 64KB), and is striped across four physical volumes (-i 4). The specifics of striping data at the host level are operating system dependent.

Two important things to consider when creating host-based striping are the number of disks to configure in a stripe set and an appropriate stripe size. While no definitive answer can be given that optimizes these settings for any given configuration, the following are general guidelines to use when creating host-based stripes:

♦ Ensure that the stripe size is a power-of-two multiple of the track size configured on the Symmetrix DMX (a multiple of 32 KB on DMX-2 and 64 KB on DMX-3) and that it aligns with the database block size and host I/O size. Alignment of database blocks, Symmetrix tracks, host I/O size, and the stripe size can have a considerable impact on database performance. Typical stripe sizes are 64 KB to 256 KB, although the stripe size can be as high as 512 KB or even 1 MB.

♦ Stripe widths that are multiples of 4 physical devices are generally recommended, although this may be increased to 8 or 16 to accommodate LUN presentation or SAN configuration restrictions. Take care with RAID 5 metavolumes to ensure that members do not end up on the same physical spindles, as this may affect performance.

♦ When configuring an SRDF environment, use smaller stripe sizes (32 KB, for example), particularly for the transaction logs. This enhances performance in synchronous SRDF environments, which are limited to one outstanding I/O on the link (see the example following this list).

♦ Data alignment (along block boundaries) can play a significant role in performance, particularly in Windows environments. Refer to operating-system-specific documentation to learn how to align data blocks from the host along Symmetrix DMX track boundaries.

♦ Ensure that volumes used in the same stripe set are located on different physical spindles. Using volumes from the same physicals reduces the performance benefits of using striping.
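
Following the lvcreate form shown earlier, a hypothetical HP-UX logical volume for a transaction log in a synchronous SRDF environment might use a smaller stripe size (the volume group and names are illustrative only):

lvcreate -i 4 -I 32 -L 2048 -n logvol logvg

Here -I 32 requests a 32 KB stripe size across four physical volumes for a 2 GB volume.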

7.5.2 Symmetrix-based striping (metavolumes)

An alternative to using host-based striping is to stripe at the Symmetrix DMX level. Striping in the Symmetrix is accomplished through the use of striped metavolumes, as discussed in the previous section. Individual hypervolumes are selected and striped together, forming a single LUN that is presented through the front-end director to the host. At the Symmetrix level, all writes to this single LUN are striped. Currently, the only stripe size available for a metavolume is 960 KB. It is possible to create metavolumes with up to 255 hypervolumes, although in practice metavolumes are usually created with 4 to 16 devices.

7.5.3 Striping recommendations

Determining the appropriate striping method depends on a number of factors. In general, striping is a trade-off between manageability and performance. With host-based striping, CPU cycles are used to manage the stripes; Symmetrix metavolumes require no host cycles to stripe the data. This small performance cost of host-based striping is offset, however, by the fact that each device in a striped volume group maintains its own I/O queue, increasing performance over a Symmetrix metavolume, which has only a single I/O queue on the host. Recent tests have shown that striping at the host level provides somewhat better performance than comparable Symmetrix-based striping, and it is generally recommended when performance is paramount. Host-based striping is also recommended in environments utilizing synchronous SRDF, since stripe sizes on the host can be tuned to smaller increments than are currently available with Symmetrix metavolumes, thereby increasing performance.

Management considerations, however, generally favor Symmetrix-based metavolumes over host-based stripes. In many environments, customers have achieved high-performance back-end layouts on the Symmetrix by allocating all of the storage as four-way striped metavolumes. This has the advantage that any volume selected for host data is always striped, with reduced chances for contention on any given physical spindle. Storage added to any host volume group is also striped, since the additional storage is itself configured as a metavolume. Managing storage added to an existing volume group that uses host-based striping may be significantly more difficult, in some cases requiring a full backup, reconfiguration of the volume group, and a restore of the data to successfully expand the stripe.

An alternative gaining popularity recently in Sybase environments is the combined use of both host-based and array-based striping. Known as double striping, or a plaid, this configuration uses striped metavolumes in the Symmetrix array, which are then presented to a volume group and striped again at the host level. This has many advantages in database environments where read access is small and highly random in nature. Since I/O patterns are relatively unknown, access to data is spread across a large number of physical spindles, decreasing the probability of contention on any given disk. In some cases, however, double striping can interfere with data prefetching at the Symmetrix DMX level when large sequential reads predominate, so this configuration may not be appropriate for DSS workloads.

Another method of double striping the data is through the use of Symmetrix metavolumes and RAID 5. A RAID 5 hypervolume stripes data across either four or eight physical disks using a stripe size of 128 KB. Striped metavolumes stripe data across two or more hypers using a stripe size of 960 KB. When using striped metavolumes with RAID 5 devices, ensure that members do not end up on the same physical spindles, as this will adversely affect performance. In some cases, double striping using this method may also affect prefetching for long, sequential reads.

The decision of whether to use host-based, array-based, or double striping in a storage environment has elicited considerable fervor on all sides of the argument. While each configuration has positive and negative factors, the important thing is to ensure that some form of striping is used for the storage layout. The appropriate layer for disk striping can have a significant impact on the overall performance and manageability of the database system. Deciding which form of striping to use depends on the nature and requirements of the database environment in which it is configured.

With the advent of RAID 5 data protection in the Symmetrix DMX, an additional option of triple striping data using RAID 5, host-based striping, and metavolumes combined is now available. However, triple striping increases data layout complexity, and in testing has shown no performance benefits over other forms of striping. In fact, it has actually been shown to be detrimental to performance and as such, is not recommended in any Symmetrix DMX configuration.

7.6 Data placement considerations

Placement of the data on the physical spindles can potentially have a significant impact on Sybase database performance. Placement factors that affect database performance include the following:

♦ Hypervolume selection for specific database files on the physical spindles themselves

♦ The spread of database files across the spindles to minimize contention

♦ The placement of high I/O devices contiguously on the spindles to minimize head movement (seek time)

♦ The spread of files across the spindles and back-end directors in order to reduce component bottlenecks

Each of these factors is discussed next.

7.6.1 Disk performance considerations

As shown in Figure 7-7, there are five main considerations for spindle performance:

♦ Rotational Speed – this is due to the need for the platter to rotate underneath the head to correctly position the data that needs to be accessed. Rotational speeds for spindles in the Symmetrix DMX range from 7,200 to 15,000 rpm. The average rotational delay is the time it takes for one half of a revolution of the disk. In the case of a 15k rpm drive, this would be about 2 milliseconds.

♦ Actuator Positioning (Seek Time) – this is the time it takes the actuating mechanism to move the heads from their present position to a new position. This delay averages a few milliseconds in length and depends on the type of drive. For example, a 15k rpm drive has an average seek time of approximately 3.5 ms for reads and 4 ms for writes, with a full disk seek of 7.4 ms for reads and 7.9 ms for writes.

♦ Interface Speed – this is a measure of the transfer rate from the drive into the Symmetrix cache. It is important to ensure that the transfer rate between the drive and cache is greater than the drive's rate to deliver data. The delay caused by this is typically very small, on the order of a fraction of a millisecond.

♦ Areal Density – this is a measure of the number of bits of data that fit in a given surface area of the disk. The greater the density, the more data per second that can be read from the disk as it passes under the disk head.

♦ Cache Capacity and Algorithms – newer disk drives have improved read and write algorithms, as well as onboard memory cache, to improve the transfer of data in and out of the drive and to make parity calculations for RAID 5.

Cache capacity and algorithms, along with interface speed, combine to produce the data transfer delay. Rotational speed and areal density combine to produce the rotational delay in the transfer of data off the disk. Disk latency, then, is typically measured as the sum of these two elements, data transfer delay and rotational delay, plus the seek time. Data transfer delays are typically fractions of a millisecond, so rotational delays and delays due to repositioning the actuator heads are the primary sources of latency on a physical spindle. Additionally, while rotational speeds of disk drives have increased from top speeds of 7,200 rpm up to 15,000 rpm, the rotational delay continues to be the largest source of latency in disk assemblies.

Rotational delays are more acute on the inner parts of the drive: more data can be read per second from the outer parts of the drive than from the inner regions. Therefore, performance is significantly better on the outer parts of the disk. In many cases, performance improvements of almost 100 percent can be realized on the outer cylinders of a physical spindle. This performance differential typically leads customers to place high I/O objects on the outer portions of the drive.

While placing high I/O objects such as transaction logs on the outer edges of the spindles has merit, performance differences across the drives inside the Symmetrix DMX are significantly smaller than stand-alone disk characteristics would suggest. Enginuity operating environment algorithms, particularly those that optimize the ordering of I/O as the disk heads scan across the disk, greatly reduce differences in hypervolume performance across the drive. Although this smoothing of disk latency may actually increase the delay of a particular I/O, the overall performance characteristics of I/Os to hypervolumes across the face of the spindle will be more uniform.


Figure 7-7 Disk performance factors

7.6.2 Hypervolume contention

Disk drives can receive only a limited number of read or write I/Os before performance degradation occurs. While disk improvements and cache, both on the physical drives and in disk arrays, have improved disk read and write performance, the physical devices can still become a critical bottleneck in Sybase database environments. Eliminating contention on the physical spindles is a key factor in ensuring maximum Sybase performance on Symmetrix DMX arrays.

Contention can occur on a physical spindle when I/O (read or write) to one or more hypervolumes exceeds the I/O capacity of the disk. While contention on a physical spindle is undesirable, it can be rectified by migrating high I/O data onto other devices with lower utilization. This can be accomplished using a number of methods, depending on the type of contention found. For example, when two or more hypervolumes on the same physical spindle have excessive I/O, contention may be eliminated by migrating one of the hypervolumes to another, less utilized physical spindle. This can be done through processes such as LVM mirroring at the host level, or by using tools such as EMC Symmetrix Optimizer to nondisruptively migrate data from impacted devices. One method of reducing hypervolume contention is careful layout of the data across the physical spindles on the back end of the Symmetrix. Another is striping, either at the host level or inside the Symmetrix.

Hypervolume contention can be found in a number of ways. Sybase-specific data collection and analysis tools, such as sp_sysmon and Historical Monitor, and third-party tools such as Bradmark Surveillance for Sybase and BMC's DB-XRay, can help to identify areas of reduced I/O performance in the database data files. Additionally, EMC tools such as Performance Manager can help to identify performance bottlenecks in the Symmetrix DMX array. Establishing baselines of the system and proactive monitoring are essential in maintaining an efficient, high-performance database.

Most often, tuning database performance on the Symmetrix is performed postimplementation. This is unfortunate because with a small amount of up-front effort and detailed planning, significant I/O contention issues could be minimized or eliminated in a new implementation. While detailed I/O patterns of a database environment are not always well known, particularly in the case of a new system implementation, careful layout consideration of a database on the Symmetrix back end can save time and future effort in trying to identify and eliminate I/O contention on the disk drives.

7.6.3 Maximizing data spread across the back end

A long-standing data layout recommendation at EMC has been “Go wide before going deep.” This means that data placement on the Symmetrix DMX should be spread across the back-end directors and physical spindles before locating data on the same physical drives. By spreading the I/O across the Symmetrix back end, I/O bottlenecks in any one array component can be minimized or eliminated.

Given recent improvements in Symmetrix DMX component technologies, such as CPU performance on the directors and the Direct Matrix architecture, the most common bottleneck in new implementations is contention on the physical spindles and the back-end directors. To reduce these contention issues, examine the I/O requirements for each application that will use the Symmetrix storage. From this analysis, make a detailed layout that balances the anticipated I/O requirements across both back-end directors and physical spindles.

Before data is laid out on the back end of the DMX, it is helpful to understand the I/O requirements for each of the file systems or volumes being laid out. Several methods are available for optimizing layout on the back-end directors and spindles. One time-consuming method is to create a map of the hypervolumes on physical storage, including hypervolume presentation by director and physical spindle, based on information available in EMC ControlCenter. The environment is documented using a tool such as Excel, with each hypervolume marked on its physical spindle and disk director. Using this map of the back end and volume information for the database elements, preferably categorized by I/O requirement (high/medium/low, or by anticipated reads and writes), the physical data elements and I/Os can be evenly spread across the directors and physical spindles.

This type of layout can be extremely complex and time consuming. Additional complexity is added when RAID 5 hypers are included in the configuration. Since each RAID 5 hypervolume actually resides on either four or eight physical volumes, uniquely mapping out each data file or database element is beyond what most customers feel provides value. In these cases, one alternative is to rank each of the database elements or volumes in terms of anticipated I/O. Once ranked, each element may be assigned a hypervolume in order on the back end. Since BIN file creation tools almost always spread contiguous hypervolume numbers across different elements of the back end, this method of assigning the ranked database elements usually provides a reasonable spread of I/O across the spindles and back-end directors in the Symmetrix DMX. In combination with Symmetrix Optimizer, this method of spreading the I/O is normally effective in maximizing the spread of I/O across DMX components.

7.6.4 Minimizing disk head movement

Perhaps the key performance consideration controllable by a customer when laying out a database on the Symmetrix DMX is minimizing head movement on the physical spindles. Minimizing head movement is accomplished by positioning high I/O hypervolumes contiguously on the physical spindles. Disk latency caused by interface or rotational speeds cannot be controlled by layout considerations. The only disk drive performance considerations that can be controlled are the placement of data onto specific, higher-performing areas of the drive (discussed in a previous section) and the reduction of actuator movement, by trying to place high I/O objects in adjacent hypervolumes on the physical spindles.

The previous section described how volumes can be ranked by anticipated I/O requirements. Using a documented “map” of the back-end spindles, high I/O objects can be placed on the physical spindles, grouping the highest I/O objects together. Recommendations differ as to whether it is optimal to place the highest I/O objects together on the outer parts of the spindle (that is, the highest-performing parts of a physical spindle) or in the center of the spindle. Since there is no real consensus, the historical recommendation of placing high I/O objects together on the outer part of the spindle remains reasonable. Grouping these high I/O objects on the outer parts of the spindle helps reduce disk actuator movement during reads and writes to each hypervolume on the spindle, thereby improving a controllable parameter in any data layout exercise.

7.7 SRDF and Sybase Bulk Copy Program (bcp)

This section discusses how to configure and tune Sybase ASE to improve bulk data copying with SRDF on Symmetrix storage systems.

The Sybase Bulk Copy Program (bcp) is a high-speed command utility for moving Sybase ASE database data to or from a data file in a user-specified format. The most common use of bulk copying is for batch update processing. Making data available to different systems and consolidating data for data warehouses are examples of using bcp to copy data to and from database servers.

Data processing performance depends on many factors involving host hardware, the storage subsystem, database structures, data, and applications. This discussion is based on generally accepted best practices and test results in EMC engineering labs. We recommend performance testing using your own system configuration and data.

7.7.1 Overview of bcp

The bcp (Bulk Copy Program) is a high-speed command utility for moving Sybase ASE table and index data to or from a data file in a user-specified format. It can be invoked from the operating system prompt and accepts data in any character or binary format.


bcp can be used to perform the following functions:

♦ Copy data into or out of a table

♦ Copy data from one database or host server to another

♦ Import data previously associated with another program

♦ Transfer data into or out of another database, in any format, for use with other programs

7.7.2 bcp data format

The bcp utility offers two different data formats: character and native. bcp allows the user to move data out of a database using the bcp out syntax, copying the data to a file. Users may copy data from a file into a database using the bcp in syntax.

When bulk copying character data, all data must be converted from the character datatype to the database datatype (for example, datetime, float, money, or decimal), and this conversion carries significant overhead. We recommend the use of character format to move data between different hardware platforms, different operating systems, or different major releases of ASE.

The bcp native format uses operating system formats, so it is operating-system dependent. Nevertheless, it provides better performance than character format. For large data transfers, we recommend using native format whenever possible.
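
For example, a table might be copied out to a file and back in using character format, with the server, database, and file names as placeholders; substituting -n for -c selects native format:

bcp testdb..test_table out test_table.dat -Usa -SSYBSERVER -c
bcp testdb..test_table in test_table.dat -Usa -SSYBSERVER -c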

7.7.3 bcp speed

The bcp provides two speeds when copying data into a table: fast bcp and slow bcp.

Fast bcp is used if a table has no indexes or triggers and the select into/bulkcopy/pllsort option is set on. Since the inserts are not logged in the transaction log, fast bcp performs better than slow bcp, but the loaded transactions cannot be recovered from the log. Remember to turn off select into/bulkcopy/pllsort after the bcp completes.
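
A minimal sketch of enabling the option before a fast bcp load follows; the database name is a placeholder, and sp_dboption is run from the master database:

use master
go
sp_dboption testdb, "select into/bulkcopy/pllsort", true
go
use testdb
go
checkpoint
go

After the load completes, run the same sequence with false, followed by another checkpoint, to turn the option off.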

Slow bcp is rarely used. The inserts are recorded in the database transaction log, providing greater data recoverability. Generally, slow bcp will negatively affect SRDF performance due to the increased number of log records that must be written to the database and sent across the RDF link to the remote Symmetrix system.

Sorting the raw file data before loading it into a table via bcp will improve bcp performance. After bcp has finished, create the indexes as needed with the sorted_data option to reduce index creation time. For unsorted data, load the table first and then create the indexes; this way, the indexes are not created and updated for each new data row. If slow bcp must be used to log everything for recoverability, a clustered index will be populated more quickly than a nonclustered index.


7.7.4 Batch size

Batch size for bcp determines how often the server commits the rows that have already been loaded. It guarantees recovery up to the last completed batch and makes the transaction log less likely to fill up when copying large amounts of data into a table. The default batch size is 100 rows, but it can be set to a higher value using the -b flag with bcp. A typical batch size is 10,000 rows.

7.7.5 Packet size

Using a larger network packet size for the Adaptive Server than the default (512 bytes) will improve the performance of large bulk copy operations. The -A size option can be used to specify the server network packet size for the bcp session (see the example following this list). For best performance, choose a server network packet size that is:

♦ Between the values of the default network packet size and max network packet size, and

♦ A multiple of 512.
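
Combining these options, a hypothetical native-format load with a 10,000-row batch size and a 4096-byte network packet size (a multiple of 512) might look like this:

bcp testdb..test_table in test_table.dat -Usa -SSYBSERVER -n -b 10000 -A 4096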

7.7.6 Partitioned table bulk copy

With a partitioned table, bcp performance can be greatly improved by executing several bcp sessions in parallel. This reduces lock contention and distributes I/O over multiple devices. However, keep in mind the following guidelines when executing bcp sessions against a partitioned table (see the example following this list):

♦ A partitioned table improves performance only when bulk copying into a table.

♦ The performance of slow bcp does not improve as much with partitioned tables compared to fast bcp.

♦ Network traffic can quickly become a bottleneck when multiple bcp sessions are being executed. If possible, use a local connection to the Adaptive Server to avoid this problem.
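
As a sketch, parallel sessions might be started against the individual partitions of a three-way partitioned table using bcp's partition-number suffix. The names are placeholders, and the colon syntax should be verified for the ASE version in use:

bcp testdb..test_table:1 in part1.dat -Usa -SSYBSERVER -n &
bcp testdb..test_table:2 in part2.dat -Usa -SSYBSERVER -n &
bcp testdb..test_table:3 in part3.dat -Usa -SSYBSERVER -n &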

7.7.7 Buffer caches and I/O block size

Sybase ASE memory is dedicated to the Sybase instance by the host operating system when the server is brought online. Unlike other software applications, ASE does not participate in memory swapping with the host operating system. ASE memory is allocated to the data cache (80 percent by default) and the procedure cache (20 percent by default). The data cache is divided into buffer pools, and users can configure it by splitting it into multiple named data caches. Databases or database objects can be bound to named data caches with the intent of improving performance.

Each named data cache, by default, contains a 2 KB memory pool. The memory pool, or buffer pool, size is also configurable. The following examples show how a named cache is created and bound to an object, and how the buffer pool size is configured.


Example:

Create a named cache called bcp_cache and assign it 10 MB of memory:

sp_cacheconfig bcp_cache, "10M"

Example:

Bind the named cache to a database table:

sp_bindcache "bcp_cache", "testdb", "test_table"

Where:

testdb is the database name.

test_table is the name of the table; test_table will be bound to the bcp_cache memory pool.

Memory pools can be configured at 4 KB, 8 KB, and 16 KB in both named data caches (also known as user-defined data caches) and the default data cache. Every cache contains a 2 KB buffer pool for use by internal utilities, and it cannot be removed.

When creating a memory pool, the size of the pool is very important. If the pool is too small, cache paging occurs, which creates more I/O. If the pool is too large, it wastes memory that could be allocated elsewhere. As always, Sybase recommends testing the environment to find the optimal size.

Example:

Configure the buffer pool size for the named data cache bcp_cache:

sp_poolconfig bcp_cache, "5M", "4K"

This creates a 4 KB buffer pool from 5 MB of the memory pool named bcp_cache. Recall that bcp_cache was created with 10 MB of memory; the other 5 MB remains configured at the default buffer pool size (2 KB).

Sybase ASE provides read and write I/O block sizes of 2 KB, 4 KB, 8 KB, and 16 KB. Large I/O reduces I/O cost by reading and writing multiple database pages in a single I/O, which benefits operations such as bcp. To take full advantage of the larger block size for bcp, configure the named cache in which the tables reside for the larger I/O size, bind the named cache to the tables just before running bcp, and then unbind the named cache when bulk copy processing is complete.
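A minimal sketch of that sequence, reusing the hypothetical bcp_cache, testdb, and test_table names from the earlier examples:

sp_poolconfig bcp_cache, "5M", "16K"
sp_bindcache "bcp_cache", "testdb", "test_table"

Run the bcp session(s), and then unbind the cache:

sp_unbindcache "testdb", "test_table"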

It is suggested that bcp be run only during periods of low to moderate transaction activity.

7.7.8 Log I/O size

ASE defines 4 KB as the default log I/O size for a database. However, the log I/O size can be changed by using the sp_logiosize system procedure. In this way, performance can be improved by reducing the number of times that ASE writes transaction log pages to disk, which is very useful when intensive write activity occurs (such as during bcp processing).

The value specified for sp_logiosize must correspond to an existing memory pool configured for the cache used by the database's transaction log.

Executing sp_logiosize "all" lists all databases on the server and the cache in use by each log.

Executing sp_logiosize without parameters reports the log I/O size of the current database.

Executing sp_logiosize "xx" in the current database changes the log I/O size to xx KB.
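For illustration, assuming a 16 KB memory pool already exists in the cache used by the current database's transaction log, a session might look like this (the size argument is in kilobytes, passed as a string):

sp_logiosize "all"
sp_logiosize
sp_logiosize "16"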

7.8 Improving slow bcp performance

As Sybase recommends, increasing the log I/O size improves bcp performance by reducing the number of times that the database server writes transaction log pages to disk. The following results are from a preliminary test to determine how a large log I/O size can improve slow bcp performance.

After creating a 10 MB, 16 KB memory pool and binding it to the test database, we changed the logiosize of the transaction log to use 16 KB I/O. All other configuration parameters were kept at their respective defaults to simplify the testing.

Table 7-1 Large I/O log size bcp test results

Log I/O size   bcp loading time (seconds),   bcp loading time (seconds),
               clustered index               nonclustered index
2 KB           160                           217
8 KB           90                            135
16 KB          89                            111

Batch size is 5,000 rows.

Based on the testing, we conclude that using a log I/O size of 16 KB or 8 KB improves slow bcp performance by as much as 31 percent, although there was no major performance difference between the 16 KB and 8 KB log I/O sizes. Second, creating a clustered index rather than a nonclustered index improves slow bcp performance by roughly 33 percent. Thus, using a large log I/O size together with a clustered index will improve slow bcp performance. However, it is important to balance this against the other database activities that use the 2 KB memory pool.


Chapter 8 EMC ControlCenter and Sybase

This chapter presents these topics:

8.1 Storage Allocation
8.2 Monitoring
8.3 Performance Management
8.4 ECC Administration
8.5 Data Protection


EMC ControlCenter is an integrated family of products that enable users to discover, monitor, automate, provision, and report on networks, host resources, and storage resources across the entire information environment. From a single Console, ControlCenter will monitor and manage:

♦ Connectivity components — Fibre Channel switches and hubs

♦ Host components — Host operating systems, file systems, volume managers, databases, and backup applications

♦ Storage arrays — EMC’s Symmetrix, CLARiiON, and other vendors’ storage arrays

ControlCenter displays a consolidated view of the storage environment, allowing users to monitor the health of, track the status of, report on, and control each object from a Console anywhere on the network.

ControlCenter is designed for use in a heterogeneous environment of multivendor storage networks, storage hosts, and storage resources. Information can reside on technologically disparate devices running a variety of operating systems, in geographically diverse locations.

The ability to manage host applications and their associated storage needs across disparate platforms from a single interface simplifies the task of storage management and makes it possible to implement cross-platform, storage-wide strategies.

ControlCenter manages a variety of storage network connectivity components, different host platforms, and an assortment of host applications. ControlCenter monitors and reports on the following host applications:

♦ Novell

♦ Active Directory

♦ Oracle, Informix, Sybase, SQL Server, or DB2 databases

♦ IBM Tivoli Storage Manager, VERITAS NetBackup

♦ EMC NetWorker™ backup applications

For a complete listing of ControlCenter storage network, host, and application support, and for detailed information, refer to the document titled ControlCenter Overview.

EMC ControlCenter supports Sybase and provides an agent that allows a user to view device mapping and database configuration information. Users must install and configure the entire EMC ControlCenter infrastructure, which involves the installation of the ControlCenter Repository, the ControlCenter Server (ECC Server), Data Store, Master Agent, Workload Analyzer Performance View, ControlCenter StorageScope, and StorageScope API, in order to use the functionality described in this chapter.

Assuming the EMC ControlCenter infrastructure has been installed and configured, a user may choose to install only the ControlCenter client Console to view the Sybase server environment. The client Console communicates with the ControlCenter Server and provides a view of the Enterprise Storage Network and tools to monitor it.

Installation of the EMC ControlCenter infrastructure is described in the ControlCenter Installation Guide Volume I (open systems).

Fundamentally, the EMC ControlCenter storage and application infrastructure is viewed through the use of objects. Objects allow users to select, view, and perform actions in EMC ControlCenter. Examples of objects include storage arrays, directors, devices, hosts, switches, ports, or a grouping of objects. Users select objects from anywhere in the interface: a tree panel, a target panel, from within a view, and in some cases even from within a dialog box. Users can then select an operation valid for that object without leaving the current application to open another, and can switch between different views, perhaps stepping back for the complete picture or drilling down for successively greater levels of detail.

The following sections describe the use of these five main EMC ControlCenter features:

♦ Storage Allocation

♦ Monitoring

♦ Performance Management

♦ ECC Administration

♦ Data Protection

These features are described throughout this section.

This is merely an overview of the capabilities of this product and how they may benefit the Sybase environment or application. For more information, refer to the EMC ControlCenter documentation.

8.1 Storage Allocation

Storage Allocation allows a user to view Path Details, Free Space, Visual Storage, and Masking, together with six generic views: Alerts, At A Glance, Properties, Topology, Relationship, and Performance, that appear in all five task menus.

For demonstration purposes, Figure 8-1 on page 8-4 shows all Oracle, SQL Server, and Sybase instances. Notice the storage allocation folder is highlighted (dark blue).


Figure 8-1 Main ControlCenter Storage Allocation screen

In Figure 8-1, from the Storage Allocation object, we are viewing all databases that have been created. By clicking a particular Sybase instance name (losan070 and losan188 in this case), all databases within that instance are displayed. Right-click Databases to display a menu containing more options.

Clicking Properties (in the grey area below the object selections) displays information about each database (host, state and last update) in the panel on the right. A DBA or storage administrator may find this information useful because it helps identify the databases that reside on a particular Sybase instance.

Figure 8-2 on page 8-5 displays the databases that reside on the Sybase instance named losan070. The host that houses each database is listed as well as the current state of the database.


Figure 8-2 Databases on the Sybase Server named losan070


Figure 8-3 on page 8-6 shows, from the Storage Allocation object, the amount of free space on the volumes on which the ControlCenter database resides. A DBA or storage administrator may find this information useful because it can identify disk space allocation for the databases. Also, alerts can be set to notify the DBA when there is a disk space issue.

Figure 8-3 Free space allocation for the ControlCenter database

8.2 Monitoring

EMC ControlCenter does not perform monitoring of Sybase instances or databases at this time. Sybase provides the ASE historical monitor; other monitoring tools are also available, such as DBXray for Sybase from BMC Software and Foglight for Sybase from Quest Software. For more information, refer to http://www.bmc.com/supportu/hou_Support_ProdAllVersions/0,3646,19097_19695_8170_0,00.html or http://www.quest.com/foglight_cartridge_for_sybase/ .

8.3 Performance Management

EMC ControlCenter Console Performance view displays performance statistics for storage objects, host objects, and connectivity objects.

Users can display a table depicting real-time performance statistics that are collected directly from the agent monitoring the storage, host, or connectivity objects. In addition, users can produce charts displaying historical performance data for a specified period of time.

Figure 8-4 on page 8-7 shows the Performance View chart displaying (in real time) the number of I/Os per second for the selected time period (50 minutes).


Device reads, writes, and cache hits are displayed for the selected storage devices.

Figure 8-4 Performance View for selected storage devices

8.4 ECC Administration

The ECC Administration object provides a series of views to assist users with the administration tasks associated with ControlCenter authorization, usage, agent plug-ins, and other functions specific to the management and use of EMC ControlCenter.

Figure 8-5 on page 8-8 shows the ECC Administration menu.


Figure 8-5 ECC Administration menu

The ECC Administration menu lists the administration views: Authorization, Usage, Agents, and Policies, together with the six generic views: Alerts, At A Glance, Properties, Topology, Relationship, and Performance. Eight extra menu items—Topology, Discover, Migration, Alerts, DCP, Install, Reports, and Agents—appear on the menu bar.

8.5 Data Protection

The Data Protection object in EMC ControlCenter has two primary functions available to Sybase users: TimeFinder and SRDF. The TimeFinder option allows users to perform functions such as creating relationships between standard (STD) and BCV devices, splitting devices, and checking device status. The SRDF option allows users to perform functions such as defining relationships between devices, setting the mode of operation, and suspending/resuming the RDF link.

Figure 8-6 on page 8-9 shows the Data Protection menu.


Figure 8-6 Data Protection menu


Chapter 9 EMC SRDF and Sybase Replication Server

This chapter presents these topics:

9.1 EMC SRDF overview
9.2 Sybase Replication Server overview
9.3 Sybase Mirror Activator overview
9.4 Implementing Mirror Activator
9.5 EMC SRDF and Sybase Replication Server


This chapter describes some key features and functionality of EMC SRDF, Sybase Replication Server, and Sybase Mirror Activator. The intent is to differentiate these products and to explain where and how they can be integrated to best suit customer needs.

A discussion of storage-based replication versus transactional replication is warranted. Disk-based (or storage-based) replication systems, such as EMC SRDF, replicate the contents of physical devices from a primary storage unit (source site) to a secondary storage unit (target site).

The primary benefits of storage-based replication are:

♦ Zero data loss with synchronous replication.

♦ Redundancy and protection of primary devices at a secondary site.

♦ Allows near-instantaneous disk recovery operations.

♦ Does not require any host on the target side.

The disadvantages of storage-based replication are:

♦ Physical disk corruptions at the primary site may be propagated to the standby site.

♦ Data at the target site is not available to any application until a failover occurs.

Transaction replication systems, such as Sybase Replication Server, read the transaction log of the primary database, convert the log records into equivalent SQL commands, and then apply the SQL to a running database. Transaction replication is logical replication. It is based on the transaction log of the primary database, so it follows all of the data integrity “rules” of the database.

The primary benefits of transaction replication are:

♦ Standby databases are online and available for failover as well as decision support.

♦ Data integrity (transactional consistency) between primary and standby databases is guaranteed.

The disadvantage of transaction replication is that it is asynchronous, which allows for the possibility of data loss. If the primary site goes down after a transaction is committed, but before the replication system reads the primary log record, the transaction will not be replicated to the standby site.

9.1 EMC SRDF overview

The product introduction, Chapter 2 of this solutions guide, explains the features and functionality of SRDF in detail. The topics presented in this chapter involve the use of concurrent SRDF and SRDF consistency groups, which are also explained in detail in the product introductory chapter. For review, their definitions are as follows.


Concurrent SRDF allows users to have two remote copies available at any point in time by having two target R2 devices configured as concurrent remote mirrors of one source R1 device.

SRDF consistency groups are used to protect the consistency of one or more database management systems that span SRDF groups during a disaster. The implementation of an SRDF consistency group may include devices associated with other DBMSs, file system devices, and/or other Sybase databases or instances.

9.2 Sybase Replication Server overview

Replication Server provides a cost-effective way of setting up near-real-time standby copies of a database. Replication Server uses two separate database instances, asynchronously transferring records from the primary to the replicate database. Provided that only one primary replication server is used, transactional consistency is preserved.

The Sybase Replication Server warm standby consists of a primary (or active database) replicating to a target (or standby database) defined in the Replication Server system by a logical connection between the two. Changes to the primary database are copied to the standby using Replication Server.

One consideration when using a transactional replication product is latency at the replicate site—especially if the primary database has high transaction rates. Latency is the amount of time it takes to replicate data from the primary site to each database in the Replication Server system. System administrators must be cognizant of the throughput of the replication system when setting up the Replication Server environment. The use of replication involves a trade-off between data availability and data synchronization.

Figure 9-1 on page 9-3 shows the components associated with a Sybase Replication Server environment. Each component is described in detail following the figure.

Figure 9-1 Sybase Replication Server components


1. The primary database (PDB) has a log transfer manager (LTM) that receives transaction log records marked for replication. The LTM is a log-scanning thread that looks for transactions in the database log file that have been marked for replication.

2. Transactions that have been set up for replication are forwarded to the primary replication server (PRS).

3. The PRS manages the inbound and outbound queues in the stable device. The Replication Server stable device stores incoming and outgoing transactions. It is a disk device that can be dynamically configured.

4. Subscription information is held in the Replication Server system database (RSSD). The RSSD contains system catalogs including definitions, subscriptions, routing information, rejected transactions, errorlog information, and recovery information. The RSSD is much like the master device in the Adaptive Server in that it holds critical system information.

5. Subscriptions are made from the replicate sites to the primary sites for updates.

6. The PRS writes to the replicate database (RDB) or to other replication servers, depending on how the replication system has been configured.

7. The replicate Replication Server (RRS) keeps transactions in its stable device and applies them to other databases expecting data.

In most environments, the Sybase Replication Server uses the LAN/WAN as its primary transport mechanism between hosts.

The primary function of the Replication Server is to manage the distribution of replicated transactions to the standby database. It maintains its own device-based queues to provide guaranteed transaction delivery. Sybase Replication Server features several replication models that can be configured for distributing data from one database to another. These replication models include the following:

♦ Distributed primary fragments

♦ Corporate rollup

♦ Redistributed corporate rollup

♦ Warm standby

9.2.1 Distributed primary fragments

Applications that use the distributed primary fragment model include distributed tables that contain both primary and replicated data. The Replication Server at each site distributes modifications made to local primary data to other sites. It also applies modifications received from other sites to the local data.


9.2.2 Corporate rollup

The corporate rollup model has distributed primary fragments and a single, centralized consolidated replicate table. This consolidated replicate table is called a corporate rollup. It contains the data that has been rolled up from each primary site.

9.2.3 Redistributed corporate rollup

The redistributed corporate rollup model is similar to the corporate rollup. Primary fragments distributed at remote sites are rolled up into a consolidated table at the central site. At the site where the fragments are consolidated however, a replication agent processes the consolidated table as if it were primary data. The data is then forwarded to Replication Server for distribution to subscribers.

9.2.4 Warm Standby

The warm standby model consists of a primary (called the primary or active database) replicating to a target (called the standby database) defined in the Replication Server system by a logical connection between the two. One database functions as a standby copy of an active database. As clients update the active database, Replication Server copies transactions to the standby database, maintaining consistency between them. Should the active database fail for any reason, applications may be switched to the standby database, making it the active database, and resume operations with little interruption. Warm standby is the most popular of the Replication Server models and is typically used for decision support, queries, or analytics.

9.2.5 Materialization

Materialization is the act of initially loading data from the primary database (PDB) into the standby database (SDB). The process of database materialization must be completed before any automated replication can occur.

The view of the environment during materialization starts with the PDB. The PDB has devices associated with the data (PDB DATA) where the contents of all database object data is stored, and devices associated with the log (PDB LOG) where the database keeps a record of the changes that occur to the database.

Users must create a Sybase instance at the target site. The standby database must have the exact same disk configuration as the primary. Initialize the standby database (SDB) devices (via Sybase disk init commands) and issue the create database statement. At this point, a shell (or empty) database exists on the target and can be materialized from the PDB.
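A minimal sketch of that preparation, with hypothetical device names, paths, and sizes (the size syntax accepted by disk init and create database varies by ASE version):

disk init name = "sdb_data", physname = "/dev/rdsk/c2t0d115s2", size = "2G"

disk init name = "sdb_log", physname = "/dev/rdsk/c2t0d112s2", size = "512M"

create database testdb on sdb_data = "2G" log on sdb_log = "512M"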

The process of materialization via EMC SRDF involves bringing the primary and standby database devices into a paired relationship with one another via the EMC Symmetrix, and then synchronizing these devices so that the primary and standby contain exactly the same data. Use whatever SRDF configuration is easiest and best suits the current environment; the objective is to replicate the Sybase data and log devices over the SRDF link. During materialization, the standby database must be offline and the primary database must be quiesced (put into a read-only state). Once the device pairings are synchronized, they are split and the standby database is brought online as an exact copy of the primary database.

Figure 9-2 Database materialization process

The following describes the sequence of events in Figure 9-2 (a symcli sketch follows the list).

1. A relationship between the primary database data (R1(A)) and log (R1(C)) devices and the target database data (R2(A)) and log (R2(C)) devices must be defined.

2. Establish (synchronize) the data and log devices via SRDF.

3. Quiesce the PDB via Sybase commands and bring down the SDB.

4. Split the R1 and R2 devices after sync is complete.

5. Bring the SDB online and release the PDB from its quiesced state (quiesce release).
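A hedged symcli sketch of this sequence, assuming the data and log device pairings are listed in a hypothetical file named /usr/Sybase/matpairs on Symmetrix 047, RA group 1:

symrdf createpair -sid 047 -type rdf1 -rdfg 1 -file /usr/Sybase/matpairs -establish
symrdf -sid 047 -rdfg 1 -file /usr/Sybase/matpairs verify -synchronized

Quiesce the PDB and bring down the SDB using Sybase commands, then split the pairs:

symrdf -sid 047 -rdfg 1 -file /usr/Sybase/matpairs split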

9.3 Sybase Mirror Activator overview

The Mirror Activator solution is a combination of the Sybase Mirror Replication Agent (MRA), Sybase Replication Server, and a synchronous block replication product such as EMC SRDF.

The Mirror Replication Agent reads a primary database transaction log device in order to replicate transactions. It is specifically designed to remotely mirror log devices that reside on a separate host from the primary data server. The Mirror Replication Agent acquires transactions from the primary database transaction log and sends them to the Replication Server. The Replication Server then distributes the transactions to the standby database. The Mirror Replication Agent requires only read access to the mirror log device.

The Mirror Activator is designed specifically to be configured with the warm standby replication model only. The warm standby model consists of a primary (called the primary or active database) replicating to a target (called the standby database) defined in the Replication Server system by a logical connection between the two. Changes to the primary database are copied directly to the standby using Replication Server. The Sybase Mirror Activator system configuration assumes a new ASE warm standby environment rather than the migration of an existing environment.

Sybase Mirror Activator takes advantage of the strengths of both disk mirroring and transactional replication. It provides a zero-data-loss solution, significantly reduces RTO, and improves ROI from continued access to standby resources.

For additional product support documentation on Sybase Mirror Activator installation and configuration, refer to http://sybooks.sybase.com/mra.html

Figure 9-3 Sybase Mirror Activator in an EMC SRDF/S environment

1. Transactions on the primary database are synchronously replicated via SRDF to the target database log device R2(C).

The MA log device (R2(C)) is sometimes referred to as the PDB log mirror.

2. The Mirror Activator Agent (MA) reads the SRDF device (R2(C)) and asynchronously applies data to the Replication Server (RS).

3. The Replication Server (RS) converts the data to SQL and updates the standby database.

9.4 Implementing Mirror Activator

Joint development efforts have brought EMC and Sybase replication products together. Sybase Mirror Activator with EMC SRDF provides a zero-data-loss solution while giving customers a way to utilize and leverage idle resources at the target site. This section describes how to implement Sybase Mirror Activator with EMC SRDF, and the pros and cons of the various implementations.


Appendix G provides technical details, including command syntax, related to implementing Sybase Mirror Activator with EMC SRDF.

Table 9-1 on page 9-8 provides the minimum software and release levels required to implement the Mirror Activator with SRDF/S solution.

Table 9-1 Minimum software requirements

Vendor   Product                                    Minimum version
Sybase   Sybase Mirror Activator                    12.6
Sybase   Sybase Adaptive Server Enterprise (ASE)    12.5.0.3 EBF #1
EMC      Primary and target Symmetrix (Enginuity)   5568 (Symmetrix systems must be at the same Enginuity level)
EMC      Solutions Enabler                          5.4
EMC      PowerPath                                  4.0

9.4.1 Implementation guidelines for Mirror Activator with SRDF/S

When implementing Mirror Activator with EMC’s SRDF/S, the following general steps must be taken.

This document does not describe installation procedures for any of the products associated with Mirror Activator.

1. Choose a materialization method.

Materialization is the act of initially loading data from the primary database into the standby database. This is the first step in the process of implementing a Mirror Activator with SRDF solution.

2. Configure SRDF for the Mirror Activator.

There are two implementation methods for SRDF/S that may be configured for Mirror Activator. This section discusses the use of symcli to implement either Concurrent SRDF or SRDF Consistency Groups, which is called the Enterprise Restart consistency group solution.

3. Configure the Sybase database servers.

The primary and standby databases require unique configurations to support replication. This Solutions Guide does not detail specifics on configuring a Sybase database or the Replication Server. Refer to Sybase documentation at www.sybase.com/products.

4. Configure Mirror Activator.

Mirror Activator must be configured for the replication environment.

5. Initialize replication.


When all components have been properly configured, enabling replication requires that the SRDF devices are synchronized, materialization of the standby is complete, and the Mirror Activator has been started.

9.4.2 Choosing an implementation method

When implementing Sybase Mirror Activator with SRDF, two implementation methods must be considered. This section describes concurrent SRDF and Enterprise Restart consistency groups, and discusses the pros and cons of each where Mirror Activator is concerned.

Using a concurrent SRDF pair allows users to easily generate two copies of the same data at remote locations. When the two R2 devices are split from their source R1 device, each target site’s host can access its own data at the time of the split.

Concurrent SRDF requires two different RA (remote adapter) groups to achieve the connection between the local R1 device and its two remote R2 mirrors. The RA groups should be on two different RA adapter interfaces, but this is not required if Fibre RA adapters are being used.

The basis for implementing concurrent SRDF with the Mirror Activator is ease of use during both the materialization process and ongoing replication. The first remote mirror is used during materialization, and the second remote mirror is used for ongoing replication. The two remote mirrors are never in sync at the same time; each has a different purpose in this configuration.

Figure 9-4 on page 9-10 depicts Sybase Mirror Activator configured for EMC’s concurrent SRDF feature. Notice that the Primary database log file on the local Symmetrix is associated with two different devices on the remote Symmetrix. The use of this configuration is described in detail throughout this section.


Figure 9-4 Mirror Activator implementation using concurrent SRDF

The following describes the process in Figure 9-4 on page 9-10.

1. Define the concurrent SRDF relationship between the PDB log device R1(C) and the PDB log mirror R2(C). The PDB log mirror is the device that the Mirror Activator (MA) will read from.

2. The Mirror Replication Agent must be attached to the PDB log mirror R2(C) device. This is accomplished by configuring the Mirror Activator and Replication Server so that the transactions can be sent to the RS for distribution to the standby database.

3. Establish only the PDB log device using SYMCLI command syntax. Ensure that the SRDF link to the standby database devices is not established.

4. Once everything has been set up for replication, the MA will read the R2 device as transactions are synchronously replicated via SRDF. Here the MRA will update the Replication Server by converting the R2 log data to SQL.

5. The Replication Server updates the standby database which resides on R2 devices.

SRDF consistency groups are used to protect the consistency of one or more database management systems that span SRDF groups during a disaster. The implementation of an Enterprise Restart consistency group Solution may include devices associated with other DBMSs, file system devices, and other Sybase databases or instances.

Starting with Solutions Enabler version 5.4, consistency groups are created from a composite group. SRDF composite groups can provide control over a set of SRDF/S pairs that span multiple Symmetrix units. A composite group that has been configured to provide consistency protection across SRDF pairs subsequently becomes a consistency group that provides an Enterprise Restart solution. Restated, in version 5.4 a consistency group is a composite group with consistency checking enabled. This section refers to the group as a consistency group, regardless of how it is created.

Figure 9-5 on page 9-11 describes a Mirror Activator implementation using the Enterprise Restart consistency group solution.

Figure 9-5 Mirror Activator implementation with an Enterprise Restart consistency group

The following depicts the flow in Figure 9-5 on page 9-11.

1. Create a consistency group that is suitable to your environment. The ConGroup may consist of devices containing any type of data (for example, Oracle, SQL Server, and Sybase databases and file system devices).

2. Remote BCVs must be configured as part of the ConGroup for this configuration, as the Sybase standby database will reside on the remote BCVs.

3. The Mirror Activator must be attached to the PDB log mirror device (R2(C)). This is accomplished by configuring the Mirror Activator and Replication Server so that transactions can be sent to the RS for distribution to the standby database.

4. Establish the ConGroup devices. In this case, the R1s and R2s will be established, but the remote BCVs are not synchronized; they are in a split state.

5. Once everything has been set up for replication, the MA will read the PDB log mirror device (R2(C)) as transactions are synchronously replicated via SRDF. Here the MA will update the Replication Server by converting the R2 log data to SQL.

6. The Replication Server updates the standby database which resides on remote BCV devices.


9.4.3 Pros and cons of the two implementation methods

This section describes the advantages and disadvantages of implementing the Sybase Mirror Activator with SRDF using concurrent SRDF versus the Enterprise Restart consistency group solution.

Implementing with concurrent SRDF

Advantages:

♦ This is a remote database restart solution.

♦ This is a zero-data-loss solution.

♦ It drastically reduces RTO (recovery time objective) in the event of a disaster because the target server is up and running and does not have to be restarted.

♦ Reduced bandwidth due to the fact that the only device being replicated is the PDB log mirror.

Disadvantages:

♦ The Sybase database cannot be a member of the ConGroup. The database data devices cannot be in sync mode with MA and therefore must be removed from the ConGroup.

♦ The failover/failback (also known as go home) procedures are complex. They involve manual manipulation of the Sybase Rep Server and MRA log truncation pointers.

Implementing with the Enterprise Restart consistency group solution

Advantages:

♦ This is an enterprise disaster restart solution.

♦ This is a zero-data-loss solution.

♦ Allows Sybase to be included in the ConGroup, because the Rep Server is updating a database on the remote BCVs.

♦ Provides an independent BCV copy (the remote BCV) of the database that can be used for reporting, queries, analytics, or database consistency checking, without impacting production or the DR copy.

♦ The failover/failback (go home) procedures are simple and part of the basic SRDF feature set.

Disadvantages:

♦ This configuration requires more bandwidth and storage because all of the database data and log devices are replicated.

♦ Unless application failover is on the remote BCVs, this configuration does not reduce RTO as much as the concurrent SRDF implementation. In this configuration, the Sybase server (configured on the R2s) is not up and running.


9.4.4 Implementation using concurrent SRDF

Configuration of the EMC Symmetrix devices is a critical success factor for this solution to work. From a storage device perspective, EMC recommends using either concurrent SRDF or the Enterprise Restart consistency group solution.

Figure 9-6 on page 9-13 shows a single device (DEV002) configured with concurrent SRDF for the purposes of implementing Mirror Activator.

Figure 9-6 SRDF device configuration for DEV002

Notice that the 034-034 device pairing is synchronized, and the 034-035 device pairing is split.

It is apparent that the R1 (DEV002 is device 0034) has its mirrored pairings on the target Symmetrix as devices 0034 and 0035. Target device 0034 is used only for materialization of the standby database. Once materialization is complete, we suspend SRDF to target device 0034. From this time forward, or until we need to rematerialize the standby database again in the future, device 0034 remains in a suspended state.

After materialization, synchronize the primary device 0034 to target device 0035 for what is referred to as "ongoing replication." We do not use concurrent SRDF in its traditional sense, where there are normally two identical remote copies available at any point in time, because with the Mirror Activator we never want that situation. We only ever replicate to one or the other concurrent device; in this case, either target device 0034 or 0035, but never both. Concurrent SRDF was configured for this solution to allow easy switching between the two target database log devices.

It is important to understand the use of concurrent SRDF for this solution. Only one target device is ever synchronized with the primary. One device is used for the standby database log during materialization only! The second device is for ongoing standby database replication.

The following symcli commands were used to create the concurrent device relationship and establish the "ongoing or future" replication devices. Before continuing with the steps that follow, ensure that the Symmetrix has been set up for dynamic operations. Dynamic operations allow users to perform certain functions, such as defining concurrent device relationships.

symdev list -dynamic -sid 047

As shown in Figure 9-7 on page 9-14, the devices 0034 and 0035 are eligible for a concurrent relationship.

Symmetrix ID: 000185400047

             Device Name           Directors      Device
--------------------------- -------------- ---------------------------------
Sym  Physical                SA :P  DA :IT  Config     Attribute  Sts   (MB)
--------------------------- -------------- ---------------------------------
0034 /dev/rdsk/emcpower63c   16A:0  02A:D3  2-Way Mir  Grp'd      RW    8632
0035 /dev/rdsk/emcpower64c   16A:0  01A:C4  2-Way Mir  Grp'd      RW    8632
0060 Not Visible             ***:*  01B:D0  2-Way Mir  Grp'd      RW    8632
0061 Not Visible             ***:*  02B:C1  2-Way Mir  Grp'd      RW    8632

Figure 9-7 List the eligible dynamic devices

1. Create two files containing the device pairings. The first file (logreplicationpair) holds the materialization pairing, and the second (logmaterializationpair) holds the ongoing replication pairing; note that in this test environment the file names are the reverse of their purposes.

more /usr/Sybase/logreplicationpair
034 034

more /usr/Sybase/logmaterializationpair
034 035

2. Create the dynamic pairing for ongoing replication, but do not establish these devices:

symrdf createpair -sid 047 -type rdf1 -rdfg 2 -file /usr/Sybase/logmaterializationpair

3. Create the dynamic pairing for materialization and establish the devices:

symrdf createpair -file /usr/Sybase/logreplicationpair -sid 047 -type rdf1 -rdfg 1 -establish

Figure 9-6 on page 9-13 shows how the device pairing looks when synchronized.

9.4.5 Implementation using the Enterprise Restart consistency group solution

Implementation of Sybase Mirror Activator with the Enterprise Restart consistency group solution requires the use of remote BCVs, which reside on the target Symmetrix. The consistency group at the primary site can, and most likely will, include many devices necessary to sustain an enterprise-wide federated database environment; hence, an Enterprise Restart solution. For example, the ConGroup may include database devices belonging to Oracle and SQL Server databases as well as many file system devices. EMC SRDF consistency technology lets users maintain consistency across enterprise-wide environments where data from one heterogeneous source may rely on data from another source. This section describes how to implement Sybase Mirror Activator using the Enterprise Restart consistency group solution.

Table 9-2 on page 9-15 distinguishes the devices that were configured for the EMC ConGroup testing.

Table 9-2 The Symmetrix device configuration for ConGroup

R1 – Symm device ID

R1 – host LUN

Purpose SRDF R2

R2 BCV R2 – host LUN

003A C2t0d58s2 File system dev

003A 0071 C2t0d113

003B C2t0d59s2 File system dev

003B 0072 C2t0d114

003F C2t0d63s2 Sybase data dev

03F 0073 C2t0d115

0040 C2t0d64s2 Sybase log dev

0040 0070 C2t0d112

Device 0040 is a key component in this environment. This is the device to which the Mirror Replication Agent (MRA) is "attached," and it is the primary database log device. The MRA reads the log mirror (dev 0040) and sends the changes to the Replication Server (RS) for distribution to the standby database, which resides on BCV devices 0073 (data) and 0070 (log) in our configuration. All of the devices listed in Table 9-2 on page 9-15 are part of a consistency group; the R1s, R2s, and BCVs are synchronized during materialization. For ongoing replication, the BCVs associated with the database data and log devices are split, and they remain split until materialization must occur again at some point in the future. These devices remain split because the Replication Server, not SRDF, updates the standby database on them; if SRDF were in synchronous mode, the devices would be write disabled (WD) and the database could not be online. Allowing the Replication Server to update the database allows the Sybase ASE server to be up and running with the database already online, for faster recovery in the event of a disaster.

The master and sybprocs devices are not included in the ConGroup. Including the Sybase master device in the SRDF consistency group is not recommended; however, if it must be included, there are several things to consider:

♦ The master device on the target ASE server must be identical to the primary (this includes sysdatabases, sysdevices, sysusers, and syslogins).

♦ The target ASE server must contain the same databases as the primary.

♦ The target devices on the ASE server must have the exact absolute pathnames as the primary (for example; /dev/rdsk/c2t0d64s2) unless symbolic links are used.


If your environment meets these criteria, it is safe to include the master device in the ConGroup.

The following symcli commands were used to create a ConGroup named macg, add the R1/R2 devices, associate the remote BCVs, and synchronize the devices.

1. Create the consistency group:

symcg create macg -type rdf1 -ppath

2. Add the four devices to macg:

symcg -cg macg add pd emcpower57c
symcg -cg macg add pd emcpower58c
symcg -cg macg add pd emcpower62c
symcg -cg macg add pd emcpower63c

3. Enable the composite group for consistency:

symcg -cg macg enable

4. Each time a new device is added to the ConGroup, reissue the previous command.

5. Associate the remote BCVs:

symbcv -cg macg associateall dev -range 006D:0070 -sid 0047 -rdfg 1 -rdf

6. Perform a full establish on the R2 devices:

symrdf -cg macg est -full

It is only necessary to perform a full synchronization of the devices the first time they are established.
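Before continuing, it is worth confirming that the pairs have reached the Synchronized state; a simple query against the group (output is environment-specific) can be used:

symrdf -cg macg query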

At this point the SRDF environment should be configured and ready for replication. Appendix G describes configuration procedures for the Sybase database servers, the Mirror Activator, the Replication Server, and how to make it all work together.

9.4.6 Sybase Mirror Activator with SRDF/A

EMC SRDF/A provides an option for users who want to implement Sybase Mirror Activator without distance limitations and no host application impact. SRDF/A operations and benefits are described in detail in section 2.4.8.

Implementing Mirror Activator with SRDF/A requires a minimum Symmetrix Enginuity level of 5670. All other minimum software requirements (as specified in Table 9-1 on page 9-8) remain the same.

Configuration of the EMC Symmetrix devices is a critical success factor for this solution. There are several ways to configure this environment from a storage device perspective, but EMC recommends using the Enterprise Restart consistency group solution. SRDF/A guarantees dependent-write consistency on a delta set boundary; therefore, the devices belonging to a group assigned to run SRDF/A are in fact a consistency group. An Enterprise Restart consistency group solution ensures that all database and application data can be restarted and brought back to a single point of consistency, in the event of a failure on the primary (or production) host.


Figure 9-8 on page 9-17 depicts Sybase Mirror Activator in an SRDF/A configuration.

Figure 9-8 Mirror Activator in SRDF/A configuration

The following process takes place in Figure 9-8 on page 9-17:

1. The SRDF/A group is established and enabled for consistency to the R2s. The remote BCVs are split.

2. The Mirror Replication Agent (MRA) is attached to the R2 primary database log file (R2(C)).

3. The MRA reads the active primary log file and updates the Replication Server.

4. The Replication Server redistributes changes to the secondary database residing on remote BCVs (BCV(A) and BCV(C)).

9.4.7 Implementation guidelines for Mirror Activator with SRDF/A

When implementing Mirror Activator with EMC’s SRDF/A, the following general steps must be taken.

This document does not describe installation procedures for any of the products associated with the Mirror Activator.


1. Choose a materialization method.

Materialization is the act of initially loading data from the primary database into the standby database. For best practice purposes, both EMC and Sybase recommend using SRDF because it is the simplest and fastest tool for materialization.

2. Configure SRDF for the Mirror Activator.

This section discusses the use of symcli to create and implement Enterprise Restart consistency groups for Mirror Activator.

3. Configure the Sybase database servers.

The primary and standby databases require unique configurations to support replication. This Solutions Guide does not detail specifics on configuring a Sybase database or the Replication Server. Refer to Sybase documentation at www.sybase.com/products.

4. Configure Mirror Activator.

Mirror Activator must be configured for the replication environment.

5. Initialize replication.

When all components have been properly configured, enabling replication requires that the SRDF devices are synchronized, materialization of the standby is complete, and the Mirror Activator has been started.

9.4.8 Implementation using SRDF/A

Configuration of the EMC Symmetrix devices is a critical success factor for this solution to work. It is recommended that customers have a good understanding of their environment, and how this solution will be used to benefit the business, before considering a production implementation.

1. Configure the Enterprise Restart consistency group:

symcg create MAER -type RDF1 -rdf_consistent

2. Add the local devices to the ConGroup:

symcg -cg MAER add dev 6CA -sid 379
symcg -cg MAER add dev 6CB -sid 379

3. Add the remote devices (RBCVs) to the ConGroup using the range parameter for simplicity. Specify the RDF group (7) and Symmetrix ID (379), as in this example:

symbcv -cg MAER associateall -RANGE 77A:77B -rdfg 7 -rdf -sid 379

4. Make sure the ConGroup was created properly. Notice that DEV001 and DEV002 are the R1/R2 devices and RBCV001 and RBCV002 are the remote BCVs:

symcg show MAER


Composite Group Name                            : MAER
Composite Group Type                            : RDF1
Valid                                           : Yes
CG in PowerPath                                 : No
CG in GNS                                       : No
RDF Consistency Protection Allowed              : No
RDF Consistency Enabled                         : No
Number of RDF (RA) Groups                       : 1
Number of STD Devices                           : 2
Number of BCV's (Locally-associated)            : 0
Number of VDEV's (Locally-associated)           : 0
Number of RBCV's (Remotely-associated STD-RDF)  : 2
Number of BRBCV's (Remotely-associated BCV-RDF) : 0
Number of RRBCV's (Remotely-associated RBCV)    : 0

Number of Symmetrix Units (1):

1) Symmetrix ID      : 000187400379
   Microcode Version : 5671

   LdevName   PdevName                Sym Dev   Config     Sts   Cap (MB)
   DEV001     /dev/rdsk/emcpower137c  0684      RDF1+Mir   RW    8632
   DEV002     /dev/rdsk/emcpower138c  0685      RDF1+Mir   RW    8632

RBCV's (Remotely-associated STD-RDF) (2):

   Remote RDF (RA) Group Number : N/A
   Remote Symmetrix ID          : N/A
   Microcode Version            : N/A

   LdevName   PdevName   Sym Dev   Config   Sts   Cap (MB)
   RBCV001    N/A        0778      BCV      NR    8632
   RBCV002    N/A        0779      BCV      NR    8632

Figure 9-9 Output of symcg show MAER command


When the R1/R2 device pairings were created, the devices were automatically established. The remote BCVs were added to the ConGroup and were not established.

5. Split the local devices so that the database environment can be created:

symrdf -cg MAER split

6. Create the primary and standby databases, configure the MRA, and set permissions in the RS. All the necessary steps are outlined in Appendix G.

7. After completing the procedures from the previous step, establish both the R2s and the RBCVs. This materializes the database on the R2s and the remote (RBCV) devices:

symmir -cg MAER est -rdf
symrdf -cg MAER est

8. Display the status of the local devices to ensure that they are synchronized:

symrdf -cg MAER query

9. Display the status of the remote devices to ensure that they are synchronized, and then split the remote devices:

symmir -cg MAER query -rdf
symmir -cg MAER split -rdf

Source (R1) View                  Target (R2) View          MODES   STATES
--------------------------------  ------------------------  -----   ------------
Logical   Sym   T  R1 Inv R2 Inv  K  Sym   T  R1 Inv R2 Inv          RDF Pair
Device    Dev   E  Tracks Tracks  S  Dev   E  Tracks Tracks  MDA s   STATE
--------------------------------  ------------------------  -----   ------------
DEV001    0684  RW 0      0       RW 0684  WD 0      0       S..  .  Synchronized
DEV002    0685  RW 0      0       RW 0685  WD 0      0       S..  .  Synchronized

Figure 9-10 Status of remote devices

Thus far, we have configured this environment to have the Replication Server update the database on the remote BCVs; therefore, we do not want SRDF updating the remote devices. Split the remote devices only.

10. Set the mode to asynchronous for SRDF/A testing:
symrdf –cg MAER set mode async

11. Enable consistency protection for the MAER group:
symrdf -cg MAER enable –rdfg 7

12. Ensure that the mode and consistency protection are set properly for the ConGroup. Notice in Figure 9-11 that the A under the MDA column indicates that the mode is set to asynchronous and the STATE is Consistent:
symrdf –cg MAER query


Composite Group Name      : MAER
Composite Group Type      : RDF1
Number of Symmetrix Units : 1
Number of RDF (RA) Groups : 1
RDF Consistency Enabled   : No

Symmetrix ID              : 000187400379 (Microcode Version: 5671)
Remote Symmetrix ID       : 000187400636 (Microcode Version: 5671)
RDF (RA) Group Number     : 7 (06)

                 Source (R1) View         Target (R2) View        MODES  STATES
          -------------------------  --------------------------   -----  ------------
Logical   Sym   T  R1 Inv  R2 Inv    Sym   T  R1 Inv  R2 Inv      MDA    RDF Pair
Device    Dev   E  Tracks  Tracks    Dev   E  Tracks  Tracks             STATE
--------  ----  -- ------  ------    ----  -- ------  ------      ---    ------------
DEV001    06CA  RW      0       0    06CA  WD      0       0      A..    Consistent
DEV002    06CB  RW      0       0    06CB  WD      0       0      A..    Consistent

Figure 9-11 Output of symrdf query command

13. Execute the following steps to start ongoing replication. Ensure that the device path is set to the primary database log mirror (in this case, c4t0d17s2 or DEV 6CB).

14. Log in to the MRA and issue the following commands:
ra_init force
go
ra_helpdevice
go

15. From the Mirror Activator, release the quiesce on the primary database:
pdb_quiesce release
go

16. Bring the standby ASE database online:
startserver –f RUN_ase_154

17. From Mirror Activator, resume replication and check the replication status:
resume
go
ra_status
go

Status should be REPLICATING (WAITING AT END OF LOG).

18. Execute this step from Replication Server. Resume the logical connection to the database:
resume connection to ase_154.db1
go


9.5 EMC SRDF and Sybase Replication Server

EMC SRDF and Sybase Replication Server both offer features and functionality that can provide a complete disaster recovery solution. Mirror Activator is not part of this comparison because it is not a stand-alone product. Mirror Activator cannot operate without Sybase Replication Server.

When comparing the products, keep in mind that SRDF is block-level data replication that operates on Symmetrix hardware, while Sybase Replication Server is application software that replicates data at the database transaction level.

These two products replicate data using entirely different methods. Replication occurs with Sybase at the host level and with SRDF at the storage level. Replication Server uses the LAN/WAN, and SRDF uses dedicated Fibre Channel, FICON, ESCON, Gigabit Ethernet, or iSCSI as the primary data transfer mechanism. Both EMC and Sybase provide different features, functionality, and value to the end user.

Following is an analysis of these two products and how the strengths of each can be combined to provide the utmost in data protection for EMC and Sybase customers.

9.5.1 Synchronous operations

In campus solutions, EMC SRDF/S can be run in synchronous mode, ensuring write consistency across the source (R1) and target (R2) Symmetrix volumes. Because SRDF uses dedicated Fibre Channel, FICON, ESCON, Gigabit Ethernet, or iSCSI as the primary data transfer mechanism, it can be cost-prohibitive for long-distance replication needs. Furthermore, long-distance synchronous replication may decrease performance on the primary server.

Sybase Replication Server can simulate synchronous mode by specifically coding an application to perform two-phase commits.
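As an illustration only (not part of the original procedure), switching a composite group such as MAER between modes and then verifying the result uses the same SYMCLI commands shown earlier in this chapter:

# Set the replication mode for the composite group to synchronous
symrdf -cg MAER set mode sync

# Verify: the MDA column should show S.. and the pair state Synchronized
symrdf -cg MAER query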

9.5.2 Asynchronous operations

EMC SRDF/A provides a point-in-time image on the target (R2) device, which is only slightly behind the source (R1) device. SRDF/A session data is transferred to the remote Symmetrix array in predefined timed cycles or delta sets, which minimizes the redundancy of same track changes being transferred over the link.

Sybase Replication Server runs strictly in asynchronous mode: transactions from the primary database are applied consistently, thereby preserving transactional consistency. This model, however, always incurs some latency, and it can be sensitive to network problems, reduced bandwidth, or heavy load on the production database.

9.5.3 Automatic switching

EMC provides a Symmetrix CLI that can be incorporated into high-availability/clustering software solutions to automate a failover operation. (For more information, refer to documentation related to SRDF/CE (Cluster Enabler).) It is therefore possible to completely automate the necessary steps to quiesce a Sybase database and perform a


failover/failback operation. Once this is in place and has been tested, no additional maintenance is required.

As with any application, Sybase Replication Server requires manual intervention during initial setup. If automated switching is desired in the user environment, then Sybase OpenSwitch must be implemented to handle such a task. The second database is viewed as a separate instance (rather than a mirror), so a physical and logical connection to this instance is required. During the initialization of the standby database, a dump must be taken of the active database and then loaded to the standby. Additionally, the syslogins and sysusers system tables and the metadata must be kept consistent between the two databases. All tables must have replication definitions and subscriptions defined in order to be replicated.
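For illustration, a minimal per-table setup in Replication Server involves statements along the following lines, issued through isql against the Replication Server. The server, database, table, and column names (ase_prod, ase_stby, db1, customer, cust_id, cust_name) are hypothetical; consult the Replication Server reference for the full syntax.

-- Define what is replicated from the primary table
create replication definition customer_repdef
with primary at ase_prod.db1
with all tables named 'customer'
(cust_id int, cust_name varchar(40))
primary key (cust_id)
go

-- Subscribe the standby database to that definition
create subscription customer_sub
for customer_repdef
with replicate at ase_stby.db1
go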

9.5.4 Redundancy

EMC Symmetrix provides full redundancy through various RAID levels, hot spares disk protection, dual initiators, nondisruptive microcode upgrades, nondisruptive maintenance, redundant power supplies, and battery backup. EMC SRDF adds an extra layer of protection by mirroring the source Symmetrix system to a remote location. In addition, SRDF provides various operational modes and products for both synchronous and asynchronous replication, thus allowing users to specify how their data is replicated based on the criticality of the data, latency requirements, and performance needs.

Sybase Replication Server can be configured and implemented to be redundant if additional Replication Servers are put in place. However, the replicate standby database is not a true mirror, and various elements are not automatically replicated. These elements depend on outside (DBA) maintenance.

9.5.5 Data loss

EMC SRDF operating in synchronous mode ensures a zero-data-loss solution. The remote volumes are identical to local mirrors and are transparent to the application. SRDF Consistency Groups consist of devices specially configured to act in unison, ensure zero data loss, and maintain data integrity even when the devices are spread across multiple Symmetrix systems.

With Sybase Replication Server, it is possible to lose transactions in the event of a network failure or a corrupted stable device within Replication Server. The replication administrator must ensure that log truncation is consistent across sites. However, records may be reapplied directly from the primary transaction logs in the event of lost transactions.

9.5.6 Transactional consistency

EMC SRDF operates as a hardware mirror, copying data at the track level, creating an exact bit-for-bit data copy. Consequently, if corruption is introduced at the source (R1) side, this will be copied to the target (R2) side. While SRDF will never be the cause of the corruption, it will dutifully mirror logical corruption on the source side to the target.

Sybase Replication Server provides transactional consistency in the case of one primary server and one read-only standby database. If the database on the primary side becomes


corrupted, the warm standby will not automatically become corrupted, because data is loaded on the warm standby one transaction at a time and the primary database is a separate entity with its own transaction log. While Replication Server does provide transactional consistency, the use of distinct databases introduces a complex system with multiple components (logs, LTMs, RSSDs, and Replication Servers). These components must be closely monitored to ensure no loss of data.

9.5.7 Immediate recovery

EMC SRDF allows near-instantaneous recovery operations. In this case, data is immediately copied from the good target (R2) volumes to the R1 volumes. While this copy is in process, the R1 volumes are immediately accessible and the database instance can be started. As soon as the operation begins, every changed track on the R1 side that differs from the tracks on the R2 side is identified; the valid tracks from the R2 side are then copied to the invalid R1 tracks. During this procedure, the source volumes satisfy any read or write operations; any requested tracks that have not yet been copied are immediately copied to the R1 side and presented back to the application. This allows the copy procedure to continue in the background while the production site resumes standard processing.
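A hedged sketch of this flow in SYMCLI, reusing the MAER composite group from the earlier example (an actual recovery plan depends on the failure scenario and should be validated in testing):

# Site failure: make the target (R2) devices read/write so the
# database can be started at the remote site
symrdf -cg MAER failover

# After the source site is repaired, resynchronize changed tracks back
# to the R1 devices and return production to the source site
symrdf -cg MAER failback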

Sybase Replication Server recovery time largely depends on two things:

♦ Level of activity on the primary database at the time of the failure

♦ Number of components involved in the replication environment

If the primary database was undergoing heavy updates, the stable queue must be emptied prior to switching over. The log transfer manager and replication processing are suspended with a single switch command, which then quiesces the active database and purges all of the queues. The active database’s LTM is then marked valid, while the previous database’s LTM is set to ignore. A marker is then logged in the new database, and the log transfer manager must be started. At each step in this process, the administrator must monitor the logical status to verify that all of the connections are established and replication is resumed. Additional checks may be required to determine which transactions were or were not replicated to their replicate sites.

9.5.8 Administration/maintenance

EMC Symmetrix products are installed with dial-up capabilities to enable remote customer support 24x7x365. Additionally, each Symmetrix configuration is carefully reviewed to ensure that the customer’s needs are being met and the features of the Symmetrix system are fully utilized. Once the Symmetrix system has been configured and SRDF has been set up, no further administration/maintenance is required.

Sybase Replication Server administration increases in direct proportion to the complexity of the given environment. Since replication occurs at the transaction level on a per-table basis, ongoing maintenance is required to maintain the metadata of the primary database. The administrator must be able to troubleshoot downed servers, log transfer managers, and threads, as well as rebuild stable queues as necessary. Additionally, the administrator must be able to recover lost data, re-create subscriptions, and check for inconsistent, missing, or orphaned transactions.
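For routine monitoring of this kind, Replication Server provides admin commands issued through isql. A representative, illustrative check (the server and login names are hypothetical):

isql -U rs_admin -P rs_password -S PRS
admin who_is_down
go
admin who, sqm
go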


9.5.9 CPU intensive

EMC SRDF does not use host CPU cycles. SRDF requires dedicated connectivity, such as T1/T3 or direct Fibre Channel.

Sybase Replication Server can be CPU intensive. Additional system resources, such as server and network memory, must be considered when architecting a Replication Server environment.

9.5.10 Database restrictions

EMC SRDF operates at the hardware level and is therefore database and application independent. However, because SRDF replicates at the block level, it has no concept of transactional consistency.

Sybase Replication Server operates only at the database transaction level. Therefore, the processes associated with the replication of data at a logical level are naturally inherent in this product.

9.5.11 Feature summary

Table 9-3 summarizes the features and functionality of EMC SRDF and Sybase Replication Server. Sybase replication occurs at the host level and EMC replication at the storage level. Both provide different features, functionality, and value to the end user.

Table 9-3 EMC SRDF and Sybase Software Matrix

Feature                      EMC SRDF          Sybase            Mirror Activator   Mirror Activator
                                               Replication       w/concurrent       w/SRDF consistency
                                               Server            SRDF               groups
---------------------------  ----------------  ----------------  -----------------  ------------------
Synchronous                  Yes               No                Yes                Yes
Asynchronous                 Yes               Yes               No                 No
Redundancy                   Yes               Yes               Yes                Yes
Data loss                    No                Yes               No                 No
Transactional consistency    No                Yes               Yes                Yes
Immediate recovery           Yes (data only)   No                Yes                Yes
Administration/maintenance   Yes (initial      Yes (ongoing)     Yes (initial       Yes (initial
required                     deployment)                         deployment)        deployment)
CPU intensive                No                Yes               Yes                Yes
Database restrictions        No                Yes               Yes                Yes


Chapter 10 Symmetrix Storage Considerations for Sybase IQ-Multiplex

This chapter presents these topics:

10.1 IQ-Multiplex capability ...................................................10-2
10.2 IQ-Multiplex architecture .................................................10-3
10.3 IQ-Multiplex indexing .....................................................10-5
10.4 Backup and data recovery ..................................................10-5
10.5 Qualifying IQ-Multiplex on Symmetrix systems ..............................10-6
10.6 Integrating IQ-Multiplex with TimeFinder ..................................10-7


IQ-Multiplex is a high-performance decision support server designed specifically for data warehousing. IQ-Multiplex is part of the Adaptive Server product family that includes Adaptive Server Enterprise for enterprise transaction and mixed workload environments and Adaptive Server Anywhere, a small footprint version of Adaptive Server often used for mobile and occasionally connected computing.

IQ-Multiplex provides benefits that support the interactive approach to decision support including the following:

♦ Intelligent query processing: IQ-Multiplex uses index-only access plans to process only the data needed to satisfy any type of query

♦ Interactive, ad hoc query performance on a uniprocessor as well as on parallel systems

♦ Open architecture: supports third-party front ends

♦ Supports terabytes of data and hundreds of simultaneous users

♦ Fully flexible schema support

♦ Efficient query execution without query-specific tuning by the System Administrator (under most circumstances)

♦ Fast initial and incremental loading

♦ Fast aggregations, counts, comparisons of data

♦ Parallel processing optimized for multiuser environments

♦ Stored procedures

♦ Reduced query time for increased productivity

♦ Entire database and indexing stored in less space than raw data

♦ Reduced I/O

♦ Multiplex capability

10.1 IQ-Multiplex capability

Multiplex capability allows Adaptive Server IQ to support a multiserver configuration. Multiplex capability is designed for managing large query loads across multiple hosts. An Adaptive Server IQ-Multiplex supports many users, each executing complex queries against the shared database. With each server running Adaptive Server IQ and a shared disk subsystem connected to all servers, customers can match I/O operations to CPU capabilities and scale the user community past the limits of a single server.

An Adaptive Server IQ-Multiplex also offers higher availability: failure of a single host leaves all other hosts up and running. Customers can attach as many IQ-Multiplex servers as the EMC storage can support.


Sybase IQ-Multiplex provides features and functionality to perform full or incremental backups of an IQ database. If a database restore is required, the DBA must first restore the full backup archive and then each incremental archive in the proper order. Backup and data recovery is discussed in more detail later in this section.

10.2 IQ-Multiplex architecture

An Adaptive Server IQ database is a joint data store with three parts:

♦ IQ Main Store, for permanent IQ data

♦ IQ Temporary Store, for temporary data such as data used in sorting

♦ Catalog Store, for system information and the database schema

One way to manage a large query load would be to copy the entire IQ database to multiple machines.

Each machine would require sufficient disk space to hold the IQ Catalog, Temp, and Main Stores. The Catalog Stores are small, and the total Temp Store required can be expected to grow with the aggregate query load. The many copies of the IQ Main Store, however, would contain identical data yet require the user to purchase an additional set of disks for each host. This approach would also make it costly to propagate database updates, since the entire IQ Main Store would have to be copied from the host that processed the updates to each of the other servers.

IQ-Multiplex provides an effective alternative: the cost of duplicating the IQ Main Store is avoided by sharing it on a shared disk subsystem. Access to the shared disk uses raw I/O. Adaptive Server IQ's vertical storage, compression, and bitwise technology reduce I/O requirements dramatically, making it possible to have many systems sharing the disk array(s) before contention degrades performance. Customers gain additional CPU power and memory space for processing queries by attaching more systems to the shared disk array.

Each set of an IQ Temporary Store and Catalog Store makes up one server. The servers share a common IQ Main Store.


Figure 10-1 Sybase IQ-Multiplex architecture

10.2.1 Write and query servers

In IQ-Multiplex, one server is designated as the write server and can load or update the database while the others submit queries. The read-only servers submitting queries are called query servers or read servers. The updating server is known as the write server or writer. By allowing only a single write server, IQ-Multiplex eliminates the overhead and scalability limits of a distributed lock manager.

Adaptive Server IQ's table-level versioning has been extended to support Multiplex capability. When a transaction creating a new version of a table commits on the write server, the control information pointing to that new version is sent to the query servers. New transactions beginning on the query server automatically see these new versions of the tables, just as transactions beginning on the write server do.

10.2.2 Tools for system administration

To assist with the management of the IQ-Multiplex, Sybase provides two primary tools:

♦ Sybase Central is an application for managing Sybase databases. It helps manage database objects and perform common administrative tasks such as creating databases, backing up databases, adding users, adding tables and indexes, creating an IQ-Multiplex, and monitoring database performance. Sybase Central has a Java-based graphical user interface and can be used with any operating system that allows graphical tools.

♦ DBISQL (also called interactive SQL) is an application that allows users to enter SQL statements interactively and send them to a database.


DBISQL has a window-like user interface for all platforms. Adaptive Server IQ-Multiplex supports both a Java-based DBISQL and the C-based DBISQL.

10.3 IQ-Multiplex indexing

Indexes are used to improve data retrieval performance, so it is important to have a basic understanding of the IQ-Multiplex indexing structure and index types. IQ-Multiplex indexes represent and store the data so that it can be used for processing queries. This strategy is designed for the data warehousing environment, in which queries typically examine enormous numbers of records, often with relatively few unique values, and where aggregate results are commonly required. When data is loaded into a table, IQ stores data by column rather than by row, for each column in each table. The column orientation gives IQ indexes important performance advantages over traditional row-based indexing.

IQ-Multiplex automatically creates a default index, also known as the Fast Projection index, for each column. This index is used in a number of SQL operations, including column projection, table joins, and string searches. When a column is designated as either a Primary Key or Unique, a High_Group (HG) index is automatically created. To achieve maximum query performance, define one or more additional index types to best represent the cardinality and use of the column data, as sketched after the list below.

IQ-Multiplex index types are as follows:

♦ Low_Fast (LF) — A value-based bitmap for processing queries on low-cardinality data (recommended for up to 1,000 distinct values, but can support up to 10,000).

♦ High_Group (HG) — An enhanced b-tree index to process equality and group by operations on high-cardinality data (recommended for more than 1,000 distinct values).

♦ High_Non_Group (HNG) — A nonvalue-based bitmap index ideal for most high-cardinality DSS operations involving ranges or aggregates.
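As an illustration, additional indexes are created with SQL through DBISQL. The table and column names below are hypothetical, and the exact syntax should be verified against the Adaptive Server IQ reference for the release in use:

-- Low-cardinality column (for example, a state code): LF index
CREATE LF INDEX state_lf ON sales_fact (state);

-- High-cardinality column used in equality joins and GROUP BY: HG index
CREATE HG INDEX custid_hg ON sales_fact (cust_id);

-- High-cardinality column used in ranges and aggregates: HNG index
CREATE HNG INDEX amount_hng ON sales_fact (amount);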

10.4 Backup and data recovery

To back up an IQ-Multiplex database, use the backup command. The backup includes both the Adaptive Server IQ data (the IQ Store) and the underlying database (the Catalog Store). The backup runs concurrently with read and write operations in the database; by contrast, during a restore no other operations are allowed on that database. The user must be connected to a database in order to back it up, as the backup command does not allow specification of another database.

Users can back up an IQ-Multiplex in one of three ways (a sketch of the corresponding statements follows the list):

♦ Full backup makes a complete copy of the database.

♦ Incremental backup copies all transactions since the last backup of any type.

♦ Incremental-since-full backup copies all transactions since the last full backup.
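A hedged sketch of the corresponding DBISQL statements follows. The archive paths are hypothetical, and the backup syntax should be verified against the Adaptive Server IQ reference for the release in use:

-- Full backup of the IQ Store and Catalog Store
BACKUP DATABASE FULL TO '/backups/iq_full';

-- All changes since the last backup of any type
BACKUP DATABASE INCREMENTAL TO '/backups/iq_incr';

-- All changes since the last full backup
BACKUP DATABASE INCREMENTAL SINCE FULL TO '/backups/iq_isf';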


All three types of backups fully back up the Catalog Store. In most cases, the Catalog Store is much smaller than the IQ store. The Temporary Store data is not backed up. However, the metadata and any other information needed to re-create the Temporary Store structure is backed up.

Backup backs up committed data only. Backups begin with an automatic checkpoint and will back up the current snapshot version of a database as of the time of this checkpoint. A second automatic checkpoint occurs at the end of the backup.

The transaction log file contains information that allows Adaptive Server IQ-Multiplex to recover from a system failure. IQ-Multiplex does not use the transaction log to restore an IQ-Multiplex database, to recover committed transactions, or to restore the Catalog Store. All databases require a transaction log, but the Adaptive Server IQ-Multiplex transaction log file is different from most relational database transaction log files. If for some reason the database files are lost or become corrupted, the database can only be restored from an appropriate backup.

Although backup does check that all necessary files are present before backing up the database, it does not check for internal consistency. For a consistency check, run the stored procedure sp_iqcheckdb before creating a backup.

An IQ-Multiplex database may be backed up to disk or tape; however, disk backups must be written to a file system, as raw disk is not supported as a backup medium.

Once a full backup has been performed, the database can be restored when necessary. To restore an IQ-Multiplex database from a backup, use the restore command. Restore requires exclusive access to the database, and an entire backup or set of backups must be restored; restoring individual files is not supported.

Full and incremental restores are supported. For an incremental restore, the files must match the number and size of the files that they replace, for both the IQ Store and the Catalog Store.

While a restore operation is in progress, any user changes to the database will result in termination of the restore process. The DBA or system operator must ensure that no changes are made to the database until the full and/or incremental restores are complete.
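Illustrative restore statements under the same assumptions (the paths and database file name are hypothetical, and the database must not be in use):

-- Restore the full backup first...
RESTORE DATABASE 'iqdemo.db' FROM '/backups/iq_full';

-- ...then each incremental archive, in order
RESTORE DATABASE 'iqdemo.db' FROM '/backups/iq_incr';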

10.5 Qualifying IQ-Multiplex on Symmetrix systems

EMC and Sybase performed a baseline set of tests on Symmetrix systems for qualification, configuration, performance, and scalability of the IQ-Multiplex. The disk requirements were determined based on the need for an IQ Main Data Store, a Temporary Data Store for each IQ server, and a Catalog Store.

SYMCLI provided the host with a set of commands to obtain device configuration information, perform configuration control, and retrieve status and performance data on the attached Symmetrix units.

Sybase Central was used to create an IQ-Multiplex of four Solaris hosts. The IQ-Multiplex shared the top-level directory (/usr/sybase/test), which contained the IQ


Catalog and resided on one logical device on a Symmetrix system so it could be paired with a BCV. The write server mounted the /usr/sybase/test directory and shared it via NFS with the other three Solaris hosts. The Main IQ Store was made up of eight raw devices; each device had an assigned BCV. The Temporary IQ Store for each server included one file system device plus four raw devices, each associated with a BCV. The file system files resided on the shared top-level directory.

Therefore, all database files had associated BCV volumes that were available for recovery and other purposes.

10.5.1 Loading the database

Since all hosts had different configurations, the IQ servers were tailored to each respective host. In particular, buffer cache sizes were set to use most of the main memory to avoid paging.

A test database was created, and flat data files were generated on the write server host in order to populate the databases.

After creating and loading the tables on the write server, the multiplex was synchronized to bring the query hosts up to date.

The BCVs were established (to synchronize), and then split to maintain a refreshable environment.

10.6 Integrating IQ-Multiplex with TimeFinder

EMC TimeFinder features and functionality can be used in an IQ-Multiplex environment. TimeFinder is a business continuance solution that enables backup, restore, upgrades, application testing, and many other uses in this environment.

The difference between a traditional DBMS and IQ-Multiplex is that the databases that make up the IQ-multiplex (the write server and the query servers) share common disk. For the integration of these two products, the host bus adapter ports were configured so that the paths from each host have access to the same logical devices on the Symmetrix system. Shared raw devices are then allocated for use as IQ Main Store on all hosts.

IQ-Multiplex does not have a built-in quiesce feature, but halting the device I/O can be accomplished by stopping (shutting down) the write server, leaving the read servers unaffected.

EMC TimeFinder can be used to recover an IQ-Multiplex write server. This involves splitting off the BCVs and creating an additional write server from the BCV host. This could be useful if a customer loses a write server and needs to quickly recover it, or wants to move the write server to a different host in the IQ-Multiplex environment.

Creating a quiescent state for an application means that disk I/O to one device or a set of devices has been halted, thus allowing a BCV mirror split without impacting production. A quiesced state for a disk device creates a dependent-write consistent view of the primary environment, thus enabling application restart from the BCV volume on another host.
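A minimal sketch of this sequence, assuming a TimeFinder device group named iq_dg that contains the IQ database devices (the group name and the write server shutdown method are site-specific):

# 1. Shut down the IQ write server (site-specific), halting I/O to the
#    shared devices while the query servers continue running
# 2. Split the BCVs to preserve a dependent-write consistent image
symmir -g iq_dg split
# 3. Restart the write server to resume production
# 4. If recovery or relocation is needed, present the split BCVs to the
#    BCV host and start a replacement write server from that copy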


The write server should be shut down only during quiet operational times to reduce impact to the production environment.

10.6.1 Combining the technologies

Sybase IQ-Multiplex and EMC Symmetrix combine to provide an excellent, scalable data warehouse solution. The joint testing between EMC and Sybase proved that the Symmetrix system and IQ-Multiplex provide a highly scalable and reliable data warehouse solution. Furthermore, the flexibility of the EMC disk and connectivity architecture allows a customer to grow the capability of the SAN as needed, for example by adding additional switches and/or Symmetrix arrays to overcome I/O limitations as they arise. The result is a building-block approach that can yield well-tuned, high-performance data warehouses that scale to very large sizes and user counts. At the same time, all of the standard EMC high-availability data assurance technology can be applied to the underlying database storage to protect the customer's investment in accumulating the data.


Appendix A Related Documents

This appendix presents the following topic:

A.1 Related documents......................................................................................................... A-2


A.1 Related documents

The following is a list of related documents that may assist readers with more detailed information on topics described in this Solutions Guide. Many of these documents may be found on the EMC Powerlink site (http://Powerlink.EMC.com). For Sybase information, refer to the Sybase websites, including the main site (http://www.sybase.com).

SYMCLI

Solutions Enabler Release Notes (by release)

Solutions Enabler Support Matrix (by release)

Solutions Enabler Symmetrix Device Masking CLI Product Guide (by release)

Solutions Enabler Symmetrix Base Management CLI Product Guide (by release)

Solutions Enabler Symmetrix CLI Command Reference (by release)

Solutions Enabler Symmetrix Configuration Change CLI Product Guide (by release)

Solutions Enabler Symmetrix SRM CLI Product Guide (by release)

Solutions Enabler Symmetrix Double Checksum CLI Product Guide (by release)

Solutions Enabler Installation Guide (by release)

Solutions Enabler Symmetrix CLI Quick Reference (by release)

TimeFinder

Solutions Enabler Symmetrix TimeFinder Family CLI Product Guide (by release)

SRDF

Solutions Enabler Symmetrix SRDF Family CLI Product Guide (by release)

Symmetrix Remote Data Facility (SRDF) Product Guide

Symmetrix Automated Replication UNIX and Windows

Replication Manager

Replication Manager Product Guide

Administration Guide: Planning

Administration Guide: Implementation

Administration Guide: Performance


Data Movement Utilities Guide and Reference

Data Recovery and High Availability Guide and Reference

Replication Guide and Reference

System Monitor Guide and Reference


Appendix B Sample SYMCLI Group Creation Commands

This appendix presents the following topic:

B.1 Sample SYMCLI group creation commands .................................................................B-5


B.1 Sample SYMCLI group creation commands

The following shows how Symmetrix device groups and composite groups are created for the TimeFinder family of products including TimeFinder/Mirror, TimeFinder/Clone, and TimeFinder/Snap.

The following example shows how to build and populate a device group and a composite group for TimeFinder/Mirror usage:

Device Group:

1. To create the device group, execute the command:
symdg create device_group –type regular

2. Add the standard devices to the group. The database containers reside on five Symmetrix devices. The device numbers for these are 0CF, 0F9, 0FA, 0FB, and 101:
symld –g device_group add dev 0CF
symld –g device_group add dev 0F9
symld –g device_group add dev 0FA
symld –g device_group add dev 0FB
symld –g device_group add dev 101

3. Associate the BCV devices with the group. The number of BCV devices should be the same as the number of standard devices, and they should be the same size. The device serial numbers of the BCVs used in the example are 00C, 00D, 063, 064, and 065:
symbcv –g device_group associate dev 00C
symbcv –g device_group associate dev 00D
symbcv –g device_group associate dev 063
symbcv –g device_group associate dev 064
symbcv –g device_group associate dev 065

Composite group:

1. Create the composite group:
symcg create composite_group –type regular

2. Add the standard devices to the composite group. The database containers reside on five Symmetrix devices on two different Symmetrix arrays. The device numbers are 0CF and 0F9 on the Symmetrix whose ID ends in 123, and 0FA, 0FB, and 101 on the Symmetrix whose ID ends in 456:
symcg –cg composite_group add dev 0CF –sid 123
symcg –cg composite_group add dev 0F9 –sid 123
symcg –cg composite_group add dev 0FA –sid 456
symcg –cg composite_group add dev 0FB –sid 456
symcg –cg composite_group add dev 101 –sid 456

3. Associate the BCV devices to the composite group. The number of BCV devices should be the same as the number of standard devices. They should also be the same size. The device serial numbers of the BCVs used in the example are 00C, 00D, 063, 064 and 065.


symbcv –cg composite_group associate dev 00C –sid 123
symbcv –cg composite_group associate dev 00D –sid 123
symbcv –cg composite_group associate dev 063 –sid 456
symbcv –cg composite_group associate dev 064 –sid 456
symbcv –cg composite_group associate dev 065 –sid 456

The following example shows how to build and populate a device group and a composite group for TimeFinder/Clone usage using BCV volumes for the targets of the clones:

Device group:

1. Create the device group device_group:
symdg create device_group –type regular

2. Add the standard devices to the group. The database containers reside on five Symmetrix devices. The device numbers for these are 0CF, 0F9, 0FA, 0FB, and 101:
symld –g device_group add dev 0CF
symld –g device_group add dev 0F9
symld –g device_group add dev 0FA
symld –g device_group add dev 0FB
symld –g device_group add dev 101

3. Add the target clone devices to the group. The targets for the clones can be standard devices or BCV devices; in this example, BCV devices are used. The number of BCV devices should be the same as the number of standard devices, and they should be the same size as, or larger than, the paired standard device. The device serial numbers of the BCVs used in the example are 00C, 00D, 063, 064, and 065:
symbcv –g device_group associate dev 00C
symbcv –g device_group associate dev 00D
symbcv –g device_group associate dev 063
symbcv –g device_group associate dev 064
symbcv –g device_group associate dev 065

Composite group:

1. Create the composite group composite_group:
symcg create composite_group –type regular

2. Add the standard devices to the group. The database containers reside on five Symmetrix devices on two different Symmetrix arrays. The device numbers are 0CF and 0F9 on the Symmetrix whose ID ends in 123, and 0FA, 0FB, and 101 on the Symmetrix whose ID ends in 456:
symcg –cg composite_group add dev 0CF –sid 123
symcg –cg composite_group add dev 0F9 –sid 123
symcg –cg composite_group add dev 0FA –sid 456
symcg –cg composite_group add dev 0FB –sid 456
symcg –cg composite_group add dev 101 –sid 456


3. Add the targets for the clones to the composite group. In this example, BCV devices are added to the composite group to simplify the later symclone commands. The number of BCV devices should be the same as the number of standard devices, and they should be the same size. The device serial numbers of the BCVs used in the example are 00C, 00D, 063, 064, and 065:
symbcv –cg composite_group associate dev 00C –sid 123
symbcv –cg composite_group associate dev 00D –sid 123
symbcv –cg composite_group associate dev 063 –sid 456
symbcv –cg composite_group associate dev 064 –sid 456
symbcv –cg composite_group associate dev 065 –sid 456

The following example shows how to build and populate a device group and a composite group for TimeFinder/Snap usage.

Device group:

1. Create the device group device_group:
symdg create device_group –type regular

2. Add the standard devices to the group. The database containers reside on five Symmetrix devices. The device numbers for these are 0CF, 0F9, 0FA, 0FB, and 101:
symld –g device_group add dev 0CF
symld –g device_group add dev 0F9
symld –g device_group add dev 0FA
symld –g device_group add dev 0FB
symld –g device_group add dev 101

3. Add the virtual devices (VDEVs) to the group. The number of VDEVs should be the same as the number of standard devices, and they should be the same size. The device serial numbers of the VDEVs used in the example are 291, 292, 394, 395, and 396:
symld –g device_group add dev 291 –vdev
symld –g device_group add dev 292 –vdev
symld –g device_group add dev 394 –vdev
symld –g device_group add dev 395 –vdev
symld –g device_group add dev 396 –vdev

Composite group:

1. Create the composite group composite_group:
symcg create composite_group –type regular

2. Add the standard devices to the composite group. The database containers reside on five Symmetrix devices on two different Symmetrix arrays. The device numbers are 0CF and 0F9 on the Symmetrix whose ID ends in 123, and 0FA, 0FB, and 101 on the Symmetrix whose ID ends in 456:
symcg –cg composite_group add dev 0CF –sid 123
symcg –cg composite_group add dev 0F9 –sid 123
symcg –cg composite_group add dev 0FA –sid 456
symcg –cg composite_group add dev 0FB –sid 456
symcg –cg composite_group add dev 101 –sid 456


3. Add the virtual devices (VDEVs) to the composite group. The number of VDEVs should be the same as the number of standard devices, and they should be the same size. The device serial numbers of the VDEVs used in the example are 291, 292, 394, 395, and 396:
symcg –cg composite_group add dev 291 –sid 123 –vdev
symcg –cg composite_group add dev 292 –sid 123 –vdev
symcg –cg composite_group add dev 394 –sid 456 –vdev
symcg –cg composite_group add dev 395 –sid 456 –vdev
symcg –cg composite_group add dev 396 –sid 456 –vdev


Appendix C Using Sybase Standby Access Method with TimeFinder

This appendix presents the following topic:

C.1 Required steps ............................................................................................................. C-10


C.1 Required steps

This appendix details the steps required to use the Sybase standby access feature with TimeFinder.

These are the steps as discussed in Chapter 4, “Backup Considerations for Sybase Environments.”

1. Dump the database (on the primary host) to create a complete database backup:
dump database Dbname to DbDevice with log_option

Where:

Dbname is the name of the database to be dumped and DbDevice is the name of the Sybase dump device. For example, dump database prod1 to dumprod1.

log_option is the syslogs truncation option for the database log file.

2. Issue the Sybase quiesce command to halt I/O from the production server:
quiesce database tagname hold dbname

Where:

tagname designates a list of databases and dbname is the name of the database to be quiesced. For example, quiesce database db1 hold prod1,master.

3. Perform a TimeFinder split:
symmir -g groupname -split

Where:

groupname is the name of the device group.

4. Issue the Sybase quiesce release command to resume I/O on the production server:
quiesce database tagname release

Where:

tagname designates a list of databases. It is not necessary to specify dbname here; it is identified by tagname. For example, quiesce database db1 release.

5. From the BCV host, import and start all disk groups (using VERITAS CLI commands). Importing a disk group enables local access to the disks on another system:
vxdg -fC import Dgname
vxvol -g Dgname startall

Where:

Dgname is the name of the VERITAS Volume Manager device group.

6. Restart the secondary Sybase server (where the BCVs reside):
startserver -f RUN_server_file


Where:

RUN_server_file is the name of the Sybase startup file. This file should reside in $SYBASE/install.

7. Load the secondary database. Perform the initial load command to create and populate the database with data:
load database Dbname from DbDevice

Where:

Dbname is the name of the database to be loaded and DbDevice is the name of the Sybase device that was used during the dump process. For example, load database prod1 from dumprod1.

8. Create a transaction log dump of the production database. The with standby_access option must be used if the intention is to continually apply transaction logs to the secondary (BCV) database:
dump transaction Dbname to DbDevice with standby_access

Where:

Dbname is the name of the database to be dumped and DbDevice is the name of the Sybase device containing the transaction log. For example, dump transaction prod1 to dumpfil01 with standby_access.

9. Log in and shut down the target server before the DUMPFILE disk group is deported. This is necessary because there are devices initialized in the Sybase sysdevices table, and VERITAS will not allow deportation of any disk group with a device in use:
isql -U <username> -P <password>
shutdown

10. Deport devices from the secondary (BCV) host using the VERITAS CLI. Deport the disk group that contains the dump device. Deporting a disk group does not actually remove the disk group; it disables use of the disk group by the system:
vxdg deport Dgname

For example, vxdg deport DUMPFILE. DUMPFILE contains devices dumpfil01, dumpfil02, dumpfil03, and dumpfil04.

11. Incrementally establish the devices that have been designated as the dump devices (in this case, dumpfil01, dumpfil02, dumpfil03, and dumpfil04):
symmir -g groupname establish DEVname

Where:

groupname is the name of the device group and DEVname is the name of the device pair. Do not specify the -full option, as this is an incremental establish. For example, symmir -g SYBDG establish dumpfil01 dumpfil02...


12. Split the BCVs that have been designated as the transaction dump devices. Because they are dump devices, a quiesce is not necessary:
symmir -g groupname split DEVname

For example, symmir -g SYBDG split dumpfil01 dumpfil02 dumpfil03…

13. Import and start the device group that contains the “dump” devices:
vxdg -fC import Dgname
vxvol -g Dgname startall

14. Restart the secondary (BCV) server:
startserver -f RUN_server_file

Where:

RUN_server_file is the name of the Sybase startup file. This file should reside in $SYBASE/install.

15. Apply the transaction log to the secondary (BCV) database:
load transaction Dbname from DbDevice

Where:

Dbname is the name of the database to be loaded and DbDevice is the name of the Sybase device that was used during the dump process. For example, load transaction prod1 from dumpfil01.

16. Put the database online. After the load process, the database will be in an offline state. Issuing the online database command puts the database in a state that allows read-only access:
online database Dbname for standby_access

The database state will be offline, online for standby_access. In this state, the database is read-only.

17. Continue applying transaction log dumps to keep the database current.

Optionally, if the database is to be used in a failover scenario (in place of the production database), use the following steps to restore the database from the BCV copy:

1. Apply the latest transaction log to the secondary (BCV) database:
load transaction Dbname from DbDevice

Where:

Dbname is the name of the database to be loaded and DbDevice is the name of the Sybase device that was used during the dump process. For example, load transaction prod1 from dumpfil04.

2. Log in to both the production server and the secondary (BCV) server and shut them down:
isql -U username -P password
shutdown


3. From both hosts, production and secondary, deport the disk groups (using VERITAS CLI commands):
vxdg deport Dgname

For example, vxdg deport SYBDG.

4. Incrementally restore the devices in the disk group. This operation copies the updated contents of the BCV devices to the standard devices:
symmir -g groupname restore

For example, symmir -g SYBDG restore. Do not specify the -full option, as this is an incremental restore.

5. Verify that the devices are restored:
symmir –g groupname query

6. Import and start the device groups on the standard host:
vxdg -fC import Dgname
vxvol -g Dgname startall

7. Restart the Sybase servers on the standard and BCV hosts:
startserver -f RUN_server_file

8. On the standard host, log in to Sybase and issue the online database command without the standby_access option. The database will be read/write enabled and in a normal state:
online database Dbname


Appendix D Using Sybase quiesce for external dump with TimeFinder

This appendix presents the following topic:

D.1 Required steps ............................................................................................................... D-2


D.1 Required steps

This appendix details the steps required for using Sybase quiesce for external dump with EMC TimeFinder.

These are the steps as discussed in Chapter 4, “Backup Considerations for Sybase Environments.”

1. Fully synchronize the standard and BCV device pairs if necessary; otherwise, perform an incremental establish:

symmir -g groupname establish -full

Where:

groupname is the name of the device group. Specify the -full option for an establish that will synchronize every track in the device tables. Do not specify -full if you want to perform an incremental establish.

2. Issue the Sybase quiesce for external dump command to halt I/O from the production server:

quiesce database tagname hold dbname for external dump

Where:

tagname designates a list of databases and dbname is the name of the database to be quiesced. For example, quiesce database db1 hold prod1,master for external dump.

Include all databases in the quiesce database hold list if they reside on the same physical device. Quiesce database hold only allows up to eight databases (including Master) to be included during a single operation. If there are more than eight databases to be quiesced, use multiple instances of quiesce database hold.

3. Split the BCVs on the databases that are quiesced:

symmir -g groupname split

For example, symmir -g SYBDG split.

4. Issue the Sybase quiesce release command to resume I/O on the production server:

quiesce database tagname release

Where:

tagname designates a list of databases. It is not necessary to specify dbname here; it is identified by the tagname. For example, quiesce database db1 release.

5. Restart the Sybase servers on the BCV host with the –q option:

startserver -f RUN_server_file


Where:

RUN_server_file is the name of the Sybase startup file. The –q option will be specified in the startup file itself.

6. Create a transaction dump from the production database by entering the following command on the production server:

dump transaction Dbname to DbDevice with standby_access

Where:

Dbname is the name of the database to be dumped and DbDevice is the name of the Sybase device containing the transaction log. For example, dump transaction prod1 to dumpfil01 with standby_access.

Use the with standby_access option if the intention is to continually apply transaction logs to the BCV database for warm standby.

7. Split the BCVs containing the transaction log dumps:

symmir -g groupname split DEVname

For example, symmir -g SYBDG split (in this case, dbdumpfil01 identifies the BCVs containing the transaction log dumps).

8. Apply the transaction log from the production database to the BCV database by entering the following command on the secondary server:

load transaction Dbname from DbDevice

Where:

Dbname is the name of the database to be loaded and DbDevice is the name of the dump device file that was created in the previous step.

At least one transaction log must be loaded into the BCV database after splitting a BCV so that a subsequent roll forward of transaction logs can be applied directly and the secondary server can be kept up to date at all times.

9. Put the database online to allow read-only access by entering the following command on the secondary server:

online database Dbname for standby_access

The database state will be offline, online for standby_access. It will allow the continuous application of transaction log dumps to keep the database current.


Appendix E Using TimeFinder Consistent Split for Sybase

This appendix presents the following topic:

E.1 Examples and output ..................................................................................................... E-2


E.1 Examples and output

This appendix outlines the steps required to create a point-in-time restartable copy of the database using TimeFinder consistent split.

To invoke consistent split using the -ppath option:

symmir -ppath stddevs -sid 1122 split

Where:

-ppath stddevs specifies that device I/O will be held for all PowerPath standard devices in the device group defined by the environment variable SYMCLI_DG.

-sid defines the Symmetrix identification number.

Beginning with Enginuity version 5568, any split operation is an instant split.

This is probably the most generic way to perform a consistent split. The devices belonging to the device group specified by the environment variable SYMCLI_DG are the ones that are frozen as well as split.
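A minimal sketch, assuming the device group is named SYBDG; exporting SYMCLI_DG lets -ppath stddevs freeze the group's PowerPath devices:

SYMCLI_DG=SYBDG
export SYMCLI_DG
symmir -ppath stddevs -sid 1122 split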

Figure E-1 Listing of sybhome and c20dg device groups

To invoke consistent split using the -f option:

symmir -f splitfile -ppath stddevs -sid 1122 split

Where:

-f specifies a device file, which is a standard text file containing device pairs (Symdevnames), one pair per line.

A sample of the file “splitfile” follows:

00A1 0103

00A1 0104

00B1 0105


The command shown in Figure E-2 performs a consistent split on all devices listed in splitfile. The -ppath option, specifying stddevs, tells PowerPath to freeze and thaw all 17 devices listed in splitfile.

Figure E-2 Consistent split of all devices in splitfile

To invoke consistent split using the -rdb command:

symmir -g c20dg split -rdb -dbtype Sybase -db c20

Where:

-g c20dg specifies the device group containing the devices that will be split.

-rdb specifies that all devices associated with the specified database will be frozen just before the instant split is performed, and thawed as soon as the foreground split completes.

-dbtype Sybase specifies the database type.

-db specifies the database name (c20 in this case).

The following command output shows that the 15 devices defined to the c20 database are frozen before, and thawed after, the consistent split.


Figure E-3 Consistent split with –rdb option

To invoke consistent split using ECA and the -consistent option:

symmir split -g c20dg -consistent

Where:

-g c20dg specifies the device group containing the devices that will be split.

-consistent specifies that all devices will be frozen just before the instant split is performed, and thawed as soon as the foreground split completes.

The -consistent option defers write I/Os within Enginuity. For each logical device, subsequent reads are also held once the first write has been received.
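A short sketch combining the ECA split with a follow-up state check; the device group name c20dg is taken from the example above, and symmir query is used only to confirm that the pairs report a Split state:

symmir -g c20dg split -consistent
# Confirm that the BCV pairs now report a Split state
symmir -g c20dg query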


Appendix F Recovering an IQ-Multiplex Write Server with TimeFinder

This appendix presents the following topic:

F.1 Recovery


F.1 Recovery

Table F-1 lists the hostname and the role of each server in the IQ-Multiplex environment, before and after the TimeFinder testing.

Table F-1 Role of IQ-Multiplex servers for TimeFinder testing

Hostname   Pretest role (in IQ-Multiplex)   Posttest role
--------   ------------------------------   -------------------------
Losaw200   Query Server                     Same role
Losaw201   Query Server                     Converted to Write Server
Losaw202   Query Server                     Same role
Losaw203   Write Server                     Converted to Query Server

This test consisted of the following procedures:

Steps 1 through 13 are performed on node Losaw201 to reconfigure it as a new write server for the multiplex.

1. From Sybase Central, stop the Query server.

2. As root, unmount the device for the Catalog Store. In this test, the device c1t1d32s2 was mounted to /usr/sybase/test.

umount /usr/sybase/test

3. As root, mount the associated BCV device for the Catalog store. The device c1t1d80s2 is the associated BCV device for c1t1d32s2.

4. Twelve symbolic links on the four servers (8 main devices; 4 temp devices) specify the Main and Temporary data Stores. In the subdirectory of /usr/sybase/test, these links must be redefined to point to their associated BCV devices. The existing links point to the primary devices. For example:

ln -s /dev/rdsk/c1t1d8s2 iqmain1
ln -s /dev/rdsk/c1t1d9s2 iqmain2
ln -s /dev/rdsk/c1t1d56s2 iqtemp1
ln -s /dev/rdsk/c1t1d57s2 iqtemp2

5. Redefine the links to point to the associated BCV devices. For example:

ln -s /dev/rdsk/c1t1d16s2 iqmain1
ln -s /dev/rdsk/c1t1d17s2 iqmain2
ln -s /dev/rdsk/c1t1d64s2 iqtemp1
ln -s /dev/rdsk/c1t1d65s2 iqtemp2

6. For convenience, save the current links to the primary devices:

mkdir my_primary
mv iqmain* my_primary
mv iqtemp* my_primary


7. Create 12 new symbolic links for the BCVs (8 main devices; 4 temp devices). For example:

ln -s /dev/rdsk/c1t1d16s2 iqmain1
ln -s /dev/rdsk/c1t1d64s2 iqtemp1
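Creating all 12 links by hand is error-prone; the following Bourne shell sketch loops over the device numbers, extrapolated from the examples above (the actual BCV device names are site-specific and illustrative here):

# 8 main-store links: iqmain1..iqmain8 -> c1t1d16s2..c1t1d23s2
i=1
for d in 16 17 18 19 20 21 22 23
do
    ln -s /dev/rdsk/c1t1d${d}s2 iqmain$i
    i=`expr $i + 1`
done
# 4 temp-store links: iqtemp1..iqtemp4 -> c1t1d64s2..c1t1d67s2
i=1
for d in 64 65 66 67
do
    ln -s /dev/rdsk/c1t1d${d}s2 iqtemp$i
    i=`expr $i + 1`
done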

8. From Sybase Central, start IQ-Multiplex on Losaw201 in Simplex mode, with the override switch.

Use the same port number and server name that the new writer was using when it was a query server.

9. Follow Sybase procedures to reconfigure the multiplex on the BCVs with Losaw201 as the write server.

To accomplish what was needed for this test, Sybase provided procedural scripts to manipulate the catalog tables. Refer to Sybase documentation for the most current procedures.

10. Disconnect Sybase Central.

11. Stop the new write server using dbstop. For example:

dbstop -c "uid=userid;pwd=password;eng=write_servername;links=tcpip{port=4780;host=hostname}"

Sybase Central will attempt to stop all servers at this point with its Stop Multiplex command, but there is no harm in letting the other servers continue to run on the standard volumes while the BCVs remain split. They will be stopped a few steps later.

12. Remove the links to the BCVs and restore the links to the primary multiplex volumes:

rm iqmain* iqtemp*
mv my_primary/* .
rmdir my_primary

Now iqmain* and iqtemp* should point to the main data devices (c1t1d8 to c1t1d15) and the temp data devices (c1t1d56 to c1t1d63), so that the correct standard volumes will be visible following the BCV restore.

13. As root, check that the main data device (/usr/sybase/test) is mounted from the BCV device, and then unmount it:

df
umount /usr/sybase/test

At this point, the BCVs contain a multiplex defined with Losaw201 as the write server, and Losaw203 removed, while the standard volumes still contain the original configuration. To revert to the old configuration, simply resynchronize the BCVs and restart the query server on Losaw201 from Sybase Central.

Switching to the new configuration involves restoring the standard volumes from the BCVs. The demonstration proceeds with the second option here.


14. From Sybase Central, connect to and stop the old multiplex. Depending on whether the original write server (Losaw203) had crashed, this stops the servers that are still running.

15. As root, unmount /usr/sybase/test from the standard device (c1t1d32s2) on all hosts except the new write server (Losaw201).

Before this step, /usr/sybase/test would have already been unmounted on Losaw201.

16. As root, log in to the server where the SYMCLI device groups have been defined. Perform an incremental restore, and then split the data from the BCV devices to the standard devices:

symmir -g device_group_name restore
symmir -g device_group_name split
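A sketch of the same step with an explicit wait for the restore to complete before splitting; it assumes the group is named iqdg and that this SYMCLI release supports the -noprompt flag and symmir verify with the -restored check:

symmir -g iqdg restore -noprompt
# Poll until all pairs report Restored, then split
until symmir -g iqdg verify -restored
do
    sleep 30
done
symmir -g iqdg split -noprompt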

17. Now that the BCV has been restored, move into a production "state" using the original standard devices and the new write server (Losaw201). As root, remount the original standard device for the shared data and catalog store on Losaw201.

mount /dev/dsk/c1t1d32s2 /usr/sybase/test

18. As root, configure the /usr/sybase/test device so that the other three hosts can access the shared data store.

share -F nfs /usr/sybase/test

19. On each of the other hosts, reestablish the NFS mount.

mount Losaw201:/usr/sybase/test /usr/sybase/test

20. Verify that the Sybase IQ agents are running on each host. If they are not, start them.

/etc/rc3.d/S99SybaseASIQAgent

21. From Sybase Central, start the multiplex in simplex mode on the new write server host (Losaw201). The multiplex must be running for Sybase Central to synchronize it.

22. Synchronize the multiplex. This starts all query servers in the new configuration for the multiplex with Losaw201 as the writer. The old write server on Losaw203 will no longer be part of the multiplex.

Optionally, create another query server on what was the original write server. The results are: Losaw201 is a write server and Losaw200, Losaw202, and Losaw203 are query servers. In practice, an option would be to add a query server to the multiplex on a new host.

1. From Sybase Central, create a new query server on Losaw203.

2. When the new query server is created, define new symbolic links for the main and temporary devices:

ln -s /dev/rdsk/c#t#d#s# iqmain#
ln -s /dev/rdsk/c#t#d#s# iqtemp#


3. Synchronize the multiplex to initialize and start the new query server.

4. Synchronize and then split the BCVs.


Appendix G Configuring Sybase ASE for Mirror Activator

This appendix presents these topics:

G.1 Configuring the primary ASE
G.2 Initialization output


G.1 Configuring the primary ASE

G.1.1 Build the primary ASE

Following are the contents of the resource file used to build the primary ASE. The file was used with the ASE srvbuildres utility (located in $SYBASE/ASE-12_5/bin) and is specific to the testing environment.

sybinit.release_directory: USE_DEFAULT
sybinit.product: sqlsrv
sqlsrv.server_name: as_emc50
sqlsrv.new_config: yes
sqlsrv.do_add_server: no
sqlsrv.network_protocol_list: tcp
sqlsrv.network_hostname_list: 172.23.191.50
sqlsrv.network_port_list: 4100
sqlsrv.server_page_size: USE_DEFAULT
sqlsrv.force_buildmaster: yes
sqlsrv.master_device_physical_name: /dev/rdsk/c2t0d49s4
sqlsrv.master_device_size: USE_DEFAULT
sqlsrv.master_database_size: USE_DEFAULT
sqlsrv.errorlog: USE_DEFAULT
sqlsrv.do_upgrade: no
sqlsrv.sybsystemprocs_device_physical_name: /dev/rdsk/c2t0d50s4
sqlsrv.sybsystemprocs_device_size: USE_DEFAULT
sqlsrv.sybsystemprocs_database_size: USE_DEFAULT
sqlsrv.sybsystemdb_device_size: USE_DEFAULT
sqlsrv.sybsystemdb_database_size: USE_DEFAULT
sqlsrv.default_backup_server: as_emc50_backup
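The resource file is passed to srvbuildres with the -r option. A sketch, assuming the file above was saved as primary_ase.rs (the file name is illustrative):

$SYBASE/ASE-12_5/bin/srvbuildres -r primary_ase.rs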

G.1.2 Build the standby ASE

Following are the contents of the resource file used to build the standby ASE on the standby host. The file was used with the ASE srvbuildres utility (located in $SYBASE/ASE-12_5/bin) and is specific to the testing environment.

sybinit.release_directory: USE_DEFAULT
sybinit.product: sqlsrv
sqlsrv.server_name: as_emc51
sqlsrv.new_config: yes
sqlsrv.do_add_server: no
sqlsrv.network_protocol_list: tcp
sqlsrv.network_hostname_list: 172.23.191.51
sqlsrv.network_port_list: 4100
sqlsrv.server_page_size: USE_DEFAULT
sqlsrv.force_buildmaster: yes
sqlsrv.master_device_physical_name: /dev/rdsk/c2t0d9s4
sqlsrv.master_device_size: USE_DEFAULT
sqlsrv.master_database_size: USE_DEFAULT
sqlsrv.errorlog: USE_DEFAULT
sqlsrv.do_upgrade: no
sqlsrv.sybsystemprocs_device_physical_name: /dev/rdsk/c2t0d10s4
sqlsrv.sybsystemprocs_device_size: USE_DEFAULT
sqlsrv.sybsystemprocs_database_size: USE_DEFAULT
sqlsrv.sybsystemdb_device_size: USE_DEFAULT
sqlsrv.sybsystemdb_database_size: USE_DEFAULT
sqlsrv.default_backup_server: as_emc51_backup

G.1.3 Configure the primary ASE

1. Initialize the devices for the primary database.

a. Initialize the data device:

disk init
name = "db1_data",
physname = "/dev/rdsk/c2t0d51s2",
vstart = 2,
size = "500M"
go

b. Initialize the log device:

disk init
name = "db1_log",
physname = "/dev/rdsk/c2t0d52s2",
vstart = 2,
size = "100M"
go

2. Create the primary database:

create database db1
on db1_data = "500M"
log on db1_log = "100M"
go

3. Create the replication maintenance user and grant the replication role to the user. The maintenance user is created specifically for the Replication Server component of Mirror Activator. Although the sa_role is granted to the maintenance user in this test environment, it is not required. The replication_role, however, is required for the maintenance user.

a. Add the maintenance user login:

use master
go
exec sp_addlogin ws_maint, "ws_maint_ps", db1
go

b. Grant the replication role to the maintenance user:

sp_role "grant", replication_role, ws_maint
go

c. Grant the sa role to the maintenance user:

sp_role "grant", sa_role, ws_maint
go

4. Verify that maintenance user ws_maint can log in to the primary ASE.


5. Add the maintenance user to the primary database:

use db1
go
sp_adduser "ws_maint"
go

G.1.4 Configure the standby ASE

1. Initialize the devices for the standby database.

a. Initialize the data device:

disk init
name = "db1_data",
physname = "/dev/rdsk/c2t0d51s2",
vstart = 2,
size = "500M"
go

b. Initialize the log device:

disk init
name = "db1_log",
physname = "/dev/rdsk/c2t0d52s2",
vstart = 2,
size = "100M"
go

2. Create the standby database:

create database db1
on db1_data = "500M"
log on db1_log = "100M"
go

3. Create the replication maintenance user and grant the replication role to the user. The maintenance user is created specifically for the Replication Server component of Mirror Activator. Although the sa_role is granted to the maintenance user in this test environment, it is not required. The replication_role is required.

a. Add the maintenance user login:

use master
go
exec sp_addlogin ws_maint, "ws_maint_ps", db1
go

b. Grant the replication role to the maintenance user:

sp_role "grant", replication_role, ws_maint
go

c. Grant the sa role to the maintenance user:

sp_role "grant", sa_role, ws_maint
go

4. Verify that maintenance user 'ws_maint' can log in to the standby ASE.


G.1.5 Configuring Mirror Activator

G.1.5.1 Install Replication Server system objects

Install the Replication Server system objects by executing the rs_install_primary.sql script (rsinspri.sql on Windows); then configure the system objects in the primary database. For example:

isql -U sa -P -S as_emc50 -D db1 -i rs_install_primary.sql >> rs_install_primary.out

1. Grant execute permissions on the RepServer system procedures:

grant execute on rs_update_lastcommit to PUBLIC
go
grant execute on rs_check_repl_stat to PUBLIC
go
grant execute on rs_marker to PUBLIC
go

2. Grant permissions to the Replication Server maintenance user:

grant all on rs_lastcommit to ws_maint
go
grant execute on rs_get_lastcommit to ws_maint
go

3. Mark the Replication Server system procedure rs_update_lastcommit for replication:

sp_setreplicate rs_update_lastcommit, true
go

4. Disable the secondary truncation point:

dbcc settrunc('ltm','ignore')
go
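To confirm the truncation point state afterwards, dbcc gettrunc can be run in the primary database. A sketch, assuming the isql login used above (the password is illustrative, and output columns vary by ASE version):

isql -Usa -Ppassword -Sas_emc50 -Ddb1 <<'EOF'
dbcc gettrunc
go
EOF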

G.1.5.2 Configure Mirror Replication Agent 12.6

In the Mirror Replication Agent, use the ra_config command to set the primary data server and replication connection properties:

ra_config pds_host_name, 172.23.191.50
go
ra_config pds_port_number, 4100
go
ra_config pds_database_name, db1
go
ra_config pds_username, ws_maint
go
ra_config pds_password, ws_maint_ps
go
ra_config rs_source_ds, as_emc50
go
ra_config rs_source_db, db1
go
ra_config rs_host_name, localhost
go
ra_config rs_port_number, 4200
go
ra_config rs_username, rep_db1_mra
go
ra_config rs_password, rep_db1_mra_ps
go
ra_config rssd_host_name, localhost
go
ra_config rssd_port_number, 4201
go
ra_config rssd_database_name, rep_emc51_erssd
go
ra_config rssd_username, rep_emc51_RSSD_prim
go
ra_config rssd_password, rep_emc51_RSSD_prim_ps
go
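Before initializing, it can be useful to confirm that the settings took effect. A sketch, assuming that in this Mirror Replication Agent release ra_config with no arguments lists all current parameter settings and test_connection checks connectivity to the primary ASE and Replication Server:

ra_config
go
test_connection
go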

G.2 Initialization output

G.2.1 Initialize the primary database

Initialize the primary database by logging in to the Mirror Replication Agent and issuing the following:

pdb_init move_truncpt
go
Msg 0, Level 20, State 0:
Server 'mra_emc51', Procedure 'pdb_init move_truncpt', Line 1:
successful
(0 rows affected)

G.2.2 Initialize Replication Server

1. Create the replication user. This is the user that the Mirror Replication Agent logs in to Replication Server as.

create user rep_db1_mra set password rep_db1_mra_ps
go
User 'rep_db1_mra' is created.

2. Grant connect source permission to the replication user. The "connect source" permission is needed so that the Mirror Replication Agent can send transactions to Replication Server for distribution.

grant connect source to rep_db1_mra
go
Permission granted to user 'rep_db1_mra'.

3. Create a logical connection for the primary and standby databases. Replication Server uses logical connections to manage warm standby applications.

create logical connection to AS_EMC_LDS.db1_logical
go
Logical connection to 'AS_EMC_LDS.db1_logical' is created.

4. Create the active connection to the primary database. This adds the primary database to the warm standby replication system.

create connection to as_emc50.db1
set error class rs_sqlserver_error_class
set function string class rs_sqlserver_function_class
set username "ws_maint"
set password "ws_maint_ps"
with log transfer on
as active for AS_EMC_LDS.db1_logical
go
Active connection to 'as_emc50.db1' is created.

5. Create the standby connection. This adds the standby database to the warm standby replication system.

create connection to as_emc51.db1
set error class rs_sqlserver_error_class
set function string class rs_sqlserver_function_class
set username "ws_maint"
set password "ws_maint_ps"
with log transfer on
as standby for AS_EMC_LDS.db1_logical
go
Standby connection to 'as_emc51.db1' is created.

6. View the logical connection status. We expect the standby connection state to be waiting for an Enable Marker.

admin logical_status
go
Logical Connection Name          [166] AS_EMC_LDS.db1_logical
Active Connection Name           [167] as_emc50.db1
Active Conn State                Suspended/
Standby Connection Name          [168] as_emc51.db1
Standby Conn State               Suspended/Waiting for Enable Marker
Controller RS                    [16777317] rep_emc51
Operation in Progress            None
State of Operation in Progress   None

G.2.3 Materialize the standby database (execute from MRA)

1. Quiesce the primary database using the Mirror Replication Agent pdb_quiesce hold command. The for_dump option specifies that the database devices will be copied while the database is quiesced. Note that the sa_role is required for a user to quiesce a database in ASE; if the Mirror Replication Agent pds user does not have sa_role privileges, the pdb_quiesce command will fail.

pdb_quiesce hold, for_dump
go
Msg 0, Level 20, State 0:
Server 'mra_emc51', Procedure 'pdb_quiesce hold, for_dump', Line 1:
successful
(0 rows affected)

2. Shut down the standby ASE.

3. Materialize the standby database devices. Using storage-based replication to copy the primary database data and log devices to the standby site is the easiest and fastest way to materialize the standby database. The standby database must be offline when materialization is done.

a. Check the state of the EMC device pairings. In this example, the devices are currently split (not synchronized).


symrdf -g RDF1GRP query -rdfg all

Device Group (DG) Name : RDF1GRP
DG's Type              : RDF1
DG's Symmetrix ID      : 000185400047
Remote Symmetrix ID    : 000185400050   RDF (RA) Group Number : 1 (A)
Remote Symmetrix ID    : 000185400050   RDF (RA) Group Number : 2 (B)

           Source (R1) View              Target (R2) View
Logical    Dev      R1 Inv  R2 Inv  Link  Dev      R1 Inv  R2 Inv  MDA  RDF Pair
Device              Tracks  Tracks        Tracks           Tracks       STATE
---------  -------  ------  ------  ----  -------  ------  ------  ---  ----------
DEV001     0033 RW       0      12   NR   0033 RW      13       0  S..  Split
DEV002     0034 RW       0       6   NR   0034 RW       5       0  S..  Split
           0034 RW       0      76   NR   0035 RW       0       0  S..  Split

Total               ------  ------                 ------  ------
  Track(s)               0      94                     18       0
  MB(s)                0.0     2.9                    0.6     0.0

Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
D(omino)           : X = Enabled, . = Disabled
A(daptive Copy)    : D = Disk Mode, W = WP Mode, . = ACp off

b. Establish synchronous replication to the standby data and log devices:

symrdf -g RDF1GRP establish -rdfg 1

Execute an RDF 'Incremental Establish' operation for device group 'RDF1GRP' (y/[n]) ? y

An RDF 'Incremental Establish' operation execution is in progress for device group 'RDF1GRP'. Please wait...

Write Disable device(s) in (0047,01) on RA at target (R2).......Done.
Suspend RDF link(s) for device(s) in (0047,01)..................Done.
Mark target device(s) in (0047,01) to refresh from source.......Started.
Devices: 0033-0034 ............................................ Marked.
Mark target device(s) in (0047,01) to refresh from source.......Done.
Merge track tables between source and target in (0047,01).......Started.
Devices: 0033-0034 ............................................ Merged.
Merge track tables between source and target in (0047,01).......Done.
Resume RDF link(s) for device(s) in (0047,01)...................Done.

The RDF 'Incremental Establish' operation successfully initiated for device group 'RDF1GRP'.

c. Make sure the standby devices being materialized are synchronized. In this example, device 0033 synchronization is not yet complete (SyncInProg).

symrdf -g RDF1GRP query -rdfg all

Device Group (DG) Name : RDF1GRP
DG's Type              : RDF1
DG's Symmetrix ID      : 000185400047
Remote Symmetrix ID    : 000185400050   RDF (RA) Group Number : 1 (A)
Remote Symmetrix ID    : 000185400050   RDF (RA) Group Number : 2 (B)

           Source (R1) View              Target (R2) View
Logical    Dev      R1 Inv  R2 Inv  Link  Dev      R1 Inv  R2 Inv  MDA  RDF Pair
Device              Tracks  Tracks        Tracks           Tracks       STATE
---------  -------  ------  ------  ----  -------  ------  ------  ---  ------------
DEV001     0033 RW       0      16   RW   0033 WD       0      13  S..  SyncInProg
DEV002     0034 RW       0       0   RW   0034 WD       0       0  S..  Synchronized
           0034 RW       0      76   NR   0035 RW       0       0  S..  Split

Total               ------  ------                 ------  ------
  Track(s)               0      92                      0      13
  MB(s)                0.0     2.9                    0.0     0.4

Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
D(omino)           : X = Enabled, . = Disabled
A(daptive Copy)    : D = Disk Mode, W = WP Mode, . = ACp off

d. Wait until the RDF pair states are synchronized. In this example, devices 0033 and 0034 are in a synchronized state.

symrdf -g RDF1GRP query -rdfg all

Device Group (DG) Name : RDF1GRP
DG's Type              : RDF1
DG's Symmetrix ID      : 000185400047
Remote Symmetrix ID    : 000185400050   RDF (RA) Group Number : 1 (A)
Remote Symmetrix ID    : 000185400050   RDF (RA) Group Number : 2 (B)

           Source (R1) View              Target (R2) View
Logical    Dev      R1 Inv  R2 Inv  Link  Dev      R1 Inv  R2 Inv  MDA  RDF Pair
Device              Tracks  Tracks        Tracks           Tracks       STATE
---------  -------  ------  ------  ----  -------  ------  ------  ---  ------------
DEV001     0033 RW       0       0   RW   0033 WD       0       0  S..  Synchronized
DEV002     0034 RW       0       0   RW   0034 WD       0       0  S..  Synchronized
           0034 RW       0      76   NR   0035 RW       0       0  S..  Split

Total               ------  ------                 ------  ------
  Track(s)               0      76                      0       0
  MB(s)                0.0     2.4                    0.0     0.0

Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
D(omino)           : X = Enabled, . = Disabled
A(daptive Copy)    : D = Disk Mode, W = WP Mode, . = ACp off
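Rather than re-querying manually, the wait can be scripted. A sketch, assuming this SYMCLI release supports symrdf verify with the -synchronized check:

# Poll every 30 seconds until all pairs in RA group 1 are Synchronized
until symrdf -g RDF1GRP verify -synchronized -rdfg 1
do
    sleep 30
done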

e. Split storage-based replication to the standby data and log devices:

symrdf -g RDF1GRP split -rdfg 1

Execute an RDF 'Split' operation for device group 'RDF1GRP' (y/[n]) ? y

An RDF 'Split' operation execution is in progress for device group 'RDF1GRP'. Please wait...

Suspend RDF link(s) for devices in (0047).......................Done.
Read/Write Enable device(s) in (0047,01) on RA at target (R2)...Done.

The RDF 'Split' operation successfully executed for device group 'RDF1GRP'.

f. Make sure the standby devices that were materialized are now split:

symrdf -g RDF1GRP query -rdfg all

Device Group (DG) Name : RDF1GRP
DG's Type              : RDF1
DG's Symmetrix ID      : 000185400047
Remote Symmetrix ID    : 000185400050   RDF (RA) Group Number : 1 (A)
Remote Symmetrix ID    : 000185400050   RDF (RA) Group Number : 2 (B)

           Source (R1) View              Target (R2) View
Logical    Dev      R1 Inv  R2 Inv  Link  Dev      R1 Inv  R2 Inv  MDA  RDF Pair
Device              Tracks  Tracks        Tracks           Tracks       STATE
---------  -------  ------  ------  ----  -------  ------  ------  ---  ----------
DEV001     0033 RW       0       0   NR   0033 RW       0       0  S..  Split
DEV002     0034 RW       0       0   NR   0034 RW       0       0  S..  Split
           0034 RW       0      76   NR   0035 RW       0       0  S..  Split

Total               ------  ------                 ------  ------
  Track(s)               0      76                      0       0
  MB(s)                0.0     2.4                    0.0     0.0

Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
D(omino)           : X = Enabled, . = Disabled
A(daptive Copy)    : D = Disk Mode, W = WP Mode, . = ACp off

4. Synchronize replication to the primary database log device mirror:

symrdf -g RDF1GRP establish -rdfg 2

Execute an RDF 'Incremental Establish' operation for device group 'RDF1GRP' (y/[n]) ? y

An RDF 'Incremental Establish' operation execution is in progress for device group 'RDF1GRP'. Please wait...

Write Disable device(s) in (0047,02) on RA at target (R2).......Done.
Suspend RDF link(s) for device(s) in (0047,02)..................Done.
Resume RDF link(s) for device(s) in (0047,02)...................Not Done.
Merge track tables between source and target in (0047,02).......Started.
Device: 0034 .................................................. Merged.
Merge track tables between source and target in (0047,02).......Done.
Resume RDF link(s) for device(s) in (0047,02)...................Done.

The RDF 'Incremental Establish' operation successfully initiated for device group 'RDF1GRP'.

5. Check the state of the primary database log device and its mirror and make sure the pair is synchronized:

symrdf -g RDF1GRP query -rdfg all

Device Group (DG) Name : RDF1GRP
DG's Type              : RDF1
DG's Symmetrix ID      : 000185400047
Remote Symmetrix ID    : 000185400050   RDF (RA) Group Number : 1 (A)
Remote Symmetrix ID    : 000185400050   RDF (RA) Group Number : 2 (B)

           Source (R1) View              Target (R2) View
Logical    Dev      R1 Inv  R2 Inv  Link  Dev      R1 Inv  R2 Inv  MDA  RDF Pair
Device              Tracks  Tracks        Tracks           Tracks       STATE
---------  -------  ------  ------  ----  -------  ------  ------  ---  ------------
DEV001     0033 RW       0       0   NR   0033 RW       0       0  S..  Split
DEV002     0034 RW       0       0   NR   0034 RW       0       0  S..  Split
           0034 RW       0       0   RW   0035 WD       0       0  S..  Synchronized

Total               ------  ------                 ------  ------
  Track(s)               0       0                      0       0
  MB(s)                0.0     0.0                    0.0     0.0

Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
D(omino)           : X = Enabled, . = Disabled
A(daptive Copy)    : D = Disk Mode, W = WP Mode, . = ACp off

6. Initialize the Mirror Replication Agent for replication using the Mirror Replication Agent ra_init command. This can be done concurrently with steps 3 and 4. Remember that the primary database must be quiesced for this command to execute successfully.

ra_init
go
Msg 0, Level 20, State 0:
Server 'mra_emc51', Procedure 'ra_init', Line 1:
successful
(0 rows affected)

If the database has already been initialized, use ra_init force.

7. If necessary, set the path to the log device mirror if it differs from the path to the log device in the primary database.

a. List the device metadata using the Mirror Replication Agent ra_helpdevice command. This command returns results for all devices in the device repository, which is loaded when the ra_init command is executed.

ra_helpdevice
go
ID  Database  Device Name  Path
--  --------  -----------  --------------------
 3  db1       db1_log      /dev/rdsk/c2t0d52s2
(1 row affected)


b. Set the device path using the Mirror Replication Agent ra_devicepath command. This command sets the path to the device to a different value.

ra_devicepath db1_log, /dev/rdsk/c2t0d53s2
go
ID  Database  Device Name  Path
--  --------  -----------  --------------------
 3  db1       db1_log      /dev/rdsk/c2t0d53s2
(1 row affected)

8. Unquiesce the primary database using the Mirror Replication Agent pdb_quiesce release command:

pdb_quiesce release
go
Msg 0, Level 20, State 0:
Server 'mra_emc51', Procedure 'pdb_quiesce release', Line 1:
successful
(0 rows affected)

9. Bring the standby ASE online.

G.2.4 Resume replication

1. Resume replication in the Mirror Replication Agent using the resume command. Execute the ra_status command to get the status of the Mirror Replication Agent.

resume
go
State        Action
-----------  -------------------------
REPLICATING  Ready to replicate data.
(1 row affected)

ra_status
go
State                                Action
-----------------------------------  -------------------------
REPLICATING (WAITING AT END OF LOG)  Ready to replicate data.
(1 row affected)

2. Resume the standby connection in Replication Server. Execution of this command causes the Standby Connection State to go to Active.

In this example, the Standby Conn State is not yet active (Suspended).

admin logical_status
go
Logical Connection Name          [166] AS_EMC_LDS.db1_logical
Active Connection Name           [167] as_emc50.db1
Active Conn State                Suspended/
Standby Connection Name          [168] as_emc51.db1
Standby Conn State               Suspended/Waiting for Enable Marker
Controller RS                    [16777317] rep_emc51
Operation in Progress            None
State of Operation in Progress   None

resume connection to as_emc51.db1
go

Connection to 'as_emc51.db1' is resumed.

In this example, the Standby Conn State is now active.

admin logical_status
go
Logical Connection Name          [166] AS_EMC_LDS.db1_logical
Active Connection Name           [167] as_emc50.db1
Active Conn State                Suspended/
Standby Connection Name          [168] as_emc51.db1
Standby Conn State               Active/
Controller RS                    [16777317] rep_emc51
Operation in Progress            None