
White Paper

Abstract

As the need to store, protect, and process more information rapidly increases, businesses experience a growing demand for intelligent storage systems while at the same time being required to keep costs to a minimum. The Symmetrix VMAX 10K system is a new member of the trusted Symmetrix product family, designed to be simple, cost-effective, and reliable, and to support anywhere from one to many databases and applications. EMC Symmetrix VMAX 10K satisfies all these needs by combining Symmetrix Enginuity features, 100 percent Virtually Provisioned storage for speed and ease of deployment, FAST VP for improved performance, and a combination of TimeFinder and the native RecoverPoint splitter for robust replication. This white paper describes how EMC Symmetrix VMAX 10K can be deployed to support Oracle databases and applications.

September 2012

Deploying ORACLE DATABASE 11g on EMC SYMMETRIX VMAX 10K


Copyright © 2012 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided “as is”. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners. Part Number H8271.1


Table of Contents

Executive summary
    Audience
Introduction
Products and features overview
    Symmetrix VMAX 10K series with Enginuity
    Unisphere for VMAX 10K
    Symmetrix VMAX 10K Auto-provisioning Groups
    Symmetrix VMAX 10K Virtual Provisioning
        Automated pool rebalancing
    Symmetrix VMAX 10K FAST VP
        Evolution of storage tiering
        Symmetrix FAST VP
        FAST VP and Virtual Provisioning
        FAST VP elements
        FAST VP Performance Time Window considerations
        FAST VP Move Time Window considerations
        FAST VP architecture
    Symmetrix VMAX 10K TimeFinder product family
        TimeFinder/Clone full clone and clone with no-copy option
        TimeFinder Consistent Split
        General best practices for ASM when using TimeFinder-based local replications
    EMC RecoverPoint/EX
        RecoverPoint components
    Combining TimeFinder and RecoverPoint for repurposing and recovery
Virtual Provisioning and Oracle databases
    Strategies for thin pool allocation with Oracle databases
        Oracle Database file initialization
        Oversubscription
        Undersubscription
        Thin device preallocation
    Planning thin pools for Oracle databases
    Planning thin devices for Oracle databases
        Thin device LUN sizing
        Thin devices and ASM disk group planning
    Thin pool reclamation with the ASM Reclamation Utility (ASRU)
FAST VP and Oracle databases
    Instantaneous changes in workload characteristics
    Changes in data placement initiated by the host (such as ASM rebalance)
    Which Oracle objects to place under FAST VP control
    OLTP vs. DSS workloads and FAST VP
Examples of VMAX 10K configurations for Oracle Database 11g
    Developing the configurations to meet the database needs
    Configuration 1 details
        Review of configuration 1
        Database test of configuration 1
    Configuration 2 details
        Review of configuration 2
        Database test of configuration 2
Conclusion
Appendixes
    Appendix A – Example of storage provisioning steps for configuration 1
        Detailed configuration steps
    Appendix B – TimeFinder/Clone configuration steps


Executive summary

The EMC® Symmetrix VMAX 10K™ with Enginuity delivers a multi-controller, scale-out architecture for enterprise reliability, availability, and serviceability at an affordable price. Built on the strategy of simple, intelligent, modular storage, it incorporates a scalable Virtual Matrix™ interconnect that connects all shared resources across all VMAX 10K engines, allowing the storage array to grow seamlessly from an entry-level configuration with one engine up to four engines. Each VMAX 10K engine contains two directors and redundant interfaces to the Virtual Matrix™ interconnect for increased performance and availability.

EMC Symmetrix VMAX 10K delivers enhanced capability and flexibility for deploying Oracle databases throughout the entire range of business applications, from mission-critical applications to test and development. To support this wide range of performance and reliability requirements at minimum cost, Symmetrix VMAX 10K can start with as few as 24 drives, grow to 240 drives with a single engine (a single system bay), and scale up to four engines supporting 960 drives and 512 GB of cache (four system bays and two drive bays). Symmetrix VMAX 10K arrays support multiple drive technologies, including Enterprise Flash Drives (EFDs), Fibre Channel (FC) drives, and SATA drives. Symmetrix VMAX 10K with FAST VP technology provides automatic, policy-driven storage tier allocation based on the actual application workload.

For ease of deployment and improved performance, Symmetrix VMAX 10K is fully based on Virtual Provisioning technology. Virtual Provisioning provides ease and speed of storage management, and a natively wide-striped storage layout for higher performance. When oversubscription is used, it can greatly improve storage capacity utilization with a seamless “grow as you go” thin provisioning model.

For business continuity and disaster recovery, Symmetrix VMAX 10K offers TimeFinder/Clone for creating local, space-efficient copies of the data for recoverability and restartability. Symmetrix VMAX 10K also offers a native RecoverPoint splitter. RecoverPoint provides local and remote replication with any-point-in-time recovery using RecoverPoint CDP, CRR, or CLR technology.

Audience

This white paper is intended for Oracle database administrators, storage administrators and architects, customers, and EMC field personnel who want to understand a Symmetrix VMAX 10K deployment with Oracle databases.


Introduction

This white paper demonstrates how to implement a typical Oracle database using the new installation and configuration features specific to the VMAX 10K platform. Because Symmetrix VMAX 10K is 100 percent virtually provisioned, and combines TimeFinder for point-in-time replicas with a native RecoverPoint splitter for local and remote data protection, the paper focuses on the storage layout choices that best accommodate performance, protection, and availability for Oracle databases.

Products and features overview

Symmetrix VMAX 10K series with Enginuity

Symmetrix VMAX 10K, the newest member of the Symmetrix family, is a revolutionary storage system purpose-built to meet all data center requirements as seen in Figure 1. Based on the Virtual Matrix Architecture™ and new Enginuity capabilities, Symmetrix VMAX 10K scales performance and capacity, delivers continuous operations, and greatly simplifies and automates the management and protection of information.

Figure 1. The Symmetrix VMAX 10K platform

The Symmetrix VMAX 10K design is based on individual engines with redundant CPU, memory, and connectivity on two directors for fault tolerance. VMAX 10K Engines connect to and scale out through the Virtual Matrix Architecture, which allows resources to be shared within and across VMAX 10K Engines. To meet growth requirements, additional VMAX 10K Engines can be added nondisruptively for efficient and dynamic scaling of capacity and performance that is available to any application on demand.

Figure 1 also summarizes the platform specifications:

• 1–4 redundant VMAX 10K Engines
• 24–960 drives
• Up to 384 GB global memory
• Up to 1.3 PB usable capacity
• Up to 64 FC ports
• Up to 32 Gig-E / iSCSI / FCoE ports
• Enterprise Flash Drives, 200 GB
• FC drives, 450 GB 15k rpm
• FC drives, 600 GB 10k rpm
• SATA drives, 2 TB 7.2k rpm


The VMAX 10K packaging and ease-of-use features streamline the implementation of Symmetrix systems from order entry through final configuration. A complete VMAX 10K system can be selected with just a few mouse clicks to specify the host connectivity, the disk types, and the total capacity. Standard systems are preconfigured in the factory, a new VMAX 10K installation script is executed at the customer site, and the final application-related configuration is completed with Unisphere for VMAX (although Solutions Enabler CLIs are available as well).

Unisphere for VMAX 10K

Unisphere for VMAX 10K, the replacement for the Symmetrix Management Console (SMC) shown in Figure 2, is a browser-based user interface that configures and manages Symmetrix VMAX and VMAX 10K storage systems. It can be hosted on a Windows, UNIX, or Linux server, or on the Symmetrix service processor, with access through a Web browser. Unisphere for VMAX 10K is used to:

• Discover Symmetrix VMAX 10K arrays

• Perform configuration operations

• Configure and manage storage tiering technologies such as FAST VP

• Manage Symmetrix Access Controls, user accounts, and permission roles

• Install customer replaceable disk drives

• Perform and monitor replication operations (TimeFinder for VMAX 10K, Open Replicator)

• Monitor alerts and applications’ performance


Figure 2. Unisphere for VMAX 10K

Symmetrix VMAX 10K Auto-provisioning Groups

With Symmetrix VMAX 10K Auto-provisioning Groups, mapping devices to small or large Oracle database environments becomes faster and easier. Devices, HBA ports and storage ports can be easily grouped to create a masking view that defines the exact relationship between host LUNs and storage connectivity. Auto-provisioning Groups provides increased security and simplifies the host and application management tasks by making only the appropriate storage devices (LUNs) visible to each host. Any component in the masking view can be dynamically modified and the changes will automatically propagate throughout the Auto-provisioning Group, thus improving and simplifying complex storage provisioning activities.
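As an illustration, a masking view for a two-node RAC cluster might be built with Solutions Enabler as sketched below. This is a minimal sketch: the array ID, device range, director ports, WWNs, and group names are placeholders, and the symaccess syntax should be verified against the Solutions Enabler release in use.

# Sketch: build an Auto-provisioning Group masking view for a RAC cluster.
# SID, device IDs, director ports, and WWNs are placeholders.
symaccess -sid 123 create -name rac_sg -type storage devs 0100:010F    # storage group
symaccess -sid 123 create -name rac_pg -type port -dirport 7E:1        # port group
symaccess -sid 123 -name rac_pg -type port add -dirport 10E:1
symaccess -sid 123 create -name rac_ig -type initiator -wwn 10000000c9aabb01
symaccess -sid 123 -name rac_ig -type initiator add -wwn 10000000c9aabb02
symaccess -sid 123 create view -name rac_mv -sg rac_sg -pg rac_pg -ig rac_ig

Once the view exists, adding a device to rac_sg or an HBA to rac_ig automatically propagates through the masking view, which is the dynamic-modification behavior described above.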


Figure 3. VMAX 10K Auto-provisioning Groups

Symmetrix VMAX 10K Virtual Provisioning

Symmetrix Virtual Provisioning enables users to simplify storage management and increase capacity utilization by sharing storage among multiple applications and only allocating storage as needed from a shared “virtual pool” of physical disks.

Symmetrix virtual provisioning technology, as can be seen in Figure 4, makes use of thin devices. The Symmetrix thin devices are logical devices that can be used in many of the same ways that Symmetrix standard devices have traditionally been used. Unlike traditional Symmetrix devices, thin devices do not need to have physical storage preallocated at the time the device is created and presented to a host (although in many cases customers interested only in wide striping and ease of management choose to fully preallocate the thin devices). A thin device is not usable until it has been bound to a shared storage pool known as a thin pool. Multiple thin devices may be bound to any given thin pool. The thin pool is comprised of devices called data devices that provide the actual physical storage to support the thin device allocations. Refer also to Appendix A – Example of storage provisioning steps for configuration 1.


Figure 4. VMAX 10K Virtual Provisioning

When a write is performed to a part of any thin device for which physical storage has not yet been allocated, the Symmetrix allocates physical storage from the thin pool for that portion of the thin device only. The Symmetrix operating environment, Enginuity, satisfies the requirement by providing a block of storage from the thin pool called a thin device extent. This approach reduces the amount of storage that is actually consumed. Allocations across the data devices are striped and balanced to ensure that an even distribution of allocations occurs from all available data devices in the thin pool (also referred to as wide striping).

For Symmetrix, the thin device extent size is 12 Symmetrix tracks or 768 KB. As a note, there is no reason to match the LVM stripe depth with the thin device extent size. Oracle commonly accesses data either by random single block read/write operations (usually 8 KB in size) or sequentially by reading large portions of data. In either case there is no advantage or disadvantage to match the LVM stripe depth to the thin device extent size as single block read/writes operate on a data portion that is smaller than the LVM stripe depth anyway. For sequential operations, if the data is stored together in adjacent locations on the devices, the read operation will simply continue to read data on each LUN (every time the sequential read wraps to that same LUN) regardless of the stripe depth. If the LVM striping caused the data to be stored randomly on the storage devices then the sequential read operation will turn into a storage random read of large I/Os spread across all the devices.

When a read is performed on a thin device, the data being read is retrieved from the appropriate data device in the thin pool to which the thin device is associated. If for some reason a read is performed against an unallocated portion of the thin device, zeros are returned to the reading process.


When more physical data storage is required to service existing or future thin devices, for example, when a thin pool is approaching full storage allocations, data devices can be added to existing thin pools dynamically without causing a system outage. New thin devices can also be created and bound to an existing thin pool at any time.

When data devices are added to a thin pool they can be in an enabled or disabled state. In order for the data device to be used for thin extent allocation it needs to be in the enabled state. For it to be removed from the thin pool, it needs to be in a disabled state. Symmetrix automatically initiates a drain operation on a disabled data device without any disruption to the application. Once all the allocated extents are drained to other data devices, a data device can be removed from the thin pool.

The following figure depicts the relationships between thin devices and their associated thin pools. Thin pool A contains six data devices, and thin pool B contains three data devices. There are nine thin devices associated with thin pool A and three thin devices associated with thin pool B. The data extents for thin devices are distributed on various data devices as shown in Figure 5.

Figure 5. Thin devices and thin pools containing data devices

The way thin extents are allocated across the data devices results in a form of striping in the thin pool. The more data devices in the thin pool (and the associated physical drives behind them), the wider the striping will be, creating an even I/O distribution across the thin pool. Wide striping simplifies storage management by reducing the time required for planning and execution of data layout. Refer also to Appendix A – Example of storage provisioning steps for configuration 1.
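For illustration, thin pool setup with the Solutions Enabler symconfigure command might look like the following sketch. The array ID, device IDs, and pool name are placeholders, and the command text should be verified against your Solutions Enabler release.

# Sketch: create a thin pool, enable data devices in it, and bind thin devices to it.
symconfigure -sid 123 -cmd "create pool FC_Pool type=thin;" commit
symconfigure -sid 123 -cmd "add dev 0200:0205 to pool FC_Pool type=thin, member_state=ENABLE;" commit
symconfigure -sid 123 -cmd "bind tdev 0100:010F to pool FC_Pool;" commit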

Automated pool rebalancing

Symmetrix VMAX 10K automated pool rebalancing allows the user to run a balancing operation that redistributes data evenly across the enabled data devices in the thin pool. Because thin extents are allocated from the thin pool in round-robin fashion, the rebalancing mechanism will be used primarily when adding data devices to increase thin pool capacity. If automated pool rebalancing is not used, existing data extents will not benefit from the added data devices, as they will not be redistributed.

The balancing algorithm will calculate the minimum, maximum, and mean used capacity values of the data devices in the thin pool. The Symmetrix will then move thin device extents from the data devices with the highest used capacity to those with the lowest until the pool is balanced. Pool rebalancing is a nondisruptive operation and thin devices (LUNs) can continue to be accessed by the applications during the rebalance.
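As a sketch, a rebalance can be started after new data devices have been added and enabled. The SID and pool name are placeholders, and the balancing clause below is recalled from Solutions Enabler 7.x documentation as an assumption; verify it against your release.

# Sketch: redistribute existing allocations across all enabled data devices.
symconfigure -sid 123 -cmd "start balancing on pool FC_Pool;" commit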

Symmetrix VMAX 10K FAST VP

Evolution of storage tiering

Almost any application causes access skewing at a LUN or sub-LUN level. In other words, some portions of the data are heavily accessed, some are accessed to a lesser degree, and often some portions are hardly accessed at all. Because DBAs tend to plan for worst-case peak workloads, they commonly place almost all data into a single storage tier based on fast FC drives (10k or 15k rpm). With the availability of multiple storage tiers and FAST VP technology, a more efficient storage tiering strategy can be deployed, one that places the right data on the right storage tier.

Storage tiering has evolved over the past several years from a completely manual process to the automatic process it is today. Manual storage tiering is the process of collecting performance information on a set of drives and then manually placing data on different drive types based on the performance requirements for that data. This process is typically very labor-intensive and does not dynamically adjust as the load on the application increases or decreases over time. FAST VP automates that process, adjusting tier allocations as the relevant data set (which is likely to change over time) and the I/O profile change, in accordance with a FAST policy.

Figure 6 shows an example of storage tiering evolution from a single tier to sub-LUN tiering. Although the image shows FAST VP operating on two tiers alone, in most cases tiering strategy is still best optimized for cost/performance using a three-tier approach.


Figure 6. Evolution of storage tiering

Symmetrix FAST VP

FAST VP automates the identification of thin device extents for the purposes of re-allocating application data across different performance tiers. FAST VP proactively monitors workloads at a sub-LUN level in order to identify active areas that would benefit from being moved to higher-performing drives. FAST VP will also identify less active sub-LUN areas that could be moved to higher-capacity drives, without existing performance being affected.

FAST VP and Virtual Provisioning

FAST VP is based on Virtual Provisioning technology. As explained earlier, Virtual Provisioning allows the creation and use of virtual devices (commonly referred to as thin devices) that are host-addressable, cache-only pointer-based devices. Once the host starts using the thin devices, their data is allocated in commonly shared pools called thin pools. A thin pool is simply a collection of Symmetrix regular devices of the same drive technology and RAID protection (for example, 50 x 100 GB RAID 5 15k rpm FC devices can be grouped into a thin pool called FC15k_RAID5). Because the thin pool devices store the pointer-based thin devices’ data, they are also referred to as data devices. Data in the thin pool is always striped, taking advantage of all the physical drives behind the thin pool data devices.

One can start understanding how FAST VP benefits from this structure. Since the thin device is pointer-based, and its actual data is stored in thin pools based on distinct drive type technology, when FAST VP moves data between storage tiers it simply migrates the data between the different thin pools and updates the thin device pointers accordingly. To the host, the migration is seamless as the thin device maintains the exact same LUN identity. At the Symmetrix storage, however, the data is migrated between thin pools without any application downtime.

FAST VP elements

FAST VP has three main elements — storage tiers, storage groups, and FAST policies — as shown in Figure 7.


Figure 7. FAST managed objects

• Storage tiers are the combination of drive technology and RAID protection available in the VMAX 10K array. Examples for storage tiers are RAID 5 EFD, RAID 1 FC, RAID 6 SATA, and so on.

• Storage groups are collections of Symmetrix host-addressable devices. For example, all the devices provided to an Oracle database can be grouped into a storage group.

• A FAST VP policy combines storage groups with storage tiers, and defines the configured capacities, as a percentage, that a given storage group is allowed to consume on each of those tiers. For example, a FAST VP policy can define 10 percent of its allocation to be placed on EFD_RAID5, 40 percent on FC15k_RAID1, and 50 percent on SATA_RAID6, as shown in Figure 7. Note that these allocations are the maximum allowed. For example, a policy of 100 percent on each of the storage tiers means that FAST VP has liberty to place up to 100 percent of the storage group data on any of the tiers. When combined, the policy must total at least 100 percent, but may be greater than 100 percent, as shown in Figure 8. In addition, the FAST VP policy defines exact time windows for performance analysis, data movement, data relocation rate, and other related settings.


Figure 8. FAST policy association

FAST VP operates in the storage array based on the policy-defined allocation limits (“Compliance”) and in response to the application workload (“Performance”). During the Performance Time Window that the FAST policy defines, FAST gathers performance statistics for the controlled storage groups. During the Movement Time Window that the FAST policy defines, FAST creates move plans (every 10 minutes) that accommodate any necessary changes based on the collected performance statistics, or due to compliance changes.
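The sequence below sketches how a tier, a policy, and a storage group association might be defined from the command line. All names are placeholders, and the symtier/symfast flags shown are assumptions recalled from Solutions Enabler 7.x documentation; verify them against your release, or use Unisphere for VMAX instead.

# Assumed syntax - verify against your Solutions Enabler release.
symtier -sid 123 create -name EFD_R5 -vp -technology EFD -tgt_raid5 -tgt_prot 3+1
symtier -sid 123 -tier_name EFD_R5 add -pool EFD_Pool
symfast -sid 123 create -fp -name ora_gold -tier_name EFD_R5 -max_sg_percent 10
symfast -sid 123 associate -sg oradb_sg -fp_name ora_gold -priority 2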

FAST VP Performance Time Window considerations

There is no one Performance Time Window recommendation that is generically applicable to all customer environments. Each site will need to make the decision based on their particular requirements and SLAs. Collecting statistics 24x7 is simple and the most comprehensive approach; however, overnight and daytime I/O profiles may differ greatly, and evening performance may not be as important as daytime performance. This difference can be addressed by simply setting the collection policy to be active only during the daytime from 7 A.M. to 7 P.M., Monday to Friday. This policy is best suited for applications that have consistent I/O loads during traditional business hours. Another approach would be to only collect statistics during peak times on specific days. This is most beneficial to customers whose I/O profile has very specific busy periods, such as the A.M. hours of Mondays. By selecting only the peak hours for statistical collection the site can ensure that the data that is most active during peak periods gets the highest priority to move to a high-performance tier. The default Performance Time Window is set for 24x7 as the norm but can be easily changed using Solutions Enabler CLI or Unisphere for VMAX.


FAST VP Move Time Window considerations

Choosing a FAST VP Move Time Window allows a site to decide how quickly FAST VP responds to changes in the workload. Allowing it to move data at any time of the day lets FAST VP quickly adapt to changing I/O profiles, but may add activity to the Symmetrix back end during peak times. Alternatively, the FAST VP Move Time Window can be set to specific lower-activity hours to prevent FAST VP activity from interfering with online activity. One such case would be when FAST is initially implemented on the array, when the amount of data being moved could be substantial. In either case FAST VP attempts to perform the move operations as efficiently as possible by moving only allocated extents, and with sub-LUN granularity the move operations are focused on just the data sets that need to be promoted or demoted.

The FAST VP Relocation Rate (FRR) is a quality-of-service setting for FAST VP and affects the “aggressiveness” of data movement requests generated by FAST VP. FRR can be set between 1 and 10, with 1 being the most aggressive, to allow the FAST VP migrations to complete as fast as possible, and 10 being the least aggressive. The default FRR is set to 5 and can be easily changed dynamically.

FAST VP architecture

There are two components of FAST VP as seen in Figure 9: Symmetrix Enginuity and the FAST controller.

The Symmetrix microcode is a part of the Enginuity storage operating environment that controls components within the array. The FAST controller is a service that runs on the Symmetrix service processor.

Figure 9. FAST VP components


When FAST VP is active, both components participate in the execution of two algorithms to determine appropriate data placement:

• Intelligent tiering algorithm

The intelligent tiering algorithm uses performance data collected by the microcode, as well as supporting calculations performed by the FAST controller, to issue data movement requests to the VLUN VP data movement engine.

• Allocation compliance

The allocation compliance algorithm enforces the upper limits of storage capacity that can be used in each tier by a given storage group by also issuing data movement requests to the VLUN VP data movement engine.

Data movements performed by the microcode are achieved by moving allocated extents between tiers. The size of data movement can be as small as 768 KB, representing a single allocated thin device extent, but will more typically be an entire extent group, which is 10 thin device extents, or 7.5 MB.

FAST VP has two modes of operation, Automatic or Off. When operating in Automatic mode, data analysis and data movements will occur continuously during the defined windows. In Off mode, performance statistics will continue to be collected, but no data analysis or data movements will take place.

Symmetrix VMAX 10K TimeFinder product family

The EMC TimeFinder family of local replication technology allows for creating multiple, nondisruptive, read/writeable storage-based replicas of database and application data. It satisfies a broad range of customers' data replication needs with speed, scalability, efficient storage utilization, and minimal to no impact on the applications, regardless of the database size. TimeFinder provides a solution for backup, restart, and recovery of production databases and applications, even when they span Symmetrix arrays. The TimeFinder product family supports the creation of dependent-write-consistent replicas using EMC consistency technology, and replicas that are valid for Oracle backup/recovery operations, as described in the TimeFinder sections of the white paper EMC Symmetrix VMAX Using SRDF/TimeFinder and Oracle Database 10g/11g. TimeFinder/Clone can scale to thousands of devices.

TimeFinder/Clone full clone and clone with no-copy option

TimeFinder on VMAX 10K allows creation of either a full clone of a thin source device to a thin target device, or a space-saving clone using the no-copy option. With a full clone, the target device is a stand-alone replica of the source device and consumes the same amount of storage in the thin pool. TimeFinder/Clone with the no-copy option, on the other hand, creates a space-saving clone target: the target clone device consumes no space in the thin pool at clone session creation, and thin pool space is allocated only as changes are made to either the source or the clone thin device. Incremental restore is supported once the background copy operation has completed for a full clone, or immediately when the clone was created with the no-copy option. With TimeFinder/Clone technology, Oracle database recovery operations can begin as soon as the incremental restore starts (there is no need to wait for the background copy to complete, as any needed data tracks are prioritized while the background copy proceeds). This provides a tremendous improvement in RTO.
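A minimal sketch of such a session with Solutions Enabler follows. The device group name and device IDs are placeholders, and the flags should be verified against your Solutions Enabler release.

symdg create oradb_dg                      # device group for the database devices
symld -g oradb_dg add dev 0100             # source thin device (placeholder ID)
symld -g oradb_dg add dev 0180 -tgt        # clone target thin device
symclone -g oradb_dg create -tgt           # omitting -copy gives a space-saving no-copy session
symclone -g oradb_dg activate -tgt -consistent
symclone -g oradb_dg restore -tgt          # incremental restore; recovery can start immediately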

TimeFinder Consistent Split

With TimeFinder you can use the Enginuity Consistency Assist (ECA) feature to perform consistent splits between source and target device pairs across multiple, heterogeneous hosts. Consistent split helps avoid the inconsistencies and restart problems that can occur if you split database-related devices without first quiescing the database. The difference between a normal instant split and a consistent split is that when using consistent split on a group of devices, database writes are held at the storage level momentarily while the foreground split occurs, maintaining dependent-write-order consistency on the target devices comprising the group. Since the foreground split completes in just a few seconds, Oracle needs to be in hot backup mode only for this short time when hot backup is used. Consistent split can also be used stand-alone to create a restartable replica, as described in the white paper referenced above.

TimeFinder target devices, after performing a consistent split, are in a state that is equivalent to the state a database would be in after a power failure, or if all database instances were aborted simultaneously. This is a state that is well known to Oracle and it can recover easily from it by performing a crash recovery the next time the database instance is started.
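For a recoverable (hot backup) image, the consistent split brackets a short hot backup window, as sketched below. The database and device group names are placeholders.

sqlplus -s / as sysdba <<EOF
alter database begin backup;
EOF
symclone -g oradb_dg activate -tgt -consistent    # foreground split completes in seconds
sqlplus -s / as sysdba <<EOF
alter database end backup;
alter system archive log current;
EOF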

General best practices for ASM when using TimeFinder-based local replications

• Use external redundancy (not ASM mirroring), in accordance with EMC's recommendation of leveraging the Symmetrix array RAID protection instead.

• Use separate disk groups for redo, data, and archive logs. For example, +REDO (redo logs), +DATA (data, control, and temp files), and +FRA (archive and flashback logs). EMC typically recommends separating logs from data for performance monitoring and backup offload reasons. Finally, +FRA can typically use a lower-cost storage tier, such as SATA drives, and therefore requires its own disk group (a disk group creation sketch follows this list).

• Starting with Oracle 11gR2 Oracle Cluster Ready Services (CRS) and ASM have been merged. Therefore when installing CRS the first ASM disk group is created. In that case it is recommended to create a small ASM disk group exclusively for CRS (no database objects should be stored in it), for example: +GRID and provide it with 5 LUNs. That will allow the ASM disk group to use High Redundancy ASM protection, which is the only way to have Oracle clusterware create multiple voting disks (quorum devices). As described earlier, all other ASM disk groups should use External Redundancy, making use of storage RAID protection.

• Whenever TimeFinder is used to clone an ASM disk group, consistency technology should be used (-consistent flag) even if Hot Backup mode is used at the database level. The reason is that Hot Backup mode does not protect ASM metadata writes.
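Putting these guidelines together, a minimal sketch of the disk group creation follows. The device paths are placeholders for the partitioned thin devices, permissions are assumed to be set already, and external redundancy relies on the array RAID protection as recommended above.

sqlplus -s / as sysasm <<EOF
CREATE DISKGROUP REDO EXTERNAL REDUNDANCY DISK '/dev/mapper/redo*';
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/mapper/data*';
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/mapper/fra*';
EOF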

EMC RecoverPoint/EX

EMC RecoverPoint provides DVR-like point-in-time recovery with three topologies: local continuous data protection (CDP), synchronous or asynchronous continuous remote replication (CRR), and a combination of both (CLR). RecoverPoint/EX is the offering that simplifies continuous data protection and replication by using the VMAX 10K with an Enginuity-based write splitter. RecoverPoint/EX is an appliance-based, out-of-band data protection solution designed to ensure the integrity of production data at local and/or remote sites. It enables customers to centralize and simplify their data protection management and allows for the recovery of data to nearly any point in time.

RecoverPoint provides continuous replication of every write between a pair of local volumes residing on one or more arrays. RecoverPoint also provides remote replication between pairs of volumes residing at two different sites. For local replication and remote synchronous replication, every write is collected, written to the local or remote journal, and then distributed to the target volumes. For remote asynchronous replication, multiple writes are collected at the local site, deduplicated, compressed, and sent periodically to the remote site, where they are uncompressed, written to the journals, and then distributed to the target volumes. Figure 10 depicts the RecoverPoint configuration for local and remote replication.

Figure 10. RecoverPoint Configuration

RecoverPoint components

RecoverPoint Appliance (RPA)

An RPA is a server that runs RecoverPoint software and includes four 4 Gb FC connections and two 1 Gb Ethernet connections. For fault tolerance, a minimum of two RPAs is needed per site, and this can be extended up to eight RPAs. RPAs are connected to the SAN, and their ports are zoned to the same Symmetrix VMAX 10K front-end adapters (FAs) that are zoned to the production host, so that the RPAs see all writes originated by the production host and can update the journal volumes.


Symmetrix VMAX 10K write splitter for RecoverPoint/EX

The Symmetrix VMAX 10K write splitter for RecoverPoint is an enhanced implementation of Open Replicator that sends all incoming host writes from the VMAX 10K array to the local RPA cluster, for use in CDP local replication, CRR-based remote replication, or CLR, which is the combination of both CDP and CRR.

RecoverPoint source volumes

RecoverPoint source volumes are the production volumes that are protected using RecoverPoint.

RecoverPoint replica volumes

RecoverPoint replica volumes are the target RecoverPoint volumes, on any heterogeneous storage array, containing a full copy of the production volumes. The replica volumes are normally write-disabled, but through its image-access functionality RecoverPoint enables direct read/write access to a replica volume from a secondary or standby host, allowing easy access to the data at any point in time in conjunction with the available journal. This any-point-in-time image of the production data can be used for test/dev, reporting, backup, or other use cases. Another option is to swap the roles of the secondary/standby and primary hosts and reverse the direction of replication.

RecoverPoint consistency groups

Similar to TimeFinder consistency groups, RecoverPoint consistency groups allow creation of a write-order-consistent copy of a set of production volumes. A consistency group can be disabled at any time for maintenance operations on the production volumes, and RecoverPoint will resynchronize the replica volumes once the consistency group is re-enabled. The best practices for using RecoverPoint consistency groups are similar to those described earlier for TimeFinder.

RecoverPoint journal volumes

RecoverPoint journals store block-level changes to the source volumes, and they are used in conjunction with the replica volumes to enable any-point-in-time recovery. RecoverPoint journal volumes are Symmetrix devices visible only to the RPA cluster. Because all writes are journaled, the size of the journal depends on the desired period of protection and the change rate at the production site.
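As a rough back-of-the-envelope sketch, journal capacity is approximately the write change rate multiplied by the protection window; the rate and window below are hypothetical, and RecoverPoint sizing guides add overhead on top of this.

CHANGE_RATE_MB_S=10    # sustained write rate to the protected volumes (assumption)
WINDOW_HOURS=24        # desired point-in-time protection window (assumption)
echo "$(( CHANGE_RATE_MB_S * 3600 * WINDOW_HOURS / 1024 )) GB of journal before overhead"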

RecoverPoint repository volumes

Repository volumes are very small devices, visible to the RPA cluster, that store management information required for RecoverPoint replication operations.

Combining TimeFinder and RecoverPoint for repurposing and recovery

TimeFinder and RecoverPoint can coexist on Symmetrix VMAX 10K. A production volume can be the source for TimeFinder/Clone and/or a RecoverPoint replica. This allows creation of multiple independent copies of the production database for recovery and restartability using TimeFinder, while continuing to get near any-point-in-time recovery functionality from RecoverPoint, with retention based on the RecoverPoint journal size. The RecoverPoint replica volume can also be associated with a TimeFinder/Clone operation, allowing similar use cases from the replica volume as well. Creating a periodic TimeFinder/Clone of the replica volume and refreshing the replica volume from production data extends the data protection window beyond what the RecoverPoint journals alone can support, by reusing the journal volumes for more recent changes. Note that when using TimeFinder/Clone to restore the production data, RecoverPoint consistency group operations should be disabled on those volumes, as such a restore would invalidate the RecoverPoint-based replica. Once the restore completes, the consistency groups can be re-enabled, which will result in a full sweep to refresh the RecoverPoint replica.

Virtual Provisioning and Oracle databases

Strategies for thin pool allocation with Oracle databases

Oracle Database file initialization

Using Virtual Provisioning in conjunction with Oracle databases provides benefits such as reduced server impact during future LUN provisioning, increased storage utilization, native striping in the thin pool, and ease and speed of creating and working with thin devices. However, as is commonly known, when Oracle initializes new files, such as log, data, and temp files, it fully allocates the file space by writing non-zero information (metadata) to each initialized block. This causes the thin pool to allocate the amount of space that is being initialized by the database. As database files are added, more space is allocated in the pool. Due to Oracle file initialization, and in order to get the most benefit from a Virtual Provisioning infrastructure, a strategy for sizing files, pools, and devices should be developed in accordance with application and storage management needs. Some strategy options are explained next.

Oversubscription

An oversubscription strategy is based on using thin devices with a total capacity greater than the physical storage in the thin pool(s) they are bound to. This can increase capacity utilization by not allocating the long-term predicted storage capacity from the start, thereby reducing the amount of allocated but possibly never-used space. Each thin device appears to the application as a full-size device, while in fact the thin pool cannot accommodate the total thin LUN capacity. Since Oracle database files initialize their space as soon as they are created, even while still empty, it is recommended when oversubscription is used that, instead of creating very large data files that may remain largely empty for most of their lifetime, smaller data files be considered to accommodate near-term capacity needs. As they fill up over time, their size can be increased, or more data files added, in conjunction with capacity increases of the thin pool. The Oracle auto-extend feature can be used for simplicity of management, or DBAs may prefer manual file sizing or addition.
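For example, a tablespace might start small and auto-extend as it fills, letting the thin pool grow with actual usage. The tablespace name and sizes below are illustrative.

sqlplus -s / as sysdba <<EOF
CREATE TABLESPACE app_data DATAFILE '+DATA' SIZE 4G
  AUTOEXTEND ON NEXT 1G MAXSIZE 30G;
EOF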


An oversubscription strategy is recommended for database environments when database growth is controlled, and thin pools can be actively monitored and their size increased when necessary in a timely manner.

Undersubscription

An undersubscription strategy is based on using thin devices with a total capacity smaller than the physical storage in the pool(s) they are bound to. This approach doesn’t necessarily improve storage capacity utilization but still makes use of wide striping, thin pool sharing, and other benefits of Virtual Provisioning. In this case the data files can be sized to make immediate use of the full thin device size, or alternatively, auto-extend or manual file management can be used.

Undersubscribing is recommended when data growth is unpredictable, when multiple small databases share a large thin pool to benefit from wide striping, or when an oversubscribed environment is considered unacceptable due to a potential out-of-space thin pool condition.

Thin device preallocation

With either oversubscription or undersubscription, when DBAs would like to guarantee that space is reserved for the databases' thin devices, they can use thin device preallocation. A thin device can preallocate space in the pool, even before data is written to it. Preallocation also eliminates the small performance overhead incurred when data is first written to a previously unallocated space in the thin pool. Figure 11 shows an example of creating 10 x 10 GB thin devices and preallocating the full device capacity of 10 GB in the pool for each of them. When preallocation is used in conjunction with undersubscription, Oracle database customers often preallocate the whole thin device (reducing the storage capacity optimization benefits). In effect, each thin device then fully claims its space in the thin pool, eliminating a possible thin pool out-of-space condition. It is also possible to preallocate only a portion of the thin device, especially when an oversubscription strategy is used, to match the size of short-term application needs. For example, ASM disks can be set smaller than their actual full size, and later be resized dynamically without any disruption to the database application.
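The creation of the 10 x 10 GB fully preallocated thin devices shown in Figure 11 might be scripted as follows. The SID and pool name are placeholders, and the preallocate clause is recalled from Solutions Enabler documentation as an assumption; verify it against your release.

# Sketch: create 10 thin devices of 10 GB each, bound and fully preallocated.
symconfigure -sid 123 -cmd "create dev count=10, size=10 GB, emulation=FBA, config=TDEV, binding to pool=FC_Pool, preallocate size=ALL;" commit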


Figure 11. Creation of pre-allocated thin devices in the thin pool

Planning thin pools for Oracle databases

Symmetrix VMAX 10K is fully virtually provisioned, which simplifies storage provisioning by working with thin pools. When a VMAX 10K is ordered, the data devices that make up the thin pools are already configured, and all that is left is to group them into the appropriate thin pools. With thin devices, performance of the database can be easily improved because thin devices are striped evenly over all the physical drives in the pool. For typical OLTP Oracle databases this provides the maximum number of physical devices to service the workload. If a database starts on a pool of, say, 64 physical drives, and the load on those drives is too heavy, the pool can be expanded dynamically, without interruption to the application, to spread the load over more physical drives.

In general thin pools should be configured to meet at least the initial capacity requirements of all applications that will reside in the pool. The pool should also contain enough physical drives to service the expected workload. When using FAST VP the full power of automated storage tiering allows the workload to be dynamically distributed across multiple tiers, providing the best cost/performance benefits.

For RAID protection, thin pools are no different in terms of reliability and physical drive performance than existing drives today. Both RAID 1 and RAID 5 protect from a single-drive failure, and RAID 6 protects from two-drive failures. A RAID 1 group resides on two physical drives; a RAID 5 (3+1) group resides on four physical drives, and so on. When a thin pool is created, it is always created out of similarly configured RAID groups. For example, if eight RAID 5 (3+1) data devices are placed in one pool, the pool has eight RAID 5 devices of four drives each. If one of the drives in this pool fails, you are not losing one drive from a pool of 32 drives; rather, you are losing one drive from one of the eight RAID-protected data devices, and that RAID group can continue to service read and write requests, in degraded mode, without data loss. Also, as with any RAID group, when a drive fails Enginuity immediately invokes a hot sparing operation to restore the RAID group to its normal state. While this RAID group is rebuilding, any of the other RAID groups in the thin pool can have a drive failure and there is still no loss of data. In this example, with eight RAID groups in the pool, there can be one failed drive in each RAID group without data loss. In this manner data stored in the thin pool is no more vulnerable to data loss than any other data stored on similarly configured RAID devices. Therefore RAID 1 or RAID 5 protection for thin pools is acceptable for most applications, and RAID 6 is only required in situations where additional parity protection is warranted.

The choice of drive technology and RAID protection is the first factor in determining the number of thin pools. The other factor has to do with the business owners. When applications share thin pools they are bound to the same set of data devices and spindles, and they share the same overall thin pool capacity and performance. If business owners require their own control over thin pool management they will likely need a separate set of thin pools based on their needs. In general, however, for ease of manageability it is best to keep the overall number of thin pools low, and allow them to be spread widely across many drives for best performance.

Planning thin devices for Oracle databases

Thin device LUN sizing

The maximum size of a standard thin device on a Symmetrix VMAX 10K is 240 GB. If a larger size is needed, a metavolume comprised of thin devices can be created. Symmetrix VMAX 10K thin metavolumes can be concatenated or striped. While concatenated metavolumes support faster online expansion than striped metavolumes, it is often a best practice to use striped metavolumes for improved performance (if metavolumes are necessary).

Note that it is not recommended to provision applications with an extremely low number of very large LUNs. The reason is that each LUN provides the host with an additional I/O queue to which the host operating system can stream I/O requests and parallelize the workload. Host software and HBA drivers tend to limit the number of I/Os that can be queued at a time to a LUN. In order to keep the physical drives behind the thin pool busy, and to avoid host queuing bottlenecks under heavy workloads, it is better to provide the application with a sufficient number of LUNs to allow enough I/O paths and concurrency, without becoming too many to manage.

When oversubscription is used, the thin pool can be sized for near-term database capacity growth, and the thin devices for long-term LUN capacity needs. Since the thin LUNs do not take space in the pool until data is written to them (assuming thin device preallocation is not used), this method optimizes storage capacity utilization and reduces the database and application impact as they continue to grow. Note, however, that the larger the device, the more metadata is associated with it and tracked in the Symmetrix cache. Therefore the sizing should be reasonable and realistic, to limit unnecessary cache overhead.

Thin devices and ASM disk group planning

Thin devices are presented to the host as SCSI LUNs. Oracle recommends creating at least a single partition on each LUN to identify the device as being used. On x86-based platforms it is important to align the LUN partition, for example by using fdisk or parted on Linux. With fdisk, after the new partition is created, type “x” to enter Expert mode, then use the “b” option to move the beginning of the partition. Either a 128-block (64 KB) offset or a 2,048-block (1 MB) offset is a good choice, as both align with the Symmetrix 64 KB cache track size. After assigning Oracle permissions to the partition, it can become an ASM disk group member or be used in other ways for the Oracle database.
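A 1 MB-aligned partition can also be created non-interactively with parted, as sketched here. The device name is a placeholder, and the partition naming and ownership settings depend on the multipath/udev configuration.

parted -s /dev/mapper/ora_data1 mklabel msdos
parted -s /dev/mapper/ora_data1 mkpart primary 2048s 100%
chown oracle:dba /dev/mapper/ora_data1p1    # partition name varies by multipath setup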

When using Oracle Automatic Storage Management (ASM), Oracle recommends a minimal number of ASM disk groups for ease of management. Indeed, when multiple smaller databases share the same performance and availability requirements they can also share ASM disk groups; however, larger, more critical databases may require their own ASM disk groups for better control and isolation. EMC best practice for mission-critical Oracle databases is to create a few ASM disk groups based on the following guidelines:

• +GRID: Starting with Database 11gR2, Oracle has merged Cluster Ready Services (CRS) and ASM, and they are installed together as part of the Grid installation. Therefore, when the clusterware is installed the first ASM disk group is also created, to host the quorum and cluster configuration devices. Since these devices contain local environment information such as hostnames and subnet masks, there is no reason to replicate them. EMC best practice starting with Oracle Database 11.2 is to create only a very small disk group during Grid installation for the sake of the CRS devices, and not place any database components in it. When other ASM disk groups containing database data are replicated with storage technology, they can simply be mounted to a different +GRID disk group at the target host or site, which already has Oracle CRS installed with all the local information relevant to that host and site. Note that while external redundancy (RAID protection handled by the storage array) is recommended for all other ASM disk groups, EMC recommends normal or high redundancy only for the +GRID disk group. The reason is that Oracle automates the number of quorum devices based on the redundancy level, and higher redundancy allows the creation of more quorum devices. Since the capacity requirements of the +GRID ASM disk group are tiny, very small devices can be provisioned (normal redundancy implies three failure groups, quorum devices, and LUNs; high redundancy implies five).

• +DATA, +LOG: While separating data and log files into two different ASM disk groups is optional, EMC recommends it in the following cases:


– When TimeFinder is used to create a clone that is a valid backup image of the database. The TimeFinder clone image can serve as a source for RMAN full and incremental backups, and/or be opened for reporting (read-only), and so on. More importantly, such a clone is also a valid full backup image of the database: if the database requires media recovery, restoring the TimeFinder clone back to production takes only seconds, regardless of the database size. This is a huge saving in RTO, and within a few seconds archive logs can start being applied as part of the media recovery roll forward. When such a clone doesn't exist, the initial backup set first has to be restored from tape/VTL before any archive log can be applied, which can add a significant amount of time to recovery operations. Therefore, when TimeFinder is used to create a backup image of the database, the online logs should be placed on separate devices in a separate ASM disk group, so that the restore does not overwrite them.

– Another reason to separate data from log files is performance and availability. Redo log writes are synchronous and must complete in the least amount of time. Placing them on separate storage devices means the commit writes do not have to share a LUN I/O queue with large asynchronous buffer cache checkpoint I/Os. Placing the logs in different thin devices than the data also makes it possible to use a different thin pool, and therefore gain increased availability (when the thin pools don't share spindles) and possibly different RAID protection (when the thin pools use different RAID protection).

• +TEMP: When storage replication technology is used for disaster recovery, it is possible to save bandwidth by not replicating temp files. Since temp files are not part of a recovery operation and are quick to re-add, keeping them on separate devices saves replication bandwidth, at the cost of extra steps when bringing up the database after failover. Separating temp files is optional, and the DBA may choose to do it anyway for performance isolation reasons if that is their best practice.

• +FRA: The Fast Recovery Area typically hosts the archive logs, and sometimes flashback logs and backup sets. Since the I/O operations to the FRA are typically sequential writes, it is usually sufficient to locate it on a lower tier such as SATA drives. Oracle also recommends keeping the FRA in a disk group separate from the rest of the database, to avoid keeping the database files and the archive logs or backup sets that protect them together.
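The following sketch illustrates the disk group guidelines above using SQL*Plus from the Grid Infrastructure home. It is an example only: the disk paths (hypothetical udev aliases), disk group names, and discovery strings are assumptions, and the partitions are assumed to have been created and given Oracle permissions as described earlier:

sqlplus / as sysasm <<'EOF'
-- +GRID: a very small disk group with normal redundancy (three failure
-- groups, quorum devices, and LUNs); holds only CRS/voting devices
CREATE DISKGROUP GRID NORMAL REDUNDANCY
  DISK '/dev/asmdisks/grid1', '/dev/asmdisks/grid2', '/dev/asmdisks/grid3';

-- All other disk groups use external redundancy; RAID protection is
-- handled by the Symmetrix array
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/asmdisks/data*';
CREATE DISKGROUP LOG  EXTERNAL REDUNDANCY DISK '/dev/asmdisks/redo*';
CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK '/dev/asmdisks/fra*';
EOF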

Thin pool reclamation with the ASM Reclamation Utility (ASRU)

In general, Oracle ASM reuses free/deleted space under the high watermark very efficiently. However, when a large amount of space is released (for example, after the deletion of a large tablespace or database) and the space is not expected to be needed soon by that ASM disk group, it is beneficial to free up that space in both the disk group and the thin pool.


To simplify the reclamation of thin pool space no longer needed by ASM objects, Oracle and its storage partners developed the ASM Reclamation Utility (ASRU). ASRU, in conjunction with Symmetrix space reclamation, compacts the Oracle ASM disk group and reclaims the space that was freed in the disk group from the Symmetrix storage array. The integration of Symmetrix with ASRU is covered in the white paper Implementing Virtual Provisioning on EMC Symmetrix VMAX with Oracle Database 10g and 11g.
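A hedged example of the reclamation flow follows; it assumes the ASRU script has been downloaded from Oracle, and DATA is a hypothetical disk group from which a large tablespace was just dropped:

# Run as the ASM/Grid owner. ASRU compacts the disk group, writes zeroes
# over the freed space, and resizes the disks back; Symmetrix space
# reclamation can then return the zeroed space to the thin pool.
./ASRU DATA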

FAST VP and Oracle databases

FAST VP integrates very well with Oracle databases. As explained earlier, applications tend to drive most of the workload to a subset of the database, very often just a small subset of the whole database. That subset is a candidate for performance improvement and therefore for uptiering by FAST VP. Other database subsets can either remain where they are or be down-tiered if they are mostly idle (for example, unused space or historic data kept due to regulations). Oracle ASM natively stripes the data across the disk group members, spreading the workload across all storage devices in the ASM disk group. From the host it may look as if all the LUNs are very active, but in almost all cases only a small portion of each LUN is very active. Figure 12 shows an example of I/O read activity, as experienced by the Symmetrix storage array, on a set of 14 ASM devices (x-axis) relative to the location on the devices (y-axis). The color reflects I/O activity at each logical block address (LBA) on the LUN, where blue indicates low activity and red high. It is easy to see in this example that while ASM stripes the data and spreads the workload evenly across the devices, not all areas on each LUN are "hot," and FAST VP can focus on the hot areas alone and uptier them. It can also down-tier the idle areas (or leave them in place, based on the policy allocations). The result is improved performance, cost, and storage efficiency.

Even if ASM is not in use, other volume managers tend to stripe the data across multiple devices and will therefore benefit from FAST VP in a similar way. When filesystems alone are used, we can look at sub-LUN skewing inside the filesystem rather than across a set of devices. The filesystem will traditionally host multiple data files, each containing database objects, some of which will tend to be more active than others as discussed earlier, creating I/O access skewing at a sub-LUN level.


Figure 12. “Heat” map of ASM disks showing sub-LUN skewing

Certain considerations apply to FAST VP with Oracle databases. Examples are instantaneous changes in workload characteristics and changes in data placement initiated by the host, such as an ASM rebalance.

Instantaneous changes in workload characteristics

Instantaneous changes in workload characteristics, such as quarter-end or year-end reports, may put a heavy workload on portions of the database that are not accessed daily and may have been migrated to a lower-performance tier. Symmetrix is optimized to take advantage of very large cache (up to 1 TB raw) and has efficient algorithms to prefetch data and optimize disk I/O access. Therefore Symmetrix VMAX 10K will handle most workload changes effectively and no action needs to be taken by the user. On the other hand the user can also assist by modifying the FAST VP policy ahead of such activity when it is known and expected, and by changing the Symmetrix priority controls and cache partitioning quotas if used. Since such events are usually short term and only touch each data set once, it is unlikely (and not desirable) for FAST VP to migrate data at that same time and it is best to simply let the storage handle the workload appropriately. If the event is expected to last a longer period of time (such as hours or days), then FAST VP, being a reactive mechanism, will actively optimize the storage allocation as it does natively.

Changes in data placement initiated by the host (such as ASM rebalance)

Changes in data placement initiated by the host can be due to filesystem defragmentation, volume manager restriping, or even simply a user moving database objects. When Oracle ASM is used, the data is automatically striped across the disk group. Certain operations cause ASM to restripe (rebalance) the data, effectively moving existing allocated ASM extents to a new location, which may cause the storage tiering optimized by FAST VP to temporarily degrade until FAST VP re-optimizes the database layout. ASM rebalance commonly takes place when devices are added to or dropped from the ASM disk group. These operations are normally known in advance (although not always) and take place during maintenance or low-activity periods. Typically, new thin devices given to the database (and ASM) will be bound to a medium- or high-performance storage tier, such as FC or EFD. Therefore when such devices are added, ASM will rebalance extents onto them, and it is unlikely that database performance will degrade much afterward (since the extents land on a relatively fast storage tier). If such activity takes place during low-activity or maintenance time, it may be beneficial to disable FAST VP data movement until the rebalance completes, and then let FAST VP monitor performance and initiate a move plan based on the new layout. Of course, any new devices added to ASM should also be added to the FAST VP-controlled storage groups so FAST VP can operate on them together with the rest of the database devices.

Which Oracle objects to place under FAST VP control

Very often storage is managed by a different group from the database management team, and coordination is based on need. In these cases, when devices are provisioned to the database they can be placed under FAST VP control by the storage team without clear knowledge of how the database team will use them. Since FAST VP analyzes the actual I/O workload based on the FAST policy, it will actively optimize the storage tiering of all controlled devices.

However, when more coordination takes place between the database and storage administrators, it might be best to focus the FAST VP optimization on database data files, indexes, and at times even temp, and leave other database objects such as logs and archive logs outside of FAST VP control. The reason is that redo log devices experience sequential read and write activity, and archive logs sequential writes. All writes in Symmetrix go to cache and are acknowledged immediately to the host (regardless of storage tier). For sequential reads, the different disk technologies in the storage array have minimal impact, due to I/O prefetch and reduced disk head movement (in contrast to random read activity). While the temp I/O profile is also sequential in nature (writes followed by reads), concurrency and LVM striping often make the TEMP workload effectively random. In such cases, when the Oracle database reports high I/O activity to TEMP space (such as when sorts and merges can't fit in memory), it is beneficial to place it under FAST VP control as well. FAST VP algorithms place higher emphasis on improving random read I/O activity, although they also take write and sequential read activity into consideration. Therefore, while data files are likely to reach the EFD tier first, chances are that high-activity logs will consume some space on that tier as well if they are included in the storage group under FAST VP control.

OLTP vs. DSS workloads and FAST VP

As explained in the previous section, FAST VP places higher emphasis on uptiering random read workloads, although it will also try to improve the performance of other devices with high I/O activity, such as sequential reads and writes. For that reason the active data set of OLTP applications has a higher priority for uptiering by FAST VP than DSS. However, DSS applications can benefit from FAST VP as well. First, data warehouse/BI systems often have large indexes that generate random read activity; these indexes produce an I/O workload that can benefit greatly from being uptiered to EFD. Master Data Management (MDM) tables are another example of objects that can benefit greatly from the EFD tier. FAST VP also downtiers inactive data. This is especially important in DSS databases, which tend to be very large: FAST VP can reduce costs by downtiering aged data and partitions while keeping the active data set on faster tiers. FAST VP does the storage tiering automatically, without the need to continuously perform complex ILM actions at the database or application tiers.

Examples of VMAX 10K configurations for Oracle database 11g

Developing the configurations to meet the database needs

VMAX 10K comes pre-configured with data devices ready to be grouped into thin pools. The ease of management and flexibility provided by VMAX 10K allow better control over allocation, storage tiering, performance, and data protection with several possible deployment scenarios. Not all the applications running on VMAX 10K will demand similar service level objectives. This section describes two configurations and demonstrates how they satisfy various application requirements.

Configuration 1 is focused on local protection and isolation, with the assumption that remote replication is not in place. In configuration 1, a single thin pool is created for each storage tier. The data files are placed in the FC thin pool, whereas redo logs, archive logs, and TimeFinder clone LUNs are placed in the SATA thin pool. Instead of the default RAID 6 protection for SATA, this configuration uses RAID 1 for the higher performance required by Oracle redo logs, which experience a sequential write-intensive workload. This configuration separates the data files from the logs, archive logs, and gold copy clones that protect the data. In the absence of remote replication, a higher level of isolation and separation is achieved.

Configuration 2 is focused on resource sharing with the assumption that remote replication is in place. Also in configuration 2 a single thin pool is created for each storage tier. Data files and redo logs are placed in the FC thin pool while TimeFinder clone and archive log LUNs use the SATA thin pool. To improve capacity utilization and since the redo logs are in the FC thin pool, RAID 6 protection is used for the SATA thin pool. This configuration achieves more sharing of the FC tier and improves capacity utilization of the SATA tier. The clones are on a separate set of drives (SATA drives) from the database and if indeed remote replication is in place, then the data and logs are replicated remotely for additional protection.

In both configurations the EFD thin pool (and tier) can be fully used by FAST VP, for example by creating a policy that includes the database data LUNs in all three tiers. The EFD thin pool can also be used for manual storage tiering by creating additional thin LUNs, binding them to the EFD thin pool, and provisioning them to the database (for example, creating an +EFD ASM disk group from thin LUNs bound to the EFD tier). A combination of manual and automated storage tiering (FAST VP) can be used as well.

Note that when database data files are spread across all tiers, some of the isolation advantages of configuration 1 will be diminished, and therefore configuration 2 will be more attractive. Making use of FAST VP (or manual storage tiering) for Oracle databases is highly recommended.

Table 1 and Table 2 show the storage and host environments used to deploy and test the two configurations. Note that Symmetrix VMAX 10K arrives with disk groups and data devices preconfigured.

Table 1. Symmetrix VMAX 10K storage environment

Configuration aspect | Description
Storage array | Symmetrix VMAX 10K
Disk Group 1 | 83 x 15k rpm 450 GB FC
Disk Group 2 | 15 x 7,500 rpm 2,000 GB SATA
Disk Group 3 | 4 x 200 GB EFD
FC tier data devices | 162 x 154 GB devices
SATA tier data devices | 64 x 224 GB devices
EFD tier data devices | 8 x 70 GB devices

Table 2. Host Environment

Configuration aspect | Description
Oracle CRS and database version | 11gR2
Linux | Oracle Enterprise Linux 5.3
Multipathing | EMC PowerPath® 5.3 SP1
Host | Dell R900 (4 quad-core)
Volume manager | Oracle ASM

Configuration 1 details

Review of configuration 1

This configuration segregates Oracle data and logs into separate storage disk groups, storage tiers, and RAID protections. Symmetrix data devices from each tier are added to a single thin pool for that tier for ease of manageability. TimeFinder/Clone is used to create backups and gold copies of the database that can be kept for a long time or used for test/dev/reporting, and RecoverPoint, if used locally, allows local CDP with DVR-like recovery of the production database. Table 3 shows the storage configuration used for configuration 1.

Table 3. Configuration 1 details

Thin devices (LUNs) | Assignment | Data devices (pre-configured) | Thin pool binding²
Database: PROD_DB (size: 1 TB; 35 LUNs) | +DATA: 20 x 80 GB thin LUNs (ASM disk group) | FC_Pool: 162 x RAID 5 (3+1) | FC thin pool
 | +REDO: 6 x 3 GB thin LUNs (ASM disk group) | SATA_Pool: 52 x RAID 1 | SATA thin pool
 | +FRA: 5 x 1 GB thin LUNs (ASM disk group) | SATA_Pool: 52 x RAID 1 | SATA thin pool
 | +TEMP: 4 x 5 GB thin LUNs (ASM disk group) | SATA_Pool: 52 x RAID 1 | SATA thin pool

² All thin LUNs were fully allocated at creation, consuming their full capacity in the thin pools to which they were bound.
³ As discussed earlier, RAID 1 and RAID 5 protected thin pools can sustain multiple drive failures and remain fully available, as long as no two drives fail in the same RAID group. RAID 6 protects against two drive failures in the same RAID group.

In this configuration the data files are in the FC RAID-5 thin pool, and the redo logs, temp, and FRA are in the SATA RAID-1 thin pool. The separation protects against the unlikely event of a dual-drive failure in one of the RAID-5 groups causing data loss in the RAID-5 pool³. Since the redo logs are in a separate pool, on separate physical disks, in the case of a catastrophic failure of the FC thin pool the data files could be restored from backup, such as a TimeFinder clone or tape, and the redo logs would still be available to recover the database up to the time of the failure (no loss of committed transactions). We chose RAID 1 for the redo log pool on the SATA tier to reduce the potential for a two-drive failure in the same RAID group, as well as to ensure the highest write performance given the high write rate typical of redo logs. This configuration is recommended when the VMAX 10K is not protected by remote replication such as RecoverPoint, and/or when the database is not spread across all tiers; otherwise configuration 2 is recommended.

Following the steps in Appendix A – Example of storage provisioning steps for configuration 1, configuration 1 was created, and the database thin LUNs were used to create the Oracle database.

Database test of configuration 1

An Oracle database was created based on configuration 1, and an OLTP workload was used to test it. Note that the goal of the test was to assess the base storage layout; the EFD tier was not active during the test. The goal was also not to achieve the highest possible transaction rate (for example, by using a highly cached workload), but rather to drive heavy utilization of the underlying storage for a relative performance comparison. Storage, host, and database statistics were collected during the run. Figure 13 shows the database transaction rate during the test. As the data and logs resided on two separate tiers, the combination provided more physical disks for the workload, achieving a rate of about 7,500 average transactions per minute (TPM).

Figure 13. Configuration 1 transaction rate (transactions per minute over time; FC pool: data; SATA pool: redo logs, FRA)

Table 4 shows the "top 5 timed foreground events" from the Oracle database statistics (AWR) report for the run. The event "db file sequential read" corresponds to small random read I/O and is the biggest component of the workload time (85.21% of DB time). The average read response time of 6 ms is very reasonable for an FC tier under a heavy I/O workload with very little caching.

Table 4. Oracle AWR report for configuration 1

Event | Waits | Time(s) | Avg wait (ms) | % DB time | Wait Class
db file sequential read | 5,962,793 | 33,999 | 6 | 85.21 | User I/O
db file parallel read | 276,081 | 3,292 | 12 | 8.25 | User I/O
DB CPU | – | 1,549 | – | 3.88 | –
db file scattered read | 82,133 | 666 | 8 | 1.67 | User I/O
log file sync | 610,432 | 374 | 1 | 0.94 | Commit

Configuration 2 details

Review of configuration 2

This configuration puts the Oracle data, log, and temp files in the FC thin pool, whereas the FRA is placed in the SATA thin pool together with the TimeFinder clone devices. TimeFinder/Clone is used to create backups and gold copies of the database that can be kept for a long time or used for test/dev/reporting, and RecoverPoint, if used, allows CDP, CRR, or CLR with DVR-like recovery of the production database. Table 5 shows the storage configuration used for configuration 2.

Table 5. Configuration 2 details

Thin devices (LUNs) | Assignment | Data devices (pre-configured) | Thin pool binding⁴
Database: PROD_DB (size: 1 TB; 35 LUNs) | +DATA: 20 x 80 GB thin LUNs (ASM disk group) | FC_Pool: 162 x RAID 5 (3+1) | FC thin pool
 | +REDO: 6 x 3 GB thin LUNs (ASM disk group) | FC_Pool: 162 x RAID 5 (3+1) | FC thin pool
 | +TEMP: 4 x 5 GB thin LUNs (ASM disk group) | FC_Pool: 162 x RAID 5 (3+1) | FC thin pool
 | +FRA: 5 x 1 GB thin LUNs (ASM disk group) | SATA_Pool: 52 x RAID 6 | SATA thin pool

⁴ All thin LUNs were fully allocated at creation, consuming their full capacity in the thin pools to which they were bound.

In this configuration the data, log, and temp files are in the FC RAID-5 thin pool, and the FRA is in the SATA RAID-6 thin pool. Having the database data and log files share a single pool is recommended for overall configuration simplicity, although it is then recommended that the VMAX 10K be protected by remote replication such as RecoverPoint. In this configuration, if the FC thin pool fails due to a catastrophic event, the database can be recovered from the TimeFinder clone devices or a remote replica; however, zero loss of committed transactions can be achieved only if the logs were synchronously replicated remotely.

Database test of configuration 2

An identical Oracle database was created based on configuration 2, and the same OLTP workload was used to test it as with configuration 1. Storage, host, and database statistics were collected during the run. Figure 14 shows the database transaction rate during the test. As the data and logs resided on the same storage tier, the overall number of disks available for the workload was smaller, achieving a rate of about 7,000 average transactions per minute (TPM). While this is slightly lower than configuration 1, if storage tiering (such as FAST VP) had been utilized, these differences due to the total number of physical drives would have been irrelevant, since the database would span multiple tiers and benefit from all spindles in the system.

Figure 14. Configuration 2 transaction rate (transactions per minute over time; FC pool: data, redo logs; SATA pool: FRA, clones)

Table 6 shows the "top 5 timed foreground events" from the Oracle database statistics (AWR) report for the run. The event "db file sequential read" corresponds to small random read I/O and is the biggest component of the workload time (86.75% of DB time). The average read response time of 6 ms is very reasonable for an FC tier under a heavy I/O workload with very little caching.

Table 6. Oracle AWR report for configuration 2

Event | Waits | Time(s) | Avg wait (ms) | % DB time | Wait Class
db file sequential read | 6,161,968 | 34,621 | 6 | 86.75 | User I/O
db file parallel read | 260,994 | 2,782 | 11 | 6.97 | User I/O
DB CPU | – | 1,522 | – | 3.81 | –
db file scattered read | 80,625 | 648 | 8 | 1.62 | User I/O
log file sync | 590,362 | 323 | 1 | 0.81 | Commit

Conclusion

VMAX 10K, with its modular design and industry-standard components, is a highly scalable storage array that can support the workload of one to many applications. It is fully based on Virtual Provisioning and therefore offers high performance and ease of use. Because Symmetrix VMAX 10K comes pre-configured with tiers, RAID protection, and data devices, customers can more easily and quickly create and provision thin LUNs for their applications. To make full use of this ease of deployment, the choice of tiers and RAID protection should be made prior to purchase (although changes can be made later as well). Unisphere for VMAX makes management and provisioning of the entirely thin-provisioned VMAX 10K array very easy. With features like FAST VP, customers can effectively achieve higher performance at lower overall cost. A choice of local and remote replication using TimeFinder and RecoverPoint greatly improves the protection and availability of database environments and reduces recovery time considerably. VMAX 10K thus provides a cost-effective alternative for customers looking for a multi-controller, scalable storage array with the advanced feature set of the Symmetrix family.


Appendixes

Appendix A – Example of storage provisioning steps for configuration 1

The Symmetrix VMAX 10K comes with factory-configured devices using standard RAID protection for the specified disk technology. Thin devices and pools can easily be created from these pre-configured devices and provisioned to the host to suit the requirements of the databases. The Symmetrix VMAX 10K allows very easy provisioning and management using the Unisphere for VMAX graphical user interface. The steps required to provision storage to the Oracle databases are:

(1) Create the thin pool using factory-configured data devices
(2) Create the database thin LUNs of the desired capacities and bind them to the pool
(3) Create Auto-provisioning storage groups, port groups, and initiator groups
(4) Create the masking view to provision the storage to the host

The following sections describe these steps in more detail.

Detailed configuration steps

(1) Create the thin pool using factory-configured data devices

As shown in Table 3, Oracle database PROD_DB uses two different thin pools built on the factory-configured data devices of the FC and SATA storage tiers. The following Unisphere screenshots show the steps to create the thin pool FC_Pool from the Unisphere dashboard. An empty thin pool is created with the specified RAID configuration, storage tier, and initial capacity.

Figure 15. Creation of FC_Pool using the Unisphere dashboard
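The same step can also be scripted with Solutions Enabler SYMCLI instead of Unisphere. The sketch below is illustrative only: the SID, pool name, and device range are examples, and the exact symconfigure syntax may vary with the Solutions Enabler version:

# Create an empty thin pool for the FC tier
symconfigure -sid 123 -cmd "create pool FC_Pool type=thin;" commit

# Add factory-configured data devices to the pool and enable them
symconfigure -sid 123 -cmd "add dev 0100:01A1 to pool FC_Pool type=thin, member_state=ENABLE;" commit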

(2) Create the database thin LUNs of desired capacities and bind them to the pool


PROD_DB requires the creation of multiple ASM disk groups, and the sizes and protection of the devices also differ. The following Unisphere screenshot shows the steps to create thin LUNs of the desired capacity; the process can easily be repeated for the custom sizes required by the database. Here, 20 x 75 GB thin devices are created and bound to FC_Pool. The devices are created as fully allocated thin devices, which can be changed to any amount of preallocated capacity or no preallocation at all. Once a new configuration session is created, select the "config session" tab on the Unisphere menu bar and commit the session to have the new devices created and allocated to the desired capacity. Creation and commit of the configuration operations can be done in sequence, or multiple device configuration tasks can be defined and then committed together.

Figure 16. Creation of thin devices bound to a specific pool
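A hedged SYMCLI equivalent of this step is sketched below; the SID, device count, size, and pool name are examples, and preallocation options vary by Solutions Enabler version:

# Create 20 thin devices (TDEVs) and bind them to FC_Pool in one session
symconfigure -sid 123 -cmd "create dev count=20, size=75 GB, emulation=FBA, config=TDEV, binding to pool=FC_Pool;" commit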

(3) Create Auto-provisioning storage groups, port groups, and initiator groups

Once the thin LUNs are created and bound to the thin pool, the next task is to map and mask the devices to make them visible to the database host. This step assumes that SAN zoning has already been completed successfully. Zoning activates the switch zones that establish connectivity between the host HBA ports and the Symmetrix FA ports, binding the HBA port WWNs to the Symmetrix FA port WWNs. Once the zoning operation succeeds, the host HBAs and Symmetrix FAs appear logged in to the zone, and Unisphere shows them as available for the configuration of Auto-provisioning groups.


Symmetrix Auto-provisioning groups allow automatic mapping and masking of devices by creating dynamic bindings between a host initiator group (a group of HBA port WWNs), a Symmetrix FA port group (a group of FA ports), and a storage group containing the Symmetrix thin LUNs.

(A) Initiator Groups

When the zone configuration is successfully activated, the database host HBA ports log in to the Symmetrix FA ports through the SAN and are displayed as valid initiators in the Unisphere initiator group configuration screen. Identify the HBA port WWNs and add them to create an initiator group. In this case Ora_006_IG is created with three host HBA port WWNs (10:00:00:00:c9:74:69:32, 10:00:00:00:c9:74:7f:28, and 10:00:00:00:c9:74:6c:d6).

Figure 17. Creation of an initiator group specifying host HBA port WWNs
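For reference, a SYMCLI sketch of the same initiator group creation; the SID is an example, and the WWNs are those listed above:

# Create the initiator group with the first HBA WWN, then add the others
symaccess -sid 123 create -name Ora_006_IG -type initiator -wwn 10000000c9746932
symaccess -sid 123 -name Ora_006_IG -type initiator add -wwn 10000000c9747f28
symaccess -sid 123 -name Ora_006_IG -type initiator add -wwn 10000000c9746cd6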

(B) Port Groups

Create the port group by selecting the Symmetrix FA ports zoned to the host. In this case Symmetrix FA ports 1E:0, 2E:0, 1G:0, and 2G:0 are added to create the port group Or_006_PG.

Figure 18. Creation of a port group specifying Symmetrix FA ports
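An equivalent SYMCLI sketch; the SID is an example, and the director:port names follow the text above:

# Create the port group from the zoned FA ports
symaccess -sid 123 create -name Or_006_PG -type port -dirport 1E:0,2E:0,1G:0,2G:0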


(C) Storage Groups

Create the storage group by adding all the database LUNs for the +DATA, +REDO, +FRA, and +TEMP disk groups to a single storage group. As described earlier, data and logs use separate disk groups and RAID protections, so they belong to different thin pools; nevertheless, all the LUNs can be added to a single storage group for ease of maintenance.

Figure 19. Creation of a storage group specifying thin devices for the Oracle database
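An equivalent SYMCLI sketch; the SID and the device range 0A00:0A22 are hypothetical, standing in for the 35 database thin LUNs:

# Create the storage group containing all the database thin devices
symaccess -sid 123 create -name Ora_006_SG -type storage devs 0A00:0A22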

(4) Create the masking view to provision the storage to the host

Once the initiator group, port group, and storage group are created, a masking view is created; the view automatically performs the necessary mapping and masking commands to make the storage visible to the host.

Figure 20. Creation of a masking view binding the IG, PG, and SG
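An equivalent SYMCLI sketch using the group names from the previous steps; the SID and view name are examples:

# Bind the storage, port, and initiator groups into a masking view;
# this maps and masks the devices to the host in one step
symaccess -sid 123 create view -name Ora_006_MV -sg Ora_006_SG -pg Or_006_PG -ig Ora_006_IG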

At this point the storage provisioning is done, and the thin devices for the Oracle database should be visible to the host.


Appendix B – TimeFinder/Clone configuration steps

This section describes the steps needed to create a TimeFinder/Clone of the database for quick recovery of the production database in the event of logical or physical corruption. The steps required to protect Oracle databases using TimeFinder/Clone are:

(1) Create a Symmetrix device group and populate it with source and target devices
(2) Establish the clone copy with the source
(3) Activate the clone for a point-in-time image

(1) Create Symmetrix device group and populate it with source and target devices

Create a local TimeFinder/Clone REGULAR device group containing the source and target LUNs of the database. The device group is populated with STD and TGT device pairs, from which the TimeFinder/Clone image is created from source to target. The PRODDB source devices are added to the device group with a device type of STD, and the corresponding TimeFinder/Clone target devices with a device type of TGT. From the Data Protection window, click 'Device Groups' to bring up the Create Device Group wizard.

Figure 21. Symmetrix device group creation for TimeFinder/Clone
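A hedged SYMCLI sketch of the same step; the SID and device numbers are hypothetical:

# Create a REGULAR device group, add the source device as an STD and the
# clone target as a TGT (repeat for each device pair)
symdg create PRODDB_dg -type REGULAR
symld -g PRODDB_dg -sid 123 add dev 0A00
symld -g PRODDB_dg add dev 0B00 -tgt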

(2) Establish the clone copy with the source

Create pairings between the source and target devices and create the TimeFinder/Clone copy. A full copy of the source LUN can be generated, or space-efficient clones (using the no-copy option) can be used, in which case the target is updated only when there are changes on the source device.


Figure 22. TimeFinder/Clone creation specifying source and target device pairs

Select the Create action to start creating the TimeFinder/Clone sessions.

Figure 23. TimeFinder/Clone create options on the device pairs

Specify the TimeFinder/Clone options to create a consistent, restartable clone copy. In this case the differential option is chosen, which allows subsequent incremental re-creation of the clone by copying only the changes. The no-copy option creates a space-efficient clone in which the target devices are updated only with changes made on the source devices.

(3) Activate the target to create a point-in-time consistent/restartable copy

The TimeFinder/Clone copy can be activated using a consistent split to create a point-in-time restartable copy. This copy can also be used in conjunction with the production database archived logs to recover the database to any point in time.
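A hedged SYMCLI sketch of steps (2) and (3), assuming the device group created in step (1); the flags shown are the commonly documented ones:

# Create differential clone sessions, pairing each STD with a TGT device
symclone -g PRODDB_dg create -tgt -copy -differential

# Activate all sessions together with Enginuity consistency to obtain a
# restartable point-in-time image
symclone -g PRODDB_dg activate -tgt -consistent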


Figure 24. Activating TimeFinder/Clone to create a consistent copy of the source database