
© Copyright IBM Corporation, 2009. All Rights Reserved. All trademarks or registered trademarks mentioned herein are the property of their respective holders

IBM PowerVM Live Partition Mobility (LPM)

Experiences Testing Oracle RAC 11gR2 with Live Partition Migration

Peter Mooshammer IBM Systems & Technology Group

March 21, 2012

In collaboration with the IBM Oracle International Competency Center


Experiences Testing Oracle RAC 11gR2 with Live Partition Migration http://www.ibm.com/support/techdocs © Copyright 2012, IBM Corporation

Table of contents

Abstract
Introduction
Prerequisites
Executive overview
System Setup and Certification
    System Setup
    Test Description – Destructive Tests
    Test Description – Stress Tests
    Using Command Line Utilities for Migration
    Migration Recovery
Performance Assessments
    Active Instance vs. Inactive Instance Migration
    Active Instance Migration under Load
    Large Scale Live Partition Migration
    Workload Considerations
Managing a RAC Database based LPM operation
Summary
Resources
About the author
Acknowledgements
Appendix 1: List of common abbreviations and acronyms
Trademarks and special notices


Abstract

This paper documents the findings of a certification of Oracle RAC in conjunction with Live Partition Mobility (LPM). The objective of this certification was to validate the compatibility of LPM with the Oracle 11g RDBMS, ASM and Oracle Clusterware components running on an IBM Power™ Systems server. This paper introduces the concepts used to test the stability of such a solution and discusses system performance during a migration.

Introduction

IBM offers Live Partition Mobility on its Power Systems line of servers. It allows the transfer of a logical partition, in the form of a virtual client, from one system to another. Two cases of migration can be distinguished:

• Inactive migration: a powered-off logical partition is transferred to another system.
• Active migration: the partition is transferred while service is provided, without disruption.

An LPM operation can be initiated with a few simple steps using the HMC GUI. These automated steps cover the validation of the configuration on the source and destination systems and the actual migration. In particular, an active migration promises a seamless relocation of resources without an interruption in service. Typical use cases for LPM are:

• Preventive hardware maintenance
• Hardware upgrade
• Server consolidation that involves reallocation of partitions between servers

Although Oracle RAC offers a solution for the use cases listed above, only Live Partition Mobility allows the migration of a logical partition (LPAR) to another system.

Prerequisites

This paper gives a brief overview of the configuration and settings used in the tests. The operational procedures, however, are covered in detail in the "IBM PowerVM Live Partition Mobility" Redbook, IBM publication number SG24-7460-00. Readers are encouraged to refer to that Redbook to become familiar with Live Partition Mobility (LPM).

In addition, several other IBM white papers (IBM PowerVM Live Partition Mobility and LPM for Oracle RAC concepts) are referenced.

Executive overview

Oracle has certified non-RAC LPM with Oracle Database 10g Release 2 on AIX® 5.3 and 6.1. Newer versions were also certified recently for single-instance databases. The increased availability and flexibility of LPM with a single-instance Oracle database is clear: only LPM allows access to the database while migrating the running partition containing the database to another system. However, with LPM and RAC, two potentially overlapping concepts are being used. Oracle RAC, with its highly available framework, provides the means to ensure uninterrupted access to the database even when one of the instances is shut down. So the advantages of LPM in conjunction with RAC may be limited, as discussed in the IBM white paper LPM for Oracle RAC concepts.


The purpose of the LPM/RAC Certification was:

• To prove that LPM with RAC is mature and stable in a large-scale environment (this was proven with partitions of up to 256 GB of main memory and up to 16 cores).

• To provide customers with usage guidelines. The tests show that it is possible to migrate heavily loaded partitions successfully. However, the findings also point to some of the performance trade-offs: during a migration, a performance penalty can occur across the Oracle RAC cluster. The severity of this penalty depends on the workload, so customers need to determine whether it can be tolerated.

System Setup and Certification

The beginning of this section explains the details of the system configuration being used for the certification. Then each individual test that was part of the certification is described.

Two IBM® Power® 770 (9117-MMB) systems were used for this certification, each configured with 32 cores and 512 GB of memory. An IBM® System Storage™ DS5300 served as shared storage for the Oracle RAC database files and the Oracle Cluster Ready Services (CRS) devices; it also stored the root volumes for the Oracle RAC partitions. A single Hardware Management Console (HMC) controlled the LPM process. The scope of the certification was limited to newer system and HMC firmware levels to take advantage of optimizations added to reduce the amount of time a partition spends in the suspended state. The HMC was running firmware level V7R7.3 SP1, whereas the system (CEC) firmware was at revision level AM730_49. This particular certification covered AIX 6.1 at level TL06 SP09. The VIOS software was kept at level 2.2.0.12-FP-24 SP-02. The tested Oracle RAC and Clusterware versions were 11.2.0.2.0.

System Setup

The certification had two test groups: destructive tests and stability tests. The destructive tests covered an additional set of failure scenarios during an active migration; these complemented the destructive tests that are usually part of any Oracle RAC certification. A setup of 4 partitions, each with 4 cores and 32 GB of main memory, was used for the VIOS clients. The VIOSs (including the Mover Service Partitions (MSP)) were limited to 0.5 cores and 2 GB in order to allow for a longer migration period (see details below). A dedicated 1 Gigabit Ethernet connection between the MSPs provided reliable and consistent bandwidth.

The stability tests consisted of three 50-hour tests of various workload scenarios while the size of the migrating partition was increased considerably. A total of 256 GB of main memory and 16 cores were allotted to the migrating virtual client. To adjust the LPM infrastructure to the higher workload requirements, the MSP was expanded to 2 physical cores and 4 GB of memory. Alternatively, the MSP can be set up uncapped, as described in IBM PowerVM Live Partition Mobility. A dedicated 10 Gbit Ethernet connection matched the higher requirements for the data transport, and jumbo frames were enabled.


Access to shared storage was implemented via SAN using NPIV (see the summary titled NPIV and the IBM Virtual I/O Server (VIOS)). As recommended by IBM for Oracle RAC installations, a dual-VIOS configuration was set up for HA purposes, making the virtual client networks and SAN connections highly available.

Figure 1: Hardware Setup for ORACLE RAC/LPM Certification

Test Description – Destructive Tests

The tests fell into two groups:

• Destructive testing

• Stability testing

The first group of tests verifies that an induced failure is recovered through the expected code paths in an acceptable amount of time. As a result of these tests, the best practices provided in this document were confirmed. The tests concentrated on fault induction during migrations, mostly targeting components that are relevant while a migration is in flight. In all these tests, high memory utilization was emphasized.


Test scenario: Target server failure during the pre-suspension phase of the partition migration
Test procedure:
1. Start the workloads and the artificial memory load
2. Let the workloads run stable for 20 minutes
3. Start the migration process
4. Power down / shut down the TARGET server ungracefully BEFORE the migration suspends the LPAR
Expected test outcome and recovery actions: A failure of the target server does not affect the operation of the partition on the source server. The migration is aborted and needs to be recovered. (Any existing duplicate profile on the source server needs to be deleted.)

Test scenario: Target server failure during the suspension of a partition migration
Test procedure:
1. Start the workloads and the artificial memory load
2. Let the workloads run stable for 20 minutes
3. Start the migration process
4. Power down / shut down the TARGET server ungracefully WHILE the migration suspends the LPAR
Expected test outcome and recovery actions: A failure of the target server does not affect the operation of the partition on the source server. The migration is aborted and needs to be recovered. (Any existing duplicate profile on the source server needs to be deleted.)

Test scenario: Source server failure during the post-suspend phase of a partition migration
Test procedure:
1. Start the workloads and the artificial memory load
2. Let the workloads run stable for 20 minutes
3. Start the migration process
4. Power down / shut down the SOURCE server ungracefully AFTER the migration has suspended the LPAR on the SOURCE and restarted it on the TARGET
Expected test outcome and recovery actions: The migration will be aborted and the partition will be rebooted. The partition needs to be recovered via the HMC on the target server. (Any existing duplicate profile on the source server needs to be deleted.)

Test scenario: Network failure during the pre-suspension phase of a partition migration
Test procedure:
1. Start the workloads and the artificial memory load
2. Let the workloads run stable for 20 minutes
3. Start the migration process
4. Fail the network between the VIO servers BEFORE the migration suspends the LPAR
Expected test outcome and recovery actions: This scenario has no impact on the partition running on the source server. The migration, however, is aborted and needs to be recovered.

Test scenario: Network failure during the suspension phase of a partition migration
Test procedure:
1. Start the workloads and the artificial memory load
2. Let the workloads run stable for 20 minutes
3. Start the migration process
4. Fail the network between the VIO servers WHILE the migration suspends the LPAR
Expected test outcome and recovery actions: The partition will fail and reboot on the source server. The migration will be aborted and needs to be recovered via the HMC.

Test scenario: Network failure during the post-suspend phase of a partition migration
Test procedure:
1. Start the workloads and the artificial memory load
2. Let the workloads run stable for 20 minutes
3. Start the migration process
4. Fail the network between the VIO servers AFTER the migration has suspended the LPAR on the SOURCE and restarted it on the TARGET
Expected test outcome and recovery actions: The partition will fail and reboot on the target server. The migration will be aborted and needs to be recovered via the HMC.

Test scenario: Storage failure on the target server before the suspension of a partition migration
Test procedure:
1. Start the workloads and the artificial memory load
2. Let the workloads run stable for 20 minutes
3. Start the migration process
4. Fail the storage on the TARGET server BEFORE the migration suspends the LPAR

Test scenario: Storage failure on the target server after the suspension of a partition migration
Test procedure:
1. Start the workloads and the artificial memory load
2. Let the workloads run stable for 20 minutes
3. Start the migration process
4. Fail the storage on the TARGET server AFTER the migration has suspended the LPAR and restarted it on the TARGET
Expected test outcome and recovery actions (both storage scenarios): The storage failure will not interrupt the migration in progress. The partition will be evicted by Oracle Clusterware. Depending on the severity of the storage failure, the partition will not restart until the boot disk becomes available again. The migration itself will succeed and does not need to be recovered.

Test scenario: Failure of a non-migrating partition during migration
Test procedure:
1. Hard reset one of the non-migrating partitions
2. When the CSSD on the migrating partition reports 90% of misscount reached for the failed partition, start the migration process. This will cause a cluster reconfiguration and database instance recovery during the partition migration
Expected test outcome and recovery actions: Only the partition that was reset is affected. The migration continues while the surviving Oracle instances perform instance recovery. No need to recover the migration.

Test scenario: CSS failure causing cluster reconfiguration followed by instance recovery during partition migration
Test procedure:
1. Kill the CSS master on a non-migrating partition
2. When the CSSD on the migrating partition reports 90% of misscount reached for the failed partition, start the migration process. This will cause a cluster reconfiguration and database instance recovery during the partition migration
Expected test outcome and recovery actions: The partition with the failed CSS master will reboot. The migration is not affected. No need to recover the migration.

Test scenario: Single instance recovery during migration
Test procedure:
1. Start the partition migration
2. Immediately after the start of the partition migration, shutdown abort a single instance on a non-migrating partition
Expected test outcome and recovery actions: No partition failure, and the migration is not affected. The surviving instances recover the database. No need to recover the migration.

Test scenario: Multi-instance recovery during migration
Test procedure:
1. Start the partition migration
2. Immediately after the start of the partition migration, shutdown abort all instances on non-migrating partitions
Expected test outcome and recovery actions: No partition failure, and the migration is not affected. The surviving instance recovers the database. No need to recover the migration.


The destructive tests showed that in all test scenarios the Oracle database service continued to work as expected, and an ongoing migration did not lead to disruptions of the service. As in any case of instance failure, some amount of brown-out time and performance impact must be tolerated due to recovery operations by the surviving Oracle instances. All aborted migrations were recoverable (see section Migration Recovery).

Test Description – Stress Tests

This part of the certification was meant to prove the stability of the system during continuous migrations. Various load scenarios were used to stress the migrating partition as well as all other components involved in the migration.

The general outline of the test setup was described earlier. With regard to Oracle, the cluster was asymmetric, since the migrating node was about 4 to 8 times larger (4x the cores and 8x the main memory) than the 3 non-migrating nodes. To achieve a high level of workload on the larger partition, additional single-instance databases were created. The multi-instance RAC database was constrained to about 30 GB of SGA, while the single-instance database(s) running on the migrating node used up the rest of the 256 GB of main memory.

Depending on the test scenario, the workload was either CPU or I/O bound. An Online Transaction Processing (OLTP) workload guaranteed high CPU usage, whereas a Decision Support System (DSS) workload was used to generate high I/O and memory usage.

Test scenario: Mixed load migrations
Test objective: Create a high, mixed OLTP/DSS workload with an additional synthetic memory workload to increase memory usage. Continuously run a cycle of a migration followed by a 30-minute phase to re-stabilize the workload. Period of testing: approx. 50 h (20-25 migrations).

Test scenario: DSS workload and parallel queries
Test objective: Create a DSS-only workload with an additional synthetic memory workload to increase memory usage; emphasize a high I/O rate. Continuously run a cycle of a migration followed by a 30-minute phase to re-stabilize the workload. Period of testing: approx. 50 h (20-25 migrations).

Test scenario: Database functional test
Test objective: Run a series of Database/ASM/CRS operations in parallel with a migration. Repeat the series for 50 hours while running the databases under workload:
1. Adding a disk to ASM and rebalancing
2. Full parallel backup of the database
3. Adding a datafile, creating a tablespace, dropping a tablespace with contents, removing a datafile from the database
4. Adding/removing services to the database via srvctl
5. Removing a disk from ASM

Expected test outcome (all scenarios): No failures or network timeouts due to partition migration. All database and CRS operations should complete successfully. All LPM operations complete successfully.

The tests do not represent best practices but are meant to prove the robustness of the LPM solution. For example, dropping/adding a disk or running any other database administration operations is not recommended during migrations. A later chapter will discuss the lessons learned regarding the performance implications.

Using Command Line Utilities for Migration

To facilitate the test scripts for the 50-hour stress tests, the following HMC command line utilities were used: migrlpar and lslparmigr. Listing 1 shows an example of the migrlpar command used during testing to initiate the LPM process. Specifically, the "-i" option was used to specify the source and destination interfaces on the mover VIOS used for the data transport.

ssh [email protected] -n migrlpar -o m -m Server-9117-MMB-SN1017D9P \
  -t Server-9117-MMB-SN1017D8P -p prd1117_lrg_vclient_el9-89-162 \
  -i "source_msp_ipaddr=10.10.10.166,dest_msp_ipaddr=10.10.10.161"

Listing 1: Example for Use of Command Line Utility migrlpar via ssh


The table below is a brief excerpt of the migrlpar options as used for the certification. For more information see: http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/topic/p7edm/migrlpar.html.

-o The operation to perform. Valid values are m to validate then migrate a partition if validation succeeds, r to recover from a failed partition migration, s to stop a partition migration, v to validate a partition migration, and set to set attributes related to partition migration operations.

-m The name of the source managed system for the partition migration operation.

-t The name of the target, or destination, managed system for the partition migration operation.

-p The name of the partition for which the partition migration operation is to be performed.

-i This option allows you to enter input data on the command line, instead of using a file. Data entered on the command line must follow the same format as data in a file, and must be enclosed in double quotes.

Table 1: Excerpt migrlpar command line options
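Putting Table 1 together with Listing 1, the stress-test scripts only had to assemble one command string per migration. The sketch below is illustrative and not taken from the certification scripts; the server, partition and MSP names are hypothetical placeholders, and, per Table 1, the -i input data is enclosed in double quotes.

```shell
# Sketch: compose a migrlpar invocation like the one in Listing 1.
# All argument values are hypothetical placeholders.
build_migrlpar_cmd() {
  src_sys=$1
  dst_sys=$2
  lpar=$3
  src_msp_ip=$4
  dst_msp_ip=$5
  # Per Table 1, -i input data must be enclosed in double quotes.
  printf 'migrlpar -o m -m %s -t %s -p %s -i "source_msp_ipaddr=%s,dest_msp_ipaddr=%s"\n' \
    "$src_sys" "$dst_sys" "$lpar" "$src_msp_ip" "$dst_msp_ip"
}

build_migrlpar_cmd Server-A Server-B my_lpar 10.10.10.166 10.10.10.161
```

The resulting string would then be run on the HMC over ssh, exactly as in Listing 1.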

The second command used was lslparmigr, which traces the progress of the LPM process.

ssh [email protected] -n lslparmigr -r lpar -m Server-9117-MMB-SN1017D8P \
  --filter lpar_names=prd1117_lrg_vclient_el9-89-162

name=prd1117_lrg_vclient_el9-89-162,lpar_id=12,
migration_state=Migration Starting,migration_type=active,
source_sys_name=Server-9117-MMB-SN1017D9P,source_lpar_id=12,
source_msp_name=prd1117_vios22_el9-89-166,source_msp_id=9,
dest_msp_name=prd1117_vios12_el9-89-161,dest_msp_id=3,
remote_manager=unavailable,remote_user=unavailable,
bytes_transmitted=5525527671,bytes_remaining=29583110144

Listing 2: Example for Use of Command Line Utility lslparmigr via ssh

The table below is a brief excerpt of the lslparmigr command options as used for the certification. For more information see: http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/topic/p7edm/lslparmigr.html

-r The type of resources for which to list partition migration information.

Specify lpar to list partition migration information for all of the partitions in managed-system.

-m The name of the managed system for which to list partition migration information.

--filter The filter(s) to apply to the resources to be listed. Filters are used to select which resources are to be listed

Valid filter names: lpar_names | lpar_ids Only one of these filters may be specified.

Table 2: Excerpt lslparmigr command line options

The output of the lslparmigr command was used for all the performance assessments in a later chapter. In addition it was used to help debug potential Oracle issues that may have occurred during a migration.
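Because lslparmigr emits a flat list of comma-separated key=value pairs, this kind of progress tracking can be reduced to simple field extraction. The following is a minimal sketch; the helper function and the sample line are illustrative, not taken from the actual test scripts.

```shell
# Extract bytes_transmitted / bytes_remaining from one lslparmigr
# output line (comma-separated key=value pairs, as in Listing 2)
# and print the percentage of memory already transferred.
migration_progress() {
  printf '%s\n' "$1" | awk -F',' '{
    for (i = 1; i <= NF; i++) {
      split($i, kv, "=")
      if (kv[1] == "bytes_transmitted") tx = kv[2]
      if (kv[1] == "bytes_remaining")   rem = kv[2]
    }
    printf "%d\n", (100 * tx) / (tx + rem)
  }'
}

# Illustrative input; in practice this would be the captured
# lslparmigr output for the migrating partition.
line="name=lpar1,migration_state=Migration Starting,bytes_transmitted=5525527671,bytes_remaining=29583110144"
migration_progress "$line"
```

Looping over such calls while a migration runs gives the transfer-rate data used in the performance charts later in this paper.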


A special version of this command was provided to identify the internal Suspending/Suspended states of a migration, to allow for failure induction during the “Suspend” phase. This information is also presented in graphs that are part of the performance discussions. Publicly available firmware will not show the “Suspending” and the “Suspended” phase.

User Viewable State | Internal Migration State | Comments

Not migrating | Invalid | Standard output if no migration was initiated.

Migration Starting | Enabled | This phase starts with the validation of the configuration and continues with the creation of an LPAR on the destination system and the pre-emptive copy of the memory to the newly created LPAR.

(not shown by public firmware) | Suspending/Suspended | The mover service partition instructs the hypervisor on the source system to suspend the mobile partition. All threads running on the mobile partition are quiesced.

Migration in Progress | Resumed | The mobile partition resumes execution on the destination server. Some of its memory pages may have been modified on the source after the migration started, so a valid copy is not yet on the target; these pages have to be demand-paged from the source system.

Table 3: Migration states displayed by the lslparmigr command

In addition to the state information, the lslparmigr command displays the number of bytes already transferred to the destination system and the number of bytes that still need to be moved. Initially, the command will report bytes_transmitted=0, bytes_remaining=34903949312 for a partition with 32 GB of main memory. After the migration has completed, the command may output bytes_transmitted=38562370550, bytes_remaining=0. The difference of 3658421238 bytes (3.6 GB) results from the activity of the software running on the migrating LPAR; it indicates how much of the main memory was "dirtied" during the LPM process. For comparison, on a 32 GB LPAR with only some scripting activity the numbers would be bytes_transmitted=36378691246, bytes_remaining=0. The issue of memory activity is discussed further in the section about performance.
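The dirtied-memory figure above follows from simple subtraction: total bytes transmitted minus the bytes initially remaining (the partition's memory footprint before the migration). A small sketch using the numbers from the text; the helper name is ours, not part of any HMC tooling.

```shell
# Dirtied memory = total bytes transmitted during the migration
# minus the initial bytes_remaining (the partition's memory size
# as reported before the transfer started).
dirtied_bytes() {
  transmitted=$1
  initial_remaining=$2
  echo $((transmitted - initial_remaining))
}

# 32 GB LPAR under Oracle workload, numbers from the text above:
dirtied_bytes 38562370550 34903949312   # 3658421238 bytes, about 3.6 GB
```

The same calculation on the lightly loaded 32 GB LPAR (36378691246 bytes transmitted) shows how much less memory a mostly idle partition dirties during the copy phase.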

Migration Recovery

Recovery was initiated automatically whenever a failure was induced during the destructive testing. When auto recovery was not possible, administrator intervention via the HMC GUI (see Figure 2) was used.


Figure 2: Recovery Option in HMC Window

Administrator intervention was particularly necessary in cases where one of the servers was powered off to simulate a sudden power loss. In these cases, duplicate profiles – one active and one inactive – can exist on both servers after both are up and running again. For more information about recovery, refer to the Redbook IBM PowerVM Live Partition Mobility.

Performance Assessments

In the following chapter, the term performance is not meant in absolute terms of the performance capabilities of the Power systems used in these tests. No optimization work was done at any level – neither the system level nor the I/O subsystem level. For testing purposes, OAST-generated workloads, either OLTP or DSS, were used. This discussion of performance is limited to the impact a migration can have on overall system performance. For general best practices with performance implications for LPM on IBM Power Systems, we suggest consulting the white paper titled Oracle Real Application Clusters on IBM AIX.

Active Instance vs. Inactive Instance Migration

In an earlier IBM white paper, two types of migration were discussed. This section goes into more detail by comparing both options – active instance and inactive instance migration – and their performance impact on the Oracle database cluster as a whole.

For this paper, an active instance migration is defined as a migration in which the Oracle stack is up and running. In contrast, in an inactive instance migration the Oracle Database, ASM and Clusterware software are shut down, but the LPAR is still active. If the Oracle clients allow for reconnection, as was the case for the test setup, both options are viable, functional alternatives.

Figure 3: Active migration of a 32 GB / 4-core LPAR

Figure 3 shows a fairly typical performance chart for an active migration. The OAST driver was configured to run 100 clients, with fewer clients running on the LPAR to be migrated, as can be seen from its higher idle rate (around 40% – red line). For comparison, the green line shows the system idle time (in %) for a non-migrating LPAR that is also part of the Oracle database cluster.

The migration is initiated at around the 5-minute mark. The validation and the pre-emptive memory copy show no visible effect on database performance. At around 10:30 minutes into the migration, the partition enters the suspended phase, which causes a steep increase in system idle time. The suspended phase lasted in this case about 0.66 s, after which the partition resumed on the destination system.

The last phase of the migration, the "Migration in Progress" phase, takes about 40 seconds, during which memory locations modified during the migration have to be demand-copied from the source to the destination system. During this last phase the overall system performance declines. Note that the TPM (transactions per minute) curve seems to lag behind the system idle curves; this is mostly because a per-minute average was used to measure the transaction rate. The transaction rate recovers after two minutes. The impact of the migration on the TPM rate can vary greatly and will be discussed in more detail later in this chapter.
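The duration of this demand-copy phase can be roughly modeled as the amount of dirtied memory divided by an effective transfer rate. In the sketch below, the rate of roughly 116 MB/s is simply back-calculated from one Table 4 run (about 4175 MB copied in about 36 seconds); it is an illustration, not an LPM specification.

```python
# Rough model of the "Migration in Progress" (demand-copy) phase:
# duration ~= dirty memory remaining on the source / effective transfer rate.
# The 116 MB/s default is back-calculated from one Table 4 run and is an
# illustration only, not a documented LPM parameter.

def demand_copy_seconds(dirty_mb: float, rate_mb_per_s: float = 116.0) -> float:
    """Estimate the demand-copy duration for a given amount of dirtied memory."""
    return dirty_mb / rate_mb_per_s

if __name__ == "__main__":
    for dirty in (1634, 3290, 4175):  # MB values taken from Table 4
        print(f"{dirty} MB dirty -> ~{demand_copy_seconds(dirty):.0f} s demand copy")
```

The model is deliberately crude: it ignores pages dirtied again while the copy runs, which is exactly what stretches this phase on busy partitions.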

The complete migration process, from the start until the transaction rate returned to its initial level, lasted about 7:30 minutes on a relatively busy partition.

In the case of an inactive migration, a "transactional shutdown" of the instance running on the partition starts the migration process. Subsequently, existing client connections fail over and connect to the active instances in the database cluster. Following the success of this initial step, the shutdown of the Oracle Clusterware stack, including the ASM instance, is initiated. The next step is the live migration of the partition. After the LPM process has succeeded, the full Oracle software stack is restarted, and client connections may or may not fail back to the restarted instance. Figure 4 shows a typical timeline of such a process.


Figure 4: Migration behavior for a partition with the Oracle software stack shut down

Just as in the case of an active migration, the TPM figures are impacted. This time the biggest drop occurs at the beginning, when the clients need to reconnect to the unaffected database instances. The migration itself takes around 5:30 minutes, with a suspend time of 0.56 seconds and a "Migration in Progress" period of around 20 seconds. Since Oracle was shut down and there was little activity on this partition, all phases of the migration are somewhat shorter than the times measured during an active migration. However, since Oracle needs to shut down first and be started up after the partition resumes on the destination system, the overall process may take longer.
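The point that the end-to-end time of an inactive migration may exceed that of an active one, despite the shorter LPM phase, can be illustrated with a small calculation. Only the 5:30 LPM duration is taken from the run above; the Oracle shutdown and startup durations are hypothetical placeholders, not measured values.

```python
# End-to-end time of an inactive instance migration: the LPM phase itself is
# shorter (~5:30 here vs. ~7:30 in the active case above), but the Oracle
# stack must be stopped first and restarted afterwards.
# NOTE: shutdown/startup durations below are hypothetical, for illustration.

lpm_migration_s = 5 * 60 + 30   # measured in the run described above
oracle_shutdown_s = 3 * 60      # hypothetical: transactional shutdown + CRS stop
oracle_startup_s = 4 * 60       # hypothetical: Clusterware + instance restart

total_inactive_s = oracle_shutdown_s + lpm_migration_s + oracle_startup_s
active_total_s = 7 * 60 + 30    # active migration, start to TPM recovery

print(f"inactive total: {total_inactive_s // 60}:{total_inactive_s % 60:02d} min")
print(f"active total:   {active_total_s // 60}:{active_total_s % 60:02d} min")
```

With even these modest assumed Oracle stop/start times, the inactive path already exceeds the total active migration time.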


Active Instance Migration under Load

One of the goals was to verify the behavior of a migrating partition under load. Figure 5 shows the typical behavior of such a partition; the drop-off in TPM is more apparent here than in Figure 3.

Figure 5: High Workload on migrating partition (OAST workload 3 with 200 users)

It should be noted, however, that the drop in transaction rate can differ greatly from migration to migration. A test series consisted of around 15 migrations at a stable, defined workload, and the graphs shown here represent only one of the runs. The graphs were chosen to show the tendency of the migration behavior with respect to the cluster-wide workload.

Figure 6 represents the behavior of lightly loaded partitions during migration. Even the non-migrating nodes show little load. The impact of a migration on cluster-wide performance can be almost negligible under these circumstances, although, as before, variation from migration to migration can be observed.

Workload | Time Start - In Progress (min) | Black-out Period (sec) | Time In Progress - Complete (min) | Total Time (min) | Dirty Mem transferred in MB | Total Amount of Data in MB
200 User (Avg over 16 Runs) | 0:05:15 | 0.64 | 0:00:36 | 0:05:51 | 4175 | 37462
40 User (Avg over 16 Runs) | 0:04:59 | 0.61 | 0:00:24 | 0:05:23 | 3290 | 36577
100 User, No Oracle SW (Avg over 5 Runs) | 0:04:59 | 0.57 | 0:00:05 | 0:05:03 | 1634 | 34921
40 User DSS Workload (Avg over 10 Runs), high mem usage / medium CPU workload | 0:07:21 | 0.67 | 0:00:52 | 0:08:13 | 20841 | 54128

Table 4: Comparison of different workload scenarios

Table 4 summarizes the test results of various workload runs. In percentage terms, the largest difference observed was in the resume time (the time from In Progress to Complete). This is also the period with the largest influence on performance. The actual workload has less influence on the time from the start of the migration until the partition is suspended. The black-out period is mostly independent of the workload. For further discussion of performance, see the white paper 'IBM Power Systems Live Partition Mobility (LPM) and Oracle DB Single Instance'.

The last row in Table 4 lists the results of tests using a DSS-centric workload with a medium CPU load but a high I/O load and high main memory usage. The amount of "dirtied" memory is approximately five times higher than in the case of the OLTP workload, which significantly increases the time spent migrating the partition.

Figure 6: Low Workload on migrating partition (OAST workload 3 with 40 users)

The time spent on a migration depends on the type and amount of workload and, as will be shown in the next section, on the size of the partition.

Large Scale Live Partition Migration

For the stress-load part of the migration tests, a partition with 256 GB of main memory and 16 cores was actively migrated under load. As described earlier, this resulted in an asymmetric cluster, with the migrating partition far larger than the non-migrating partitions. Figure 7 shows the percent idle time of such a large partition during a migration. The migrating partition hosted two databases running DSS-related workloads: one smaller clustered database and one larger single-instance database. In addition, a small utility program used to create more "dirty" memory ran in parallel. The DSS workload is very I/O centric, mostly I/O reads; however, reads from I/O translate into writes to memory.


Workload | Time Start - In Progress (min) | Black-out Period (sec) | Time In Progress - Complete (min) | Total Time (min) | Dirty Mem transferred in GB | Total Amount of Data in GB
DSS Workload (Avg over 19 Runs) | 0:56:10 | 3.41 | 0:52:13 | 1:48:23 | 145.30 | 405.31

Table 5: Migration timing for a 16-core / 256 GB memory partition running a DSS workload

Whereas in the earlier examples with much smaller partitions the migrations lasted only minutes, a large partition under load takes much more time. As Table 5 shows, a migration in this case lasted around 1 hour and 50 minutes. Even in this case, the absolute suspend time (black-out period) stayed below 4 seconds. This is an important factor for the use of LPM in conjunction with Oracle RAC and will be discussed later in this paper. Although an additional artificial workload driver was used, the data in the table and the graph point out some important issues: a large, heavily loaded partition creates a large amount of "dirtied" memory, which leads to an extended resume period. The average resume time during this part of the certification was around 52 minutes, almost as long as the initial phase of the migration.
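From the Table 5 figures one can derive rough effective transfer rates. The sketch below is back-of-envelope only; the rates are derived for illustration and say nothing about the actual network configuration used by the mover service partitions in the test setup.

```python
# Back-of-envelope transfer rates implied by Table 5 (illustrative only; the
# paper does not state the network configuration used for the migration).

def rate_mb_s(gbytes: float, seconds: int) -> float:
    return gbytes * 1024 / seconds

total_s = 1 * 3600 + 48 * 60 + 23   # 1:48:23 total migration time
resume_s = 52 * 60 + 13             # 0:52:13 resume (demand-copy) phase

overall = rate_mb_s(405.31, total_s)   # all data moved during the migration
resume = rate_mb_s(145.30, resume_s)   # dirty memory re-copied on demand

print(f"overall:      ~{overall:.0f} MB/s")
print(f"resume phase: ~{resume:.0f} MB/s")
```

The lower effective rate during the resume phase is consistent with demand copying: pages are fetched as they are touched rather than streamed sequentially.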

Figure 7: Large partition migration under high DSS workload

Again, during the resume phase of a migration the impact on system performance is biggest. This is because data still stored on the source system must be transferred to the target system before it can be manipulated further. In the case of the workload discussed, the partition shows a substantial increase in system idle time at the beginning of the resume phase, which then declines until it is back at the pre-migration level. The increase in system idle time is an indicator of diminished throughput at the database level. A DSS type of workload combined with the artificial memory workload on a highly stressed partition constituted something of a "worst case" scenario in the certification; it leads to a prolonged resume period.


Any similar workload will cause relatively high performance degradation during the LPM process. It is up to the user to decide whether this is tolerable.

Workload Considerations

Several workload scenarios were discussed in the earlier parts of this white paper. It was shown that under light and medium workloads a partition running Oracle 11gR2 RAC can be actively migrated with little or no impact on overall system performance; with reduced workloads, the cluster-wide performance impact decreases. It was also shown that the resume phase has the biggest impact, as was also shown in an earlier paper (IBM Power Systems Live Partition Mobility (LPM) and Oracle DB Single Instance). For short periods during the resume phase, the performance decrease can, on occasion, be bigger than the performance contribution of a single partition to the overall cluster. This observation still needs some investigation but could be explained by potential buffer contention issues. A DSS type of workload can modify main memory at a higher rate than an OLTP-like workload, leading to longer migrations in general and longer resume times in particular. The time during which the migrating partition was suspended never exceeded 5 seconds during the certification tests; for smaller partitions with 32 GB of main memory and 4 cores, the suspend time stayed below 1 second.

Managing a RAC Database during an LPM Operation

While a partition is actively migrated to another system, Oracle Cluster Ready Services (CRS) continues to run, and CRS imposes timing constraints on the migration. Oracle CRS defines a default heartbeat threshold of 27 seconds for non-responsiveness on the interconnect. If this limit is reached, CRS perceives the partition as failed and will evict the non-responsive partition (node). The default value of this CRS parameter, css_misscount, obviously limits the time a partition can spend in a black-out period without triggering an eviction. Consequently, an LPM suspend (black-out) period should not exceed 27 seconds.

The LPM technology provided with IBM Power Systems ensures that the black-out periods are much shorter and in themselves do not lead to any problems; at least for the tested partition sizes, with up to 256 GB of main memory and 16 cores, black-out periods never exceeded 5 seconds. As noted earlier, the suspend (black-out) period is a function of the workload and the dirty buffer rate on the source partition. Controlling the dirty buffer rate on the source system by redirecting workload to another instance in the cluster greatly reduces the suspend period and decreases the likelihood of evictions during LPM operations.

As was shown earlier in this paper, the performance of a partition can be considerably reduced during the "Resumed" phase of a migration. So it is, in theory, possible that because of this general performance degradation, Oracle Clusterware processes awaiting data still on the source system could experience scheduling latencies, potentially triggering an eviction. During the certification tests, node evictions of this nature were never observed. However, under the heaviest workloads tested, the cluster-wide heartbeat time did indeed reach the 50% mark (see listing below).

2011-10-29 08:21:31.526: [ CSSD][5157]clssnmPollingThread: node rac81 (1) at 50% heartbeat fatal, removal in 14.819 seconds

Listing 3: cssd output showing that the 50% watermark of the css_misscount timer was reached

Partition/node evictions during LPM operations should be rare; however, any eviction of a partition during an LPM operation should be fully investigated with the help of IBM and Oracle.
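The eviction arithmetic above can be sketched as a small check. The 27-second threshold is the value cited in this paper; the actual css_misscount setting should be verified on the cluster in question (for example with Oracle's crsctl utility) before relying on this margin.

```python
# Sketch of the CRS eviction arithmetic described above: a node is evicted if
# it is unresponsive for css_misscount seconds; cssd logs a warning once 50%
# of that budget is consumed. The 27 s value is the threshold cited in this
# paper; verify your cluster's actual setting before relying on it.

CSS_MISSCOUNT_S = 27.0

def heartbeat_status(unresponsive_s: float, misscount_s: float = CSS_MISSCOUNT_S) -> str:
    used = unresponsive_s / misscount_s
    if used >= 1.0:
        return "evicted"
    if used >= 0.5:
        return "warning"  # cssd logs "... at 50% heartbeat fatal"
    return "ok"

# Observed LPM suspend times never exceeded 5 s in these tests:
print(heartbeat_status(5.0))    # well inside the budget
print(heartbeat_status(14.0))   # past the 50% warning mark, as in Listing 3
```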


Another observation during the certification tests was that performing administrative tasks during LPM operations could lead to dropped client connections. In particular, dropping a disk while an I/O-heavy DSS workload was running exacerbated the problem.

Migrating heavily loaded partitions has an adverse effect on the performance of the cluster as a whole and can increase the potential for a node eviction. Preparing and testing migrations is recommended in all cases, but is essential for heavily loaded systems.


Summary

The purpose of this paper was to demonstrate the readiness of LPM for an Oracle RAC based solution stack. This was done by describing the test efforts that went into the certification. The tests, especially the extended 50-hour stress tests in which a substantially sized partition was continuously migrated, did indeed prove just that.

The paper also pointed out some of the boundaries that should be understood before deploying LPM with Oracle RAC. High DSS workloads in particular tend to put greater stress on a migrating partition, leading to performance degradation across the whole cluster. Customers planning to run such configurations are advised to test thoroughly and, if necessary, implement additional steps to improve the stability of the migration.


Resources

These Web sites provide useful references to supplement the information contained in this document:

IBM eServer iSeries [System i] Information Center
http://publib.boulder.ibm.com/iseries/

IBM eServer pSeries [System p] Information Center
http://publib.boulder.ibm.com/infocenter/pseries/index.jsp

IBM Publications Center
www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi?CTY=US

IBM Redbooks
www.redbooks.ibm.com/

IBM PowerVM Live Partition Mobility
http://www.redbooks.ibm.com/abstracts/sg247460.html?Open

Oracle Real Application Clusters on IBM AIX: Best Practices in Memory Tuning and Configuring for System Stability
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101513

IBM Power Systems Live Partition Mobility (LPM) and Oracle DB Single Instance
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101566

Live Partition Mobility for Oracle RAC
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101965

PowerVM Live Partition Mobility available on Oracle DB and AIX
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10686

Oracle Real Application Clusters in Oracle VM Environments
http://www.oracle.com/technetwork/database/clustering/oracle-rac-in-oracle-vm-environment-131948.pdf

NPIV and the IBM Virtual I/O Server (VIOS)
http://www.ibm.com/developerworks/wikis/download/attachments/53871900/NPIV+-+VUG+12-18-2008.pdf?version=1


About the author

Peter Mooshammer is a technical specialist with 12 years of experience with Oracle Databases and clustering concepts on various platforms. Since 2010 he has worked as a contractor in the IBM Oracle System Technology Business Strategy and Enablement organization, where he focuses on virtualization and clustering in AIX and Oracle RAC environments.

Acknowledgements

Thanks to the following people from IBM who reviewed and contributed to this paper:

Dennis Massanari

Nitin Sharma

Wayne T. Martin

Also a special thanks to John McHugh from Oracle for his contributions.


Appendix 1: List of common abbreviations and acronyms

ANSI American National Standards Institute

A private, nonprofit organization whose membership includes private companies, U.S. government agencies, and professional, technical, trade, labor, and consumer organizations. ANSI coordinates the development of voluntary consensus standards in the U.S.

API application programming interface

An interface that allows an application program that is written in a high-level language to use specific data or functions of the operating system or another program.

ASCII American Standard Code for Information Interchange

A standard code used for information exchange among data processing systems, data communication systems, and associated equipment. ASCII uses a coded character set consisting of 7-bit coded characters.

ASM Automatic Storage Management
A feature introduced in Oracle 10g to simplify the storage of Oracle datafiles, controlfiles and logfiles.

CEC Central Electronics Complex
A "module" or "building block" housing CPUs, RAM, the PCI backplane, etc.

CPU central processing unit
The central part of a computer that interprets and executes programming instructions.

CRS Cluster Ready Services
A software stack provided by Oracle that enables compute nodes to form a cluster, so that they can communicate with each other and act as a single logical server.

DB database
A collection of interrelated or independent data items that are stored together to serve one or more applications.

DBA database administrator
A person who is responsible for the design, development, operation, security, maintenance, and use of a database.

DHCP Dynamic Host Configuration Protocol

A communications protocol that is used to centrally manage configuration information. For example, DHCP automatically assigns IP addresses to computers in a network. The Dynamic Host Configuration Protocol is defined by the Internet Engineering Task Force (IETF).

DSS decision support system
A computer-based information system to support businesses and organizations in decision making.

GB gigabyte

For processor storage, real and virtual storage, and channel volume, 2 to the 30th power or 1 073 741 824 bytes. For disk storage capacity and communications volume, 1 000 000 000 bytes.

HMC Hardware Management Console

The IBM Hardware Management Console provides systems administrators a tool for planning, deploying, and managing IBM System p and IBM System i servers.

I/O input/output
Pertaining to a device, process, channel, or communication path involved in data input, data output, or both.

IOA input/output adapter
(1) A functional unit or a part of an I/O controller that connects devices to an I/O processor.

(2) A circuit board containing logic and internal software that bridges an internal processor or memory interconnect scheme and an external, common, standard channel or link.

IOC input/output controller

A functional unit that combines the I/O processor and one or more I/O adapters, and directly connects and controls one or more input or output devices.

IOP input/output processor
A processor dedicated to controlling channels or communication links.

IP Internet Protocol

A protocol that routes data through a network or interconnected networks. This protocol acts as an intermediary between the higher protocol layers and the physical network. See also Transmission Control Protocol.

IPL initial program load
The process that loads the system programs from the system auxiliary storage, checks the system hardware, and prepares the system for user operations.

LAN local area network

A network that connects several devices in a limited area (such as a single building or campus) and that can be connected to a larger network.

LDAP Lightweight Directory Access Protocol

An open protocol that uses TCP/IP to provide access to directories that support an X.500 model and that does not incur the resource requirements of the more complex X.500 Directory Access Protocol (DAP). For example, LDAP can be used to locate people, organizations, and other resources in an Internet or intranet directory.

LPAR logical partition

A subset of a single system that contains resources (processors, memory, and input/output devices). A logical partition operates as an independent system. If hardware requirements are met, multiple logical partitions can exist within a system.

LPM live partition mobility

A feature of IBM POWER6 and POWER7 servers that allows a running LPAR to be relocated from one system to another.

MB Megabyte

For processor storage, real and virtual storage, and channel volume, 2 to the 20th power or 1,048,576 bytes. For disk storage capacity and communications volume, 1 000 000 bytes.

Mb Megabit

For processor storage, real and virtual storage, and channel volume, 2 to the 20th power or 1 048 576 bits. For disk storage capacity and communications volume, 1 000 000 bits.

MSP Mover Service Partition

During active Partition Mobility, the mover service partitions transfer the mobile partition from the source server to the destination server. A MSP is a VIOS enabled for this specific task.

NFS Network File System

A protocol, developed by Sun Microsystems, Incorporated, that allows a computer to access files over a network as if they were on its local disks.

NIC network interface controller
Hardware that provides the interface control between system main storage and external high-speed link (HSL) ports.

NPIV N_Port ID Virtualization
A Fibre Channel facility allowing multiple N_Port IDs to share a single physical N_Port.

OAST Oracle Automated Stress Test
An Oracle workload driver.

OLTP online transaction processing
The processing of transactions by computers in real time.

OS operating system
A collection of system programs that control the overall operation of a computer system.

PTF program temporary fix

For IBM System i, IBM System p, and IBM System z products, a fix that is tested by IBM and is made available to all customers.

RAID Redundant Array of Independent Disks

A collection of two or more disk physical drives that present to the host an image of one or more logical disk drives. In the event of a single physical device failure, the data can be read or regenerated from the other disk drives in the array due to data redundancy

RAC Real Application Cluster

An optional Oracle Database product that allows multiple compute units (nodes, or partitions running the RDBMS software) simultaneous access to a single database.

RAM random access memory
Computer memory in which any storage location can be accessed directly.

RDB relational database

A database that can be perceived as a set of tables and manipulated in accordance with the relational model of data. Each database includes a set of system catalog tables that describe the logical and physical structure of the data, a configuration file containing the parameter values allocated for the database, and a recovery log with ongoing transactions and archivable transactions.

RDBMS relational database management system
A collection of hardware and software that organizes and provides access to a relational database.

RISC reduced instruction set computer
A computer that uses a small, simplified set of frequently used instructions for rapid processing.

ROM read-only memory
Memory in which stored data cannot be changed by the user except under special conditions.

SAN storage area network
A dedicated storage network tailored to a specific environment, combining servers, storage products, networking products, software, and services.

SCSI Small Computer System Interface

(1) An ANSI-standard electronic interface that allows personal computers to communicate with peripheral hardware, such as disk drives, tape drives, CD-ROM drives, printers, and scanners faster and more flexibly than previous interfaces.

(2) A standard hardware interface that enables a variety of peripheral devices to communicate with one another.

TCP Transmission Control Protocol

A communication protocol used in the Internet and in any network that follows the Internet Engineering Task Force (IETF) standards for internetwork protocol. TCP provides a reliable host-to-host protocol in packet-switched communication networks and in interconnected systems of such networks. See also Internet Protocol.

TCP/IP Transmission Control Protocol/Internet Protocol

An industry-standard, nonproprietary set of communications protocols that provide reliable end-to-end connections between applications over interconnected networks of different types.

TPM transactions per minute
The number of database transactions a server can perform in one minute.

Telnet Not an acronym

In TCP/IP, an application protocol that allows a user at one site to access a remote system as if the user's display station were locally attached.

VIOS Virtual I/O Server

VPN virtual private network

An extension of a company's intranet over the existing framework of either a public or private network. A VPN ensures that the data that is sent between the two endpoints of its connection remains secure.


Trademarks and special notices

© Copyright. IBM Corporation 1994-2012. All rights reserved.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM and the IBM logo are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both.

Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows, Windows Server, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

AMD and AMD Opteron are trademarks of Advanced Micro Devices, Inc.

Red Hat, the Red Hat "Shadow Man" logo, and all Red Hat-based trademarks and logos are trademarks or registered trademarks of Red Hat, Inc., in the United States and other countries.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

The information provided in this document is distributed “AS IS” without any warranty, either express or implied.

The information in this document may include technical inaccuracies or typographical errors.

All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.