
Oracle Database 11g Release 2 Enterprise Edition using Oracle Real Application Clusters on IBM BladeCenter running Red Hat Enterprise Linux 5 and IBM System Storage DS4800

Betty Lee
IBM Oracle International Competency Center

January 2010

© Copyright IBM Corporation, 2010. All Rights Reserved. All trademarks or registered trademarks mentioned herein are the property of their respective holders.

Table of Contents

Abstract
Prerequisites
Introduction
    Oracle Database 11g Release 2 new features
        High Availability
        Performance and scalability
        Security
        Clustering
        Manageability
    About Oracle Real Application Clusters 11g Release 2
    About IBM BladeCenter
    About IBM System Storage DS4800
Hardware requirements
    Oracle Real Application Clusters requirements
        Server CPU
        Server memory
        Network
        Shared storage
    High availability considerations
Software requirements
    Operating system
    Storage System Manager
    Linux RDAC driver
    Oracle Database 11g Release 2
    Automatic Storage Management Library (ASMLib)
        Library and Tools
        Drivers for kernel 2.6.18-164.el5
Configuring the system environment
    BIOS
    Remote system management
    Installing Linux operating systems
    Installing RDAC driver
Installing Oracle Grid Infrastructure 11.2.0.1
    Pre-Installation tasks
        Configuring kernel parameters
        Creating users and groups
        Setting shell limits for the Oracle software owner
        Setting the Time on Cluster Nodes
        Setting Oracle inventory location
        Setting up network files
        Configuring SSH on all cluster nodes
        Configuring ASMLib
        Creating ASM disks
        Running Cluster Verification Utility (CVU)
    Performing Oracle Clusterware installation and Automatic Storage Management installation
Performing post-installation tasks
Installing Oracle Database 11g Release 2 (11.2.0.1)
    Pre-Installation tasks
        Running Cluster Verification Utility
        Preparing Oracle home and its path
    Performing database installation
    Post-installation tasks
Summary
References
    Oracle documentation
    IBM documentation
    IBM and Oracle Web sites
About the author
Appendix A: Sample configuration
    BladeCenter and DS4800
Appendix B: List of common abbreviations and acronyms
Appendix C: Oracle ASM Configuration Assistant with ACFS (ASMCA)
    Creating Oracle ASM Disk Groups
    Creating Oracle ACFS for Oracle Database home
    Creating ACFS for a General Purpose Filesystem
Appendix D: Trouble-shooting for Oracle Clusterware 11g Release 2
    Failed to meet the prerequisite requirements
        Missing packages
        Insufficient swap space
    Timed out waiting for the CRS stack to start
Trademarks and special notices


Abstract

The purpose of this paper is to assist those who are looking to implement Oracle Real Application Clusters (RAC) on Red Hat Enterprise Linux 5 (RHEL5) running on IBM BladeCenter® servers and IBM System Storage™ products. The information provided herein is based on experiences with test environments at the IBM Oracle International Competency Center and on available documentation from IBM, Oracle, and Red Hat. This paper does not cover setting up the shared disk for Oracle RAC.

Prerequisites

- Good knowledge of Oracle Database
- Knowledge of the Linux® operating system

Introduction

This paper will discuss the necessary steps to prepare for and successfully install Red Hat Enterprise Linux 5 64-bit and Oracle Database 11g Release 2 Enterprise Edition with Oracle Real Application Clusters 64-bit on IBM BladeCenter servers and IBM System Storage disks. The operating system environment described is the 2.6 kernel-based Red Hat Enterprise Linux 5 (RHEL5).

An implementation of Oracle Real Application Clusters 11g Release 2 consists of three main steps:

Planning the hardware for Oracle Real Application Clusters implementation

Configuring the servers and storage disk systems

Installing and configuring the Oracle Clusterware and Oracle RAC database

Oracle Database 11g Release 2 new features

Oracle Database 11g Release 2 introduces many new features, which are described in the Oracle 11g Release 2 documentation available on the Oracle web site. According to the Oracle Database New Features Guide 11g Release 2, the main highlights are as follows:

High Availability

- Corrupted blocks can be repaired automatically on the primary database or on a physical standby database (which must be in real-time query mode).
- As part of the Oracle Cloud Computing offering, databases can be backed up to Amazon S3.
- With a connection to a catalog and an auxiliary database, the RMAN DUPLICATE command can be executed without any connection to the target database.
- Tables with compression are supported in logical standby databases and Oracle LogMiner.
- A primary database can support up to 30 standby databases.


- Whenever the host computer restarts, Oracle Restart automatically restarts the database instance, the ASM instance, the listener, and other components. Oracle Restart is a separate installation from Oracle Database.

Performance and scalability

- Oracle RAC is integrated with Universal Connection Pool (UCP), the new Java™ connection pool. With UCP, Java applications can easily manage connections to an Oracle RAC database. UCP for JDBC enhances performance and stability, and provides connection labeling and harvesting.
- Database Smart Flash Cache is a transparent extension of the database buffer cache using solid state device (SSD) technology. The SSD acts as a Level 2 cache to the SGA (Level 1), and can reduce the amount of disk I/O at a much lower cost than adding the same amount of memory.
- Oracle ASM can migrate a disk group from 512-byte sector drives to 4 KB sector drives.
- Oracle RAC One Node is a new option of Oracle Database 11g Release 2 Enterprise Edition. It can easily be upgraded to a full multi-node Oracle RAC database without downtime or disruption.

Security

- New encryption key management can update the master key associated with transparent data encryption (TDE) encrypted tablespaces.
- A new package for audit data management can clean up audit trail records after backup, and control the size and age of the audit files.

Clustering

- Oracle Universal Installer is integrated with the Cluster Verification Utility (CVU) in the pre-installation steps of an Oracle RAC installation.
- A synchronized system time across the cluster is a requirement for a successful Oracle RAC installation. The Cluster Time Synchronization Service is responsible for synchronizing the system time on all nodes in the cluster.
- The high redundancy option for storing the OCR has increased to 5 copies so as to improve cluster availability.
- The OCR can now be stored in Automatic Storage Management (ASM).
- Oracle Clusterware is installed into a separate home from the Oracle Database home.

Manageability

- Single Client Access Name (SCAN) provides a single name for clients to access an Oracle Database running in a cluster. It provides load balancing and failover of client connections to the database.
- The Clusterware administrator can delegate specific tasks on specific servers to different people based on their roles. This is called role-separated management.
- Patch sets for Oracle Clusterware and Oracle RAC can be applied to the servers as out-of-place upgrades to the Oracle Grid infrastructure without bringing the entire cluster down.


- The new Enterprise Manager GUI can monitor and manage the full lifecycle of Oracle Clusterware resources. It also introduces procedures to scale Oracle Clusterware and Oracle Real Application Clusters up or down easily.
- Complete deinstallation and deconfiguration of Oracle RAC databases and listeners can be done by Database Configuration Assistant (DBCA), Database Upgrade Assistant (DBUA), and Net Configuration Assistant (NETCA).
- Oracle Universal Installer can help to clean up a failed Oracle Clusterware installation by advising you of the places to clean and the steps to change before reattempting the installation. The installation also provides several recovery points, so that once a problem has been fixed you can roll back to the closest recovery point and retry from there.
- A database administrator can limit an Oracle instance's CPU usage by setting the CPU_COUNT initialization parameter. This is called Instance Caging (a brief example follows this list).
- E-mail notifications can be sent to users about any job activity.
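As a minimal sketch of Instance Caging (the plan name and CPU limit shown here are illustrative; a Resource Manager plan must be active for the CPU limit to be enforced):

$ sqlplus / as sysdba
-- Enable a Resource Manager plan; Instance Caging requires one to be active
SQL> ALTER SYSTEM SET resource_manager_plan = 'default_plan' SCOPE=BOTH;
-- Cap this instance at 4 CPUs
SQL> ALTER SYSTEM SET cpu_count = 4 SCOPE=BOTH;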

For more information on Oracle Database 11g Release 2 new features, please refer to Oracle Database New Features Guide 11g Release 2 (11.2) E10881-03.

About Oracle Real Application Clusters 11g Release 2

Oracle Real Application Clusters (RAC) is an option of Oracle Database that allows a database to be installed across multiple servers. According to Oracle, RAC uses the shared disk method of clustering databases: Oracle processes running on each node access the same data residing on shared disk storage. First introduced with Oracle Database 9i, RAC provides high availability and flexible scalability. If one of the clustered nodes fails, Oracle continues processing on the other nodes. If additional capacity is needed, nodes can be added without taking down the cluster.

In Oracle Database 11g Release 2, Oracle provides Oracle Clusterware, which is designed specifically for Oracle RAC. You do not need a third-party clusterware product to implement Oracle RAC. Since storage is shared, the file system and volume management must be cluster-aware.

Starting with Oracle Database 11g Release 2, Oracle Clusterware files can be stored in Oracle ASM. Oracle Clusterware and Oracle ASM are installed into a single home directory called grid home.

For further information on Oracle RAC, please refer to this web site:

http://www.oracle.com/technology/products/database/clustering/index.html

About IBM BladeCenter

The unique IBM BladeCenter design addresses today's customers' most serious issues: space constraints, efficient manageability, resiliency, and the physical environment, which includes cooling and power. IBM BladeCenter servers take less time to install, require fewer resources to manage and maintain, and cost less than traditional multi-server solutions. These blade servers are so compact and easy to use that customers can increase system capacity by simply sliding an additional blade into the integrated chassis; IBM Director can then auto-configure it, making it ready to use. Since the blades share a common, integrated infrastructure that provides basic components such as power, system ports and fans, power consumption and system complexity are reduced. The BladeCenter H chassis, shown in Figure 1, is one of the models of the BladeCenter chassis.

Figure 1: BladeCenter H Chassis

IBM offers blades with Intel®, AMD Opteron™ or IBM POWER® processors. Depending on the model, an IBM BladeCenter chassis can hold from 6 to 14 two-socket blades.

Figure 2: IBM Blade HS22

Figure 3: IBM Blade LS42

The IBM HS22 blade is a 2-socket quad-core Intel Xeon® 5500 series processor blade, with up to 2.93 GHz processors. It supports up to 96 GB of memory with 12 VLP DDR-3 memory DIMMs. The HS22 can run applications two times faster than the previous generation blades. The IBM HS21 and HS22 are ideal for collaboration, running Citrix, Linux clusters and compute-centric applications.

IBM offers two AMD Opteron processor based blades – the LS22 (two socket) and LS42 (four socket). The new LS22 and LS42 are enterprise-class, high performance computing blades. The LS22 is ideal for memory-intensive applications including research, modeling and simulation, while LS42 is ideal for high performance computing, virtualization, consolidation and database applications. A picture of the LS42 is shown in Figure 3.

For more information about the IBM BladeCenter platform, please refer to the following web site:

http://www-03.ibm.com/systems/bladecenter/intel-based.html


For the latest information on compatibility of IBM BladeCenter hardware, applications and middleware, please visit:

http://www-03.ibm.com/servers/eserver/serverproven/compat/us

About IBM System Storage DS4800

The IBM System Storage DS3000 and DS4000® families are designed to provide fast, reliable and efficient networked storage. They are easy to deploy and flexible for use with IBM System x® and BladeCenter servers.

The DS4800 is affordable for small and medium businesses and scalable, supporting up to 224 Fibre Channel (FC) or SATA drives. It supports multiple RAID levels (0, 1, 3, 5, 10), and its components can be replaced without stopping the DS4800. It can also provide up to 1,724 MBps of sustained application throughput through its eight channels.

Figure 4: IBM System Storage DS4800

As shown in Figure 4, the IBM System Storage DS4800 has a 2U rack-mount enclosure with 12 easily accessible drive bays. It supports dual-ported, hot-swappable SAS disks at 10,000 and 15,000 rpm speeds, and is scalable to 3.6 TB of storage capacity with 300 GB hot-swappable SAS disks.

For further information on IBM System Storage DS4800, please refer to the following web site:

http://www-03.ibm.com/systems/storage/disk/ds4000/ds4800/index.html

For information on interoperability matrix for IBM System Storage DS4000, please visit:

http://www-03.ibm.com/systems/storage/disk/ds4000/interop-matrix.html

Hardware requirements

Oracle Real Application Clusters requirements

An Oracle Real Application Clusters database environment consists of the following components:

1. Cluster nodes - 2 to n nodes or hosts, running Oracle Database server(s)
2. Network interconnect - a private network used for cluster communications and cache fusion
3. Shared storage - used to hold database system and data files, accessed by the cluster nodes
4. Production network - used by clients and application servers to access the database


Figure 5 below is an architecture diagram for Oracle Real Application Clusters:

Figure 5: Oracle Real Application Clusters architecture (users and application servers on the production network; cluster nodes sharing cache through Oracle Cache Fusion over a high-speed interconnect; shared storage attached through a SAN fabric)

For more information on Oracle Real Application Clusters, please visit http://www.oracle.com/technology/products/database/clustering/index.html.

For more information on technology supported by Oracle with Oracle Real Application Clusters, please visit http://www.oracle.com/technology/products/database/clustering/certify/tech_generic_linux_new.html.

Server CPU

There should be enough server CPU capacity, in terms of speed and number of CPUs, to handle the workload. Generally speaking, there should be enough CPU capacity to keep average CPU utilization at about 65%. This allows the server to absorb peak activity more easily.

Server memory

An Oracle Database may require a lot of memory, depending on the activity level of users and the nature of the workload. As a rule of thumb, the server should have more memory than it actually uses, because performance will be greatly degraded and heavy disk swapping may occur when there is insufficient memory.

It is important to select servers that are available with the amount of memory required plus room for growth. Memory utilization should be at most around 75-85% of the physical memory in a production environment. Otherwise, heavy disk swapping may occur and server performance will decrease.


Network

Servers in an Oracle Real Application Clusters environment need at least two separate networks, a public network and a private network. The public network is used for communication between the clients or application servers and the database. The private network, sometimes referred to as the "network interconnect", is used for cluster node communication: it carries the heartbeat of the cluster and is used by Oracle Real Application Clusters for Cache Fusion.

InfiniBand networking is supported with Oracle Database 11g.

Shared storage

Shared storage for Oracle Real Application Clusters can be logical drives or LUNs from a Storage Area Network (SAN) controller, or a Network File System (NFS) from a supported Network Attached Storage (NAS) device. NAS has some advantages, but a SAN is recommended for higher performance.

Please refer to the following IBM web site for more information about IBM NAS offerings such as IBM System Storage N3000, N3700, N5000 and N7000:

http://www-03.ibm.com/systems/storage/nas

For SAN products, IBM offers enterprise disk systems such as the DS6000™ and DS8000®, and mid-range disk systems such as the DS3400, DS4200, DS4700 Express and DS4800. Check to ensure the System Storage product you are using is supported with Oracle Real Application Clusters implementations. Third-party storage subsystems can also be used with BladeCenter servers. Please refer to the third-party documentation or contact a third-party representative for product certification information.

For more information on IBM System Storage product offerings, please visit

http://www-03.ibm.com/systems/storage/disk

For Oracle Real Application Clusters implementation, Oracle Database files may be located on shared storage using the following options:

1. A Certified Cluster file system

It is a file system that may be accessed (read and write) by all members in a cluster at the same time, with all cluster members having the same view of the file system. It allows all nodes in a cluster to access a device concurrently via the standard file system interface. Oracle Cluster File System Release 2 (OCFS2) is an example.

2. Oracle Automated Storage Management (ASM)

ASM is a simplified database storage management and provisioning system that provides file system and volume management capabilities in Oracle. It allows database administrators (DBA) to reference disk groups instead of individual disks and files which ASM manages internally. ASM is included in Oracle Database 11g and is designed to handle Oracle Database files, control files and log files.

In Oracle 11g Release 2, Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is introduced. It is a multi-platform, scalable file system which supports database and application files such as executables, database trace files, database alert logs, application reports, BFILEs, and configuration files. However, it does not support files that can be stored directly in Oracle ASM, nor files for the Oracle grid infrastructure home.

For more information on Oracle ACFS, please refer to Oracle Database Storage Administrator’s Guide 11g Release 2 (11.2), Part Number E10500-02.

High availability considerations

High availability (HA) is a key requirement for many clients. From a hardware configuration standpoint, this means eliminating single points of failure. IBM products are designed for high availability, with such standard features as redundant power supplies and cooling fans, hot-swappable components, and so on.

For high availability environments, the following recommendations should also be taken into consideration when selecting the server:

Configure additional network interfaces and use IP bonding to combine at least two network interfaces for each of the two Oracle RAC networks. This reduces downtime due to a network interface card (NIC) failure or network component failure. Multi-port adapters provide network path redundancy; however, a single multi-port adapter is itself a single point of failure, so redundant multi-port adapters are the best solution. In addition, NICs used for IP bonding should be on separate physical network cards and connected to different network switches. A minimal bonding configuration sketch follows this paragraph.
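As a minimal sketch of IP bonding on RHEL 5 in active-backup mode (the interface names, bonding mode, and addresses are illustrative; adjust them to your environment):

# /etc/modprobe.conf - load the bonding driver for bond0 in active-backup mode
alias bond0 bonding
options bond0 mode=1 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 - the bonded interface
DEVICE=bond0
IPADDR=10.10.10.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth2 - one of the two slave interfaces
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none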

There should be at least two Fibre Channel host bus adapters (HBAs) on each node to provide redundant I/O paths to the storage subsystem. Multi-port HBAs and a Storage Area Network (SAN) with redundant components, such as SAN switches and cabling, will provide higher availability of the servers.

A kernel crash dump utility should be configured on every node in the cluster. When a server crashes with a kernel panic, the kernel dump will be saved; this core dump can then be used to investigate the problem further, which in turn saves problem resolution time.

Finally, an Oracle Real Application Clusters (RAC) implementation requires at least two network interfaces. Ideally, up to five network interfaces are recommended: two for public, two for private, and one for administration and netdump. The more redundancy there is in the hardware architecture and software components, the less downtime databases and applications will experience.

Software requirements

In an Oracle Real Application Clusters implementation, different kinds of software need to be downloaded and installed on the cluster nodes. A few of them are optional; however, it is very beneficial to install and use them in the implementation.

Operating system

Red Hat Enterprise Linux 5 is the operating system used in the tests described in this paper. It can be downloaded from https://www.redhat.com/apps/download.

For the latest information regarding IBM hardware certification by Red Hat, please refer to:


https://hardware.redhat.com

Storage System Manager

IBM System Storage DS4000 Storage Manager is used to manage the DS3200, DS3300 and DS3400 via the graphical user interface. The DS3000 Storage Manager host software is required for managing the DS3200 and DS3400 models with controller firmware version 06.17.xx.xx and the DS3300 model with controller firmware version 06.50.xx.xx.

The DS4000 Storage Manager can be downloaded from the IBM Systems support Web site:

http://www-947.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5082143&brandind=5000028

The IBM DS Storage Manager Software packages are available for AIX®, Microsoft® Windows® (32-bit and 64-bit version), Linux, and other platforms.

Note for IBM System Storage DS4000 users: The DS3000 Storage Manager manages only DS3000 systems. With the DS4000 Storage Manager (version 9.23 or above), you will be able to manage both DS3000 and DS4000 storage systems from the Enterprise Management window.

Linux RDAC driver

The Linux RDAC driver provides redundant failover/failback support for the logical drives in the DS4000 storage subsystem that are mapped to the Linux host server. The Linux host server must have Fibre Channel (FC) or Serial Attached SCSI (SAS) connections to the host ports of both controllers A and B of the storage subsystem. It is provided as an alternative to the Linux FC host bus adapter failover device driver.

The Linux RDAC driver is not included with the DS4000 Storage Manager for Linux; it needs to be downloaded and installed separately for this configuration.

Two different Linux RDAC packages are available: 09.03.0B05.0214 for kernel version 2.4 (RHEL 4, SLES 9 and SLES 10) and 09.03.0C05.0214 for kernel version 2.6 (RHEL 5, SLES 10 SP1, SLES 10 SP2, and SLES 11). Please follow the instructions in the readme files for loading the packages.

To download 09.03.0B05.0214, please follow this link:

http://www-947.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5081900&brandind=5000028

To download 09.03.0C05.0214, please use this link:

http://www-947.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5081901&brandind=5000028

Since the RDAC driver provides multipath and failover functionality, the HBA drivers need to be configured with the non-failover option. The Linux RDAC driver cannot coexist with failover HBA drivers.


Another important consideration is that each HBA in the host server should see only one DS4000 RAID controller; otherwise, the RDAC driver will not work properly. Correct implementation of SAN switch zoning (in the case of Fibre Channel HBAs and the DS4800) will prevent this problem.

Moreover, since the Linux kernel does not detect so-called sparse LUNs, no LUNs after a skipped number will be available to the host server. The order of LUNs assigned through host-to-logical-drive mapping is therefore a very significant consideration when configuring the DS4800: the LUNs assigned to a Linux host must be a contiguous set of numbers, and the access logical drive should be assigned to LUN 31.

Finally, the HBA driver has to be installed successfully and the DS4000 subsystems attached correctly before you install the Linux RDAC driver.

Oracle Database 11g Release 2

Oracle Database 11g Release 2 (11.2.0.1) is the current release of Oracle’s database product and is available on 32-bit and 64-bit Linux platforms as of November, 2009. It is certified on IBM System x with the following operating systems in both 32-bit and 64-bit:

- SuSE Linux Enterprise Server 11 (SLES-11) / SuSE Linux Enterprise Server 10 (SLES-10)
- Red Hat Enterprise Linux AS/ES 5 (RHEL5) / Red Hat Enterprise Linux AS/ES 4 (RHEL4)
- Oracle Enterprise Linux 5 (OEL5) / Oracle Enterprise Linux 4 (OEL4)

For the latest information on Oracle product certification, please visit My Oracle Support web site:

https://support.oracle.com/CSP/ui/flash.html

This software can be downloaded from the Oracle Technology Network (OTN) or the DVDs can be requested from Oracle Support. Oracle RAC is a separately licensed option of Oracle Enterprise and Standard Editions. For additional information on pricing, please refer to:

http://www.oracle.com/corporate/pricing/technology-price-list.pdf

Automatic Storage Management Library (ASMLib)

Automatic Storage Management (ASM) provides volume and cluster file system management where the I/O subsystem is directly handled by the Oracle kernel. Oracle ASM maps each LUN as a disk. Disks are then grouped together into disk groups, and each disk group can be segmented into one or more failure groups. ASM automatically performs load balancing in parallel across all available disk drives to prevent hot spots and maximize performance.

Starting with Oracle Database 11g Release 2, Oracle Clusterware OCR and voting disk files can be stored in Oracle ASM.

There are two methods to configure ASM on Linux, one is ASM with ASMLib and the other is ASM with standard Linux I/O. ASM with ASMLib will be employed to configure ASM on Linux in this paper.

ASMLib is a support library for the Automatic Storage Management feature of Oracle Database 11g and can enable ASM I/O to Linux disks. ASMLib packages can be downloaded from the following web site:


http://www.oracle.com/technology/tech/linux/asmlib/index.html

For Red Hat Enterprise Linux 5 Update 4 64-bit, the following packages need to be installed:

Library and Tools:
    oracleasm-support-2.1.3-1.el5.x86_64.rpm
    oracleasmlib-2.0.4-1.el5.x86_64.rpm

Drivers for kernel 2.6.18-164.el5:
    oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
    oracleasm-2.6.18-164.el5xen-2.0.5-1.el5.x86_64.rpm
    oracleasm-2.6.18-164.el5debug-2.0.5-1.el5.x86_64.rpm
    oracleasm-2.6.18-164.el5debuginfo-2.0.5-1.el5.x86_64.rpm

Configuring the system environment

BIOS

Be sure to upgrade the system BIOS and adapter BIOS to the latest levels. Look up your blade model at http://www.ibm.com/support/us.

Remote system management

On the BladeCenter platform, the Management Module functions as a system-management processor and a keyboard/video/mouse-multiplexing switch for the blade servers. It provides keyboard, video, and mouse ports for a local console and a 10/100 Ethernet port which provides access to the system management processor.

The system management processor communicates with other BladeCenter components, providing functions such as:

- Status monitoring of the blade servers, switch modules, power modules, and blower modules
- Blade server management and control (e.g. power/restart, upgrading firmware, switching the keyboard/video/mouse), in conjunction with the blade server service processors
- Switch module configuration, such as enabling/disabling external ports
- Remote console

Set up the Ethernet ports on BladeCenter Management Module and connect them to your management Local Area Network (LAN). For information and instructions, please refer to IBM Redbook, “IBM eServer xSeries and BladeCenter Server Management”, SG24-6495-00. IBM Redbooks are available at:

http://www.redbooks.ibm.com


Installing Linux operating systems

Installation of the operating systems will not be discussed in detail in this paper. For more details, please refer to the operating system vendor documentation. The instructions for installation of different operating systems for BladeCenter can be found at:

http://www-304.ibm.com/jct01004c/systems/support/supportsite.wss/docdisplay?lndocid=SITE-HELP05&brandind=5000020

Prior to installation, please make note of the following:

Be sure to create sufficient swap space for the amount of physical memory on your servers. Oracle recommends that the amount of swap space equal the amount of RAM (a quick check is shown after Table 1).

It is strongly recommended that every node of the cluster have an identical hardware configuration, although it is not mandatory.

Oracle publishes a minimal set of hardware requirements for each server:

Hardware                          | Minimum        | Recommended
----------------------------------+----------------+------------------------------------------------
Physical memory                   | 1.5 GB         | Depends on applications and usage
CPU                               | 1 CPU per node | 2 or more CPUs per node (a processor type that is certified with Oracle 11g Release 2)
Interconnect network              | 1 Gb           | 2 teamed Gb
External network                  | 100 Mb         | 1 Gb
Backup network                    | 100 Mb         | 1 Gb
HBA or NIC for SAN, iSCSI, or NAS | 1 Gb HBA       | Dual-pathed, storage-vendor-certified HBA
Oracle Database single instance   | 4 GB           | 4 GB or more
Oracle Grid Home (includes the binary files for Oracle Clusterware and Oracle ASM and their associated log files) | 4.5 GB | 5 GB (with sample schemas)
Temporary disk space              | 1 GB           | 1 GB or more (and less than 2 TB)

Table 1: Hardware requirements
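As a quick sanity check of the swap recommendation above (the values shown are illustrative):

[root@blade1 ~]# grep MemTotal /proc/meminfo
MemTotal:      8178892 kB
[root@blade1 ~]# grep SwapTotal /proc/meminfo
SwapTotal:     8385920 kB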

Prior to installation, install the required OS packages; otherwise, Oracle Universal Installer will present the list of packages that you need to install before you can proceed.


The following packages are checked for Oracle Real Application Clusters 11g Release 2 on RHEL 5.4 64-bit when using the Cluster Verification Utility (the version numbers of these packages are the minimum versions required):

Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1"
Package existence check passed for "libaio-0.3.106 (i386)"
Package existence check passed for "libaio-0.3.106 (x86_64)"
Package existence check passed for "glibc-2.5-24 (i686)"
Package existence check passed for "glibc-2.5-24 (x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (i386)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5 (i386)"
Package existence check passed for "glibc-devel-2.5 (x86_64)"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libaio-devel-0.3.106 (i386)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)"
Package existence check passed for "libgcc-4.1.2 (i386)"
Package existence check passed for "libgcc-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-4.1.2 (i386)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11 (i386)"
Package existence check passed for "unixODBC-2.2.11 (x86_64)"
Package existence check passed for "unixODBC-devel-2.2.11 (i386)"
Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)"
Package existence check passed for "ksh-20060214"

Installing RDAC driver

As mentioned in previous section in this paper, the Linux RDAC driver is not included with DS4000 Storage Manager for Linux. Please follow the web site mentioned in the previous section for instructions to install the driver.

After you install the Linux RDAC driver, you need to update /boot/grub/menu.lst with the new MPP driver package. Then, you will need to reboot the server to initialize the MPP driver.
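As an illustration, after the RDAC installation generates its initrd image, the new /boot/grub/menu.lst stanza might look like the following (the kernel version and root device are illustrative and must match your system):

title Red Hat Enterprise Linux Server (2.6.18-164.el5) with MPP support
        root (hd0,0)
        kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /mpp-2.6.18-164.el5.img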

After reboot, the server should be able to recognize the MPP driver and discover the LUNs which have been assigned to the server.

To verify that the Linux RDAC driver has been loaded successfully, execute the following command:

[root@blade1 ~]# lsmod | grep mpp
mppVhba   162400  8
mppUpper  150252  1 mppVhba
scsi_mod  196697  18 ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2,scsi_transport_iscsi2,scsi_dh,mppVhba,usb_storage,qla2xxx,scsi_transport_fc,libata,mptspi,mptscsih,scsi_transport_spi,mppUpper,sg,sd_mod


Verify that the Linux RDAC driver discovered the available physical LUNs and created the virtual LUNs for them by executing the following command:

[root@blade1 ~]# ls -lR /proc/mpp
/proc/mpp:
total 0
dr-xr-xr-x 4 root root 0 Nov 18 16:00 Oracle_ICC_DS4800

/proc/mpp/Oracle_ICC_DS4800:
total 0
dr-xr-xr-x 3 root root 0 Nov 18 16:00 controllerA
dr-xr-xr-x 3 root root 0 Nov 18 16:00 controllerB
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun0
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun1
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun2
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun3
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun4
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun5
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun6
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun7

/proc/mpp/Oracle_ICC_DS4800/controllerA:
total 0
dr-xr-xr-x 2 root root 0 Nov 18 16:00 qla2xxx_h1c0t0

/proc/mpp/Oracle_ICC_DS4800/controllerA/qla2xxx_h1c0t0:
total 0
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN0
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN1
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN2
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN3
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN4
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN5
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN6
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN7
-rw-r--r-- 1 root root 0 Nov 18 16:00 UTM_LUN31

/proc/mpp/Oracle_ICC_DS4800/controllerB:
total 0
dr-xr-xr-x 2 root root 0 Nov 18 16:00 qla2xxx_h2c0t0

/proc/mpp/Oracle_ICC_DS4800/controllerB/qla2xxx_h2c0t0:
total 0
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN0
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN1
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN2
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN3
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN4
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN5
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN6
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN7
-rw-r--r-- 1 root root 0 Nov 18 16:00 UTM_LUN31

After discovering the LUNs, partitions can be created on the appropriate LUNs.
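For example, a single primary partition spanning a LUN can be created with fdisk (the device name is illustrative; verify it against the /proc/mpp listing above before writing a partition table):

[root@blade1 ~]# fdisk /dev/sdb
Command (m for help): n          <- create a new partition
Command action: p                <- primary partition
Partition number (1-4): 1
First cylinder: <Enter>          <- accept the defaults to use the whole LUN
Last cylinder: <Enter>
Command (m for help): w          <- write the partition table and exit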


Installing Oracle Grid Infrastructure 11.2.0.1

Before installing Oracle Grid Infrastructure 11.2.0.1 on both servers, there are several important tasks that need to be done on all of the cluster nodes.

Pre-Installation tasks

Configuring kernel parameters

Edit the /etc/sysctl.conf file to set up the kernel parameters for Oracle Database. If the current values in the file are already higher than the values listed below, you do not need to change them. If these values are not set properly, Oracle Universal Installer will create a fix-up script for you to run during the prerequisite check; this script can be run on the specified nodes to fix any parameter values that do not meet the minimum requirements. Note, however, that range values (such as net.ipv4.ip_local_port_range) must match exactly.

kernel.shmall = 2097152
kernel.shmmax = <1/2 of physical RAM; for example, 2147483648 on a 4 GB RAM system>
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = <512 * PROCESSES; for example, 65536 for 128 processes>
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144

After making these changes, sysctl -p will enforce these values.

If you do not set the kernel parameters correctly before installation, the Oracle installer will create a fixup script (runfixup.sh) that you can run as root when the prerequisite check fails. This script will then update the kernel parameters for you. Note that Oracle recommends that you do not change the contents of the generated fixup script.

Creating users and groups

Two groups need to be created: dba and oinstall. The dba group is used for Oracle Database authentication, and oinstall is the Oracle Inventory group. Please make sure that the group IDs are the same on all cluster nodes. For instance, if the oinstall GID is 502 on node 1, the oinstall GID must be 502 on node 2 and any other nodes in the cluster. This can be accomplished with the groupadd command, as shown below.
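For example (the GID values are illustrative; choose IDs that are free on every node):

# groupadd -g 502 oinstall
# groupadd -g 503 dba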

You can optionally create another user, besides oracle, for grid infrastructure installations in order to separate the administrative privileges from others. For instance, you can create a user ID grid and user ID oracle for Oracle Clusterware and Database installation.

# useradd -u 501 -g oinstall -G dba oracle
# useradd -u 502 -g oinstall grid

As Oracle mentions, you cannot have separate Oracle Clusterware and Oracle ASM installation owners. In this paper, only user oracle has been created for simplification.

Oracle Database 11g Release 2 Enterprise Edition using Oracle Real Application Clusters on IBM BladeCenter running Red Hat Enterprise Linux 5 and IBM System Storage DS4800 http://www.ibm.com/support/techdocs © Copyright 2010, IBM Corporation 15

Page 19: ORA 11gR2 RAC BladeCenter 020510

Create appropriate directories for the oracle and grid installations and assign appropriate ownership to them. Set up a grid infrastructure home directory owned by user oracle and group oinstall. The Oracle grid infrastructure directory cannot be a subdirectory of the Oracle base directory.

# mkdir -p /u01/grid
# mkdir -p /u01/app/oracle
# chown -R oracle:dba /u01/app/oracle
# chown -R oracle:oinstall /u01/grid
# chmod -R 775 /u01/app/oracle
# chmod -R 755 /u01/grid

In Oracle 11g Release 2, there are two separate ORACLE_HOME directories: one home for the Oracle grid infrastructure, and the other for the Oracle Real Application Clusters database. To execute commands like ASMCA for Oracle ASM configuration or DBCA for database configuration, you will need to change the ORACLE_HOME environment variable to the appropriate home, as illustrated below.
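As a minimal sketch, assuming the directories created above (the database home path shown is illustrative):

# Run ASMCA from the grid infrastructure home
export ORACLE_HOME=/u01/grid
$ORACLE_HOME/bin/asmca

# Run DBCA from the Oracle RAC database home
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$ORACLE_HOME/bin/dbca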

Setting shell limits for the Oracle software owner

The file /etc/security/limits.conf needs to be modified to include the following limits for user oracle.

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

In order for these limits to take effect, the /etc/pam.d/login file needs to be edited to include the following line.

session required pam_limits.so

Finally, enable these limits when user "oracle" logs in to the server by adding the following to the login shell profile (for example, /etc/profile).

if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

Setting the Time on Cluster Nodes

In an Oracle RAC environment, the date and time settings on all cluster nodes have to be synchronized either by Oracle Cluster Time Synchronization Service (CTSS) or Network Time Protocol (NTP). If you do not use NTP, Oracle will use CTSS to synchronize the internal clocks of all cluster members.

You can check whether NTP is up and running with the following commands:

[root@blade1 ~]# service ntpd status
ntpd (pid 3589) is running...
[oracle@blade1 grid]$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp.pbx.org     192.5.41.40      2 u   55   64  377  118.027  -11.959   0.916
+ntp.clt.sharefi 198.82.1.203     3 u   41   64  377  116.511    9.709   0.706
+newton.8086.net 209.51.161.238   2 u   48   64  377   99.529  -12.616   1.431


Setting Oracle inventory location

When you install Oracle software on the system for the first time, Oracle creates a file called oraInst.loc under the /etc directory. This file tells Oracle where the Oracle inventory directory is located and the name of the Oracle Inventory group.

inventory_loc=/u01/app/oraInventory
inst_group=oinstall

If a previous inventory directory exists, please make sure that the same Oracle inventory directory is used and that all Oracle software users have write permission to this directory.

Setting up network files

The following network addresses are required for each node:

- Public network address
- Private network address
- Virtual IP (VIP) network address
- Single Client Access Name (SCAN) address for the cluster

The interfaces and IP addresses for both the public and private networks need to be set up. These configurations can be done in Red Hat Enterprise Linux 5 under System => Administration => Network.

After that, add the host names and IP addresses to /etc/hosts as shown in the example below. If the public host names and IP addresses are registered in the Domain Name Server (DNS), they can be excluded. Interconnect (private) host names and IP addresses should always be placed in /etc/hosts.

127.0.0.1       localhost.localdomain localhost
100.58.128.142  blade1.sanmateo.ibm.com blade1
100.58.128.143  blade2.sanmateo.ibm.com blade2
100.58.128.152  blade1-vip.sanmateo.ibm.com blade1-vip
100.58.128.154  blade2-vip.sanmateo.ibm.com blade2-vip
10.10.10.11     blade1-priv.sanmateo.ibm.com blade1-priv
10.10.10.12     blade2-priv.sanmateo.ibm.com blade2-priv

SCAN is a new requirement for Oracle Clusterware installation. It is a domain name that resolves to all of the SCAN addresses (three IP addresses are recommended) allocated for the cluster. The SCAN IP addresses must be on the same subnet as the VIP addresses, and the name must be unique within the corporate network.
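As an illustration (the SCAN name and addresses are hypothetical), the corporate DNS would return the three SCAN addresses in round-robin fashion for a single name:

blade-cluster-scan.sanmateo.ibm.com.  IN A 100.58.128.160
blade-cluster-scan.sanmateo.ibm.com.  IN A 100.58.128.161
blade-cluster-scan.sanmateo.ibm.com.  IN A 100.58.128.162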

Configuring SSH on all cluster nodes

Starting with Oracle 11g Release 2, there is no need to configure SSH manually on the cluster nodes, because Oracle Universal Installer will set it up for you during the grid infrastructure installation.

Configuring ASMLib

Starting with Oracle 11g Release 2, the Oracle Clusterware files (the voting disk and OCR) can be stored in ASM, and Oracle strongly recommends storing them there. However, Oracle Clusterware binaries and files cannot be stored in the Oracle ASM Cluster File System (ACFS). Oracle recommends a minimum of 280 MB for each voting disk and OCR file. The total space required is cumulative and depends on the level of redundancy you choose during the installation.


In this example, the Oracle Clusterware disks will be stored in Oracle ASM, so the Oracle ASM disks need to be created prior to installation. After downloading the three packages mentioned in the ASMLib section, configure ASMLib as follows:

[root@blade1 asmlib]# rpm -Uvh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
> oracleasmlib-2.0.4-1.el5.x86_64.rpm \
> oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.18-164.el5 ######################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]
[root@blade1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface [oracle]:
Default group to own the driver interface [dba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Fix permissions of Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: [  OK  ]
Scanning system for ASM disks: [  OK  ]

Creating ASM disks

Create the ASM disks on either one of the nodes. After that, it is important to reboot all cluster nodes before installing Oracle Clusterware. After the reboot, execute the “oracleasm scandisks” command on the other nodes so that they pick up the newly created ASM disks.

[root@blade1 ]# oracleasm createdisk DATA1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@blade1 ]# oracleasm createdisk DATA2 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@blade1 ]# oracleasm createdisk DATA3 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@blade1 ]# oracleasm createdisk LOG /dev/sde1
Writing disk header: done
Instantiating disk: done
[root@blade1 ]# oracleasm createdisk DISK1 /dev/sdf1
Writing disk header: done
Instantiating disk: done
[root@blade1 ]# oracleasm createdisk DISK2 /dev/sdg1
Writing disk header: done
Instantiating disk: done
[root@blade1 ]# oracleasm createdisk DISK3 /dev/sdh1
Writing disk header: done
Instantiating disk: done
[root@blade1 ]# oracleasm createdisk DISK4 /dev/sdi1
Writing disk header: done
Instantiating disk: done
[root@blade1 ]# oracleasm listdisks
DATA1

DATA2
DATA3
DISK1
DISK2
DISK3
DISK4
LOG
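On the remaining cluster node (blade2 in this example), the disks are not re-created; they only need to be scanned and listed. A spot check along these lines (output abbreviated) confirms that both nodes see the same eight disks:

[root@blade2 ~]# oracleasm scandisks
Scanning system for ASM disks...
[root@blade2 ~]# oracleasm listdisks
DATA1
...
LOG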

Running Cluster Verification Utility (CVU)

The Cluster Verification Utility (CVU) can be used to verify that the systems are ready for the Oracle Clusterware 11g Release 2 installation. The Oracle Universal Installer will use CVU to perform all prerequisite checks during the installation interview. Log in as the oracle user and run the following command:

[oracle@blade1 grid]$ ./runcluvfy.sh stage -pre crsinst -n blade1,blade2

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "blade1"

Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "10.10.10.0" with node(s) blade2,blade1
TCP connectivity check passed for subnet "10.10.10.0"
Node connectivity passed for subnet "9.38.158.128" with node(s) blade2,blade1
TCP connectivity check passed for subnet "9.38.158.128"
Interfaces found on subnet "9.38.158.128" that are likely candidates for VIP are:
blade2 eth1:9.38.158.143
blade1 eth1:9.38.158.142
Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are:
blade2 eth0:10.10.10.12
blade1 eth0:10.10.10.11
Node connectivity check passed

Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "blade2:/tmp"
Free disk space check passed for "blade1:/tmp"
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed

Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1"
Package existence check passed for "libaio-0.3.106 (i386)"
Package existence check passed for "libaio-0.3.106 (x86_64)"
Package existence check passed for "glibc-2.5-24 (i686)"
Package existence check passed for "glibc-2.5-24 (x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (i386)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5 (i386)"
Package existence check passed for "glibc-devel-2.5 (x86_64)"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libaio-devel-0.3.106 (i386)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)"
Package existence check passed for "libgcc-4.1.2 (i386)"
Package existence check passed for "libgcc-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-4.1.2 (i386)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11 (i386)"
Package existence check passed for "unixODBC-2.2.11 (x86_64)"
Package existence check passed for "unixODBC-devel-2.2.11 (i386)"
Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)"
Package existence check passed for "ksh-20060214"
Check for multiple users with UID value 0 passed
Current group ID check passed
Core file name pattern consistency check passed.
User "oracle" is not part of "root" group. Check passed
Default user file creation mask check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...

Liveness check passed for "ntpd"
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
PRVF-5408 : NTP Time Server "192.5.41.40" is common only to the following nodes "blade1"
PRVF-5408 : NTP Time Server "207.171.30.106" is common only to the following nodes "blade2"
PRVF-5408 : NTP Time Server "209.81.9.7" is common only to the following nodes "blade2"
PRVF-5408 : NTP Time Server "209.51.161.238" is common only to the following nodes "blade1"
PRVF-5408 : NTP Time Server "198.82.1.203" is common only to the following nodes "blade1"
PRVF-5408 : NTP Time Server "4.99.128.199" is common only to the following nodes "blade2"
PRVF-5416 : Query of NTP daemon failed on all nodes
Clock synchronization check using Network Time Protocol(NTP) passed

Pre-check for cluster services setup was successful.

Performing the Oracle Clusterware and Automatic Storage Management installation

To install Oracle Clusterware 11g Release 2, the Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1) for Linux x86-64 package needs to be downloaded. After that, unzip linux.x64_11gR2_grid.zip and run the Oracle Universal Installer (OUI) from one node (the local node). For the most part, OUI handles the installation on the other cluster nodes. There are a number of steps that need to be done on the other cluster nodes, and these are called out by OUI at various points during the process.

Running the installation from the system console will require an X Window session, or you can run vncserver on the node and use an X Window session on the workstation to start the OUI.
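A hypothetical VNC-based session might look like the following; the display number and the staging directory are illustrative:

[oracle@blade1 ~]$ vncserver :1
[oracle@blade1 ~]$ export DISPLAY=:1
[oracle@blade1 ~]$ cd /u01/stage/grid
[oracle@blade1 grid]$ ./runInstaller

Then connect a VNC viewer from the workstation to blade1:1 to interact with the OUI.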

1. The first screen will ask you select one of the installation options. In this examp“Install and Configure Grid Infrastructure for a Cluster”.

2. The next screen will ask if this is a typical or advanced installation. We will select typical installation.

3. The next screen asks for the SCAN and the cluster node names and virtual IP addresses. If this is the first installation, enter the OS password for the oracle user and click Setup. Oracle will then set up the SSH connectivity between the listed cluster nodes. After that, you can click Test to make sure that SSH works properly between the nodes. Note: If you choose Advanced Installation on the previous screen, you need to provide more details for the Single Client Access Name (SCAN), such as the SCAN port and IP addresses. The SCAN should be defined in DNS to resolve to three IP addresses. For the Typical Installation, you only need to provide the SCAN itself.

4. The next screen will ask you for the Oracle base and software directories. In this example, all Oracle Clusterware files are going to be stored in ASM. Then, enter the password for SYSASM. Oracle expects the password to conform to specific rules; if it does not, errors will be shown at the bottom of the screen.

5. Since ASM is chosen as the storage type for the Clusterware files, Oracle asks for the names of the ASM disks, and it will create the disk group with the selected ASM disks to store the OCR and voting disks. The number of disks needed for installation depends on the redundancy level you pick: High redundancy requires five disks, Normal redundancy requires three disks, and External redundancy requires one disk. If you do not select enough disks, Oracle will give you errors. The minimum size of each disk is 280 MB.

In this example, Normal redundancy has been chosen.

6. Oracle will run its Cluster Verification Utility to check whether the cluster nodes have met all the prerequisites. If not, it will stop and show you the errors. You can fix the errors and ask Oracle to check again. At the bottom of the screen, you can click More Details, where suggestions on how to fix the errors will be shown.

7. After fixing all the errors and passing the prerequisite tests, Oracle will show the installation summary. You can save the response file for a future silent installation.
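A saved response file can later drive an unattended run of the same installer; a hypothetical invocation (the response file path is illustrative) would be:

[oracle@blade1 grid]$ ./runInstaller -silent -responseFile /u01/stage/grid_install.rsp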

8. This is the screen showing the installation process.

9. After Oracle has installed the binary files on all cluster nodes, it will ask you to run root.sh as the root user. It is very important to run root.sh on the local node first and allow it to complete successfully. Do not run root.sh on the other nodes until root.sh on the local node has completed; otherwise, errors will occur on the other cluster nodes.

This is the output from the local node, which is blade1 in this example:

[root@blade1 grid]# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-11-12 14:32:59: Parsing the host name
2009-11-12 14:32:59: Checking for super user privileges
2009-11-12 14:32:59: User has super user privileges
Using configuration parameter file: /u01/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'blade1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'blade1'
CRS-2676: Start of 'ora.gipcd' on 'blade1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'blade1'
CRS-2676: Start of 'ora.gpnpd' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'blade1'
CRS-2676: Start of 'ora.cssdmonitor' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'blade1'
CRS-2672: Attempting to start 'ora.diskmon' on 'blade1'
CRS-2676: Start of 'ora.diskmon' on 'blade1' succeeded
CRS-2676: Start of 'ora.cssd' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'blade1'
CRS-2676: Start of 'ora.ctssd' on 'blade1' succeeded

ASM created and started successfully.

DiskGroup DISK created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'blade1'
CRS-2676: Start of 'ora.crsd' on 'blade1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 4aa5aa207b704f11bfbc9a9f0eb544ce.
Successful addition of voting disk 2ee17edb66ca4fa7bf9814af4790890d.
Successful addition of voting disk 0735fe5ce71f4f6cbf27dc203f3ba22e.
Successfully replaced voting disk group with +DISK.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                 File Name      Disk group
--  -----    -----------------                 ---------      ----------
 1. ONLINE   4aa5aa207b704f11bfbc9a9f0eb544ce (ORCL:DISK1)    [DISK]
 2. ONLINE   2ee17edb66ca4fa7bf9814af4790890d (ORCL:DISK2)    [DISK]
 3. ONLINE   0735fe5ce71f4f6cbf27dc203f3ba22e (ORCL:DISK3)    [DISK]

Located 3 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'blade1'
CRS-2677: Stop of 'ora.crsd' on 'blade1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'blade1'
CRS-2677: Stop of 'ora.asm' on 'blade1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'blade1'
CRS-2677: Stop of 'ora.ctssd' on 'blade1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'blade1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'blade1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'blade1'
CRS-2677: Stop of 'ora.cssd' on 'blade1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'blade1'
CRS-2677: Stop of 'ora.gpnpd' on 'blade1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'blade1'
CRS-2677: Stop of 'ora.gipcd' on 'blade1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'blade1'
CRS-2677: Stop of 'ora.mdnsd' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'blade1'
CRS-2676: Start of 'ora.mdnsd' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'blade1'
CRS-2676: Start of 'ora.gipcd' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'blade1'
CRS-2676: Start of 'ora.gpnpd' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'blade1'
CRS-2676: Start of 'ora.cssdmonitor' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'blade1'
CRS-2672: Attempting to start 'ora.diskmon' on 'blade1'
CRS-2676: Start of 'ora.diskmon' on 'blade1' succeeded
CRS-2676: Start of 'ora.cssd' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'blade1'
CRS-2676: Start of 'ora.ctssd' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'blade1'
CRS-2676: Start of 'ora.asm' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'blade1'
CRS-2676: Start of 'ora.crsd' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'blade1'
CRS-2676: Start of 'ora.evmd' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'blade1'
CRS-2676: Start of 'ora.asm' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.DISK.dg' on 'blade1'
CRS-2676: Start of 'ora.DISK.dg' on 'blade1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'blade1'
CRS-2676: Start of 'ora.registry.acfs' on 'blade1' succeeded

blade1     2009/11/12 14:39:00     /u01/grid/cdata/blade1/backup_20091112_143900.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 8000 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

This is the output from the second node, which is blade2. It is slightly different from the first node's output, and shorter.

[root@blade2 grid]# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-11-12 14:40:32: Parsing the host name
2009-11-12 14:40:32: Checking for super user privileges
2009-11-12 14:40:32: User has super user privileges
Using configuration parameter file: /u01/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node blade1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'blade2'
CRS-2676: Start of 'ora.mdnsd' on 'blade2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'blade2'
CRS-2676: Start of 'ora.gipcd' on 'blade2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'blade2'
CRS-2676: Start of 'ora.gpnpd' on 'blade2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'blade2'
CRS-2676: Start of 'ora.cssdmonitor' on 'blade2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'blade2'
CRS-2672: Attempting to start 'ora.diskmon' on 'blade2'
CRS-2676: Start of 'ora.diskmon' on 'blade2' succeeded
CRS-2676: Start of 'ora.cssd' on 'blade2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'blade2'
CRS-2676: Start of 'ora.ctssd' on 'blade2' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'blade2'
CRS-2676: Start of 'ora.drivers.acfs' on 'blade2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'blade2'
CRS-2676: Start of 'ora.asm' on 'blade2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'blade2'
CRS-2676: Start of 'ora.crsd' on 'blade2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'blade2'
CRS-2676: Start of 'ora.evmd' on 'blade2' succeeded

blade2     2009/11/12 14:44:31     /u01/grid/cdata/blade2/backup_20091112_144431.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1

Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4048 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

10. After executing root.sh on all cluster nodes, the Oracle OUI will continue to configure Oracle Grid Infrastructure for a Cluster.

11. Oracle will run cluvfy again after the configuration and post any errors on the screen. In this example, the error is about the inconsistent name resolution for the SCAN, which caused the verification of the SCAN VIP and listener setup to fail. According to Metalink Note 887471.1, this error can be ignored because we are not using DNS in our network.

After you press OK and continue, the Oracle grid infrastructure installation is complete.

If there is any failure during the configuration, please check the configuration log file for more details. The configuration log file is located in the Oracle Inventory location.
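The inventory location is recorded in /etc/oraInst.loc. Based on the inventory path shown in the root.sh output above, the most recent configuration logs could be found with, for example:

[oracle@blade1 ~]$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
[oracle@blade1 ~]$ ls -lt /u01/app/oraInventory/logs | head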

Performing post-installation tasks

To confirm Oracle Clusterware is running correctly, use this command:

$CRS_HOME/bin/crsctl status resource -w "TYPE co 'ora'" -t

[oracle@blade1 bin]$ ./crsctl status resource -w "TYPE co 'ora'" -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.DISK.dg
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.LISTENER.lsnr
               ONLINE  ONLINE       blade1

               ONLINE  ONLINE       blade2
ora.LOG.dg
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.asm
               ONLINE  ONLINE       blade1                   Started
               ONLINE  ONLINE       blade2                   Started
ora.data.data_db1.acfs
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.eons
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.gsd
               OFFLINE OFFLINE      blade1
               OFFLINE OFFLINE      blade2
ora.net1.network
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.ons
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.registry.acfs
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       blade1
ora.blade1.vip
      1        ONLINE  ONLINE       blade1
ora.blade2.vip
      1        ONLINE  ONLINE       blade2
ora.oc4j
      1        OFFLINE OFFLINE
ora.orcl.db
      1        ONLINE  ONLINE       blade1                   Open
      2        ONLINE  ONLINE       blade2                   Open
ora.scan1.vip
      1        ONLINE  ONLINE       blade1

Another command, “crsctl check cluster -all”, can also be used to check the cluster.

[root@blade1 logs]# /u01/grid/bin/crsctl check cluster -all
**************************************************************
blade1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
blade2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Finally, the command, “crsctl check crs”, can also be used for a less detailed system check.

[oracle@blade1 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

After the installation of Oracle Clusterware, Oracle recommends backing up the contents of root.sh and emkey.ora for future use. emkey.ora is located in the $ORACLE_HOME/<node_name>_<database_name>/sysman/config directory; in this example, it is under /u01/app/oracle/blade1_orcl/sysman/config. This file contains the encryption key for all Enterprise Manager data.
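A simple copy into a backup directory is sufficient; for example (the backup location is illustrative, the source paths follow this example environment):

[root@blade1 ~]# mkdir -p /u01/backup
[root@blade1 ~]# cp /u01/grid/root.sh /u01/backup/root.sh.grid.blade1
[root@blade1 ~]# cp /u01/app/oracle/blade1_orcl/sysman/config/emkey.ora /u01/backup/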

Installing Oracle Database 11g Release 2 (11.2.0.1)

Pre-Installation tasks

All of the pre-installation tasks for Oracle Database 11g Release 2 are the same as the pre-installation tasks for Oracle Clusterware.

Running Cluster Verification Utility

Cluster Verification Utility (CVU) can be used to verify if the systems are ready to install Oracle Database 11g Release 2 with Oracle RAC.

The command “cluvfy stage -pre dbcfg -n <nodelist> -d $ORACLE_HOME” is used to pre-check the requirements for an Oracle Database with Oracle RAC installation. Log in as the oracle user and run the cluvfy command.

[oracle@blade1 ~]$ cluvfy stage -pre dbcfg -n blade1,blade2 -d /d01/app/oracle/product/11.2.0/dbhome_1

Performing pre-checks for database configuration

Checking node reachability...
Node reachability check passed from node "blade1"

Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "10.10.10.0" with node(s) blade2,blade1
TCP connectivity check passed for subnet "10.10.10.0"
Node connectivity passed for subnet "9.38.158.128" with node(s) blade2,blade1
TCP connectivity check passed for subnet "9.38.158.128"
Interfaces found on subnet "9.38.158.128" that are likely candidates for VIP are:
blade2 eth1:9.38.158.143 eth1:9.38.158.233

blade1 eth1:9.38.158.142 eth1:9.38.158.232 eth1:9.38.158.231
Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are:
blade2 eth0:10.10.10.12
blade1 eth0:10.10.10.11
Node connectivity check passed

Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "blade2:/d01/app/oracle/product/11.2.0/dbhome_1"
Free disk space check passed for "blade1:/d01/app/oracle/product/11.2.0/dbhome_1"
Free disk space check passed for "blade2:/u01/grid"
Free disk space check passed for "blade1:/u01/grid"
Free disk space check passed for "blade2:/tmp"
Free disk space check passed for "blade1:/tmp"
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1.2"
Package existence check passed for "libaio-0.3.106 (i386)"
Package existence check passed for "libaio-0.3.106 (x86_64)"
Package existence check passed for "glibc-2.5-24 (i686)"
Package existence check passed for "glibc-2.5-24 (x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (i386)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5 (i386)"
Package existence check passed for "glibc-devel-2.5 (x86_64)"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libaio-devel-0.3.106 (i386)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)"

Package existence check passed for "libgcc-4.1.2 (i386)" Package existence check passed for "libgcc-4.1.2 (x86_64)" Package existence check passed for "libstdc++-4.1.2 (i386)" Package existence check passed for "libstdc++-4.1.2 (x86_64)" Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)" Package existence check passed for "sysstat-7.0.2" Package existence check passed for "unixODBC-2.2.11 (i386)" Package existence check passed for "unixODBC-2.2.11 (x86_64)" Package existence check passed for "unixODBC-devel-2.2.11 (i386)" Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)" Package existence check passed for "ksh-20060214" Check for multiple users with UID value 0 passed Current group ID check passed Checking CRS integrity... CRS integrity check passed Checking node application existence... Checking existence of VIP node application (required) Check passed. Checking existence of ONS node application (optional) Check passed. Checking existence of GSD node application (optional) Check ignored. Checking existence of EONS node application (optional) Check passed. Checking existence of NETWORK node application (optional) Check passed. Checking time zone consistency... Time zone consistency check passed. Pre-check for database configuration was successful. [oracle@blade1 ~]$ cluvfy stage -pre dbcfg -n blade1,blade2 -d /u01/app/oracle/product/11.1.0.6/db Performing pre-checks for database configuration Checking node reachability... Node reachability check passed from node "blade1". Checking user equivalence... User equivalence check passed for user "oracle". Checking administrative privileges... User existence check passed for "oracle". Group existence check passed for "oinstall". Membership check for user "oracle" in group "oinstall" [as Primary] passed. Group existence check passed for "dba". Membership check for user "oracle" in group "dba" passed. Administrative privileges check passed. Checking node connectivity...

Node connectivity check passed for subnet "10.10.10.0" with node(s) blade2,blade1.
Node connectivity check passed for subnet "9.38.158.128" with node(s) blade2,blade1.
Interfaces found on subnet "10.10.10.0" that are likely candidates for VIP:
blade2 eth0:10.10.10.4
blade1 eth0:10.10.10.3
Interfaces found on subnet "9.38.158.128" that are likely candidates for VIP:
blade2 eth1:9.38.158.143 eth1:9.38.158.154
blade1 eth1:9.38.158.142 eth1:9.38.158.152
WARNING:
Could not find a suitable set of interfaces for the private interconnect.
Node connectivity check passed.

Checking CRS integrity...
Checking daemon liveness...
Liveness check passed for "CRS daemon".
Checking daemon liveness...
Liveness check passed for "CSS daemon".
Checking daemon liveness...
Liveness check passed for "EVM daemon".
Checking CRS health...
CRS health check passed.
CRS integrity check passed.

Pre-check for database configuration was successful.

Preparing Oracle home and its path

Prepare the Oracle home and the path for the database installation. Edit the ~/.bash_profile file to set up an Oracle database environment with variables such as ORACLE_SID and LD_LIBRARY_PATH.

NOTE: The Oracle home path must be different from the Oracle Clusterware home. In other words, Oracle Database 11g Release 2 with Oracle RAC cannot be installed into the same home as the Oracle Clusterware software.
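A minimal ~/.bash_profile for the oracle user on blade1 might contain the following; the paths match the examples used in this paper, while the instance name orcl1 is an assumption (the second instance on blade2 would then be orcl2):

# Oracle RAC database environment for blade1 (illustrative values)
export ORACLE_BASE=/d01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_SID=orcl1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH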

Performing database installation

1. Download and unzip linux.x64_11gR2_database_1of2.zip and linux.x64_11gR2_database_2of2.zip from technet.oracle.com, then go to the database directory and invoke runInstaller.

2. The first screen asks for your email address. You have to provide an email address in order to proceed. If you want to receive security updates from My Oracle Support, you will also need to provide the password of your email address (username) for the My Oracle Support web site.

3. The next screen presents the different installation options. In this example, we will create and configure a database.

4. The next screen asks for the class of the database server. For this example, Server Class will be selected.

5. The next screen asks if you want to install and configure a single instance or Oracle RAC database. In this example, we are going to install the Oracle RAC Database on blade1 and blade2.

6. The next screen asks for the type of installation.

7. The next screen asks for the configuration details of the database installation. The software location must be different from the software location of the grid infrastructure. If the storage type is ASM, an ASM disk group needs to be provided in the “Database file locations” field. If you have not done so, please create ASM disk groups by using the Oracle ASM Configuration Assistant (ASMCA). You can also use the disk group created to store the OCR and voting disks during the grid infrastructure install.

Note: If you plan to use Oracle ASM to store your Oracle RAC database by using the Oracle ASM Configuration Assistant (ASMCA), please refer to Appendix C, “Oracle ASM Configuration Assistant”.

8. This screen performs all the prerequisites checks on all cluster nodes before installation.

9. The next screen shows the installation summary for the database.

10. This screen shows the installation process of the Oracle RAC Database installation.

11. This screen shows the progress of the Oracle Database configuration.

12. This screen shows the completion of the Oracle Database configuration.

13. This is the last step of the database installation process. Execute root.sh as the root user on all cluster nodes, from the software location that you provided previously.

The output is the same on all cluster nodes. This is the output from running root.sh on blade1.

[root@blade1 dbhome_1]# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /d01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

14. This is the end of database installation.

Post-installation tasks

1. Oracle recommends that the root.sh script be backed up after completing the database installation. If the information is needed in the future, the original root.sh script can easily be recovered.

2. After upgrading or creating databases, it is recommended that utlrp.sql be executed to compile or recompile all PL/SQL modules that might be in an invalid state, including packages, procedures and types. This script is located in the $ORACLE_HOME/rdbms/admin directory (see the sketch after this list).

3. Finally, user accounts need to be set up for the database and system. Most of the administrative accounts in the new database are locked except SYS and SYSTEM. They will need to be unlocked if the modules for those administrators are going to be implemented (see the sketch after this list).

4. The port numbers of several Web-based applications, including Oracle Enterprise Manager Database Control, are recorded in $ORACLE_HOME/install/portlist.ini. Make a note of these port numbers for future reference.
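For items 2 and 3 above, a hypothetical SQL*Plus session on one instance might look like this; DBSNMP is used only as an example of a locked account, and the password is a placeholder:

[oracle@blade1 ~]$ sqlplus / as sysdba
SQL> @?/rdbms/admin/utlrp.sql
SQL> ALTER USER dbsnmp ACCOUNT UNLOCK;
SQL> ALTER USER dbsnmp IDENTIFIED BY "new_password";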

Summary

Oracle Database 11g Release 2 offers many new features. Many of them further optimize the performance, scalability and failover mechanisms of Oracle Real Application Clusters (RAC) 11g. It makes Oracle RAC easier to implement and gives you the flexibility to add nodes. Integrated with Oracle Fusion Middleware, Oracle RAC can fail over connections in the connection pools and immediately take appropriate recovery action.

Oracle Database 11g Release 2 will be offered on more platforms in the next few months. The implementation steps are very different from those of Oracle Database 11g Release 1.

One important thing is to make sure that the Oracle Clusterware installation is successful and functional before proceeding to the database installation. This is because the Oracle Clusterware daemons ensure that all managed applications start during system startup and that any failed applications are restarted automatically, maintaining the high availability of the Oracle RAC cluster.

Before proceeding to database creation, ASMLib has to be configured properly and ASM disks have to be created. This will smooth the implementation process.

Last but not least, choosing the hardware, operating system and storage for Oracle RAC 11g Release 2 is a very significant step. Having the right combination of all options will contribute to the success of the Oracle RAC 11g Release 2 installation and implementation on the IBM BladeCenter and IBM System Storage platforms.

References

Oracle documentation

Oracle Database New Features Guide 11g Release 2 (11.2), E10881-03
Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux, E10812-03
Oracle Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux and UNIX, E10813-03
Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2), E10718-04

IBM documentation

Oracle Database 11g R2 Enterprise Edition using Oracle RAC on IBM BladeCenter running Red Hat Enterprise Linux 5 and IBM System Storage DS4800, April 2008, Betty Lee, Document ID: WP101223

IBM and Oracle Web sites

These Web sites provide useful references to supplement the information contained in this document:

IBM BladeCenter
http://www-03.ibm.com/systems/bladecenter/intel-based.html

Compatibility of IBM BladeCenter on hardware, applications and middleware
http://www-03.ibm.com/servers/eserver/serverproven/compat/us/eserver.html

IBM System Storage DS4800
http://www-03.ibm.com/systems/storage/disk/ds4000/ds4800/index.html

IBM Network Attached Storage
http://www-03.ibm.com/systems/storage/network

IBM System Storage Interoperability Matrix
http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes

Linux RDAC Driver
http://www-947.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5081901&brandind=5000028

IBM Redbooks
http://www.redbooks.ibm.com

IBM Techdocs (White Papers)
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/WhitePapers

Oracle Real Application Clusters
http://www.oracle.com/technology/products/database/clustering

Oracle OCFS2
http://oss.oracle.com/projects/ocfs2

Oracle Automatic Storage Management Library (ASMLib)
http://www.oracle.com/technology/software/tech/linux/asmlib

Oracle Database 11g Enterprise Edition
http://www.oracle.com/database/enterprise_edition.html

Oracle Automatic Storage Management (ASM)
http://www.oracle.com/technology/products/database/asm

My Oracle Support (formerly Oracle Metalink)
https://support.oracle.com/CSP/ui/flash.html

About the author

Betty Lee is a Senior IT Specialist with IBM Advanced Technical Support and works in the IBM Oracle International Competency Center based in San Mateo, CA. She provides System x and BladeCenter platform support for projects at the Competency Center and for enablement activities at Oracle Corporation in Redwood Shores, CA.

Appendix A: Sample configuration

BladeCenter and DS4800

The figure below shows the cabling for a BladeCenter and DS4800 configuration. Note that the BladeCenter fibre channel switch modules have been cabled directly to the DS4800 host minihubs. If existing SAN switches are utilized, the fibre channel switches can be run in interoperability mode, or Optical Pass thru Modules (OPM) can be selected for use in the BladeCenter implementation.

[Figure: BladeCenter HS20 blades with eth0 on the cluster interconnect and eth1 on the production network, cabled through the BladeCenter Fibre Channel switch modules to the DS4800]

Appendix B: List of common abbreviations and acronyms

ASM (Automatic Storage Management): A feature of Oracle Database 11g that provides integrated cluster file system and volume management capabilities.

FC (Fibre Channel): A gigabit-speed network technology primarily used for storage networking.

GHz (Gigahertz): A unit used to represent computer processor speed.

HBA (Host bus adapter): Connects a host system to other network and storage devices.

HDD (Hard disk drive): A non-volatile storage device which stores digitally encoded data on rapidly rotating platters with magnetic surfaces.

I/O (Input/output): The communication between an information processing system and the outside world.

iSCSI (Internet Small Computer System Interface): An Internet Protocol (IP)-based storage networking standard for linking data storage facilities, developed by the Internet Engineering Task Force (IETF).

LUN (Logical unit number): A subset of a larger physical disk or disk volume. It can be a single disk drive, or a partition of a single disk drive or disk volume from a RAID controller. It represents a logical abstraction, or virtualization layer, between the physical disk device/volume and the applications.

MB (Megabyte): For processor storage, real and virtual storage, and channel volume, 2 to the 20th power, or 1,048,576 bytes. For disk storage capacity and communications volume, 1,000,000 bytes.

Mb (Megabit): For processor storage, real and virtual storage, and channel volume, 2 to the 20th power, or 1,048,576 bits. For disk storage capacity and communications volume, 1,000,000 bits.

NAS (Network-attached storage): File-level data storage connected to a computer network, providing data access to heterogeneous network clients.

NIC (Network interface controller): Hardware that provides the interface control between system main storage and external high-speed link (HSL) ports.

OCFS (Oracle Cluster File System): A consistent file system image across the servers in a cluster.

OCFS2 (Oracle Cluster File System Release 2): The next generation of the Oracle Cluster File System for Linux. It is a general-purpose file system that can be used for shared Oracle home installations.

OCR (Oracle Cluster Registry): A file that contains information pertaining to instance-to-node mapping, the node list, and resource profiles for customized applications in the Clusterware.

RAC (Real Application Clusters): A cluster database with a shared cache architecture that supports the transparent deployment of a single database across a cluster of servers.

RDAC (Redundant Disk Array Controller): Provides redundant failover/failback support for the logical drives of the storage server.

RHEL5 (Red Hat Enterprise Linux 5): A Linux operating system released in March 2007, based on the Linux 2.6.18 kernel.

SAN (Storage area network): A dedicated storage network tailored to a specific environment, combining servers, storage products, networking products, software, and services.

SAS (Serial Attached SCSI): A communication protocol for direct attached storage (DAS) devices. It uses SCSI commands for interacting with SAS end devices.

SCSI (Small Computer System Interface): An ANSI-standard electronic interface that allows personal computers to communicate with peripheral hardware, such as disk drives, tape drives, CD-ROM drives, printers, and scanners, faster and more flexibly than previous interfaces.

SLES (SUSE Linux Enterprise Server): A Linux distribution supplied by Novell.

Appendix C: Oracle ASM Configuration Assistant with ACFS (ASMCA)

The Oracle ASM Configuration Assistant is a new tool to manage ASM instances, create ASM disk groups, create volumes and create ASM cluster file systems. To invoke asmca, go to the bin directory of the grid infrastructure home.

# /u01/grid/bin/asmca

Creating Oracle ASM Disk Groups

1. Go to the tab “Disk Groups”. In this example, the DISK disk group has already been created for the Oracle Clusterware files using oracleasm.

2. Click the Create button and select the disks for the DATA disk group. For normal redundancy, a minimum of three disks must be selected; for high redundancy, at least five disks must be selected; for external redundancy, only one disk is required.

3. Click OK to complete the disk group creation. It may take a few minutes.

4. An informational window will be shown to confirm the DATA diskgroup creation.

5. Repeat the same procedure to create the LOG disk group. In this example, only one disk is selected for the LOG disk group.

Creating Oracle ACFS for Oracle Database home

In this section, we will create an Oracle ASM Cluster File System (ACFS) on a disk group. This file system can be used and configured for any Oracle Database home, including a shared or non-shared Oracle Database home in an Oracle RAC environment.

Please be aware that Oracle Database data files are not supported on Oracle ACFS.

Please note that this DATA disk group is different from the DATA disk group created in the previous section. This disk group is mainly for storing Oracle Database software files.

1. Press the Create button and enter the volume name, the mount point that you have already created, the size, and the owner name and group.

2. It will take a few minutes to create the ACFS for the volume on the diskgroup.

3. During this process, Oracle will ask you to run acfs_script.sh on the local node.

4. After that, it will show you the mount point, status, volume device name, the size and the volume name of the newly-created ACFS.

Creating ACFS for a General Purpose Filesystem

1. Go to the tab Disk Groups and press the “Create” button.

2. Select General Purpose File System and enter the mount point that has already been created on the server.

3. Select Volume and choose “Create Volume”. Enter the volume name, the disk group on which it will be created, and the size.

4. After a few moments, the volume named data_log is created.

5. Once the volume has been created, the general purpose file system can be created on it.

6. A pop-up window shows that the Oracle ACFS for /dev/asm/data_log-360 has been created on the mount point /d01/logs. The final screen shows all the information about this newly created ACFS. The manual equivalent, including registering the file system for automatic mounting, is sketched below.
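For reference, the manual equivalent of this example is sketched below, using the volume device and mount point shown above. The acfsutil registry step is what makes Oracle Clusterware mount the file system automatically on every cluster node:

# /sbin/mkfs -t acfs /dev/asm/data_log-360
# /sbin/acfsutil registry -a /dev/asm/data_log-360 /d01/logs
# /bin/mount -t acfs /dev/asm/data_log-360 /d01/logs
# df -h /d01/logs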

Appendix D: Troubleshooting for Oracle Clusterware 11g Release 2

Careful preparation is very important for a successful installation of Oracle Clusterware 11g Release 2. In this new release, Oracle has greatly improved the content of the installer's dialog windows, which now report the cause of a problem along with suggested actions to correct it.

Below are some of the problems that you may encounter during the installation of Oracle Clusterware 11g Release 2.

Failed to meet the prerequisite requirements

Figure 6: Setting up Grid Infrastructure - Step 7 of 10

Missing packages

According to Oracle, if you are not going to use ODBC functions, you do not need to download the unixODBC-devel packages. The libaio-devel packages, which are not included in RHEL 5.4, need to be downloaded from other web sites.
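Before downloading anything, you can check which of the flagged packages are already installed by using rpm; a quick sketch, where the exact package list depends on what the prerequisite check reports on your system:

# rpm -q libaio-devel unixODBC unixODBC-devel
# rpm -ivh libaio-devel-*.rpm

The second command installs a downloaded package file from the current directory.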

Insufficient swap space

Oracle checks whether the server has sufficient swap space by comparing it with the available memory. If your server has a large amount of memory, Oracle will suggest that you configure more swap space. Nevertheless, you can ignore this requirement by checking the ignore box.
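If you prefer to satisfy the check instead of ignoring it, swap space can be inspected and extended with standard Linux commands; a minimal sketch, where the /swapfile path and the 2 GB size are illustrative only:

# free -m
# dd if=/dev/zero of=/swapfile bs=1M count=2048
# mkswap /swapfile
# swapon /swapfile
# swapon -s

Note that a swap file added this way does not persist across reboots unless a corresponding entry is added to /etc/fstab.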

Timed out waiting for the CRS stack to start

If, at the end of the Oracle Clusterware 11g Release 2 installation, you get the above error when you run root.sh on the local node:

1. The problem could be an incorrect configuration of the private network interconnect. Ping the private IP address of each node to verify that it is reachable from every other node. If it is not reachable, some network settings are incorrect.

2. Another reason for this error may be “non-empty” ASM devices. If this is not your first attempt to install Oracle Clusterware, make sure you have cleaned the ASM device headers using the dd utility. Other files, such as .oracle under the /tmp and /var/tmp directories, also need to be removed before attempting another installation. Oracle recommends using the de-install tool ($ORACLE_HOME/deinstall) to clean up any failed installation; the de-install tool will clean the ASM devices for you. A manual clean-up sketch follows this list.

3. Another reason may be that you did not reboot all cluster nodes after creating the ASM disks on one of them. This step is very important so that all nodes recognize all ASM disks.
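The checks and clean-up from steps 1 and 2 can be performed from the shell; a minimal sketch, where node2-priv and /dev/sdb1 are hypothetical names for a private interconnect host name and a previously used ASM device:

# ping -c 3 node2-priv
# dd if=/dev/zero of=/dev/sdb1 bs=1M count=100
# rm -rf /tmp/.oracle /var/tmp/.oracle

Be careful with the dd command: it destroys the ASM disk header and any data on the device, so double-check the device name before running it.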

Trademarks and special notices

© Copyright IBM Corporation 1994-2010. All rights reserved.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM, the IBM logo, ibm.com, AIX, BladeCenter, DS4000, DS6000, DS8000, POWER, System Storage, and System x are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both.

Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

AMD and AMD Opteron are trademarks of Advanced Micro Devices, Inc.

Red Hat, the Red Hat "Shadow Man" logo, and all Red Hat-based trademarks and logos are trademarks or registered trademarks of Red Hat, Inc., in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

The information provided in this document is distributed “AS IS” without any warranty, either express or implied.

The information in this document may include technical inaccuracies or typographical errors.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.
