Oracle Database 11g Release 2 Enterprise Edition using Oracle Real
Application Clusters on IBM BladeCenter running Red Hat Enterprise Linux 5 and
IBM System Storage DS4800
Betty Lee IBM Oracle International Competency Center
January 2010
Copyright IBM Corporation, 2010. All Rights Reserved. All trademarks or registered trademarks mentioned herein are the property of their respective holders.
Table of Contents

Abstract ... 1
Prerequisites ... 1
Introduction ... 1
  Oracle Database 11g Release 2 new features ... 1
    High Availability ... 1
    Performance and scalability ... 2
    Security ... 2
    Clustering ... 2
    Manageability ... 2
  About Oracle Real Application Clusters 11g Release 2 ... 3
  About IBM BladeCenter ... 3
  About IBM System Storage DS4800 ... 5
Hardware requirements ... 5
  Oracle Real Application Clusters requirements ... 5
    Server CPU ... 6
    Server memory ... 6
    Network ... 7
    Shared storage ... 7
  High availability considerations ... 8
Software requirements ... 8
  Operating system ... 8
  Storage System Manager ... 9
  Linux RDAC driver ... 9
  Oracle Database 11g Release 2 ... 10
  Automatic Storage Management Library (ASMLib) ... 10
    Library and Tools ... 11
    Drivers for kernel 2.6.18-164.el5 ... 11
Configuring the system environment ... 11
  BIOS ... 11
  Remote system management ... 11
  Installing Linux operating systems ... 12
  Installing RDAC driver ... 13
Installing Oracle Grid Infrastructure 11.2.0.1 ... 15
  Pre-Installation tasks ... 15
    Configuring kernel parameters ... 15
    Creating users and groups ... 15
    Setting shell limits for the Oracle software owner ... 16
    Setting the Time on Cluster Nodes ... 16
    Setting Oracle inventory location ... 17
    Setting up network files ... 17
    Configuring SSH on all cluster nodes ... 17
    Configuring ASMLib ... 17
    Creating ASM disks ... 18
    Running Cluster Verification Utility (CVU) ... 19
  Performing Oracle Clusterware installation and Automatic Storage Management installation ... 21
Performing post-installation tasks ... 32
Installing Oracle Database 11g Release 2 (11.2.0.1) ... 34
  Pre-Installation tasks ... 34
    Running Cluster Verification Utility ... 34
    Preparing Oracle home and its path ... 37
  Performing database installation ... 37
  Post-installation tasks ... 46
Summary ... 47
References ... 48
  Oracle documentation ... 48
  IBM documentation ... 48
  IBM and Oracle Web sites ... 48
About the author ... 49
Appendix A: Sample configuration ... 50
  BladeCenter and DS4800 ... 50
Appendix B: List of common abbreviations and acronyms ... 51
Appendix C: Oracle ASM Configuration Assistant with ACFS (ASMCA) ... 52
  Creating Oracle ASM Disk Groups ... 52
  Creating Oracle ACFS for Oracle Database home ... 55
  Creating ACFS for a General Purpose Filesystem ... 57
Appendix D: Trouble-shooting for Oracle Clusterware 11g Release 2 ... 61
  Failed to meet the prerequisites requirements ... 61
    Missing packages ... 61
    Insufficient swap spaces ... 61
  Timed out waiting for the CRS stack to start ... 62
Trademarks and special notices ... 63
Abstract

The purpose of this paper is to assist those looking to implement Oracle Real Application Clusters (RAC) on Red Hat Enterprise Linux 5 (RHEL5) running on IBM BladeCenter servers and IBM System Storage products. The information provided herein is based on experiences with test environments at the IBM Oracle International Competency Center and on available documentation from IBM, Oracle, and Red Hat. This paper does not cover the setting up of the shared disk for Oracle RAC.
Prerequisites

- Good knowledge of Oracle Database
- Knowledge of the Linux operating system
Introduction

This paper discusses the steps necessary to prepare for and successfully install Red Hat Enterprise Linux 5 64-bit and Oracle Database 11g Release 2 Enterprise Edition with Oracle Real Application Clusters 64-bit on IBM BladeCenter servers and IBM System Storage disks. The operating system environment described is the 2.6 kernel-based Red Hat Enterprise Linux 5 (RHEL5).
An implementation of Oracle Real Application Clusters 11g Release 2 consists of three main steps:
Planning the hardware for Oracle Real Application Clusters implementation
Configuring the servers and storage disk systems
Installing and configuring the Oracle Clusterware and Oracle RAC database
Oracle Database 11g Release 2 new features
There are many new features found in Oracle Database 11g Release 2. They can be found in Oracle 11g Release 2 documentation available on the Oracle web site. According to Oracle Database New Features Guide 11g Release 2, the main highlights are as follows:
High Availability

- Corrupted blocks on the primary database or physical standby database (which must be in real-time query mode) can be repaired automatically.
- As part of the Oracle Cloud Computing offering, databases can be backed up to Amazon S3.
- With a connection to a catalog and an auxiliary database, the DUPLICATE command in RMAN can be executed without any connection to the target database.
- Tables with compression are supported in logical standby databases and Oracle LogMiner.
- A primary database can support up to 30 standby databases.
- Whenever the host computer restarts, Oracle Restart will automatically restart the database instance, the ASM instance, the listener, and other components. Oracle Restart is a separate installation from Oracle Database.
Performance and scalability

- Oracle RAC is integrated with Universal Connection Pool (UCP), the new Java connection pool. With UCP, Java applications can easily manage connections to an Oracle RAC database. UCP for JDBC enhances performance and stability, and provides connection labeling and harvesting.
- Database Smart Flash Cache is a transparent extension of the database buffer cache using solid state device (SSD) technology. The SSD acts as a Level 2 cache to the SGA (Level 1). SSD can reduce the amount of disk I/O at a much lower cost than adding the same amount of memory.
- Oracle ASM can migrate a disk group with 512-byte sector drives to 4 KB sector drives.
- Oracle RAC One Node is a new option to Oracle Database 11g Release 2 Enterprise Edition. It can easily be upgraded to a full multi-node Oracle RAC database without downtime or disruption.
Security

- New encryption key management can update the master key associated with transparent data encryption (TDE) encrypted tablespaces.
- A new package for audit data management can clean up audit trail records after backup and control the size and age of the audit files.
Clustering

- Oracle Universal Installer is integrated with the Cluster Verification Utility (CVU) in the pre-installation steps of an Oracle RAC installation.
- A synchronized system time across the cluster is required for a successful Oracle RAC installation. The Cluster Time Synchronization Service is responsible for synchronizing the system time on all nodes in the cluster.
- The high redundancy option for storing the OCR has increased to five copies, improving cluster availability.
- The OCR can now be stored in Automatic Storage Management (ASM).
- Oracle Clusterware is installed into a separate home from the Oracle Database home.
Manageability

- Single Client Access Name (SCAN) provides a single name for clients to access an Oracle Database running in a cluster. It provides load balancing and failover of client connections to the database.
- The Clusterware administrator can delegate specific tasks on specific servers to different people based on their roles. This is called role-separated management.
- Patch sets for Oracle Clusterware and Oracle RAC can be applied as out-of-place upgrades to the Oracle Grid infrastructure without bringing the entire cluster down.
- The new Enterprise Manager GUI can monitor and manage the full lifecycle of Oracle Clusterware resources. It also introduces procedures to scale Oracle Clusterware and Oracle Real Application Clusters up or down easily.
- Complete deinstallation and deconfiguration of Oracle RAC databases and listeners can be done by Database Configuration Assistant (DBCA), Database Upgrade Assistant (DBUA), and Net Configuration Assistant (NETCA).
- Oracle Universal Installer can help clean up a failed Oracle Clusterware installation by advising you where to clean up and what to change before reattempting the installation. During installation, it also provides several recovery points, so you can retry and roll back to the closest recovery point once the problem has been fixed.
- A database administrator can limit an Oracle instance's CPU usage by setting the CPU_COUNT initialization parameter. This is called Instance Caging.
- E-mail notifications can be sent to users on any job activities.
For more information on Oracle Database 11g Release 2 new features, please refer to Oracle Database New Features Guide 11g Release 2 (11.2) E10881-03.
About Oracle Real Application Clusters 11g Release 2
Oracle Real Application Clusters (RAC) is an option of Oracle Database that allows a database to be installed across multiple servers. According to Oracle, RAC uses the shared disk method of clustering databases. Oracle processes running in each node access the same data residing on shared data disk storage. First introduced with Oracle Database 9i, RAC provides high availability and flexible scalability. If one of the clustered nodes fails, Oracle continues processing on the other nodes. If additional capacity is needed, nodes can be added without taking down the cluster.
In Oracle Database 11g Release 2, Oracle provides Oracle Clusterware, which is designed specifically for Oracle RAC. You do not need a third party Clusterware product to implement Oracle RAC. Since storage is shared, the file system and volume management must be cluster-aware.
Starting with Oracle Database 11g Release 2, Oracle Clusterware files can be stored in Oracle ASM. Oracle Clusterware and Oracle ASM are installed into a single home directory called grid home.
For further information on Oracle RAC, please refer to this web site:
http://www.oracle.com/technology/products/database/clustering/index.html
About IBM BladeCenter
The unique IBM BladeCenter design addresses today's customers' most serious issues: space constraints, efficient manageability, resiliency, and the physical environment, which includes cooling and power. IBM BladeCenter servers take less time to install, require fewer resources to manage and maintain, and cost less than traditional multi-server solutions. These blade servers are so compact and easy to use that customers can increase system capacity by simply sliding an additional blade into the integrated chassis, and IBM Director can then auto-configure it, making it ready to use. Since the blades share a
common, integrated infrastructure with shared basic components such as power supplies, system ports and fans, power consumption and system complexity are reduced. The BladeCenter H chassis, shown in Figure 1, is one of the models of the BladeCenter chassis.
Figure 1: BladeCenter H Chassis
IBM offers blades with Intel, AMD Opteron or IBM POWER processors. IBM BladeCenter chassis offers the capability to hold from 6 to 14 2-socket blades.
Figure 2: IBM Blade HS22
Figure 3: IBM Blade LS42
The IBM HS22 blade is a 2-socket quad-core Intel Xeon 5500 series processor blade, with up to 2.93 GHz processors. It supports up to 96 GB of memory with 12 VLP DDR-3 memory DIMMs. The HS22 can run applications two times faster than the previous generation blades. The IBM HS21 and HS22 are ideal for collaboration, running Citrix, Linux clusters and compute-centric applications.
IBM offers two AMD Opteron processor-based blades: the LS22 (two-socket) and the LS42 (four-socket). The new LS22 and LS42 are enterprise-class, high performance computing blades. The LS22 is ideal for memory-intensive applications including research, modeling and simulation, while the LS42 is ideal for high performance computing, virtualization, consolidation and database applications. A picture of the LS42 is shown in Figure 3.
For more information about the IBM BladeCenter platform, please refer to the following web site:
http://www-03.ibm.com/systems/bladecenter/intel-based.html
For the latest information on compatibility of IBM BladeCenter hardware, applications and middleware, please visit:
http://www-03.ibm.com/servers/eserver/serverproven/compat/us
About IBM System Storage DS4800
The IBM System Storage DS3000 and DS4000 families are designed to provide fast, reliable and efficient networked storage. They are easy to deploy and flexible for use with IBM System x and BladeCenter servers.
The DS4800 is affordable for small and medium businesses and scalable, supporting up to 224 Fibre Channel (FC) or SATA drives. It supports multiple RAID levels (0, 1, 3, 5, 10), and its components can be replaced without stopping the DS4800. It can also provide up to 1724 MBps of sustained throughput through the eight channels.
Figure 4: IBM System Storage DS4800
As shown in Figure 4, IBM System Storage DS4800 has 2 U rack-mount enclosures with 12 easily accessible drive bays. It supports dual-ported and hot-swappable SAS disk at 10,000 and 15,000 rpm speeds. It is also scalable to 3.6 TB of storage capacity with 300 GB hot-swappable SAS disks.
For further information on IBM System Storage DS4800, please refer to the following web site:
http://www-03.ibm.com/systems/storage/disk/ds4000/ds4800/index.html
For information on interoperability matrix for IBM System Storage DS4000, please visit:
http://www-03.ibm.com/systems/storage/disk/ds4000/interop-matrix.html
Hardware requirements
Oracle Real Application Clusters requirements
An Oracle Real Application Clusters database environment consists of the following components:
1. Cluster nodes - 2 to n nodes or hosts, running Oracle Database server(s)
2. Network interconnect - a private network used for cluster communications and Cache Fusion
3. Shared storage - used to hold database system and data files, accessed by the cluster nodes
4. Production network - used by clients and application servers to access the database
Figure 5 below is an architecture diagram for Oracle Real Application Clusters:
Figure 5: Oracle Real Application Clusters architecture (production network with application servers and users, cluster nodes sharing cache through Oracle Cache Fusion over the high-speed interconnect, and shared storage attached through the SAN fabric)
For more information on Oracle Real Application Clusters, please visit http://www.oracle.com/technology/products/database/clustering/index.html.
For more information on technology supported by Oracle with Oracle Real Application Clusters, please visit http://www.oracle.com/technology/products/database/clustering/certify/tech_generic_linux_new.html.
Server CPU
There should be enough server CPU capacity, in terms of speed and number of CPUs, to handle the workload. Generally speaking, there should be enough CPU capacity to keep average CPU utilization at about 65%. This allows the server to absorb peak activity more easily.
Server memory
An Oracle Database may require a lot of memory, depending on the activity level of users and the nature of the workload. As a rule of thumb, the server should have more memory than it actually uses, because performance will be greatly degraded and heavy disk swapping may occur when there is insufficient memory.
It is important to select servers that are available with the amount of memory required plus room for growth. Memory utilization should be at most around 75-85% of physical memory in a production environment; otherwise, heavy disk swapping may occur and server performance will decrease.
Network
Servers in an Oracle Real Application Clusters environment need at least two separate networks: a public network and a private network. The public network is used for communication between the clients or application servers and the database. The private network, sometimes referred to as the network interconnect, is used for cluster node communication: it carries the cluster heartbeat and is used by Oracle Real Application Clusters for Cache Fusion.
InfiniBand networking is supported with Oracle Database 11g.
Shared storage
Shared storage for Oracle Real Application Clusters can be provided as logical drives or LUNs from a Storage Area Network (SAN) controller, or as a Network File System (NFS) from a supported Network Attached Storage (NAS) device. NAS has some advantages, but a SAN is recommended for higher performance.
Please refer to the following IBM web site for more information about IBM NAS offerings such as IBM System Storage N3000, N3700, N5000 and N7000:
http://www-03.ibm.com/systems/storage/nas
For SAN products, IBM offers enterprise disk systems such as the DS6000 and DS8000, and mid-range disk systems such as the DS3400, DS4200, DS4700 Express and DS4800. Check to ensure that the System Storage product you are using is supported for Oracle Real Application Clusters implementations. Third-party storage subsystems can also be used with BladeCenter servers; please refer to the third-party documentation or contact a third-party representative for product certification information.
For more information on IBM System Storage product offerings, please visit
http://www-03.ibm.com/systems/storage/disk
For Oracle Real Application Clusters implementation, Oracle Database files may be located on shared storage using the following options:
1. A Certified Cluster file system
It is a file system that may be accessed (read and write) by all members in a cluster at the same time, with all cluster members having the same view of the file system. It allows all nodes in a cluster to access a device concurrently via the standard file system interface. Oracle Cluster File System Release 2 (OCFS2) is an example.
2. Oracle Automated Storage Management (ASM)
ASM is a simplified database storage management and provisioning system that provides file system and volume management capabilities in Oracle. It allows database administrators (DBA) to reference disk groups instead of individual disks and files which ASM manages internally. ASM is included in Oracle Database 11g and is designed to handle Oracle Database files, control files and log files.
In Oracle 11g Release 2, Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is introduced. It is a multi-platform, scalable file system which supports database and application files like executables, database trace files, database alert logs, application reports, BFILEs, and
configuration files. However, it does not support files that can be stored directly in Oracle ASM, nor any files belonging to the Oracle grid infrastructure home.
For more information on Oracle ACFS, please refer to Oracle Database Storage Administrator's Guide 11g Release 2 (11.2), Part Number E10500-02.
High availability considerations
High availability (HA) is a key requirement for many clients. From a hardware configuration standpoint, this means eliminating single points of failure. IBM products are designed for high availability, with standard features such as redundant power supplies and cooling fans, hot-swappable components, and so on.
For high availability environments, the following recommendations should also be taken into consideration when selecting the server:
- Configure additional network interfaces and use IP bonding to combine at least two network interfaces for each of the two Oracle RAC networks (a bonding sketch follows this list). This reduces downtime due to a network interface card (NIC) failure or other network component failure. Multi-port adapters provide network path redundancy, but the adapter itself remains a single point of failure; redundant multi-port adapters are the best solution. In addition, the NICs used for IP bonding should be on separate physical network cards and connected to different network switches.
- There should be at least two Fibre Channel host bus adapters (HBAs) on each node to provide redundant I/O paths to the storage subsystem. Multi-port HBAs and a Storage Area Network (SAN) with redundant components such as SAN switches and cabling will provide higher availability.
- A kernel crash dump utility should be configured on every node in the cluster. When a server crashes with a kernel panic, the kernel dump is saved and can be used to investigate the problem further, which in turn saves problem resolution time.
- Finally, an Oracle Real Application Clusters (RAC) implementation requires at least two network interfaces. Nevertheless, up to five network interfaces are recommended: two for the public network, two for the private network, and one for administration and netdump. The more redundancy is built into the hardware architecture and software components, the less downtime databases and applications will experience.
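As an illustration of the IP bonding recommendation above, the following is a minimal sketch of an active-backup bond on Red Hat Enterprise Linux 5. The interface names, bond name, and IP address are examples only and must be adapted to your environment.

# /etc/modprobe.conf (example only)
alias bond0 bonding
options bond0 mode=active-backup miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 (example address on the public network)
DEVICE=bond0
IPADDR=100.58.128.142
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth1 (repeat for the second slave interface, e.g. eth3)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

# Restart networking for the changes to take effect
service network restart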
Software requirements

In an Oracle Real Application Clusters implementation, several kinds of software need to be downloaded and installed on the cluster nodes. A few of them are optional; however, it is very beneficial to install and use them in the implementation.
Operating system
Red Hat Enterprise Linux 5 is the operating system used in the tests described in this paper. It can be downloaded from https://www.redhat.com/apps/download.
For the latest information regarding IBM hardware certification by Red Hat, please refer to:
https://hardware.redhat.com
Storage System Manager
IBM System Storage DS4000 Storage Manager is used to manage the DS3200, DS3300 and DS3400 via the graphical user interface. The DS3000 Storage Manager host software is required for managing the DS3200 and DS3400 models with controller firmware version 06.17.xx.xx and the DS3300 model with controller firmware version 06.50.xx.xx.
The DS4000 Storage Manager can be downloaded from the IBM Systems support Web site:
http://www-947.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5082143&brandind=5000028
The IBM DS Storage Manager Software packages are available for AIX, Microsoft Windows (32-bit and 64-bit version), Linux, and other platforms.
Note for IBM System Storage DS4000 users: The DS3000 Storage Manager manages only DS3000 systems. With the DS4000 Storage Manager (version 9.23 or above), you will be able to manage both DS3000 and DS4000 storage systems from the Enterprise Management Window.
Linux RDAC driver
The Linux RDAC driver provides redundant failover/failback support for the logical drives in the DS4000 storage subsystem that are mapped to the Linux host server. The Linux host server must have Fibre Channel (FC) or Serial Attached SCSI (SAS) connections to the host ports of both controller A and controller B of the storage subsystem. It is provided as an alternative to the Linux FC host bus adapter failover device driver.
The Linux RDAC driver is not included with the DS4000 Storage Manager for Linux; it needs to be downloaded and installed separately for this configuration.
Two different Linux RDAC packages are available: 09.03.0B05.0214 for kernel version 2.4 (RHEL 4, SLES 9 and SLES 10) and 09.03.0C05.0214 for kernel version 2.6 (RHEL 5, SLES 10 SP1, SLES 10 SP2, and SLES 11). Please follow the instructions in the readme files for loading the packages.
To download 09.03.0B05.0214, please follow this link:
http://www-947.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5081900&brandind=5000028
To download 09.03.0C05.0214, please use this link:
http://www-947.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5081901&brandind=5000028
Since the RDAC driver provides multipath and failover functionality, the HBA drivers need to be configured with the non-failover option. The Linux RDAC driver cannot coexist with failover HBA drivers.
Another important consideration is that each HBA in the host server should see only one DS4000 RAID controller; otherwise the RDAC driver will not work properly. Correct implementation of SAN switch zoning (in the case of Fibre Channel HBAs and the DS4800) will prevent this problem.
Moreover, since the Linux kernel does not detect so-called sparse LUNs, LUNs after a skipped number will not be available to the host server. The order of LUNs assigned through host-to-logical-drive mapping is therefore a very significant consideration when configuring the DS4800. The LUNs assigned to a Linux host must be a contiguous set of numbers, and the access logical drive should be assigned to LUN 31.
Finally, the HBA driver has to be installed successfully and the DS4000 subsystems attached correctly before you install the Linux RDAC driver.
Oracle Database 11g Release 2
Oracle Database 11g Release 2 (11.2.0.1) is the current release of Oracle's database product and is available on 32-bit and 64-bit Linux platforms as of November 2009. It is certified on IBM System x with the following operating systems in both 32-bit and 64-bit:

- SuSE Linux Enterprise Server 11 (SLES-11) / SuSE Linux Enterprise Server 10 (SLES-10)
- Red Hat Enterprise Linux AS/ES 5 (RHEL5) / Red Hat Enterprise Linux AS/ES 4 (RHEL4)
- Oracle Enterprise Linux 5 (OEL5) / Oracle Enterprise Linux 4 (OEL4)

For the latest information on Oracle product certification, please visit the My Oracle Support web site:
https://support.oracle.com/CSP/ui/flash.html
This software can be downloaded from the Oracle Technology Network (OTN) or the DVDs can be requested from Oracle Support. Oracle RAC is a separately licensed option of Oracle Enterprise and Standard Editions. For additional information on pricing, please refer to:
http://www.oracle.com/corporate/pricing/technology-price-list.pdf
Automatic Storage Management Library (ASMLib)
Automatic Storage Management (ASM) provides volume and cluster file system management where the I/O subsystem is handled directly by the Oracle kernel. Oracle ASM maps each LUN as a disk. Disks are then grouped together into disk groups, and each disk group can be segmented into one or more failure groups. ASM automatically performs load balancing in parallel across all available disk drives to prevent hot spots and maximize performance.
Starting with Oracle Database 11g Release 2, Oracle Clusterware OCR and voting disk files can be stored in Oracle ASM.
There are two methods to configure ASM on Linux: ASM with ASMLib, or ASM with standard Linux I/O. ASM with ASMLib is used to configure ASM on Linux in this paper.

ASMLib is a support library for the Automatic Storage Management feature of Oracle Database 11g and enables ASM I/O to Linux disks. The ASMLib packages can be downloaded from the following web site:
http://www.oracle.com/technology/tech/linux/asmlib/index.html
For Red Hat Enterprise Linux 5 Update 4 64-bit, the following packages need to be installed:
Library and Tools:
- oracleasm-support-2.1.3-1.el5.x86_64.rpm
- oracleasmlib-2.0.4-1.el5.x86_64.rpm

Drivers for kernel 2.6.18-164.el5:
- oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
- oracleasm-2.6.18-164.el5xen-2.0.5-1.el5.x86_64.rpm
- oracleasm-2.6.18-164.el5debug-2.0.5-1.el5.x86_64.rpm
- oracleasm-2.6.18-164.el5debuginfo-2.0.5-1.el5.x86_64.rpm
Configuring the system environment
BIOS
Be sure to upgrade the system BIOS and adapter BIOS to the latest levels. Look for the blade models on http://www.ibm.com/support/us.
Remote system management
On the BladeCenter platform, the Management Module functions as a system-management processor and a keyboard/video/mouse-multiplexing switch for the blade servers. It provides keyboard, video, and mouse ports for a local console and a 10/100 Ethernet port which provides access to the system management processor.
The system management processor communicates with other BladeCenter components, providing functions such as:
- Status monitoring of the blade servers, switch modules, power modules and blower modules
- Blade server management and control, e.g. power/restart, upgrading firmware, switching the keyboard/video/mouse, etc., in conjunction with the blade server service processors
- Switch module configuration, such as enabling/disabling external ports
- Remote console

Set up the Ethernet ports on the BladeCenter Management Module and connect them to your management Local Area Network (LAN). For information and instructions, please refer to the IBM Redbook, IBM eServer xSeries and BladeCenter Server Management, SG24-6495-00. IBM Redbooks are available at:
http://www.redbooks.ibm.com
Installing Linux operating systems
Installation of the operating system will not be discussed in detail in this paper. For more details, please refer to the operating system vendor's documentation. Instructions for installing different operating systems on BladeCenter can be found at:
http://www-304.ibm.com/jct01004c/systems/support/supportsite.wss/docdisplay?lndocid=SITE-HELP05&brandind=5000020
Prior to installation, please make note of the following:
- Be sure to create sufficient swap space for the amount of physical memory on your servers. Oracle recommends that the amount of swap space equal the amount of RAM.
- It is strongly recommended, although not mandatory, that every node of the cluster have an identical hardware configuration.
- Oracle publishes a minimal set of hardware requirements for each server, shown in Table 1 below.
Hardware                             | Minimum         | Recommended
-------------------------------------+-----------------+------------------------------------------------
Physical memory                      | 1.5 GB          | Depends on applications and usage
CPU                                  | 1 CPU per node  | 2 or more CPUs per node (a processor type that
                                     |                 | is certified with Oracle 11g Release 2)
Interconnect network                 | 1 Gb            | 2 teamed Gb
External network                     | 100 Mb          | 1 Gb
Backup network                       | 100 Mb          | 1 Gb
HBA or NIC for SAN, iSCSI, or NAS    | 1 Gb HBA        | Dual-pathed, storage vendor certified HBA
Oracle Database single instance      | 4 GB            | 4 GB or more
Oracle Grid home (includes the       | 4.5 GB          | 5 GB (with sample schemas)
binary files for Oracle Clusterware  |                 |
and Oracle ASM and their associated  |                 |
log files)                           |                 |
Temporary disk space                 | 1 GB            | 1 GB or more (and less than 2 TB)

Table 1: Hardware requirements
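A quick way to compare a node against the minimums in Table 1 is to check memory, swap, and free disk space from the command line. The commands below are standard Linux tools; the mount points are examples and should match where you plan to place the Oracle homes.

# Physical memory and swap (compare with the memory and swap guidance above)
grep MemTotal /proc/meminfo
grep SwapTotal /proc/meminfo

# Free space in /tmp and on the file system that will hold the Oracle homes
df -h /tmp /u01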
Prior to installation, install the required OS packages; otherwise, Oracle Universal Installer will present a list of packages that you need to install before you can proceed.
The following packages are checked for Oracle Real Application Clusters 11g Release 2 on RHEL 5.4 64-bit by the Cluster Verification Utility (the version numbers shown are the minimum versions required):
Package existence check passed for "make-3.81" Package existence check passed for "binutils-2.17.50.0.6" Package existence check passed for "gcc-4.1" Package existence check passed for "libaio-0.3.106 (i386)" Package existence check passed for "libaio-0.3.106 (x86_64)" Package existence check passed for "glibc-2.5-24 (i686)" Package existence check passed for "glibc-2.5-24 (x86_64)" Package existence check passed for "compat-libstdc++-33-3.2.3 (i386)" Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)" Package existence check passed for "elfutils-libelf-0.125 (x86_64)" Package existence check passed for "elfutils-libelf-devel-0.125" Package existence check passed for "glibc-common-2.5" Package existence check passed for "glibc-devel-2.5 (i386)" Package existence check passed for "glibc-devel-2.5 (x86_64)" Package existence check passed for "glibc-headers-2.5" Package existence check passed for "gcc-c++-4.1.2" Package existence check passed for "libaio-devel-0.3.106 (i386)" Package existence check passed for "libaio-devel-0.3.106 (x86_64)" Package existence check passed for "libgcc-4.1.2 (i386)" Package existence check passed for "libgcc-4.1.2 (x86_64)" Package existence check passed for "libstdc++-4.1.2 (i386)" Package existence check passed for "libstdc++-4.1.2 (x86_64)" Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)" Package existence check passed for "sysstat-7.0.2" Package existence check passed for "unixODBC-2.2.11 (i386)" Package existence check passed for "unixODBC-2.2.11 (x86_64)" Package existence check passed for "unixODBC-devel-2.2.11 (i386)" Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)" Package existence check passed for "ksh-20060214"
Installing RDAC driver
As mentioned in the previous section of this paper, the Linux RDAC driver is not included with the DS4000 Storage Manager for Linux. Please refer to the web sites listed in the previous section for instructions on installing the driver.
After you install the Linux RDAC driver, you need to update /boot/grub/menu.lst to boot with the new MPP driver package. Then reboot the server to initialize the MPP driver.
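The exact changes to /boot/grub/menu.lst depend on the initrd image that the RDAC build creates on your system. As a rough sketch only, the updated boot entry typically points the initrd line at the MPP image instead of the stock initrd; the root device and image name below are illustrative.

# Example grub entry only - device paths and image name will differ on your system
title Red Hat Enterprise Linux Server (2.6.18-164.el5) with MPP/RDAC
        root (hd0,0)
        kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/VolGroup00/LogVol00
        initrd /mpp-2.6.18-164.el5.img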
After reboot, the server should be able to recognize the MPP driver and discover the LUNs which have been assigned to the server.
To verify that the Linux RDAC driver has been loaded successfully, execute the following command:
[root@blade1 ~]# lsmod | grep mpp
mppVhba   162400  8
mppUpper  150252  1  mppVhba
scsi_mod  196697  18 ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2,scsi_transport_iscsi2,scsi_dh,mppVhba,usb_storage,qla2xxx,scsi_transport_fc,libata,mptspi,mptscsih,scsi_transport_spi,mppUpper,sg,sd_mod
Verify that the Linux RDAC driver discovered the available physical LUNs and created the virtual LUNs for them by executing the following command:
[root@blade1 ~]# ls -lR /proc/mpp
/proc/mpp:
total 0
dr-xr-xr-x 4 root root 0 Nov 18 16:00 Oracle_ICC_DS4800

/proc/mpp/Oracle_ICC_DS4800:
total 0
dr-xr-xr-x 3 root root 0 Nov 18 16:00 controllerA
dr-xr-xr-x 3 root root 0 Nov 18 16:00 controllerB
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun0
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun1
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun2
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun3
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun4
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun5
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun6
-rw-r--r-- 1 root root 0 Nov 18 16:00 virtualLun7

/proc/mpp/Oracle_ICC_DS4800/controllerA:
total 0
dr-xr-xr-x 2 root root 0 Nov 18 16:00 qla2xxx_h1c0t0

/proc/mpp/Oracle_ICC_DS4800/controllerA/qla2xxx_h1c0t0:
total 0
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN0
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN1
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN2
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN3
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN4
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN5
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN6
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN7
-rw-r--r-- 1 root root 0 Nov 18 16:00 UTM_LUN31

/proc/mpp/Oracle_ICC_DS4800/controllerB:
total 0
dr-xr-xr-x 2 root root 0 Nov 18 16:00 qla2xxx_h2c0t0

/proc/mpp/Oracle_ICC_DS4800/controllerB/qla2xxx_h2c0t0:
total 0
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN0
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN1
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN2
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN3
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN4
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN5
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN6
-rw-r--r-- 1 root root 0 Nov 18 16:00 LUN7
-rw-r--r-- 1 root root 0 Nov 18 16:00 UTM_LUN31
After discovering the LUNs, partitions can be created on the appropriate LUNs.
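As a sketch only (device names will differ on your system), a single primary partition spanning each LUN can be created with fdisk, either interactively or scripted:

# Interactive: fdisk /dev/sdb, then n (new), p (primary), 1, accept the defaults, w (write)
# Scripted equivalent for one device; repeat for the remaining LUN devices as appropriate
echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdb

# Re-read the partition tables so the new partitions are visible
partprobe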
Installing Oracle Grid Infrastructure 11.2.0.1

Before installing Oracle Grid Infrastructure 11.2.0.1 on both servers, there are several important tasks that need to be completed on all of the cluster nodes.
Pre-Installation tasks
Configuring kernel parameters
Edit the /etc/sysctl.conf file to set the kernel parameters required for Oracle Database. If the current values in the file are already higher than the values listed below, you do not need to change them. If these values are not set properly, Oracle Universal Installer will create a fix-up script during the prerequisite check; this script can be run on the specified nodes to fix any parameter values that do not meet the minimum requirements. However, range values (for example, net.ipv4.ip_local_port_range) must match exactly.
kernel.shmall = 2097152
kernel.shmmax = <half of the physical RAM in bytes; 2147483648 for a system with 4 GB of RAM>
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = <512 * PROCESSES; for example, 65536 for 128 processes>
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
After making these changes, run sysctl -p to put these values into effect.
If you do not set the kernel parameters correctly before installation, Oracle Installer will create a fixup script (runfixup.sh) that you can run as root when your prerequisites check fails. This script will then update the kernel parameters for you. Nevertheless, Oracle recommends that you do not change the contents of the generated fixup script.
Creating users and groups
Two groups need to be created: dba and oinstall. The dba group is used for Oracle Database authentication, and oinstall is the Oracle Inventory group. Make sure that each group ID is the same on all cluster nodes; for instance, if the oinstall gid is 502 on node 1, the oinstall gid must be 502 on node 2 and any other nodes in the cluster. The groups can be created with the groupadd command, as shown below.
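For example (the group IDs shown are illustrative; use whatever IDs are free, and keep them identical on every node):

# Run as root on every cluster node
groupadd -g 502 oinstall
groupadd -g 501 dba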
You can optionally create another user besides oracle for the grid infrastructure installation, in order to separate its administrative privileges from the others. For instance, you can create a user ID grid for the Oracle Clusterware installation and a user ID oracle for the Database installation.
# useradd -u 501 -g oinstall -G dba oracle
# usermod -u 502 -g oinstall grid
As Oracle notes, you cannot have separate Oracle Clusterware and Oracle ASM installation owners. In this paper, only the user oracle has been created, for simplicity.
Create appropriate directories for the oracle and grid installations and set appropriate ownership on them. Set up the grid infrastructure home directory to be owned by user oracle and group oinstall. The Oracle grid infrastructure home directory cannot be a subdirectory of the Oracle base directory.
# mkdir -p /u01/grid
# mkdir -p /u01/app/oracle
# chown -R oracle:dba /u01/app/oracle
# chown -R oracle:oinstall /u01/grid
# chmod -R 775 /u01/app/oracle
# chmod -R 755 /u01/grid
In Oracle 11g Release 2, there are two separate ORACLE_HOME directories: one home for the Oracle grid infrastructure and the other for the Oracle Real Application Clusters database. To execute commands such as ASMCA for Oracle ASM configuration or DBCA for database configuration, make sure the ORACLE_HOME environment variable points to the corresponding home, as sketched below.
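A minimal sketch of switching between the two homes follows. The grid home matches the directory created above, while the database home path is only an example of the default location chosen later during the database installation.

# Grid infrastructure home (crsctl, asmca, srvctl for Clusterware/ASM resources)
export ORACLE_HOME=/u01/grid
export PATH=$ORACLE_HOME/bin:$PATH

# Oracle RAC database home (dbca, sqlplus, srvctl for database resources) - path is an example
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH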
Setting shell limits for the Oracle software owner
The file /etc/security/limits.conf needs to be modified to include the following limits for the user oracle:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
For these limits to take effect, the /etc/pam.d/login file needs to contain the following line:
session required pam_limits.so
Finally, enable these limits when the user oracle logs in to the server, for example by adding the following to the shell profile:
if [ $USER = "oracle" ]; then if [ $SHELL = "/bin/ksh" ]; then ulimit -p 16384 ulimit -n 65536 else ulimit -u 16384 -n 65536 fi fi
Setting the Time on Cluster Nodes
In an Oracle RAC environment, the date and time settings on all cluster nodes have to be synchronized, either by the Oracle Cluster Time Synchronization Service (CTSS) or by Network Time Protocol (NTP). If you do not use NTP, Oracle will use CTSS to synchronize the internal clocks of all cluster members.
You can check whether NTP is up and running with the following commands:
[root@blade1 ~]# service ntpd status
ntpd (pid 3589) is running...
[oracle@blade1 grid]$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp.pbx.org     192.5.41.40      2 u   55   64  377  118.027  -11.959   0.916
+ntp.clt.sharefi 198.82.1.203     3 u   41   64  377  116.511    9.709   0.706
+newton.8086.net 209.51.161.238   2 u   48   64  377   99.529  -12.616   1.431
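If you keep NTP rather than letting CTSS take over, Oracle 11g Release 2 expects ntpd to run with the slewing option (-x) so the clock is never stepped backwards. On Red Hat Enterprise Linux 5 this is set in /etc/sysconfig/ntpd; the lines below are a sketch of that file.

# /etc/sysconfig/ntpd
# Drop root to user ntp and slew the clock instead of stepping it (-x)
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
SYNC_HWCLOCK=no

# Restart the daemon on every node after the change
service ntpd restart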
Setting Oracle inventory location
When you install Oracle software on the system for the first time, Oracle creates a file called oraInst.loc in the /etc directory. This file tells Oracle where the Oracle inventory directory is and the name of the Oracle Inventory group.
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
If a previous inventory directory exists, please make sure that the same Oracle inventory directory is used and that all Oracle software users have write permission to this directory.
Setting up network files
The following network addresses are required for each node:
- Public network address
- Private network address
- Virtual IP (VIP) network address
- Single Client Access Name (SCAN) address for the cluster
The interfaces and IP addresses for both public and private networks need to be set up. These configurations can be done in Red Hat Enterprise Linux 5 System => Administration => Network.
After that, add the host names and IP addresses to /etc/hosts as shown in the example below. If the public host names and IP addresses are registered in the Domain Name Server (DNS), they can be excluded from /etc/hosts. Interconnect (private) host names and IP addresses should always be placed in /etc/hosts.
127.0.0.1      localhost.localdomain localhost
100.58.128.142 blade1.sanmateo.ibm.com blade1
100.58.128.143 blade2.sanmateo.ibm.com blade2
100.58.128.152 blade1-vip.sanmateo.ibm.com blade1-vip
100.58.128.154 blade2-vip.sanmateo.ibm.com blade2-vip
10.10.10.11    blade1-priv.sanmateo.ibm.com blade1-priv
10.10.10.12    blade2-priv.sanmateo.ibm.com blade2-priv
SCAN is a new requirement for Oracle Clusterware installation. It is a domain name that resolves to all of the SCAN addresses (three IP addresses are recommended) allocated for the cluster. The SCAN addresses must be on the same subnet as the VIP addresses, and the SCAN name must be unique within the corporate network.
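If you do not use Grid Naming Service (GNS), the SCAN is usually defined in corporate DNS as a single name with (ideally) three address records served in round-robin fashion. The entries below are purely illustrative; the name and addresses must be chosen for your own network and should not be placed in /etc/hosts.

; Example DNS zone entries for a SCAN named blade-cluster-scan (illustrative addresses)
blade-cluster-scan.sanmateo.ibm.com.   IN A   100.58.128.160
blade-cluster-scan.sanmateo.ibm.com.   IN A   100.58.128.161
blade-cluster-scan.sanmateo.ibm.com.   IN A   100.58.128.162

# Verify from each node that the name resolves to all three addresses
nslookup blade-cluster-scan.sanmateo.ibm.com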
Configuring SSH on all cluster nodes
Starting with Oracle 11g Release 2, there is no need to configure SSH manually on all cluster nodes, because Oracle Universal Installer can set it up for you during the grid infrastructure installation.
Configuring ASMLib
Starting with Oracle 11g Release 2, the Oracle Clusterware files, that is, the voting disk and OCR, can be stored in ASM, and Oracle strongly recommends storing them there. However, Oracle Clusterware binaries and files cannot be stored in the Oracle ASM Cluster File System (ACFS). Oracle recommends a minimum of 280 MB for each voting disk and OCR file; the total space required is cumulative and depends on the level of redundancy you choose during the installation.
In this example, the Oracle Clusterware disks will be stored in Oracle ASM, so the Oracle ASM disks need to be created prior to installation. After downloading the three packages mentioned in the ASMLib section (the support package, the library, and the driver matching your kernel), ASMLib needs to be configured as follows:
[root@blade1 asmlib]# rpm -Uvh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
> oracleasmlib-2.0.4-1.el5.x86_64.rpm \
> oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.18-164.el5########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]

[root@blade1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library driver.
The following questions will determine whether the driver is loaded on boot
and what permissions it will have. The current values will be shown in
brackets ('[]'). Hitting <ENTER> without typing an answer will keep that
current value. Ctrl-C will abort.

Default user to own the driver interface [oracle]:
Default group to own the driver interface [dba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Fix permissions of Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: [  OK  ]
Scanning system for ASM disks: [  OK  ]
Creating ASM disks
Create the ASM disks on one of the nodes, as shown below. After that, it is important to reboot all cluster nodes before installing Oracle Clusterware. After the reboot, execute the oracleasm scandisks command to pick up the newly created ASM disks (see the sketch after the listing below).
[root@blade1 ~]# oracleasm createdisk DATA1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@blade1 ~]# oracleasm createdisk DATA2 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@blade1 ~]# oracleasm createdisk DATA3 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@blade1 ~]# oracleasm createdisk LOG /dev/sde1
Writing disk header: done
Instantiating disk: done
[root@blade1 ~]# oracleasm createdisk DISK1 /dev/sdf1
Writing disk header: done
Instantiating disk: done
[root@blade1 ~]# oracleasm createdisk DISK2 /dev/sdg1
Writing disk header: done
Instantiating disk: done
[root@blade1 ~]# oracleasm createdisk DISK3 /dev/sdh1
Writing disk header: done
Instantiating disk: done
[root@blade1 ~]# oracleasm createdisk DISK4 /dev/sdi1
Writing disk header: done
Instantiating disk: done
[root@blade1 ~]# oracleasm listdisks
DATA1
DATA2
DATA3
DISK1
DISK2
DISK3
DISK4
LOG
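On the remaining node(s), the disks only need to be scanned, not recreated. The following is a brief sketch (output omitted) of what to run on the second node after its reboot:

# On blade2 (and any additional nodes) after the reboot
[root@blade2 ~]# oracleasm scandisks
[root@blade2 ~]# oracleasm listdisks    # should show the same eight ASM disks created on blade1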
Running Cluster Verification Utility (CVU)
The Cluster Verification Utility (CVU) can be used to verify that the systems are ready for the Oracle Clusterware 11g Release 2 installation. Oracle Universal Installer also uses CVU to perform all prerequisite checks during the installation interview. Log in as the oracle user and run the following command:
[[email protected] grid]$ ./runcluvfy.sh stage -pre crsinst -n blade1,blade2 Performing pre-checks for cluster services setup Checking node reachability... Node reachability check passed from node "blade1" Checking user equivalence... User equivalence check passed for user "oracle" Checking node connectivity... Checking hosts config file... Verification of the hosts config file successful Node connectivity passed for subnet "10.10.10.0" with node(s) blade2,blade1 TCP connectivity check passed for subnet "10.10.10.0" Node connectivity passed for subnet "9.38.158.128" with node(s) blade2,blade1 TCP connectivity check passed for subnet "9.38.158.128" Interfaces found on subnet "9.38.158.128" that are likely candidates for VIP are: blade2 eth1:9.38.158.143 blade1 eth1:9.38.158.142 Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are: blade2 eth0:10.10.10.12 blade1 eth0:10.10.10.11 Node connectivity check passed Total memory check passed Available memory check passed Swap space check passed Free disk space check passed for "blade2:/tmp" Free disk space check passed for "blade1:/tmp" User existence check passed for "oracle" Group existence check passed for "oinstall" Group existence check passed for "dba" Membership check for user "oracle" in group "oinstall" [as Primary] passed Membership check for user "oracle" in group "dba" passed Run level check passed
Hard limits check passed for "maximum open file descriptors" Soft limits check passed for "maximum open file descriptors" Hard limits check passed for "maximum user processes" Soft limits check passed for "maximum user processes" System architecture check passed Kernel version check passed Kernel parameter check passed for "semmsl" Kernel parameter check passed for "semmns" Kernel parameter check passed for "semopm" Kernel parameter check passed for "semmni" Kernel parameter check passed for "shmmax" Kernel parameter check passed for "shmmni" Kernel parameter check passed for "shmall" Kernel parameter check passed for "file-max" Kernel parameter check passed for "ip_local_port_range" Kernel parameter check passed for "rmem_default" Kernel parameter check passed for "rmem_max" Kernel parameter check passed for "wmem_default" Kernel parameter check passed for "wmem_max" Kernel parameter check passed for "aio-max-nr" Package existence check passed for "make-3.81" Package existence check passed for "binutils-2.17.50.0.6" Package existence check passed for "gcc-4.1" Package existence check passed for "libaio-0.3.106 (i386)" Package existence check passed for "libaio-0.3.106 (x86_64)" Package existence check passed for "glibc-2.5-24 (i686)" Package existence check passed for "glibc-2.5-24 (x86_64)" Package existence check passed for "compat-libstdc++-33-3.2.3 (i386)" Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)" Package existence check passed for "elfutils-libelf-0.125 (x86_64)" Package existence check passed for "elfutils-libelf-devel-0.125" Package existence check passed for "glibc-common-2.5" Package existence check passed for "glibc-devel-2.5 (i386)" Package existence check passed for "glibc-devel-2.5 (x86_64)" Package existence check passed for "glibc-headers-2.5" Package existence check passed for "gcc-c++-4.1.2" Package existence check passed for "libaio-devel-0.3.106 (i386)" Package existence check passed for "libaio-devel-0.3.106 (x86_64)" Package existence check passed for "libgcc-4.1.2 (i386)" Package existence check passed for "libgcc-4.1.2 (x86_64)" Package existence check passed for "libstdc++-4.1.2 (i386)" Package existence check passed for "libstdc++-4.1.2 (x86_64)" Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)" Package existence check passed for "sysstat-7.0.2" Package existence check passed for "unixODBC-2.2.11 (i386)" Package existence check passed for "unixODBC-2.2.11 (x86_64)" Package existence check passed for "unixODBC-devel-2.2.11 (i386)" Package existence check passed for "unixODBC-devel-2.2.11 (x86_64)" Package existence check passed for "ksh-20060214" Check for multiple users with UID value 0 passed Current group ID check passed Core file name pattern consistency check passed. User "oracle" is not part of "root" group. Check passed Default user file creation mask check passed Starting Clock synchronization checks using Network Time Protocol(NTP)... NTP Configuration file check started... NTP Configuration file check passed Checking daemon liveness...
Liveness check passed for "ntpd"
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed

NTP common Time Server Check started...
PRVF-5408 : NTP Time Server "192.5.41.40" is common only to the following nodes "blade1"
PRVF-5408 : NTP Time Server "207.171.30.106" is common only to the following nodes "blade2"
PRVF-5408 : NTP Time Server "209.81.9.7" is common only to the following nodes "blade2"
PRVF-5408 : NTP Time Server "209.51.161.238" is common only to the following nodes "blade1"
PRVF-5408 : NTP Time Server "198.82.1.203" is common only to the following nodes "blade1"
PRVF-5408 : NTP Time Server "4.99.128.199" is common only to the following nodes "blade2"
PRVF-5416 : Query of NTP daemon failed on all nodes

Clock synchronization check using Network Time Protocol(NTP) passed

Pre-check for cluster services setup was successful.
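If any of the pre-checks fail, CVU can also generate fixup scripts for the items it knows how to correct, such as kernel parameters and missing users or groups. The invocation below is a sketch using the standard -fixup and -verbose options with the node names from this example; any fixup script that CVU generates must then be run as root on the affected nodes.

[oracle@blade1 grid]$ ./runcluvfy.sh stage -pre crsinst -n blade1,blade2 -fixup -verbose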
Performing Oracle Clusterware and Automatic Storage Management installation
To install Oracle Clusterware 11g Release 2, Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1) for Linux x86-64 needs to be downloaded. After that, unzip linux.x64_11gR2_grid.zip and run the Oracle Universal Installer (OUI) from one node (the local node). For the most part, OUI handles the installation on the other cluster nodes; the steps that must be done manually on the other cluster nodes are called out by OUI at various points during the process.
Running the installation from the system console requires an XWindows session; alternatively, you can run vncserver on the node and start the OUI from an XWindows session on your workstation.
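A typical sequence for the VNC approach is sketched below; the display number, geometry and staging directory are illustrative assumptions rather than values required by the installer.

# On the local node, as the oracle user, start a VNC session (display :1 assumed)
vncserver :1 -geometry 1280x1024
# Connect a VNC viewer to blade1:1 from the workstation, open a terminal in that
# session, change to the directory where linux.x64_11gR2_grid.zip was unzipped
# (a staging directory such as /stage/grid is assumed here) and start the OUI:
cd /stage/grid
./runInstaller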
1. The first screen will ask you to select one of the installation options. In this example, we select Install and Configure Grid Infrastructure for a Cluster.
2. The next screen will ask if this is a typical or advanced installation. We will select typical installation.
3. The next screen asks for the SCAN name, the cluster node names and their virtual IP addresses. If this is the first installation, enter the OS password for the oracle user and click Setup; Oracle then sets up SSH connectivity between the listed cluster nodes. After that, you can click Test to make sure that SSH works properly between the nodes. Note: If you chose Advanced Installation on the previous screen, you need to provide more details for the Single Client Access Name (SCAN), such as the SCAN port and IP addresses; the SCAN should be defined in DNS to resolve to three IP addresses. For the Typical Installation, you only need to provide the SCAN name. A quick way to check the SCAN resolution outside the installer is shown below.
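When DNS is used, the SCAN resolution can be verified from any node before the installation; when DNS is not available, as in this example, the SCAN can instead be placed in /etc/hosts on every node, in which case it resolves to a single address. The SCAN name and address placeholders below are illustrative assumptions only.

# Verify that the SCAN resolves (hypothetical SCAN name)
nslookup blade-cluster-scan
# Without DNS, an /etc/hosts entry of this form is added on every node instead:
# <SCAN-IP-address>   blade-cluster-scan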
4. The next screen asks for the Oracle base and software directories. In this example, all Oracle Clusterware files are stored in ASM. Then, enter the password for SYSASM. Oracle expects the password to conform to specific complexity rules; if it does not, errors are shown at the bottom of the screen.
5. Since ASM is the storage type chosen for the Clusterware files, Oracle asks for the ASM disks to use and creates a disk group from the selected disks to store the OCR and voting disks. The number of disks needed depends on the redundancy level you pick: High redundancy requires five disks, Normal redundancy requires three disks, and External redundancy requires one disk. If you do not select enough disks, Oracle reports an error. The minimum size of each disk is 280 MB.
In this example, Normal redundancy has been chosen.
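After the grid installation completes, the disk group created from these disks can be inspected with the ASM command-line utility. This is a minimal sketch that assumes the grid home (/u01/grid), the ASM instance name (+ASM1) and the disk group name (DISK) used in this example.

# As the grid software owner (oracle in this example)
export ORACLE_HOME=/u01/grid
export ORACLE_SID=+ASM1
/u01/grid/bin/asmcmd lsdg            # shows the DISK group, its redundancy and free space
/u01/grid/bin/asmcmd lsdsk -G DISK   # lists the ASM disks that make up the DISK group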
6. Oracle runs its Cluster Verification Utility to check whether the cluster nodes meet all the prerequisites. If not, it stops and shows the errors; you can fix them and ask Oracle to check again. At the bottom of the screen, you can click on more details, where suggestions on how to fix the errors are shown.
7. After fixing all the errors and passing the prerequisite checks, Oracle shows the installation summary. You can save the response file for a future silent installation.
8. This is the screen showing the installation process.
9. After Oracle has installed the binary files on all cluster nodes, it will ask you to run root.sh as user root. It is very important to run root.sh on the local node first and allow it to successfully complete. Do not run root.sh on other nodes until root.sh on the local node has completed; otherwise, errors will occur on the other cluster nodes.
This is the output from the local node which is blade1 in this example:
[[email protected] grid]# ./root.sh Running Oracle 11g root.sh script... The following environment variables are set as: ORACLE_OWNER= oracle ORACLE_HOME= /u01/grid Enter the full pathname of the local bin directory: [/usr/local/bin]: The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y Copying dbhome to /usr/local/bin ... The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y Copying oraenv to /usr/local/bin ... The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y Copying coraenv to /usr/local/bin ... Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created Finished running generic part of root.sh script. Now product-specific root actions will be performed. 2009-11-12 14:32:59: Parsing the host name 2009-11-12 14:32:59: Checking for super user privileges 2009-11-12 14:32:59: User has super user privileges Using configuration parameter file: /u01/grid/crs/install/crsconfig_params Creating trace directory LOCAL ADD MODE Creating OCR keys for user 'root', privgrp 'root'..
Operation successful. root wallet root wallet cert root cert export peer wallet profile reader wallet pa wallet peer wallet keys pa wallet keys peer cert request pa cert request peer cert pa cert peer root cert TP profile reader root cert TP pa root cert TP peer pa cert TP pa peer cert TP profile reader pa cert TP profile reader peer cert TP peer user cert pa user cert Adding daemon to inittab CRS-4123: Oracle High Availability Services has been started. ohasd is starting CRS-2672: Attempting to start 'ora.gipcd' on 'blade1' CRS-2672: Attempting to start 'ora.mdnsd' on 'blade1' CRS-2676: Start of 'ora.gipcd' on 'blade1' succeeded CRS-2676: Start of 'ora.mdnsd' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.gpnpd' on 'blade1' CRS-2676: Start of 'ora.gpnpd' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.cssdmonitor' on 'blade1' CRS-2676: Start of 'ora.cssdmonitor' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.cssd' on 'blade1' CRS-2672: Attempting to start 'ora.diskmon' on 'blade1' CRS-2676: Start of 'ora.diskmon' on 'blade1' succeeded CRS-2676: Start of 'ora.cssd' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.ctssd' on 'blade1' CRS-2676: Start of 'ora.ctssd' on 'blade1' succeeded ASM created and started successfully. DiskGroup DISK created successfully. clscfg: -install mode specified Successfully accumulated necessary OCR keys. Creating OCR keys for user 'root', privgrp 'root'.. Operation successful. CRS-2672: Attempting to start 'ora.crsd' on 'blade1' CRS-2676: Start of 'ora.crsd' on 'blade1' succeeded CRS-4256: Updating the profile Successful addition of voting disk 4aa5aa207b704f11bfbc9a9f0eb544ce. Successful addition of voting disk 2ee17edb66ca4fa7bf9814af4790890d. Successful addition of voting disk 0735fe5ce71f4f6cbf27dc203f3ba22e. Successfully replaced voting disk group with +DISK. CRS-4256: Updating the profile CRS-4266: Voting file(s) successfully replaced ## STATE File Universal Id File Name Disk group -- ----- ----------------- --------- --------- 1. ONLINE 4aa5aa207b704f11bfbc9a9f0eb544ce (ORCL:DISK1) [DISK] 2. ONLINE 2ee17edb66ca4fa7bf9814af4790890d (ORCL:DISK2) [DISK] 3. ONLINE 0735fe5ce71f4f6cbf27dc203f3ba22e (ORCL:DISK3) [DISK]
Located 3 voting disk(s). CRS-2673: Attempting to stop 'ora.crsd' on 'blade1' CRS-2677: Stop of 'ora.crsd' on 'blade1' succeeded CRS-2673: Attempting to stop 'ora.asm' on 'blade1' CRS-2677: Stop of 'ora.asm' on 'blade1' succeeded CRS-2673: Attempting to stop 'ora.ctssd' on 'blade1' CRS-2677: Stop of 'ora.ctssd' on 'blade1' succeeded CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'blade1' CRS-2677: Stop of 'ora.cssdmonitor' on 'blade1' succeeded CRS-2673: Attempting to stop 'ora.cssd' on 'blade1' CRS-2677: Stop of 'ora.cssd' on 'blade1' succeeded CRS-2673: Attempting to stop 'ora.gpnpd' on 'blade1' CRS-2677: Stop of 'ora.gpnpd' on 'blade1' succeeded CRS-2673: Attempting to stop 'ora.gipcd' on 'blade1' CRS-2677: Stop of 'ora.gipcd' on 'blade1' succeeded CRS-2673: Attempting to stop 'ora.mdnsd' on 'blade1' CRS-2677: Stop of 'ora.mdnsd' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.mdnsd' on 'blade1' CRS-2676: Start of 'ora.mdnsd' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.gipcd' on 'blade1' CRS-2676: Start of 'ora.gipcd' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.gpnpd' on 'blade1' CRS-2676: Start of 'ora.gpnpd' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.cssdmonitor' on 'blade1' CRS-2676: Start of 'ora.cssdmonitor' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.cssd' on 'blade1' CRS-2672: Attempting to start 'ora.diskmon' on 'blade1' CRS-2676: Start of 'ora.diskmon' on 'blade1' succeeded CRS-2676: Start of 'ora.cssd' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.ctssd' on 'blade1' CRS-2676: Start of 'ora.ctssd' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.asm' on 'blade1' CRS-2676: Start of 'ora.asm' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.crsd' on 'blade1' CRS-2676: Start of 'ora.crsd' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.evmd' on 'blade1' CRS-2676: Start of 'ora.evmd' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.asm' on 'blade1' CRS-2676: Start of 'ora.asm' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.DISK.dg' on 'blade1' CRS-2676: Start of 'ora.DISK.dg' on 'blade1' succeeded CRS-2672: Attempting to start 'ora.registry.acfs' on 'blade1' CRS-2676: Start of 'ora.registry.acfs' on 'blade1' succeeded blade1 2009/11/12 14:39:00 /u01/grid/cdata/blade1/backup_20091112_143900.olr Configure Oracle Grid Infrastructure for a Cluster ... succeeded Updating inventory properties for clusterware Starting Oracle Universal Installer... Checking swap space: must be greater than 500 MB. Actual 8000 MB Passed The inventory pointer is located at /etc/oraInst.loc The inventory is located at /u01/app/oraInventory 'UpdateNodeList' was successful.
This is the output from the second node, blade2. It is slightly different from, and shorter than, the output from the first node.
[[email protected] grid]# ./root.sh Running Oracle 11g root.sh script...
The following environment variables are set as: ORACLE_OWNER= oracle ORACLE_HOME= /u01/grid Enter the full pathname of the local bin directory: [/usr/local/bin]: The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y Copying dbhome to /usr/local/bin ... The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y Copying oraenv to /usr/local/bin ... The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y Copying coraenv to /usr/local/bin ... Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created Finished running generic part of root.sh script. Now product-specific root actions will be performed. 2009-11-12 14:40:32: Parsing the host name 2009-11-12 14:40:32: Checking for super user privileges 2009-11-12 14:40:32: User has super user privileges Using configuration parameter file: /u01/grid/crs/install/crsconfig_params Creating trace directory LOCAL ADD MODE Creating OCR keys for user 'root', privgrp 'root'.. Operation successful. Adding daemon to inittab CRS-4123: Oracle High Availability Services has been started. ohasd is starting CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node blade1, number 1, and is terminating An active cluster was found during exclusive startup, restarting to join the cluster CRS-2672: Attempting to start 'ora.mdnsd' on 'blade2' CRS-2676: Start of 'ora.mdnsd' on 'blade2' succeeded CRS-2672: Attempting to start 'ora.gipcd' on 'blade2' CRS-2676: Start of 'ora.gipcd' on 'blade2' succeeded CRS-2672: Attempting to start 'ora.gpnpd' on 'blade2' CRS-2676: Start of 'ora.gpnpd' on 'blade2' succeeded CRS-2672: Attempting to start 'ora.cssdmonitor' on 'blade2' CRS-2676: Start of 'ora.cssdmonitor' on 'blade2' succeeded CRS-2672: Attempting to start 'ora.cssd' on 'blade2' CRS-2672: Attempting to start 'ora.diskmon' on 'blade2' CRS-2676: Start of 'ora.diskmon' on 'blade2' succeeded CRS-2676: Start of 'ora.cssd' on 'blade2' succeeded CRS-2672: Attempting to start 'ora.ctssd' on 'blade2' CRS-2676: Start of 'ora.ctssd' on 'blade2' succeeded CRS-2672: Attempting to start 'ora.drivers.acfs' on 'blade2' CRS-2676: Start of 'ora.drivers.acfs' on 'blade2' succeeded CRS-2672: Attempting to start 'ora.asm' on 'blade2' CRS-2676: Start of 'ora.asm' on 'blade2' succeeded CRS-2672: Attempting to start 'ora.crsd' on 'blade2' CRS-2676: Start of 'ora.crsd' on 'blade2' succeeded CRS-2672: Attempting to start 'ora.evmd' on 'blade2' CRS-2676: Start of 'ora.evmd' on 'blade2' succeeded blade2 2009/11/12 14:44:31 /u01/grid/cdata/blade2/backup_20091112_144431.olr Preparing packages for installation... cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded Updating inventory properties for clusterware Starting Oracle Universal Installer... Checking swap space: must be greater than 500 MB. Actual 4048 MB Passed The inventory pointer is located at /etc/oraInst.loc The inventory is located at /u01/app/oraInventory 'UpdateNodeList' was successful.
10. After executing root.sh on all cluster nodes, Oracle OUI will continue to configure Oracle Grid Infrastructure for a Cluster.
11. Oracle runs cluvfy again after the configuration and posts any errors on the screen. In this example, the error is about the inconsistent name resolution for the SCAN, which caused the verification of the SCAN VIP and listener setup to fail. According to Metalink Note 887471.1, this error can be ignored because we are not using DNS in our network.
After you press OK and continue, the Oracle Grid Infrastructure installation is complete.
If anything fails during the configuration, check the configuration log file for more details. The configuration log file is located in the Oracle Inventory location.
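The inventory location itself is recorded in /etc/oraInst.loc, so the configuration and installation logs can be found along the following lines; /u01/app/oraInventory is the inventory location reported during this example's installation.

cat /etc/oraInst.loc              # shows inventory_loc, /u01/app/oraInventory in this example
ls -lt /u01/app/oraInventory/logs # the most recent installActions*.log files cover this installation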
Performing post-installation tasks

To confirm Oracle Clusterware is running correctly, use this command:
$CRS_HOME/bin/crsctl status resource -w "TYPE co 'ora'" -t
[oracle@blade1 bin]$ ./crsctl status resource -w "TYPE co 'ora'" -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.DISK.dg
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.LISTENER.lsnr
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.LOG.dg
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.asm
               ONLINE  ONLINE       blade1                   Started
               ONLINE  ONLINE       blade2                   Started
ora.data.data_db1.acfs
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.eons
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.gsd
               OFFLINE OFFLINE      blade1
               OFFLINE OFFLINE      blade2
ora.net1.network
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.ons
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
ora.registry.acfs
               ONLINE  ONLINE       blade1
               ONLINE  ONLINE       blade2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       blade1
ora.blade1.vip
      1        ONLINE  ONLINE       blade1
ora.blade2.vip
      1        ONLINE  ONLINE       blade2
ora.oc4j
      1        OFFLINE OFFLINE
ora.orcl.db
      1        ONLINE  ONLINE       blade1                   Open
      2        ONLINE  ONLINE       blade2                   Open
ora.scan1.vip
      1        ONLINE  ONLINE       blade1
Another command, crsctl check cluster -all, can be used to check the cluster status on all nodes.
[root@blade1 logs]# /u01/grid/bin/crsctl check cluster -all
**************************************************************
blade1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
blade2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Finally, the command crsctl check crs can be used for a less detailed check.
[oracle@blade1 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
After the installation of Oracle Clusterware, Oracle recommends backing up root.sh and emkey.ora for future use. The emkey.ora file is located in the $ORACLE_HOME/<hostname>_<SID>/sysman/config directory; in this example, it is under /u01/app/oracle/blade1_orcl/sysman/config. This file contains the encryption key for all Enterprise Manager data.
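A simple way to take those backups is sketched below; the backup directory is an illustrative assumption, while the root.sh and emkey.ora paths are the ones used in this example.

# As root; /root/grid-install-backup is only an example destination
mkdir -p /root/grid-install-backup
cp /u01/grid/root.sh /root/grid-install-backup/
cp /u01/app/oracle/blade1_orcl/sysman/config/emkey.ora /root/grid-install-backup/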
Installing Oracle Database 11g Release 2 (11.2.0.1)
Pre-Installation tasks
All of the pre-installation tasks for Oracle Database 11g Release 2 are the same as the pre-installation tasks for Oracle Clusterware.
Running Cluster Verification Utility
The Cluster Verification Utility (CVU) can be used to verify that the systems are ready to install Oracle Database 11g Release 2 with Oracle RAC.
The command cluvfy stage -pre dbcfg -n <nodelist> -d $ORACLE_HOME is used to pre-check the requirements for an Oracle Database with Oracle RAC installation. Log in as the oracle user and run the cluvfy command.
[[email protected] ~]$ cluvfy stage -pre dbcfg -n blade1,blade2 -d /d01/app/oracle/product/11.2.0/dbhome_1 Performing pre-checks for database configuration Checking node reachability... Node reachability check passed from node "blade1" Checking user equivalence... User equivalence check passed for user "oracle" Checking node connectivity... Checking hosts config file... Verification of the hosts config file successful Node connectivity passed for subnet "10.10.10.0" with node(s) blade2,blade1 TCP connectivity check passed for subnet "10.10.10.0" Node connectivity passed for subnet "9.38.158.128" with node(s) blade2,blade1 TCP connectivity check passed for subnet "9.38.158.128" Interfaces found on subnet "9.38.158.128" that are likely candidates for VIP are: blade2 eth1:9.38.158.143 eth1:9.38.158.233
blade1 eth1:9.38.158.142 eth1:9.38.158.232 eth1:9.38.158.231 Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are: blade2 eth0:10.10.10.12 blade1 eth0:10.10.10.11 Node connectivity check passed Total memory check passed Available memory check passed Swap space check passed Free disk space check passed for "blade2:/d01/app/oracle/product/11.2.0/dbhome_1" Free disk space check passed for "blade1:/d01/app/oracle/product/11.2.0/dbhome_1" Fr