
What is RAC


What is RAC?
RAC stands for Real Application Clusters. It is a clustering solution from Oracle Corporation that ensures high availability of databases by providing instance failover and media failover features.

Mention the Oracle RAC software components:-
Oracle RAC is composed of two or more database instances. They are composed of memory structures and background processes, the same as a single-instance database.

Oracle RAC instances use two processes, GES (Global Enqueue Service) and GCS (Global Cache Service), that enable Cache Fusion.

Oracle RAC instances are composed of the following background processes:
ACMS - Atomic Controlfile to Memory Service
GTX0-j - Global Transaction Process
LMON - Global Enqueue Service Monitor
LMD - Global Enqueue Service Daemon
LMS - Global Cache Service Process
LCK0 - Instance Enqueue Process
RMSn - Oracle RAC Management Processes
RSMN - Remote Slave Monitor

What is GRD?
GRD stands for Global Resource Directory. The GES and GCS maintain records of the status of each datafile and each cached block using the Global Resource Directory. This process is referred to as Cache Fusion and helps in data integrity.

Give details on Cache Fusion:-
Oracle RAC is composed of two or more instances. When a block of data is read from a datafile by an instance within the cluster and another instance needs the same block, it is easier to get the block image from the instance that has the block in its SGA rather than reading it from disk. To enable inter-instance communication, Oracle RAC makes use of interconnects. The Global Enqueue Service (GES) monitor and the Instance Enqueue Process manage Cache Fusion.

Give details on ACMS:-
ACMS stands for Atomic Controlfile to Memory Service. In an Oracle RAC environment, ACMS is an agent that ensures a distributed SGA memory update, i.e., SGA updates are globally committed on success or globally aborted in the event of a failure.

Give details on GTX0-j:-
This process provides transparent support for XA global transactions in a RAC environment. The database autotunes the number of these processes based on the workload of XA global transactions.

Give details on LMON:-
This process monitors global enqueues and resources across the cluster and performs global enqueue recovery operations. It is called the Global Enqueue Service Monitor.

Give details on LMD:-
This process is called the Global Enqueue Service Daemon. It manages incoming remote resource requests within each instance.

Give details on LMS:-
This process is called the Global Cache Service Process. It maintains the status of datafiles and each cached block by recording information in the Global Resource Directory (GRD). It also controls the flow of messages to remote instances, manages global data block access, and transmits block images between the buffer caches of different instances. This processing is part of the Cache Fusion feature.

Give details on LCK0:-
This process is called the Instance Enqueue Process. It manages non-Cache Fusion resource requests such as library and row cache requests.

Give details on RMSn:-
These processes are called the Oracle RAC Management Processes. They perform manageability tasks for Oracle RAC, including the creation of resources related to Oracle RAC when new instances are added to the cluster.

Give details on RSMN:-
This process is called the Remote Slave Monitor. It manages background slave process creation and communication on remote instances. It is a background slave process that performs tasks on behalf of a coordinating process running in another instance.

What components in RAC must reside in shared storage?
All datafiles, controlfiles, SPFILEs, and redo log files must reside on cluster-aware shared storage.
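A quick way to confirm where these files live is to query the data dictionary from any open instance. A minimal sketch (file names and paths will differ in every environment):

SQL> SELECT name FROM v$datafile
     UNION ALL SELECT name FROM v$controlfile
     UNION ALL SELECT member FROM v$logfile;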

What is the significance of using cluster-aware shared storage in an Oracle RAC environment?
All instances of an Oracle RAC database can access all the datafiles, control files, SPFILEs, and redo log files when these files are hosted on cluster-aware shared storage, which is a group of shared disks.

Give few examples for solutions that support cluster storage:-
ASM (Automatic Storage Management), raw disk devices, network file system (NFS), OCFS2 and OCFS (Oracle Cluster File System).

What is an interconnect network?
An interconnect network is a private network that connects all of the servers in a cluster. The interconnect network uses a switch (or multiple switches) that only the nodes in the cluster can access.

How can we configure the cluster interconnect?
Configure User Datagram Protocol (UDP) on Gigabit Ethernet for the cluster interconnect. On UNIX and Linux systems, the UDP and RDS (Reliable Datagram Sockets) protocols are used by Oracle Clusterware. Windows clusters use the TCP protocol.
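To see which networks Oracle Clusterware has registered for the public and private (interconnect) roles, the oifcfg utility can be queried. The interface names and subnets below are purely illustrative:

$ oifcfg getif
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect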

Can we use crossover cables with Oracle Clusterware interconnects?
No, crossover cables are not supported with Oracle Clusterware interconnects.

What is the use of the cluster interconnect?
The cluster interconnect is used by Cache Fusion for inter-instance communication.

How do users connect to the database in an Oracle RAC environment?
Users can access a RAC database using a client/server configuration or through one or more middle tiers, with or without connection pooling. Users can use the Oracle Services feature to connect to the database.

What is the use of a service in an Oracle RAC environment?
Applications should use the services feature to connect to the Oracle database. Services enable us to define rules and characteristics to control how users and applications connect to database instances.
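For illustration only (the database, service, and instance names are hypothetical), a service with one preferred and one available instance can be created and managed with srvctl:

srvctl add service -d racdb -s oltp -r racdb1 -a racdb2
srvctl start service -d racdb -s oltp
srvctl status service -d racdb -s oltp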

What are the characteristics controlled by the Oracle services feature?
The characteristics include a unique name, workload balancing and failover options, and high availability characteristics.

What enables the load balancing of applications in RAC?
Oracle Net Services enables the load balancing of application connections across all of the instances in an Oracle RAC database.


What is a virtual IP address or VIP?
A virtual IP address (VIP) is an alternate IP address that client connections use instead of the standard public IP address. To configure a VIP address, we need to reserve a spare IP address for each node, and the IP addresses must use the same subnet as the public network.

What is the use of VIP?
If a node fails, then the node's VIP address fails over to another node, on which the VIP address can accept TCP connections but cannot accept Oracle connections.

Give situations under which VIP address failover happens:-
VIP address failover happens when the node on which the VIP address runs fails, when all interfaces for the VIP address fail, or when all interfaces for the VIP address are disconnected from the network.

What is the significance of VIP address failover?
When a VIP address failover happens, clients that attempt to connect to the VIP address receive a rapid connection refused error. They do not have to wait for TCP connection timeout messages.
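The VIP is managed as a node application along with GSD, ONS, and the listener, so its state can be checked from any node. A sketch of the sort of output to expect (node name taken from the example cluster used later in this document):

[oracle@racnode1 ~]$ srvctl status nodeapps -n racnode1
VIP is running on node: racnode1
GSD is running on node: racnode1
Listener is running on node: racnode1
ONS daemon is running on node: racnode1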

What are the administrative tools used for Oracle RAC environments?
An Oracle RAC cluster can be administered as a single image using OEM (Enterprise Manager), SQL*Plus, Server Control (SRVCTL), the Cluster Verification Utility (CVU), DBCA, and NETCA.

How do we verify that RAC instances are running?
Issue the following query from any one node, connecting through SQL*Plus:

$ connect sys/sys as sysdba
SQL> select * from V$ACTIVE_INSTANCES;

The query gives the instance number under the INST_NUMBER column and host_name:instance_name under the INST_NAME column.

What is FAN?
Fast Application Notification (FAN) relates to events concerning instances, services, and nodes. It is a notification mechanism that Oracle RAC uses to notify other processes about configuration and service level information, including service status changes such as UP or DOWN events. Applications can respond to FAN events and take immediate action.


Where can we apply FAN UP and DOWN events?
FAN UP and FAN DOWN events can be applied to instances, services, and nodes.

State the use of FAN events in case of a cluster configuration change:-
During a cluster configuration change, the Oracle RAC high availability framework publishes a FAN event immediately when a state change occurs in the cluster, so applications can receive FAN events and react immediately. This prevents applications from polling the database and detecting a problem only after such a state change.

Why should we have separate homes for the ASM instance?
It is a good practice to keep the ASM home separate from the database home (ORACLE_HOME). This helps in upgrading and patching ASM and the Oracle database software independently of each other. Also, we can deinstall the Oracle database software independently of the ASM instance.

What is the advantage of using ASM?
ASM is the Oracle-recommended storage option for RAC databases, as ASM maximizes performance by managing the storage configuration across the disks. ASM does this by distributing the database files across all of the available storage within our cluster database environment.
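As a hedged illustration (the disk group name and disk paths are hypothetical), an ASM disk group is created from an ASM instance with a statement such as:

SQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY
     DISK '/dev/raw/raw6', '/dev/raw/raw7';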

What is rolling upgrade?
Rolling upgrade is a new ASM feature in Database 11g. ASM instances in Oracle Database 11g (from release 11.1) can be upgraded or patched using the rolling upgrade feature. This enables us to patch or upgrade ASM nodes in a clustered environment without affecting database availability. During a rolling upgrade we can maintain a functional cluster while one or more of the nodes in the cluster are running different software versions.

Can rolling upgrade be used to upgrade from a 10g to an 11g database?
No, it can be used only for Oracle Database 11g releases (from 11.1) onward.

State the initialization parameters that must have the same value for every instance in an Oracle RAC database:-
Some initialization parameters are critical at database creation time and must have the same values. Their values must be specified in the SPFILE or PFILE for every instance. The parameters that must be identical on every instance are:
ACTIVE_INSTANCE_COUNT
ARCHIVE_LAG_TARGET
COMPATIBLE
CLUSTER_DATABASE
CLUSTER_DATABASE_INSTANCES
CONTROL_FILES
DB_BLOCK_SIZE
DB_DOMAIN
DB_FILES
DB_NAME
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
DB_UNIQUE_NAME
INSTANCE_TYPE (RDBMS or ASM)
PARALLEL_MAX_SERVERS
REMOTE_LOGIN_PASSWORDFILE
UNDO_MANAGEMENT

Must DML_LOCKS and RESULT_CACHE_MAX_SIZE be identical on all instances?
These parameters need to be identical on every instance only if their values are set to zero.
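For illustration (the racdb database and instance names are hypothetical), a RAC SPFILE typically carries the cluster-wide values with the * prefix and the per-instance values with an instance-name prefix:

*.cluster_database=TRUE
*.db_name='racdb'
*.db_block_size=8192
*.undo_management='AUTO'
racdb1.instance_number=1
racdb2.instance_number=2
racdb1.undo_tablespace='UNDOTBS1'
racdb2.undo_tablespace='UNDOTBS2'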

What two parameters must be set at the time of starting up an ASM instance in a RAC environment?
The parameters CLUSTER_DATABASE and INSTANCE_TYPE must be set.

Mention the components of Oracle Clusterware:-
Oracle Clusterware is made up of components such as the voting disk and the Oracle Cluster Registry (OCR).

What is a CRS resource?
Oracle Clusterware is used to manage high-availability operations in a cluster. Anything that Oracle Clusterware manages is known as a CRS resource. Some examples of CRS resources are a database, an instance, a service, a listener, a VIP address, an application process, etc.

What is the use of OCR?
Oracle Clusterware manages CRS resources based on the configuration information of CRS resources stored in the OCR (Oracle Cluster Registry).

How does Oracle Clusterware manage CRS resources?
Oracle Clusterware manages CRS resources based on the configuration information of CRS resources stored in the OCR (Oracle Cluster Registry).

Name some Oracle Clusterware tools and their uses:
OIFCFG - allocating and deallocating network interfaces
OCRCONFIG - command-line tool for managing the Oracle Cluster Registry
OCRDUMP - dumps the contents of the OCR to a text file
CVU - Cluster Verification Utility, used to verify the state of the cluster and its components

What are the modes of deleting instances from Oracle Real Application Clusters databases?
We can delete instances using silent mode or interactive mode using DBCA (Database Configuration Assistant).

How do we remove ASM from an Oracle RAC environment?
We need to stop and delete the instance on the node first, in interactive or silent mode. After that, ASM can be removed using the srvctl tool as follows:

srvctl stop asm -n node_name
srvctl remove asm -n node_name

We can verify if ASM has been removed by issuing the following command:

srvctl config asm -n node_name

How do we verify that an instance has been removed from the OCR after deleting an instance?
Issue the following srvctl command:

srvctl config database -d database_name
cd CRS_HOME/bin
./crs_stat

How do we verify an existing current backup of the OCR?
We can verify the current backups of the OCR using the following command:

ocrconfig -showbackup

What are the performance views in an Oracle RAC environment?
We have V$ views that are instance specific. In addition, we have GV$ views, called global views, that have an INST_ID column of numeric data type. GV$ views obtain information from the individual V$ views.
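For example, the following query, run from any one instance, lists every open instance in the cluster through a GV$ view:

SQL> SELECT inst_id, instance_name, host_name, status FROM gv$instance;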

What are the types of connection load balancing?
There are two types of connection load balancing: server-side load balancing and client-side load balancing.

What is the difference between server-side and client-side connection load balancing?
Client-side load balancing happens at the client, which chooses at random from the list of listener addresses in its connect descriptor. In the case of server-side load balancing, the listener uses the load balancing advisory to redirect connections to the instance providing the best service.
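A sketch of a client-side load-balancing entry in tnsnames.ora, assuming hypothetical VIP host names and the example service name used later in this document; LOAD_BALANCE=ON makes the client pick an address at random, while server-side balancing is performed by the listener once the connection arrives:

RACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = racdb.idevelopment.info)
    )
  )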


Give the usage of srvctl:-

srvctl start instance -d db_name -i "inst_name_list" [-o start_options]
srvctl stop instance -d db_name -i "inst_name_list" [-o stop_options]
srvctl stop instance -d orcl -i "orcl3,orcl4" -o immediate
srvctl start database -d db_name [-o start_options]
srvctl stop database -d db_name [-o stop_options]
srvctl start database -d orcl -o mount

Cluster daemons

Daemon means background process.
1) crsd - updates the OCR file; it handles the node application work (ONS, VIP, GSD).
2) cssd - updates the voting disk file and monitors node membership; whenever the heartbeat (HB) fails between the nodes, the cssd process reboots the node.
3) evmd - the event manager daemon; publishes cluster event (diagnostic) information.
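These daemons can be observed on a running node with a simple process listing, and the overall clusterware health can be checked with crsctl (output abbreviated and illustrative):

[root@racnode1 ~]# ps -ef | grep -E 'crsd|cssd|evmd' | grep -v grep
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy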

http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_65.shtml

Overview

Oracle Clusterware 10g, formerly known as Cluster Ready Services (CRS), is software that, when installed on servers running the same operating system, enables the servers to be bound together to operate and function as a single server or cluster. This infrastructure simplifies the requirements for an Oracle Real Application Clusters (RAC) database by providing cluster software that is tightly integrated with the Oracle Database.

The Oracle Clusterware requires two critical clusterware components: a voting disk to record node membership information and the Oracle Cluster Registry (OCR) to record cluster configuration information:

Voting Disk

The voting disk is a shared partition that Oracle Clusterware uses to verify cluster node membership and status. Oracle Clusterware uses the voting disk to determine which instances are members of a cluster by way of a health check and arbitrates cluster ownership among the instances in case of network failures. The primary function of the voting disk is to manage node membership and prevent what is known as Split Brain Syndrome in which two or more instances attempt to control the RAC database. This can occur in cases where there is a break in communication between nodes through the interconnect.

The voting disk must reside on a shared disk(s) that is accessible by all of the nodes in the cluster. For high availability, Oracle recommends that you have multiple voting disks. Oracle Clusterware can be configured to maintain multiple voting disks (multiplexing) but you must have an odd number of voting disks, such as three, five, and so on. Oracle Clusterware supports a maximum of 32 voting disks. If you define a single voting disk, then you should use external mirroring to provide redundancy.
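As an illustrative sketch (the device path is hypothetical), an additional voting disk can be added and then listed with crsctl; in Oracle Clusterware 10g Release 2 the -force option is generally required and should only be used while the CRS stack is down on all nodes:

[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw6 -force
[root@racnode1 ~]# crsctl query css votedisk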

A node must be able to access more than half of the voting disks at any time. For example, if you have five voting disks configured, then a node must be able to access at least three of the voting disks at any time. If a node cannot access the minimum required number of voting disks it is evicted, or removed, from the cluster. After the cause of the failure has been corrected and access to the voting disks has been restored, you can instruct Oracle Clusterware to recover the failed node and restore it to the cluster.

Oracle Cluster Registry (OCR)

Maintains cluster configuration information as well as configuration information about any cluster database within the cluster. The OCR is the repository of configuration information for the cluster and manages information such as the cluster node list and instance-to-node mapping information. This configuration information is used by many of the processes that make up CRS as well as other cluster-aware applications, which use this repository to share information among themselves. Some of the main components included in the OCR are:

Node membership information

Database instance, node, and other mapping information

ASM (if configured)

Application resource profiles such as VIP addresses, services, etc.

Service characteristics

Information about processes that Oracle Clusterware controls

Information about any third-party applications controlled by CRS (10g R2 and later)

The OCR stores configuration information in a series of key-value pairs within a directory tree structure. To view the contents of the OCR in a human-readable format, run the ocrdump command. This will dump the contents of the OCR into an ASCII text file in the current directory named OCRDUMPFILE.

The OCR must reside on a shared disk(s) that is accessible by all of the nodes in the cluster. Oracle Clusterware 10g Release 2 allows you to multiplex the OCR and Oracle recommends that you use this feature to ensure cluster high availability. Oracle Clusterware allows for a maximum of two OCR locations; one is the primary and the second is an OCR mirror. If you define a single OCR, then you should use external mirroring to provide redundancy. You can replace a failed OCR online, and you can update the OCR through supported APIs such as Enterprise Manager, the Server Control Utility (SRVCTL), or the Database Configuration Assistant (DBCA).


This article provides a detailed look at how to administer the two critical Oracle Clusterware components — the voting disk and the Oracle Cluster Registry (OCR). The examples described in this guide were tested with Oracle RAC 10g Release 2 (10.2.0.4) on the Linux x86 platform.

Example Configuration

The example configuration used in this article consists of a two-node RAC with a clustered database named racdb.idevelopment.info running Oracle RAC 10g Release 2 on the Linux x86 platform. The two node names are racnode1 and racnode2, each hosting a single Oracle instance named racdb1 and racdb2 respectively. For a detailed guide on building the example clustered database environment, please see:

  Building an Inexpensive Oracle RAC 10g Release 2 on Linux - (CentOS 5.3 / iSCSI)

The example Oracle Clusterware environment is configured with a single voting disk and a single OCR file on an OCFS2 clustered file system. Note that the voting disk is owned by the oracle user in the oinstall group with 0644 permissions while the OCR file is owned by root in the oinstall group with 0640 permissions:

[oracle@racnode1 ~]$ ls -l /u02/oradata/racdb
total 16608
-rw-r--r-- 1 oracle oinstall 10240000 Aug 26 22:43 CSSFile
drwxr-xr-x 2 oracle oinstall     3896 Aug 26 23:45 dbs/
-rw-r----- 1 root   oinstall  6836224 Sep  3 23:47 OCRFile

Check Current OCR File

[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4660
     Available space (kbytes) :     257460
     ID                       :    1331197
     Device/File Name         : /u02/oradata/racdb/OCRFile
                                Device/File integrity check succeeded

                                Device/File not configured

     Cluster registry integrity check succeeded

Check Current Voting Disk

[oracle@racnode1 ~]$ crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile

located 1 votedisk(s).

Preparation

To prepare for the examples used in this guide, five new iSCSI volumes were created from the SAN and will be bound to RAW devices on all nodes in the RAC cluster. These five new volumes will be used to


demonstrate how to move the current voting disk and OCR file from an OCFS2 file system to RAW devices:

Five New iSCSI Volumes and their Local Device Name Mappings

iSCSI Target Name                          Local Device Name          Disk Size
iqn.2006-01.com.openfiler:racdb.ocr1       /dev/iscsi/ocr1/part       512 MB
iqn.2006-01.com.openfiler:racdb.ocr2       /dev/iscsi/ocr2/part       512 MB
iqn.2006-01.com.openfiler:racdb.voting1    /dev/iscsi/voting1/part    32 MB
iqn.2006-01.com.openfiler:racdb.voting2    /dev/iscsi/voting2/part    32 MB
iqn.2006-01.com.openfiler:racdb.voting3    /dev/iscsi/voting3/part    32 MB

After creating the new iSCSI volumes from the SAN, they now need to be configured for access and bound to RAW devices by all Oracle RAC nodes in the database cluster.

1. From all Oracle RAC nodes in the cluster as root, discover the five new iSCSI volumes from the SAN which will be used to store the voting disks and OCR files.

[root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting3

[root@racnode2 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting3

2. Manually login to the new iSCSI targets from all Oracle RAC nodes in the cluster.

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr2 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting2 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting3 -p 192.168.2.195 -l

[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr1 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr2 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting1 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting2 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting3 -p 192.168.2.195 -l

3. Create a single primary partition on each of the five new iSCSI volumes that span the entire disk. Perform this from only one of the Oracle RAC nodes in the cluster:

[root@racnode1 ~]# fdisk /dev/iscsi/ocr1/part
[root@racnode1 ~]# fdisk /dev/iscsi/ocr2/part
[root@racnode1 ~]# fdisk /dev/iscsi/voting1/part
[root@racnode1 ~]# fdisk /dev/iscsi/voting2/part
[root@racnode1 ~]# fdisk /dev/iscsi/voting3/part

4. Re-scan the SCSI bus from all Oracle RAC nodes in the cluster:

[root@racnode2 ~]# partprobe

5. Create a shell script (/usr/local/bin/setup_raw_devices.sh) on all Oracle RAC nodes in the cluster to bind the five Oracle Clusterware component devices to RAW devices as follows:

# +---------------------------------------------------------+
# | FILE: /usr/local/bin/setup_raw_devices.sh               |
# +---------------------------------------------------------+

# +---------------------------------------------------------+
# | Bind OCR files to RAW device files.                     |
# +---------------------------------------------------------+
/bin/raw /dev/raw/raw1 /dev/iscsi/ocr1/part1
/bin/raw /dev/raw/raw2 /dev/iscsi/ocr2/part1
sleep 3
/bin/chown root:oinstall /dev/raw/raw1
/bin/chown root:oinstall /dev/raw/raw2
/bin/chmod 0640 /dev/raw/raw1
/bin/chmod 0640 /dev/raw/raw2

# +---------------------------------------------------------+
# | Bind voting disks to RAW device files.                  |
# +---------------------------------------------------------+
/bin/raw /dev/raw/raw3 /dev/iscsi/voting1/part1
/bin/raw /dev/raw/raw4 /dev/iscsi/voting2/part1
/bin/raw /dev/raw/raw5 /dev/iscsi/voting3/part1
sleep 3
/bin/chown oracle:oinstall /dev/raw/raw3
/bin/chown oracle:oinstall /dev/raw/raw4
/bin/chown oracle:oinstall /dev/raw/raw5
/bin/chmod 0644 /dev/raw/raw3
/bin/chmod 0644 /dev/raw/raw4
/bin/chmod 0644 /dev/raw/raw5

6. From all Oracle RAC nodes in the cluster, change the permissions of the new shell script to execute:


[root@racnode1 ~]# chmod 755 /usr/local/bin/setup_raw_devices.sh
[root@racnode2 ~]# chmod 755 /usr/local/bin/setup_raw_devices.sh

7. Manually execute the new shell script from all Oracle RAC nodes in the cluster to bind the voting disks to RAW devices:

[root@racnode1 ~]# /usr/local/bin/setup_raw_devices.sh
/dev/raw/raw1: bound to major 8, minor 97
/dev/raw/raw2: bound to major 8, minor 17
/dev/raw/raw3: bound to major 8, minor 1
/dev/raw/raw4: bound to major 8, minor 49
/dev/raw/raw5: bound to major 8, minor 33

[root@racnode2 ~]# /usr/local/bin/setup_raw_devices.sh
/dev/raw/raw1: bound to major 8, minor 65
/dev/raw/raw2: bound to major 8, minor 49
/dev/raw/raw3: bound to major 8, minor 33
/dev/raw/raw4: bound to major 8, minor 1
/dev/raw/raw5: bound to major 8, minor 17

8. Check that the character (RAW) devices were created from all Oracle RAC nodes in the cluster:

[root@racnode1 ~]# ls -l /dev/raw
total 0
crw-r----- 1 root   oinstall 162, 1 Sep 24 00:48 raw1
crw-r----- 1 root   oinstall 162, 2 Sep 24 00:48 raw2
crw-r--r-- 1 oracle oinstall 162, 3 Sep 24 00:48 raw3
crw-r--r-- 1 oracle oinstall 162, 4 Sep 24 00:48 raw4
crw-r--r-- 1 oracle oinstall 162, 5 Sep 24 00:48 raw5

[root@racnode2 ~]# ls -l /dev/raw
total 0
crw-r----- 1 root   oinstall 162, 1 Sep 24 00:48 raw1
crw-r----- 1 root   oinstall 162, 2 Sep 24 00:48 raw2
crw-r--r-- 1 oracle oinstall 162, 3 Sep 24 00:48 raw3
crw-r--r-- 1 oracle oinstall 162, 4 Sep 24 00:48 raw4
crw-r--r-- 1 oracle oinstall 162, 5 Sep 24 00:48 raw5

[root@racnode1 ~]# raw -qa
/dev/raw/raw1: bound to major 8, minor 97
/dev/raw/raw2: bound to major 8, minor 17
/dev/raw/raw3: bound to major 8, minor 1
/dev/raw/raw4: bound to major 8, minor 49
/dev/raw/raw5: bound to major 8, minor 33

[root@racnode2 ~]# raw -qa
/dev/raw/raw1: bound to major 8, minor 65
/dev/raw/raw2: bound to major 8, minor 49
/dev/raw/raw3: bound to major 8, minor 33
/dev/raw/raw4: bound to major 8, minor 1
/dev/raw/raw5: bound to major 8, minor 17

9. Include the new shell script in /etc/rc.local to run on each boot from all Oracle RAC nodes in the cluster:

[root@racnode1 ~]# echo "/usr/local/bin/setup_raw_devices.sh" >> /etc/rc.local
[root@racnode2 ~]# echo "/usr/local/bin/setup_raw_devices.sh" >> /etc/rc.local


10. Once the raw devices are created, use the dd command to zero out the device and make sure no data is written to the raw devices. Only perform this action from one of the Oracle RAC nodes in the cluster:

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1
dd: writing to '/dev/raw/raw1': No space left on device
1048516+0 records in
1048515+0 records out
536839680 bytes (537 MB) copied, 773.145 seconds, 694 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw2
dd: writing to '/dev/raw/raw2': No space left on device
1048516+0 records in
1048515+0 records out
536839680 bytes (537 MB) copied, 769.974 seconds, 697 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
dd: writing to '/dev/raw/raw3': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 47.9176 seconds, 700 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw4
dd: writing to '/dev/raw/raw4': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 47.9915 seconds, 699 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw5
dd: writing to '/dev/raw/raw5': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 48.2684 seconds, 695 kB/s

  It is highly recommended to take a backup of the voting disk and OCR file before making any changes! Instructions are included in this guide on how to perform backups of the voting disk and OCR file.

  CRS_home

The Oracle Clusterware binaries included in this article (i.e. crs_stat, ocrcheck, crsctl, etc.) are being executed from the Oracle Clusterware home directory which for the purpose of this article is /u01/app/crs. The environment variable $ORA_CRS_HOME is set for both the oracle and root user accounts to this directory and is also included in the $PATH:

[root@racnode1 ~]# echo $ORA_CRS_HOME
/u01/app/crs

[root@racnode1 ~]# which ocrcheck
/u01/app/crs/bin/ocrcheck


Administering the OCR File

View OCR Configuration Information

Two methods exist to verify how many OCR files are configured for the cluster as well as their location. If the cluster is up and running, use the ocrcheck utility as either the oracle or root user account:

[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4660
     Available space (kbytes) :     257460
     ID                       :    1331197
     Device/File Name         : /u02/oradata/racdb/OCRFile   <-- OCR (primary)
                                Device/File integrity check succeeded

                                Device/File not configured   <-- OCR Mirror (not configured)

     Cluster registry integrity check succeeded

If CRS is down, you can still determine the location and number of OCR files by viewing the file ocr.loc, whose location is somewhat platform dependent. For example, on the Linux platform it is located in /etc/oracle/ocr.loc while on Sun Solaris it is located at /var/opt/oracle/ocr.loc:

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=/u02/oradata/racdb/OCRFile
local_only=FALSE

To view the actual contents of the OCR in a human-readable format, run the ocrdump command. This command requires the CRS stack to be running. Running the ocrdump command will dump the contents of the OCR into an ASCII text file in the current directory named OCRDUMPFILE:

[root@racnode1 ~]# ocrdump
[root@racnode1 ~]# ls -l OCRDUMPFILE
-rw-r--r-- 1 root root 250304 Oct  2 22:46 OCRDUMPFILE

The ocrdump utility also allows for different output options:

#
# Write OCR contents to specified file name.
#
[root@racnode1 ~]# ocrdump /tmp/`hostname`_ocrdump_`date +%m%d%y:%H%M`

#
# Print OCR contents to the screen.
#
[root@racnode1 ~]# ocrdump -stdout -keyname SYSTEM.css

#
# Write OCR contents out to XML format.
#
[root@racnode1 ~]# ocrdump -stdout -keyname SYSTEM.css -xml > ocrdump.xml

Add an OCR File

Starting with Oracle Clusterware 10g Release 2 (10.2), users now have the ability to multiplex (mirror) the OCR. Oracle Clusterware allows for a maximum of two OCR locations; one is the primary and the second is an OCR mirror. To avoid simultaneous loss of multiple OCR files, each copy of the OCR should be placed on a shared storage device that does not share any components (controller, interconnect, and so on) with the storage devices used for the other OCR file.

Before attempting to add a mirrored OCR, determine how many OCR files are currently configured for the cluster as well as their location. If the cluster is up and running, use the ocrcheck utility as either the oracle or root user account:

[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4660
     Available space (kbytes) :     257460
     ID                       :    1331197
     Device/File Name         : /u02/oradata/racdb/OCRFile   <-- OCR (primary)
                                Device/File integrity check succeeded

                                Device/File not configured   <-- OCR Mirror (not configured yet)

     Cluster registry integrity check succeeded

If CRS is down, you can still determine the location and number of OCR files by viewing the file ocr.loc, whose location is somewhat platform dependent. For example, on the Linux platform it is located in /etc/oracle/ocr.loc while on Sun Solaris it is located at /var/opt/oracle/ocr.loc:

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=/u02/oradata/racdb/OCRFile
local_only=FALSE

The results above indicate I have only one OCR file and that it is located on an OCFS2 file system. Since we are allowed a maximum of two OCR locations, I intend to create an OCR mirror and locate it on the same OCFS2 file system in the same directory as the primary OCR.


Please note that I am doing this for the sake of brevity. The OCR mirror should always be placed on a separate device from the primary OCR file to guard against a single point of failure.

Note that the Oracle Clusterware stack should be online and running on all nodes in the cluster while adding, replacing, or removing the OCR location, so the operation does not require any system downtime.

  The operations performed in this section affect the OCR for the entire cluster. However, the ocrconfig command cannot modify OCR configuration information for nodes that are shut down or for nodes on which Oracle Clusterware is not running. So, you should avoid shutting down nodes while modifying the OCR using the ocrconfig command. If for any reason any of the nodes in the cluster are shut down while modifying the OCR using the ocrconfig command, you will need to perform a repair on the stopped node before it can be brought online to join the cluster. Please see the section "Repair an OCR File on a Local Node" for instructions on repairing the OCR file on the affected node.

You can add an OCR mirror after an upgrade or after completing the Oracle Clusterware installation. The Oracle Universal Installer (OUI) allows you to configure either one or two OCR locations during the installation of Oracle Clusterware. If you already mirror the OCR, then you do not need to add a new OCR location; Oracle Clusterware automatically manages two OCRs when you configure normal redundancy for the OCR. As previously mentioned, Oracle RAC environments do not support more than two OCR locations; a primary OCR and a secondary (mirrored) OCR.

Run the following command to add or relocate an OCR mirror using either destination_file or disk to designate the target location of the additional OCR:

ocrconfig -replace ocrmirror <destination_file>
ocrconfig -replace ocrmirror <disk>

  You must be logged in as the root user to run the ocrconfig command.

  Please note that ocrconfig -replace is the only way to add/relocate OCR files/mirrors. Attempting to copy the existing OCR file to a new location and then manually adding/changing the file pointer in the ocr.loc file is not supported and will actually fail to work.

For example:

#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Configure the shared OCR destination_file/disk before
# attempting to create the new ocrmirror on it. This example
# creates a destination_file on an OCFS2 file system.
# Failure to pre-configure the new destination_file/disk
# before attempting to run ocrconfig will result in the
# following error:
#
#     PROT-21: Invalid parameter
#
[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror

#
# Add new OCR mirror.
#
[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror

After adding the new OCR mirror, check that it can be seen from all nodes in the cluster:

#
# Verify new OCR mirror from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4668
     Available space (kbytes) :     257452
     ID                       :    1331197
     Device/File Name         : /u02/oradata/racdb/OCRFile
                                Device/File integrity check succeeded
     Device/File Name         : /u02/oradata/racdb/OCRFile_mirror   <-- New OCR Mirror
                                Device/File integrity check succeeded

     Cluster registry integrity check succeeded

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
#Device/file getting replaced by device /u02/oradata/racdb/OCRFile_mirror
ocrconfig_loc=/u02/oradata/racdb/OCRFile
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror


#
# Verify new OCR mirror from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4668
     Available space (kbytes) :     257452
     ID                       :    1331197
     Device/File Name         : /u02/oradata/racdb/OCRFile
                                Device/File integrity check succeeded
     Device/File Name         : /u02/oradata/racdb/OCRFile_mirror   <-- New OCR Mirror
                                Device/File integrity check succeeded

     Cluster registry integrity check succeeded

[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file getting replaced by device /u02/oradata/racdb/OCRFile_mirror
ocrconfig_loc=/u02/oradata/racdb/OCRFile
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror

As mentioned earlier, you can have at most two OCR files in the cluster; the primary OCR and a single OCR mirror. Attempting to add an extra mirror will actually relocate the current OCR mirror to the new location specified in the command:

[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror2
[root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror2

[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror2

[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4668
     Available space (kbytes) :     257452
     ID                       :    1331197
     Device/File Name         : /u02/oradata/racdb/OCRFile
                                Device/File integrity check succeeded
     Device/File Name         : /u02/oradata/racdb/OCRFile_mirror2   <-- Mirror was Relocated!
                                Device/File integrity check succeeded

     Cluster registry integrity check succeeded

Relocate an OCR File

Just as we were able to add a new ocrmirror while the CRS stack was online, the same holds true when relocating an OCR file or OCR mirror and therefore does not require any system downtime.


  You can relocate OCR only when the OCR is mirrored. A mirror copy of the OCR file is required to move the OCR online. If there is no mirror copy of the OCR, first create the mirror using the instructions in the previous section.

Attempting to relocate OCR when an OCR mirror does not exist will produce the following error:

ocrconfig -replace ocr /u02/oradata/racdb/OCRFile
PROT-16: Internal Error

If the OCR mirror is not required in the cluster after relocating the OCR, it can be safely removed.

Run the following command as the root account to relocate the current OCR file to a new location using either destination_file or disk to designate the new target location for the OCR:

ocrconfig -replace ocr <destination_file>
ocrconfig -replace ocr <disk>

Run the following command as the root account to relocate the current OCR mirror to a new location using either destination_file or disk to designate the new target location for the OCR mirror:

ocrconfig -replace ocrmirror <destination_file>
ocrconfig -replace ocrmirror <disk>

The following example assumes the OCR is mirrored and demonstrates how to relocate the current OCR file (/u02/oradata/racdb/OCRFile) from the OCFS2 file system to a new raw device (/dev/raw/raw1):

#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Verify current OCR configuration.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4668
     Available space (kbytes) :     257452
     ID                       :    1331197
     Device/File Name         : /u02/oradata/racdb/OCRFile   <-- Current OCR to Relocate
                                Device/File integrity check succeeded
     Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                Device/File integrity check succeeded

     Cluster registry integrity check succeeded

#
# Verify new raw storage device exists, is configured with
# the correct permissions, and can be seen from all nodes
# in the cluster.
#
[root@racnode1 ~]# ls -l /dev/raw/raw1
crw-r----- 1 root oinstall 162, 1 Oct  2 19:54 /dev/raw/raw1

[root@racnode2 ~]# ls -l /dev/raw/raw1
crw-r----- 1 root oinstall 162, 1 Oct  2 19:54 /dev/raw/raw1

#
# Clear out the contents from the new raw device.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1

#
# Relocate primary OCR file to new raw device. Note that
# there is no deletion of the old OCR file but simply a
# replacement.
#
[root@racnode1 ~]# ocrconfig -replace ocr /dev/raw/raw1

After relocating the OCR file, check that the change can be seen from all nodes in the cluster:

#
# Verify new OCR file from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4668
     Available space (kbytes) :     257452
     ID                       :    1331197
     Device/File Name         : /dev/raw/raw1   <-- Relocated OCR
                                Device/File integrity check succeeded
     Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                Device/File integrity check succeeded

     Cluster registry integrity check succeeded

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /dev/raw/raw1
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror

#
# Verify new OCR file from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4668
     Available space (kbytes) :     257452
     ID                       :    1331197
     Device/File Name         : /dev/raw/raw1   <-- Relocated OCR
                                Device/File integrity check succeeded
     Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                Device/File integrity check succeeded

     Cluster registry integrity check succeeded

[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /dev/raw/raw1
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror

After verifying the relocation was successful, remove the old OCR file at the OS level:

[root@racnode1 ~]# rm -v /u02/oradata/racdb/OCRFile
removed '/u02/oradata/racdb/OCRFile'

Repair an OCR File on a Local Node

It was mentioned in the previous section that the ocrconfig command cannot modify OCR configuration information for nodes that are shut down or for nodes on which Oracle Clusterware is not running. You may need to repair an OCR configuration on a particular node if your OCR configuration changes while that node is stopped. For example, you may need to repair the OCR on a node that was shut down while you were adding, replacing, or removing an OCR.

To repair an OCR configuration, run the following command as root from the node on which you have stopped the Oracle Clusterware daemon:

ocrconfig -repair ocr device_name

To repair an OCR mirror configuration, run the following command as root from the node on which you have stopped the Oracle Clusterware daemon:

ocrconfig -repair ocrmirror device_name


  You cannot perform this operation on a node on which the Oracle Clusterware daemon is running. The CRS stack must be shut down before attempting to repair the OCR configuration on the local node.

The ocrconfig -repair command changes the OCR configuration only on the node from which you run this command. For example, if the OCR mirror was relocated to a disk named /dev/raw/raw2 from racnode1 while the node racnode2 was down, then use the command ocrconfig -repair ocrmirror /dev/raw/raw2 on racnode2 while the CRS stack is down on that node to repair its OCR configuration:

#
# Shutdown CRS stack on node 2 and verify the CRS stack is not up.
#
[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@racnode2 ~]# ps -ef | grep d.bin | grep -v grep

#
# Relocate OCR mirror to new raw device from node 1. Note
# that node 2 is down (actually CRS down on node 2) while
# we relocate the OCR mirror.
#
[root@racnode1 ~]# ocrconfig -replace ocrmirror /dev/raw/raw2

#
# Verify relocated OCR mirror from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4668
     Available space (kbytes) :     257452
     ID                       :    1331197
     Device/File Name         : /dev/raw/raw1
                                Device/File integrity check succeeded
     Device/File Name         : /dev/raw/raw2   <-- Relocated OCR Mirror
                                Device/File integrity check succeeded

Cluster registry integrity check succeeded

[root@racnode1 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile_mirror getting replaced by device /dev/raw/raw2
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/dev/raw/raw2

#
# Node 2 does not know about the OCR mirror relocation.
#
[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile getting replaced by device /dev/raw/raw1
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror

#
# While the CRS stack is down on node 2, perform a local OCR
# repair operation to inform the node of the relocated OCR
# mirror. The ocrconfig -repair option will only update the
# OCR configuration information on node 2. If there were
# other nodes shut down during the relocation, they too will
# need to be repaired.
#
[root@racnode2 ~]# ocrconfig -repair ocrmirror /dev/raw/raw2

#
# Verify the repair updated the OCR configuration on node 2.
#
[root@racnode2 ~]# cat /etc/oracle/ocr.loc
#Device/file /u02/oradata/racdb/OCRFile_mirror getting replaced by device /dev/raw/raw2
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/dev/raw/raw2

#
# Bring up the CRS stack on node 2.
#
[root@racnode2 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly

#
# Verify node 2 is back online.
#
[root@racnode2 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.racdb.db   application    ONLINE    ONLINE    racnode1
ora....b1.inst application    ONLINE    ONLINE    racnode1
ora....b2.inst application    ONLINE    ONLINE    racnode2
ora....srvc.cs application    ONLINE    ONLINE    racnode1
ora....db1.srv application    ONLINE    ONLINE    racnode1
ora....db2.srv application    ONLINE    ONLINE    racnode2
ora....SM1.asm application    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    ONLINE    ONLINE    racnode1
ora....de1.gsd application    ONLINE    ONLINE    racnode1
ora....de1.ons application    ONLINE    ONLINE    racnode1
ora....de1.vip application    ONLINE    ONLINE    racnode1
ora....SM2.asm application    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    ONLINE    ONLINE    racnode2
ora....de2.gsd application    ONLINE    ONLINE    racnode2
ora....de2.ons application    ONLINE    ONLINE    racnode2
ora....de2.vip application    ONLINE    ONLINE    racnode2

Remove an OCR File


To remove an OCR, you need to have at least one OCR online. You may need to perform this to reduce overhead or for other storage reasons, such as stopping a mirror to move it to SAN, RAID etc. Carry out the following steps:

Check if at least one OCR is online

Verify the CRS stack is online, preferably on all nodes

Remove the OCR or OCR mirror

If using a clustered file system, remove the deleted file at the OS level

Run the following command as the root account to delete the current OCR or the current OCR mirror:

ocrconfig -replace ocr
or
ocrconfig -replace ocrmirror

For example:

#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Get the existing OCR file information by running the ocrcheck
# utility.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4668
     Available space (kbytes) :     257452
     ID                       :    1331197
     Device/File Name         : /dev/raw/raw1
                                Device/File integrity check succeeded
     Device/File Name         : /dev/raw/raw2   <-- OCR Mirror to be Removed
                                Device/File integrity check succeeded

Cluster registry integrity check succeeded


#
# Delete OCR mirror from the cluster configuration.
#
[root@racnode1 ~]# ocrconfig -replace ocrmirror

After removing the new OCR mirror, check that the change is seen from all nodes in the cluster:

#
# Verify OCR mirror was removed from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4668
     Available space (kbytes) :     257452
     ID                       :    1331197
     Device/File Name         : /dev/raw/raw1
                                Device/File integrity check succeeded

                                Device/File not configured   <-- OCR Mirror Removed

     Cluster registry integrity check succeeded

#
# Verify OCR mirror was removed from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262120
     Used space (kbytes)      :       4668
     Available space (kbytes) :     257452
     ID                       :    1331197
     Device/File Name         : /dev/raw/raw1
                                Device/File integrity check succeeded

                                Device/File not configured   <-- OCR Mirror Removed

     Cluster registry integrity check succeeded

  Removing the OCR or OCR mirror from the cluster configuration does not remove the physical file at the OS level when using a clustered file system.

Backup the OCR File

There are two methods for backing up the contents of the OCR and each backup method can be used for different recovery purposes. This section discusses how to ensure the stability of the cluster by implementing a robust backup strategy.


The first type of backup relies on automatically generated OCR file copies which are sometimes referred to as physical backups. These physical OCR file copies are automatically generated by the CRSD process on the master node and are primarily used to recover the OCR from a lost or corrupt OCR file. Your backup strategy should include procedures to copy these automatically generated OCR file copies to a secure location which is accessible from all nodes in the cluster in the event the OCR needs to be restored.

The second type of backup uses manual procedures to create OCR export files, also known as logical backups. A manual OCR export file should be created both before and after making significant configuration changes to the cluster, such as adding or deleting nodes from your environment, modifying Oracle Clusterware resources, or creating a database. If a configuration change made to the OCR causes errors, the OCR can be restored to a previous state by performing an import of the logical backup taken before the configuration change. Please note that an OCR logical export can also be used to restore the OCR from a lost or corrupt OCR file.
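For reference (the export path is hypothetical), a logical backup is taken with the ocrconfig export option and later restored with the import option; the import requires the CRS stack to be down on all nodes:

[root@racnode1 ~]# ocrconfig -export /u02/crs_backup/ocr_export_20090929.dmp
[root@racnode1 ~]# ocrconfig -import /u02/crs_backup/ocr_export_20090929.dmp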

  Unlike the methods used to backup the voting disk, attempting to backup the OCR by copying the OCR file directly at the OS level is not a valid backup and will result in errors after the restore!

Because of the importance of OCR information, Oracle recommends that you make copies of the automatically created backup files and an OCR export at least once a day. The following is a working UNIX script that can be scheduled in CRON to backup the OCR File(s) and the Voting Disk(s) on a regular basis:

  crs_components_backup_10g.ksh

Automatic OCR Backups

The Oracle Clusterware automatically creates OCR physical backups every four hours. At any one time, Oracle retains the three most recent of these four-hour backup copies of the OCR. The CRSD process that creates these backups also creates and retains an OCR backup for each full day and at the end of each week. You cannot customize the backup frequencies or the number of OCR physical backup files that Oracle retains.

The default location for generating physical backups on UNIX-based systems is CRS_home/cdata/cluster_name where cluster_name is the name of your cluster (i.e. crs). Use the ocrconfig -showbackup command to view all current OCR physical backups that were taken from the master node:

[oracle@racnode1 ~]$ ocrconfig -showbackup

racnode1     2009/09/29 13:05:22     /u01/app/crs/cdata/crs

racnode1     2009/09/29 09:05:22     /u01/app/crs/cdata/crs

racnode1     2009/09/29 05:05:22     /u01/app/crs/cdata/crs

racnode1     2009/09/28 05:05:21     /u01/app/crs/cdata/crs

racnode1     2009/09/22 05:05:13     /u01/app/crs/cdata/crs

[oracle@racnode1 ~]$ ls -l $ORA_CRS_HOME/cdata/crs
total 59276
-rw-r--r-- 1 root root 8654848 Sep 29 13:05 backup00.ocr   <-- Most recent physical backup
-rw-r--r-- 1 root root 8654848 Sep 29 09:05 backup01.ocr
-rw-r--r-- 1 root root 8654848 Sep 29 05:05 backup02.ocr
-rw-r--r-- 1 root root 8654848 Sep 29 05:05 day_.ocr
-rw-r--r-- 1 root root 8654848 Sep 28 05:05 day.ocr        <-- One day old
-rw-r--r-- 1 root root 8654848 Sep 29 05:05 week_.ocr
-rw-r--r-- 1 root root 8654848 Sep 22 05:05 week.ocr       <-- One week old

You can change the location where the CRSD process writes the physical OCR copies to using:

ocrconfig -backuploc <new_dirname>
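For example, assuming a hypothetical backup directory on shared storage accessible to all nodes:

[root@racnode1 ~]# ocrconfig -backuploc /u02/crs_backup/ocrbackup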

Restoring the OCR from an automatic physical backup is accomplished using the ocrconfig -restore command. Note that the CRS stack needs to be shut down on all nodes in the cluster prior to running the restore operation:

ocrconfig -restore <backup_file_name>

You cannot restore the OCR from a physical backup using the -import option. The only method to restore the OCR from a physical backup is to use the -restore option.

The Master Node

As documented in Doc ID: 357262.1 on the My Oracle Support web site, the CRSD process only creates automatic OCR physical backups on one node in the cluster, which is the OCR master node. It does not create automatic backup copies on the other nodes; only from the OCR master node. If the master node fails, the OCR backups will be created from the new master node. You can determine which node in the cluster is the master node by examining the $ORA_CRS_HOME/log/<node_name>/cssd/ocssd.log file on any node in the cluster. In this log file, check for reconfiguration information (reconfiguration successful) after which you will see which node is the master and how many nodes are active in the cluster:

Node 1 - (racnode1)
[ CSSD]CLSS-3000: reconfiguration successful, incarnation 1 with 2 nodes
[ CSSD]CLSS-3001: local node number 1, master node number 1

Node 2 - (racnode2)
[ CSSD]CLSS-3000: reconfiguration successful, incarnation 1 with 2 nodes
[ CSSD]CLSS-3001: local node number 2, master node number 1

Another quick approach is to use either of the following methods:

Node 1 - (racnode1)
# grep -i "master node" $ORA_CRS_HOME/log/racnode?/cssd/ocssd.log | tail -1
[ CSSD]CLSS-3001: local node number 1, master node number 1

Node 2 - (racnode2)
# grep -i "master node" $ORA_CRS_HOME/log/racnode?/cssd/ocssd.log | tail -1
[ CSSD]CLSS-3001: local node number 2, master node number 1

# If not found in the ocssd.log, then look through all
# of the ocssd archives:

Node 1 - (racnode1)
# for x in `ls -tr $ORA_CRS_HOME/log/racnode?/cssd/ocssd.*`
  do grep -i "master node" $x; done | tail -1
[ CSSD]CLSS-3001: local node number 1, master node number 1

Node 2 - (racnode2)
# for x in `ls -tr $ORA_CRS_HOME/log/racnode?/cssd/ocssd.*`
  do grep -i "master node" $x; done | tail -1
[ CSSD]CLSS-3001: local node number 2, master node number 1

# The master node information is confirmed by the
# ocrconfig -showbackup command:

# ocrconfig -showbackup

racnode1 2009/09/29 13:05:22 /u01/app/crs/cdata/crs

racnode1 2009/09/29 09:05:22 /u01/app/crs/cdata/crs

racnode1 2009/09/29 05:05:22 /u01/app/crs/cdata/crs

racnode1 2009/09/28 05:05:21 /u01/app/crs/cdata/crs

racnode1 2009/09/22 05:05:13 /u01/app/crs/cdata/crs

Because of the importance of OCR information, Oracle recommends that you make copies of the automatically created backup files at least once a day from the master node to a different device from where the primary OCR resides. You can use any backup software to copy the automatically generated physical backup files to a stable backup location:

[root@racnode1 ~]# cp -p -v -f -R /u01/app/crs/cdata /u02/crs_backup/ocrbackup/RACNODE1
'/u01/app/crs/cdata/crs/day_.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE1/cdata/crs/day_.ocr'
'/u01/app/crs/cdata/crs/backup02.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE1/cdata/crs/backup02.ocr'
'/u01/app/crs/cdata/crs/backup01.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE1/cdata/crs/backup01.ocr'
'/u01/app/crs/cdata/crs/week_.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE1/cdata/crs/week_.ocr'
'/u01/app/crs/cdata/crs/day.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE1/cdata/crs/day.ocr'
'/u01/app/crs/cdata/crs/backup00.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE1/cdata/crs/backup00.ocr'
'/u01/app/crs/cdata/crs/week.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE1/cdata/crs/week.ocr'

[root@racnode2 ~]# cp -p -v -f -R /u01/app/crs/cdata /u02/crs_backup/ocrbackup/RACNODE2
'/u01/app/crs/cdata/crs/day_.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE2/cdata/crs/day_.ocr'
'/u01/app/crs/cdata/crs/backup02.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE2/cdata/crs/backup02.ocr'
'/u01/app/crs/cdata/crs/backup01.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE2/cdata/crs/backup01.ocr'
'/u01/app/crs/cdata/crs/week_.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE2/cdata/crs/week_.ocr'
'/u01/app/crs/cdata/crs/day.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE2/cdata/crs/day.ocr'
'/u01/app/crs/cdata/crs/backup00.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE2/cdata/crs/backup00.ocr'
'/u01/app/crs/cdata/crs/week.ocr' -> '/u02/crs_backup/ocrbackup/RACNODE2/cdata/crs/week.ocr'

It is possible and recommended that shared storage be used for the backup location(s). Keep in mind that if the master node goes down and cannot be rebooted, it is possible to lose all OCR physical backups if they were all on that node. The OCR backup process, however, will start on the new master node within four hours for all new backups. It is highly recommended that you integrate OCR backups with your normal database backup strategy. If possible, use a backup location that is shared by all nodes in the cluster.
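As one hedged example of folding this into an existing backup routine, the CRSD-generated backup directory could be archived with a date stamp to a shared (or remote) location; the target path below is an assumption for illustration only:

# Archive the automatic OCR backups under a dated file name on shared storage.
[root@racnode1 ~]# tar czf /u02/crs_backup/ocrbackup/ocr_cdata_`date +%Y%m%d`.tar.gz -C /u01/app/crs cdata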

Manual OCR Exports

Performing a manual export of the OCR should be done before and after making significant configuration changes to the cluster, such as adding or deleting nodes from your environment, modifying Oracle Clusterware resources, or creating a database. This type of backup is often referred to as a logical backup. If a configuration change is made to the OCR that causes errors, the OCR can be restored to its previous state by performing an import of the logical backup taken before the configuration change. For example, if you have unresolvable configuration problems, or if you are unable to restart your cluster database after such changes, then you can restore your configuration by importing the saved OCR content from a valid configuration.

Please note that an OCR logical export can also be used to restore the OCR from a lost or corrupt OCR file.

To export the contents of the OCR to a dump file, use the following command, where backup_file_name is the name of the OCR logical backup file you want to create:

ocrconfig -export <backup_file_name>

For example:

[root@racnode1 ~]# ocrconfig -export /u02/crs_backup/ocrbackup/RACNODE1/exports/OCRFileBackup.dmp

# A second export is not strictly required, however, there is no such thing as too many backups!
[root@racnode2 ~]# ocrconfig -export /u02/crs_backup/ocrbackup/RACNODE2/exports/OCRFileBackup.dmp
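To take such a logical export automatically every night, a root crontab entry along the following lines could be used; the schedule, CRS home path, and target directory are assumptions for illustration and are not part of the original backup script:

# Nightly OCR logical export at 01:30 with a date-stamped file name (crontab of root).
30 1 * * * /u01/app/crs/bin/ocrconfig -export /u02/crs_backup/ocrbackup/exports/OCRFile_`date +\%Y\%m\%d`.dmp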

To restore the OCR from an export/logical backup, use the ocrconfig -import command. Note that the CRS stack needs to be shut down on all nodes in the cluster prior to running the restore operation. In addition, the total space required for the restored OCR location (typically 280MB) has to be pre-allocated. This is especially important when the OCR is located on a clustered file system like OCFS2.

ocrconfig -import <export_file_name>

You cannot restore the OCR from a logical backup using the -restore option. The only method to restore the OCR from a logical export is to use the -import option.

  You must be logged in as the root user to run the ocrconfig command.

Recover the OCR File

If an application fails, then before attempting to restore the OCR, restart the application. As a definitive verification that the OCR failed, run the ocrcheck command:

[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile
                                    Device/File integrity check succeeded   <-- OCR (primary) Valid
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                    Device/File integrity check succeeded   <-- OCR (mirror) Valid

Cluster registry integrity check succeeded

The example above indicates that both the primary OCR and OCR mirror checks were successful and that no problems exist with the OCR configuration.

If the ocrcheck command does not display the message 'Device/File integrity check succeeded' for at least one copy of the OCR, then both the primary OCR and the OCR mirror have failed. In this case, the CRS stack must be brought down on all nodes in the cluster to restore the OCR from a previous physical backup copy or an OCR export.

If there is at least one copy of the OCR available (either the primary OCR or the OCR mirror), you can use that valid copy to restore the contents of the other copy of the OCR. The restore in this case can be accomplished using the ocrconfig -replace command and does not require the applications or CRS stack to be down.
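The exact syntax depends on which copy failed. As a sketch using the file names from this configuration, replacing a failed primary OCR or a failed OCR mirror would look roughly like the following (the ocrmirror keyword is the counterpart used when the mirror is the copy being replaced):

# Replace (restore) the primary OCR from the surviving mirror.
[root@racnode1 ~]# ocrconfig -replace ocr /u02/oradata/racdb/OCRFile

# Replace (restore) the OCR mirror from the surviving primary.
[root@racnode1 ~]# ocrconfig -replace ocrmirror /u02/oradata/racdb/OCRFile_mirror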

This section describes a number of possible OCR recovery scenarios using the OCR configuration described by the output of the ocrcheck command above. Both the primary OCR and the OCR mirror are located on an OCFS2 file system in the same directory. The recovery scenarios demonstrated in this section will make use of both types of OCR backups — automatically generated OCR file copies and manually created OCR export files.

  Although it should go without saying, DO NOT perform these recovery scenarios on a critical system like production!

Recover OCR from Valid OCR Mirror

This section demonstrates how to restore the OCR when only one of the OCR copies is missing or corrupt. The restore process will use the good OCR copy (whether it's the primary OCR or the OCR mirror) to restore the missing/corrupt copy. Remember that if there is at least one copy of the OCR available, you can use that valid copy to restore the contents of the other copy of the OCR. The best part about this type of recovery is that it doesn't require any downtime! Oracle Clusterware and the applications can remain online during the recovery process.

For the purpose of this example, let's corrupt the primary OCR file:

[root@racnode1 ~]# dd if=/dev/zero of=/u02/oradata/racdb/OCRFile bs=4k count=100
100+0 records in
100+0 records out
409600 bytes (410 kB) copied, 0.00756842 seconds, 54.1 MB/s

Running ocrcheck picks up the now corrupted primary OCR file:

[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4668
         Available space (kbytes) :     257452
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile             <-- Corrupt OCR
                                    Device/File needs to be synchronized with the other device
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                    Device/File integrity check succeeded

Cluster registry integrity check succeeded

Note that after losing the one OCR copy (in this case, the primary OCR file), Oracle Clusterware and the applications remain online:

[root@racnode1 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.racdb.db   application    ONLINE    ONLINE    racnode1
ora....b1.inst application    ONLINE    ONLINE    racnode1
ora....b2.inst application    ONLINE    ONLINE    racnode2
ora....srvc.cs application    ONLINE    ONLINE    racnode2
ora....db1.srv application    ONLINE    ONLINE    racnode1
ora....db2.srv application    ONLINE    ONLINE    racnode2
ora....SM1.asm application    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    ONLINE    ONLINE    racnode1
ora....de1.gsd application    ONLINE    ONLINE    racnode1
ora....de1.ons application    ONLINE    ONLINE    racnode1
ora....de1.vip application    ONLINE    ONLINE    racnode1
ora....SM2.asm application    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    ONLINE    ONLINE    racnode2
ora....de2.gsd application    ONLINE    ONLINE    racnode2
ora....de2.ons application    ONLINE    ONLINE    racnode2
ora....de2.vip application    ONLINE    ONLINE    racnode2

While the applications and CRS remain online, perform the following steps to recover the primary OCR using the contents of the OCR mirror.

1. When using a clustered file system, remove the corrupt OCR file and re-initialize it:

   [root@racnode1 ~]# rm /u02/oradata/racdb/OCRFile
   [root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile
   [root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile
   [root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile
   [root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile

   NOTE: If the target OCR is located on a raw device, verify the permissions are applied correctly for an OCR file (owned by root:oinstall with 0640 permissions), that the device is being shared by all nodes in the cluster, and finally use the dd command from only one node in the cluster to zero out the device and make sure no data is written to the raw device.

   [root@racnode1 ~]# ls -l /dev/raw/raw1
   crw-r----- 1 root oinstall 162, 1 Oct 6 11:05 /dev/raw/raw1

   [root@racnode2 ~]# ls -l /dev/raw/raw1
   crw-r----- 1 root oinstall 162, 1 Oct 6 11:04 /dev/raw/raw1

   [root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1

2. Restore the primary OCR using the contents of the OCR mirror. Note that this operation is the same process used when adding a new OCR location:

   [root@racnode1 ~]# ocrconfig -replace ocr /u02/oradata/racdb/OCRFile

   NOTE: If the target OCR is located on a raw device, substitute the path name above with that of the shared device name (i.e. /dev/raw/raw1).

3. Verify the restore was successful by viewing the Clusterware alert log file:

   [root@racnode1 ~]# tail $ORA_CRS_HOME/log/racnode1/alertracnode1.log
   ...
   2009-10-06 17:46:51.118
   [crsd(11054)]CRS-1007:The OCR/OCR mirror location was replaced by /u02/oradata/racdb/OCRFile.

4. Verify the OCR configuration by running the ocrcheck command:

   [root@racnode1 ~]# ocrcheck
   Status of Oracle Cluster Registry is as follows :
            Version                  :          2
            Total space (kbytes)     :     262120
            Used space (kbytes)      :       4668
            Available space (kbytes) :     257452
            ID                       :    1331197
            Device/File Name         : /u02/oradata/racdb/OCRFile
                                       Device/File integrity check succeeded   <-- Primary OCR Restored
            Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                       Device/File integrity check succeeded

   Cluster registry integrity check succeeded

5. As the oracle user account with user equivalence enabled on all the nodes, run the cluvfy command to validate the OCR configuration:

   [oracle@racnode1 ~]$ ssh racnode1 "hostname; date"
   racnode1
   Tue Oct 6 17:52:52 EDT 2009

   [oracle@racnode1 ~]$ ssh racnode2 "hostname; date"
   racnode2
   Tue Oct 6 17:51:50 EDT 2009

   [oracle@racnode1 ~]$ cluvfy comp ocr -n all

   Verifying OCR integrity

   Checking OCR integrity...

   Checking the absence of a non-clustered configuration...
   All nodes free of non-clustered, local-only configurations.

   Uniqueness check for OCR device passed.

   Checking the version of OCR...
   OCR of correct Version "2" exists.

   Checking data integrity of OCR...
   Data integrity check for OCR passed.

   OCR integrity check passed.

   Verification of OCR integrity was successful.

Recover OCR from Automatically Generated Physical Backup

This section demonstrates how to recover the Oracle Cluster Registry from a lost or corrupt OCR file. This example assumes that both the primary OCR and the OCR mirror are lost from an accidental delete by a user and that the latest automatic OCR backup copy on the master node is accessible.

At this time, the second node in the cluster (racnode2) is the master node and currently available. We will be restoring the OCR using the latest OCR backup copy from racnode2 which is located at /u01/app/crs/cdata/crs/backup00.ocr.

Let's now corrupt the OCR by removing both the primary OCR and the OCR mirror:

[root@racnode1 ~]# rm /u02/oradata/racdb/OCRFile
[root@racnode1 ~]# rm /u02/oradata/racdb/OCRFile_mirror

Running ocrcheck fails to provide any useful information given that both OCR files are lost:

[root@racnode1 ~]# ocrcheck
PROT-602: Failed to retrieve data from the cluster registry

Note that after losing both OCR files, Oracle Clusterware and the applications remain online. Before restoring the OCR, the applications and CRS will need to be shut down as described in the steps below.

Perform the following steps to recover the OCR from the latest automatically generated physical backup:

1. With CRS still online, identify the master node (which in this example is racnode2) and all OCR backups using the ocrconfig -showbackup command:

   [root@racnode1 ~]# ocrconfig -showbackup

   racnode2     2009/10/07 12:05:18     /u01/app/crs/cdata/crs
   racnode2     2009/10/07 08:05:17     /u01/app/crs/cdata/crs
   racnode2     2009/10/07 04:05:17     /u01/app/crs/cdata/crs
   racnode2     2009/10/07 00:05:16     /u01/app/crs/cdata/crs
   racnode1     2009/09/24 08:49:19     /u01/app/crs/cdata/crs

   Note that ocrconfig -showbackup may result in a segmentation fault or simply not show any results if CRS is shut down.

2. For documentation purposes, identify the number and location of all configured OCR files that will be recovered in this example:

   [root@racnode2 ~]# cat /etc/oracle/ocr.loc
   #Device/file /u02/oradata/racdb/OCRFile getting replaced by device /u02/oradata/racdb/OCRFile
   ocrconfig_loc=/u02/oradata/racdb/OCRFile
   ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror

3. Although all OCR files have been lost or corrupted, the Oracle Clusterware daemons as well as the clustered database remain running. In this scenario, Oracle Clusterware and all managed resources need to be shut down in order to recover the OCR. Attempting to stop CRS using crsctl stop crs will fail given it cannot write to the now lost/corrupt OCR file:

   [root@racnode1 ~]# crsctl stop crs
   OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage
   Operating System error [No such file or directory] [2]

   With the environment in this unstable state, shut down all database instances from all nodes in the cluster and then reboot each node:

   [oracle@racnode1 ~]$ sqlplus / as sysdba
   SQL> shutdown immediate
   [root@racnode1 ~]# reboot

   ------------------------------------------------

   [oracle@racnode2 ~]$ sqlplus / as sysdba
   SQL> shutdown immediate
   [root@racnode2 ~]# reboot

4. When the Oracle RAC nodes come back up, note that Oracle Clusterware will fail to start as a result of the lost/corrupt OCR file:

   [root@racnode1 ~]# crs_stat -t
   CRS-0184: Cannot communicate with the CRS daemon.

   [root@racnode2 ~]# crs_stat -t
   CRS-0184: Cannot communicate with the CRS daemon.

5. When using a clustered file system, re-initialize both the primary OCR and the OCR mirror target locations identified earlier in the /etc/oracle/ocr.loc file:

   [root@racnode1 ~]# rm -f /u02/oradata/racdb/OCRFile
   [root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile
   [root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile
   [root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile
   [root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile

   [root@racnode1 ~]# rm -f /u02/oradata/racdb/OCRFile_mirror
   [root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/OCRFile_mirror
   [root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror
   [root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror
   [root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror

   NOTE: If the target OCR is located on a raw device(s), verify the permissions are applied correctly for an OCR file (owned by root:oinstall with 0640 permissions), that the device is being shared by all nodes in the cluster, and finally use the dd command from only one node in the cluster to zero out the device(s) and make sure no data is written to the raw device(s).

   [root@racnode1 ~]# ls -l /dev/raw/raw[12]
   crw-r----- 1 root oinstall 162, 1 Oct 7 15:00 /dev/raw/raw1
   crw-r----- 1 root oinstall 162, 2 Oct 7 15:00 /dev/raw/raw2

   [root@racnode2 ~]# ls -l /dev/raw/raw[12]
   crw-r----- 1 root oinstall 162, 1 Oct 7 14:59 /dev/raw/raw1
   crw-r----- 1 root oinstall 162, 2 Oct 7 14:59 /dev/raw/raw2

   [root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1   <-- OCR (primary)
   [root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw2   <-- OCR (mirror)

6. Before restoring the OCR, dump the contents of the physical backup you intend to recover from the master node (racnode2) to validate its availability as well as the accuracy of its contents:

   [root@racnode2 ~]# ocrdump -backupfile /u01/app/crs/cdata/crs/backup00.ocr
   [root@racnode2 ~]# less OCRDUMPFILE

7. With CRS down, perform the restore operation from the master node (racnode2) by applying the latest automatically generated physical backup:

   [root@racnode2 ~]# ocrconfig -restore /u01/app/crs/cdata/crs/backup00.ocr

8. Restart Oracle Clusterware on all of the nodes in the cluster by rebooting each node or by running the crsctl start crs command:

   [root@racnode1 ~]# crsctl start crs
   Attempting to start CRS stack
   The CRS stack will be started shortly

   [root@racnode2 ~]# crsctl start crs
   Attempting to start CRS stack
   The CRS stack will be started shortly

9. Verify the OCR configuration by running the ocrcheck command:

   [root@racnode1 ~]# ocrcheck
   Status of Oracle Cluster Registry is as follows :
            Version                  :          2
            Total space (kbytes)     :     262120
            Used space (kbytes)      :       4668
            Available space (kbytes) :     257452
            ID                       :    1331197
            Device/File Name         : /u02/oradata/racdb/OCRFile
                                       Device/File integrity check succeeded   <-- Primary OCR Restored
            Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                       Device/File integrity check succeeded   <-- Mirror OCR Restored

   Cluster registry integrity check succeeded

10. As the oracle user account with user equivalence enabled on all the nodes, run the cluvfy command to validate the OCR configuration:

    [oracle@racnode1 ~]$ ssh racnode1 "hostname; date"
    racnode1
    Wed Oct 7 16:29:49 EDT 2009

    [oracle@racnode1 ~]$ ssh racnode2 "hostname; date"
    racnode2
    Wed Oct 7 16:29:06 EDT 2009

    [oracle@racnode1 ~]$ cluvfy comp ocr -n all

    Verifying OCR integrity

    Checking OCR integrity...

    Checking the absence of a non-clustered configuration...
    All nodes free of non-clustered, local-only configurations.

    Uniqueness check for OCR device passed.

    Checking the version of OCR...
    OCR of correct Version "2" exists.

    Checking data integrity of OCR...
    Data integrity check for OCR passed.

    OCR integrity check passed.

    Verification of OCR integrity was successful.

11. Finally, verify the applications are running:

    [root@racnode1 ~]# crs_stat -t
    Name           Type           Target    State     Host
    ------------------------------------------------------------
    ora.racdb.db   application    ONLINE    ONLINE    racnode1
    ora....b1.inst application    ONLINE    ONLINE    racnode1
    ora....b2.inst application    ONLINE    ONLINE    racnode2
    ora....srvc.cs application    ONLINE    ONLINE    racnode2
    ora....db1.srv application    ONLINE    ONLINE    racnode1
    ora....db2.srv application    ONLINE    ONLINE    racnode2
    ora....SM1.asm application    ONLINE    ONLINE    racnode1
    ora....E1.lsnr application    ONLINE    ONLINE    racnode1
    ora....de1.gsd application    ONLINE    ONLINE    racnode1
    ora....de1.ons application    ONLINE    ONLINE    racnode1
    ora....de1.vip application    ONLINE    ONLINE    racnode1
    ora....SM2.asm application    ONLINE    ONLINE    racnode2
    ora....E2.lsnr application    ONLINE    ONLINE    racnode2
    ora....de2.gsd application    ONLINE    ONLINE    racnode2
    ora....de2.ons application    ONLINE    ONLINE    racnode2
    ora....de2.vip application    ONLINE    ONLINE    racnode2

Recover OCR from an OCR Export File

This section demonstrates how to restore the Oracle Cluster Registry to a valid state after an OCR configuration change causes unresolvable errors and renders the cluster unusable. This example can also be used to recover the OCR from a lost or corrupt OCR file. It is assumed a manual OCR export was taken prior to making the OCR configuration change that caused problems with the cluster registry:

[root@racnode1 ~]# ocrconfig -export /u02/crs_backup/ocrbackup/RACNODE1/exports/OCRFileBackup.dmp

Perform the following steps to restore the previous configuration stored in the OCR from an OCR export file:

1. For documentation purposes, identify the number and location of all configured OCR files that will be recovered in this example:

   [root@racnode2 ~]# cat /etc/oracle/ocr.loc
   #Device/file /u02/oradata/racdb/OCRFile getting replaced by device /u02/oradata/racdb/OCRFile
   ocrconfig_loc=/u02/oradata/racdb/OCRFile
   ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror

2. Place the OCR export file that you created previously using the ocrconfig -export command on a local disk for the node that will be performing the import:

   [root@racnode1 ~]# mkdir -p /u03/crs_backup/ocrbackup/exports
   [root@racnode1 ~]# cd /u02/crs_backup/ocrbackup/RACNODE1/exports
   [root@racnode1 ~]# cp -p OCRFileBackup.dmp /u03/crs_backup/ocrbackup/exports
   [root@racnode1 ~]# ls -l /u03/crs_backup/ocrbackup/exports
   total 112
   -rw-r--r-- 1 root root 110233 Oct 8 09:38 OCRFileBackup.dmp

   NOTE: The ocrconfig -import process is unable to read an OCR export file from an OCFS2 file system. Attempting to import an OCR export file that is located on an OCFS2 file system will fail with the following error:

   [root@racnode1 ~]# ocrconfig -import /u02/crs_backup/ocrbackup/RACNODE1/exports/OCRFileBackup.dmp
   PROT-8: Failed to import data from specified file to the cluster registry

   Investigating the $ORA_CRS_HOME/log/<hostname>/client/ocrconfig_pid.log will reveal the error:

   ...
   [  OCRCONF][3012240]Error[112] encountered when reading from import file
   ...

   The solution is to copy the OCR dump file to be imported from the OCFS2 file system to a file system on the local disk.

3. As the root user, stop Oracle Clusterware on all the nodes in the cluster by executing the following command:

   [root@racnode1 ~]# crsctl stop crs
   Stopping resources. This could take several minutes.
   Error while stopping resources. Possible cause: CRSD is down.

   [root@racnode2 ~]# crsctl stop crs
   Stopping resources. This could take several minutes.
   Error while stopping resources. Possible cause: CRSD is down.

4. When using a clustered file system, re-initialize / pre-allocate the space (typically 280MB) for both the primary OCR and the OCR mirror target locations identified earlier in the /etc/oracle/ocr.loc file:

   [root@racnode1 ~]# rm -f /u02/oradata/racdb/OCRFile
   [root@racnode1 ~]# dd if=/dev/zero of=/u02/oradata/racdb/OCRFile bs=4096 count=65587
   [root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile
   [root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile
   [root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile

   [root@racnode1 ~]# rm -f /u02/oradata/racdb/OCRFile_mirror
   [root@racnode1 ~]# dd if=/dev/zero of=/u02/oradata/racdb/OCRFile_mirror bs=4096 count=65587
   [root@racnode1 ~]# chown root /u02/oradata/racdb/OCRFile_mirror
   [root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/OCRFile_mirror
   [root@racnode1 ~]# chmod 640 /u02/oradata/racdb/OCRFile_mirror

   NOTE: If the target OCR is located on a raw device(s), verify the permissions are applied correctly for an OCR file (owned by root:oinstall with 0640 permissions), that the device is being shared by all nodes in the cluster, and finally use the dd command from only one node in the cluster to zero out the device(s) and make sure no data is written to the raw device(s).

   [root@racnode1 ~]# ls -l /dev/raw/raw[12]
   crw-r----- 1 root oinstall 162, 1 Oct 8 09:43 /dev/raw/raw1
   crw-r----- 1 root oinstall 162, 2 Oct 8 09:43 /dev/raw/raw2

   [root@racnode2 ~]# ls -l /dev/raw/raw[12]
   crw-r----- 1 root oinstall 162, 1 Oct 8 09:42 /dev/raw/raw1
   crw-r----- 1 root oinstall 162, 2 Oct 8 09:42 /dev/raw/raw2

   [root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1   <-- OCR (primary)
   [root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw2   <-- OCR (mirror)

5. With CRS down, as the root user, restore the OCR data by importing the contents of the OCR export file using the following command:

   [root@racnode1 ~]# ocrconfig -import /u03/crs_backup/ocrbackup/exports/OCRFileBackup.dmp

6. Restart Oracle Clusterware on all of the nodes in the cluster by rebooting each node or by running the crsctl start crs command:

   [root@racnode1 ~]# crsctl start crs
   Attempting to start CRS stack
   The CRS stack will be started shortly

   [root@racnode2 ~]# crsctl start crs
   Attempting to start CRS stack
   The CRS stack will be started shortly

7. Verify the OCR configuration by running the ocrcheck command:

   [root@racnode1 ~]# ocrcheck
   Status of Oracle Cluster Registry is as follows :
            Version                  :          2
            Total space (kbytes)     :     262120
            Used space (kbytes)      :       4668
            Available space (kbytes) :     257452
            ID                       :    1331197
            Device/File Name         : /u02/oradata/racdb/OCRFile
                                       Device/File integrity check succeeded   <-- Primary OCR Restored
            Device/File Name         : /u02/oradata/racdb/OCRFile_mirror
                                       Device/File integrity check succeeded   <-- Mirror OCR Restored

   Cluster registry integrity check succeeded

8. As the oracle user account with user equivalence enabled on all the nodes, run the cluvfy command to validate the OCR configuration:

   [oracle@racnode1 ~]$ ssh racnode1 "hostname; date"
   racnode1
   Thu Oct 8 11:34:15 EDT 2009

   [oracle@racnode1 ~]$ ssh racnode2 "hostname; date"
   racnode2
   Thu Oct 8 11:33:33 EDT 2009

   [oracle@racnode1 ~]$ cluvfy comp ocr -n all

   Verifying OCR integrity

   Checking OCR integrity...

   Checking the absence of a non-clustered configuration...
   All nodes free of non-clustered, local-only configurations.

   Uniqueness check for OCR device passed.

   Checking the version of OCR...
   OCR of correct Version "2" exists.

   Checking data integrity of OCR...
   Data integrity check for OCR passed.

   OCR integrity check passed.

   Verification of OCR integrity was successful.

9. Finally, verify the applications are running:

   [root@racnode1 ~]# crs_stat -t
   Name           Type           Target    State     Host
   ------------------------------------------------------------
   ora.racdb.db   application    ONLINE    ONLINE    racnode1
   ora....b1.inst application    ONLINE    ONLINE    racnode1
   ora....b2.inst application    ONLINE    ONLINE    racnode2
   ora....srvc.cs application    ONLINE    ONLINE    racnode2
   ora....db1.srv application    ONLINE    ONLINE    racnode1
   ora....db2.srv application    ONLINE    ONLINE    racnode2
   ora....SM1.asm application    ONLINE    ONLINE    racnode1
   ora....E1.lsnr application    ONLINE    ONLINE    racnode1
   ora....de1.gsd application    ONLINE    ONLINE    racnode1
   ora....de1.ons application    ONLINE    ONLINE    racnode1
   ora....de1.vip application    ONLINE    ONLINE    racnode1
   ora....SM2.asm application    ONLINE    ONLINE    racnode2
   ora....E2.lsnr application    ONLINE    ONLINE    racnode2
   ora....de2.gsd application    ONLINE    ONLINE    racnode2
   ora....de2.ons application    ONLINE    ONLINE    racnode2
   ora....de2.vip application    ONLINE    ONLINE    racnode2

Administering the Voting Disk

View Voting Disk Configuration Information

Use the crsctl utility to verify how many voting disks are configured for the cluster as well as their location. The crsctl command can be run as either the oracle or root user account:

[oracle@racnode1 ~]$ crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile

located 1 votedisk(s).

Add a Voting Disk

Adding or removing a voting disk from the cluster is a fairly straightforward process. Oracle Clusterware 10g Release 1 (10.1) only allowed for one voting disk while Oracle Clusterware 10g Release 2 (10.2) lifted this restriction to allow for 32 voting disks. Having multiple voting disks available to the cluster removes the voting disk as a single point of failure and eliminates the need to mirror them outside of Oracle Clusterware (i.e. RAID). The Oracle Universal Installer (OUI) allows you to configure either one or three voting disks during the installation of Oracle Clusterware. Having three voting disks available allows Oracle Clusterware (CRS) to continue operating uninterrupted when any one of the voting disks fails.

  When deciding how many voting disks is appropriate for your environment, consider that for the cluster to survive failure of x number of voting disks, you need to configure (2x + 1) voting disks. For example, to allow for the failure of 2 voting disks, you would need to configure 5 voting disks.

When allocating shared raw storage devices for the voting disk(s), keep in mind that each voting disk requires 20MB of raw storage.
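As a rough illustration of that sizing (the bs and count values below are simply 20MB expressed as 1MB blocks, and /dev/raw/raw3 is the hypothetical raw device used in the later examples), a new raw device could be zeroed out before handing it to Clusterware like this:

# Clear the first 20MB (20 x 1MB) of the candidate raw device.
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3 bs=1M count=20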

OCR Corruption after Adding/Removing Voting Disk when CRS Stack is Running

In addition to allowing for more than one voting disk in the cluster, the Oracle10g R2 documentation also indicates that adding and removing voting disks can be performed while CRS is online and does not require any cluster-wide downtime. After reading about this new capability, I immediately tried adding a new voting disk while CRS was running, only to be greeted with the following error:

[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw3
Cluster is not in a ready state for online disk addition

After some research, it appears this is a known issue on at least the Linux and Sun Solaris platforms with the 10.2.0.1.0 release and is fully documented in Oracle Bug 4898020: ADDING VOTING DISK ONLINE CRASH THE CRS. Some have reported that this issue was to be fixed with the 10.2.0.4 patch set; however, that is the release I am currently using and the bug still exists.

In order to work around this bug, you must first shut down CRS and then use the -force flag when running the crsctl command. Do not attempt to add or remove a voting disk to the cluster using the -force flag while CRS is online. Oracle Clusterware should be shut down on all nodes in the cluster before adding or removing voting disks.

  Using the -force flag to add or remove a voting disk while the Oracle Clusterware stack is active on any node in the cluster may corrupt your cluster configuration.

Bring down CRS on all nodes in the cluster prior to modifying the voting disk configuration using the -force flag to avoid interacting with active Oracle Clusterware daemons.

If the Oracle Clusterware stack is online while attempting to use the -force flag, all nodes in the cluster will reboot due to the css shutdown and corruption of your cluster configuration is very likely.

For a detailed discussion on this issue, please see Oracle Doc ID: 390880.1 "OCR Corruption after Adding/Removing voting disk to a cluster when CRS stack is running" on the My Oracle Support web site.

To add a new voting disk to the cluster, use the following command where path is the fully qualified path for the additional voting disk. Run the following command as the root user to add a voting disk:

crsctl add css votedisk <path>

  You must be logged in as the root user to run the crsctl command to add/remove voting disks.

The following example demonstrates how to add two new voting disks to the current cluster. The new voting disks will reside on the same OCFS2 file system in the same directory as the current voting disk. Please note that I am doing this for the sake of brevity. Multiplexed voting disks should always be placed on a separate device from the current voting disk to guard against a single point of failure.

Stop all application processes, shut down CRS on all nodes, and (for Oracle10g R2) use the -force flag with the crsctl command when adding the new voting disk(s). For example:

#
# Query current voting disk configuration.
#
[root@racnode1 ~]# crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile

located 1 votedisk(s).

#
# Stop all application processes.
#
[root@racnode1 ~]# srvctl stop database -d racdb
[root@racnode1 ~]# srvctl stop asm -n racnode1
[root@racnode1 ~]# srvctl stop asm -n racnode2
[root@racnode1 ~]# srvctl stop nodeapps -n racnode1
[root@racnode1 ~]# srvctl stop nodeapps -n racnode2

#
# Verify all application processes are OFFLINE.
#
[root@racnode1 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.racdb.db   application    OFFLINE   OFFLINE
ora....b1.inst application    OFFLINE   OFFLINE
ora....b2.inst application    OFFLINE   OFFLINE
ora....srvc.cs application    OFFLINE   OFFLINE
ora....db1.srv application    OFFLINE   OFFLINE
ora....db2.srv application    OFFLINE   OFFLINE
ora....SM1.asm application    OFFLINE   OFFLINE
ora....E1.lsnr application    OFFLINE   OFFLINE
ora....de1.gsd application    OFFLINE   OFFLINE
ora....de1.ons application    OFFLINE   OFFLINE
ora....de1.vip application    OFFLINE   OFFLINE
ora....SM2.asm application    OFFLINE   OFFLINE
ora....E2.lsnr application    OFFLINE   OFFLINE
ora....de2.gsd application    OFFLINE   OFFLINE
ora....de2.ons application    OFFLINE   OFFLINE
ora....de2.vip application    OFFLINE   OFFLINE

#
# Shut down CRS on node 1 and verify the CRS stack is not up.
#
[root@racnode1 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@racnode1 ~]# ps -ef | grep d.bin | grep -v grep

#
# Shut down CRS on node 2 and verify the CRS stack is not up.
#
[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@racnode2 ~]# ps -ef | grep d.bin | grep -v grep

#
# Take a backup of the current voting disk.
#
[oracle@racnode1 ~]$ dd if=/u02/oradata/racdb/CSSFile of=/home/oracle/VotingDiskBackup.dmp bs=4k
2500+0 records in
2500+0 records out
10240000 bytes (10 MB) copied, 0.272872 seconds, 37.5 MB/s

#
# Add two new voting disks.
#
[root@racnode1 ~]# crsctl add css votedisk /u02/oradata/racdb/CSSFile_mirror1 -force
Now formatting voting disk: /u02/oradata/racdb/CSSFile_mirror1
successful addition of votedisk /u02/oradata/racdb/CSSFile_mirror1.

[root@racnode1 ~]# crsctl add css votedisk /u02/oradata/racdb/CSSFile_mirror2 -force
Now formatting voting disk: /u02/oradata/racdb/CSSFile_mirror2
successful addition of votedisk /u02/oradata/racdb/CSSFile_mirror2.

#
# Set the appropriate permissions on the new voting disks.
#
[root@racnode1 ~]# chown oracle /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# chmod 644 /u02/oradata/racdb/CSSFile_mirror1

[root@racnode1 ~]# chown oracle /u02/oradata/racdb/CSSFile_mirror2
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/CSSFile_mirror2
[root@racnode1 ~]# chmod 644 /u02/oradata/racdb/CSSFile_mirror2

If the new voting disks will be created on raw devices

#
# Clear out the contents from the new raw devices.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw4

#
# Add two new voting disks.
#
[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw3 -force
Now formatting voting disk: /dev/raw/raw3
successful addition of votedisk /dev/raw/raw3.

[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw4 -force
Now formatting voting disk: /dev/raw/raw4
successful addition of votedisk /dev/raw/raw4.

After adding the new voting disk(s), check that they can be seen from all nodes in the cluster:

#
# Verify new voting disk access from node 1.
#
[root@racnode1 ~]# crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile
 1.     0    /u02/oradata/racdb/CSSFile_mirror1
 2.     0    /u02/oradata/racdb/CSSFile_mirror2

located 3 votedisk(s).

#
# Verify new voting disk access from node 2.
#
[root@racnode2 ~]# crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile
 1.     0    /u02/oradata/racdb/CSSFile_mirror1
 2.     0    /u02/oradata/racdb/CSSFile_mirror2

located 3 votedisk(s).

After verifying the new voting disk(s) can be seen from all nodes in the cluster, restart CRS and the application processes:

#
# Restart CRS and application processes from node 1.
#
[root@racnode1 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly

#
# Restart CRS and application processes from node 2.
#
[root@racnode2 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly

Remove a Voting Disk

As discussed in the previous section, Oracle Clusterware must be shut down on all nodes in the cluster before adding or removing voting disks. Just as we were required to add the -force flag when adding a voting disk, the same holds true for Oracle10g R2 users attempting to remove a voting disk:

[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile_mirror1
Cluster is not in a ready state for online disk removal

[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile_mirror2
Cluster is not in a ready state for online disk removal

The CRS stack must be shut down on all nodes in the cluster before attempting to use the -force flag. Failure to do so may result in OCR corruption.

Use the following command as the root user to remove a voting disk where path is the fully qualified path for the voting disk to be removed:

crsctl delete css votedisk <path>

  You must be logged in as the root user to run the crsctl command to add/remove voting disks.

The "crsctl delete css votedisk" command deletes an existing voting disk from the cluster. This command does not, however, remove the physical file at the OS level if using a clustered file system nor does it clear the data from a raw storage device.

The following example demonstrates how to delete two voting disks from the current cluster. Stop all application processes, shut down CRS on all nodes, and (for Oracle10g R2) use the -force flag with the crsctl command when removing the voting disk(s). For example:

#
# Query current voting disk configuration.
#
[root@racnode1 ~]# crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile
 1.     0    /u02/oradata/racdb/CSSFile_mirror1
 2.     0    /u02/oradata/racdb/CSSFile_mirror2

located 3 votedisk(s).

#
# Stop all application processes.
#
[root@racnode1 ~]# srvctl stop database -d racdb
[root@racnode1 ~]# srvctl stop asm -n racnode1
[root@racnode1 ~]# srvctl stop asm -n racnode2
[root@racnode1 ~]# srvctl stop nodeapps -n racnode1
[root@racnode1 ~]# srvctl stop nodeapps -n racnode2

#
# Verify all application processes are OFFLINE.
#
[root@racnode1 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.racdb.db   application    OFFLINE   OFFLINE
ora....b1.inst application    OFFLINE   OFFLINE
ora....b2.inst application    OFFLINE   OFFLINE
ora....srvc.cs application    OFFLINE   OFFLINE
ora....db1.srv application    OFFLINE   OFFLINE
ora....db2.srv application    OFFLINE   OFFLINE
ora....SM1.asm application    OFFLINE   OFFLINE
ora....E1.lsnr application    OFFLINE   OFFLINE
ora....de1.gsd application    OFFLINE   OFFLINE
ora....de1.ons application    OFFLINE   OFFLINE
ora....de1.vip application    OFFLINE   OFFLINE
ora....SM2.asm application    OFFLINE   OFFLINE
ora....E2.lsnr application    OFFLINE   OFFLINE
ora....de2.gsd application    OFFLINE   OFFLINE
ora....de2.ons application    OFFLINE   OFFLINE
ora....de2.vip application    OFFLINE   OFFLINE

#
# Shut down CRS on node 1 and verify the CRS stack is not up.
#
[root@racnode1 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@racnode1 ~]# ps -ef | grep d.bin | grep -v grep

#
# Shut down CRS on node 2 and verify the CRS stack is not up.
#
[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@racnode2 ~]# ps -ef | grep d.bin | grep -v grep

#
# Remove two voting disks.
#
[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile_mirror1 -force
successful deletion of votedisk /u02/oradata/racdb/CSSFile_mirror1.

[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile_mirror2 -force
successful deletion of votedisk /u02/oradata/racdb/CSSFile_mirror2.

#
# Remove voting disk files at the OS level.
#
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror2

If the voting disks are on raw devices

#
# Remove two voting disks.
#
[root@racnode1 ~]# crsctl delete css votedisk /dev/raw/raw3 -force
successful deletion of votedisk /dev/raw/raw3.

[root@racnode1 ~]# crsctl delete css votedisk /dev/raw/raw4 -force
successful deletion of votedisk /dev/raw/raw4.

#
# (Optional)
# Clear out the old contents (voting disk data) from the raw devices.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw4

After removing the voting disk(s), check that the voting disk(s) were removed from the cluster and the new voting disk configuration is seen from all nodes in the cluster:

#
# Verify voting disk(s) deleted from node 1.
#
[root@racnode1 ~]# crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile

located 1 votedisk(s).

#
# Verify voting disk(s) deleted from node 2.
#
[root@racnode2 ~]# crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile

located 1 votedisk(s).

After verifying the voting disk(s) have been removed, restart CRS and the application processes on all nodes in the cluster:

#
# Restart CRS and application processes from node 1.
#
[root@racnode1 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly

#
# Restart CRS and application processes from node 2.
#
[root@racnode2 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly

Relocate a Voting Disk

The process of moving a voting disk consists simply of removing the old voting disk and adding a new voting disk to the destination location:

crsctl delete css votedisk <old_path> -force
crsctl add css votedisk <new_path> -force

As discussed earlier in this section, Oracle Clusterware must be shut down on all nodes in the cluster before adding or removing voting disks. Oracle10g R2 users are required to add the -force flag when removing/adding a voting disk. The CRS stack must be shut down on all nodes in the cluster before attempting to use the -force flag. Failure to do so may result in OCR corruption.

#
# Determine the current location and number of voting disks.
# If there is only one voting disk location then first add
# at least one new location before attempting to move the
# current voting disk. The following will show that I have
# only one voting disk location and will need to add at
# least one additional voting disk in order to perform the
# move. After the move, this temporary voting disk can be
# removed from the cluster. The remainder of this example
# will provide the instructions required to move the current
# voting disk from its current location on an OCFS2 file
# system to a new shared raw device (/dev/raw/raw3).
#
[root@racnode1 ~]# crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile

located 1 votedisk(s).

#
# Stop all application processes.
#
[root@racnode1 ~]# srvctl stop database -d racdb
[root@racnode1 ~]# srvctl stop asm -n racnode1
[root@racnode1 ~]# srvctl stop asm -n racnode2
[root@racnode1 ~]# srvctl stop nodeapps -n racnode1
[root@racnode1 ~]# srvctl stop nodeapps -n racnode2

#
# Verify all application processes are OFFLINE.
#
[root@racnode1 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.racdb.db   application    OFFLINE   OFFLINE
ora....b1.inst application    OFFLINE   OFFLINE
ora....b2.inst application    OFFLINE   OFFLINE
ora....srvc.cs application    OFFLINE   OFFLINE
ora....db1.srv application    OFFLINE   OFFLINE
ora....db2.srv application    OFFLINE   OFFLINE
ora....SM1.asm application    OFFLINE   OFFLINE
ora....E1.lsnr application    OFFLINE   OFFLINE
ora....de1.gsd application    OFFLINE   OFFLINE
ora....de1.ons application    OFFLINE   OFFLINE
ora....de1.vip application    OFFLINE   OFFLINE
ora....SM2.asm application    OFFLINE   OFFLINE
ora....E2.lsnr application    OFFLINE   OFFLINE
ora....de2.gsd application    OFFLINE   OFFLINE
ora....de2.ons application    OFFLINE   OFFLINE
ora....de2.vip application    OFFLINE   OFFLINE

#
# Shut down CRS on node 1 and verify the CRS stack is not up.
#
[root@racnode1 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@racnode1 ~]# ps -ef | grep d.bin | grep -v grep

#
# Shut down CRS on node 2 and verify the CRS stack is not up.
#
[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@racnode2 ~]# ps -ef | grep d.bin | grep -v grep

#
# Before moving the current voting disk
# (/u02/oradata/racdb/CSSFile) to a new location, we first
# need to add at least one new voting disk to the cluster.
#
[root@racnode1 ~]# cp /dev/null /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# chown oracle /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# chmod 644 /u02/oradata/racdb/CSSFile_mirror1

[root@racnode1 ~]# crsctl add css votedisk /u02/oradata/racdb/CSSFile_mirror1 -force
Now formatting voting disk: /u02/oradata/racdb/CSSFile_mirror1
successful addition of votedisk /u02/oradata/racdb/CSSFile_mirror1.

#
# Use the dd command to zero out the device and make sure
# no data is written to the raw device.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3

#
# Delete the old voting disk (the voting disk that is to be
# moved).
#
[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile -force
successful deletion of votedisk /u02/oradata/racdb/CSSFile.

#
# Add the new voting disk to the new location.
#
[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw3 -force
Now formatting voting disk: /dev/raw/raw3
successful addition of votedisk /dev/raw/raw3.

#
# (Optional)
# Remove the temporary voting disk.
#
[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile_mirror1 -force
successful deletion of votedisk /u02/oradata/racdb/CSSFile_mirror1.

#
# Remove all deleted voting disk files from the OCFS2 file system.
#
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror1

#
# Verify voting disk(s) relocation from node 1.
#
[root@racnode1 ~]# crsctl query css votedisk
 0.     0    /dev/raw/raw3

located 1 votedisk(s).

#
# Verify voting disk(s) relocation from node 2.
#
[root@racnode2 ~]# crsctl query css votedisk
 0.     0    /dev/raw/raw3

located 1 votedisk(s).

#
# After verifying the voting disk(s) have been moved, restart
# CRS and the application processes on all nodes in the
# cluster.
#
[root@racnode1 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly

[root@racnode2 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly

Backup the Voting Disk

Backing up the voting disk(s) is often performed on a regular basis by the DBA to guard the cluster against a single point of failure as the result of hardware failure or user error. Because the node membership information does not usually change, it is not a strict requirement that you back up the voting disk every day. At a minimum, however, your backup strategy should include procedures to back up all voting disks at the following times and make certain that the backups are stored in a secure location that is accessible from all nodes in the cluster in the event the voting disk(s) need to be restored:

After installing Oracle Clusterware
After adding nodes to or deleting nodes from the cluster
After performing voting disk add or delete operations

Oracle Clusterware 10g Release 1 (10.1) only allowed for one voting disk while Oracle Clusterware 10g Release 2 (10.2) lifted this restriction to allow for 32 voting disks. For high availability, Oracle recommends that Oracle Clusterware 10g R2 users configure multiple voting disks while keeping in mind that you must have an odd number of voting disks, such as three, five, and so on. To avoid simultaneous loss of multiple voting disks, each voting disk should be placed on a shared storage device that does not share any components (controller, interconnect, and so on) with the storage devices used for the other voting disks. If you define a single voting disk, then you should use external mirroring to provide redundancy.

To make a backup copy of the voting disk on UNIX/Linux, use the dd command:

dd if=<voting_disk_name> of=<backup_file_name> bs=<block_size>

Perform this operation on every voting disk where voting_disk_name is the name of the active voting disk (input file), backup_file_name is the name of the file to which you want to back up the voting disk contents (output file), and block_size is the value to set both the input and output block sizes. As a general rule on most platforms, including Linux and Sun, the block size for the dd command should be 4k to ensure that the backup of the voting disk gets complete blocks.

If your voting disk is stored on a raw device, use the device name in place of voting_disk_name. For example:

dd if=/dev/raw/raw3 of=/u03/crs_backup/votebackup/VotingDiskBackup.dmp bs=4k

When you use the dd command to make backups of the voting disk, the backup can be performed while the Cluster Ready Services (CRS) process is active; you do not need to stop the CRS daemons (namely, the crsd.bin process) before taking a backup of the voting disk.

The following is a working UNIX script that can be scheduled in CRON to back up the OCR File and the Voting Disk on a regular basis:

  crs_components_backup_10g.ksh
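The contents of that script are not reproduced here. Purely as a hedged sketch of what such a job might boil down to (the paths, layout, and lack of retention/error handling are my own assumptions, not the original crs_components_backup_10g.ksh), it copies the CRSD-generated OCR backups, takes an OCR logical export, and dd-copies each configured voting disk:

#!/bin/ksh
# Hypothetical sketch of a nightly CRS backup job; run as root (ocrconfig -export requires it).
ORA_CRS_HOME=/u01/app/crs
BACKUP_DIR=/u03/crs_backup
STAMP=`date +%Y%m%d`

# 1. Copy the automatic OCR physical backups and take a logical export.
mkdir -p $BACKUP_DIR/ocrbackup/$STAMP
cp -p $ORA_CRS_HOME/cdata/crs/*.ocr $BACKUP_DIR/ocrbackup/$STAMP
$ORA_CRS_HOME/bin/ocrconfig -export $BACKUP_DIR/ocrbackup/$STAMP/OCRFile_$STAMP.dmp

# 2. dd-copy every configured voting disk (path is the third column of the query output).
mkdir -p $BACKUP_DIR/votebackup/$STAMP
$ORA_CRS_HOME/bin/crsctl query css votedisk | while read num zero path
do
    case $num in
        [0-9]*) dd if=$path of=$BACKUP_DIR/votebackup/$STAMP/`basename $path`.bak bs=4k ;;
    esac
done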

For the purpose of this example, the current Oracle Clusterware environment is configured with three voting disks on an OCFS2 clustered file system that will be backed up to a local file system on one of the nodes in the cluster. For example:

#
# Query the location and number of voting disks.
#
[root@racnode1 ~]# crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile
 1.     0    /u02/oradata/racdb/CSSFile_mirror1
 2.     0    /u02/oradata/racdb/CSSFile_mirror2

#
# Backup all three voting disks.
#
[root@racnode1 ~]# dd if=/u02/oradata/racdb/CSSFile of=/u03/crs_backup/votebackup/CSSFile.bak bs=4k
2500+0 records in
2500+0 records out
10240000 bytes (10 MB) copied, 0.259862 seconds, 39.4 MB/s

[root@racnode1 ~]# dd if=/u02/oradata/racdb/CSSFile_mirror1 of=/u03/crs_backup/votebackup/CSSFile_mirror1.bak bs=4k
2500+0 records in
2500+0 records out
10240000 bytes (10 MB) copied, 0.295964 seconds, 34.6 MB/s

[root@racnode1 ~]# dd if=/u02/oradata/racdb/CSSFile_mirror2 of=/u03/crs_backup/votebackup/CSSFile_mirror2.bak bs=4k
2500+0 records in
2500+0 records out
10240000 bytes (10 MB) copied, 0.249039 seconds, 41.1 MB/s

Recover the Voting Disk

The recommended way to recover from a lost or corrupt voting disk is to restore it from a previous good backup that was taken with the dd command.

There are actually very few steps required to restore the voting disks:

1. Shut down CRS on all nodes in the cluster.
2. List the current location of the voting disks.

3. Restore each of the voting disks using the dd command from a previous good backup of the voting disks that was taken using the same dd command.

4. Re-start CRS on all nodes in the cluster.

For example:

[root@racnode1 ~]# crsctl stop crs
[root@racnode2 ~]# crsctl stop crs

[root@racnode1 ~]# crsctl query css votedisk

[root@racnode1 ~]# # Do this for all voting disks...
[root@racnode1 ~]# dd if=<backup_voting_disk> of=<voting_disk_name> bs=4k

[root@racnode1 ~]# crsctl start crs
[root@racnode2 ~]# crsctl start crs

The following is an example of what occurs on all RAC nodes when a voting disk is destroyed. This example will manually corrupt all voting disks in the cluster. After the Oracle RAC nodes reboot from the crash, we will follow up with the steps required to restore the lost/corrupt voting disk which will make use of the voting disk backups that were created in the previous section.

  Although it should go without saying, DO NOT perform this recovery scenario on a critical system like production!

First, let's check the status of the cluster and all RAC components, list the current location of the voting disk(s), and finally list the voting disk backup that will be used to recover from:

[root@racnode1 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.racdb.db   application    ONLINE    ONLINE    racnode2
ora....b1.inst application    ONLINE    ONLINE    racnode1
ora....b2.inst application    ONLINE    ONLINE    racnode2
ora....srvc.cs application    ONLINE    ONLINE    racnode2
ora....db1.srv application    ONLINE    ONLINE    racnode1
ora....db2.srv application    ONLINE    ONLINE    racnode2
ora....SM1.asm application    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    ONLINE    ONLINE    racnode1
ora....de1.gsd application    ONLINE    ONLINE    racnode1
ora....de1.ons application    ONLINE    ONLINE    racnode1
ora....de1.vip application    ONLINE    ONLINE    racnode1
ora....SM2.asm application    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    ONLINE    ONLINE    racnode2
ora....de2.gsd application    ONLINE    ONLINE    racnode2
ora....de2.ons application    ONLINE    ONLINE    racnode2
ora....de2.vip application    ONLINE    ONLINE    racnode2

[root@racnode1 ~]# crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile
 1.     0    /u02/oradata/racdb/CSSFile_mirror1
 2.     0    /u02/oradata/racdb/CSSFile_mirror2

located 3 votedisk(s).

[root@racnode1 ~]# ls -l /u03/crs_backup/votebackup
total 30048
-rw-r--r-- 1 root root 10240000 Oct 8 21:24 CSSFile.bak
-rw-r--r-- 1 root root 10240000 Oct 8 21:24 CSSFile_mirror1.bak
-rw-r--r-- 1 root root 10240000 Oct 8 21:25 CSSFile_mirror2.bak

The next step is to simulate the corruption or loss of the voting disk(s).

Oracle RAC 10g R1 / R2 (not patched with 10.2.0.4)

If you are using Oracle RAC 10g R1 or Oracle RAC 10g R2 (not patched with 10.2.0.4), simply write zeros to one of the voting disks:

[root@racnode1 ~]# dd if=/dev/zero of=/u02/oradata/racdb/CSSFile

Both RAC servers are now stuck and will be rebooted by CRS...

Oracle RAC 11g or higher (including Oracle RAC 10g R2 patched with 10.2.0.4)

Starting with Oracle RAC 11g R1 (including Oracle RAC 10g R2 patched with 10.2.0.4), attempting to corrupt a voting disk using dd will result in all nodes being rebooted, however, Oracle Clusterware will re-construct the corrupt voting disk and successfully bring up the RAC components. Because the voting disks do not contain persistent data, CSSD is able to fully reconstruct the voting disks so long as the cluster is running. This feature was introduced with Oracle Clusterware 11.1 and is also available with Oracle Clusterware 10.2 patched with 10.2.0.4.

This makes it more difficult to corrupt a voting disk by simply writing zeros to it. You would need to find a way to dd the voting disks and stop the cluster before any of them could be automatically recovered by CSSD. Good luck with that! To simulate the corruption (actually the loss) of the voting disks and have both nodes crash, I'm simply going to delete all of the voting disks and then manually reboot the nodes:

Delete the voting disks...
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror2

Reboot both nodes to simulate the crash...
[root@racnode1 ~]# reboot
[root@racnode2 ~]# reboot

After the reboot, CRS will not come up and all RAC components will be down:

[root@racnode1 ~]# crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.

[root@racnode2 ~]# crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.

Ok, let's start the recovery process.

#
# Locate the voting disk backups that were taken in the
# previous section.
#
[root@racnode1 ~]# cd /u03/crs_backup/votebackup
[root@racnode1 votebackup]# ls -l *.bak
-rw-r--r-- 1 root root 10240000 Oct 8 21:24 CSSFile.bak
-rw-r--r-- 1 root root 10240000 Oct 8 21:24 CSSFile_mirror1.bak
-rw-r--r-- 1 root root 10240000 Oct 8 21:25 CSSFile_mirror2.bak

#
# Recover the voting disk (or voting disks) using the same
# dd command that was used to back it up, but with the input
# file and output file in reverse.
#
[root@racnode1 ~]# dd if=/u03/crs_backup/votebackup/CSSFile.bak of=/u02/oradata/racdb/CSSFile bs=4k
2500+0 records in
2500+0 records out
10240000 bytes (10 MB) copied, 0.252425 seconds, 40.6 MB/s

[root@racnode1 ~]# dd if=/u03/crs_backup/votebackup/CSSFile_mirror1.bak of=/u02/oradata/racdb/CSSFile_mirror1 bs=4k
2500+0 records in
2500+0 records out
10240000 bytes (10 MB) copied, 0.217645 seconds, 47.0 MB/s

[root@racnode1 ~]# dd if=/u03/crs_backup/votebackup/CSSFile_mirror2.bak of=/u02/oradata/racdb/CSSFile_mirror2 bs=4k
2500+0 records in
2500+0 records out
10240000 bytes (10 MB) copied, 0.220051 seconds, 46.5 MB/s

#
# Verify the permissions on the recovered voting disk(s) are
# set appropriately.
#
[root@racnode1 ~]# chown oracle /u02/oradata/racdb/CSSFile
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/CSSFile
[root@racnode1 ~]# chmod 644 /u02/oradata/racdb/CSSFile

[root@racnode1 ~]# chown oracle /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# chmod 644 /u02/oradata/racdb/CSSFile_mirror1

[root@racnode1 ~]# chown oracle /u02/oradata/racdb/CSSFile_mirror2
[root@racnode1 ~]# chgrp oinstall /u02/oradata/racdb/CSSFile_mirror2
[root@racnode1 ~]# chmod 644 /u02/oradata/racdb/CSSFile_mirror2

#
# With the recovered voting disk(s) in place, restart CRS
# on all Oracle RAC nodes.
#
[root@racnode1 ~]# crsctl start crs
[root@racnode2 ~]# crsctl start crs

  If you have multiple voting disks, then you can remove the voting disks and add them back into your environment using the crsctl delete css votedisk path and crsctl add css votedisk path commands respectively, where path is the complete path of the location on which the voting disk resides.
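For example, reusing the voting disk path from this article (run as root; see the notes later in this article on when the -force option is required):

[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile
[root@racnode1 ~]# crsctl add css votedisk /u02/oradata/racdb/CSSFile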

After recovering the voting disk, run through several tests to verify that Oracle Clusterware is functioning correctly:

[root@racnode1 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.racdb.db   application    ONLINE    ONLINE    racnode1
ora....b1.inst application    ONLINE    ONLINE    racnode1
ora....b2.inst application    ONLINE    ONLINE    racnode2
ora....srvc.cs application    ONLINE    ONLINE    racnode2
ora....db1.srv application    ONLINE    ONLINE    racnode1
ora....db2.srv application    ONLINE    ONLINE    racnode2
ora....SM1.asm application    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    ONLINE    ONLINE    racnode1
ora....de1.gsd application    ONLINE    ONLINE    racnode1
ora....de1.ons application    ONLINE    ONLINE    racnode1
ora....de1.vip application    ONLINE    ONLINE    racnode1
ora....SM2.asm application    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    ONLINE    ONLINE    racnode2
ora....de2.gsd application    ONLINE    ONLINE    racnode2
ora....de2.ons application    ONLINE    ONLINE    racnode2
ora....de2.vip application    ONLINE    ONLINE    racnode2

[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

Move the Voting Disk and OCR from OCFS to RAW Devices

This section provides instructions on how to move the OCR and all voting disks used throughout this article from an OCFS2 file system to raw storage devices.

OCR / Voting Disk Mappings from OCFS2 to Raw Storage

OCR Component        Current Location on OCFS2            New Location
-------------------  -----------------------------------  -------------
OCR File (primary)   /u02/oradata/racdb/OCRFile           /dev/raw/raw1
OCR File (mirror)    /u02/oradata/racdb/OCRFile_mirror    /dev/raw/raw2
Vote Disk 1          /u02/oradata/racdb/CSSFile           /dev/raw/raw3
Vote Disk 2          /u02/oradata/racdb/CSSFile_mirror1   /dev/raw/raw4
Vote Disk 3          /u02/oradata/racdb/CSSFile_mirror2   /dev/raw/raw5

Move the OCR

#
# The new raw storage devices for OCR should be owned by the
# root user, must be in the oinstall group, and must have
# permissions set to 640. Provide at least 280MB of disk
# space for each OCR file and verify the raw storage devices
# can be seen from all nodes in the cluster.
#
[root@racnode1 ~]# ls -l /dev/raw/raw[12]
crw-r----- 1 root oinstall 162, 1 Oct 8 21:55 /dev/raw/raw1
crw-r----- 1 root oinstall 162, 2 Oct 8 21:55 /dev/raw/raw2

[root@racnode2 ~]# ls -l /dev/raw/raw[12]
crw-r----- 1 root oinstall 162, 1 Oct 8 21:54 /dev/raw/raw1
crw-r----- 1 root oinstall 162, 2 Oct 8 21:54 /dev/raw/raw2
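If the devices do not already show this ownership and mode, they can be set as root before continuing. A minimal sketch, using the device names from the mapping table above (making the settings persistent across reboots, for example with udev rules, is distribution specific and not covered here):

[root@racnode1 ~]# chown root:oinstall /dev/raw/raw1 /dev/raw/raw2
[root@racnode1 ~]# chmod 640 /dev/raw/raw1 /dev/raw/raw2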

#
# Use the dd command to zero out the devices and make sure
# no data is written to the raw devices.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw2

#
# Verify CRS is running on node 1.
#
[root@racnode1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Verify CRS is running on node 2.
#
[root@racnode2 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

#
# Query the current location and number of OCR files on
# the OCFS2 file system.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4676
         Available space (kbytes) :     257444
         ID                       : 1513888898
         Device/File Name         : /u02/oradata/racdb/OCRFile           <-- OCR (primary)
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oradata/racdb/OCRFile_mirror    <-- OCR (mirror)
                                    Device/File integrity check succeeded

Cluster registry integrity check succeeded

#
# Move OCR and OCR mirror to new storage location.
#
[root@racnode1 ~]# ocrconfig -replace ocr /dev/raw/raw1
[root@racnode1 ~]# ocrconfig -replace ocrmirror /dev/raw/raw2

#
# Verify OCR relocation from node 1.
#
[root@racnode1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4676
         Available space (kbytes) :     257444
         ID                       : 1513888898
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded

Cluster registry integrity check succeeded

#
# Verify OCR relocation from node 2.
#
[root@racnode2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4676
         Available space (kbytes) :     257444
         ID                       : 1513888898
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded

Cluster registry integrity check succeeded

#
# Remove all deleted OCR files from the OCFS2 file system.
#
[root@racnode1 ~]# rm /u02/oradata/racdb/OCRFile
[root@racnode1 ~]# rm /u02/oradata/racdb/OCRFile_mirror

Move the Voting Disk

#
# The new raw storage devices for the voting disks should be
# owned by the oracle user, must be in the oinstall group,
# and must have permissions set to 644. Provide at least
# 20MB of disk space for each voting disk and verify the raw
# storage devices can be seen from all nodes in the cluster.
#
[root@racnode1 ~]# ls -l /dev/raw/raw[345]
crw-r--r-- 1 oracle oinstall 162, 3 Oct 8 22:44 /dev/raw/raw3
crw-r--r-- 1 oracle oinstall 162, 4 Oct 8 22:45 /dev/raw/raw4
crw-r--r-- 1 oracle oinstall 162, 5 Oct 9 00:22 /dev/raw/raw5

[root@racnode2 ~]# ls -l /dev/raw/raw[345]
crw-r--r-- 1 oracle oinstall 162, 3 Oct 8 22:53 /dev/raw/raw3
crw-r--r-- 1 oracle oinstall 162, 4 Oct 8 22:54 /dev/raw/raw4
crw-r--r-- 1 oracle oinstall 162, 5 Oct 9 00:23 /dev/raw/raw5

#
# Use the dd command to zero out the devices and make sure
# no data is written to the raw devices.
#
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw4
[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw5

#
# Query the current location and number of voting disks on
# the OCFS2 file system. There needs to be at least two
# voting disks configured before attempting to perform the
# move.
#
[root@racnode1 ~]# crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile
 1.     0    /u02/oradata/racdb/CSSFile_mirror1
 2.     0    /u02/oradata/racdb/CSSFile_mirror2

located 3 votedisk(s).

#
# Stop all application processes.
#
[root@racnode1 ~]# srvctl stop database -d racdb
[root@racnode1 ~]# srvctl stop asm -n racnode1
[root@racnode1 ~]# srvctl stop asm -n racnode2
[root@racnode1 ~]# srvctl stop nodeapps -n racnode1
[root@racnode1 ~]# srvctl stop nodeapps -n racnode2

#
# Verify all application processes are OFFLINE.
#
[root@racnode1 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.racdb.db   application    OFFLINE   OFFLINE
ora....b1.inst application    OFFLINE   OFFLINE
ora....b2.inst application    OFFLINE   OFFLINE
ora....srvc.cs application    OFFLINE   OFFLINE
ora....db1.srv application    OFFLINE   OFFLINE
ora....db2.srv application    OFFLINE   OFFLINE
ora....SM1.asm application    OFFLINE   OFFLINE
ora....E1.lsnr application    OFFLINE   OFFLINE
ora....de1.gsd application    OFFLINE   OFFLINE
ora....de1.ons application    OFFLINE   OFFLINE
ora....de1.vip application    OFFLINE   OFFLINE
ora....SM2.asm application    OFFLINE   OFFLINE
ora....E2.lsnr application    OFFLINE   OFFLINE
ora....de2.gsd application    OFFLINE   OFFLINE
ora....de2.ons application    OFFLINE   OFFLINE
ora....de2.vip application    OFFLINE   OFFLINE

#
# Shut down CRS on node 1 and verify the CRS stack is not up.
#
[root@racnode1 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@racnode1 ~]# ps -ef | grep d.bin | grep -v grep

#
# Shut down CRS on node 2 and verify the CRS stack is not up.
#
[root@racnode2 ~]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@racnode2 ~]# ps -ef | grep d.bin | grep -v grep

#
# Move all three voting disks to new storage location.
#
[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile -force
successful deletion of votedisk /u02/oradata/racdb/CSSFile.

[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw3 -force
Now formatting voting disk: /dev/raw/raw3
successful addition of votedisk /dev/raw/raw3.

[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile_mirror1 -force
successful deletion of votedisk /u02/oradata/racdb/CSSFile_mirror1.

[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw4 -force
Now formatting voting disk: /dev/raw/raw4
successful addition of votedisk /dev/raw/raw4.

[root@racnode1 ~]# crsctl delete css votedisk /u02/oradata/racdb/CSSFile_mirror2 -force
successful deletion of votedisk /u02/oradata/racdb/CSSFile_mirror2.

[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw5 -force
Now formatting voting disk: /dev/raw/raw5
successful addition of votedisk /dev/raw/raw5.

#
# Verify voting disk(s) relocation from node 1.
#
[root@racnode1 ~]# crsctl query css votedisk
 0.     0    /dev/raw/raw3
 1.     0    /dev/raw/raw4
 2.     0    /dev/raw/raw5

located 3 votedisk(s).

#
# Verify voting disk(s) relocation from node 2.
#
[root@racnode2 ~]# crsctl query css votedisk
 0.     0    /dev/raw/raw3
 1.     0    /dev/raw/raw4
 2.     0    /dev/raw/raw5

located 3 votedisk(s).

#
# Remove all deleted voting disk files from the OCFS2 file system.
#
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror1
[root@racnode1 ~]# rm /u02/oradata/racdb/CSSFile_mirror2

#
# With all voting disks now located on raw storage devices,
# restart CRS on all Oracle RAC nodes.
#
[root@racnode1 ~]# crsctl start crs
[root@racnode2 ~]# crsctl start crs

Moving OCR
==========

You must be logged in as the root user, because root owns the OCR files. Also, an OCR mirror (ocrmirror) must be in place before trying to replace the OCR device.
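Before replacing anything, confirm that an OCR mirror is actually configured: ocrcheck should list two Device/File Name entries. If only one is listed, a mirror can be added first. A minimal sketch, run as root (the path is a placeholder, not a location from this article):

# ocrcheck
# ocrconfig -replace ocrmirror <new_ocrmirror_location>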

Make sure there is a recent backup of the OCR file before making any changes:

ocrconfig -showbackup

If there is no recent backup copy of the OCR, take an export of the online OCR file using the following command:

In 10.2

# ocrconfig -export <export_file_name> -s online

In 11g

# ocrconfig -manualbackup

The new OCR disk must be owned by root, must be in the oinstall group, and must have permissions set to 640. Provide at least 100 MB disk space for the OCR.
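A minimal sketch of setting those attributes, assuming the new OCR devices are /dev/raw/raw1 and /dev/raw/raw2 (substitute your own devices):

# chown root:oinstall /dev/raw/raw1 /dev/raw/raw2
# chmod 640 /dev/raw/raw1 /dev/raw/raw2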

On one node as root run:

# ocrconfig -replace ocr <new_ocr_location>

# ocrconfig -replace ocrmirror <new_ocrmirror_location>

Now run ocrcheck to verify that the OCR is pointing to the new files.
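For example, as root on any node:

# ocrcheck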

Moving Voting Disk
==================

Note: crsctl votedisk commands must be run as root

Shut down Oracle Clusterware (crsctl stop crs as root) on all nodes before making any modification to the voting disks. Determine the current voting disk location using:

crsctl query css votedisk

Take a backup of each voting disk:

dd if=voting_disk_name of=backup_file_name

To move a Voting Disk, provide the full path including file name:

crsctl delete css votedisk <old_voting_disk_path> -force
crsctl add css votedisk <new_voting_disk_path> -force

After modifying the voting disks, start the Oracle Clusterware stack on all nodes:

# crsctl start crs

Verify the voting disk location using

crsctl query css votedisk


Administering Voting Disks in Oracle Real Application Clusters

Oracle Clusterware is composed primarily of two components: the voting disk and the OCR (Oracle Cluster Registry). The voting disk is a file that manages information about node membership, and the OCR is a file that manages the cluster and RAC configuration. Let's take a quick look at administering the voting disks.
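To see where each of these lives on a running cluster, the same commands used throughout this article apply (run as root):

# crsctl query css votedisk     <-- lists the configured voting disk(s)
# ocrcheck                      <-- reports the OCR location(s) and integrity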

Administering voting disks

Backup and Recovery:

First, let's look at backing up the voting disks by running the following command:

dd if=voting_disk_name of=backup_file_name

example:

dd if=/dev/sdc1 of=/tmp/votingdisk_bkp
2088387+0 records in
2088387+0 records out
1069254144 bytes (1.1 GB) copied, 889.716 seconds, 1.2 MB/s

This operation needs to be performed on all voting disks. Here, the if (input file) is the source file (replace voting_disk_name with your voting disk) and the of (output file) is the destination backup file that holds the contents of the voting disk. Type dd --help for more information. Running the command with the file names reversed will recover your voting disk file(s).


dd if=backup_file_name of=voting_disk_name

Note: before restoring, stop CRS with the following command:

[root@rac1 bin]# /etc/init.d/init.crs stop

You can use the ocopy command in Windows environments, or use the crsctl commands to copy and administer the files. Also note that if you have multiple voting disks, which is not unusual, you can use the crsctl command to add and delete voting disks. For instance:

crsctl delete css votedisk path

Here you delete the voting disk, where path is the complete path of the file's location; below, you add your new or backup file by doing the following:

crsctl add css votedisk path

This way you can either statically or dynamically add or remove your voting disks in your RAC.

You must, however, note that if your cluster is down, then you can use the -force option

crsctl add css votedisk path -force

to modify the voting disk configuration. This way you don't end up interfering with the other Clusterware daemons. Using -force while your configuration is active, however, may corrupt it.