7/31/2019 Les 01 RAC Deploy Latest JFV 070314
Copyright 2007, Oracle. All rights reserved.
RAC Deployment Workshop
Objectives
After completing this workshop, you should be able to:
- Install and configure iSCSI storage on both client clusters and Openfiler servers
- Install and configure Oracle Clusterware and Real Application Clusters (RAC) on more than two clustered nodes
- Use Oracle Clusterware to protect a single-instance database
- Convert a single-instance database to RAC
- Extend Oracle Clusterware and RAC to more than two nodes
- Create a RAC primary-physical/logical standby database environment
- Upgrade Oracle Clusterware, RAC, and RAC databases from 10.2.0.1 to 10.2.0.2 in a rolling fashion
Assumptions
You are familiar with:
- Linux (all examples and labs are Linux-related)
- Oracle Clusterware
- Oracle Real Application Clusters
- Data Guard
Workshop Format and Challenge
- Start with Workshop I
- Instructor presents workshop
- Instructor comments on workshop using viewlet or lab
- Students do workshop using:
  1. Viewlets
  2. Labs document
  3. Solution scripts
- Go to next workshop
Workshop Flow Overview
Storage setup
Install Clusterware and ASM
Set up single-instance protection
Single-instance to RAC conversion
Cluster extension to three nodes
Create a logical standby DB
Rolling upgrade
Six-node cluster
Hardware Organization
[Diagram] Four groups (Group1 through Group4), each a three-node cluster (Node a, Node b, Node c). Each node connects to the private interconnect (ETH1) and, through a switch, to the public network (ETH0). Shared storage is provided by Openfiler iSCSI servers, each with a 160 GB SCSI disk.
Openfiler Storage Organization
[Diagram] Each Openfiler server exports one 160 GB disk. Openfiler 1 hosts volume groups cg1 and cg2; Openfiler 2 hosts cg3 and cg4. In each volume group, partitions /dev/sda1 through /dev/sda4 back four logical volumes: ocr (1 GB), vote (1.5 GB), asm (34 GB), and ocfs (36 GB).
Cluster Storage Organization
[Diagram] Nodes a, b, and c each have a local disk containing:
- /home/oracle: priminfo & stdbinfo, /solutions
- /u01: /app/oracle/oraInventory, /crs1020, /app/oracle/product/10.2.0/rac, /app/oracle/product/10.2.0/sgl
- /stage/10gR2: /rdbms/clusterware, /rdbms/database
- /stage: p4547817_10202_LINUX.zip (DG patch)

The shared 160 GB iSCSI disk is presented under /dev/mapper as:
- ocr1 (256 MB), ocr2 (512 MB)
- vote1 (256 MB), vote2 (512 MB), vote3 (512 MB)
- asm1, asm2 (2 GB each)
- asm5, asm6, asm7, asm8 (7.5 GB each)
- ocfs2 (36 GB), mounted on /ocfs2
Group Formation and Naming Conventions
Three students per group
Volume groups: cg[1|2|3|4]
Names must be unique within a classroom:
Cluster name: XY#CLUST[1|2|3|4]
Database name: XY#RDB[A|B|C|D]
Standby database name: XY#SDB[A|C]
Example: Atlanta in the Buckhead office in Room 9
AB9CLUST1, AB9CLUST2, AB9CLUST3, AB9CLUST4
AB9RDBA, AB9RDBB, AB9RDBC, AB9RDBD
AB9SDBA, AB9SDBC
Storage setup
Install Clusterware and ASM
Set up single-instance protection
Single-instance to RAC conversion
Cluster extension to three nodes
Create a logical standby DB
Rolling upgrade
Workshop I: Configuring Storage
1. Openfiler volume setup
2. iSCSI client setup
3. fdisk client setup
4. Multipathing client setup
5. Raw and udev client setup
Persistent Storage Flow

[Diagram] On the filer: create the physical volume, the volume group, and the logical volumes (ocr, vote, asm, ocfs2), then define the list of allowed nodes. On the cluster nodes: start iSCSI and run discovery to see the volumes as /dev/sdw, /dev/sdx, /dev/sdy, /dev/sdz; determine the volumes-to-devices mapping; partition the devices with fdisk (one node only); start multipath, which yields /dev/mapper/mpath0, /dev/mapper/mpath0p1, /dev/mapper/mpath0p2, /dev/mapper/mpath1, /dev/mapper/mpath1p1, /dev/mapper/mpath1p2, /dev/mapper/mpath1p3; determine and define the wwids-to-volume mapping; restart multipath to obtain persistent names such as /dev/mapper/ocr, /dev/mapper/ocr1, /dev/mapper/ocr2, /dev/mapper/vote, /dev/mapper/vote1, /dev/mapper/vote2, /dev/mapper/vote3.
Enterprise Network Storage
NAS devices use a client-server architecture to share file systems.
Most NAS solutions support one or more of the following file-access protocols:
- NFS Version 3
- SMB/CIFS
- HTTP/WebDAV
SAN storage appears as locally attached SCSI disks to the nodes using the storage.
The main difference between NAS and SAN is that SAN devices transfer data in disk blocks, whereas NAS devices operate at the file level.
AoE (ATA over Ethernet) enables ATA disks to be accessed remotely.
Openfiler: Your Classroom Storage Solution
Creating Physical Volumes
Creating Volume Groups
Creating Logical Volumes
Initializing the Storage
To initialize the storage, disable or enable iSCSI Target from the management interface, or execute service iscsi-target restart as root:

[root@ed-dnfiler06b ~]# service iscsi-target restart

View the contents of the /etc/ietd.conf file:

[root@ed-dnfiler06b ~]# cat /etc/ietd.conf
Target iqn.2006-01.com.oracle.us:cg1.ocr
        Lun 0 Path=/dev/cg3/ocr,Type=fileio
...

Edit the initiators.deny and initiators.allow files:

[root@ed-dnfiler06b ~]# cat /etc/initiators.deny
iqn.2006-01.com.oracle.us:cg3.ocr ALL
iqn.2006-01.com.oracle.us:cg3.vote ALL
...
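The matching initiators.allow entries re-grant the denied targets to your own group's nodes. A minimal sketch, assuming iSCSI Enterprise Target's `target-IQN initiator-pattern` line format and an illustrative classroom subnet (both are assumptions, not from the course material):

```
# /etc/initiators.allow -- hypothetical sketch; the subnet is illustrative
iqn.2006-01.com.oracle.us:cg3.ocr 10.156.49.0/24
iqn.2006-01.com.oracle.us:cg3.vote 10.156.49.0/24
```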
Accessing the Shared Storage
Ensure that the iscsi-initiator-utils RPM is loaded:

[root@ed-otraclin10a ~]# rpm -qa | grep iscsi

Edit the /etc/iscsi.conf file to add the discovery entry:

[root@ed-otraclin11b ~]# vi /etc/iscsi.conf
DiscoveryAddress=ed-dnfiler06b.us.oracle.com

Make sure the iscsi service is started on system boot:

[root@ed-otraclin10a ~]# chkconfig --add iscsi
[root@ed-otraclin10a ~]# chkconfig iscsi on

Start the iscsi service:

[root@ed-otraclin11b ~]# service iscsi start
Accessing the Shared Storage
Check that the volumes are accessible with iscsi-ls and dmesg.

[root@ed-otraclin10a ~]# iscsi-ls
*************************************************************
SFNet iSCSI Driver Version ...4:0.1.11-3(02-May-2006)
*************************************************************
TARGET NAME    : iqn.2006-01.com.oracle.us:cg1.ocr
TARGET ALIAS   :
HOST ID        : 24
BUS ID         : 0
TARGET ID      : 0
TARGET ADDRESS : 10.156.49.151:3260,1
SESSION STATUS : ESTABLISHED AT Thu Nov 23 10:07:20 EST 2006
SESSION ID     : ISID 00023d000001 TSIH 600
*************************************************************
Partitioning the iSCSI Disk
Use the fdisk utility to create iSCSI slices within the iSCSI volumes.
These device names are not persistent across reboots.
Udev Basics
Udev simplifies device management for coldplug and hotplug devices.
Udev uses hotplug events sent by the kernel whenever a device is added to or removed from the system.
Details about newly added devices are exported to /sys. Udev manages device entries in /dev by monitoring /sys.
Udev is a standard package in RHEL 4.
The primary benefit Udev provides for Oracle RAC environments is persistent:
- Disk device naming
- Device ownership and permissions
Udev Configuration
Udev behavior is controlled by /etc/udev/udev.conf.
Important parameters include the following:
- udev_root sets the location where udev creates device nodes (/dev is the default).
- default_mode controls the permissions of device nodes.
- default_owner sets the user ID of device nodes.
- default_group sets the group ID of device nodes.
- udev_rules sets the directory for Udev rules files (/etc/udev/udev.rules is the default).
- udev_permissions sets the directory for permissions files (/etc/udev/udev.permissions is the default).
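Putting the parameters together, /etc/udev/udev.conf might look like the following sketch; the values shown are illustrative defaults, not taken from the course environment:

```
# /etc/udev/udev.conf -- illustrative values
udev_root="/dev/"
udev_rules="/etc/udev/udev.rules"
udev_permissions="/etc/udev/udev.permissions"
default_mode="0600"
default_owner="root"
default_group="root"
```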
Udev Rules Parameters
Common parameters for NAME, SYMLINK, and PROGRAM:
- %n is the kernel number; for sda2 it would be 2.
- %k is the kernel name for the device, for example sda.
- %M is the kernel major number for the device.
- %m is the kernel minor number for the device.
- %b is the bus ID for the device.
- %p is the path for the device.
- %c is the string returned by the external program defined by PROGRAM.
- %s{attribute} is the content of a sysfs (/sys) attribute.
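As an illustration of these substitutions, a hypothetical rule (written in the same `KERNEL="..."` style as the 40-multipath.rules file shown later in this lesson) that keeps the kernel name and adds a numbered symlink for each partition:

```
# NAME uses %k (kernel name, e.g. sda2); SYMLINK uses %n (kernel number, e.g. 2)
KERNEL="sd[a-z][0-9]", NAME="%k", SYMLINK="iscsi/part%n"
```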
Multipathing and Device Mapper
Multipathing tools aggregate a device's independent paths into a single logical path.
Multipathing is an important aspect of high-availability configurations.
RHEL4 incorporates a tool called Device Mapper (DM) to manage multipathed devices.
DM depends on the following packages:
- device-mapper
- udev
- device-mapper-multipath
The /etc/init.d/multipathd start command initializes Device Mapper.
Configuring Multipath
multipaths {
    multipath {
        wwid 14f70656e66696c000000000001000000d54
        alias ocr
        path_grouping_policy multibus
        path_checker readsector0
        path_selector "round-robin 0"
        failback manual
        no_path_retry 5
    }
    ...
}
Device Mapper Devices
DM devices are created as /dev/dm-n.
DM maps only whole drives.
If a drive has multiple partitions, the device mapping of each partition is handled by kpartx.
If the device is partitioned, the partitions appear as /dev/mapper/mpathNpN.
OCR and voting disks should use the /dev/dm-N or /dev/mapper/mpathNpN path formats.

# cat /etc/udev/rules.d/40-multipath.rules
KERNEL="dm-[0-9]*", PROGRAM="/sbin/mpath_get_name %M %m", \
  RESULT="?*", NAME="%k", SYMLINK="mpath/%c"
KERNEL="dm-[0-9]*", PROGRAM="/sbin/kpartx_get_name %M %m", \
  RESULT="?*", NAME="%k", SYMLINK="mpath/%c"
Storage Configuration Summary
Openfiler Storage Goal
As a class, create four volume groups called:
CG1
CG2
CG3
CG4
Within each group, create logical volumes called:
ocr
vote
asm
ocfs2
One volume group will be used per cluster group.
Configuring Openfiler Storage
1. In the Openfiler Storage Control Center:
   - Ensure that the iSCSI target service is enabled.
   - Create a physical volume partition: /dev/sdan (75.5 GB).
   - Create a new volume group (cgx) using the physical volume.
   - Create iSCSI logical volumes inside the volume group:
     ocr: 1000 MB
     vote: 1500 MB
     asm: 34000 MB
     ocfs2: 36000 MB
2. Edit the /etc/initiators.deny and /etc/initiators.allow files to restrict access.
3. Execute service iscsi-target status|restart.
Configuring Cluster Storage: iSCSI + fdisk
1. Check /etc/iscsi.conf: DiscoveryAddress=
2. Check /etc/hosts to make sure your filer is there.
3. Execute service iscsi restart + iscsi-ls.
4. Ensure that iSCSI is started on boot: chkconfig --add iscsi and chkconfig iscsi on.
5. Determine which logical volumes are attached to your block devices: /var/log/messages and iscsi-ls.
6. Use fdisk to partition each block device (one node only):
   - Two slices for OCR: 256 MB, 512 MB
   - Three slices for voting: 256 MB, 512 MB, 512 MB
   - Six slices for ASM: 2 x 2000 MB (primary), 4 x 7500 MB (extended)
   - OCFS2 uses the whole slice.
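As a quick sanity check before partitioning, the slice plan above can be totaled and compared against the logical volumes exported from the filer. A minimal sketch; all sizes come from the slice list and the Openfiler volume sizes in this lesson:

```shell
# total each slice family (sizes in MB) and compare with the exported volumes
ocr=$((256 + 512))           # two OCR slices  -> fits the 1000 MB ocr volume
vote=$((256 + 512 + 512))    # three voting slices -> fits the 1500 MB vote volume
asm=$((2*2000 + 4*7500))     # six ASM slices -> exactly the 34000 MB asm volume
echo "ocr=${ocr}MB vote=${vote}MB asm=${asm}MB"
```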
Configuring Cluster Storage: Multipathing
1. Comment out the blacklist in /etc/multipath.conf.
2. Execute service multipathd start + chkconfig multipathd on.
3. reboot
4. Determine the list of wwids associated with your logical volumes from /var/lib/multipath/bindings and multipath -v3.
5. Edit /etc/multipath.conf to add wwids and aliases in the multipaths section: ocr, vote, asm, and ocfs2.
Configuring Cluster Storage: Permissions
1. Associate /dev/mapper devices to /dev/raw/raw[1-5] for OCR and voting in /etc/sysconfig/rawdevices. (This is not strictly necessary in 10gR2; see the OUI bug.)
2. service rawdevices restart
3. Edit /etc/udev/permissions.d/40-rac.permissions:
   raw/raw[1-2]:root:oinstall:660
   raw/raw[3-5]:oracle:oinstall:660
4. Edit /etc/rc.local:
   chown oracle:dba /dev/mapper/asm*
   chmod 660 /dev/mapper/asm*
5. reboot
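The raw bindings in step 1 follow a fixed pattern (raw1-raw2 for the OCR devices, raw3-raw5 for the voting devices), so the /etc/sysconfig/rawdevices entries can be generated rather than typed by hand. A sketch, assuming the /dev/mapper aliases defined earlier in this lesson:

```shell
# emit one "raw device <-> mapper device" binding line per device
i=1
for dev in ocr1 ocr2 vote1 vote2 vote3; do
  echo "/dev/raw/raw$i /dev/mapper/$dev"
  i=$((i+1))
done
```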
Storage setup
Install Clusterware and ASM
Set up single-instance protection
Single-instance to RAC conversion
Cluster extension to three nodes
Create a logical standby DB
Rolling upgrade
Workshop II: Install Clusterware and ASM
1. Install Oracle Clusterware locally on the first and second nodes only.
2. Install database software locally on the first and second nodes.
3. Configure ASM with DATA and FRA disk groups.

You do not use ASMLib (see the viewlet for installation information).

[Diagram] Node a and Node b each run Oracle Clusterware 10.2.0.1 (CRS) and Oracle RAC/ASM 10.2.0.1 (+ASM1 on node a, +ASM2 on node b), sharing the DATA and FRA disk groups.
Installing Oracle Clusterware
1. Use the provided solution script to set up ssh on all three nodes.
2. Check your interfaces and storage devices on all three nodes: ifconfig, ls -al /dev/mapper, raw -qa, ls -al /dev/raw.
   In 10gR2, although you can use block devices to store the OCR and voting disks, OUI does not accept them.
3. Run OUI from /stage/10gR2/rdbms/clusterware:
   - Inventory = /u01/app/oracle/oraInventory
   - Home = /u01/crs1020
   - OCR and voting disks: /dev/raw/raw1,2,3,4,5
   - VIPCA needs to be executed manually.
Installing Oracle RAC Software and ASM
1. Run OUI from /stage/10gR2/rdbms/database:
   - Home = /u01/app/oracle/product/10.2.0/rac
   - Software installation only
   - First two nodes
2. Run dbca from /u01/app/oracle/product/10.2.0/rac/bin (export ORACLE_HOME first):
   - Use the first two nodes.
   - Create two disk groups used later: DATA: /dev/mapper/asm1 & 2; FRA: /dev/mapper/asm5,6,7 & 8
   - dbca automatically creates the listeners and ASM instances.
   - Use initialization parameter files: $ORACLE_HOME/dbs/init+ASM.ora
Storage setup
Install Clusterware and ASM
Set up single-instance protection
Single-instance to RAC conversion
Cluster extension to three nodes
Create a logical standby DB
Rolling upgrade
Workshop III: Set Up Single-Instance Protection
1. Install single-instance database software on the first and second nodes only.
2. Create a single-instance database on the first node.
3. Protect it against both instance and node failure using Oracle Clusterware.
4. Three possible starting scenarios:
   - No software installed
   - Single-instance database running on nonclustered ASM
   - Single-instance database running on a local file system
Installing Single-Instance Database Software
1. Run OUI from /stage/10gR2/rdbms/database:
   - Home = /u01/app/oracle/product/10.2.0/sgl
   - Software install only
   - To be done on the first and second nodes
2. Do the same on your second node: you could parallelize the work.
Creating Single-Instance Database
Run dbca from /u01/app/oracle/product/10.2.0/sgl:
- Store your database and Flash Recovery Area on ASM: DATA and FRA disk groups.
- Use the sample schemas.
You use shared storage to protect against node failures.
Protecting the Single-Instance Database by Using Oracle Clusterware
1. Copy init.ora to the second node (spfile on ASM).
2. On the second node:
   - Create a password file.
   - Create the $ORACLE_HOME/admin tree for your database.
3. Create an action script for your database: start/check/stop.
4. Store it on both nodes: /u01/crs1020/crs/public.
5. Create the profile, db: ci=30 ra=3 (sudo).
6. Register the DB with Oracle Clusterware (sudo).
7. Set DB permissions (sudo).
Protection Flow Diagrams

[Diagram] Node a and Node b each run Oracle Clusterware, Oracle RAC/ASM (+ASM1, +ASM2), and an Oracle single-instance home. The RDBA instance runs on node a; on an instance or node failure, Oracle Clusterware restarts it on node b.
Resource Action Script
#!/usr/bin/perl
#
# Copyright (c) 2002, 2006, Oracle. All rights reserved.
#
# action_db.pl
#
# This perl script is the action script for start / stop / check
# of the Oracle instance in a cold failover configuration.
#
# NAME
#   action_db.pl
#
# DESCRIPTION
#
# NOTES
#
# Usage:
#   rknapp 05/22/06 - Creation
#
# Environment settings; please modify and adapt

$ORA_CRS_HOME    = "/u01/crs1020";
$CRS_HOME_BIN    = "/u01/crs1020/bin";
$CRS_HOME_SCRIPT = "/u01/crs1020/crs/public";
$ORACLE_HOME_BIN = "/u01/app/oracle/product/10.2.0/sgldb_1/bin";
$ORACLE_HOME     = "/u01/app/oracle/product/10.2.0/sgldb_1";
$ORA_SID         = "OL8RDBA";
$ORA_USER        = "oracle";
if ($#ARGV != 0 ) {
  print "usage: start stop check required \n";
  exit;
}

$command = $ARGV[0];

# Database start stop check

# Start database
if ($command eq "start" ) {
  system ("
    su - $ORA_USER
# Stop database
if ($command eq "stop" ) {
  system ("
    su - $ORA_USER
sub check {
  my ($check_proc, $process) = @_;
  $process = "ora_pmon_$ORA_SID";
  $check_proc = qx(ps -aef | grep ora_pmon_$ORA_SID | grep -v grep | awk '{print \$8}');
  chomp($check_proc);
  if ($process eq $check_proc) {
    exit 0;
  } else {
    exit 1;
  }
}
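The check subroutine above boils down to "is there an ora_pmon_<SID> process in the ps listing". The same test can be sketched as a shell helper (a hypothetical function, not part of the course scripts; it reads a ps listing from stdin so it can be piped from ps -aef):

```shell
# return 0 (success) if a pmon process for the given SID appears on stdin
check_pmon() {
  sid="$1"
  grep -q "ora_pmon_${sid}"
}
# usage: ps -aef | check_pmon OL8RDBA && echo "instance up"
```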
Storage setup
Install Clusterware and ASM
Set up single-instance protection
Single-instance to RAC conversion
Cluster extension to three nodes
Create a logical standby DB
Rolling upgrade
Workshop IV: Single-Instance to RAC Conversion
1. Use dbca from the single-instance home to create a database template including data files.
2. Propagate the template files from the single-instance home to the RAC home.
3. Use dbca from the single-instance home to remove the existing database.
4. Use dbca from the RAC home to create a new database with the same name by using the new template.
Storage setup
Install Clusterware and ASM
Set up single-instance protection
Single-instance to RAC conversion
Cluster extension to three nodes
Create a logical standby DB
Rolling upgrade
Checking Prerequisites Before Oracle Clusterware Installation
Adding Oracle Clusterware to the New Node
Execute addNode.sh from oui/bin in the Oracle Clusterware home.
Configuring the New ONS
Use the racgons add_config command to add the new node's ONS configuration information to the OCR.
Adding ASM Home to the New Node (Optional)
Adding RAC Home to the New Node
Adding a Listener to the New Node (Optional)
Adding a Database Instance to the New Node
Storage setup
Install Clusterware and ASM
Set up single-instance protection
Single-instance to RAC conversion
Cluster extension to three nodes
Create a logical standby DB
Rolling upgrade
RAC and Data Guard Architecture
[Diagram: RAC and Data Guard architecture. Primary instances A and B write online redo files; LGWR and ARCn ship redo via RFS to standby receiving instance C, which writes standby redo files; standby apply instance D applies the changes to the standby database. Both the primary and the standby database have a Flash Recovery Area.]
Workshop VI: Creating a RAC Logical Standby
1. Form super groups (two groups together):
   The first group works on the primary database (three nodes).
   The second group works on the standby database (two nodes).
   Stop the standby database.
2. Install OCFS2 on the second cluster.
3. Create the physical standby database:
   The second group uses OCFS2 storage for the standby.
4. Convert your physical standby to a logical standby.
[Diagram: Each super group (Groups 1+2 and Groups 3+4) runs a primary RAC ASM database with instances PI1, PI2, PI3 and a standby RAC OCFS2 database with instances SI1 and SI2.]
Installing OCFS2 on the Second Cluster
1. On both nodes, install the OCFS2 RPMs (/stage/ocfs2):
   rpm -Uvh ocfs2-tools-1.2.2-1.i386.rpm
   rpm -Uvh ocfs2-2.6.9-42.ELsmp-1.2.3-1.i686.rpm
   rpm -Uvh ocfs2console-1.2.2-1.i386.rpm
2. Run the OCFS2 console (ocfs2console):
   Configure nodes (both nodes of your cluster).
   Propagate the configuration.
   From the first node only: format /dev/dm-1
   (cluster size set to 128 KB, block size set to 4 KB).
3. Run /etc/init.d/o2cb configure.
4. Edit /etc/fstab to add:
   /dev/mapper/ocfs2p1 /ocfs2 ocfs2 _netdev,datavolume,nointr 0 0
5. Run mount /ocfs2.
Creating the Physical Standby Database I
1. Create the /opt/standbydb/stage directory on one node on both primary and standby sites.
2. Change your redo log group configuration on the primary DB to use 10 MB redo logs.
3. Put the primary DB in ARCHIVELOG mode and FORCE LOGGING mode.
4. Back up the primary DB Pfile to the stage directory.
5. Back up the primary DB plus archive logs to the stage directory.
6. Back up the primary DB control file for standby to the stage directory.
7. Back up the primary network file to the stage directory.
8. Copy the stage directory to the first node of your standby site.
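Steps 2, 3, and 6 above map to SQL*Plus statements like the following (a minimal sketch under the workshop's 10 MB convention; the group numbers and the standby.ctl filename are illustrative, and the exact handling of ARCHIVELOG mode in RAC depends on your configuration):

```sql
-- Step 2: add 10 MB redo log groups, then drop the old ones (repeat per thread/group)
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 SIZE 10M;
ALTER DATABASE DROP LOGFILE GROUP 1;

-- Step 3: enable ARCHIVELOG and FORCE LOGGING
-- (in a 10.2 RAC database, mount with cluster_database=false while switching modes)
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE FORCE LOGGING;
ALTER DATABASE OPEN;

-- Step 6: back up the control file for the standby
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/opt/standbydb/stage/standby.ctl';
```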
Creating the Physical Standby Database II
1. Manually configure listeners on the standby site.
2. Use NETCA to register the listeners as CRS resources.
3. Create password files on the standby site.
4. Create DB directories on /ocfs2 to mimic ASM.
5. Create Spfiles on the standby site.
6. Create /admin/DB directories on the standby site.
7. RMAN: duplicate target database for standby.
8. Add standby redo log files: three groups per thread.
9. Issue alter database recover managed standby database using current logfile disconnect.
10. Register standby DB/instances as Clusterware resources.
11. Configure primary init parameters.
12. Add standby redo log files on the primary site.
13. Check that propagation is working.
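Steps 7-9 correspond to commands like these (a sketch; the standby log group numbers are illustrative, sized to match the 10 MB online logs):

```sql
-- Step 7 (RMAN, connected to the primary as TARGET and the standby instance as AUXILIARY):
DUPLICATE TARGET DATABASE FOR STANDBY;

-- Step 8 (SQL*Plus): three standby redo log groups per thread
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 SIZE 10M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 12 SIZE 10M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 13 SIZE 10M;
-- ...repeat for thread 2

-- Step 9: start real-time apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
```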
Standby Initialization Parameter: Example
*.audit_file_dest='/u01/app/oracle/product/10.2.0/rac/admin/OL8SDBA/adump'
*.background_dump_dest='/u01/app/oracle/product/10.2.0/rac/admin/OL8SDBA/bdump'
*.cluster_database_instances=2
*.cluster_database=true
*.control_files='+DATA/ol8rdba/controlfile/current.261.607490653','+FRA/ol8rdba/controlfile/current.256.607490655'
*.core_dump_dest='/u01/app/oracle/product/10.2.0/rac/admin/OL8SDBA/cdump'
*.db_create_file_dest='/ocfs2/STANDBY/DATA'
*.db_name='OL8RDBA'
*.db_recovery_file_dest='/ocfs2/STANDBY/FRA'
*.db_recovery_file_dest_size=33554432000
*.dispatchers='(PROTOCOL=TCP) (SERVICE=OL8SDBAXDB)'
OL8SDBA1.instance_number=1
OL8SDBA2.instance_number=2
OL8SDBA3.instance_number=3
*.remote_listener='LISTENERS_OL8SDBA'
*.remote_login_passwordfile='exclusive'
OL8SDBA1.thread=1
OL8SDBA2.thread=2
OL8SDBA3.thread=3
*.undo_management='AUTO'
OL8SDBA1.undo_tablespace='UNDOTBS1'
OL8SDBA2.undo_tablespace='UNDOTBS2'
OL8SDBA3.undo_tablespace='UNDOTBS3'
*.user_dump_dest='/u01/app/oracle/product/10.2.0/rac/admin/OL8SDBA/udump'
*.log_archive_config='dg_config=(OL8SDBA,OL8RDBA)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_dest_2='service=OL8RDBA valid_for=(online_logfiles,primary_role) db_unique_name=OL8RDBA'
*.db_file_name_convert='+DATA/ol8rdba/','/ocfs2/STANDBY/DATA/OL8SDBA/','+FRA/ol8rdba','/ocfs2/STANDBY/FRA/OL8SDBA'
*.log_file_name_convert='+DATA/ol8rdba/','/ocfs2/STANDBY/DATA/OL8SDBA/','+FRA/ol8rdba','/ocfs2/STANDBY/FRA/OL8SDBA'
*.standby_file_management=auto
*.fal_server='OL8RDBA'
*.fal_client='OL8SDBA'
*.service_names='OL8SDBA'
*.db_unique_name=OL8SDBA
Primary Initialization Parameter: Example
log_archive_config='dg_config=(OL8SDBA,OL8RDBA)'
log_archive_dest_2='service=OL8SDBA valid_for=(online_logfiles,primary_role) db_unique_name=OL8SDBA'
db_file_name_convert='/ocfs2/STANDBY/DATA/OL8SDBA/','+DATA/ol8rdba/','/ocfs2/STANDBY/FRA/OL8SDBA','+FRA/ol8rdba'
log_file_name_convert='/ocfs2/STANDBY/DATA/OL8SDBA/','+DATA/ol8rdba/','/ocfs2/STANDBY/FRA/OL8SDBA','+FRA/ol8rdba'
standby_file_management=auto
fal_server='OL8SDBA'
fal_client='OL8RDBA'
Converting Physical Standby to Logical Standby
1. Set the primary site to MAXIMIZE PERFORMANCE.
2. Stop standby recovery on the standby site.
3. Build a logical standby dictionary on the primary site.
4. Create +FRA/logical_arch to support standby archives.
5. Configure initialization parameters for logical standby on both sites.
6. Shut down the second standby instance.
7. Start up mount exclusive the first standby instance:
   alter database recover to logical standby DB
   startup mount force
   alter database open resetlogs
8. Start up the second instance.
9. alter database start logical standby apply immediate
10. Check propagation.
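The key statements behind these steps are (a sketch; OL8SDBA is the workshop's standby DB name):

```sql
-- Step 2, on the standby:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- Step 3, on the primary:
EXECUTE DBMS_LOGSTDBY.BUILD;

-- Step 7, on the first standby instance (mounted exclusive):
ALTER DATABASE RECOVER TO LOGICAL STANDBY OL8SDBA;
STARTUP MOUNT FORCE
ALTER DATABASE OPEN RESETLOGS;

-- Step 9, once the second instance is up:
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```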
Logical Standby Initialization Parameter: Example
Primary site:

standby_archive_dest='+FRA/logical_arch/'
log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=OL8RDBA'
log_archive_dest_state_1=enable
log_archive_dest_2='SERVICE=OL8SDBA VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) LGWR SYNC AFFIRM DB_UNIQUE_NAME=OL8SDBA'
log_archive_dest_state_2=enable
log_archive_dest_3='LOCATION=+FRA/logical_arch/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=OL8RDBA'
log_archive_dest_state_3=enable
log_archive_dest_10=''
parallel_max_servers=9

Standby site:

standby_archive_dest='/ocfs2/logical_arch/'
log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=OL8SDBA'
log_archive_dest_state_1=enable
log_archive_dest_2='SERVICE=OL8RDBA VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) LGWR SYNC AFFIRM DB_UNIQUE_NAME=OL8RDBA'
log_archive_dest_state_2=enable
log_archive_dest_3='LOCATION=/ocfs2/logical_arch/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=OL8SDBA'
log_archive_dest_state_3=enable
log_archive_dest_10=''
parallel_max_servers=9
Storage setup
Install Clusterware and ASM
Set up single-instance protection
Single-instance to RAC conversion
Cluster extension to three nodes
Create a logical standby DB
Rolling upgrade
Patches and the RAC Environment
[Diagram: Apply a patchset to /u01/app/oracle/product/db_1 on all nodes (ex0043, ex0044, ex0045).]
Inventory List Locks
The OUI employs a timed lock on the inventory list stored on a node.
The lock prevents an installation from changing a list being used concurrently by another installation.
If a conflict is detected, the second installation is suspended and the following message appears:

"Unable to acquire a writer lock on nodes ex0044.
Restart the install after verifying that there is no OUI session on any of the selected nodes."
OPatch Support for RAC: Overview
OPatch supports four different methods:
All-node patch: Stop all / Patch all / Start all
Minimize downtime: Stop/Patch all but one, Stop last, Start all down, Patch last/Start last
Rolling patch: Stop/Patch/Start one node at a time
Local patch: Stop/Patch/Start only one node

How OPatch selects which method to use:
If (users specify -local | -local_node)
    patching mechanism = Local
else if (users specify -minimize_downtime)
    patching mechanism = Min. Downtime
else if (patch is a rolling patch)
    patching mechanism = Rolling
else
    patching mechanism = All-node
Rolling Patch Upgrade Using RAC
[Diagram: Rolling patch upgrade using RAC, applicable to operating system upgrades, Oracle patch upgrades, and hardware upgrades:
1. Initial RAC configuration: clients on nodes A and B.
2. Clients on B, patch A.
3. Clients on A, patch B.
4. Upgrade complete: clients on A and B again.]
Downloading and Installing Patch Updates
Rolling Release Upgrade Using SQL Apply
[Diagram: Rolling release upgrade using SQL Apply, applicable to major release upgrades, patch set upgrades, and cluster software and hardware upgrades:
1. Initial SQL Apply setup: both sites at version n, logs ship.
2. Upgrade the standby site to version n+1: logs queue.
3. Run mixed to test: the version n primary ships logs to the version n+1 standby.
4. Switch over, then upgrade the new standby site: both sites at version n+1, logs ship.]
Workshop VII: Rolling Upgrade
1. Both groups: Perform a rolling upgrade of Oracle Clusterware to 10.2.0.2 (one instance at a time).
2. Upgrade your logical standby database to 10.2.0.2.
3. Switch over.
4. Upgrade your new logical standby database to 10.2.0.2.
5. Switch back.
Oracle Clusterware Rolling Upgrade: Initial Status
[Diagram: Initial status. Primary site: nodes a, b, and c each run Oracle Clusterware 10.2.0.1 and Oracle RAC/ASM 10.2.0.1, hosting RDBA1/+ASM1, RDBA2/+ASM2, and RDBA3/+ASM3 under CRS. Standby site: nodes a and b each run Oracle Clusterware 10.2.0.1 and Oracle RAC/ASM 10.2.0.1, hosting SDBA1 and SDBA2 under CRS.]
Oracle Clusterware Rolling Upgrade: Primary Site
1. Run unzip p4547817_10202_LINUX.zip.
2. Run runInstaller from the Disk1 directory:
Choose your Oracle Clusterware home installation.
Choose all three nodes.
3. Repeat on each node, one after the other:
crsctl stop crs
/u01/crs1020/install/root102.sh
[Diagram: On each node a/b/c in turn: stop CRS (crsctl stop crs), run root102.sh, and the node moves from Oracle Clusterware 10.2.0.1 to 10.2.0.2 while Oracle RAC/ASM stays at 10.2.0.1; the RDBA and +ASM instances restart under the upgraded CRS.]
Oracle Clusterware Rolling Upgrade: Standby Site
1. Stop the logical standby apply engine.
2. Optional: Stop the logical standby database.
3. Unzip p4547817_10202_LINUX.zip.
4. Run runInstaller from the Disk1 directory:
   Choose your Oracle Clusterware home installation.
   Choose both nodes.
5. Repeat on each node, one after the other:
   crsctl stop crs
   /u01/crs1020/install/root102.sh
   Note: There is no need to restart the logical standby database.
6. Restart the logical standby apply engine.
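Steps 1 and 6 are the standard 10.2 SQL Apply stop/start statements (a sketch):

```sql
-- Step 1: stop the apply engine before upgrading Clusterware
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- Step 6: restart it once root102.sh has run on both standby nodes
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```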
Oracle Clusterware Rolling Upgrade: Final Status
[Diagram: Final status. Primary site nodes a, b, and c and standby site nodes a and b all run Oracle Clusterware 10.2.0.2 with Oracle RAC/ASM still at 10.2.0.1; RDBA1-3/+ASM1-3 and SDBA1-2 run under CRS.]
Standby Database Upgrade
[Diagram: Standby database upgrade (steps 1-4):
1. Both sites at 10.2.0.1, logs ship; create a database link to the standby using the SYSTEM account.
2. Stop the logical standby apply engine and the standby DB; logs queue.
3. On the standby: 1) apply patch 5287523 (10.2.0.1), 2) upgrade the database home to 10.2.0.2, 3) execute catupgrd.sql and utlrp.sql, 4) apply patch 5287523 (10.2.0.2); logs keep queuing.
4. Start the standby DB and the logical standby apply engine; standby now at 10.2.0.2, logs ship.]
Switching Over
[Diagram: Switching over (steps 5-8), primary at 10.2.0.1 and standby at 10.2.0.2:
5. Log off users and switch over to the logical standby; stop the second and third primary instances and the second standby instance, and disable their threads; logs queue.
6. Switch over to logical primary and log on users.
7. Enable the second (and third) threads again and restart the corresponding instances.
8. Start the new standby DB and the logical standby apply engine; logs ship again.]
Old Primary Database Upgrade
[Diagram: Old primary database upgrade (steps 9-12), new primary at 10.2.0.2:
9. Stop the old primary (now standby) DB, still at 10.2.0.1; logs queue.
10. On it: 1) apply patch 5287523 (10.2.0.1), 2) upgrade the database home to 10.2.0.2, 3) execute catupgrd.sql and utlrp.sql, 4) apply patch 5287523 (10.2.0.2).
11. Restart the standby DB and the logical standby apply engine.
12. Both sites at 10.2.0.2; logs ship.]
Switching Back
[Diagram: Switching back (steps 13-16), both sites at 10.2.0.2 with logs shipping:
13. Stop the second (and third) instances on each site and disable their threads.
14. Prepare to switch over: to logical standby on the current primary, to primary on the current standby.
15. Commit to switch over on both sites.
16. Restart the logical standby apply engine, and restart/enable the stopped instances and threads; logs ship.]
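The prepare/commit boxes above correspond to the 10.2 SQL Apply role-transition statements (a sketch of the core commands, not the full workshop procedure):

```sql
-- On the current primary:
ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY;
-- On the current logical standby:
ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY;
-- On the primary, once both sides are ready:
ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;
-- On the standby:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
```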
Using a Test Environment
The most common cause of downtime is change. Test your changes on a separate test cluster before changing your production environment.
[Diagram: A RAC database on the production cluster and a matching RAC database on the test cluster.]
Optional Workshop: Six-Node Cluster
1. Use the same groups as the previous workshop.
2. Reconfigure OCFS2 on all six nodes:
cg1.ocfs2 visible from six nodes
cg3.ocfs2 visible from six nodes
3. Install Oracle Clusterware on all six nodes.
4. Install RAC and Database on all six nodes using OCFS2 storage.
[Diagram: Groups 1+2 and Groups 3+4 each run a RAC DB on /ocfs2/big with instances I1 through I6.]
Summary
In this workshop, you should have learned how to:
Install and configure iSCSI storage on both client clusters and Openfiler servers
Install and configure Oracle Clusterware and RAC on more than two clustered nodes
Use Oracle Clusterware to protect a single-instance database
Convert a single-instance database to RAC
Extend Oracle Clusterware and RAC to more than two nodes
Create a RAC primary-physical/logical standby database environment
Upgrade Oracle Clusterware, RAC, and RAC databases from 10.2.0.1 to 10.2.0.2 in a rolling fashion