Oracle Clusterware 11GR2 Presented By : Qari Kamran Siddique Senior Database Consultant CGI

Clusterware 11.2


Page 1: Clusterware 11.2

Oracle Clusterware 11GR2

Presented By: Qari Kamran Siddique, Senior Database Consultant, CGI

Page 2: Clusterware 11.2

What is a CLUSTER?

- Enables servers to communicate with each other as a COLLECTIVE UNIT.
- Software that makes clustered hardware run multiple instances against ONE database.
- Database files are stored on disks that are either physically or logically connected to each node.
- The cluster software hides the structure.
- Disks are available for read and write by all nodes.
- The operating system is the same on each machine.
- This architecture lets users and applications benefit from the processing power of multiple machines.
- If one node or instance crashes, applications can still access the database through a surviving node.

Page 3: Clusterware 11.2
Page 4: Clusterware 11.2
Page 5: Clusterware 11.2

Benefits

- Scalability of applications
- Use of less expensive commodity hardware
- Ability to fail over
- Ability to increase capacity over time by adding servers
- Ability to program the startup of applications in a planned order
- Ability to monitor processes and restart them if they stop
- Resource control

Page 6: Clusterware 11.2

More Benefits

- Eliminate unplanned downtime due to hardware failures.
- Reduce or eliminate planned downtime for software maintenance.
- Increase throughput for cluster-aware applications.
- Reduce the total cost of ownership.

Page 7: Clusterware 11.2

Basic RAC Components (Oracle 10g R1/R2, 11g R1)

- Oracle Clusterware
- Shared Storage
- Oracle RAC Database

Page 8: Clusterware 11.2

Basic RAC Components (Oracle 11g R2)

- Grid Infrastructure
- RAC Database

Page 9: Clusterware 11.2

Oracle Clusterware Hardware Concepts and Requirements

- One or more servers connected to each other by a network, called the "INTERCONNECT".
- At least two network interface cards: one for a public network and one for a private network.
- The interconnect is a private network using a switch (or multiple switches) that only the nodes in the cluster can access.
- Crossover cables are not supported.
- At least two network interfaces for the public network, bonded to provide one address.
- At least two network interfaces for the private interconnect network.
- Oracle Clusterware supports NFS, iSCSI, Direct Attached Storage (DAS), Storage Area Network (SAN), and Network Attached Storage (NAS) storage.

Page 10: Clusterware 11.2

Oracle Clusterware Hardware Concepts and Requirements

(Continued)

- Consider the I/O requirements of the entire cluster when choosing your storage subsystem.
- At least one local disk that is internal to the server.
- This disk is used for the operating system and the Oracle software binaries.
- Local binaries increase high availability by isolating each node from binary corruption elsewhere.
- Local binaries also allow rolling upgrades, which reduce downtime.

Page 11: Clusterware 11.2

Oracle Clusterware Operating System Concepts and Requirements

(Product Certification)

Page 12: Clusterware 11.2

Software Concepts

Voting Disks

- Oracle Clusterware uses voting disk files to determine which nodes are members of a cluster.
- Can be configured on Oracle ASM, or on shared storage (raw volumes).
- With ASM, the redundancy level of the disk group defines the number of voting disks.
- Without ASM => configure a minimum of THREE voting disks for HA.
- Use external redundancy.
- Do not use more than five voting disks.
- The maximum number of voting disks supported is 15.
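The odd counts recommended above follow from a majority rule: a node must be able to access more than half of the configured voting disks to remain in the cluster. A minimal sketch (illustrative only, not Oracle code) shows why even counts add no fault tolerance:

```python
# Sketch of the voting-disk majority rule (illustrative, not Oracle code):
# a node stays in the cluster only if it can access a strict majority
# of the configured voting disks.

def quorum(voting_disks: int) -> int:
    """Minimum number of voting disks a node must access."""
    return voting_disks // 2 + 1

def tolerated_failures(voting_disks: int) -> int:
    """How many voting disks can be lost while a majority survives."""
    return voting_disks - quorum(voting_disks)

for n in (1, 3, 4, 5):
    print(f"{n} voting disks -> quorum {quorum(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")
```

Note that 4 disks tolerate the same single failure as 3, which is why odd numbers (3, 5) are the practical choices.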

Page 13: Clusterware 11.2

Software Concepts

Oracle Cluster Registry (OCR)

- Stores and manages information about the components that Oracle Clusterware controls, e.g. RAC databases, listeners, virtual IP addresses (VIPs), services, and applications.
- Can be configured on Oracle ASM, or on shared storage (raw volumes).
- Stores configuration information in a series of key-value pairs in a tree structure.
- Multiple OCR locations (multiplexing) should be defined; you can have up to five OCR locations.
- Each OCR location must reside on shared storage that is accessible by all of the nodes in the cluster.
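The "key-value pairs in a tree structure" idea can be sketched with dotted keys stored in a nested dictionary. The keys below are invented examples, not real OCR keys:

```python
# Illustrative sketch of a key-value tree like the one OCR uses.
# The dotted key names here are made up for the example.

def set_key(tree: dict, path: str, value) -> None:
    """Store a dotted key such as 'DATABASE.cgi.enabled' in a nested dict."""
    *parents, leaf = path.split(".")
    for part in parents:
        tree = tree.setdefault(part, {})
    tree[leaf] = value

def get_key(tree: dict, path: str):
    """Walk the tree along the dotted path and return the stored value."""
    for part in path.split("."):
        tree = tree[part]
    return tree

registry = {}
set_key(registry, "DATABASE.cgi.enabled", "true")
set_key(registry, "DATABASE.cgi.instances", "2")
print(get_key(registry, "DATABASE.cgi.instances"))  # prints: 2
```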

Page 14: Clusterware 11.2

Software Concepts

Virtual Internet Protocol Address (VIP)

- Oracle RAC requires a virtual IP address for each server in the cluster.
- It is an unused IP address on the same subnet as the Local Area Network (LAN).
- This address is used by applications to connect to the RAC database (prior to 11g R2).
- If a node fails, its VIP fails over to another node in the cluster, providing an immediate "node down" response to connection requests.

Page 15: Clusterware 11.2

Oracle Clusterware Network Configuration Concepts

- Grid Infrastructure simplifies setup through self-management of the network requirements for the cluster.
- Oracle Clusterware 11g release 2 (11.2) supports the use of Dynamic Host Configuration Protocol (DHCP) for all private interconnect addresses, as well as for most of the VIP addresses.
- DHCP provides dynamic configuration of the hosts' IP addresses.
- 11g R2 Clusterware adds the Oracle Grid Naming Service (GNS) to the cluster.

Page 16: Clusterware 11.2

Oracle Clusterware Network Configuration Concepts (Continued)

Grid Naming Service (GNS)

- Linked to the corporate Domain Name Service (DNS).
- Clients can easily connect to the cluster.
- Requires a DHCP service on the public network.
- Obtain an IP address on the public network for the GNS VIP.
- DNS uses the GNS VIP to forward requests to the cluster.
- Delegate a subdomain in the network to the cluster.
- The subdomain forwards all requests for addresses in the subdomain to the GNS VIP.

Page 17: Clusterware 11.2

Grid Naming Service (GNS)

Reference

DNS and DHCP Setup Example for Grid Infrastructure GNS [ID 946452.1]

Page 18: Clusterware 11.2

Network Configuration Concepts (Continued)

Single Client Access Name (SCAN)

- A virtual hostname provided for all clients connecting to the cluster (as opposed to the per-node VIP hostnames in 10g and 11g R1).
- A domain name registered to at least one and up to three IP addresses, either in the Domain Name Service (DNS) or the Grid Naming Service (GNS).
- By default, the name used as the SCAN is also the name of the cluster.
- For installation to succeed, the SCAN must resolve to at least one address.
- Do not configure SCAN VIP addresses in the hosts file. If you do use the hosts file to resolve the SCAN name, you can have only one SCAN IP address.
- If the hosts file is used, the Cluster Verification Utility fails at the end of installation.

Page 19: Clusterware 11.2

Network Configuration Concepts (Continued)

- DNS round-robin resolution to three addresses is RECOMMENDED.
- Add or remove nodes without reconfiguring clients.
- Adds location independence for the databases, so that client configuration does not depend on which nodes are running a particular database.
- A local listener (LISTENER) on all nodes listens on the local VIP, and SCAN listeners (LISTENER_SCAN1, up to three cluster-wide) listen on the SCAN VIP(s).

Connection examples:

system/manager@cgi1-scan:1521/apps
jdbc:oracle:thin:@cgi-scan:1521/apps
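The round-robin behaviour recommended above can be sketched: each lookup of the SCAN name returns the next of its registered addresses in turn, so connections spread across the SCAN listeners. This is an illustration of the idea only; real resolution is done by DNS or GNS, not client code, and the IPs are the deck's example addresses:

```python
# Sketch of DNS round-robin over three SCAN addresses (illustrative only).
from itertools import cycle

scan_ips = ["192.168.182.109", "192.168.182.110", "192.168.182.108"]
resolver = cycle(scan_ips)

def resolve_scan() -> str:
    """Each lookup of the SCAN name returns the next address in turn."""
    return next(resolver)

# Three client connections land on three different SCAN listeners:
print([resolve_scan() for _ in range(3)])
```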

Page 20: Clusterware 11.2
Page 21: Clusterware 11.2

Network Configuration Concepts (Continued)

Sample TNS entry for SCAN

TEST.CGI.COM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = SCAN-TEST.CGI.COM)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = 11GR2TEST.CGI.COM))
  )

Sample TNS entry without SCAN

TEST.CGI.COM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = tcp)(HOST = TEST1-vip.CGI.COM)(PORT = 1521))
      (ADDRESS = (PROTOCOL = tcp)(HOST = TEST2-vip.CGI.COM)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = 11GR2TEST.CGI.COM))
  )

Page 22: Clusterware 11.2

Network Configuration Concepts (Continued)

The node VIP and the three SCAN VIPs are obtained from the DHCP server when using GNS. If a new server joins the cluster, then Oracle Clusterware dynamically obtains the required VIP address from the DHCP server, updates the cluster resource, and makes the server accessible through GNS.

$ srvctl config scan
SCAN name: cgi-scan, Network: 192.168.182.0/255.255.255.0/
SCAN VIP name: scan1, IP: /192.168.182.109
SCAN VIP name: scan2, IP: /192.168.182.110
SCAN VIP name: scan3, IP: /192.168.182.108
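Output like the `srvctl config scan` sample above can be picked apart with a small script, e.g. to list the SCAN VIP addresses. The line format is assumed from that sample:

```python
# Sketch: extracting SCAN VIP addresses from `srvctl config scan` output
# (line format assumed from the sample shown above).
import re

output = """\
SCAN name: cgi-scan, Network: 192.168.182.0/255.255.255.0/
SCAN VIP name: scan1, IP: /192.168.182.109
SCAN VIP name: scan2, IP: /192.168.182.110
SCAN VIP name: scan3, IP: /192.168.182.108
"""

vips = re.findall(r"SCAN VIP name: (\w+), IP: /([\d.]+)", output)
for name, ip in vips:
    print(name, ip)  # prints each SCAN VIP name and address
```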

Page 23: Clusterware 11.2

Node Name | Instance Name | Database Name
----------|---------------|------------------
cginode1  | cgirac1       | cgi.dbservices.ca
cginode2  | cgirac2       |

Node Name | Public IP     | Private IP  | VIP
----------|---------------|-------------|--------------
cginode1  | 192.168.1.151 | 192.168.2.1 | 192.168.1.153
cginode2  | 192.168.1.152 | 192.168.2.2 | 192.168.1.154

SCAN Name | IP
----------|--------------
SCAN VIP1 | 192.168.2.201
SCAN VIP2 | 192.168.2.202
SCAN VIP3 | 192.168.2.203
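An address plan like the one above can be sanity-checked mechanically: node VIPs must sit on the public subnet, and the interconnect addresses on their own private subnet. A hedged sketch using the standard library, assuming /24 subnets for the deck's example addresses:

```python
# Sanity-check of the node address plan above (assuming /24 subnets):
# public IPs and VIPs share the public subnet; the interconnect is separate.
import ipaddress

public = ipaddress.ip_network("192.168.1.0/24")
private = ipaddress.ip_network("192.168.2.0/24")

plan = {
    "cginode1": {"public": "192.168.1.151", "private": "192.168.2.1",
                 "vip": "192.168.1.153"},
    "cginode2": {"public": "192.168.1.152", "private": "192.168.2.2",
                 "vip": "192.168.1.154"},
}

for node, addrs in plan.items():
    assert ipaddress.ip_address(addrs["public"]) in public
    assert ipaddress.ip_address(addrs["vip"]) in public      # VIP on public subnet
    assert ipaddress.ip_address(addrs["private"]) in private # interconnect isolated
    print(node, "ok")
```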

Page 24: Clusterware 11.2

Oracle Clusterware startup sequence

Do not worry... that is the Clusterware's job!

(image from the "Oracle Clusterware Administration and Deployment Guide")

Page 25: Clusterware 11.2

Oracle Grid Infrastructure: Grid Home

- Grid Infrastructure home => Oracle ASM + Oracle Clusterware
- A single Oracle home for both
- OCR and voting disk files can be placed either on Oracle ASM, or on a cluster file system or NFS system
- Installing Oracle Clusterware files on raw or block devices is no longer supported

Page 26: Clusterware 11.2

Oracle Grid Infrastructure: Grid Home

Oracle Clusterware and Oracle ASM are installed into a single home directory, which is called the Grid Home.

# su - grid
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
# Specifies the directory containing the Oracle Grid Infrastructure software.
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME

Page 27: Clusterware 11.2

Oracle Automatic Storage Management Cluster File System (Oracle ACFS)

- A new multi-platform, scalable file system and storage management solution
- Provides dynamic file system resizing
- Improved performance
- Provides storage reliability through the mirroring and parity protection that Oracle ASM provides

Page 28: Clusterware 11.2

Cluster Time Synchronization Service

Ensures that there is a synchronization service in the cluster.

If the Network Time Protocol (NTP) is not found during cluster configuration, then CTSS is configured to ensure time synchronization.

Page 29: Clusterware 11.2
Page 30: Clusterware 11.2

Mandatory OS Users and Groups

- Oracle Inventory group (typically oinstall) => must be the primary group for Oracle software installation owners.
- Oracle software owner => typically oracle.
- OSDBA group => typically dba, for database authentication (SYSDBA + SYSASM).

Page 31: Clusterware 11.2

Recommended Approach for OS Users and Groups
Reference: Oracle Grid Infrastructure Installation Guide

- Grid Infrastructure software owner => grid
- Oracle RAC software owner => oracle
- Separate group for Oracle ASM => OSASM, the Oracle Automatic Storage Management group (typically asmadmin). Members of this group connect to ASM using SYSASM OS authentication.
- ASM Database Administrator group (OSDBA for ASM) => members are granted read and write access to files managed by Oracle ASM.
- OSOPER for ASM group (typically asmoper) => members of this group are granted access to a subset of the SYSASM privileges.

Page 32: Clusterware 11.2

Example of Creating Role-allocated Groups, Users, and Paths

# groupadd -g 1000 oinstall
# groupadd -g 1020 asmadmin
# groupadd -g 1021 asmdba
# groupadd -g 1031 dba1
# groupadd -g 1041 dba2
# groupadd -g 1022 asmoper
# useradd -u 1100 -g oinstall -G asmadmin,asmdba grid
# useradd -u 1101 -g oinstall -G dba1,asmdba oracle1
# useradd -u 1102 -g oinstall -G dba2,asmdba oracle2
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/grid
# chown -R grid:oinstall /u01
# mkdir -p /u01/app/oracle1
# chown oracle1:oinstall /u01/app/oracle1
# mkdir -p /u01/app/oracle2
# chown oracle2:oinstall /u01/app/oracle2

Page 33: Clusterware 11.2

Oracle Base Directory path

# mkdir -p /u01/app/11.2.0/grid
# chown grid:oinstall /u01/app/11.2.0/grid
# chmod -R 775 /u01/app/11.2.0/grid

# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle

Page 34: Clusterware 11.2

Storage Options

Page 35: Clusterware 11.2
Page 36: Clusterware 11.2
Page 37: Clusterware 11.2
Page 38: Clusterware 11.2
Page 39: Clusterware 11.2
Page 40: Clusterware 11.2
Page 41: Clusterware 11.2
Page 42: Clusterware 11.2
Page 43: Clusterware 11.2
Page 44: Clusterware 11.2
Page 45: Clusterware 11.2
Page 46: Clusterware 11.2

What's Next!!!

- Administering Oracle Clusterware, ASM and RAC databases
- Oracle RAC Backup and Recovery
- RAC Services
- RAC, Oracle Clusterware and ASM tuning
- Adding and Deleting RAC Nodes
- Patch Management in RAC
- Oracle Clusterware Cloning
- Application high availability with Clusterware
- Oracle Clusterware utilities usage
- Whole Clusterware stack upgrade to 11g R2
- RAC + Clusterware + ASM... tips & tricks... and Troubleshooting

Page 47: Clusterware 11.2

Questions ???