
Oracle Maximum Availability Architecture

Ricardo Portilho Proni
[email protected]

This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Brazil license.

To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/br/.

High Availability

• Reliability
• Recoverability
• Timely error detection
• Operational continuity

Characteristics of High Availability

• Business impact analysis
• Cost of downtime
• Recovery Time Objective (RTO)
• Recovery Point Objective (RPO)
• Manageability goal
• Total Cost of Ownership (TCO)
• Return on Investment (ROI)

High Availability Analysis
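An availability target translates directly into an allowed-downtime budget, which is the starting point for any RTO discussion. A back-of-the-envelope sketch (the 99.99% target is an assumed example, not a course figure):

```shell
# Back-of-the-envelope: how much downtime per year a given availability allows.
availability=99.99                      # assumed availability target, in percent
minutes_per_year=525600                 # 365 * 24 * 60
downtime=$(awk -v a="$availability" -v m="$minutes_per_year" \
  'BEGIN { printf "%.1f", (100 - a) / 100 * m }')
echo "Allowed downtime per year at ${availability}%: ${downtime} minutes"
```

At "four nines" this budget is under an hour per year, which is why the RTO and RPO numbers drive the architecture choices that follow.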

• Tier 1 (Billing, Sales)
• Tier 2 (Purchasing, Inventory)
• Tier 3 (BI, Development)

Systems and High Availability

• Maximum tolerated downtime.
• Maximum tolerated outage frequency.
• Easily measurable costs (lost sales, idle employees, contractual penalties)
• Hard-to-measure costs (lawsuits)
• Unmeasurable costs (negative publicity, angry customers)

Costs and High Availability
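For the easily measurable category, the arithmetic is simply downtime multiplied by hourly losses. An illustrative sketch (all figures are assumptions, not course data):

```shell
# Illustrative only: measurable outage cost = downtime * hourly losses.
downtime_hours=3
revenue_per_hour=20000            # hypothetical lost sales per hour
idle_staff_cost_per_hour=1500     # hypothetical cost of idle employees per hour
cost=$(( downtime_hours * (revenue_per_hour + idle_staff_cost_per_hour) ))
echo "Measurable cost of a ${downtime_hours}-hour outage: ${cost}"
```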

Deploying High Availability

• Fast-Start Fault Recovery
• Oracle Restart
• Oracle Real Application Clusters and Oracle Clusterware
• Oracle RAC One Node
• Oracle Data Guard
• Oracle GoldenGate / Oracle Streams
• Oracle Flashback Technology
• Oracle Automatic Storage Management
• Fast Recovery Area
• Recovery Manager
• Data Recovery Advisor
• Oracle Secure Backup
• Oracle Security Features
• LogMiner
• Oracle Exadata Storage Server Software (Exadata Cell)
• Oracle Exadata Database Machine
• Oracle Database File System (DBFS)
• Oracle Automatic Storage Management Cluster File System (Oracle ACFS)
• Client Failover
• Automatic Block Repair
• Corruption Prevention, Detection, and Repair

Oracle Solutions for High Availability

• Operating system and hardware upgrades -> Oracle RAC
• Oracle Database patches -> Oracle RAC
• Oracle Grid Infrastructure upgrades and patches -> Oracle RAC
• Storage migration -> Oracle ASM
• Migrating to Exadata storage -> Oracle MAA best practices
• Upgrading Exadata storage -> Exadata Patch Manager
• Migrating a single-instance database to Oracle RAC -> Oracle Grid Infrastructure
• Migrating to Oracle ASM -> Oracle Data Guard
• Migrating a single-instance database to Oracle RAC -> Oracle Data Guard
• Patch set and database upgrades -> Oracle Data Guard using SQL Apply
• Oracle interim patches, Oracle Clusterware upgrades and patches, Oracle ASM upgrades, operating system and hardware upgrades -> Oracle Data Guard Standby-First Patch Apply
• Migration across Windows and Linux -> Oracle Data Guard
• Platform migration across same-endian platforms -> Transportable Database
• Platform migration across different-endian platforms -> Transportable Tablespace
• Patch set and database upgrades, platform migration, rolling upgrades, and when different character sets are required -> Oracle GoldenGate and Oracle Streams
• Application upgrades -> Online Application Maintenance and Upgrades

Planned Downtime

• Site Failures -> Oracle Data Guard
• Site Failures -> Oracle GoldenGate and Oracle Streams
• Site Failures -> Recovery Manager
• Computer Failures -> Oracle Real Application Clusters and Oracle Clusterware
• Computer Failures -> Oracle RAC One Node
• Computer Failures -> Fast-Start Fault Recovery
• Computer Failures -> Oracle Data Guard
• Computer Failures -> Oracle GoldenGate and Oracle Streams
• Storage Failures -> Oracle Automatic Storage Management
• Storage Failures -> Oracle Data Guard
• Storage Failures -> RMAN with Fast Recovery Area and Oracle Secure Backup
• Storage Failures -> Oracle GoldenGate and Oracle Streams
• Data Corruption -> Oracle Exadata Storage Server Software (Exadata Cell) and Oracle ASM
• Data Corruption -> Corruption Prevention, Detection, and Repair
• Data Corruption -> Data Recovery Advisor and RMAN with Fast Recovery Area
• Data Corruption -> Oracle Data Guard
• Data Corruption -> Oracle GoldenGate and Oracle Streams
• Human Errors -> Oracle Security Features
• Human Errors -> Oracle Flashback Technology
• Human Errors -> LogMiner
• Lost Writes -> Oracle Data Guard, RMAN, DB_LOST_WRITE_PROTECT
• Lost Writes -> Oracle Data Guard, Oracle Exadata Storage Server Software (Exadata Cell)
• Hangs or slowdowns -> Oracle Database and Oracle Enterprise Manager

Unplanned Downtime

High Availability Overview:
http://docs.oracle.com/database/121/HAOVW/toc.htm

High Availability Best Practices:
http://docs.oracle.com/database/121/HABPT/toc.htm

Further Information

Production Environment
Oracle RAC: nerv01 and nerv02 / nerv03 and nerv04 / nerv05 and nerv06 / nerv07 and nerv08
NFS: nerv09
ASM: nerv09
DNS: nerv09

Contingency Environment
Oracle Data Guard Physical Standby: nerv11 / nerv12 / nerv13 / nerv14
NFS: nerv10
ASM: nerv10
DNS: nerv10

Observer Environment
Oracle Client: observer-rac01 / observer-rac02 / observer-rac03 / observer-rac04

Scenario 1: Oracle RAC + Oracle Data Guard

Production Environment
Oracle RAC: nerv01 and nerv02 / nerv03 and nerv04 / nerv05 and nerv06 / nerv07 and nerv08
NFS: nerv09
ASM: nerv09
DNS: nerv09

Contingency Environment
Oracle Database: nerv11 / nerv12 / nerv13 / nerv14
NFS: nerv10
ASM: nerv10
DNS: nerv10

Scenario 2: Oracle RAC + Oracle GoldenGate

Production Environment
Oracle RAC: nerv01 and nerv02 / nerv03 and nerv04 / nerv05 and nerv06 / nerv07 and nerv08
NFS: nerv09
ASM: nerv09
DNS: nerv09

Contingency Environment
Oracle RAC: nerv11 / nerv12 / nerv13 / nerv14
NFS: nerv10
ASM: nerv10
DNS: nerv10

Observer Environment
NFS: nerv15

Scenario 3: Oracle RAC Extended

Lab 1 – OEL 6 Installation

Hands On!

Lab 1.1: OEL 6 Installation

On machines nerv01, nerv02, and nerv11, install OEL.

- Screen 1: Install or upgrade an existing system
- Screen 2: Skip
- Screen 3: Next
- Screen 4: English (English), Next
- Screen 5: Brazilian ABNT2, Next
- Screen 6: Basic Storage Devices, Next
- Screen 7: Fresh Installation, Next
- Screen 8: nerv01.localdomain, Next
- Screen 9: America/Sao Paulo, Next
- Screen 10: Nerv2015, Nerv2015, Next
- Screen 11: Create Custom Layout, Next

Lab 1.2: OEL 6 Installation

- Screen 12: Create the partitions as below, then Next:
sda1  1024 MB    /boot
sda2  100000 MB  /
sda3  20000 MB   /home
sda5  16384 MB   swap
sda6  10000 MB   /var
sda7  10000 MB   /tmp
sda8  remaining space  /u01

- Screen 13: Format
- Screen 14: Write changes to disk
- Screen 15: Next
- Screen 16: Minimal
- Screen 17: Reboot
- Remove the DVD.

Lab 2 – OEL 6 Configuration

Hands On!

On machines nerv01, nerv02, and nerv11, configure the network interfaces.

Lab 2.1 – OEL 6 Configuration

On machines nerv01, nerv02, and nerv11, update the operating system and install the prerequisites.
# service network restart
# yum -y update
# yum -y install oracle-rdbms-server-12cR1-preinstall
# yum -y install oracleasm-support
# yum -y install unzip wget iscsi-initiator-utils java-1.7.0-openjdk parted
# yum -y install unixODBC unixODBC.i686 unixODBC-devel unixODBC-devel.i686

# wget http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el6.x86_64.rpm
# rpm -ivh oracleasmlib-2.0.12-1.el6.x86_64.rpm

On machines nerv01, nerv02, and nerv11, remove the DNS 8.8.8.8 from the eth0 network interface.

On machines nerv01, nerv02, and nerv11, change the following line in the /etc/fstab file.
tmpfs /dev/shm tmpfs defaults,size=4g 0 0

Lab 2.2 – OEL 6 Configuration

On machines nerv01, nerv02, and nerv11, APPEND TO THE END of the /etc/hosts file:
# Public
192.168.0.101 nerv01.localdomain nerv01
192.168.0.102 nerv02.localdomain nerv02
192.168.0.121 nerv11.localdomain nerv11
# Private
192.168.1.101 nerv01-priv.localdomain nerv01-priv
192.168.1.102 nerv02-priv.localdomain nerv02-priv
192.168.1.121 nerv11-priv.localdomain nerv11-priv
# Virtual
192.168.0.111 nerv01-vip.localdomain nerv01-vip
192.168.0.112 nerv02-vip.localdomain nerv02-vip
192.168.0.131 nerv11-vip.localdomain nerv11-vip
# Storage
192.168.0.201 nerv09.localdomain nerv09
192.168.0.202 nerv10.localdomain nerv10
# Client
192.168.0.191 observer-rac01.localdomain observer-rac01
192.168.0.195 nerv15.localdomain nerv15

Lab 2.3 – OEL 6 Configuration
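An optional sanity check (not part of the original lab): the Grid Infrastructure installer is unforgiving about /etc/hosts mistakes, so a quick scan for duplicate names in the fragment you are about to append can save a failed install. Sketch only, using a small inline sample:

```shell
# Scan a hosts-file fragment for duplicate hostnames (fields after the IP).
hosts='192.168.0.101 nerv01.localdomain nerv01
192.168.0.102 nerv02.localdomain nerv02
192.168.0.121 nerv11.localdomain nerv11'
dups=$(printf '%s\n' "$hosts" | awk '{for (i = 2; i <= NF; i++) print $i}' | sort | uniq -d)
if [ -z "$dups" ]; then echo "hosts fragment OK"; else echo "duplicate names: $dups"; fi
```

Run the same pipeline against the real /etc/hosts after editing it; any output from `uniq -d` means two lines claim the same name.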

On machines nerv01, nerv02, and nerv11, run the commands below.
# groupadd oper
# groupadd asmadmin
# groupadd asmdba
# groupadd asmoper
# usermod -g oinstall -G dba,oper,asmadmin,asmdba,asmoper oracle
# mkdir -p /u01/app/12.1.0.2/grid
# mkdir -p /u01/app/oracle/product/12.1.0.2/db_1
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01
# passwd oracle (set the oracle user password to: Nerv2015)

Lab 2.4 – OEL 6 Configuration

On machines nerv01, nerv02, and nerv11, change SELinux from "enforcing" to "permissive".
# vi /etc/selinux/config

On machines nerv01, nerv02, and nerv11, disable the firewall.
# chkconfig iptables off
# chkconfig ip6tables off

On machines nerv01, nerv02, and nerv11, disable NTP.
# mv /etc/ntp.conf /etc/ntp.conf.org
# reboot

Lab 2.5 – OEL 6 Configuration

On machines nerv01 and nerv02, as the oracle user, APPEND TO THE END of /home/oracle/.bash_profile the lines below.
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=nerv01.localdomain
export ORACLE_UNQNAME=ORCL
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0.2/db_1
export GRID_HOME=/u01/app/12.1.0.2/grid
export CRS_HOME=$GRID_HOME
export ORACLE_SID=ORCL1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

Lab 2.6 – OEL 6 Configuration

On machine nerv11, as the oracle user, APPEND TO THE END of /home/oracle/.bash_profile the lines below.
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=nerv11.localdomain
export ORACLE_UNQNAME=ORCL
export ORACLE_BASE=/u01/app
export ORACLE_HOME=$ORACLE_BASE/oracle/product/12.1.0.2/db_1
export GRID_HOME=/u01/app/12.1.0.2/grid
export CRS_HOME=$GRID_HOME
export ORACLE_SID=ORCL
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

Lab 2.7 – OEL 6 Configuration

Lab 3 – Storage

Hands On!

Lab 3.1 – Storage

On machines nerv09 and nerv10, create 3 partitions of 5 GB and 4 of 10 GB.

On machines nerv09 and nerv10, configure the iSCSI server.
# yum -y install scsi-target-utils
# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-01>
    backing-store /dev/sda5
    initiator-address 192.168.0.101
    initiator-address 192.168.0.102
</target>
<target iqn.2010-10.com.nervinformatica:storage.asm01-02>
    backing-store /dev/sda6
    initiator-address 192.168.0.101
    initiator-address 192.168.0.102
</target>
...

# service tgtd start
# chkconfig tgtd on
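The two stanzas above repeat, with only the target suffix and backing device changing, for all seven LUNs. A sketch that generates the whole file; the mapping of targets asm01-01..asm01-07 to /dev/sda5../dev/sda11 is an assumption extrapolated from the two stanzas shown, so adjust it to your actual partitions:

```shell
# Generate seven targets.conf stanzas following the pattern above.
conf=""
for i in 1 2 3 4 5 6 7; do
  n=$(printf '%02d' "$i")
  conf="${conf}<target iqn.2010-10.com.nervinformatica:storage.asm01-${n}>
    backing-store /dev/sda$((i + 4))
    initiator-address 192.168.0.101
    initiator-address 192.168.0.102
</target>
"
done
printf '%s' "$conf"
```

Redirect the output into /etc/tgt/targets.conf (after reviewing it) instead of typing each stanza by hand.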

Lab 3.2 – Storage

On machines nerv01, nerv02, and nerv11, enable the iSCSI Initiator package.
# chkconfig iscsid on

On machines nerv01, nerv02, and nerv11, check the disks exported by the storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201 -l

On machines nerv01, nerv02, and nerv11, leave ONLY the new disks in the /etc/iscsi/initiatorname.iscsi file.
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-01
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-02
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-03
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-04
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-05
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-06
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-07

Lab 3.3 – Storage

On machines nerv01, nerv02, and nerv11, verify the disks were added.
# fdisk -l

On machines nerv01 and nerv11, partition the new disks.
# fdisk /dev/sdb
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
...

On machine nerv02, detect the new partitions.
# partprobe /dev/sdb
...
# fdisk -l

On machines nerv01, nerv02, and nerv11, configure ASMLib.
# /etc/init.d/oracleasm configure
oracle <enter>
asmadmin <enter>
y <enter>
y <enter>
# /etc/init.d/oracleasm status

On machines nerv01 and nerv11, create the ASM disks.
# /etc/init.d/oracleasm createdisk DISK01 /dev/sdb1
# /etc/init.d/oracleasm createdisk DISK02 /dev/sdc1
# /etc/init.d/oracleasm createdisk DISK03 /dev/sdd1
# /etc/init.d/oracleasm createdisk DISK04 /dev/sde1
# /etc/init.d/oracleasm createdisk DISK05 /dev/sdf1
# /etc/init.d/oracleasm createdisk DISK06 /dev/sdg1
# /etc/init.d/oracleasm createdisk DISK07 /dev/sdh1

On machine nerv02, detect the created disks.
# /etc/init.d/oracleasm scandisks

Lab 3.4 – Storage
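The seven createdisk calls follow a fixed pattern, so they can be generated rather than typed. Sketch only, assuming the same DISKnn -> /dev/sd[b-h]1 mapping the lab uses; it prints the commands for review instead of running them:

```shell
# Print the seven oracleasm createdisk commands from the DISKnn -> sd[b-h]1 pattern.
cmds=""
i=1
for d in b c d e f g h; do
  cmds="${cmds}$(printf '/etc/init.d/oracleasm createdisk DISK%02d /dev/sd%s1' "$i" "$d")
"
  i=$((i + 1))
done
printf '%s' "$cmds"
```

Pipe the output to `sh` (as root) once you have confirmed the device letters match your own `fdisk -l` listing.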

On machines nerv01, nerv02, and nerv11, verify the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK01
# /etc/init.d/oracleasm querydisk -v -p DISK02
# /etc/init.d/oracleasm querydisk -v -p DISK03
# /etc/init.d/oracleasm querydisk -v -p DISK04
# /etc/init.d/oracleasm querydisk -v -p DISK05
# /etc/init.d/oracleasm querydisk -v -p DISK06
# /etc/init.d/oracleasm querydisk -v -p DISK07

On machines nerv01, nerv02, and nerv11, verify the disk devices.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle asmadmin 8,  33 Mar 18 08:25 DISK01
brw-rw----. 1 oracle asmadmin 8,  65 Mar 18 08:26 DISK02
brw-rw----. 1 oracle asmadmin 8,  81 Mar 18 08:26 DISK03
brw-rw----. 1 oracle asmadmin 8,  49 Mar 18 08:26 DISK04
brw-rw----. 1 oracle asmadmin 8,  97 Mar 18 08:26 DISK05
brw-rw----. 1 oracle asmadmin 8, 113 Mar 18 08:26 DISK06
brw-rw----. 1 oracle asmadmin 8,  17 Mar 18 08:26 DISK07

Lab 3.5 – Storage

Lab 4 – Grid Infrastructure

Hands On!

On machine nerv01, as the oracle user, unpack and run the Grid Infrastructure installer.
$ cd /home/oracle
$ unzip -q linuxamd64_12102_grid_1of2.zip
$ unzip -q linuxamd64_12102_grid_2of2.zip

On machines nerv01 and nerv02, install the Cluster Verification Utility.
# rpm -ivh /home/oracle/grid/rpm/cvuqdisk-1.0.9-1.rpm

On machine nerv01, start the Grid Infrastructure installation.
$ cd grid
$ ./runInstaller

Lab 4.1 – Grid Infrastructure

Lab 4.2 – Grid Infrastructure

Lab 4.3 – Grid Infrastructure

Lab 4.4 – Grid Infrastructure

Lab 4.5 – Grid Infrastructure

Lab 4.6 – Grid Infrastructure

Lab 4.7 – Grid Infrastructure

Lab 4.8 – Grid Infrastructure

Lab 4.9 – Grid Infrastructure

Lab 4.10 – Grid Infrastructure

Lab 4.11 – Grid Infrastructure

Lab 4.12 – Grid Infrastructure

Lab 4.13 – Grid Infrastructure

Lab 4.14 – Grid Infrastructure

Lab 4.15 – Grid Infrastructure

Lab 4.16 – Grid Infrastructure

Lab 4.17 – Grid Infrastructure

Lab 4.18 – Grid Infrastructure

Lab 4.19 – Grid Infrastructure

Lab 4.20 – Grid Infrastructure

Lab 4.21 – Grid Infrastructure

Lab 4.22 – Grid Infrastructure

Lab 4.23 – Grid Infrastructure

Lab 4.24 – Grid Infrastructure

Lab 4.25 – Grid Infrastructure

Lab 4.26 – Grid Infrastructure

Lab 4.27 – Grid Infrastructure

Lab 4.28 – Grid Infrastructure

Lab 4.29 – Grid Infrastructure

Lab 4.30 – Grid Infrastructure

Lab 4.31 – Grid Infrastructure

On machine nerv11, as the oracle user, unpack and run the Grid Infrastructure installer.
[oracle@nerv01 ~]$ ssh -CX oracle@nerv11
[oracle@nerv11 ~]$ unzip -q linuxamd64_12102_grid_1of2.zip
[oracle@nerv11 ~]$ unzip -q linuxamd64_12102_grid_2of2.zip

On machine nerv11, install the Cluster Verification Utility.
# rpm -ivh /home/oracle/grid/rpm/cvuqdisk-1.0.9-1.rpm

On machine nerv11, start the Grid Infrastructure installation.
$ cd grid
$ ./runInstaller

Lab 4.32 – Grid Infrastructure

Lab 4.33 – Grid Infrastructure

Lab 4.34 – Grid Infrastructure

Lab 4.35 – Grid Infrastructure

Lab 4.36 – Grid Infrastructure

Lab 4.37 – Grid Infrastructure

Lab 4.38 – Grid Infrastructure

Lab 4.39 – Grid Infrastructure

Lab 4.40 – Grid Infrastructure

Lab 4.41 – Grid Infrastructure

Lab 4.42 – Grid Infrastructure

Lab 4.43 – Grid Infrastructure

Lab 4.44 – Grid Infrastructure

Lab 4.45 – Grid Infrastructure

Lab 4.46 – Grid Infrastructure

Lab 4.47 – Grid Infrastructure

Lab 4.48 – Grid Infrastructure

Lab 5 – Oracle Database Software

Hands On!

On machine nerv01, as the oracle user, unpack and run the Oracle Database Software installer.
$ cd /home/oracle
$ unzip -q linuxamd64_12102_database_1of2.zip
$ unzip -q linuxamd64_12102_database_2of2.zip
$ cd database
$ ./runInstaller

Lab 5.1 – Oracle Database Software

Lab 5.2 – Oracle Database Software

Lab 5.3 – Oracle Database Software

Lab 5.4 – Oracle Database Software

Lab 5.5 – Oracle Database Software

Lab 5.6 – Oracle Database Software

Lab 5.7 – Oracle Database Software

Lab 5.8 – Oracle Database Software

Lab 5.9 – Oracle Database Software

Lab 5.10 – Oracle Database Software

Lab 5.11 – Oracle Database Software

Lab 5.12 – Oracle Database Software

Lab 5.13 – Oracle Database Software

Lab 5.14 – Oracle Database Software

Lab 5.15 – Oracle Database Software

Lab 5.16 – Oracle Database Software

On machine nerv11, as the oracle user, unpack and run the Oracle Database Software installer.
$ cd /home/oracle
$ unzip -q linuxamd64_12102_database_1of2.zip
$ unzip -q linuxamd64_12102_database_2of2.zip
$ cd database
$ ./runInstaller

Lab 5.17 – Oracle Database Software

Lab 5.18 – Oracle Database Software

Lab 5.19 – Oracle Database Software

Lab 5.20 – Oracle Database Software

Lab 5.21 – Oracle Database Software

Lab 5.22 – Oracle Database Software

Lab 5.23 – Oracle Database Software

Lab 5.24 – Oracle Database Software

Lab 5.25 – Oracle Database Software

Lab 5.26 – Oracle Database Software

Lab 5.27 – Oracle Database Software

Lab 5.28 – Oracle Database Software

Lab 5.29 – Oracle Database Software

Lab 6 – ASM

Hands On!

On machine nerv01, configure the remaining ASM Disk Groups.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
$ sqlplus / AS SYSASM
SQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY DISK 'ORCL:DISK04', 'ORCL:DISK05';
SQL> CREATE DISKGROUP FRA NORMAL REDUNDANCY DISK 'ORCL:DISK06', 'ORCL:DISK07';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';
$ srvctl start diskgroup -g DATA -n nerv02
$ srvctl enable diskgroup -g DATA -n nerv02
$ srvctl start diskgroup -g FRA -n nerv02
$ srvctl enable diskgroup -g FRA -n nerv02

Lab 6.1 – ASM

113

Na máquina nerv11, configure os outros Disk Groups do ASM.$ export ORACLE_HOME=$GRID_HOME$ export ORACLE_SID=+ASM$ sqlplus / AS SYSASMSQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY DISK 'ORCL:DISK04', 'ORCL:DISK05';SQL> CREATE DISKGROUP FRA NORMAL REDUNDANCY DISK 'ORCL:DISK06', 'ORCL:DISK07';SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';

113

Lab 6.2 – ASM

Lab 7 – Oracle Database

Hands On!

Lab 7.1 – Oracle Database
On machine nerv01, run the DBCA.

Lab 7.2 – Oracle Database

Lab 7.3 – Oracle Database

Lab 7.4 – Oracle Database

Lab 7.5 – Oracle Database

Lab 7.6 – Oracle Database

Lab 7.7 – Oracle Database

Lab 7.8 – Oracle Database

Lab 7.9 – Oracle Database

Lab 7.10 – Oracle Database

Lab 7.11 – Oracle Database

Lab 7.12 – Oracle Database

Lab 7.13 – Oracle Database

Lab 7.14 – Oracle Database

Lab 7.15 – Oracle Database

Lab 8 – RAC + Data Guard

Hands On!

Lab 8.1 – Data Guard

On machines nerv01, nerv02, and nerv11, set the ORACLE_HOME tnsnames.ora as below.
PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac01-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCL)
    )
  )

DR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nerv11.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DR)
    )
  )

Lab 8.2 – Data Guard

On machines nerv01, nerv02, and nerv11, append the lines below to the end of the $GRID_HOME/network/admin/listener.ora file.
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=ORCL_DGMGRL)
      (ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/db_1)
      (SID_NAME=ORCL1)
    )
  )

On machines nerv01, nerv02, and nerv11, test the new LISTENER configuration.
$ export ORACLE_HOME=$GRID_HOME
$ $GRID_HOME/bin/lsnrctl status
$ srvctl stop listener
$ srvctl start listener
$ $GRID_HOME/bin/lsnrctl status

Lab 8.3 – Data Guard

On machine nerv01, enable the Data Guard prerequisites.
$ export ORACLE_HOME=$ORACLE_BASE/product/12.1.0.2/db_1
$ export ORACLE_SID=ORCL1
$ srvctl stop database -d ORCL
$ srvctl start instance -d ORCL -i ORCL1 -o mount
$ sqlplus / AS SYSDBA
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA';
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;
$ srvctl start instance -d ORCL -i ORCL2

On machine nerv01, change the SNAPSHOT CONTROLFILE location.
$ rman target /
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+FRA/snapcf.f';

Lab 8.4 – Data Guard

On machine nerv01, create a STANDBY CONTROLFILE.
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/stb.ctl';

On machine nerv01, take a backup of the database and archived logs.
RMAN> BACKUP DATABASE FORMAT '/home/oracle/Backup_Banco_%U.rman';
RMAN> BACKUP ARCHIVELOG ALL FORMAT '/home/oracle/Backup_Archives_%U.rman';

On machine nerv01, copy the PASSWORD FILE to machine nerv11.
ASMCMD [+] > ls -l DATA/ORCL/PASSWORD/
ASMCMD [+] > pwcopy DATA/ORCL/PASSWORD/pwdorcl.123 /home/oracle/orapwORCL
$ scp /home/oracle/orapwORCL nerv11:$ORACLE_HOME/dbs/orapwORCL

On machine nerv01, copy the PFILE to machine nerv11.
SQL> CREATE PFILE='/home/oracle/initORCL.ora' FROM SPFILE;
$ scp /home/oracle/initORCL.ora nerv11:$ORACLE_HOME/dbs/initORCL.ora

On machine nerv01, copy the STANDBY CONTROLFILE to machine nerv11.
$ scp /home/oracle/stb.ctl nerv11:/home/oracle/

On machine nerv01, copy the BACKUP to machine nerv11.
$ scp /home/oracle/Backup_*.rman nerv11:/home/oracle/

Lab 8.5 – Data Guard

On machine nerv11, remove the following lines from the initORCL.ora file.
ORCL1.*
ORCL2.*
*.cluster_database=true

On machine nerv11, add the following line to the initORCL.ora file.
*.undo_tablespace='UNDOTBS1'

On machine nerv11, create the ADUMP directory referenced in initORCL.ora.
$ mkdir -p /u01/app/oracle/admin/ORCL/adump

On machine nerv11, create an SPFILE from the initORCL.ora file.
SQL> CREATE SPFILE FROM PFILE;
SQL> STARTUP NOMOUNT;

Lab 8.6 – Data Guard

On machine nerv11, change the DB_UNIQUE_NAME parameter.
SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='DR' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP NOMOUNT;

On machine nerv11, restore the CONTROLFILE.
RMAN> RESTORE CONTROLFILE FROM '/home/oracle/stb.ctl';
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;

On machine nerv11, fix the RMAN metadata.
RMAN> CROSSCHECK BACKUP;
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE NOPROMPT EXPIRED BACKUP;
RMAN> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
RMAN> CATALOG START WITH '/home/oracle/Backup';

On machine nerv11, restore the database.
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;

Lab 8.7 – Data Guard

On machine nerv11, enable the Data Guard prerequisites.
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA';
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE FLASHBACK ON;

On machine nerv11, add the database to the Grid.
$ srvctl add database -d ORCL -oraclehome /u01/app/oracle/product/12.1.0.2/db_1
$ srvctl start database -d ORCL
$ srvctl modify database -db ORCL -pwfile /u01/app/oracle/product/12.1.0.2/db_1/dbs/orapwORCL

Test PASSWORD FILE authentication between the three machines.
[oracle@nerv01 ~]$ sqlplus SYS/Nerv2015@DR AS SYSDBA
[oracle@nerv02 ~]$ sqlplus SYS/Nerv2015@DR AS SYSDBA
[oracle@nerv11 ~]$ sqlplus SYS/Nerv2015@PROD AS SYSDBA

On machines nerv01 and nerv11, configure the Data Guard Broker.
SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1 = '+FRA/DR1.DAT' SCOPE=BOTH;
SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2 = '+FRA/DR2.DAT' SCOPE=BOTH;
SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE;

Lab 8.8 – Data Guard

On machine nerv01, create the Data Guard Broker configuration.
$ dgmgrl SYS/Nerv2015@PROD
DGMGRL> CREATE CONFIGURATION 'DRSolution' AS PRIMARY DATABASE IS ORCL CONNECT IDENTIFIER IS PROD;

On machine nerv01, add machine nerv11 to the configuration.
DGMGRL> ADD DATABASE DR AS CONNECT IDENTIFIER IS DR;

On the three machines, watch the Alert Log.

On machine nerv01, enable the configuration.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;

Lab 8.9 – Data Guard

On machine nerv11, create STANDBY LOGFILEs.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

On machine nerv01, create STANDBY LOGFILEs.
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
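The counts above follow the usual standby redo log sizing rule: per thread, create one more standby log group than there are online log groups. A sketch of the arithmetic; the 4 online groups per thread and 2 primary threads are assumptions consistent with the five groups per thread created above:

```shell
# Standby redo log sizing: (online groups per thread + 1) * number of threads.
online_groups_per_thread=4
threads=2
standby_per_thread=$((online_groups_per_thread + 1))
total=$((standby_per_thread * threads))
echo "Standby logfile groups for the standby of this primary: ${total}"
```

Query V$LOG on your own primary to get the real group and thread counts before applying the formula.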

On machine nerv01, inspect the details of a database.
DGMGRL> SHOW DATABASE VERBOSE ORCL;
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'ArchiveLagTarget'=600;
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'StandbyFileManagement'=AUTO;
DGMGRL> SHOW DATABASE ORCL 'ArchiveLagTarget';
DGMGRL> SHOW DATABASE ORCL 'StandbyFileManagement';

On machine nerv01, change the Protection Mode.
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT DATABASE DR SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;

On machine nerv01, verify that the Protection Mode was changed.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE VERBOSE ORCL;
DGMGRL> SHOW DATABASE VERBOSE DR;
DGMGRL> SHOW INSTANCE VERBOSE "ORCL1" ON DATABASE ORCL;
DGMGRL> SHOW INSTANCE VERBOSE "ORCL2" ON DATABASE ORCL;
DGMGRL> SHOW INSTANCE VERBOSE "ORCL" ON DATABASE DR;

Lab 8.10 – Data Guard

On machine nerv01, execute a SWITCHOVER to machine nerv11, watching the Alert Logs throughout.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO DR;

On machine nerv11, execute a SWITCHBACK to machine nerv01, watching the Alert Logs throughout.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO ORCL;

Lab 8.11 – Data Guard

Shut down machines nerv01 and nerv02.

Execute a FAILOVER to machine nerv11.
$ dgmgrl SYS/Nerv2015@DR
DGMGRL> FAILOVER TO DR;
DGMGRL> SHOW CONFIGURATION;

Power on machines nerv01 and nerv02, and on machine nerv11 execute the REINSTATE.
DGMGRL> REINSTATE DATABASE ORCL;

On machine nerv11, execute the SWITCHOVER.
DGMGRL> SWITCHOVER TO ORCL;

Lab 8.12 – Data Guard

Lab 9 – Fast-Start Failover

Hands On!

Lights out administration

On machine nerv01, configure Fast-Start Failover.
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT DATABASE DR SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverLagLimit = 600;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverAutoReinstate = TRUE;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverPmyShutdown = TRUE;
DGMGRL> EDIT DATABASE ORCL SET PROPERTY FastStartFailoverTarget=DR;
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW FAST_START FAILOVER;

Lab 9.1: Fast-Start Failover

On machine observer-rac01, start the observer.
$ dgmgrl -logfile /home/oracle/observer.log SYS/Nerv2015@DR
DGMGRL> START OBSERVER;

Shut down machines nerv01 and nerv02, and wait for the FAILOVER.

Power on machines nerv01 and nerv02.

Wait for the REINSTATE.

Execute the SWITCHOVER.

Lab 9.2: Fast-Start Failover

On machine nerv01, create two Services as below.
$GRID_HOME/bin/srvctl add service -d ORCL -r ORCL1,ORCL2 -s OLTP -l PRIMARY -w 1 -z 10
$GRID_HOME/bin/srvctl add service -d ORCL -r ORCL1,ORCL2 -s OLAP -l PHYSICAL_STANDBY -w 1 -z 10

On machine nerv11, create two Services as below.
$GRID_HOME/bin/srvctl add service -d ORCL -s OLTP -l PRIMARY -w 1 -z 10
$GRID_HOME/bin/srvctl add service -d ORCL -s OLAP -l PHYSICAL_STANDBY -w 1 -z 10

On machine nerv01, start the two Services.
$GRID_HOME/bin/srvctl start service -d ORCL -s OLTP
$GRID_HOME/bin/srvctl start service -d ORCL -s OLAP

On machine nerv01, generate some Archived Redo Logs and wait for them to replicate to DR.

On machine nerv01, stop the OLAP Service.
$GRID_HOME/bin/srvctl stop service -d ORCL -s OLAP

On machine nerv11, start the OLAP Service.
$GRID_HOME/bin/srvctl start service -d ORCL -s OLAP

Lab 9.3: Fast-Start Failover

Lab 9.4: Fast-Start Failover

On machine observer-rac01, add these two entries to tnsnames.ora, and test the connection after a new Failover.

OLTP_RAC01 =
  (DESCRIPTION=
    (LOAD_BALANCE=OFF)
    (FAILOVER=ON)
    (ADDRESS=(PROTOCOL=TCP)(HOST=rac01-scan)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=nerv11)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=OLTP))
  )

OLAP_RAC01 =
  (DESCRIPTION=
    (LOAD_BALANCE=OFF)
    (FAILOVER=ON)
    (ADDRESS=(PROTOCOL=TCP)(HOST=nerv11)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=rac01-scan)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=OLAP))
  )

Lab 10 – ACFS

Hands On!

• Mirroring
• Striping
• Replication
• Snapshots
• High Availability

ACFS Advantages

• Recent product (11gR2)
• Complex configuration
• Kernel dependency
• Dependency on Grid Infrastructure components
• Still unsupported on UEK R3 without a patch

Bug ID 16318126
Oracle ASM Cluster File System (ACFS) is currently not supported for use with UEK R3.
http://docs.oracle.com/cd/E37670_01/E51472/E51472.pdf

ACFS Disadvantages

Na máquina nerv09, crie 1 diretório.# mkdir /shared_ogg

On machine nerv09, add to the /etc/exports file:
/shared_ogg *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

On machine nerv09, restart the NFS Server:
# yum -y install nfs-utils
# service rpcbind start; service nfs start; chkconfig rpcbind on; chkconfig nfs on

On machines nerv01 and nerv02, add the line below to the /etc/fstab file.
nerv09:/shared_ogg /u01/shared_ogg nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
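
As a quick sanity check (an illustrative helper, not part of the lab), the sketch below verifies that the fstab entry above carries the NFS mount options commonly required for shared Golden Gate files, such as hard mounts and disabled attribute caching:

```shell
# Illustrative check: confirm the fstab entry above includes the NFS mount
# options commonly required for shared Golden Gate files (hard, actimeo=0,
# NFSv3). The entry string is copied from the lab.
entry='nerv09:/shared_ogg /u01/shared_ogg nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0'
opts=$(echo "$entry" | awk '{print $4}')
for required in hard actimeo=0 vers=3; do
  case ",$opts," in
    *",$required,"*) echo "OK: $required" ;;
    *)               echo "MISSING: $required" ;;
  esac
done
```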

On machine nerv01, execute:
# mkdir /u01/shared_ogg
# mount /u01/shared_ogg
# mkdir /u01/shared_ogg/rac01
# chown -R oracle:oinstall /u01/shared_ogg/rac01

On machine nerv02, execute:
# mkdir /u01/shared_ogg
# mount /u01/shared_ogg


Lab 10.1: NFS

153

Lab 11 – Golden Gate Unidirectional

Hands On !


154

Lab 11.1: Golden Gate Unidirectional

On machine nerv11, create a new database.
$ $ORACLE_HOME/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc \
  -gdbName BI -sid BI \
  -sysPassword Nerv2015 -systemPassword Nerv2015 \
  -storageType ASM -asmsnmpPassword Nerv2015 \
  -diskGroupName DATA -recoveryAreaDestination FRA \
  -nodelist nerv11 \
  -characterSet WE8ISO8859P15 -listeners LISTENER \
  -memoryPercentage 20 -sampleSchema true -emConfiguration NONE \
  -continueOnNonFatalErrors false

On machine nerv11, put the BI database in ARCHIVELOG mode.
$ export ORACLE_SID=BI
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA';
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE OPEN;

Lab 11.2: Golden Gate Installation

On machines nerv01 and nerv11, start the Golden Gate installation.
[oracle@nerv01 ~]$ unzip -q fbo_ggs_Linux_x64_shiphome.zip
[oracle@nerv01 ~]$ cd fbo_ggs_Linux_x64_shiphome/Disk1
[oracle@nerv01 Disk1]$ ./runInstaller

155

Lab 11.3: Golden Gate Installation

156

Lab 11.4: Golden Gate Installation

157

On machine nerv01, install into /u01/shared_ogg/rac01.
On machine nerv11, install into /u01/app/oracle/product/12.1.0.2/ogg.

Lab 11.5: Golden Gate Installation

158

Lab 11.6: Golden Gate Installation

159

160

Lab 11.7: Golden Gate Unidirectional

On machine nerv01, check that the MANAGER is running.
$ cd /u01/shared_ogg/rac01
$ ./ggsci
GGSCI> info all

On machine nerv11, check that the MANAGER is running.
$ cd /u01/app/oracle/product/12.1.0.2/ogg
$ ./ggsci
GGSCI> info all

161

Lab 11.8: Golden Gate Unidirectional

On machine nerv01, enable the Golden Gate prerequisites.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE;

On machine nerv01, create the user for Golden Gate.
$ cd /u01/shared_ogg/rac01/
$ $ORACLE_HOME/bin/sqlplus / AS SYSDBA
SQL> CREATE TABLESPACE OGG;
SQL> CREATE USER OGG IDENTIFIED BY Nerv2015 DEFAULT TABLESPACE OGG TEMPORARY TABLESPACE TEMP;
SQL> GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO OGG;
SQL> GRANT EXECUTE ON UTL_FILE TO OGG;
SQL> @marker_setup.sql
OGG <enter>
SQL> @ddl_setup.sql
OGG <enter>
SQL> @role_setup.sql
OGG <enter>
SQL> @ddl_enable.sql

162

Lab 11.9: Golden Gate Unidirectional

On machine nerv11, enable the Golden Gate prerequisites.
$ export ORACLE_SID=BI
$ $ORACLE_HOME/bin/sqlplus / AS SYSDBA
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE;

On machine nerv11, create the user for Golden Gate.
$ cd /u01/app/oracle/product/12.1.0.2/ogg
$ $ORACLE_HOME/bin/sqlplus / AS SYSDBA
SQL> CREATE TABLESPACE OGG;
SQL> CREATE USER OGG IDENTIFIED BY Nerv2015 DEFAULT TABLESPACE OGG TEMPORARY TABLESPACE TEMP;
SQL> GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO OGG;
SQL> GRANT EXECUTE ON UTL_FILE TO OGG;
SQL> @marker_setup.sql
OGG <enter>
SQL> @ddl_setup.sql
OGG <enter>
SQL> @role_setup.sql
OGG <enter>
SQL> @ddl_enable.sql

163

Lab 11.10: Golden Gate Unidirectional

On machine nerv01, add the EXTRACT process.
GGSCI> add extract ext1, tranlog, THREADS 2, begin now
GGSCI> add exttrail /u01/app/oracle/product/12.1.0.2/ogg/dirdat/lt, extract ext1

On machine nerv01, edit the EXTRACT process parameter file.
GGSCI> edit params ext1
extract ext1
userid OGG@ORCL, password Nerv2015
rmthost nerv11, mgrport 7809
rmttrail /u01/app/oracle/product/12.1.0.2/ogg/dirdat/lt
TRANLOGOPTIONS EXCLUDEUSER OGG ASMUSER SYS@ASM, ASMPASSWORD Nerv2015
ddl include mapped objname SCOTT.*;
table SCOTT.*;

164

Lab 11.11: Golden Gate Unidirectional

On machine nerv11, edit the GLOBALS parameter file.
GGSCI> edit params ./GLOBALS
GGSCHEMA OGG
CHECKPOINTTABLE OGG.checkpoint

On machine nerv11, create the CHECKPOINT table.
GGSCI> dblogin userid OGG
Nerv2015 <enter>
GGSCI> add checkpointtable OGG.checkpoint

On machine nerv11, add the REPLICAT process.
GGSCI> add replicat rep1, exttrail /u01/app/oracle/product/12.1.0.2/ogg/dirdat/lt, checkpointtable OGG.checkpoint

On machine nerv11, edit the REPLICAT process parameter file.
GGSCI> edit params rep1
replicat rep1
ASSUMETARGETDEFS
userid OGG@BI, password Nerv2015
discardfile /u01/app/oracle/product/12.1.0.2/ogg/dircrd/rep1_discard.txt, append, megabytes 10
DDL
map SCOTT.*, target SCOTT.*;

165

Lab 11.12: Golden Gate Unidirectional

On machines nerv01, nerv02 and nerv11, add the ASM entry to the ORACLE_HOME tnsnames.ora.
ASM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nerv01.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = +ASM)
    )
  )

On machines nerv01 and nerv11, enable the SCOTT user.
SQL> ALTER USER SCOTT IDENTIFIED BY TIGER ACCOUNT UNLOCK;

166

Lab 11.13: Golden Gate Unidirectional

On machine nerv01, follow the error log.
$ tail -f /u01/shared_ogg/rac01/ggserr.log

On machine nerv11, follow the error log.
$ tail -f /u01/app/oracle/product/12.1.0.2/ogg/ggserr.log

On machine nerv01, start the EXTRACT process.
GGSCI> info all
GGSCI> start extract ext1
GGSCI> info all

On machine nerv11, start the REPLICAT process.
GGSCI> info all
GGSCI> start replicat rep1
GGSCI> info all

Test replication of the SCOTT user's data between machines nerv01 and nerv11.

Test replication of the SCOTT user's data between machines nerv02 and nerv11.

167

Lab 12 – Golden Gate Bidirectional

Hands On !


168

Lab 12.1: Golden Gate Bidirectional

On machine nerv11, check that the MANAGER and the REPLICAT are running.
$ cd /u01/app/oracle/product/12.1.0.2/ogg
$ ./ggsci
GGSCI> info all

On machine nerv11, add the EXTRACT process.
GGSCI> add extract ext2, tranlog, THREADS 1, begin now
GGSCI> add exttrail /u01/shared_ogg/rac01/dirdat/lt, extract ext2

On machine nerv11, edit the EXTRACT process parameter file.
GGSCI> edit params ext2
extract ext2
userid OGG@BI, password Nerv2015
rmthost nerv01-vip, mgrport 7809
TRANLOGOPTIONS EXCLUDEUSER OGG ASMUSER SYS@ASM, ASMPASSWORD Nerv2015
rmttrail /u01/shared_ogg/rac01/dirdat/lt
ddl include mapped objname SCOTT.*;
table SCOTT.*;

169

Lab 12.2: Golden Gate Bidirectional

On machine nerv01, edit the GLOBALS parameter file.
GGSCI> edit params ./GLOBALS
GGSCHEMA OGG
CHECKPOINTTABLE OGG.checkpoint

On machine nerv01, create the CHECKPOINT table.
GGSCI> dblogin userid OGG
Nerv2015 <enter>
GGSCI> add checkpointtable OGG.checkpoint

On machine nerv01, add the REPLICAT process.
GGSCI> add replicat rep2, exttrail /u01/shared_ogg/rac01/dirdat/lt, checkpointtable OGG.checkpoint

On machine nerv01, edit the REPLICAT process parameter file.
GGSCI> edit params rep2
replicat rep2
ASSUMETARGETDEFS
userid OGG@ORCL, password Nerv2015
discardfile /u01/shared_ogg/rac01/dircrd/rep1_discard.txt, append, megabytes 10
DDL
map SCOTT.*, target SCOTT.*;

170

Lab 12.3: Golden Gate Bidirectional

On machine nerv01, follow the error log.
$ tail -f /u01/shared_ogg/rac01/ggserr.log

On machine nerv11, follow the error log.
$ tail -f /u01/app/oracle/product/12.1.0.2/ogg/ggserr.log

On machine nerv11, start the EXTRACT process.
GGSCI> info all
GGSCI> start extract ext2
GGSCI> info all

On machine nerv01, start the REPLICAT process.
GGSCI> info all
GGSCI> start replicat rep2
GGSCI> info all

Test replication of the SCOTT user's data between machines nerv01 and nerv11.

Test replication of the SCOTT user's data between machines nerv02 and nerv11.

Test replication of the SCOTT user's data between machines nerv11 and nerv01.

171

Lab 13 – Golden Gate High Availability

Hands On !


172

Lab 13.1: Golden Gate HA

On machine nerv01, create a VIP and a Resource for Golden Gate.
# /u01/app/12.1.0.2/grid/bin/appvipcfg create -network=1 -ip=192.168.0.141 -vipname=rac01-ogg-vip -user=root

# /u01/app/12.1.0.2/grid/bin/crsctl start resource rac01-ogg-vip -n nerv01

# vi /u01/shared_ogg/rac01/ogg_action.sh

# chmod +x /u01/shared_ogg/rac01/ogg_action.sh

# chown oracle:oinstall /u01/shared_ogg/rac01/ogg_action.sh

# /u01/app/12.1.0.2/grid/bin/crsctl add resource ogg -type cluster_resource -attr "ACTION_SCRIPT=/u01/shared_ogg/rac01/ogg_action.sh, CHECK_INTERVAL=30, START_DEPENDENCIES='hard(rac01-ogg-vip,ora.orcl.db) pullup(rac01-ogg-vip)', STOP_DEPENDENCIES='hard(rac01-ogg-vip)'"

# /u01/app/12.1.0.2/grid/bin/crsctl setperm resource rac01-ogg-vip -u user:oracle:r-x

# /u01/app/12.1.0.2/grid/bin/crsctl setperm resource ogg -o oracle
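
The content of ogg_action.sh is not reproduced in the deck. Below is a minimal, hypothetical sketch of such a Clusterware action script: Clusterware invokes it with start, stop, check or clean, and exit status 0 means success. GGSCI_CMD is a stand-in for the real ggsci binary so the control logic can be exercised without a Golden Gate installation.

```shell
# Hypothetical sketch of /u01/shared_ogg/rac01/ogg_action.sh (the deck does
# not show the real file). Clusterware calls the action script with
# start|stop|check|clean; exit status 0 means success.
# GGSCI_CMD is a stand-in so the logic can be tested without Golden Gate.
GGSCI_CMD=${GGSCI_CMD:-/u01/shared_ogg/rac01/ggsci}

ogg_action() {
  case "$1" in
    start)      echo "start manager"  | "$GGSCI_CMD" ;;
    stop|clean) echo "stop manager !" | "$GGSCI_CMD" ;;
    check)      echo "info manager"   | "$GGSCI_CMD" | grep -q "RUNNING" ;;
    *)          return 1 ;;
  esac
}

# In the real script the last lines would be:
#   ogg_action "$1"
#   exit $?
```

The check entry point is what CHECK_INTERVAL=30 drives: Clusterware calls it every 30 seconds and restarts (or fails over) the resource when it returns non-zero.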

173

Lab 13.2: Golden Gate HA

On machine nerv01, check and start the Golden Gate Resource.
$ /u01/shared_ogg/rac01/ogg_action.sh stop
$ $GRID_HOME/bin/crsctl status res ogg
$ $GRID_HOME/bin/crsctl start res ogg
$ $GRID_HOME/bin/crsctl status res ogg
$ $GRID_HOME/bin/crsctl stop res ogg
$ $GRID_HOME/bin/crsctl status res ogg
$ $GRID_HOME/bin/crsctl start res ogg
$ $GRID_HOME/bin/crsctl status res ogg

Restart machine nerv01, and check whether Golden Gate is started on machine nerv02.

174

Lab 14 – RAC Extended

Hands On !


175

Lab 14.0: RAC Extended

On machine nerv01, disable replication via Data Guard.
DGMGRL> DISABLE FAST_START FAILOVER;
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXPERFORMANCE;
DGMGRL> REMOVE CONFIGURATION;
SQL> ALTER SYSTEM SET DG_BROKER_START=FALSE;

On machine nerv01, remove the SERVICEs used for replication via Data Guard.
$ /u01/app/12.1.0.2/grid/bin/srvctl stop service -d ORCL -s OLTP
$ /u01/app/12.1.0.2/grid/bin/srvctl stop service -d ORCL -s OLAP
$ /u01/app/12.1.0.2/grid/bin/srvctl disable service -d ORCL -s OLTP
$ /u01/app/12.1.0.2/grid/bin/srvctl disable service -d ORCL -s OLAP
$ /u01/app/12.1.0.2/grid/bin/srvctl remove service -d ORCL -s OLAP
$ /u01/app/12.1.0.2/grid/bin/srvctl remove service -d ORCL -s OLTP

On machine nerv01, disable replication via Golden Gate.
# /u01/app/12.1.0.2/grid/bin/crsctl stop resource ogg
# /u01/app/12.1.0.2/grid/bin/crsctl delete resource ogg
# /u01/app/12.1.0.2/grid/bin/crsctl stop resource rac01-ogg-vip
# /u01/app/12.1.0.2/grid/bin/appvipcfg delete -vipname=rac01-ogg-vip

176

Lab 14.1: RAC Extended

On machine nerv11, run Labs 1 and 2 again.

On machine nerv10, wipe the contents of the iSCSI disks.
# dd if=/dev/zero of=/dev/sda5 bs=512 count=10000
...

On machine nerv09, change the iSCSI Server to allow machine nerv11 access to the disks.
# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-01>
    backing-store /dev/sda5
    initiator-address 192.168.0.101
    initiator-address 192.168.0.102
    initiator-address 192.168.0.121
</target>
...
# service tgtd restart

Likewise, on machine nerv10, change the iSCSI Server to allow machines nerv01 and nerv02 access to the disks.

177


Lab 14.2: RAC Extended

On machines nerv01, nerv02 and nerv11, check the disks exported by the Storage.
# chkconfig iscsid on
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201 -l
# iscsiadm -m discovery -t sendtargets -p 192.168.0.202 -l

On machines nerv01, nerv02 and nerv11, add the new disks to the /etc/iscsi/initiatorname.iscsi file.
...
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-01
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-02
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-03
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-04
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-05
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-06
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-07

178


Lab 14.3: RAC Extended

On machines nerv01, nerv02 and nerv11, check that the disks were configured locally.
# fdisk -l

On machine nerv01, partition the new disks.
# fdisk /dev/sdi (and sdj, sdk, sdl, sdm, sdn, sdo)
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
...

179


Lab 14.4: RAC Extended

On machines nerv02 and nerv11, run detection of the new disks.
# partprobe /dev/sdi
# partprobe /dev/sdj
# partprobe /dev/sdk
# partprobe /dev/sdl
# partprobe /dev/sdm
# partprobe /dev/sdn
# partprobe /dev/sdo

On machine nerv11, configure ASMLib.
# /etc/init.d/oracleasm configure
oracle <enter>
asmadmin <enter>
y <enter>
y <enter>
# /etc/init.d/oracleasm status

180

On machine nerv01, create the new ASM disks.
# /etc/init.d/oracleasm createdisk DISK08 /dev/sdi1
# /etc/init.d/oracleasm createdisk DISK09 /dev/sdj1
# /etc/init.d/oracleasm createdisk DISK10 /dev/sdk1
# /etc/init.d/oracleasm createdisk DISK11 /dev/sdl1
# /etc/init.d/oracleasm createdisk DISK12 /dev/sdm1
# /etc/init.d/oracleasm createdisk DISK13 /dev/sdn1
# /etc/init.d/oracleasm createdisk DISK14 /dev/sdo1

On machines nerv02 and nerv11, run detection of the created disks.
# /etc/init.d/oracleasm scandisks

On machines nerv01, nerv02 and nerv11, check that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK08
...

On machines nerv01, nerv02 and nerv11, check that the disks are correct.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle oinstall 8, 17 Mar 3 08:40 DISK00
brw-rw----. 1 oracle oinstall 8, 33 Mar 3 08:40 DISK01
...


Lab 14.5: RAC Extended

181

On machines nerv01, nerv02 and nerv11, remove the /home/oracle/.ssh directory.
$ rm -rf .ssh

On machine nerv01, reconfigure passwordless SSH.
[oracle@nerv01 ~]$ ssh-keygen -t rsa
<enter>
<enter>
<enter>
[oracle@nerv01 ~]$ ssh oracle@nerv02 mkdir -p .ssh
[oracle@nerv01 ~]$ ssh oracle@nerv11 mkdir -p .ssh
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv11 'cat >> .ssh/authorized_keys'

Lab 14.6: RAC Extended

182

On machine nerv02, reconfigure passwordless SSH.
[oracle@nerv02 ~]$ ssh-keygen -t rsa
<enter>
<enter>
<enter>
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv11 'cat >> .ssh/authorized_keys'

On machine nerv11, reconfigure passwordless SSH.
[oracle@nerv11 ~]$ ssh-keygen -t rsa
<enter>
<enter>
<enter>
[oracle@nerv11 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv11 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv11 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv11 'cat >> .ssh/authorized_keys'

Lab 14.7: RAC Extended

183

On machine nerv01, run the Grid installation onto machine nerv11.
$ cd $GRID_HOME/addnode
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={nerv11}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={nerv11-vip}"

On machine nerv11, as the root user, run the following scripts.
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/12.1.0.2/grid/root.sh

On machine nerv01, run the Oracle installation onto machine nerv11.
$ cd $ORACLE_HOME/addnode
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={nerv11}"

On machine nerv11, as the root user, run the script below.
# /u01/app/oracle/product/12.1.0.2/db_1/root.sh

On machine nerv01, add the instance.
$ $GRID_HOME/bin/srvctl add instance -d ORCL -i ORCL3 -n nerv11

Lab 14.8: RAC Extended

184

On machine nerv01, complete the node addition.
SQL> ALTER SYSTEM SET INSTANCE_NUMBER=3 SID='ORCL3' SCOPE=SPFILE;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 3;
SQL> CREATE UNDO TABLESPACE UNDOTBS3;
SQL> ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS3 SID='ORCL3' SCOPE=SPFILE;

$ $GRID_HOME/bin/srvctl start instance -d ORCL -i ORCL3

Lab 14.9: RAC Extended

185

On machine nerv01, prepare for the creation of the new FAILGROUPs.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> ALTER SYSTEM SET ASM_POWER_LIMIT = 11;
SQL> ALTER DISKGROUP CONFIG REBALANCE POWER 11;
SQL> ALTER DISKGROUP DATA REBALANCE POWER 11;
SQL> ALTER DISKGROUP FRA REBALANCE POWER 11;

Lab 14.10: RAC Extended

186

On machine nerv01, create the new FAILGROUPs.
SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

SQL> ALTER DISKGROUP CONFIG ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK08';
SQL> ALTER DISKGROUP CONFIG ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK09';
SQL> ALTER DISKGROUP CONFIG ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK10';
SQL> ALTER DISKGROUP CONFIG DROP DISK DISK01;
SQL> ALTER DISKGROUP CONFIG DROP DISK DISK02;
SQL> SELECT * FROM V$ASM_OPERATION;

SQL> ALTER DISKGROUP CONFIG ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK01';
SQL> ALTER DISKGROUP CONFIG ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK02';
SQL> ALTER DISKGROUP CONFIG DROP DISK DISK03;
SQL> SELECT * FROM V$ASM_OPERATION;
SQL> ALTER DISKGROUP CONFIG ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK03';
SQL> SELECT * FROM V$ASM_OPERATION;

SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;
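
Each ALTER DISKGROUP ... ADD/DROP above returns immediately while the rebalance runs in the background, which is why the lab keeps querying V$ASM_OPERATION between steps. A hedged sketch of a polling helper is shown below; COUNT_OPS is a hypothetical wrapper around running "SELECT COUNT(*) FROM V$ASM_OPERATION;" through sqlplus, so the loop itself can be tested without a database.

```shell
# Sketch: wait until the background ASM rebalance finishes before issuing
# the next ADD/DROP step. COUNT_OPS is a hypothetical stand-in for a
# sqlplus query returning the row count of V$ASM_OPERATION.
COUNT_OPS=${COUNT_OPS:-count_asm_operations}   # site-specific query wrapper

wait_rebalance() {
  while [ "$($COUNT_OPS)" != "0" ]; do
    sleep 5
  done
  echo "rebalance complete"
}

# Example with a stubbed query: COUNT_OPS="echo 0" wait_rebalance
```

Dropping a disk before its data has been rebalanced away from a disk you are about to drop next can leave the diskgroup short on redundancy, so waiting between steps is more than cosmetic.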

Lab 14.11: RAC Extended

187

On machine nerv01, create the new FAILGROUPs.
SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK11';
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK12';
SQL> ALTER DISKGROUP DATA DROP DISK DISK04;
SQL> ALTER DISKGROUP DATA DROP DISK DISK05;
SQL> SELECT * FROM V$ASM_OPERATION;

SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK04';
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK05';
SQL> ALTER DISKGROUP DATA DROP DISK DISK11;
SQL> SELECT * FROM V$ASM_OPERATION;
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK11';

SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

Lab 14.12: RAC Extended

188

On machine nerv01, create the new FAILGROUPs.
SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK13';
SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK14';
SQL> ALTER DISKGROUP FRA DROP DISK DISK06;
SQL> ALTER DISKGROUP FRA DROP DISK DISK07;
SQL> SELECT * FROM V$ASM_OPERATION;

SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK06';
SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPA DISK 'ORCL:DISK07';
SQL> ALTER DISKGROUP FRA DROP DISK DISK13;
SQL> SELECT * FROM V$ASM_OPERATION;
SQL> ALTER DISKGROUP FRA ADD FAILGROUP FAILGROUPB DISK 'ORCL:DISK13';

SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;

Lab 14.13: RAC Extended

189

On machine nerv01, select the preferred FAILGROUPs for reads.
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.FAILGROUPA' SCOPE=BOTH SID='+ASM1';
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.FAILGROUPA' SCOPE=BOTH SID='+ASM2';
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.FAILGROUPB' SCOPE=BOTH SID='+ASM3';

Lab 14.14: RAC Extended

190

Lab 15 – RAC Extended Quorum

Hands On !


191

On machine nerv15, create a directory.
# mkdir /shared_config

On machine nerv15, add to the /etc/exports file:
/shared_config *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

On machine nerv15, start the NFS Server:
# service nfs start
# chkconfig nfs on


Lab 15.1: RAC Extended Quorum

192


Lab 15.2: RAC Extended Quorum

On machines nerv01, nerv02 and nerv11, add the line below to the /etc/fstab file.
nerv15:/shared_config /u01/shared_config15 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0

On machines nerv01, nerv02 and nerv11, execute:
# mkdir /u01/shared_config15
# mount /u01/shared_config15

On machine nerv01, execute:
# mkdir /u01/shared_config15/rac01
# chown -R oracle:oinstall /u01/shared_config15/rac01

193


Lab 15.3: RAC Extended Quorum

On machines nerv09 and nerv10, create one 1 GB partition, without formatting it.

On machines nerv09 and nerv10, add the disk to the iSCSI server.
# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-08>
    backing-store /dev/sda33
    initiator-address 192.168.0.101
    initiator-address 192.168.0.102
    initiator-address 192.168.0.121
...

# service tgtd restart

194


Lab 15.4: RAC Extended Quorum

On machines nerv01, nerv02 and nerv11, check the disks exported by the Storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201 -l
# iscsiadm -m discovery -t sendtargets -p 192.168.0.202 -l

On machines nerv01, nerv02 and nerv11, add the new disk to the /etc/iscsi/initiatorname.iscsi file.
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-08
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-08

195


Lab 15.5: RAC Extended Quorum

On machines nerv01, nerv02 and nerv11, check that the disks were configured locally.
# fdisk -l

On machine nerv01, partition the new disks.
# fdisk /dev/sdp
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>

# fdisk /dev/sdq
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>

196

On machines nerv02 and nerv11, run detection of the new disks.
# partprobe /dev/sdp
# partprobe /dev/sdq

On machine nerv01, create the new ASM disks.
# /etc/init.d/oracleasm createdisk DISK15 /dev/sdp1
# /etc/init.d/oracleasm createdisk DISK16 /dev/sdq1

On machines nerv02 and nerv11, run detection of the created disks.
# /etc/init.d/oracleasm scandisks

On machines nerv01, nerv02 and nerv11, check that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK15
# /etc/init.d/oracleasm querydisk -v -p DISK16

On machines nerv01, nerv02 and nerv11, check that the disks are correct.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle oinstall 8, 17 Mar 3 08:40 DISK00
...


Lab 15.6: RAC Extended Quorum

197


Lab 15.8: RAC Extended Quorum

On machine nerv01, create the FAILGROUP for the Voting Disk.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> CREATE DISKGROUP VD NORMAL REDUNDANCY
  FAILGROUP FG1 DISK 'ORCL:DISK15'
  FAILGROUP FG2 DISK 'ORCL:DISK16'
  ATTRIBUTE 'compatible.asm' = '12.1.0.0.0';

On machine nerv01, create a file for the Voting Disk.
# dd if=/dev/zero of=/u01/shared_config15/rac01/asm01 bs=10M count=58

On machine nerv01, change the disk's permissions.
# chown -R oracle:oinstall /u01/shared_config15/rac01/

On machine nerv01, change the disk discovery path.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> ALTER SYSTEM SET asm_diskstring='ORCL:*', '/u01/shared_config15/rac01/*' SID='*';

198


Lab 15.9: RAC Extended Quorum

On machine nerv01, add the QUORUM FAILGROUP.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> ALTER DISKGROUP VD ADD QUORUM FAILGROUP FG3 DISK '/u01/shared_config15/rac01/asm01';
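
With FG1, FG2 and the quorum failgroup FG3, the VD diskgroup holds three voting disks, and CSS stays up only while a strict majority of them is online. A small arithmetic sketch of why the third, NFS-hosted copy matters:

```shell
# With NORMAL redundancy plus a quorum failgroup, the VD diskgroup holds
# 3 voting disks (FG1, FG2, FG3). CSS needs a strict majority online, so
# the cluster survives the loss of any single failgroup (e.g. one site).
voting_disks=3
majority=$(( voting_disks / 2 + 1 ))
echo "voting disks: $voting_disks, must stay online: $majority"
```

With only FG1 and FG2 (one per site), losing either site would leave a single voting disk, short of a majority; the quorum copy on nerv15 breaks that tie.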

On machine nerv01, enable the new DISKGROUP on the other machines.
$GRID_HOME/bin/srvctl start diskgroup -g VD -n nerv02
$GRID_HOME/bin/srvctl enable diskgroup -g VD -n nerv02
$GRID_HOME/bin/srvctl start diskgroup -g VD -n nerv11
$GRID_HOME/bin/srvctl enable diskgroup -g VD -n nerv11

199


Lab 15.10: RAC Extended Quorum

On machine nerv01, move the OCR to the new DISKGROUP.
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# /u01/app/12.1.0.2/grid/bin/ocrconfig -add +VD
# /u01/app/12.1.0.2/grid/bin/ocrcheck
# /u01/app/12.1.0.2/grid/bin/ocrconfig -delete +CONFIG
# /u01/app/12.1.0.2/grid/bin/ocrcheck

On machine nerv01, move the Voting Disk to the new DISKGROUP.
# /u01/app/12.1.0.2/grid/bin/crsctl query css votedisk
# /u01/app/12.1.0.2/grid/bin/crsctl replace votedisk +VD
# /u01/app/12.1.0.2/grid/bin/crsctl query css votedisk

200


Lab 15.11: RAC Extended Quorum

On machine nerv01, configure the allowed DOWNTIME window.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1

SQL> ALTER DISKGROUP CONFIG SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
SQL> ALTER DISKGROUP VD SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';

SQL> ALTER DISKGROUP CONFIG SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';
SQL> ALTER DISKGROUP VD SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.2.0';

SQL> ALTER DISKGROUP CONFIG SET ATTRIBUTE 'disk_repair_time' = '30m';
SQL> ALTER DISKGROUP VD SET ATTRIBUTE 'disk_repair_time' = '30m';
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'disk_repair_time' = '30m';
SQL> ALTER DISKGROUP FRA SET ATTRIBUTE 'disk_repair_time' = '30m';

Shut down the Production Storage, and verify that the Production and DR Sites continue to function.