Replicating RAC database using RMAN at Remote Server

Posted by Sagar Patil

Here I am duplicating an 11g RAC database from one RHEL server to another using the old 10g method.
I could have used the 11g “DUPLICATE TARGET DATABASE TO TARGET_DB FROM ACTIVE DATABASE” syntax, which doesn’t need a previous RMAN backup at the source, but it may not be a good option for large databases or for sites with narrow network bandwidth.
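
For comparison, here is a minimal sketch of that active-duplication path (the connect identifiers and the use of NOFILENAMECHECK are illustrative assumptions, not part of this build); the auxiliary instance must already be started NOMOUNT with a password file in place:

rman target sys@PRIMARY auxiliary sys@TARGET_DB
RMAN> DUPLICATE TARGET DATABASE TO TARGET_DB FROM ACTIVE DATABASE NOFILENAMECHECK;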

Assumptions Made:

– RAC Clusterware and Database binaries are installed at Destination Nodes
– Clusterware services “crsctl check crs” reported active

PRIMARY site Tasks (Ora01a1,Ora01a2):

  • Create a FULL RMAN backup (a sketch follows this list)
  • Copy the backup files from the PRIMARY server to the new server
  • Create a pfile from the spfile at the source RAC
  • Copy init.ora from $Primary_Server:ORACLE_HOME/dbs to $New_Server:ORACLE_HOME/dbs
  • Copy the password file from $Primary_Server:ORACLE_HOME/dbs to $New_Server:ORACLE_HOME/dbs
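
For reference, a minimal sketch of the kind of full backup assumed in the first step (the channel count, format masks, and the /mnt/data/rman_backups/bkup staging directory are assumptions inferred from the piece names cataloged later; the c-* pieces imply CONTROLFILE AUTOBACKUP is ON):

rman target / nocatalog <<EOF
RUN {
  ALLOCATE CHANNEL d1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL d2 DEVICE TYPE DISK;
  BACKUP DATABASE FORMAT '/mnt/data/rman_backups/bkup/db_bk_u%u_s%s_p%p_t%t.bkp'
    PLUS ARCHIVELOG FORMAT '/mnt/data/rman_backups/bkup/db_bk_u%u_s%s_p%p_t%t.bkp';
  BACKUP CURRENT CONTROLFILE FORMAT '/mnt/data/rman_backups/bkup/ctl_bk_u%u_s%s_p%p_t%t.bkp';
}
EOF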

[oracle@Ora01a1 RAC1]$ scp Ora01a1BKUP.tgz oracle@Node1:/mnt/data
Warning: Permanently added (RSA) to the list of known hosts.
Ora01a1BKUP.tgz                                                          100%  274MB  11.4MB/s   00:24

SQL> show parameter pfile
NAME                                 TYPE        VALUE
———————————— ———– ——————————
spfile                               string      /mnt/data/oradata/primary/spfileRAC.ora

SQL> show parameter spfile
NAME                                 TYPE        VALUE
———————————— ———– ——————————
spfile                               string      /mnt/data/oradata/primary/spfileRAC.ora

SQL> create pfile='/mnt/data/oradata/primary/init.ora' from spfile;
File created.

[oracle@Ora01a1 RAC]$ scp init.ora oracle@Node1:/mnt/data/rman_backups/bkup/init.ora
init.ora                                                                 100% 1612     1.6KB/s   00:00
[oracle@Ora01a1 dbs]$ scp /mnt/data/oradata/primary/orapwRAC oracle@Node1:/mnt/data/rman_backups/bkup
orapwRAC                                                                 100% 1536     1.5KB/s   00:00

Destination Site Tasks (Node1,Node2)
Create the required directories for bdump/adump, as well as the database mount volumes.

[oracle@Node1]$ grep /mnt initRAC.ora
*.control_files='/mnt/data/oradata/primary/control01.ctl','/mnt/data/oradata/primary/control02.ctl'
*.db_recovery_file_dest='/mnt/logs/oradata/primary/fast_recovery_area'
*.log_archive_dest_1='LOCATION=/mnt/logs/oradata/primary/arch'

[oracle@Node1]$ mkdir -p /mnt/data/oradata/primary/
[oracle@Node1]$ mkdir -p /mnt/logs/oradata/primary/fast_recovery_area
[oracle@Node1]$ mkdir -p /mnt/logs/oradata/primary/arch

“/opt” is a local volume on each node, so create these directories on both RAC nodes:
[oracle@Node1]$ grep /opt initRAC.ora
*.audit_file_dest='/opt/app/oracle/admin/primary/adump'
*.diagnostic_dest='/opt/app/oracle'

[oracle@Node1]$ mkdir -p /opt/app/oracle/admin/primary/adump
[oracle@Node1]$ mkdir -p /opt/app/oracle

[oracle@Node2]$ mkdir -p /opt/app/oracle/admin/primary/adump
[oracle@Node2]$ mkdir -p /opt/app/oracle

Under 11g, background traces are kept under “$ORACLE_BASE/diag/rdbms”; if required, create the necessary directories there.
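
For example (a hypothetical pre-creation; the ADR normally builds this tree itself on first instance startup, and the db_unique_name/SID path elements below are assumptions):

[oracle@Node1]$ mkdir -p /opt/app/oracle/diag/rdbms/rac/RAC1/trace
[oracle@Node2]$ mkdir -p /opt/app/oracle/diag/rdbms/rac/RAC2/trace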

Modify the init.ora file ($ORACLE_HOME/dbs/init.ora) and amend parameters as needed. I had to comment out the “remote_listener” parameter because the server names at the destination are different.
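
A one-liner of the kind I mean (the file path is an assumption; verify the parameter spelling against your own copy first):

[oracle@Node1]$ sed -i 's/^\*\.remote_listener/#&/' $ORACLE_HOME/dbs/initRAC.ora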

Copy init.ora to $ORACLE_HOME/dbs on both nodes (Node1, Node2).

[oracle@Node1 dbs]$ cp initRAC.ora initRAC1.ora

[oracle@Node1 dbs]$ echo $ORACLE_SID
RAC1
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 9152860160 bytes
Fixed Size                  2234056 bytes
Variable Size            6945769784 bytes
Database Buffers         2181038080 bytes
Redo Buffers               23818240 bytes

[oracle@Node1 dbs]$ rman target / nocatalog
connected to target database: RAC (not mounted)
using target database control file instead of recovery catalog
RMAN> restore controlfile from '/mnt/data/rman_backups/bkup/c-4020163152-20110405-01';
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=588 instance=RAC1 device type=DISK
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
output file name=/mnt/data/oradata/primary/control01.ctl
output file name=/mnt/data/oradata/primary/control02.ctl
…
Finished restore at 05-APR-11

Verify that the control files were restored to the right location:

[oracle@Node2 RAC]$ pwd
/mnt/data/oradata/RAC

[oracle@Node2 RAC]$ ls -lrt
-rw-r—–  1 oracle oinstall 22986752 Apr  5 16:35 control01.ctl
-rw-r—–  1 oracle oinstall 22986752 Apr  5 16:35 control02.ctl

RMAN> alter database mount;
database mounted
RMAN> RESTORE DATABASE;
Starting restore at 05-APR-11
Starting implicit crosscheck backup at 05-APR-11
allocated channel: ORA_DISK_1
allocated channel: ORA_DISK_2
******* This returned errors
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 04/05/2011 16:36:54
RMAN-06026: some targets not found – aborting restore
RMAN-06023: no backup or copy of datafile 3 found to restore

RMAN was not able to locate the backup because the backup pieces were copied to a different location and are not registered in the target control file. Let’s catalog the backup pieces that were shipped from the primary database.

I have multiple copies of the backup files, so I used:
RMAN> CATALOG START WITH '/mnt/data/rman_backups/bkup/' NOPROMPT;
List of Cataloged Files
=======================
File Name: /mnt/data/rman_backups/bkup/c-4020163152-20110405-04
File Name: /mnt/data/rman_backups/bkup/c-4020163152-20110405-05
File Name: /mnt/data/rman_backups/bkup/db_bk_ub8m91cg7_s3432_p1_t747680263.bkp
File Name: /mnt/data/rman_backups/bkup/db_bk_ub9m91cg8_s3433_p1_t747680264.bkp
File Name: /mnt/data/rman_backups/bkup/db_bk_ubam91cg9_s3434_p1_t747680265.bkp
File Name: /mnt/data/rman_backups/bkup/db_bk_ubcm91cgi_s3436_p1_t747680274.bkp
File Name: /mnt/data/rman_backups/bkup/db_bk_ubdm91cgi_s3437_p1_t747680274.bkp
File Name: /mnt/data/rman_backups/bkup/db_bk_ubbm91cgi_s3435_p1_t747680274.bkp
File Name: /mnt/data/rman_backups/bkup/db_bk_ubem91ck0_s3438_p1_t747680384.bkp
File Name: /mnt/data/rman_backups/bkup/db_bk_ubfm91ck0_s3439_p1_t747680384.bkp
File Name: /mnt/data/rman_backups/bkup/ctl_bk_ubhm91ck3_s3441_p1_t747680387.bkp

RMAN> RESTORE DATABASE;
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /mnt/data/oradata/primary/system01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /mnt/data/oradata/primary/undotbs02.dbf
channel ORA_DISK_1: reading from backup piece /mnt/data/rman_backups/bkup/db_bk_ubdm91cgi_s3437_p1_t747680274.bkp
.
..
channel ORA_DISK_1: piece handle=/mnt/data/rman_backups/bkup/db_bk_ubdm91cgi_s3437_p1_t747680274.bkp tag=TAG20110405T165753
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:26
channel ORA_DISK_2: piece handle=/mnt/data/rman_backups/bkup/db_bk_ubcm91cgi_s3436_p1_t747680274.bkp tag=TAG20110405T165753
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:01:26
channel ORA_DISK_3: piece handle=/mnt/data/rman_backups/bkup/db_bk_ubbm91cgi_s3435_p1_t747680274.bkp tag=TAG20110405T165753
channel ORA_DISK_3: restored backup piece 1
channel ORA_DISK_3: restore complete, elapsed time: 00:01:56
Finished restore at 05-APR-11

RMAN> recover database;
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=3289
channel ORA_DISK_1: reading from backup piece /mnt/data/rman_backups/bkup/db_bk_ubem91ck0_s3438_p1_t747680384.bkp
channel ORA_DISK_2: starting archived log restore to default destination
channel ORA_DISK_2: restoring archived log
archived log thread=2 sequence=3484
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/05/2011 17:17:46
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 3290 and starting SCN of 246447604

The RMAN-06054 here simply means recovery has applied every archived log available in the backups; the restored database can now be opened with RESETLOGS.

RMAN> ALTER DATABASE OPEN RESETLOGS;
database opened

Shut down and restart instance RAC1:

SQL> shutdown abort;
ORACLE instance shut down.

set sqlprompt '&_CONNECT_IDENTIFIER > '
RAC1> startup;
ORACLE instance started.
Total System Global Area 9152860160 bytes
Fixed Size                  2234056 bytes
Variable Size            6945769784 bytes
Database Buffers         2181038080 bytes
Redo Buffers               23818240 bytes
Database mounted.
Database opened.

RAC1> set linesize 200;
RAC1> set pagesize 20;
RAC1> select inst_id,substr(member,1,35) from gv$logfile;
INST_ID SUBSTR(MEMBER,1,35)
———- ——————————————————————————————————————————————–
1 /mnt/data/oradata/primary/redo02.log
1 /mnt/data/oradata/primary/redo01.log
1 /mnt/data/oradata/primary/redo03.log
1 /mnt/data/oradata/primary/redo04.log

I can see that the instance 2 redo log files are not listed, so start the RAC2 instance on Node2.

[oracle@Node2 dbs]$ echo $ORACLE_SID
RAC2
SQL> set sqlprompt '&_CONNECT_IDENTIFIER > '
RAC2> startup;
ORACLE instance started.
Total System Global Area 9152860160 bytes
Fixed Size                  2234056 bytes
Variable Size            6610225464 bytes
Database Buffers         2516582400 bytes
Redo Buffers               23818240 bytes
Database mounted.
Database opened.

I can now see the redo files for instance 1 as well as instance 2.

select inst_id,substr(member,1,35) from gv$logfile;
INST_ID SUBSTR(MEMBER,1,35)
———- ——————————————————————————————————————————————–
2 /mnt/data/oradata/primary/redo02.log
2 /mnt/data/oradata/primary/redo01.log
2 /mnt/data/oradata/primary/redo03.log
2 /mnt/data/oradata/primary/redo04.log
1 /mnt/data/oradata/primary/redo02.log
1 /mnt/data/oradata/primary/redo01.log
1 /mnt/data/oradata/primary/redo03.log
1 /mnt/data/oradata/primary/redo04.log
8 rows selected.

I will perform log switches to confirm archive files are created at the archive destination “/mnt/logs/oradata/primary/arch”.

RAC1 > alter system switch logfile;
System altered.
RAC1 > /
System altered.
RAC2 > alter system switch logfile;
System altered.
RAC2 > /
System altered.
[oracle@Node2 arch]$ pwd
/mnt/logs/oradata/primary/arch
[oracle@Node2 arch]$ ls -lrt
total 5348
-rw-r—–  1 oracle oinstall  777216 Apr  6 10:00 1_10_747681489.arc
-rw-r—–  1 oracle oinstall    4096 Apr  6 10:00 1_11_747681489.arc
-rw-r—–  1 oracle oinstall 4667392 Apr  6 10:00 2_11_747681489.arc
-rw-r—–  1 oracle oinstall   56832 Apr  6 10:01 2_12_747681489.arc

We have some background jobs in this database, so I will suspend them on both instances for some time:

RAC1 > alter system set job_queue_processes=0;
System altered.

RAC2 > alter system set job_queue_processes=0;
System altered.

Check the alert logs on Node1/Node2 for any reported errors before registering the database with CRS.

RAC1> create spfile from pfile;
File created.

[oracle@Node1 dbs]$ pwd
/opt/app/oracle/product/11.2/db_1/dbs
-rw-r—–  1 oracle oinstall     3584 Apr  6 10:20 spfileRAC1.ora

Move the spfile to a shared clustered location accessible to both nodes/instances RAC1/RAC2.

cp spfileRAC1.ora /mnt/data/oradata/primary/spfileRAC.ora

[oracle@(RAC1 or RAC2 ) ]$ df -k
/dev/mapper/System-Opt 20314748  14636172   4630208  76% /opt   — Local Storage
NETAPP_Server:/vol/prod_data 52428800  33919456  18509344  65% /mnt/data — Clustered Storage

[oracle@RAC1 PROD]$ ls -l /mnt/data/oradata/primary/spfile*
-rw-r—– 1 oracle oinstall 7680 May 10 15:18 spfileRAC.ora

Link the individual init files on each node to the shared spfile:

[oracle@RAC1]$ cd $ORACLE_HOME/dbs

[oracle@RAC1 dbs]$ cat initRAC1.ora
SPFILE='/mnt/data/oradata/primary/spfileRAC.ora'

[oracle@RAC2 dbs]$ cat initRAC2.ora
SPFILE='/mnt/data/oradata/primary/spfileRAC.ora'

Registering  database with CRS

[oracle@Node1 dbs]$ srvctl add database -d RAC -o /opt/app/oracle/product/11.2/db_1 -p  /mnt/data/oradata/primary/spfileRAC.ora
[oracle@Node1 dbs]$ srvctl add instance -d RAC -i RAC1 -n Node1
[oracle@Node1 dbs]$ srvctl add instance -d RAC -i RAC2 -n Node2
[oracle@Node2 arch]$ crsstat.sh  | grep RAC
ora.RAC.db                                 OFFLINE    OFFLINE
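
crsstat.sh is a local helper script rather than a stock Oracle tool; a minimal stand-in that produces the same one-line-per-resource view from crs_stat output would be:

#!/bin/bash
# crsstat.sh (sketch): print resource name, target and state on one line each
crs_stat | awk -F= '/^NAME/{n=$2} /^TARGET/{t=$2} /^STATE/{print n, t, $2}'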

Before using services, we must verify that the cluster configuration is correct.

[oracle@Node1 dbs]$ srvctl config database -d RAC
Database unique name: RAC
Database name:
Oracle home: /opt/app/oracle/product/11.2/db_1
Oracle user: oracle
Spfile: /mnt/data/oradata/primary/spfileRAC.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RAC
Database instances: RAC1,RAC2
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is administrator managed

[oracle@Node1 dbs]$ srvctl start database -d RAC
PRCR-1079 : Failed to start resource ora.RAC.db
CRS-5017: The resource action “ora.RAC.db start” encountered the following error:
ORA-29760: instance_number parameter not specified

Solution of the Problem
srvctl is case sensitive, so the instance and database names defined in the spfile/pfile must match, in case, the names registered in the OCR and the names used in srvctl commands. I made a mistake here and registered the instance names in lowercase (“rac1/rac2”) when creating the services.
Before applying the fix, be sure ORACLE_SID reflects the correct case so that the instance can be accessed using SQL*Plus.
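
A quick way to compare the case recorded in each place (a sketch; both commands exist in 11.2):

[oracle@Node1 dbs]$ srvctl config database -d RAC | grep -i instances   # names as registered in the OCR
[oracle@Node1 dbs]$ echo $ORACLE_SID                                    # must match the OCR case exactly
SQL> show parameter instance_name                                       # name the running instance uses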

I will have to remove the services created earlier and add them back with the instance names in UPPERCASE.

[oracle@Node1 dbs]$ srvctl remove database -d RAC

Remove the database RAC? (y/[n]) y
[oracle@Node1 dbs]$ srvctl remove instance -d RAC -i RAC1
PRCD-1120 : The resource for database RAC could not be found.
PRCR-1001 : Resource ora.RAC.db does not exist
[oracle@Node1 dbs]$ srvctl remove instance -d RAC -i RAC2
PRCD-1120 : The resource for database RAC could not be found.

[oracle@Node1 dbs]$ srvctl add database -d RAC -o /opt/app/oracle/product/11.2/db_1 -p /mnt/data/oradata/primary/spfileRAC.ora
[oracle@Node1 dbs]$ srvctl add instance -d RAC -i RAC1 -n Node1
[oracle@Node1 dbs]$ srvctl add instance -d RAC -i RAC2 -n Node2
[oracle@Node2 arch]$ crsstat.sh  | grep RAC
ora.RAC.db                                 OFFLINE OFFLINE

Moment of truth: start the database.

[oracle@Node1 dbs]$ srvctl start database -d RAC
[oracle@Node1 dbs]$ crsstat.sh  | grep RAC
ora.RAC.db                                 ONLINE ONLINE on Node1

[oracle@Node1 ~]$ export ORACLE_SID=RAC1
SQL> set sqlprompt '&_CONNECT_IDENTIFIER > '
RAC1 > archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /mnt/logs/oradata/primary/arch
Oldest online log sequence     18
Next log sequence to archive   19
Current log sequence           19

SQL> set sqlprompt '&_CONNECT_IDENTIFIER > '
RAC2 > archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /mnt/logs/oradata/primary/arch
Oldest online log sequence     20
Next log sequence to archive   21
Current log sequence           21

Finally, have a look at the alert log for any reported issues.

To test session failover, I will create a SQL*Plus connection and see whether it gets migrated to the other node when the instance goes down.

SQL> select machine from v$session where rownum <5;
MACHINE
—————————————————————-
Node1
Node1
Node1
Node1

Node1 RAC1> shutdown abort;
ORACLE instance shut down.

SQL> select machine from v$session where rownum <5;
MACHINE
—————————————————————-
Node2
Node2
Node2
Node2

Cleaning up a machine with previous Oracle 11g Clusterware/RAC install

Posted by Sagar Patil

Here I will be deleting everything from a two-node 11g RAC cluster.

  1. Use “crs_stop -all” to stop all services on RAC nodes
  2. Use DBCA GUI to delete all RAC databases from nodes
  3. Use netca to delete LISTENER config
  4. Deinstall Grid Infrastructure from Server
  5. Deinstall Oracle database software from Server

Steps 1-3 are self-explanatory

4. Deinstall Grid Infrastructure from the server:

[oracle@RAC2 backup]$ $GRID_HOME/deinstall/deinstall

Checking for required files and bootstrapping …
Please wait …
Location of logs /opt/app/oracle/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
Install check configuration START
Checking for existence of the Oracle home location /opt/app/grid/product/11.2/grid_1
Oracle Home type selected for de-install is: CRS
Oracle Base selected for de-install is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oracle/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/grid/product/11.2/grid_1
The following nodes are part of this cluster: RAC1,RAC2
Install check configuration END
Skipping Windows and .NET products configuration check
Checking Windows and .NET products configuration END
Traces log file: /opt/app/oracle/oraInventory/logs//crsdc.log
Network Configuration check config START
Network de-configuration trace file location: /opt/app/oracle/oraInventory/logs/netdc_check2011-03-31_10-14-05-AM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /opt/app/oracle/oraInventory/logs/asmcadc_check2011-03-31_10-14-06-AM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]:
######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/grid/product/11.2/grid_1
The cluster node(s) on which the Oracle home de-installation will be performed are:RAC1,RAC2
Oracle Home selected for de-install is: /opt/app/grid/product/11.2/grid_1
Inventory Location where the Oracle home registered is: /opt/app/oracle/oraInventory
Skipping Windows and .NET products configuration check
ASM was not detected in the Oracle Home
Do you want to continue (y – yes, n – no)? [n]: y
A log of this session will be written to: ‘/opt/app/oracle/oraInventory/logs/deinstall_deconfig2011-03-31_10-14-02-AM.out’
Any error messages from this session will be written to: ‘/opt/app/oracle/oraInventory/logs/deinstall_deconfig2011-03-31_10-14-02-AM.err’

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /opt/app/oracle/oraInventory/logs/asmcadc_clean2011-03-31_10-14-44-AM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /opt/app/oracle/oraInventory/logs/netdc_clean2011-03-31_10-14-44-AM.log
De-configuring Naming Methods configuration file on all nodes…
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes…
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes…
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes…
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
—————————————->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node.
Run the following command as the root user or the administrator on node "RAC1":
/tmp/deinstall2011-03-31_10-13-56AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-13-56AM/perl/lib -I/tmp/deinstall2011-03-31_10-13-56AM/crs/install /tmp/deinstall2011-03-31_10-13-56AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-03-31_10-13-56AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "RAC2":
/tmp/deinstall2011-03-31_10-13-56AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-13-56AM/perl/lib -I/tmp/deinstall2011-03-31_10-13-56AM/crs/install /tmp/deinstall2011-03-31_10-13-56AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-03-31_10-13-56AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands
<—————————————-

Let’s run these commands on the nodes.

[oracle@RAC1 app]$ su -
Password:
[root@RAC1 ~]# /tmp/deinstall2011-03-31_10-13-56AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-13-56AM/perl/lib -I/tmp/deinstall2011-03-31_10-13-56AM/crs/install /tmp/deinstall2011-03-31_10-13-56AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-03-31_10-13-56AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
[root@RAC1 ~]# /tmp/deinstall2011-03-31_10-22-37AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-22-37AM/perl/lib -I/tmp/deinstall2011-03-31_10-22-37AM/crs/install /tmp/deinstall2011-03-31_10-22-37AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-03-31_10-22-37AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2011-03-31_10-22-37AM/response/deinstall_Ora11g_gridinfrahome1.rsp
Network exists: 1/192.168.31.0/255.255.255.0/bond0, type static
VIP exists: /RAC1-vip/192.168.31.21/192.168.31.0/255.255.255.0/bond0, hosting node RAC1
VIP exists: /RAC2-vip/192.168.31.23/192.168.31.0/255.255.255.0/bond0, hosting node RAC2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
ACFS-9200: Supported
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.crsd’ on ‘RAC1’
CRS-2677: Stop of ‘ora.crsd’ on ‘RAC1’ succeeded
CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.crf’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘RAC1’
CRS-2677: Stop of ‘ora.crf’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.mdnsd’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.evmd’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.ctssd’ on ‘RAC1’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘RAC1’
CRS-2677: Stop of ‘ora.cssd’ on ‘RAC1’ succeeded
CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.diskmon’ on ‘RAC1’
CRS-2677: Stop of ‘ora.diskmon’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.gipcd’ on ‘RAC1’ succeeded
CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘RAC1’
CRS-2677: Stop of ‘ora.gpnpd’ on ‘RAC1’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘RAC1’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

… continue as below once the above commands have completed successfully.

Removing Windows and .NET products configuration END
Oracle Universal Installer clean START
Detach Oracle home ‘/opt/app/grid/product/11.2/grid_1’ from the central inventory on the local node : Done
Failed to delete the directory ‘/opt/app/grid/product/11.2/grid_1’. The directory is in use.
Delete directory ‘/opt/app/grid/product/11.2/grid_1’ on the local node : Failed <<<<
The Oracle Base directory ‘/opt/app/oracle’ will not be removed on local node. The directory is in use by Oracle Home ‘/opt/app/oracle/product/11.2/db_1’.
The Oracle Base directory ‘/opt/app/oracle’ will not be removed on local node. The directory is in use by central inventory.
Detach Oracle home ‘/opt/app/grid/product/11.2/grid_1’ from the central inventory on the remote nodes ‘RAC1’ : Done
Delete directory ‘/opt/app/grid/product/11.2/grid_1’ on the remote nodes ‘RAC1’ : Done
The Oracle Base directory ‘/opt/app/oracle’ will not be removed on node ‘RAC1’. The directory is in use by Oracle Home ‘/opt/app/oracle/product/11.2/db_1’.
The Oracle Base directory ‘/opt/app/oracle’ will not be removed on node ‘RAC1’. The directory is in use by central inventory.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
Oracle install clean START
Clean install operation removing temporary directory ‘/tmp/deinstall2011-03-31_10-22-37AM’ on node ‘RAC2’
Clean install operation removing temporary directory ‘/tmp/deinstall2011-03-31_10-22-37AM’ on node ‘RAC1’
Oracle install clean END
######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Oracle Clusterware is stopped and successfully de-configured on node “RAC2”
Oracle Clusterware is stopped and successfully de-configured on node “RAC1”
Oracle Clusterware is stopped and de-configured successfully.
Skipping Windows and .NET products configuration clean
Successfully detached Oracle home ‘/opt/app/grid/product/11.2/grid_1’ from the central inventory on the local node.
Failed to delete directory ‘/opt/app/grid/product/11.2/grid_1’ on the local node.
Successfully detached Oracle home ‘/opt/app/grid/product/11.2/grid_1’ from the central inventory on the remote nodes ‘RAC1’.
Successfully deleted directory ‘/opt/app/grid/product/11.2/grid_1’ on the remote nodes ‘RAC1’.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[oracle@RAC2 11.2]$ cd $GRID_HOME
[oracle@RAC2 grid_1]$ pwd
/opt/app/grid/product/11.2/grid_1
[oracle@RAC2 grid_1]$ ls -lrt
total 0

Oracle Clusterware was cleanly removed from $CRS_HOME/$GRID_HOME. Let’s proceed with the next step.

5. Deinstall Oracle database software from Server

Note: Always use the Oracle Universal Installer to remove Oracle software. Do not delete any Oracle home directories without first using the Installer to remove the software.

[oracle@RAC2 11.2]$ pwd
/opt/app/oracle/product/11.2
[oracle@RAC2 11.2]$ du db_1/
4095784 db_1/

Start the Installer as follows:
[oracle@RAC2 11.2]$ $ORACLE_HOME/oui/bin/runInstaller
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-03-31_10-37-33AM. Please wait …[oracle@RAC2 11.2]$ Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home de-installation will be performed are:RAC1,RAC2
Oracle Home selected for de-install is: /opt/app/oracle/product/11.2/db_1
Inventory Location where the Oracle home registered is: /opt/app/oracle/oraInventory
Skipping Windows and .NET products configuration check
Following RAC listener(s) will be de-configured: LISTENER
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
RAC1 : Oracle Home exists with CCR directory, but CCR is not configured
RAC2 : Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y – yes, n – no)? [n]:

……………………………………….  You will see lots of messages

####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Skipping Windows and .NET products configuration clean
Successfully detached Oracle home ‘/opt/app/oracle/product/11.2/db_1’ from the central inventory on the local node.
Successfully deleted directory ‘/opt/app/oracle/product/11.2/db_1’ on the local node.
Successfully detached Oracle home ‘/opt/app/oracle/product/11.2/db_1’ from the central inventory on the remote nodes ‘RAC2’.
Successfully deleted directory ‘/opt/app/oracle/product/11.2/db_1’ on the remote nodes ‘RAC2’.
Oracle Universal Installer cleanup completed with errors.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############

Let’s go to $ORACLE_HOME and see whether any files remain.

[oracle@RAC1 app]$ cd $ORACLE_HOME
-bash: cd: /opt/app/oracle/product/11.2/db_1: No such file or directory
[oracle@RAC2 product]$ pwd
/opt/app/oracle/product
[oracle@RAC2 product]$ du 11.2/
4       11.2/
(clearly no files available here)

10g RAC Install under RHEL/OEL 4.5

Posted by Sagar Patil

1. Objectives
2. System Configuration
2.1 Machine Configuration
2.2 External/Shared Storage
2.3 Kernel Parameters
3. Oracle Software Configuration
3.1 Directory Structure
3.2 Database Layout
3.3 Redo Logs
3.4 Controlfiles
4. Oracle Pre-Installation Tasks
4.1 Installing Redhat
4.2 Network Configuration
4.3 Copy Oracle 10.2.0.1 software onto server
4.4 Check installed packages
4.5 validate script
4.6 Download ASM packages
4.7 Download OCFS packages
4.8 Creating Required Operating System Groups and Users
4.9 Oracle required directory creation
4.10 Verifying That the User nobody Exists
4.11 Configuring SSH on Cluster Member Nodes for oracle
4.12 Configuring SSH on Cluster Member Nodes for root
4.13 VNC setup
4.14 Kernel parameters
4.15 Verifying Hangcheck-timer Module on Kernel 2.6
4.16 Oracle user limits
4.17 Installing the cvuqdisk Package for Linux
4.18 Disk Partitioning
4.19 Checking the Network Setup with CVU
4.20 Checking the Hardware and Operating System Setup with CVU
4.21 Checking the Operating System Requirements with CVU
4.22 Verifying Shared Storage
4.23 Verifying the Clusterware Requirements with CVU
4.24 ASM package install
4.25 OCFS package install
4.26 Disable SELinux
4.27 OCFS2 Configuration
4.28 OCFS2 File system format
4.29 OCFS2 File system mount
5. Installation
5.1 CRS install
5.2 ASM Install
5.3 Install Database Software
5.4 Create RAC Database
6. Scripts and profile files
6.1 .bash_profile rac01
6.2 .bash_profile rac02
7. RAC Infrastructure Testing
7.1 RAC Voting Disk Test
7.2 RAC Cluster Registry Test
7.3 RAC ASM Tests
7.4 RAC Interconnect Test
7.5 Loss of Oracle Config File
Appendix
1. OCR/Voting disk volumes inaccessible by rac02
2. RAC cluster went down on PUBLIC network test
Read more…

RAC Build on Solaris: Fifth Phase

Posted by Sagar Patil

Step-by-step instructions on how to remove the temp nodes from the RAC cluster, and how to verify their removal.

REMOVAL OF CLUSTERING AFTER FAILOVER

1. Shut down the instances prod1 and prod2, then do the following.

2. Remove all the DEVDB entries (tempracsrv3, tempracsrv4) from tnsnames.ora on both servers, i.e. prodracsrv1 and prodracsrv2.

3. Remove the following entries from init.ora on prodracsrv1 and prodracsrv2:

*.log_archive_config='dg_config=(PROD,DEVDB)'
*.log_archive_dest_2='service=DEVDB valid_for=(online_logfiles,primary_role) db_unique_name=DEVDB'
*.standby_file_management=auto
*.fal_server='DEVDB'
*.fal_client='PROD'
*.service_names='PROD'

4. After this, your PROD database is ready after the failover.

RAC Build on Solaris, Step-By-Step

Posted by Sagar Patil

The scope of this STEP By STEP documentation will include the following workflow:

*First Phase*
The two prod nodes have Oracle 9i databases, and we want to move the databases to Oracle 10g RAC. Step-by-step export instructions for creating a backup copy of the databases, and step-by-step instructions on installing Sun Solaris 10 to replace Sun Solaris 9.

Setting Up Nodes
Step-by-step documentation for setting up four Sun Solaris 10 nodes, including step-by-step instructions for the NFS setup and instructions to verify connectivity.

*Second Phase*
Oracle 10g R2 RAC Installation for Temp Nodes:
Step-by-step instructions for the Oracle 10g R2 RAC installation. The procedures give a step-by-step guide to installing two nodes (tempracsrv3 and tempracsrv4). This phase also includes documentation on how to verify that the installation and configuration are correct, along with step-by-step instructions on creating the RAC database, importing schemas from the Oracle 9i database into the new databases, and testing database and node connectivity.

*Third Phase*
Oracle 10g R2 RAC Installation for PROD Nodes:
Step-by-step instructions for the Oracle 10g R2 RAC installation. The procedures give a step-by-step guide to installing two nodes (prodracsrv1 and prodracsrv2) and adding them to the existing RAC cluster.

*Fourth Phase*
Step by Step instructions on how to fail RAC databases over from temp nodes to prod nodes. Includes step by step instructions on how to verify the failover from temp nodes to prod nodes. Step by Step instructions on how to test RAC database connectivity after failover.

*Fifth Phase*
Step by Step instructions on how to remove temp nodes from RAC cluster. Step by step instruction on how to verify removal of temp nodes.

Bring Cluster Online/Offline

Posted by Sagar Patil

Step A> Sequence of events to pull the cluster database down:

1. Bring down load balanced/TAF service
srvctl stop service -d orcl -s RAC

2. Stop the RAC instances using
srvctl stop instance -d (database) -i (instance)

3. If needed, stop the ASM instance using
srvctl stop asm -n (node)

4. Stop all node applications using
srvctl stop nodeapps -n (node)

Step B> Sequence of events to bring the cluster database back:

1. Start all node applications using
srvctl start nodeapps -n (node)

2. Start the ASM instance using
srvctl start asm -n (node)

3. Start the RAC instances using
srvctl start instance -d (database) -i (instance)

4. Finish up by bringing our load balanced/TAF service online
srvctl start service -d orcl -s RAC
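
Putting the two sequences together as a sketch script (the database, service, instance, and node names below are placeholders for your own cluster):

#!/bin/bash
# stop_rac.sh : ordered shutdown of a two-node cluster database
DB=orcl
srvctl stop service  -d $DB -s RAC
srvctl stop instance -d $DB -i orcl1
srvctl stop instance -d $DB -i orcl2
srvctl stop asm -n node1
srvctl stop asm -n node2
srvctl stop nodeapps -n node1
srvctl stop nodeapps -n node2

Reversing the order (nodeapps, ASM, instances, then the service) brings the stack back up.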

Oracle Clusterware Administration Quick Reference

Posted by Sagar Patil

Sequence of events to bring cluster database back..

1.    Start all node applications using “srvctl start nodeapps -n (node)”
2.    Start the ASM instance using “srvctl start asm -n (node)”
3.    Start the RAC instances using “srvctl start instance -d (database) -i (instance)”
4.    Finish up by bringing our load balanced/TAF service online using “srvctl start service -d orcl -s RAC”

List all nodes participating in the cluster:

[oracle@oradb4 oracle]$ olsnodes
oradb4
oradb3
oradb2
oradb1

List all nodes participating in the cluster with their assigned node numbers:

[oracle@oradb4 tmp]$ olsnodes -n
oradb4  1
oradb3  2
oradb2  3
oradb1  4

List all nodes participating in the cluster with the private interconnect assigned to each node:

[oracle@oradb4 tmp]$ olsnodes -p
oradb4  oradb4-priv
oradb3  oradb3-priv
oradb2  oradb2-priv
oradb1  oradb1-priv

Check the health of the Oracle Clusterware daemon processes:

[oracle@oradb4 oracle]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

Query and administer css vote disks :

[root@oradb4 root]# crsctl add css votedisk /u03/oradata/CssVoteDisk.dbf
Now formatting voting disk: /u03/oradata/CssVoteDisk.dbf
Read -1 bytes of 512 at offset 0 in voting device (CssVoteDisk.dbf)
successful addition of votedisk /u03/oradata/CssVoteDisk.dbf

For dynamic state dump of the CRS:

[root@oradb4 root]# crsctl debug statedump crs
dumping State for crs objects
The dynamic state dump information is appended to the crsd log file located in the $ORA_CRS_HOME/log/oradb4/crsd directory.

Verify the Oracle Clusterware version:

[oracle@oradb4 log]$ crsctl query crs softwareversion
CRS software version on node [oradb4] is [10.2.0.0.0]

Verify the current version of Oracle Clusterware being used:

[oracle@oradb4 log]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.0.0]

RAC on Windows,Linux with VMWARE, FIREWIRE, NFS

Posted by Sagar Patil

Some cheap/easy ways to Install RAC on inexpensive hardware

Step-By-Step Installation of RAC on Linux – Single Node (Oracle9i 9.2.0 with OCFS) single_node_oracle9i_920_with_ocfs

RAC Different Test Environments Made Easy.pdf rac_different_test_environments_made_easy_208 from “Plamen Zyumbyulev”

Why RAC needs a VIP (Virtual IP address)

Posted by Sagar Patil

Importance of the VIP
Real Application Clusters in 10g doesn’t particularly want you to connect to the physical IP address associated with a network interface.
Doing so means IP packets are routed to a physical MAC address, so if that address ever ceases to exist (such as when a server dies), we have to wait for the TCP/IP networking protocol itself to work out that the packets are undeliverable.
That can take up to 10 minutes, which would make failover in a RAC potentially very slow.
Instead, Oracle wants users to connect to a Virtual IP address (VIP). That is an IP address bound to a software-controlled MAC address, and since it is software controlled, the software can arrange for failures to be handled much faster than the plain old TCP/IP stack can (in seconds, usually).
The VIP for a RAC node is quite often the normal, real IP address plus one, so in my case that would imply a VIP of 192.168.1.111. I won’t need it until it comes time to install the Oracle software, but it’s good to plan ahead.
Even if the installer’s VIP check fails you can continue the installation and configure the VIP separately with vipca later, even if that IP does not exist yet (you don’t need to own that IP at install time).
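
To see which VIP a node is configured with and whether it is running (a sketch; the node name is a placeholder, and -a is the 10g srvctl flag that includes the VIP configuration):

[oracle@rac1]$ srvctl config nodeapps -n rac1 -a    # shows the VIP name/address configured for the node
[oracle@rac1]$ srvctl status nodeapps -n rac1       # shows whether the VIP is currently online, and where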

RAC/CRS/Voting disk failover Tests

Posted by Sagar Patil

Read more…

TAF Failover Configuration and Testing

Posted by Sagar Patil

Configure the service on RAC servers for a failover

TNS Client side config

PROD =
(DESCRIPTION =
(enable=broken)
(LOAD_BALANCE = yes)
(ADDRESS = (PROTOCOL = TCP)(HOST = oravip01.oracledbasupport.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = oravip02.oracledbasupport.com)(PORT = 1521))
(CONNECT_DATA =
(SERVICE_NAME = prod)
(failover_mode=(type=select)(method=basic))
)
)

Let’s test a failover. Connect to Oracle instance 1 or 2:

[oracle@ora02 ~]$ showcrs
HA Resource Target State
———– —— —–
ora.ora01.ASM1.asm ONLINE ONLINE on ora01
ora.ora01.LISTENER_ora01.lsnr ONLINE ONLINE on ora01
ora.ora01.gsd ONLINE UNKNOWN on ora01
ora.ora01.ons ONLINE UNKNOWN on ora01
ora.ora01.vip ONLINE ONLINE on ora01
ora.ora02.ASM2.asm ONLINE ONLINE on ora02
ora.ora02.LISTENER_ora02.lsnr ONLINE ONLINE on ora02
ora.ora02.gsd ONLINE UNKNOWN on ora02
ora.ora02.ons ONLINE UNKNOWN on ora02
ora.ora02.vip ONLINE ONLINE on ora02
ora.prod.db ONLINE ONLINE on ora01
ora.prod.prod.cs ONLINE ONLINE on ora02
ora.prod.prod.prod1.srv ONLINE ONLINE on ora01
ora.prod.prod.prod2.srv ONLINE ONLINE on ora02
ora.prod.prod1.inst ONLINE ONLINE on ora01
ora.prod.prod2.inst ONLINE ONLINE on ora02

SQL> select instance_name from v$instance;
INSTANCE_NAME
—————-
prod2

[oracle@ora02 ~]$ crs_stop ora.prod.prod2.inst
Attempting to stop `ora.prod.prod2.inst` on member `ora02`
Stop of `ora.prod.prod2.inst` on member `ora02` succeeded.
At this stage the connections are diverted to prod1 instance.

SQL> select instance_name from v$instance;
INSTANCE_NAME
—————-
prod1

[oracle@ora02 ~]$ showcrs
HA Resource Target State
———– —— —–
ora.ora01.ASM1.asm ONLINE ONLINE on ora01
ora.ora01.LISTENER_ora01.lsnr ONLINE ONLINE on ora01
ora.ora01.gsd ONLINE UNKNOWN on ora01
ora.ora01.ons ONLINE UNKNOWN on ora01
ora.ora01.vip ONLINE ONLINE on ora01
ora.ora02.ASM2.asm ONLINE ONLINE on ora02
ora.ora02.LISTENER_ora02.lsnr ONLINE ONLINE on ora02
ora.ora02.gsd ONLINE UNKNOWN on ora02
ora.ora02.ons ONLINE UNKNOWN on ora02
ora.ora02.vip ONLINE ONLINE on ora01
ora.prod.db ONLINE ONLINE on ora01
ora.prod.prod.cs ONLINE ONLINE on ora02
ora.prod.prod.prod1.srv ONLINE ONLINE on ora01
ora.prod.prod.prod2.srv ONLINE OFFLINE
ora.prod.prod1.inst ONLINE ONLINE on ora01
ora.prod.prod2.inst OFFLINE OFFLINE

[oracle@ora02 ~]$ crs_start ora.prod.prod2.inst
Attempting to start `ora.prod.prod2.inst` on member `ora02`
Start of `ora.prod.prod2.inst` on member `ora02` succeeded.

What happens if the server is restarted?

I am connected to the prod2 instance, and a reboot migrates my connection to prod1 automatically.

SQL> select instance_name from v$instance;
INSTANCE_NAME
—————-
prod2

SQL> select count(*) from
(select * from dba_source union select * from dba_source union select * from dba_source union select * from dba_source union select * from dba_source);
COUNT(*)
———-
292465

SQL> select instance_name from v$instance;
INSTANCE_NAME
—————-
prod1

Let’s see how RAC load balancing works. Write a small SQL test script (verify.sql) like the one below:

REM the following query is for TAF connection verification
col sid format 999
col serial# format 9999999
col failover_type format a13
col failover_method format a15
col failed_over format a11
SELECT   sid,
 serial#,
 failover_type,
 failover_method,
 failed_over
 FROM   v$session
 WHERE   username = 'SU';

REM the following query is for load balancing verification
SELECT   instance_name FROM v$instance;
exit

REM We can also combine two queries:
col inst_id format 999
col sid format 999
col serial# format 9999999
col failover_type format a13
col failover_method format a15
col failed_over format a11
SELECT   inst_id,
 sid,
 serial#,
 failover_type,
 failover_method,
 failed_over
 FROM   gv$session
 WHERE   username = 'SU';

REM a simple select to see the distribution of users when testing connection : load balancing
 SELECT   inst_id, COUNT ( * )
 FROM   gv$session
GROUP BY   inst_id;

Write a loop.sh file to make a number of SQL*Plus connections: copy and paste at least 100 repetitions of the pair of lines below (or use the loop shown after this listing). The Oracle listener will load-balance connections by diverting new connections to the least loaded RAC instance.

nohup sqlplus system/0ra01@failover @verify.sql &
sleep 1
nohup sqlplus system/0ra01@failover @verify.sql &
sleep 1
nohup sqlplus system/0ra01@failover @verify.sql &
sleep 1
nohup sqlplus system/0ra01@failover @verify.sql &
sleep 1
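
The same thing without the copy-and-paste, using a loop (same connect string and script as above):

#!/bin/bash
# loop.sh (sketch): fire 100 short-lived sessions, one per second
for i in $(seq 1 100); do
  nohup sqlplus system/0ra01@failover @verify.sql &
  sleep 1
done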

Run loop.sh and note how connections are shared between the RAC node 1 and node 2 instances:

[oracle@ora01 scripts]$ grep prod2 nohup.out | wc -l
35
[oracle@ora01 scripts]$ grep prod1 nohup.out | wc -l
41

 

RAC | How to use SRVCTL Command

Posted by Sagar Patil

Check the current configuration information:

srvctl config database : displays the configuration information of the cluster database.
srvctl config service : displays the configuration information for the services.
srvctl config nodeapps : displays the configuration information for the node applications.
srvctl config asm : displays the configuration for the ASM instances on the node.

Summary of srvctl commands:

srvctl add / srvctl modify / srvctl remove (targets: database, instance, service, nodeapps)
srvctl add and srvctl remove add/remove the target’s configuration information to/from the OCR. srvctl modify allows you to change some of the target’s configuration information in the OCR without wiping out the rest.

srvctl relocate (target: service)
Allows you to relocate a service from one named instance to another named instance.

srvctl config (targets: database, service, nodeapps, asm)
Lists configuration information for the target from the OCR.

srvctl disable / srvctl enable (targets: database, instance, service, asm)
srvctl disable disables the target, meaning CRS will not consider it for automatic startup, failover, or restart; this is useful to ensure an object that is down for maintenance is not accidentally restarted automatically. srvctl enable re-enables the specified object.

srvctl getenv / srvctl setenv / srvctl unsetenv (targets: database, instance, service, nodeapps)
srvctl getenv displays the environment variables stored in the OCR for the target; srvctl setenv sets them, and srvctl unsetenv unsets them.

srvctl start / srvctl status / srvctl stop (targets: database, instance, service, nodeapps, asm)
Start, stop, or display the status (started or stopped) of the target.

Adding a Database Service
srvctl add service -d <database_name> -s <service_name> -r "<preferred_list>" [-a "<available_list>"]

london1$ srvctl add service -d RAC -s SERVICE2 -r "RAC1,RAC2" -a "RAC3,RAC4"

Starting a Database Service
srvctl start service -d <database_name> [-s "<service_name_list>" [-i <inst_name>]] [-o <start_options>] [-c <connect_str> | -q]

london1$ srvctl start service -d RAC -s "SERVICE1,SERVICE2"

Stopping a Database Service
srvctl stop service -d <database_name> [-s "<service_name_list>" [-i <inst_name>]] [-c <connect_str> | -q] [-f]

london1$ srvctl stop service -d RAC -s "SERVICE2,SERVICE3" -f

Checking the Current Database Service Configuration
srvctl config service -d <database_name> [-s <service_name>] [-a] [-S <level>]

london1$ srvctl config service -d RAC -a
The -a option includes information about the TAF configuration for the database service.

Checking Current Database Service Status
srvctl status service -d <database_name> -s "<service_name_list>" [-f] [-v] [-S <level>]

london1$ srvctl status service -d RAC -s "SERVICE1,SERVICE4"

Enabling and Disabling a Database Service
srvctl disable service -d <database_name> -s "<service_name_list>" [-i <inst_name>]

london1$ srvctl disable service -d RAC -s SERVICE2 -i RAC4

srvctl enable service -d <database_name> -s "<service_name_list>" [-i <inst_name>]

london1$ srvctl enable service -d RAC -s SERVICE2 -i RAC4

Removing a Database Service
srvctl remove service -d <database_name> -s <service_name> [-i <inst_name>] [-f]

london1$ srvctl remove service -d RAC -s SERVICE4

Relocating a Database Service
srvctl relocate service -d <database_name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]

london1$ srvctl relocate service -d RAC -s SERVICE3 -i RAC2 -t RAC4

Administering Instances

Starting an instance: srvctl start instance -d prod -i "prod1,prod2"

Stopping an instance: srvctl stop instance -d prod -i "prod1,prod2"

Checking the status of an instance: srvctl status instance -d prod -i "prod1,prod2"

Adding a new instance configuration: srvctl add instance -d prod -i prod3 -n prod3_node

Removing an existing instance configuration: srvctl remove instance -d prod -i prod3

Disabling an instance: srvctl disable instance -d prod -i "prod1,prod2"

Enabling an instance: srvctl enable instance -d prod -i "prod1,prod2"

OCFS2 Support Guide Linux/Solaris

Posted by Sagar Patil

This support guide is a supplement to the OCFS2 User’s Guide and the OCFS2 FAQ. The information provided is directed towards support staff. End users should consult the User’s Guide and/or FAQ for information on setting up and using OCFS2.

Read more…
