Oracle 10g R2 RAC Installation for PROD Nodes:
Step-by-step instructions for installing Oracle 10g R2 RAC. These procedures provide a step-by-step guide for installing two nodes (prodracsrv1 and prodracsrv2) and adding them to the existing RAC cluster (configuring failover).
10g RAC Installation (Part-II Clusterware & Database Installation)
1. Install Oracle Clusterware
Mount the Clusterware DVD (or the downloaded and extracted media) on prodracsrv1 and, as the oracle user, run runInstaller.
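For example, if the DVD is mounted at /mnt/dvd (the mount point here is only an example; adjust it to your system):
prodracsrv1-> cd /mnt/dvd
prodracsrv1-> ./runInstaller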
1. Welcome: Click on Next.
2. Specify Inventory directory and credentials:
o Enter the full path of the inventory directory:
/u01/app/oracle/oraInventory.
o Specify Operating System group name: oinstall.
3. Specify Home Details:
o Name: OraCrs10g_home
o Path: /u01/app/oracle/product/10.2.0/crs_1
4. Product-Specific Prerequisite Checks:
o Ignore the warning on physical memory requirement.
5. Specify Cluster Configuration: Click on Add.
o Public Node Name: prodracsrv2.mycorpdomain.com
o Private Node Name: prodracsrv2-priv.mycorpdomain.com
o Virtual Host Name: prodracsrv2-vip.mycorpdomain.com
6. Specify Network Interface Usage:
o Interface Name: eth0
o Subnet: 192.168.2.0
o Interface Type: Public
o Interface Name: eth1
o Subnet: 10.10.10.0
o Interface Type: Private
7. Specify Oracle Cluster Registry (OCR) Location: Select External Redundancy.
For simplicity, here you will not mirror the OCR. In a production environment,
you may want to consider multiplexing the OCR for higher redundancy.
o Specify OCR Location: /u01/ocr_config
8. Specify Voting Disk Location: Select External Redundancy.
Similarly, for simplicity, we have chosen not to mirror the Voting Disk.
o Voting Disk Location: /u01/votingdisk
9. Summary: Click on Install.
10. Execute Configuration scripts: Execute the scripts below as the root user
sequentially, one at a time. Do not proceed to the next script until the current
script completes.
o Execute /u01/app/oracle/oraInventory/orainstRoot.sh on prodracsrv1.
o Execute /u01/app/oracle/oraInventory/orainstRoot.sh on prodracsrv2.
o Execute /u01/app/oracle/product/10.2.0/crs_1/root.sh on prodracsrv1.
o Execute /u01/app/oracle/product/10.2.0/crs_1/root.sh on prodracsrv2.
The root.sh script on prodracsrv2 invokes VIPCA automatically, but it fails with the error "The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs." Because a non-routable IP address (192.168.x.x) is used for the public interface, the Oracle Cluster Verification Utility (CVU) cannot find a suitable public interface. The workaround is to run VIPCA manually.
11. As the root user, manually invoke VIPCA on the second node (prodracsrv2).
# /u01/app/oracle/product/10.2.0/crs_1/bin/vipca
Welcome: Click on Next.
Network Interfaces: Select eth0.
Virtual IPs for cluster nodes:
o Node name: prodracsrv1
o IP Alias Name: prodracsrv1-vip
o IP address: 192.168.2.31
o Subnet Mask: 255.255.255.0
o Node name: prodracsrv2
o IP Alias Name: prodracsrv2-vip
o IP address: 192.168.2.32
o Subnet Mask: 255.255.255.0
Summary: Click on Finish.
Configuration Assistant Progress Dialog: After the configuration has completed, click on OK.
Configuration Results: Click on Exit.
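Before returning to the OUI session, you can optionally confirm that the VIPs and the other node applications are up on both nodes. For example, using the CRS home installed above:
prodracsrv1-> /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl status nodeapps -n prodracsrv1
prodracsrv1-> /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl status nodeapps -n prodracsrv2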
Return to the Execute Configuration scripts screen on prodracsrv1 and click on OK.
Configuration Assistants: Verify that all checks are successful. The OUI performs a Clusterware post-installation check at the end. If the CVU fails, correct the problem and re-run the following command as the oracle user:
prodracsrv1-> /u01/app/oracle/product/10.2.0/crs_1/bin/cluvfy stage -post crsinst -n prodracsrv1,prodracsrv2
Performing post-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "prodracsrv1".

Checking user equivalence...
User equivalence check passed for user "oracle".

Checking Cluster manager integrity...

Checking CSS daemon...
Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...

Cluster integrity check passed

Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...
OCR of correct Version "2" exists.

Checking data integrity of OCR...
Data integrity check for OCR passed.

OCR integrity check passed.

Checking CRS integrity...

Checking daemon liveness...
Liveness check passed for "CRS daemon".

Checking daemon liveness...
Liveness check passed for "CSS daemon".

Checking daemon liveness...
Liveness check passed for "EVM daemon".

Checking CRS health...
CRS health check passed.

CRS integrity check passed.

Checking node application existence...

Checking existence of VIP node application (required)
Check passed.

Checking existence of ONS node application (optional)
Check passed.

Checking existence of GSD node application (optional)
Check passed.

Post-check for cluster services setup was successful.
12. End of Installation: Click on Exit.
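Before starting the database installation, it is also worth confirming that the Clusterware stack is healthy and that all resources are online on both nodes. For example, using the CRS home installed above:
prodracsrv1-> /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl check crs
prodracsrv1-> /u01/app/oracle/product/10.2.0/crs_1/bin/crs_stat -t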
2. Install Oracle Database 10g Release 2
After mounting the Database 10g R2 DVD, run the installer:
prodracsrv1-> ./runInstaller
1. Welcome: Click on Next.
2. Select Installation Type:
o Select Enterprise Edition.
3. Specify Home Details:
o Name: OraDb10g_home1
o Path: /u01/app/oracle/product/10.2.0/db_1
4. Specify Hardware Cluster Installation Mode:
o Select the "Cluster Installation" option and make sure both RAC nodes are selected, then click the "Next" button.
5. Product-Specific Prerequisite Checks: Wait while the prerequisite checks run. If there are any failures, correct them and retry the checks before clicking the "Next" button.
6. Select Configuration Option: Select the "Install database Software only" option, then click the "Next" button.
7. Summary: Click the "Install" button to continue.
8. Wait while the database software installs.
9. Once the installation is complete, wait while the configuration assistants run.
10. Execute Configuration scripts: Execute the "root.sh" scripts on both nodes, as instructed on the "Execute Configuration scripts" screen, then click the "OK" button.
11. End of Installation: When the installation is complete, click the "Exit" button to leave the installer.
Adding to the cluster
RAC Physical Standby for a RAC Primary
This document assumes that the database unique name of the primary RAC database is DEVDB. The instance names of the two primary RAC instances are DEVDB1 (on node DEVDB_tempracsrv3) and DEVDB2 (on node DEVDB_tempracsrv4). The database unique name of the RAC standby database is PROD, and the two standby instance names are PROD1 (on node PROD_prodracsrv1) and PROD2 (on node PROD_prodracsrv2).
This document includes the following tasks:
• Task 1: Gather Files and Perform Back Up
• Task 2: Configure Oracle Net on the Standby
• Task 3: Create the Standby Instances and Database
• Task 4: Configure the Primary Database for Data Guard
• Task 5: Verify Data Guard Configuration
This document assumes that the following conditions are met:
• The primary RAC database is in archivelog mode
• The primary and standby databases are using a flash recovery area.
• The standby RAC hosts (prodracsrv1, prodracsrv2) have an existing Oracle software installation.
• Oracle Managed Files (OMF) is used for all storage.
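A quick way to confirm the first two assumptions on the primary (archivelog mode and a flash recovery area) is, for example:
SQL> ARCHIVE LOG LIST
SQL> SHOW PARAMETER db_recovery_file_dest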
TASK 1: GATHER FILES AND PERFORM BACK UP
1. On the tempracsrv3 node, create a staging directory. For example:
[oracle@DEVDB_tempracsrv3 oracle]$ mkdir -p /opt/oracle/stage
2. Create the same exact path on one of the standby hosts:
[oracle@PROD_prodracsrv1 oracle]$ mkdir -p /opt/oracle/stage
3. On the tempracsrv3 node, connect to the primary database and create a PFILE from the SPFILE in the staging directory. For example:
SQL> CREATE PFILE='/opt/oracle/stage/initDEVDB.ora' FROM SPFILE;
4. On the tempracsrv3 node, perform an RMAN backup of the primary database that places the backup pieces into the staging directory. For example:
[oracle@DEVDB_tempracsrv3 stage]$ rman target /
RMAN> BACKUP DEVICE TYPE DISK FORMAT '/opt/oracle/stage/%U' DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP DEVICE TYPE DISK FORMAT '/opt/oracle/stage/%U' CURRENT CONTROLFILE FOR STANDBY;
5. Place a copy of the listener.ora, tnsnames.ora, and sqlnet.ora files into the staging directory. For example:
[oracle@DEVDB_tempracsrv3 oracle]$ cp $ORACLE_HOME/network/admin/*.ora /opt/oracle/stage
6. Copy the contents of the staging directory on the RAC primary node to the standby node on which the staging directory was created in step 2. For example:
[oracle@DEVDB_tempracsrv3 oracle]$ scp /opt/oracle/stage/* \
oracle@PROD_prodracsrv1:/opt/oracle/stage
From there, copy all of the datafiles, the current online redo log files, and the standby control file to their corresponding locations on the standby. (Note: the locations should be the same as they were on the primary.) An example transfer is shown below.
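If you copy these files manually, the transfer might look like the following; the /u02/oradata/DEVDB path is purely illustrative, so substitute the locations actually reported by V$DATAFILE and V$LOGFILE on the primary:
[oracle@DEVDB_tempracsrv3 oracle]$ scp /u02/oradata/DEVDB/*.dbf \
oracle@PROD_prodracsrv1:/u02/oradata/DEVDB/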
TASK 2: CONFIGURE ORACLE NET SERVICES ON THE STANDBY
1. Copy the listener.ora, tnsnames.ora, and sqlnet.ora files from the staging directory on the standby host to the $ORACLE_HOME/network/admin directory on all standby hosts.
2. Modify the listener.ora file on each standby host to contain the VIP address of that host.
3. Modify the tnsnames.ora file on each node, including the primary RAC nodes and standby RAC nodes, to contain all primary and standby net service names. You should also modify the Oracle Net aliases that are used for the local_listener and remote_listener parameters to point to the listener on each standby host (an example of these aliases appears after the table). In this example, each tnsnames.ora file should contain all of the net service names in the following table:
Example Entries in the tnsnames.ora Files
Primary Net Service Name:

DEVDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = DEVDB_tempracsrv3vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = DEVDB_tempracsrv4vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DEVDB)
    )
  )

Standby Net Service Name:

PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = PROD_prodracsrv1vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = PROD_prodracsrv2vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PROD)
    )
  )
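The aliases referenced by the local_listener and remote_listener parameters later in this document might look like the following on the standby side; the listener names and port shown here are assumptions and must match your actual listener.ora entries:

LISTENERS_PROD =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = PROD_prodracsrv1vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = PROD_prodracsrv2vip)(PORT = 1521))
  )

LISTENER_PROD_prodracsrv1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = PROD_prodracsrv1vip)(PORT = 1521))

LISTENER_PROD_prodracsrv2 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = PROD_prodracsrv2vip)(PORT = 1521))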
4. Start the standby listeners on all standby hosts.
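For example, on each standby host (lsnrctl start with no argument starts the default listener; if your listener has a specific name, pass that name as an argument):
[oracle@PROD_prodracsrv1 oracle]$ lsnrctl start
[oracle@PROD_prodracsrv2 oracle]$ lsnrctl start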
TASK 3: CREATE THE STANDBY INSTANCES AND DATABASE
1. To enable secure transmission of redo data, make sure the primary and standby databases use a password file, and make sure the password for the SYS user is identical on every system. On each standby host, create the password file in $ORACLE_HOME/dbs, named after that host's instance SID (orapwPROD1 on prodracsrv1, orapwPROD2 on prodracsrv2). For example:
$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwPROD1 password=oracle
2. Copy the primary database PFILE from the staging area to the $ORACLE_HOME/dbs directory on all standby hosts, renaming it for the local standby instance (initPROD1.ora on prodracsrv1, initPROD2.ora on prodracsrv2). For example:
[oracle@PROD_prodracsrv1 stage]$ cp initDEVDB.ora $ORACLE_HOME/dbs/initPROD1.ora
3. Modify the standby initialization parameter file copied from the primary node to include Data Guard parameters as illustrated in the following table:
Initialization Parameter Modifications

RAC Parameters (Before, primary DEVDB):
*.cluster_database=true
*.db_unique_name=DEVDB
DEVDB1.instance_name=DEVDB1
DEVDB2.instance_name=DEVDB2
DEVDB1.instance_number=1
DEVDB2.instance_number=2
DEVDB1.thread=1
DEVDB2.thread=2
DEVDB1.undo_tablespace=UNDOTBS1
DEVDB2.undo_tablespace=UNDOTBS2
*.remote_listener=LISTENERS_DEVDB
DEVDB1.LOCAL_LISTENER=LISTENER_DEVDB_tempracsrv3
DEVDB2.LOCAL_LISTENER=LISTENER_DEVDB_tempracsrv4

RAC Parameters (After, standby PROD):
*.cluster_database=true
*.db_unique_name=PROD
PROD1.instance_name=PROD1
PROD2.instance_name=PROD2
PROD1.instance_number=1
PROD2.instance_number=2
PROD1.thread=1
PROD2.thread=2
PROD1.undo_tablespace=UNDOTBS1
PROD2.undo_tablespace=UNDOTBS2
*.remote_listener=LISTENERS_PROD
PROD1.LOCAL_LISTENER=LISTENER_PROD_prodracsrv1
PROD2.LOCAL_LISTENER=LISTENER_PROD_prodracsrv2

Data Guard Parameters (After, standby PROD; these are new, so there is no "Before" entry):
*.log_archive_config='dg_config=(PROD,DEVDB)'
*.log_archive_dest_2='service=DEVDB
valid_for=(online_logfiles,primary_role)
db_unique_name=DEVDB'
*.standby_file_management=auto
*.fal_server='DEVDB'
*.fal_client='PROD'
*.service_names='PROD'

Other Parameters (Before, primary DEVDB):
*.background_dump_dest=/u01/app/oracle/admin/DEVDB/bdump
*.core_dump_dest=/u01/app/oracle/admin/DEVDB/cdump
*.user_dump_dest=/u01/app/oracle/admin/DEVDB/udump
*.audit_file_dest=/u01/app/oracle/admin/DEVDB/adump
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.log_archive_dest_1='LOCATION=/u01/app/oracle/arch'
*.dispatchers=DEVDBXDB

Other Parameters (After, standby PROD):
*.background_dump_dest=/u01/app/oracle/admin/PROD/bdump
*.core_dump_dest=/u01/app/oracle/admin/PROD/cdump
*.user_dump_dest=/u01/app/oracle/admin/PROD/udump
*.audit_file_dest=/u01/app/oracle/admin/PROD/adump
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.dispatchers=PRODXDB
4. Connect to the standby database on one standby host, with the standby in the IDLE state, and create an SPFILE in the standby DATA disk group:
SQL> CREATE SPFILE='+DATA/PROD/spfilePROD.ora' FROM PFILE='?/dbs/initPROD1.ora';
5. In the $ORACLE_HOME/dbs directory on each standby host, create a PFILE named init<oracle_sid>.ora (initPROD1.ora on prodracsrv1, initPROD2.ora on prodracsrv2) that contains a pointer to the SPFILE. For example:
[oracle@PROD_prodracsrv1 oracle]$ cd $ORACLE_HOME/dbs
[oracle@PROD_prodracsrv1 dbs]$ echo "SPFILE='+DATA/PROD/spfilePROD.ora'" > initPROD1.ora
6. Create the dump directories on all standby hosts as referenced in the standby initialization parameter file. For example:
[oracle@PROD_prodracsrv1 oracle]$ mkdir -p $ORACLE_BASE/admin/PROD/bdump
[oracle@PROD_prodracsrv1 oracle]$ mkdir -p $ORACLE_BASE/admin/PROD/cdump
[oracle@PROD_prodracsrv1 oracle]$ mkdir -p $ORACLE_BASE/admin/PROD/udump
[oracle@PROD_prodracsrv1 oracle]$ mkdir -p $ORACLE_BASE/admin/PROD/adump
7. After setting up the appropriate environment variables on each standby host, such as ORACLE_SID, ORACLE_HOME, and PATH (see the example environment after this step), start the standby database instance on the standby host that has the staging directory, without mounting the control file.
SQL> STARTUP NOMOUNT
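The environment referred to in the previous step might look like this on prodracsrv1; the paths assume the Oracle home used during the database installation above, so adjust them to your layout:
[oracle@PROD_prodracsrv1 oracle]$ export ORACLE_BASE=/u01/app/oracle
[oracle@PROD_prodracsrv1 oracle]$ export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
[oracle@PROD_prodracsrv1 oracle]$ export ORACLE_SID=PROD1
[oracle@PROD_prodracsrv1 oracle]$ export PATH=$ORACLE_HOME/bin:$PATH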
8. From the standby host where the standby instance was just started, duplicate the primary database as a standby into the ASM disk group. For example:
$ rman target sys/oracle@DEVDB auxiliary /
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY;
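Once the duplicate completes and the standby is mounted, you can confirm the database name, unique name, and role from the standby instance:
SQL> SELECT NAME, DB_UNIQUE_NAME, DATABASE_ROLE FROM V$DATABASE;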
9. Connect to the standby database, and create the standby redo logs to support the standby role. The standby redo logs must be the same size as the primary database online logs. The recommended number of standby redo log groups is:
(maximum number of logfiles per thread + 1) * maximum number of threads
This example uses two online log files for each thread, so the number of standby redo log groups should be (2 + 1) * 2 = 6; that is, one more standby redo log group per thread than there are online log groups.
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
GROUP 5 SIZE 10M,
GROUP 6 SIZE 10M,
GROUP 7 SIZE 10M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
GROUP 8 SIZE 10M,
GROUP 9 SIZE 10M,
GROUP 10 SIZE 10M;
These statements create two standby log members for each group, and each member is 10MB in size. One member is created in the directory specified by the DB_CREATE_FILE_DEST initialization parameter, and the other member is created in the directory specified by the DB_RECOVERY_FILE_DEST initialization parameter. Because this example assumes that there are two redo log groups in two threads, the next group is group five.
You can check the number and group numbers of the redo logs by querying the V$LOG view:
SQL> SELECT * FROM V$LOG;
You can check the results of the previous statements by querying the V$STANDBY_LOG view:
SQL> SELECT * FROM V$STANDBY_LOG;
You can also see the members created by querying the V$LOGFILE view:
SQL> SELECT * FROM V$LOGFILE;
10. On one standby host only (this will be your designated Redo Apply instance), start managed recovery and real-time apply on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
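To confirm that the managed recovery process (MRP) is running and applying redo on this instance, a quick check is:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY;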
11. On either node of the standby cluster, register the standby database and the database instances with the Oracle Cluster Registry (OCR) using the Server Control (SRVCTL) utility. For example:
$ srvctl add database -d PROD -o /u01/app/oracle/product/10.2.0/db_1
$ srvctl add instance -d PROD -i PROD1 -n PROD_prodracsrv1
$ srvctl add instance -d PROD -i PROD2 -n PROD_prodracsrv2
The following are descriptions of the options in these commands:
The -d option specifies the database unique name (DB_UNIQUE_NAME) of the database.
The -i option specifies the database instance name.
The -n option specifies the node on which the instance is running.
The -o option specifies the Oracle home of the database.
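Once registered, the standby database can also be started and checked through SRVCTL; for example, the mount option below keeps the physical standby instances in the mounted state used for Redo Apply:
$ srvctl start database -d PROD -o mount
$ srvctl status database -d PROD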
TASK 4: CONFIGURE THE PRIMARY DATABASE FOR DATA GUARD
1. Configure the primary database initialization parameters to support both the primary and standby roles.
*.log_archive_config='dg_config=(PROD,DEVDB)'
*.log_archive_dest_2='service=PROD
valid_for=(online_logfiles,primary_role)
db_unique_name=PROD'
*.standby_file_management=auto
*.fal_server='PROD'
*.fal_client='DEVDB'
*.service_names=DEVDB
Note that all of the parameters listed above can be dynamically modified, with the exception of the standby role parameters log_file_name_convert and db_file_name_convert. It is recommended to set those parameters with SCOPE=SPFILE so that they can be put into effect upon the next role change.
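For example, the dynamically modifiable parameters can be applied on the primary without a restart; the values below simply mirror the list above, and SID='*' applies the change to all RAC instances:
SQL> ALTER SYSTEM SET log_archive_config='dg_config=(PROD,DEVDB)' SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET log_archive_dest_2='service=PROD valid_for=(online_logfiles,primary_role) db_unique_name=PROD' SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET standby_file_management=AUTO SCOPE=BOTH SID='*';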
2. Create standby redo logs on the primary database to support the standby role. The standby redo logs are the same size as the primary database online logs. The recommended number of standby redo logs is one more than the number of online redo logs for each thread. Because this example has two online redo logs for each thread, three standby redo logs are required for each thread.
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
GROUP 5 SIZE 10M,
GROUP 6 SIZE 10M,
GROUP 7 SIZE 10M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
GROUP 8 SIZE 10M,
GROUP 9 SIZE 10M,
GROUP 10 SIZE 10M;
These statements create two standby log members for each group, and each member is 10MB in size. One member is created in the directory specified by the DB_CREATE_FILE_DEST initialization parameter, and the other member is created in the directory specified by the DB_RECOVERY_FILE_DEST initialization parameter. Because this example assumes that there are two redo log groups in two threads, the next group is group five.
You can check the number and group numbers of the redo logs by querying the V$LOG view:
SQL> SELECT * FROM V$LOG;
You can check the results of the previous statements by querying the V$STANDBY_LOG view:
SQL> SELECT * FROM V$STANDBY_LOG;
You can also see the members created by querying V$LOGFILE view:
SQL> SELECT * FROM V$LOGFILE;
TASK 5: VERIFY DATA GUARD CONFIGURATION
1. On the standby database, query the V$ARCHIVED_LOG
view to identify existing files in the archived redo log. For example:
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME
FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
2. On the primary database, issue the following SQL statement to force a log switch and archive the current online redo log file group:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
3. On the standby database, query the V$ARCHIVED_LOG
view to verify that the redo data was received and archived on the standby database:
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME
FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
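To go one step further and confirm that the received redo is actually being applied on the standby, you can also check the APPLIED column. For example:
SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;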
We have now set up a failover (physical standby) RAC database for DEVDB on prodracsrv1 and prodracsrv2.