Cloning Oracle Clusterware (Applicable only to 11.2.0.2.0 and not for any previous Releases)

Posted by Sagar Patil

Cloning is the process of copying an existing Oracle installation to a different location and then updating the copied installation to work in the new environment.

The following list describes some situations in which cloning is useful:

  • Cloning provides a way to prepare an Oracle Clusterware home once and deploy it to many hosts simultaneously. You can complete the installation in silent mode, as a noninteractive process. You do not need to use a graphical user interface (GUI) console, and you can perform cloning from a Secure Shell (SSH) terminal session, if required.
  • Cloning enables you to create a new installation (copy of a production, test, or development installation) with all patches applied to it in a single step. Once you have performed the base installation and applied all patch sets and patches on the source system, the clone performs all of these individual steps as a single procedure. This is in contrast to going through the installation process to perform the separate steps to install, configure, and patch the installation on each node in the cluster.
  • Installing Oracle Clusterware by cloning is a quick process. For example, cloning an Oracle Clusterware home to a new cluster with more than two nodes requires a few minutes to install the Oracle software, plus a few minutes more for each node (approximately the amount of time it takes to run the root.sh script).
  • Cloning provides a guaranteed method of repeating the same Oracle Clusterware installation on multiple clusters.

The steps to create a new cluster through cloning are as follows:

Prepare the new cluster nodes
Deploy Oracle Clusterware on the destination nodes
Run the clone.pl script on each destination node
Run the orainstRoot.sh script on each node
Run the CRS_home/root.sh script
Run the configuration assistants and the Oracle Cluster Verify utility

Step 1: Prepare Oracle Clusterware Home for Cloning
Install Oracle Clusterware 11g Release 2 (11.2.0.2.0).
Install any patch sets that are required (for example, 11.2.0.2.n), if necessary.
Apply one-off patches, if necessary.

Step 2: Shut down Oracle Clusterware
[root@RAC1 root]# crsctl stop crs
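
Note that crsctl stop crs must be run as root on each node of the source cluster. As a quick sanity check (my own sketch, not part of the original post), confirm the stack really is down on every node before taking the gold copy:

[root@RAC1 root]# /opt/app/grid/product/11.2/grid_1/bin/crsctl check crs
# once the stack is stopped this should report a "CRS-4639: Could not contact Oracle High Availability Services" style message rather than the four "online" lines shown later in this post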

Step 3: Create a gold copy of the Oracle Clusterware installation
cd /opt/app/grid/product/11.2
tar -czvf /mnt/backup/CRS_build_gold_image_rac02a1.tgz grid_1
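
If you prefer to keep the source home untouched and reduce the per-node cleanup in Step 5, a minimal sketch assuming GNU tar is available, excluding the node-specific log directories at archive time (the exclude patterns are my own suggestion, not part of the original procedure):

cd /opt/app/grid/product/11.2
tar --exclude='grid_1/log/*' --exclude='grid_1/cfgtoollogs/*' -czvf /mnt/backup/CRS_build_gold_image_rac02a1.tgz grid_1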

Step 4: Copy the Oracle Clusterware home to the destination nodes
[root@rac02a1 backup]# scp CRS_build_gold_image_rac02a1.tgz  oracle@RAC1:/opt/app/grid/product/11.2
Warning: Permanently added ‘RAC1,192.168.31.120’ (RSA) to the list of known hosts.
oracle@RAC1’s password:
CRS_build_gold_image_rac02a1.tgz                         100%  987MB  17.3MB/s   00:57

Step 5: Remove unnecessary files from the copy of the Oracle Clusterware home
The Oracle Clusterware home contains files that are relevant only to the source node, so you can remove the unnecessary files from the copy in the log, crs/init, racg/dump, srvm/log, and cdata directories. The following example for Linux and UNIX systems shows the commands you can run to remove unnecessary files from the copy of the Oracle Clusterware home:

[root@node1 root]# cd /opt/app/grid/product/11.2/grid_1
[root@node1 grid_1]# rm -rf log/hostname
[root@node1 grid_1]# find . -name '*.ouibak' -exec rm {} \;
[root@node1 grid_1]# find . -name '*.ouibak.1' -exec rm {} \;
[root@node1 grid_1]# rm -rf root.sh*
[root@node1 grid_1]# cd cfgtoollogs
[root@node1 cfgtoollogs]# find . -type f -exec rm -f {} \;

Step 6: Deploy Oracle Clusterware on the destination nodes (run this on EACH node)
Change the ownership of all files to the oracle user and oinstall group, and create a directory for the Oracle Inventory:

[root@node1 crs]# chown -R oracle:oinstall /opt/app/grid/product/11.2/grid_1
[root@node1 crs]# mkdir -p /opt/app/oracle/oraInventory/
[root@node1 crs]# chown oracle:oinstall /opt/app/oracle/oraInventory/

Go to the $GRID_HOME/clone/bin directory on each destination node and run the clone.pl script, which performs the main Oracle Clusterware cloning tasks:
$perl clone.pl -silent ORACLE_BASE=/opt/app/oracle ORACLE_HOME=/opt/app/grid/product/11.2/grid_1 ORACLE_HOME_NAME=OraHome1Grid INVENTORY_LOCATION=/opt/app/oracle/oraInventory

[oracle@RAC1 bin]$ perl clone.pl -silent ORACLE_BASE=/opt/app/oracle ORACLE_HOME=/opt/app/grid/product/11.2/grid_1 ORACLE_HOME_NAME=OraHome1Grid INVENTORY_LOCATION=/opt/app/oracle/oraInventory
./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/opt/app/oracle" "ORACLE_HOME=/opt/app/grid/product/11.2/grid_1" "ORACLE_HOME_NAME=OraHome1Grid" "INVENTORY_LOCATION=/opt/app/oracle/oraInventory" -silent -noConfig -nowait
Starting Oracle Universal Installer…
Checking swap space: must be greater than 500 MB.   Actual 1983 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-04-01_05-05-56PM. Please wait …Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

You can find the log of this install session at:
/opt/app/oracle/oraInventory/logs/cloneActions2011-04-01_05-05-56PM.log
………………………………………………………………………………………. 100% Done.
Installation in progress (Friday, 1 April 2011 17:06:08 o’clock BST)
………………………………………………………………72% Done.
Install successful
Linking in progress (Friday, 1 April 2011 17:06:10 o’clock BST)
Link successful
Setup in progress (Friday, 1 April 2011 17:06:50 o’clock BST)
…………….                                                100% Done.
Setup successful
End of install phases.(Friday, 1 April 2011 17:07:00 o’clock BST)
WARNING:
The following configuration scripts need to be executed as the “root” user.
/opt/app/grid/product/11.2/grid_1/root.sh

To execute the configuration scripts:
1. Open a terminal window
2. Log in as “root”
3. Run the scripts

Run the script on the local node.

The cloning of OraHome1Grid was successful. Please check ‘/opt/app/oracle/oraInventory/logs/cloneActions2011-04-01_05-05-56PM.log’ for more details.

Launch the Configuration Wizard
[oracle@RAC2 bin]$ nslookup rac04scan
Server:         10.20.11.11
Address:        10.20.11.11#53
Name:   rac04scan
Address: 192.168.31.188
Name:   rac04scan
Address: 192.168.31.187
Name:   rac04scan
Address: 192.168.31.189

$ $GRID_HOME/crs/config/config.sh

Run root.sh on NODE A

[root@RAC1 ~]# /opt/app/grid/product/11.2/grid_1/root.sh
Check /opt/app/grid/product/11.2/grid_1/install/root_RAC1_2011-04-04_12-41-24.log for the output of root script

 

[oracle@RAC1 ~]$ tail -f /opt/app/grid/product/11.2/grid_1/install/root_RAC1_2011-04-04_12-41-24.log

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /opt/app/grid/product/11.2/grid_1
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/app/grid/product/11.2/grid_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
OLR initialization – successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘Linux 2.4’
ACFS-9201: Not Supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘Linux 2.4’
CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘RAC1’
CRS-2676: Start of ‘ora.mdnsd’ on ‘RAC1’ succeeded
CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘RAC1’
CRS-2676: Start of ‘ora.gpnpd’ on ‘RAC1’ succeeded
CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘RAC1’
CRS-2672: Attempting to start ‘ora.gipcd’ on ‘RAC1’
CRS-2676: Start of ‘ora.cssdmonitor’ on ‘RAC1’ succeeded
CRS-2676: Start of ‘ora.gipcd’ on ‘RAC1’ succeeded
CRS-2672: Attempting to start ‘ora.cssd’ on ‘RAC1’
CRS-2672: Attempting to start ‘ora.diskmon’ on ‘RAC1’
CRS-2676: Start of ‘ora.diskmon’ on ‘RAC1’ succeeded
CRS-2676: Start of ‘ora.cssd’ on ‘RAC1’ succeeded
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
Now formatting voting disk: /mnt/crs1/vdisk/rac04vdsk1.
Now formatting voting disk: /mnt/crs2/vdisk/rac04vdsk2.
Now formatting voting disk: /mnt/crs3/vdisk/rac04vdsk3.
CRS-4603: Successful addition of voting disk /mnt/crs1/vdisk/rac04vdsk1.
CRS-4603: Successful addition of voting disk /mnt/crs2/vdisk/rac04vdsk2.
CRS-4603: Successful addition of voting disk /mnt/crs3/vdisk/rac04vdsk3.
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   a77b9ecfd10c4f8abf9dae8e403458e6 (/mnt/crs1/vdisk/rac04vdsk1) []
 2. ONLINE   3a2c370ffe014f20bff0673b01d8164c (/mnt/crs2/vdisk/rac04vdsk2) []
 3. ONLINE   8597ee290c994fd8bf23a4b3c97a98bb (/mnt/crs3/vdisk/rac04vdsk3) []
Located 3 voting disk(s).
ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘Linux 2.4’
ACFS-9201: Not Supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘Linux 2.4’
ACFS-9201: Not Supported
Preparing packages for installation…
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster … succeeded

Run root.sh on NODE B

[root@RAC2 ~]# /opt/app/grid/product/11.2/grid_1/root.sh
Check /opt/app/grid/product/11.2/grid_1/install/root_RAC2_2011-04-04_12-50-53.log for the output of root script

[oracle@RAC2 ~]$ tail -f /opt/app/grid/product/11.2/grid_1/install/root_RAC2_2011-04-04_12-50-53.log
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/app/grid/product/11.2/grid_1/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
OLR initialization – successful
Adding daemon to inittab
ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘Linux 2.4’
ACFS-9201: Not Supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: ‘Linux 2.4’
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node RAC1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster

[root@RAC2 ~]# /opt/app/grid/product/11.2/grid_1/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[root@RAC1 ~]# /opt/app/grid/product/11.2/grid_1/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Step 7: Locating and Viewing Log Files Generated During Cloning
The cloning script runs multiple tools, each of which can generate log files.
After the clone.pl script finishes running, you can view the log files to obtain more information about the status of your cloning procedures. The Oracle documentation referenced below lists the key log files generated during cloning for diagnostic purposes.
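
In this environment the quickest way to review them is to list the central inventory logs directory and tail the cloneActions log reported in the clone.pl output above (a sketch):

[oracle@RAC1 ~]$ ls -lt /opt/app/oracle/oraInventory/logs/
[oracle@RAC1 ~]$ tail -100 /opt/app/oracle/oraInventory/logs/cloneActions2011-04-01_05-05-56PM.log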

Ref : http://download.oracle.com/docs/cd/E11882_01/rac.112/e16794/clonecluster.htm

Cleaning up a machine with previous Oracle 11g Clusterware/RAC install

Posted by Sagar Patil

Here I will be deleting everything from a 2 node 11g RAC cluster

  1. Use “crs_stop -all” to stop all services on RAC nodes
  2. Use DBCA GUI to delete all RAC databases from nodes
  3. Use netca to delete LISTENER config
  4. Deinstall Grid Infrastructure from Server
  5. Deinstall Oracle database software from Server

Steps 1-3 are self-explanatory; a command-line sketch is shown below.
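
For reference, a hedged command-line sketch of steps 1-3 (the database name ORCL and the password placeholder are illustrative only; run the dbca and netca deletions while the stack is still up, then stop everything):

[oracle@RAC1 ~]$ dbca -silent -deleteDatabase -sourceDB ORCL -sysDBAUserName sys -sysDBAPassword <password>
[oracle@RAC1 ~]$ netca        # delete the LISTENER configuration
[oracle@RAC1 ~]$ crs_stop -all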

4. Deinstall Grid Infrastructure from Server:

[oracle@RAC2 backup]$ $GRID_HOME/deinstall/deinstall

Checking for required files and bootstrapping …
Please wait …
Location of logs /opt/app/oracle/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
Install check configuration START
Checking for existence of the Oracle home location /opt/app/grid/product/11.2/grid_1
Oracle Home type selected for de-install is: CRS
Oracle Base selected for de-install is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oracle/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/grid/product/11.2/grid_1
The following nodes are part of this cluster: RAC1,RAC2
Install check configuration END
Skipping Windows and .NET products configuration check
Checking Windows and .NET products configuration END
Traces log file: /opt/app/oracle/oraInventory/logs//crsdc.log
Network Configuration check config START
Network de-configuration trace file location: /opt/app/oracle/oraInventory/logs/netdc_check2011-03-31_10-14-05-AM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /opt/app/oracle/oraInventory/logs/asmcadc_check2011-03-31_10-14-06-AM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]:
######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/grid/product/11.2/grid_1
The cluster node(s) on which the Oracle home de-installation will be performed are:RAC1,RAC2
Oracle Home selected for de-install is: /opt/app/grid/product/11.2/grid_1
Inventory Location where the Oracle home registered is: /opt/app/oracle/oraInventory
Skipping Windows and .NET products configuration check
ASM was not detected in the Oracle Home
Do you want to continue (y – yes, n – no)? [n]: y
A log of this session will be written to: ‘/opt/app/oracle/oraInventory/logs/deinstall_deconfig2011-03-31_10-14-02-AM.out’
Any error messages from this session will be written to: ‘/opt/app/oracle/oraInventory/logs/deinstall_deconfig2011-03-31_10-14-02-AM.err’

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /opt/app/oracle/oraInventory/logs/asmcadc_clean2011-03-31_10-14-44-AM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /opt/app/oracle/oraInventory/logs/netdc_clean2011-03-31_10-14-44-AM.log
De-configuring Naming Methods configuration file on all nodes…
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes…
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes…
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes…
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
—————————————->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node
Run the following command as the root user or the administrator on node "RAC1".
/tmp/deinstall2011-03-31_10-13-56AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-13-56AM/perl/lib -I/tmp/deinstall2011-03-31_10-13-56AM/crs/install /tmp/deinstall2011-03-31_10-13-56AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-03-31_10-13-56AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "RAC2".
/tmp/deinstall2011-03-31_10-13-56AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-13-56AM/perl/lib -I/tmp/deinstall2011-03-31_10-13-56AM/crs/install /tmp/deinstall2011-03-31_10-13-56AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2011-03-31_10-13-56AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands
<—————————————-

Let's run these commands on the nodes

[oracle@RAC1 app]$ /tmp/deinstall2011-03-31_10-13-56AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-13-56AM/perl/lib -I/tmp/deinstall2011mp/deinstall2011-03-31_10-13-56AM/response/deinstall_Ora11g_gridinfrahome1.rsp
[oracle@RAC1 app]$ su -
Password:
[root@RAC1 ~]# /tmp/deinstall2011-03-31_10-13-56AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-13-56AM/perl/lib -I/tmp/deinstall2011-mp/deinstall2011-03-31_10-13-56AM/response/deinstall_Ora11g_gridinfrahome1.rsp”
>
[root@RAC1 ~]# /tmp/deinstall2011-03-31_10-22-37AM/perl/bin/perl -I/tmp/deinstall2011-03-31_10-22-37AM/perl/lib -I/tmp/deinstall2011-03-31_10-22-37AM/crs/install /tmp/deinstall2011-03-31_10-22-37AM/crs/install/rootcrs.pl -force  -deconfig -paramfile “/tmp/deinstall2011-03-31_10-22-37AM/response/deinstall_Ora11g_gridinfrahome1.rsp”
Using configuration parameter file: /tmp/deinstall2011-03-31_10-22-37AM/response/deinstall_Ora11g_gridinfrahome1.rsp
Network exists: 1/192.168.31.0/255.255.255.0/bond0, type static
VIP exists: /RAC1-vip/192.168.31.21/192.168.31.0/255.255.255.0/bond0, hosting node RAC1
VIP exists: /RAC2-vip/192.168.31.23/192.168.31.0/255.255.255.0/bond0, hosting node RAC2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
ACFS-9200: Supported
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.crsd’ on ‘RAC1’
CRS-2677: Stop of ‘ora.crsd’ on ‘RAC1’ succeeded
CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.crf’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.evmd’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘RAC1’
CRS-2677: Stop of ‘ora.crf’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.mdnsd’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.evmd’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.ctssd’ on ‘RAC1’ succeeded
CRS-2673: Attempting to stop ‘ora.cssd’ on ‘RAC1’
CRS-2677: Stop of ‘ora.cssd’ on ‘RAC1’ succeeded
CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘RAC1’
CRS-2673: Attempting to stop ‘ora.diskmon’ on ‘RAC1’
CRS-2677: Stop of ‘ora.diskmon’ on ‘RAC1’ succeeded
CRS-2677: Stop of ‘ora.gipcd’ on ‘RAC1’ succeeded
CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘RAC1’
CRS-2677: Stop of ‘ora.gpnpd’ on ‘RAC1’ succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘RAC1’ has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
************** **************

… continue as below once the above commands have completed successfully

Removing Windows and .NET products configuration END
Oracle Universal Installer clean START
Detach Oracle home ‘/opt/app/grid/product/11.2/grid_1’ from the central inventory on the local node : Done
Failed to delete the directory ‘/opt/app/grid/product/11.2/grid_1’. The directory is in use.
Delete directory ‘/opt/app/grid/product/11.2/grid_1’ on the local node : Failed <<<<
The Oracle Base directory ‘/opt/app/oracle’ will not be removed on local node. The directory is in use by Oracle Home ‘/opt/app/oracle/product/11.2/db_1’.
The Oracle Base directory ‘/opt/app/oracle’ will not be removed on local node. The directory is in use by central inventory.
Detach Oracle home ‘/opt/app/grid/product/11.2/grid_1’ from the central inventory on the remote nodes ‘RAC1’ : Done
Delete directory ‘/opt/app/grid/product/11.2/grid_1’ on the remote nodes ‘RAC1’ : Done
The Oracle Base directory ‘/opt/app/oracle’ will not be removed on node ‘RAC1’. The directory is in use by Oracle Home ‘/opt/app/oracle/product/11.2/db_1’.
The Oracle Base directory ‘/opt/app/oracle’ will not be removed on node ‘RAC1’. The directory is in use by central inventory.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
Oracle install clean START
Clean install operation removing temporary directory ‘/tmp/deinstall2011-03-31_10-22-37AM’ on node ‘RAC2’
Clean install operation removing temporary directory ‘/tmp/deinstall2011-03-31_10-22-37AM’ on node ‘RAC1’
Oracle install clean END
######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Oracle Clusterware is stopped and successfully de-configured on node “RAC2”
Oracle Clusterware is stopped and successfully de-configured on node “RAC1”
Oracle Clusterware is stopped and de-configured successfully.
Skipping Windows and .NET products configuration clean
Successfully detached Oracle home ‘/opt/app/grid/product/11.2/grid_1’ from the central inventory on the local node.
Failed to delete directory ‘/opt/app/grid/product/11.2/grid_1’ on the local node.
Successfully detached Oracle home ‘/opt/app/grid/product/11.2/grid_1’ from the central inventory on the remote nodes ‘RAC1’.
Successfully deleted directory ‘/opt/app/grid/product/11.2/grid_1’ on the remote nodes ‘RAC1’.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[oracle@RAC2 11.2]$ cd $GRID_HOME
[oracle@RAC2 grid_1]$ pwd
/opt/app/grid/product/11.2/grid_1
[oracle@RAC2 grid_1]$ ls -lrt
total 0

Oracle Clusterware was clearly removed from $CRS_HOME/$GRID_HOME. Let's proceed with the next step.

5. Deinstall Oracle database software from Server

Note: Always use the Oracle Universal Installer to remove Oracle software. Do not delete any Oracle home directories without first using the Installer to remove the software.

[oracle@RAC2 11.2]$ pwd
/opt/app/oracle/product/11.2
[oracle@RAC2 11.2]$ du db_1/
4095784 db_1/

Start the Installer as follows:
[oracle@RAC2 11.2]$ $ORACLE_HOME/oui/bin/runInstaller
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-03-31_10-37-33AM. Please wait …[oracle@RAC2 11.2]$ Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home de-installation will be performed are:RAC1,RAC2
Oracle Home selected for de-install is: /opt/app/oracle/product/11.2/db_1
Inventory Location where the Oracle home registered is: /opt/app/oracle/oraInventory
Skipping Windows and .NET products configuration check
Following RAC listener(s) will be de-configured: LISTENER
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
RAC1 : Oracle Home exists with CCR directory, but CCR is not configured
RAC2 : Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y – yes, n – no)? [n]:

……………………………………….  You will see lots of messages

####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Skipping Windows and .NET products configuration clean
Successfully detached Oracle home ‘/opt/app/oracle/product/11.2/db_1’ from the central inventory on the local node.
Successfully deleted directory ‘/opt/app/oracle/product/11.2/db_1’ on the local node.
Successfully detached Oracle home ‘/opt/app/oracle/product/11.2/db_1’ from the central inventory on the remote nodes ‘RAC2’.
Successfully deleted directory ‘/opt/app/oracle/product/11.2/db_1’ on the remote nodes ‘RAC2’.
Oracle Universal Installer cleanup completed with errors.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############

Let's go to $ORACLE_HOME and see whether any files remain.

[oracle@RAC1 app]$ cd $ORACLE_HOME
-bash: cd: /opt/app/oracle/product/11.2/db_1: No such file or directory
[oracle@RAC2 product]$ pwd
/opt/app/oracle/product
[oracle@RAC2 product]$ du 11.2/
4       11.2/
(clearly no files available here)

10g RAC Install under RHEL/OEL 4.5

Posted by Sagar Patil

1. Objectives

2. System Configuration
2.1 Machine Configuration
2.2 External/Shared Storage
2.3 Kernel Parameters

3. Oracle Software Configuration
3.1 Directory Structure
3.2 Database Layout
3.3 Redo Logs
3.4 Controlfiles

4. Oracle Pre-Installation Tasks
4.1 Installing Redhat
4.2 Network Configuration
4.3 Copy Oracle 10.2.0.1 software onto server
4.4 Check installed packages
4.5 validate script
4.6 Download ASM packages
4.7 Download OCFS packages
4.8 Creating Required Operating System Groups and Users
4.9 Oracle required directory creation
4.10 Verifying That the User nobody Exists
4.11 Configuring SSH on Cluster Member Nodes for oracle
4.12 Configuring SSH on Cluster Member Nodes for root
4.13 VNC setup
4.14 Kernel parameters
4.15 Verifying Hangcheck-timer Module on Kernel 2.6
4.16 Oracle user limits
4.17 Installing the cvuqdisk Package for Linux
4.18 Disk Partitioning
4.19 Checking the Network Setup with CVU
4.20 Checking the Hardware and Operating System Setup with CVU
4.21 Checking the Operating System Requirements with CVU
4.22 Verifying Shared Storage
4.23 Verifying the Clusterware Requirements with CVU
4.24 ASM package install
4.25 OCFS package install
4.26 Disable SELinux
4.27 OCFS2 Configuration
4.28 OCFS2 File system format
4.29 OCFS2 File system mount

5. Installation
5.1 CRS install
5.2 ASM Install
5.3 Install Database Software
5.4 Create RAC Database

6. Scripts and profile files
6.1 .bash_profile rac01
6.2 .bash_profile rac02

7. RAC Infrastructure Testing
7.1 RAC Voting Disk Test
7.2 RAC Cluster Registry Test
7.3 RAC ASM Tests
7.4 RAC Interconnect Test
7.5 Loss of Oracle Config File

Appendix
1. OCR/Voting disk volumes inaccessible by rac02
2. RAC cluster went down on PUBLIC network test


RAC Build on Solaris: Fifth Phase

Posted by Sagar Patil

Step-by-step instructions on how to remove the temp nodes from the RAC cluster, and how to verify their removal.

REMOVAL OF CLUSTERING AFTER FAILOVER

1. Shut down the instances PROD1 and PROD2, and then do the following.

2. Remove all the DEVDB entries (for tempracsrv3 and tempracsrv4) from the tnsnames.ora on both servers, i.e. prodracsrv1 and prodracsrv2.

3. Remove the following entries from the init.ora on prodracsrv1 and prodracsrv2 (a sketch for clearing them from an spfile follows the list):

*.log_archive_config='dg_config=(PROD,DEVDB)'
*.log_archive_dest_2='service=DEVDB valid_for=(online_logfiles,primary_role) db_unique_name=DEVDB'
*.standby_file_management=auto
*.fal_server='DEVDB'
*.fal_client='PROD'
*.service_names='PROD'
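
If these parameters live in an spfile rather than a plain init.ora, a minimal sketch of clearing them with SQL*Plus (assumes an spfile is in use; the remaining parameters follow the same pattern):

SQL> ALTER SYSTEM RESET log_archive_dest_2 SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM RESET fal_server SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM RESET fal_client SCOPE=SPFILE SID='*';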

4. After this, your PROD database is ready after the failover.

RAC Build on Solaris : Fourth Phase

Posted by Sagar Patil

Step-by-step instructions on how to fail the RAC database over from the temp nodes to the prod nodes, how to verify the failover, and how to test RAC database connectivity after the failover.

FAILOVER

Performing a failover in a Data Guard configuration converts the standby database into the production database. The following sections describe this process.

Manual Failover

Manual failover is performed by the administrator directly through the Enterprise Manager graphical user interface, or the Data Guard broker command-line interface (DGMGRL), or by issuing SQL*Plus statements. The sections below describe the relevant SQL*Plus commands.

Simulation of Failover:

Shut down both instances devdb1 and devdb2 (on tempracsrv3 and tempracsrv4) by connecting "/ as sysdba" from the command line and issuing the following command:

SQL> SHUTDOWN ABORT

Manual Failover to a Physical Standby Database (in PROD_PRODRACSRV1)

Use the following commands to perform a manual failover of a physical standby database:

1. Initiate the failover by issuing the following on the target standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH FORCE;

Note: Include the FORCE keyword to ensure that the RFS processes on the standby database fail over without waiting for the network connections to time out through normal TCP timeout processing before shutting down.

2. Convert the physical standby database to the production role:

ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

3. If the standby database has never been opened read-only since the last time it was started, open the new production database by issuing the following statement:

ALTER DATABASE OPEN;

If the physical standby database has been opened in read-only mode since the last time it was started, shut down the target standby database and restart it:

SQL> SHUTDOWN IMMEDIATE;

SQL> STARTUP;

Note: In rare circumstances, administrators may wish to avoid waiting for the standby database to complete applying redo in the current standby redo log file before performing the failover. (note: use of Data Guard real-time apply will avoid this delay by keeping apply up to date on the standby database). If so desired, administrators may issue the ALTER DATABASE ACTIVATE STANDBY DATABASE statement to perform an immediate failover.

This statement converts the standby database to the production database, creates a new resetlogs branch, and opens the database. However, because this statement will cause any un-applied redo in the standby redo log to be lost, Oracle recommends you only use the failover procedure described in the above steps to perform a failover.
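
After the failover completes, a quick check (my addition; DATABASE_ROLE and OPEN_MODE are standard V$DATABASE columns) confirms that the old standby now runs in the primary role:

SQL> SELECT DATABASE_ROLE, OPEN_MODE FROM V$DATABASE;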

RAC Build on Solaris : Third Phase

Posted by Sagar Patil

Oracle 10g R2 RAC Installation for the PROD Nodes:
Step-by-step instructions for installing Oracle 10g R2 RAC. The procedures guide you through installing the two nodes (prodracsrv1 and prodracsrv2) and adding them to the existing RAC cluster (configuring failover).

10g RAC Installation (Part-II Clusterware & Database Installation)

1. Install Oracle Clusterware

Mount the Clusterware DVD on prodracsrv1 and, as the oracle user, execute runInstaller.

1. Welcome: Click on Next.

2. Specify Inventory directory and credentials:

o Enter the full path of the inventory directory:

/u01/app/oracle/oraInventory.

o Specify Operating System group name: oinstall.

3. Specify Home Details:

o Name: OraCrs10g_home

o /u01/app/oracle/product/10.2.0/crs_1

4. Product-Specific Prerequisite Checks:

o Ignore the warning on physical memory requirement.

5. Specify Cluster Configuration: Click on Add.

o Public Node Name: prodracsrv2.mycorpdomain.com

o Private Node Name: prodracsrv2-priv.mycorpdomain.com

o Virtual Host Name: prodracsrv2-vip.mycorpdomain.com

6. Specify Network Interface Usage:

o Interface Name: eth0

o Subnet: 192.168.2.0

o Interface Type: Public

o Interface Name: eth1

o Subnet: 10.10.10.0

o Interface Type: Private

7. Specify Oracle Cluster Registry (OCR) Location: Select External Redundancy.

For simplicity, here you will not mirror the OCR. In a production environment,

you may want to consider multiplexing the OCR for higher redundancy.

o Specify OCR Location: /u01/ocr_config

8. Specify Voting Disk Location: Select External Redundancy.

Similarly, for simplicity, we have chosen not to mirror the Voting Disk.

o Voting Disk Location: /u01/votingdisk

9. Summary: Click on Install.

10. Execute Configuration scripts: Execute the scripts below as the root user

sequentially, one at a time. Do not proceed to the next script until the current

script completes.

o Execute /u01/app/oracle/oraInventory/orainstRoot.sh on prodracsrv1.

o Execute /u01/app/oracle/oraInventory/orainstRoot.sh on prodracsrv2.

o Execute /u01/app/oracle/product/10.2.0/crs_1/root.sh on prodracsrv1.

o Execute /u01/app/oracle/product/10.2.0/crs_1/root.sh on prodracsrv2.

The root.sh script on prodracsrv2 invoked the VIPCA automatically but it failed with the error "The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs." As you are using a non-routable IP address (192.168.x.x) for the public interface, the Oracle Cluster Verification Utility (CVU) could not find a suitable public interface. A workaround is to run VIPCA manually.

11. As the root user, manually invoke VIPCA on the second node.

# /u01/app/oracle/product/10.2.0/crs_1/bin/vipca

Welcome: Click on Next.

Network Interfaces: Select eth0.

Virtual IPs for cluster nodes:

o Node name: prodracsrv1

o IP Alias Name: prodracsrv1-vip

o IP address: 192.168.2.31

o Subnet Mask: 255.255.255.0

o Node name: prodracsrv2

o IP Alias Name: prodracsrv2-vip

o IP address: 192.168.2.32

o Subnet Mask: 255.255.255.0

Summary: Click on Finish.

Configuration Assistant Progress Dialog: After the configuration has completed, click on OK.

Configuration Results: Click on Exit.

Return to the Execute Configuration scripts screen on prodracsrv1 and click on OK.

Configuration Assistants: Verify that all checks are successful. The OUI does a Clusterware post-installation check at the end. If the CVU fails, correct the problem and re-run the following command as the oracle user:

prodracsrv1-> /u01/app/oracle/product/10.2.0/crs_1/bin/cluvfy stage -post crsinst -n prodracsrv1,prodracsrv2

Performing post-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "prodracsrv1".

Checking user equivalence...
User equivalence check passed for user "oracle".

Checking Cluster manager integrity...

Checking CSS daemon...
Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...

Cluster integrity check passed

Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...
OCR of correct Version "2" exists.

Checking data integrity of OCR...
Data integrity check for OCR passed.

OCR integrity check passed.

Checking CRS integrity...

Checking daemon liveness...
Liveness check passed for "CRS daemon".

Checking daemon liveness...
Liveness check passed for "CSS daemon".

Checking daemon liveness...
Liveness check passed for "EVM daemon".

Checking CRS health...
CRS health check passed.

CRS integrity check passed.

Checking node application existence...

Checking existence of VIP node application (required)
Check passed.

Checking existence of ONS node application (optional)
Check passed.

Checking existence of GSD node application (optional)
Check passed.

Post-check for cluster services setup was successful.

12. End of Installation: Click on Exit.

2. Install Oracle Database 10g Release 2

After mounting the Database 10g R2 DVD, run the installer:

prodracsrv1-> ./runInstaller

1. Welcome: Click on Next.

2. Select Installation Type:

o Select Enterprise Edition.

3. Specify Home Details:

o Name: OraDb10g_home1

o Path: /u01/app/oracle/product/10.2.0/db_1

4. Specify Hardware Cluster Installation Mode: Select the "Cluster Install" option and make sure both RAC nodes are selected, then click the "Next" button.

5. Product-Specific Prerequisite Checks: Wait while the prerequisite checks are done. If you have any failures, correct them and retry the tests before clicking the "Next" button.

6. Select Configuration Option: Select the "Install database Software only" option, then click the "Next" button.

7. On the "Summary" screen, click the "Install" button to continue.

8. Wait while the database software installs.

9. Once the installation is complete, wait while the configuration assistants run.

10. Execute the "root.sh" scripts on both nodes, as instructed on the "Execute Configuration scripts" screen, then click the "OK" button.

11. When the installation is complete, click the "Exit" button to leave the installer.

Adding to the cluster

RAC Physical Standby for a RAC Primary


In this example, the database unique name of the primary RAC database is DEVDB. The instance names of the two RAC instances are DEVDB1 (on node DEVDB_tempracsrv3) and DEVDB2 (on node DEVDB_tempracsrv4). The database unique name of the RAC standby database is PROD, and the two standby instance names are PROD1 (on node PROD_prodracsrv1) and PROD2 (on node PROD_prodracsrv2).

This document includes the following tasks:

• Task 1: Gather Files and Perform Back Up

• Task 2: Configure Oracle Net on the Standby

• Task 3: Create the Standby Instances and Database

• Task 4: Configure the Primary Database for Data Guard

• Task 5: Verify Data Guard Configuration

This document assumes that the following conditions are met:

• The primary RAC database is in archivelog mode


• The primary and standby databases are using a flash recovery area.

• The standby RAC hosts (prodracsrv1, prodracsrv2) have an existing Oracle software installation.

• Oracle Managed Files (OMF) is used for all storage.

TASK 1: GATHER FILES AND PERFORM BACK UP

1. On tempracsrv3 node, create a staging directory. For example:

[oracle@DEVDB_tempracsrv3 oracle]$ mkdir -p /opt/oracle/stage

2. Create the same exact path on one of the standby hosts:

[oracle@PROD_prodracsrv1 oracle]$ mkdir -p /opt/oracle/stage

3. On the tempracsrv3 node, connect to the primary database and create a PFILE from the SPFILE in the staging directory. For example:

SQL> CREATE PFILE='/opt/oracle/stage/initDEVDB.ora' FROM SPFILE;

4. On the tempracsrv3 node, perform an RMAN backup of the primary database that places the backup pieces into the staging directory. For example:

[oracle@DEVDB_host1 stage]$ rman target /

RMAN> BACKUP DEVICE TYPE DISK FORMAT '/opt/oracle/stage/%U' DATABASE PLUS ARCHIVELOG;

RMAN> BACKUP DEVICE TYPE DISK FORMAT '/opt/oracle/stage/%U' CURRENT CONTROLFILE FOR STANDBY;

5. Place a copy of the listener.ora, tnsnames.ora, and sqlnet.ora files into the staging directory. For example:

[oracle@DEVDB_tempracsrv3 oracle]$ cp $ORACLE_HOME/network/admin/*.ora /opt/oracle/stage

6. Copy the contents of the staging directory on the RAC primary node to the standby node on which the staging directory was created on in step 2. For example:

[oracle@DEVDB_host1 oracle]$ scp /opt/oracle/stage/* \

oracle@PROD_prodracsrv1:/opt/oracle/stage

From there, copy all the datafiles, the current log files, and the standby controlfile to the corresponding locations (note: the locations should be the same as on the primary).

TASK 2: CONFIGURE ORACLE NET SERVICES ON THE STANDBY

1. Copy the listener.ora, tnsnames.ora, and sqlnet.ora files from the staging directory on the standby host to the $ORACLE_HOME/network/admin directory on all standby hosts.

2. Modify the listener.ora file on each standby host to contain the VIP address of that host; a hedged example follows.
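
As an illustration only (the listener name, host and port follow this example's conventions; adjust to your environment), the listener.ora on prodracsrv1 would contain an entry along these lines, listening on that host's VIP:

LISTENER_PROD_prodracsrv1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = PROD_prodracsrv1vip)(PORT = 1521))
  )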


3. Modify the tnsnames.ora file on each node, including the primary RAC nodes and standby RAC nodes, to contain all primary and standby net service names. You should also modify the Oracle Net aliases that are used for the local_listener and remote_listener parameters to point to the listener on each standby host. In this example, each tnsnames.ora file should contain all of the net service names in the following table:

Example entries in the tnsnames.ora files:

Primary net service name:

DEVDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = DEVDB_tempracsrv3vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = DEVDB_tempracsrv4vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DEVDB)
    )
  )

Standby net service name:

PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = PROD_prodracsrv1vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = PROD_prodracsrv2vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PROD)
    )
  )

4. Start the standby listeners on all standby hosts.

TASK 3: CREATE THE STANDBY INSTANCES AND DATABASE

1. To enable secure transmission of redo data, make sure the primary and standby databases use a password file, and make sure the password for the SYS user is identical on every system. For example:

$ cd $ORACLE_HOME/dbs

$ orapwd file=orapwPROD password=oracle
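
The password file must exist under $ORACLE_HOME/dbs on every standby host; a sketch of copying it to the second standby node (the target path matches the Oracle home used elsewhere in this example):

[oracle@PROD_prodracsrv1 dbs]$ scp orapwPROD oracle@PROD_prodracsrv2:/u01/app/oracle/product/10.2.0/db_1/dbs/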

2. Copy and rename the primary database PFILE from the staging area to the $ORACLE_HOME/dbs directory on each standby host. For example:

[oracle@PROD_host1 stage]$ cp initDEVDB1.ora $ORACLE_HOME/dbs/initPROD1.ora

3. Modify the standby initialization parameter file copied from the primary node to include the Data Guard parameters, as illustrated below:

Initialization Parameter Modifications

RAC parameters, before (primary) and after (standby):

Before:
*.cluster_database=true
*.db_unique_name=DEVDB
DEVDB1.instance_name=DEVDB1
DEVDB2.instance_name=DEVDB2
DEVDB1.instance_number=1
DEVDB2.instance_number=2
DEVDB1.thread=1
DEVDB2.thread=2
DEVDB1.undo_tablespace=UNDOTBS1
DEVDB2.undo_tablespace=UNDOTBS2
*.remote_listener=LISTENERS_DEVDB
DEVDB1.LOCAL_LISTENER=LISTENER_DEVDB_tempracsrv3
DEVDB2.LOCAL_LISTENER=LISTENER_DEVDB_tempracsrv4

After:
*.cluster_database=true
*.db_unique_name=PROD
PROD1.instance_name=PROD1
PROD2.instance_name=PROD2
PROD1.instance_number=1
PROD2.instance_number=2
PROD1.thread=1
PROD2.thread=2
PROD1.undo_tablespace=UNDOTBS1
PROD2.undo_tablespace=UNDOTBS2
*.remote_listener=LISTENERS_PROD
PROD1.LOCAL_LISTENER=LISTENER_PROD_prodracsrv1
PROD2.LOCAL_LISTENER=LISTENER_PROD_prodracsrv2

Data Guard parameters, added on the standby:
*.log_archive_config='dg_config=(PROD,DEVDB)'
*.log_archive_dest_2='service=DEVDB valid_for=(online_logfiles,primary_role) db_unique_name=DEVDB'
*.standby_file_management=auto
*.fal_server='DEVDB'
*.fal_client='PROD'
*.service_names='PROD'

Other parameters, before (primary) and after (standby):

Before:
*.background_dump_dest=/u01/app/oracle/admin/DEVDB/bdump
*.core_dump_dest=/u01/app/oracle/admin/DEVDB/cdump
*.user_dump_dest=/u01/oracle/admin/DEVDB/udump
*.audit_file_dest=/u01/app/oracle/admin/DEVDB/adump
*.db_recovery_dest='/u01/app/oracle/flash_recoveryarea'
*.log_archive_dest_1='LOCATION=/u01/app/oracle/arch'
*.dispatchers=DEVDBXDB

After:
*.background_dump_dest=/u01/oracle/admin/PROD/bdump
*.core_dump_dest=/u01/oracle/admin/PROD/cdump
*.user_dump_dest=/opt/oracle/admin/PROD/udump
*.audit_file_dest=/u01/oracle/admin/PROD/adump
*.db_recovery_dest='/u01/app/oracle/flash_recoveryarea'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.dispatchers=PRODXDB

5. Connect to the standby database on one standby host, with the standby in the IDLE state, and create an SPFILE in the standby DATA disk group:

SQL> CREATE SPFILE='+DATA/PROD/spfilePROD.ora' FROM PFILE='?/dbs/initPROD.ora';

6. In the $ORACLE_HOME/dbs directory on each standby host, create a PFILE named init<oracle_sid>.ora that contains a pointer to the SPFILE. For example:

[oracle@PROD_prodracsrv1 oracle]$ cd $ORACLE_HOME/dbs

[oracle@PROD_prodracsrv1 dbs]$ echo "SPFILE='+DATA/PROD/spfilePROD.ora'" > initPROD1.ora

7. Create the dump directories on all standby hosts as referenced in the standby initialization parameter file. For example:

[oracle@PROD_prodracsrv1 oracle]$ mkdir -p $ORACLE_BASE/admin/PROD/bdump

[oracle@PROD_prodracsrv1 oracle]$ mkdir -p $ORACLE_BASE/admin/PROD/cdump

[oracle@PROD_prodracsrv1 oracle]$ mkdir -p $ORACLE_BASE/admin/PROD/udump

[oracle@PROD_prodracsrv1 oracle]$ mkdir -p $ORACLE_BASE/admin/PROD/adump

8. After setting up the appropriate environment variables on each standby host, such as ORACLE_SID, ORACLE_HOME, and PATH, start the standby database instance on the standby host that has the staging directory, without mounting the control file.

SQL> STARTUP NOMOUNT

9. From the standby host where the standby instance was just started, duplicate the primary database as a standby into the ASM disk group. For example:

$ rman target sys/oracle@DEVDB auxiliary /

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY;

10. Connect to the standby database, and create the standby redo logs to support the standby role. The standby redo logs must be the same size as the primary database online logs. The recommended number of standby redo logs is:

(maximum # of logfiles + 1) * maximum # of threads

This example uses two online log files for each thread. Thus, the number of standby redo logs should be (2 + 1) * 2 = 6. That is, one more standby redo log file for each thread.

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
  GROUP 5 SIZE 10M,
  GROUP 6 SIZE 10M,
  GROUP 7 SIZE 10M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
  GROUP 8 SIZE 10M,
  GROUP 9 SIZE 10M,
  GROUP 10 SIZE 10M;

These statements create two standby log members for each group, and each member is 10MB in size. One member is created in the directory specified by the DB_CREATE_FILE_DEST initialization parameter, and the other member is created in the directory specified by DB_RECOVERY_FILE_DEST initialization parameter. Because this example assumes that there are two redo log groups in two threads, the next group is group five.

You can check the number and group numbers of the redo logs by querying the V$LOG view:

SQL> SELECT * FROM V$LOG;

You can check the results of the previous statements by querying the V$STANDBY_LOG view:

SQL> SELECT * FROM V$STANDBY_LOG;

You can also see the members created by querying the V$LOGFILE view:

SQL> SELECT * FROM V$LOGFILE;


11. On only one standby host (and this is your designated Redo Apply instance), start managed recovery and real-time apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

12. On either node of the standby cluster, register the standby database and the database instances with the Oracle Cluster Registry (OCR) using the Server Control (SRVCTL) utility. For example:

$ srvctl add database -d PROD -o /u01/app/oracle/product/10.2.0/db_1

$ srvctl add instance -d PROD -i PROD1 -n PROD_prodracsrv1

$ srvctl add instance -d PROD -i PROD2 -n PROD_prodracsrv2

The following are descriptions of the options in these commands:

The -d option specifies the database unique name (DB_UNIQUE_NAME) of the database.

The -i option specifies the database instance name.

The -n option specifies the node on which the instance is running.

The -o option specifies the Oracle home of the database.

TASK 4: CONFIGURE THE PRIMARY DATABASE FOR DATA GUARD

1. Configure the primary database initialization parameters to support both the primary and standby roles.

*.log_archive_config='dg_config=(PROD,DEVDB)'
*.log_archive_dest_2='service=PROD valid_for=(online_logfiles,primary_role) db_unique_name=PROD'
*.standby_file_management=auto
*.fal_server='PROD'
*.fal_client='DEVDB'
*.service_names=DEVDB

Note that all the parameters listed above can be dynamically modified, with the exception of the standby role parameters log_file_name_convert and db_file_name_convert. It is recommended to set the parameters with scope=spfile so that they can be put into effect upon the next role change; a sketch follows.
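
A minimal sketch of applying two of these on the primary with SQL*Plus (the remaining parameters follow the same pattern; use SCOPE=SPFILE for any that cannot be changed dynamically):

SQL> ALTER SYSTEM SET log_archive_config='dg_config=(PROD,DEVDB)' SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET log_archive_dest_2='service=PROD valid_for=(online_logfiles,primary_role) db_unique_name=PROD' SCOPE=BOTH SID='*';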

2. Create standby redo logs on the primary database to support the standby role. The standby redo logs are the same size as the primary database online logs. The recommended number of standby redo logs is one more than the number of online redo logs for each thread. Because this example has two online redo logs for each thread, three standby redo logs are required for each thread.

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
  GROUP 5 SIZE 10M,
  GROUP 6 SIZE 10M,
  GROUP 7 SIZE 10M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
  GROUP 8 SIZE 10M,
  GROUP 9 SIZE 10M,
  GROUP 10 SIZE 10M;

These statements create two standby log members for each group, and each member is 10MB in size. One member is created in the directory specified by the DB_CREATE_FILE_DEST initialization parameter, and the other member is created in the directory specified by DB_RECOVERY_FILE_DEST initialization parameter. Because this example assumes that there are two redo log groups in two threads, the next group is group five.

You can check the number and group numbers of the redo logs by querying the V$LOG view:

SQL> SELECT * FROM V$LOG;

You can check the results of the previous statements by querying the V$STANDBY_LOG view:

SQL> SELECT * FROM V$STANDBY_LOG;

You can also see the members created by querying V$LOGFILE view:

SQL> SELECT * FROM V$LOGFILE;

TASK 5: VERIFY DATA GUARD CONFIGURATION

1. On the standby database, query the V$ARCHIVED_LOG view to identify existing files in the archived redo log. For example:

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

2. On the primary database, issue the following SQL statement to force a log switch and archive the current online redo log file group:

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

3. On the standby database, query the V$ARCHIVED_LOG view to verify that the redo data was received and archived on the standby database:

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
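
To confirm that the redo is also being applied, not just received, check the APPLIED column of the same view on the standby (my addition; APPLIED is a standard V$ARCHIVED_LOG column):

SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;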

We have now set up a failover (standby) RAC database for the primary on prodracsrv1 and prodracsrv2.

RAC Build on Solaris : Second Phase

Posted by Sagar Patil

Oracle 10g R2 RAC Installation for the Temp Nodes:
Step-by-step instructions for installing Oracle 10g R2 RAC on the two nodes (tempracsrv3 and tempracsrv4). This phase includes documentation on how to verify that the installation and configuration are correct, plus step-by-step instructions on creating the RAC database, importing schemas from the Oracle 9i database into the new database, and testing database and node connectivity.

10g RAC Installation (Part-II Clusterware & Database Installation)

1. Install Oracle Clusterware

Mount the Clusterware DVD on tempracsrv3 and, as the oracle user, execute runInstaller.

1. Welcome: Click on Next.

2. Specify Inventory directory and credentials:

o Enter the full path of the inventory directory:

/u01/app/oracle/oraInventory.

o Specify Operating System group name: oinstall.

3. Specify Home Details:

o Name: OraCrs10g_home

o /u01/app/oracle/product/10.2.0/crs_1

4. Product-Specific Prerequisite Checks:

o Ignore the warning on physical memory requirement.

5. Specify Cluster Configuration: Click on Add.

o Public Node Name: tempracsrv4.mycorpdomain.com

o Private Node Name: tempracsrv4-priv.mycorpdomain.com

o Virtual Host Name: tempracsrv4-vip.mycorpdomain.com

6. Specify Network Interface Usage:

o Interface Name: eth0

o Subnet: 192.168.2.0

o Interface Type: Public

o Interface Name: eth1

o Subnet: 10.10.10.0

o Interface Type: Private

7. Specify Oracle Cluster Registry (OCR) Location: Select External Redundancy.

For simplicity, here you will not mirror the OCR. In a production environment,

you may want to consider multiplexing the OCR for higher redundancy.

o Specify OCR Location: /u01/ocr_config

8. Specify Voting Disk Location: Select External Redundancy.

Similarly, for simplicity, we have chosen not to mirror the Voting Disk.

o Voting Disk Location: /u01/votingdisk

9. Summary: Click on Install.

10. Execute Configuration scripts: Execute the scripts below as the root user

sequentially, one at a time. Do not proceed to the next script until the current

script completes.

o Execute /u01/app/oracle/oraInventory/orainstRoot.sh on tempracsrv3.

o Execute /u01/app/oracle/oraInventory/orainstRoot.sh on tempracsrv4.

o Execute /u01/app/oracle/product/10.2.0/crs_1/root.sh on tempracsrv3.

o Execute /u01/app/oracle/product/10.2.0/crs_1/root.sh on tempracsrv4.

The root.sh script on tempracsrv4 invoked the VIPCA automatically but it failed with the error "The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs." As you are using a non-routable IP address (192.168.x.x) for the public interface, the Oracle Cluster Verification Utility (CVU) could not find a suitable public interface. A workaround is to run VIPCA manually.

11. As the root user, manually invoke VIPCA on the second node.

# /u01/app/oracle/product/10.2.0/crs_1/bin/vipca

Welcome: Click on Next.

Network Interfaces: Select eth0.

Virtual IPs for cluster nodes:

o Node name: tempracsrv3

o IP Alias Name: tempracsrv3-vip

o IP address: 192.168.2.31

o Subnet Mask: 255.255.255.0

o Node name: tempracsrv4

o IP Alias Name: tempracsrv4-vip

o IP address: 192.168.2.32

o Subnet Mask: 255.255.255.0

Summary: Click on Finish.

Configuration Assistant Progress Dialog: After the configuration has completed, click on OK.

Configuration Results: Click on Exit.

Return to the Execute Configuration scripts screen on tempracsrv3 and click on OK.

Configuration Assistants: Verify that all checks are successful. The OUI does a Clusterware post-installation check at the end. If the CVU fails, correct the problem and re-run the following command as the oracle user:

tempracsrv3-> /u01/app/oracle/product/10.2.0/crs_1/bin/cluvfy stage -post crsinst -n tempracsrv3,tempracsrv4

Performing post-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "tempracsrv3".

Checking user equivalence...
User equivalence check passed for user "oracle".

Checking Cluster manager integrity...

Checking CSS daemon...
Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...

Cluster integrity check passed

Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...
OCR of correct Version "2" exists.

Checking data integrity of OCR...
Data integrity check for OCR passed.

OCR integrity check passed.

Checking CRS integrity...

Checking daemon liveness...
Liveness check passed for "CRS daemon".

Checking daemon liveness...
Liveness check passed for "CSS daemon".

Checking daemon liveness...
Liveness check passed for "EVM daemon".

Checking CRS health...
CRS health check passed.

CRS integrity check passed.

Checking node application existence...

Checking existence of VIP node application (required)
Check passed.

Checking existence of ONS node application (optional)
Check passed.

Checking existence of GSD node application (optional)
Check passed.

Post-check for cluster services setup was successful.

12. End of Installation: Click on Exit.

2. Install Oracle Database 10g Release 2

After mounting the Database 10g R2 DVD, run the installer:

tempracsrv3-> ./runInstaller

1. Welcome: Click on Next.

2. Select Installation Type:

o Select Enterprise Edition.

3. Specify Home Details:

o Name: OraDb10g_home1

o Path: /u01/app/oracle/product/10.2.0/db_1

4. Specify Hardware Cluster Installation Mode: Select the "Cluster Install" option and make sure both RAC nodes are selected, then click the "Next" button.

5. Product-Specific Prerequisite Checks: Wait while the prerequisite checks are done. If you have any failures, correct them and retry the tests before clicking the "Next" button.

6. Select Configuration Option: Select the "Install database Software only" option, then click the "Next" button.

7. On the "Summary" screen, click the "Install" button to continue.

8. Wait while the database software installs.

9. Once the installation is complete, wait while the configuration assistants run.

10. Execute the "root.sh" scripts on both nodes, as instructed on the "Execute Configuration scripts" screen, then click the "OK" button.

11. When the installation is complete, click the "Exit" button to leave the installer.

Create a Database using the DBCA

Log in to tempracsrv3 as the oracle user and start the Database Configuration Assistant.

Run the command dbca

On the “Welcome” screen, select the “Oracle Real Application Clusters database” option and click the “Next” button.

Select the “Create a Database” option and click the “Next” button.

Highlight both RAC nodes (tempracsrv3, tempracsrv4) and click the "Next" button.

Select the “Custom Database” option and click the “Next” button.

Enter the values “DEVDB.WORLD” and “DEVDB” for the Global Database Name and SID Prefix respectively, and then click the “Next” button.

Accept the management options by clicking the “Next” button. If you are attempting the installation on a server with limited memory, you may prefer not to configure Enterprise Manager at this time.

Enter database passwords then click the “Next” button.

Select the “Cluster File System” option, then click the “Next” button.

Select the “Use Oracle-Managed Files” option and enter “/u01/oradata/” as the database location, then click the “Next” button.

Check the “Specify Flash Recovery Area” option and accept the default location by clicking the “Next” button.

(ORACLE_BASE)/flash_recovery_area

Uncheck all but the “Enterprise Manager Repository” option, then click the “Standard Database Components…” button.

Uncheck all but the “Oracle JVM” option, then click the “OK” button, followed by the “Next” button on the previous screen. If you are attempting the installation on a server with limited memory, you may prefer not to install the JVM at this time.

Accept the current database services configuration by clicking the “Next” button.

Select the “Custom” memory management option and accept the default settings by clicking the “Next” button.

Accept the database storage settings by clicking the “Next” button.

Accept the database creation options by clicking the “Finish” button.

Accept the summary information by clicking the “OK” button.

Wait while the database is created.

Once the database creation is complete, you are presented with a summary screen. Make a note of the information on the screen and click the "Exit" button.

VERIFYING STEPS:-

$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Sep 16 12:27:11 2006

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> CONN sys/password@rac1 AS SYSDBA
Connected.
SQL> SELECT instance_name, host_name FROM v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
devdb1             tempracsrv3
SQL> CONN sys/password@rac2 AS SYSDBA
Connected.
SQL> SELECT instance_name, host_name FROM v$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------------------------------
devdb2             tempracsrv4

Importing data

Log in to Enterprise Manager for the RAC database at http://tempracsrv3:5500/em as sys (connecting as SYSDBA).

Create the required tablespaces to match those in the source database.

Then import the dump files produced by the Oracle 9i export, one by one, into the same schemas from which the data was exported.

%imp
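
For example, a minimal import parameter file is sketched below; the dump file name, schema names, and log file are illustrative assumptions and should be adjusted to match the export taken earlier:

$ vi import.par
USERID="sys/change_on_install as sysdba"
BUFFER=1048576
FILE=dmpfil1.dmp
FROMUSER=appuser
TOUSER=appuser
IGNORE=Y
COMMIT=Y
LOG=imp_appuser.log

$ imp parfile=import.par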

RAC Build on Solaris : First Phase

Posted by Sagar Patil

Objective: The two prod nodes run Oracle 9i databases; I want to export those databases into a 4-node Oracle 10g RAC on Solaris 10.

Exporting the Database (from the Oracle 9i Database):

vi export.par

USERID="sys/change_on_install as sysdba"
BUFFER=1048576
FILESIZE=750MB
FILE=dmpfil<1-20>
FULL=Y
DIRECT=Y
LOG=exp_full.log

At a later stage, create identical tablespaces of sufficient size in the target database and then import the dump files into it.
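
To run the export with this parameter file (a minimal usage sketch, run on the 9i host):

$ exp parfile=export.par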

Setting up 4 Solaris 10 nodes (tempracsrv3, tempracsrv4) and (prodracsrv1, prodracsrv2) with NFS set up between them

For this type of configuration you should have two Ethernet cards in each server, and two switches for the private interconnect (to network the second Ethernet interface, i.e. eth1).

Install Solaris 10 from an ISO image or CD; press Enter to install in graphical mode.

Skip the media test (if any) and start the installation.
. Language Selection: <select your language preference>.
. Keyboard Configuration: <select your keyboard preference>.
. Installation Type: Custom.
. Disk Partitioning Setup: Automatically partition.
. Hostname: enter tempracsrv3.

Install the remaining nodes, namely tempracsrv4, prodrac1, and prodrac2, in the same way.

After installation, assign the IP addresses for eth0 and eth1 as follows.

For tempracsrv3:
192.168.2.131 tempracsrv3.mycorpdomain.com tempracsrv3            # eth0
192.168.2.31  tempracsrv3-vip.mycorpdomain.com tempracsrv3-vip
10.10.10.31   tempracsrv3-priv.mycorpdomain.com tempracsrv3-priv  # eth1 (leave gateway blank)

For tempracsrv4:
192.168.2.132 tempracsrv4.mycorpdomain.com tempracsrv4            # eth0
192.168.2.32  tempracsrv4-vip.mycorpdomain.com tempracsrv4-vip
10.10.10.32   tempracsrv4-priv.mycorpdomain.com tempracsrv4-priv  # eth1 (leave gateway blank)

Connect the network cable from the second Ethernet interface (eth1) on each machine to the gigabit switch. Note that this is a separate switch.

After that, verify that the public and private addresses respond to ping from both machines.

From tempracsrv3:
ping tempracsrv4
ping tempracsrv4-priv
ping tempracsrv3
ping tempracsrv3-priv

From tempracsrv4:
ping tempracsrv3
ping tempracsrv3-priv
ping tempracsrv4
ping tempracsrv4-priv

Ensure that every address responds successfully. There is no need to worry about the VIP addresses at this stage; they are configured later by VIPCA.
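
As a quick check, a small shell loop such as the following sketch can be run on each node (the host names match the /etc/hosts entries above; adjust if yours differ):

#!/bin/sh
# Ping each public and private host name once and report the result
for h in tempracsrv3 tempracsrv3-priv tempracsrv4 tempracsrv4-priv
do
    if ping "$h" 5 > /dev/null 2>&1; then
        echo "$h : OK"
    else
        echo "$h : NOT reachable"
    fi
done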

After the installation of Solaris 10 on tempracsrv3, create the directory /u01 using the following command: mkdir /u01

Create the oracle user.

As the root user, create the groups (oinstall, dba, oper):

# /usr/sbin/groupadd -g 200 oinstall
# /usr/sbin/groupadd -g 201 dba
# /usr/sbin/groupadd -g 202 oper

Create the oracle user (the Oracle software owner) and set its password:

# /usr/sbin/useradd -u 200 -g oinstall -G dba[,oper] oracle
# passwd oracle

Test:
# id -a oracle

Verify that the user nobody exists on the system. If it does not, create it:

# /usr/sbin/useradd nobody

Preventing Oracle Clusterware Installation Errors Caused by stty Commands

During an Oracle Clusterware installation, Oracle Universal Installer uses SSH (if available) to run commands and copy files to the other nodes. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause installation errors if they contain stty commands.

To avoid this problem, modify these files so that stty commands run only when the shell is attached to a terminal (or so that their output is suppressed), as in the following examples:


Bourne, Bash, or Korn shell:
if [ -t 0 ]; then
stty intr ^C
fi

C shell:
test -t 0
if ($status == 0) then
stty intr ^C
endif
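
Alternatively, a minimal sketch for the Bourne, Bash, or Korn shell that simply discards the stty output, so non-interactive SSH sessions are unaffected, is:

# Discard stty errors in non-interactive sessions
stty intr ^C 2> /dev/null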

Configuring Kernel Parameters on Solaris 10

On Solaris 10 operating systems, verify that the kernel parameters shown in the following table are set to values greater than or equal to the recommended value shown. The table also contains the resource controls that replace the /etc/system file for a specific kernel parameter. The procedure following the table describes how to verify and set the values.


On Solaris 10, use the following procedure to view the current value specified for resource controls, and to change them if necessary:

1. To view the current values of the resource control, enter the following commands:

# id -p // to verify the project id
uid=0(root) gid=0(root) projid=1 (user.root)
# prctl -n project.max-shm-memory -i project user.root
# prctl -n project.max-sem-ids -i project user.root

2. If you must change any of the current values, then:

To modify the value of max-shm-memory to 6 GB:
# prctl -n project.max-shm-memory -v 6442450944 -r -i project user.root
To modify the value of max-sem-ids to 256:
# prctl -n project.max-sem-ids -v 256 -r -i project user.root

Use the following procedure to modify the resource control project settings, so that they persist after a system restart:

Note: In Solaris 10, you are not required to make changes to the /etc/system file to implement System V IPC. Solaris 10 uses the resource control facility for its implementation. However, Oracle recommends that you set both the resource controls and the /etc/system parameters.

Operating system parameters not replaced by resource controls continue to affect performance and security on Solaris 10 systems. For further information, contact your Sun vendor. If you run into problems with resource controls, you can also set the equivalent parameters in the /etc/system file.

Parameter                  Replaced by Resource Control    Recommended Value

noexec_user_stack          NA                              1
semsys:seminfo_semmni      project.max-sem-ids             100
semsys:seminfo_semmns      NA                              1024
semsys:seminfo_semmsl      process.max-sem-nsems           256
semsys:seminfo_semvmx      NA                              32767
shmsys:shminfo_shmmax      project.max-shm-memory          4294967295
shmsys:shminfo_shmmin      NA                              1
shmsys:shminfo_shmmni      project.max-shm-ids             100
shmsys:shminfo_shmseg      NA                              10
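
For reference, a minimal /etc/system sketch that sets the values from the table above (only needed if you also choose to use /etc/system; adjust shmmax to suit your memory) looks like this:

* Oracle-related kernel parameters (values from the table above)
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10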

Note: When you use the command prctl (Resource Control) to change system parameters, you do not need to restart the system for these parameter changes to take effect. However, the changed parameters do not persist after a system restart.

Configuring Resource Controls for the oracle User's Project

1. By default, Oracle instances run as the oracle user of the dba group. A project with the name group.dba is created to serve as the default project for the oracle user. Run the id command to verify the default project for the oracle user:

# su - oracle
$ id -p
uid=100(oracle) gid=100(dba) projid=100(group.dba)
$ exit

2. To set the maximum shared memory size to 2 GB, run the projmod command:

# projmod -sK "project.max-shm-memory=(privileged,2G,deny)" group.dba
Alternatively, add the resource control value project.max-shm-memory=(privileged,2147483648,deny) to the last field of the project entries for the Oracle project.

3. After these steps are complete, check the values for the /etc/project file using the following command:

# cat /etc/project
The output should be similar to the following:
system:0::::
user.root:1::::
noproject:2::::
default:3::::
group.staff:10::::
group.dba:100:Oracle default project:::project.max-shm-memory=(privileged,2147483648,deny)

4. To verify that the resource control is active, check process ownership, and run the commands id and prctl, as in the following example:

# su - oracle
$ id -p
uid=100(oracle) gid=100(dba) projid=100(group.dba)

$ prctl -n project.max-shm-memory -i process $$
process: 5754: -bash
NAME                    PRIVILEGE    VALUE    FLAG   ACTION   RECIPIENT
project.max-shm-memory
                        privileged   2.00GB      -   deny

Checking UDP Parameter Settings

The User Datagram Protocol (UDP) parameter settings define the amount of send and receive buffer space for sending and receiving datagrams over an IP network. These settings affect cluster interconnect transmissions. If the buffers set by these parameters are too small, incoming UDP datagrams can be dropped due to insufficient space, which forces send-side retransmission. This can result in poor cluster performance.

On Solaris, the UDP parameters are udp_recv_hiwat and udp_xmit_hiwat. On Solaris 10, the default value for these parameters is 57344 bytes. Oracle recommends that you set these parameters to at least 65536 bytes.

Note: For additional information, refer to the Solaris Tunable Parameters Reference Manual.

To check the current settings for udp_recv_hiwat and udp_xmit_hiwat, enter the following commands:

# ndd /dev/udp udp_xmit_hiwat
# ndd /dev/udp udp_recv_hiwat

To set the values of these parameters to 65536 bytes in current memory, enter the following commands:

# ndd -set /dev/udp udp_xmit_hiwat 65536
# ndd -set /dev/udp udp_recv_hiwat 65536

To set the values of these parameters to 65536 bytes on system restarts, open the /etc/system file, and enter the following lines:

set udp:udp_xmit_hiwat=65536
set udp:udp_recv_hiwat=65536

Checking the Hardware and Operating System Setup with CVU

/dev/dvdrom/crs/Disk1/cluvfy/runcluvfy.sh stage -post hwos -n node1,node2

Checking NFS Buffer Size Parameters

If you use NFS mounts for Oracle files, then you must set the values for the NFS buffer size parameters rsize and wsize to at least 16384. Oracle recommends that you use the value 32768. For example, if you decide to use rsize and wsize buffer settings with the value 32768, then update the /etc/vfstab file on each node with an entry similar to the following:

nfs_server:/vol/DATA/oradata - /home/oracle/netapp nfs - yes rw,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3

If you use NFS mounts, then Oracle recommends that you use the option forcedirectio to force direct I/O for better performance. However, if you add forcedirectio to the mount option, then the same mount point cannot be used for Oracle software binaries, executables, shared libraries, and objects. You can only use the forcedirectio option for Oracle data files, the OCR, and voting disks.

For these mount points, enter the following line:

nfs_server:/vol/DATA/oradata - /home/oracle/netapp nfs - yes rw,hard,nointr,rsize=32768,wsize=32768,tcp,noac,forcedirectio,vers=3

Create the oracle user environment file.

/export/home/oracle/.profile
export PS1="`/bin/hostname -s`-> "
export EDITOR=vi
export ORACLE_SID=devdb1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022

Create the filesystem directory structure. As the oracle user, execute

tempracsrv3-> mkdir -p $ORACLE_BASE/admin
tempracsrv3-> mkdir -p $ORACLE_HOME
tempracsrv3-> mkdir -p $ORA_CRS_HOME
tempracsrv3-> mkdir -p /u01/oradata/devdb

Increase the shell limits for the oracle user so that it has sufficient resources.
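
A minimal sketch of one way to do this on Solaris 10 is shown below, using the same resource control facility as above; the 65536 file descriptor value is an illustrative assumption:

# As root: raise the per-process file descriptor limit for the oracle user's project
# projmod -sK "process.max-file-descriptor=(privileged,65536,deny)" group.dba

# As oracle: verify the new limit
$ prctl -n process.max-file-descriptor -i process $$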

Modify the /etc/hosts file.

# more /etc/hosts
127.0.0.1 localhost
192.168.2.131 tempracsrv3.mycorpdomain.com tempracsrv3
192.168.2.31 tempracsrv3-vip.mycorpdomain.com tempracsrv3-vip
10.10.10.31 tempracsrv3-priv.mycorpdomain.com tempracsrv3-priv
192.168.2.132 tempracsrv4.mycorpdomain.com tempracsrv4
192.168.2.32 tempracsrv4-vip.mycorpdomain.com tempracsrv4-vip
10.10.10.32 tempracsrv4-priv.mycorpdomain.com tempracsrv4-priv

Configuring SSH. Perform these steps on all cluster nodes.

Verify that the SSH daemon is running:

$ ps -ef | grep sshd

Create RSA and DSA keys on each node: Complete the following steps on each node:

1. Log in as the oracle user.

2. If necessary, create the .ssh directory in the oracle user’s home directory and set the correct permissions on it:

$ mkdir ~/.ssh
$ chmod 700 ~/.ssh

3. Enter the following commands to generate an RSA key for version 2 of the SSH protocol:

$ /usr/bin/ssh-keygen -t rsa
At the prompts: Accept the default location for the key file.

Enter and confirm a pass phrase that is different from the oracle user’s password. This command writes the public key to the ~/.ssh/id_rsa.pub file and the  private key to the ~/.ssh/id_rsa file. Never distribute the private key to anyone.

4. Enter the following commands to generate a DSA key for version 2 of the SSH protocol:

$ /usr/bin/ssh-keygen -t dsa

At the prompts:

  • Accept the default location for the key file
  • Enter and confirm a pass phrase that is different from the oracle user’s password

This command writes the public key to the ~/.ssh/id_dsa.pub file and the private key to the ~/.ssh/id_dsa file. Never distribute the private key to anyone.

Add keys to an authorized key file: Complete the following steps:

1. On the local node, determine if you have an authorized key file (~/.ssh/authorized_keys). If the authorized key file already exists, then proceed to step 2. Otherwise, enter the following commands:

$ touch ~/.ssh/authorized_keys
$ cd ~/.ssh
$ ls

You should see the id_dsa.pub and id_rsa.pub keys that you have created.

2. Using SSH, copy the contents of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub files to the file ~/.ssh/authorized_keys, and provide the Oracle user password as prompted. This process is illustrated in the following syntax example with a two-node cluster, with nodes node1 and node2, where the Oracle user path is /home/oracle:

[oracle@node1 .ssh]$ ssh node1 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
oracle@node1's password:
[oracle@node1 .ssh]$ ssh node1 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
[oracle@node1 .ssh]$ ssh node2 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
oracle@node2's password:
[oracle@node1 .ssh]$ ssh node2 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
oracle@node2's password:

3. Use SCP (Secure Copy) or SFTP (Secure FTP) to copy the authorized_keys file  to the Oracle user .ssh directory on a remote node. The following example is with SCP, on a node called node2, where the Oracle user path is /home/oracle:

[oracle@node1 .ssh]$ scp authorized_keys node2:/home/oracle/.ssh/

Note: Repeat this process for each node in the cluster.


4. Repeat steps 2 and 3 for each cluster node member. When you have added keys from each cluster node member to the authorized_keys file on the last node you want to have as a cluster node member, use SCP to copy the complete authorized_keys file back to each cluster node member.

5. Change the permissions on the oracle user's ~/.ssh/authorized_keys file on all cluster nodes:

$ chmod 600 ~/.ssh/authorized_keys

At this point, if you use ssh to log in to or run a command on another node, you are prompted for the pass phrase that you specified when you created the DSA key.

Enabling SSH User Equivalency on Cluster Member Nodes

To enable Oracle Universal Installer to use the ssh and scp commands without being prompted for a pass phrase, follow these steps:

1. On the system where you want to run Oracle Universal Installer, log in as the oracle user.

2. Enter the following commands:

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add

3. At the prompts, enter the pass phrase for each key that you generated. If you have configured SSH correctly, then you can now use the ssh or scp commands without being prompted for a password or a pass phrase.

4. If you are on a remote terminal, and the local node has only one visual (which is typical), then use the following syntax to set the DISPLAY environment variable:

Bourne, Korn, and Bash shells:
$ export DISPLAY=hostname:0
C shell:
$ setenv DISPLAY hostname:0
For example, if you are using the Bash shell and your hostname is node1, enter the following command:
$ export DISPLAY=node1:0

5. To test the SSH configuration, enter the following commands from the same terminal session, testing the configuration of each cluster node, where tempracsrv3, tempracsrv4, and so on, are the names of nodes in the cluster:

$ ssh tempracsrv3 date
$ ssh tempracsrv4 date

Note: The Oracle user’s /.ssh/authorized_keys file on every node must contain the contents from all of the /.ssh/id_rsa.pub and /.ssh/id_dsa.pub files that you generated on all cluster nodes.

These commands should display the date set on each node. If any node prompts for a password or pass phrase, then verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys. If you are using a remote client to connect to the local node, and you see a message similar to “Warning: No xauth data; using fake authentication data for X11 forwarding,” then this means that your authorized keys file is configured correctly, but your ssh configuration has X11 forwarding enabled. To correct this, proceed to step 6.

6. To ensure that X11 forwarding will not cause the installation to fail, create a user-level SSH client configuration file for the Oracle software owner user, as follows:

a. Using any text editor, edit or create the ~oracle/.ssh/config file.

b. Make sure that the ForwardX11 attribute is set to no. For example:

Host *
ForwardX11 no

Note: The first time you use SSH to connect to a node from a particular system, you may see a message similar to the following: The authenticity of host ‘node1 (140.87.152.153)’ can’t be established.

RSA key fingerprint is
7z:ez:e7:f6:f4:f2:4f:8f:9z:79:85:62:20:90:92:z9.
Are you sure you want to continue connecting (yes/no)?

Enter yes at the prompt to continue. You should not see this message again when you connect from this system to that node. If you see any other messages or text, apart from the date, then the installation can fail. Make any changes required to ensure that only the date is displayed when you enter these commands. You should ensure that any parts of login scripts that generate any output, or ask any questions, are modified so that they act only when the shell is an interactive shell.

At tempracsrv3

1. Create the NFS mount point /u01 using the following command:
mkdir /u01

2. Enable the NFS server by running the following:
svcadm -v enable -r network/nfs/server

3. Run the following command to share /u01 via NFS:
share -F nfs -o rw /u01
Note: The above share command will not persist over reboots. To persist over reboots, add an entry to /etc/dfs/dfstab, as sketched below.
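
A minimal sketch of such a dfstab entry (the -d description string is an illustrative assumption):

# Add to /etc/dfs/dfstab so the share is re-created at boot
share -F nfs -o rw -d "u01 for Oracle" /u01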

At tempracsrv4

4. Run the following command on tempracsrv4 to mount the share from tempracsrv3:
mount -F nfs tempracsrv3:/u01 /u01
Note: The above mount command will not persist over reboots. To persist over reboots, add the following line in /etc/vfstab:

tempracsrv3:/u01 - /u01 nfs - yes rw,soft

Now clone the machine, as we did for tempracsrv4, and configure the copies as temprac1 and temprac2 (for failover). Also configure /u02 as an NFS share on temprac1 and mount it on temprac2.

Create blank files that will be used later for the OCR and voting disk:

touch /u01/crs_config
touch /u01/voting_disk
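
A hedged sketch of typical pre-installation ownership and permissions for file-based OCR and voting disk follows; verify the exact requirements for your storage option against the 10g R2 Clusterware installation guide:

# Assumed ownership/permissions; confirm against the Clusterware install guide
chown root:oinstall /u01/crs_config
chmod 640 /u01/crs_config
chown oracle:oinstall /u01/voting_disk
chmod 644 /u01/voting_disk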

Now configure prodrac1 and prodrac2 as we did for tempracsrv3 and tempracsrv4 (for failover).

Change the ORACLE_SID in the oracle user's profile from devdb to prod. Use the following IPs for prodrac1 and prodrac2:

127.0.0.1 localhost
192.168.2.133 prodrac1.mycorpdomain.com prodrac1             # eth0 on prodrac1
192.168.2.33  prodrac1-vip.mycorpdomain.com prodrac1-vip
10.10.10.33   prodrac1-priv.mycorpdomain.com prodrac1-priv   # eth1 on prodrac1
192.168.2.134 prodrac2.mycorpdomain.com prodrac2             # eth0 on prodrac2
192.168.2.34  prodrac2-vip.mycorpdomain.com prodrac2-vip
10.10.10.34   prodrac2-priv.mycorpdomain.com prodrac2-priv   # eth1 on prodrac2

RAC Build on Solaris, Step-By-Step

Posted by Sagar Patil

The scope of this step-by-step documentation includes the following workflow:

*First Phase*
The two prod nodes have Oracle 9i databases, and we want to export those databases to Oracle 10g RAC. Step-by-step export instructions for creating a backup copy of the databases, and step-by-step instructions for installing Sun Solaris 10 as the new OS to replace Sun Solaris 9.

Setting Up Nodes
Step-by-step documentation for setting up 4 Sun Solaris 10 nodes, including NFS setup and instructions to verify connectivity.

*Second Phase*
Oracle 10g R2 RAC Installation for Temp Nodes:
Step-by-step instructions for the Oracle 10g R2 RAC installation on two nodes (tempracsrv3 and tempracsrv4). This phase includes how to verify that the installation and configuration are correct, how to create the RAC database, how to import schemas from the Oracle 9i database into the new database, and how to test database and node connectivity.

*Third Phase*
Oracle 10g R2 RAC Installation for PROD Nodes:
Step-by-step instructions for the Oracle 10g R2 RAC installation on two nodes (prodracsrv1 and prodracsrv2), adding them to the existing RAC cluster.

*Fourth Phase*
Step-by-step instructions on how to fail the RAC databases over from the temp nodes to the prod nodes, how to verify the failover, and how to test RAC database connectivity after failover.

*Fifth Phase*
Step-by-step instructions on how to remove the temp nodes from the RAC cluster and how to verify their removal.
