Why RAC need a VIP (Virtual IP address)

Posted by Sagar Patil

Importance of VIP
Real Application Clusters in 10g does not want clients to connect to the physical IP address associated with a network interface.
Doing so means IP packets are routed to a physical MAC address, so if that address ever ceases to exist (such as when a server dies), we have to wait for the TCP/IP networking stack itself to work out that the packets are undeliverable.
That can take up to 10 minutes, which means failover in a RAC could potentially be very slow.
Instead, Oracle wants users to connect to a Virtual IP address (VIP). That is an IP address bound to a software-controlled MAC address, and because it is software controlled, failures can be handled much faster than by the plain TCP/IP stack (usually within seconds).
The VIP for a RAC node is quite often the normal, real IP address plus one, so in my case that would imply a VIP of 192.168.1.111. It is not needed until it is time to install the Oracle software, but it is good to plan ahead.
Even if the installer's VIP check fails, you can continue the installation and configure the VIP separately with VIPCA, even if that IP address does not yet exist (you do not need to own it at install time).

DDL create commands for Objects (DBMS_METADATA)

Posted by Sagar Patil

Metadata API (DBMS_METADATA)
Prior to Oracle9i, metadata could only be extracted using SQL statements, the export utilities and the OCIDescribeAny interface, all of which have limitations:
* The SQL approach is limited in that as database versions change, so must your scripts.
* Using export with ROWS=N and import with SHOW=Y will produce text that allows you to recreate a schema, but it can require considerable editing.
* The OCIDescribeAny interface is limited in the objects it supports.
To address these limitations, Oracle9i introduced the DBMS_METADATA package, which can be used to retrieve object definitions as XML or SQL DDL:

FUNCTION Get_XML (
object_type IN VARCHAR2,
name IN VARCHAR2,
schema IN VARCHAR2 DEFAULT NULL,
version IN VARCHAR2 DEFAULT 'COMPATIBLE',
model IN VARCHAR2 DEFAULT 'ORACLE',
transform IN VARCHAR2 DEFAULT NULL)
RETURN CLOB;
FUNCTION Get_DDL (
object_type IN VARCHAR2,
name IN VARCHAR2,
schema IN VARCHAR2 DEFAULT NULL,
version IN VARCHAR2 DEFAULT 'COMPATIBLE',
model IN VARCHAR2 DEFAULT 'ORACLE',
transform IN VARCHAR2 DEFAULT NULL)
RETURN CLOB; 

These functions can be used as part of a select statement to produce output for groups of objects:

SET LONG 20000
SET PAGESIZE 0
SELECT DBMS_METADATA.get_ddl ('TABLE','EMP','SCOTT')
FROM dual;
CREATE TABLE "SCOTT"."EMP"
( "EMPNO" NUMBER(4,0),
"ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9),
"MGR" NUMBER(4,0),
"HIREDATE" DATE,
"SAL" NUMBER(7,2),
"COMM" NUMBER(7,2),
"DEPTNO" NUMBER(2,0),
CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ENABLE,
CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")
REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"
1 row selected.

The results can be spooled into a file for editing.
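The generated DDL can also be tailored before extraction using session-level transform parameters. A minimal sketch (SEGMENT_ATTRIBUTES and SQLTERMINATOR are standard DBMS_METADATA transform parameters, but verify the behaviour on your version):

SET LONG 20000
SET PAGESIZE 0
-- Drop the storage/tablespace clauses and append a terminator to each statement
BEGIN
  DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SEGMENT_ATTRIBUTES', FALSE);
  DBMS_METADATA.set_transform_param (DBMS_METADATA.session_transform, 'SQLTERMINATOR', TRUE);
END;
/
SELECT DBMS_METADATA.get_ddl ('TABLE','EMP','SCOTT') FROM dual;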

Shell or UNIX OS Commands From PL/SQL

Posted by Sagar Patil

http://www.oracle-base.com/articles/8i/ShellCommandsFromPLSQL.php

Main procedure to kill a latch holder: locate_latchholder

Host Java procedure to fire OS commands: host_command

Identifying a Poor performing SQL

Posted by Sagar Patil

Our preferred v$sqlarea query reports physical disk I/O per statement execution. Hit ratios are informative but sometimes misleading, and logical I/O is less relevant: if a statement executes 1,000,000 logical I/Os but still takes less than one-tenth of a second, who cares? It is the total physical I/O that consumes nearly all the time and identifies the potentially offending SQL. For example:

SELECT sql_text, executions,
ROUND(disk_reads / executions, 2) reads_per_run,
disk_reads, buffer_gets,
ROUND((buffer_gets - disk_reads)/
buffer_gets, 2) hit_ratio
FROM v$sqlarea
WHERE executions > 0
AND buffer_gets > 0
AND (buffer_gets - disk_reads) / buffer_gets < 0.80
ORDER BY 3 DESC;

For the two statements examined earlier, this query would have reported far more enlightening results:
EXECUTIONS READS_PER_RUN DISK_READS BUFFER_GETS HIT_RATIO SQL_TEXT
---------- ------------- ---------- ----------- --------- --------
         2             3          6          19      0.68 SELECT ...
         2        1812.5       3625      178777      0.98 SELECT ...
From this view of the v$sqlarea table, we can immediately isolate all statements that are performing high numbers of physical reads. These statements might not necessarily be inefficient or poorly written, but they are prime candidates for investigation and further tuning.
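To keep the report manageable on a busy instance, the same idea can be limited to the worst offenders. A minimal sketch using an ordered inline view (this ROWNUM-on-inline-view form works on 8i and later):

SELECT *
FROM (SELECT sql_text, executions,
             ROUND (disk_reads / executions, 2) reads_per_run,
             disk_reads, buffer_gets
        FROM v$sqlarea
       WHERE executions > 0
       ORDER BY disk_reads / executions DESC)
WHERE ROWNUM <= 10;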

How to find out hidden and system Parameters?

Posted by Sagar Patil

System Parameters. Purpose: reports all system parameters, including the hidden ones.


select
nam.indx+1 numb,
nam.ksppinm name,
val.ksppstvl value,
nam.ksppity type,
val.ksppstdf is_default,
decode(bitand(nam.ksppiflg/256,1),
1,'True',
'False'
) is_session_modifiable,
decode(bitand(nam.ksppiflg/65536,3),
1,'Immediate',
2,'Deferred' ,
3,'Immediate',
'False'
) is_system_modifiable,
decode(bitand(val.ksppstvf,7),
1,'Modified',
4,'System Modified',
'False'
) is_modified,
decode(bitand(val.ksppstvf,2),
2,'True',
'False'
) is_adjusted,
nam.ksppdesc description
from
x$ksppi nam,
x$ksppsv val
where
nam.indx = val.indx
and
nam.ksppinm like '&&PARAMETER%';
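
The query prompts for a substitution variable (hidden parameters start with an underscore), and the X$ views are only visible when connected as SYS. A usage sketch, where hidden_params.sql is just a placeholder name for a script holding the query above:

SQL> CONNECT / AS SYSDBA
SQL> SET LINESIZE 150
SQL> SET PAGESIZE 100
SQL> @hidden_params.sql
Enter value for parameter: _smu_debug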

Session Parameters


select
nam.indx+1 numb,
nam.ksppinm name,
val.ksppstvl value,
nam.ksppity type,
val.ksppstdf is_default,
decode(bitand(nam.ksppiflg/256,1),
1,'True',
'False'
) is_session_modifiable,
decode(bitand(nam.ksppiflg/65536,3),
1,'Immediate',
2,'Deferred' ,
3,'Immediate',
'False'
) is_system_modifiable,
decode(bitand(val.ksppstvf,7),
1,'Modified',
4,'System Modified',
'False'
) is_modified,
decode(bitand(val.ksppstvf,2),
2,'True',
'False'
) is_adjusted,
nam.ksppdesc description
from
x$ksppi nam,
x$ksppcv val
where
nam.indx = val.indx;

ORA 600 Corrupt UNDO/INDEX/TABLE Segment

Posted by Sagar Patil

I have seen this issue on a number of SAP systems. Here I will compile the procedure I followed.

select segment_name,
segment_type,
owner,
tablespace_name
from dba_extents
where file_id = <file_id>
and <block> between block_id and block_id + blocks - 1;

For example, given this information:
Corrupt Block Found
TSN = 24, TSNAME = STATS_INDEXES_06
RFN = 23, BLK = 211531, RDBA = 96680523
OBJN = -1, OBJD = 102711, OBJECT = , SUBOBJECT =
I would execute:
select segment_name,
segment_type,
owner,
tablespace_name
from dba_extents
where file_id = 23
and 211531 between block_id and block_id + blocks - 1;

Alert log messages
Corrupt Block Found
TSN = 24, TSNAME = STATS_INDEXES_06
RFN = 23, BLK = 293131, RDBA = 96762123
OBJN = -1, OBJD = 103744, OBJECT = , SUBOBJECT =
SEGMENT OWNER = , SEGMENT TYPE =
ORA-00600: INTERNAL ERROR CODE, ARGUMENTS: [4194], [29], [42],
Alert log errors:
Errors in file /oracle/BPA/saptrace/background/bpa_smon_19615.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [4194], [29], [42], [], [], [], [], []
Errors in file /oracle/BPA/saptrace/background/bpa_smon_19615.trc:
ORA-00600: internal error code, arguments: [4193], [21415], [64890], [], [], [], [], []
Tue Aug 7 10:10:10 2007
Errors in file /oracle/BPA/saptrace/usertrace/bpa_ora_19821.trc:
ORA-00600: internal error code, arguments: [4194], [29], [42], [], [], [], [], []

As the corruption is in the undo, a restore from backup may be the best and fastest solution here. Can this be an option?
-- Restoring from the last backup is not an easy option for us. It is a 3.5 terabyte database; a restore costs time and resources which I did not have here.

DBVERIFY
It can be used to perform a physical data structure integrity check on data files whether the database is online or offline.

dbv file=system_ts_01.dbf blocksize=8192 logfile=dbvsys_ts.log feedback=1000
Data file = system_ts_01.dbf.
Start block is 9
end block is 25
Blocksize 8192

The logfile parameter specifies the file to which logging information is written; feedback=1000 displays one dot on the screen for every 1,000 blocks processed.

Oracle Support Communication :
This database was restored from the live system last Monday. Everything went very well and the database was running fine until Monday morning. Yesterday we reported this error for the first time, after the system had been running for 5-6 days. A restart in the afternoon fixed those ORA-600 errors until this morning, when we are getting ORA-600 all the time. It is necessary to identify what went wrong before thinking about a restore. Personally I don't see the point of restoring the entire database when there is no guarantee that these errors will go away after the restore.
As the corruption is in the undo, a restore from backup may be the best and fastest solution here. Can this be an option?
-- Restoring from the last backup is not an easy option for us. It is a 3.5 terabyte database; a restore costs time and resources which we don't have available.
ACTION PLAN
============
Let's start by checking the objects reported by the recovery processes:
fno: 81 blk: 322532
fno: 18 blk: 125593
fno: 18 blk: 125673
fno: 116 blk: 266003

SELECT tablespace_name, segment_type, owner, segment_name
FROM dba_extents
WHERE file_id = &fno
and &blk between block_id AND block_id + blocks - 1;
TABLESPACE_NAME   SEGMENT_TYPE   OWNER   SEGMENT_NAME
---------------   ------------   -----   ------------
PSAPUNDO          TYPE2 UNDO     SYS     _SYSSMU37$
no rows selected
no rows selected
PSAPUNDO          TYPE2 UNDO     SYS     _SYSSMU42$

Oracle Support response: We are facing undo corruption here. Using the _corrupted or _offline rollback segment parameters would help to export the full database, but if there are active transactions (as I suspect) the database will be unsupported after setting them. Let's try to dump the undo headers to check whether there are active transactions:

alter system dump undo header <undo>;
Depending on what the trace file with the header dump tells us, we can decide what to do. If there are no active transactions, we can try to drop these undo segments. Otherwise, we can restore the affected datafiles (the three of them) first and apply recovery to them.
== Check whether there are active transactions before dropping the UNDO segments ==
In the trace file, look for the line
TRN TBL::
which marks the start of the transaction table dump.
The state column shows the status of each transaction slot: 9 means committed, 10 means active. In our case, all slots were 9.

ACTION PLAN
============
Since all transactions are committed, we can try to drop these undo segments. Because Automatic Undo Management (AUM) is in use, we need to set _smu_debug_mode to allow actions
on the undo segments, or switch undo management to manual.
SQL> alter session set "_smu_debug_mode"=4;
SQL> alter rollback segment "_SYSSMU37$" offline;
-> Check if undo segment went offline in DBA_ROLLBACK_SEGS. If so:
SQL> drop rollback segment "_SYSSMU37$";
SQL> alter rollback segment "_SYSSMU42$" offline;
-> Check if undo segment went offline in DBA_ROLLBACK_SEGS. If so:
SQL> drop rollback segment "_SYSSMU42$";
A clean database restart after this was done and things were sorted.
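For reference, the DBA_ROLLBACK_SEGS check mentioned in the action plan can be done with a query along these lines (the segment names are the ones from this case):

SELECT segment_name, status
FROM   dba_rollback_segs
WHERE  segment_name IN ('_SYSSMU37$', '_SYSSMU42$');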

Estimate Undo Retention , Undo Optimization

Posted by Sagar Patil

The undo_retention parameter will not guarantee retention unless you define the RETENTION GUARANTEE clause at the tablespace level.

Actual Undo Size

SELECT SUM(a.bytes) “UNDO_SIZE”
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#;

UNDO_SIZE
———-
209715200

Undo Blocks per Second

SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
“UNDO_BLOCK_PER_SEC”
FROM v$undostat;

UNDO_BLOCK_PER_SEC
——————
3.12166667

DB Block Size
SELECT TO_NUMBER(value) "DB_BLOCK_SIZE [Byte]"
FROM v$parameter
WHERE name = 'db_block_size';

DB_BLOCK_SIZE [Byte]
——————–
4096

Calculation for Optimal Undo Retention
Optimal undo retention [Sec] = actual undo size / (db block size * undo blocks per second)
209,715,200 / (4,096 * 3.12166667) = 16,401 [Sec]

Using inline views, you can do it all in one query:

SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
ROUND((d.undo_size / (to_number(f.value) *
g.undo_block_per_sec))) "OPTIMAL UNDO RETENTION [Sec]"
FROM (
SELECT SUM(a.bytes) undo_size
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#
) d,
v$parameter e,
v$parameter f,
(
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
undo_block_per_sec
FROM v$undostat
) g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size'
/
ACTUAL UNDO SIZE [MByte]
————————
200
UNDO RETENTION [Sec]
——————–
10800
OPTIMAL UNDO RETENTION [Sec]
—————————-
16401

Calculate Needed UNDO Size for given Database Activity

SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
(TO_NUMBER(e.value) * TO_NUMBER(f.value) *
g.undo_block_per_sec) / (1024*1024)
"NEEDED UNDO SIZE [MByte]"
FROM (
SELECT SUM(a.bytes) undo_size
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#
) d,
v$parameter e,
v$parameter f,
(
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
undo_block_per_sec
FROM v$undostat
) g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size'
/
ACTUAL UNDO SIZE [MByte]
————————
200
UNDO RETENTION [Sec]
——————–
10800
NEEDED UNDO SIZE [MByte]
————————
131.695313

10G ASH script :True Session Wait Activity in Oracle 10g

Posted by Sagar Patil

1> What resource is currently in high demand?

select active_session_history.event
 , sum ( active_session_history.wait_time + active_session_history.time_waited ) ttl_wait_time
 from v$active_session_history active_session_history
 where active_session_history.sample_time between
sysdate - 60 / 2880
 and sysdate group by active_session_history.event
 order by 2;
2> Which user is waiting the most?
select  sesion.sid
 , sesion.username
 , sum ( active_session_history.wait_time + active_session_history.time_waited ) ttl_wait_time
 from v$active_session_history active_session_history
 , v$session sesion
 where active_session_history.sample_time between
sysdate - 60 / 2880
 and sysdate and active_session_history.session_id = sesion.sid
 group by sesion.sid
 , sesion.username
 order by 3;

SID User TTL_WAIT_TIME
—– ———- ————-
135 SCOTT 91167481
149 SCOTT 107409491
153 SCOTT 110796799

3> What SQL is currently using the most resources?
select  active_session_history.user_id
 , dba_users.username
 , sqlarea.sql_text
 , sum ( active_session_history.wait_time + active_session_history.time_waited ) ttl_wait_time
 from v$active_session_history active_session_history
 , v$sqlarea sqlarea
 , dba_users
 where active_session_history.sample_time between
sysdate - 60 / 2880
 and sysdate and active_session_history.sql_id = sqlarea.sql_id
 and active_session_history.user_id = dba_users.user_id
 group by active_session_history.user_id
 , sqlarea.sql_text
 , dba_users.username
 order by 4;

USER_ID User SQL_TEXT TTL_WAIT_TIME
——- —— —————————————————– ————-
57 SCOTT insert into sys.sourcetable (select * from sys.source$) 304169752

4> What object is currently causing the highest resource waits?

select  dba_objects.object_name
 , dba_objects.object_type
 , active_session_history.event
 , sum ( active_session_history.wait_time + active_session_history.time_waited ) ttl_wait_time
 from v$active_session_history active_session_history
 , dba_objects
 where active_session_history.sample_time between
sysdate - 60 / 2880
 and sysdate and active_session_history.current_obj# = dba_objects.object_id
 group by dba_objects.object_name
 , dba_objects.object_type
 , active_session_history.event
 order by 4;

Complete (recursive) object dependency

Posted by Sagar Patil

The easiest approach is to use an Oracle utldtree.sql script. This will be located in the %ORACLE_HOME%\rdbms\admin\ directory.

For each schema that you want to be able to get a dependency list, you will open up a SQL*Plus session and execute the script
scott@> @d:\oracle\ora92\rdbms\admin\utldtree.sql
Don't worry about the ORA-00942 errors you will get; this script tries to drop objects before it re-creates them, so you'll see errors when the objects don't exist yet.
Once the script has been run, you can get a listing of dependent objects for a particular object by calling

scott@jcave > exec deptree_fill('table', 'scott', 'emp' );
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.01
scott@jcave > select * from ideptree;
DEPENDENCIES
-------------------------------------------------------------------------------- 
TABLE SCOTT.EMP
   VIEW SCOTT.V2
   PROCEDURE SCOTT.INCREASE_SALARY
   VIEW SCOTT.V3
   PROCEDURE SCOTT.CALL_ME
   TRIGGER SCOTT.ODBC_EMP_TST
   <no>
7 rows selected.
Elapsed: 00:00:08.03
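
The same script also creates a DEPTREE view, which returns the dependency listing with the nesting level as a separate column rather than via indentation. A sketch, assuming the column names created by the standard utldtree.sql (check the script shipped with your release):

select nested_level, type, schema, name
from   deptree
order  by seq#;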

Find execution time for given SQL from Statpack tables

Posted by Sagar Patil

Please change the LIKE string to filter results for a specific SQL statement.

set linesize 150;
set pagesize 1000;
BREAK ON hash_value SKIP 1;
spool 1_sql.lst;
select hash_value, to_char(snap_time,'DD-MON-YY HH24:MI:SS'), snap_id, piece, sql_text
from STATS$SQLTEXT a, stats$snapshot b
where hash_value in (
select hash_value
from STATS$SQLTEXT where sql_text like 'SELECT%VBELN%KUNNR%ZZCURRENT_OWNER%VBKA%')
and b.snap_id=a.last_snap_id
order by snap_time,hash_value,piece;
spool off;
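
Once you have the hash_value, per-snapshot statistics for the statement can also be pulled from STATS$SQL_SUMMARY. A sketch, assuming your Statspack version captures ELAPSED_TIME (microseconds, cumulative for the cursor) in that table; &&hash_value is a substitution variable:

select ss.snap_id,
       to_char(sn.snap_time,'DD-MON-YY HH24:MI:SS') snap_time,
       ss.executions,
       ss.buffer_gets,
       ss.disk_reads,
       round(ss.elapsed_time/1000000,2) elapsed_secs
from   stats$sql_summary ss,
       stats$snapshot    sn
where  ss.snap_id          = sn.snap_id
and    ss.dbid             = sn.dbid
and    ss.instance_number  = sn.instance_number
and    ss.hash_value       = &&hash_value
order by ss.snap_id;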

Resumable Space Allocation / SUSPEND RESUME transactions

Posted by Sagar Patil

Long running operations such as imports and batch processes sometimes fail because the server is unable to allocate more extents for an object. This may be because the object has reached max_extents or there isn't sufficient room in the tablespace for the object to expand. In previous releases the operation would have to be rerun, possibly after some manual cleanup. In Oracle9i, operations that would fail due to space allocation problems can be suspended and restarted once the problem is fixed.
* Resumable Mode
* Timeout Period
* AFTER SUSPEND Trigger
* Views
* DBMS_RESUMABLE Package
* Restrictions

Resumable Mode
Operations can be made resumable by explicitly switching the session mode using:
ALTER SESSION ENABLE RESUMABLE;
ALTER SESSION DISABLE RESUMABLE;

When the session is in resumable mode, any operations that result in the following errors will be suspended:
* Out of space condition.
* Maximum extents reached condition.
* Space quota exceeded condition.
Once the error is corrected the operations will automatically resume.
Timeout Period

Operations will remain in a suspended state until the timeout period, 2 hours by default, is reached. The timeout period can be modified using any of the following commands:
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600;
EXECUTE Dbms_Resumable.Set_Timeout(3600);
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600 NAME 'insert into table';

The final example can be used to assign a name to the suspended session.
AFTER SUSPEND Trigger
Since suspended operations do not produce error messages, an alternative method is required to notify users/DBAs so that the problem can be corrected. Typically this is done from the AFTER SUSPEND trigger, which always fires as an autonomous transaction. The trigger can be used to insert rows into an error table or to email an operator using the UTL_SMTP package:

CREATE OR REPLACE TRIGGER resumable_default
AFTER SUSPEND
ON DATABASE
DECLARE
-- Declare any variables 
BEGIN
-- Alter default timeout period. 
Dbms_Resumable.Set_Timeout(3600);
-- Perform resumable space allocation 
-- notification code here. 
COMMIT;
END;
/

Views
Information about suspended sessions can be viewed via the USER_ and DBA_RESUMABLE views. When a session is suspended a row is added to the V$SESSION_WAIT view with the EVENT column containing “suspended on space error”.
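For example, to see which statements are currently suspended and why:

SELECT session_id, name, status, timeout, error_msg
FROM   dba_resumable;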
DBMS_RESUMABLE Package
* ABORT(sessionID) - Ends the specified suspended session. Caller must be the owner of the session with sessionID, have ALTER SYSTEM privilege, or have DBA privileges.
* GET_SESSION_TIMEOUT(sessionID) - Returns the timeout period in seconds for the specified session, or -1 if the session does not exist.
* SET_SESSION_TIMEOUT(sessionID, timeout) - Sets the timeout in seconds of the specified session with immediate effect.
* GET_TIMEOUT() - Returns the timeout period in seconds for the current session.
* SET_TIMEOUT(timeout) - Sets the timeout in seconds of the current session with immediate effect.
* DBMS_RESUMABLE.SPACE_ERROR_INFO(...) - Returns information about the space error when called from within an AFTER SUSPEND trigger.
Restrictions
* When using dictionary managed tablespaces, DDL CREATE operations with explicit MAXEXTENTS which run out of extents are aborted because the MAXEXTENTS parameter cannot be modified until the object is created. Extension of existing objects is resumable.
* If rollback segments are placed in dictionary managed tablespaces any RBS errors will not be resumable. Oracle recommends that automatic undo management is configured or rollback segments are placed in a locally managed tablespace.
* Remote operations are not supported in resumable mode.
* Individual parallel server processes may be suspended while others proceed without any problem. When the error is corrected any suspended operations will resume normally. If aborted, the whole transaction is aborted.

Running SHELL script from Oracle PL/SQL

Posted by Sagar Patil
exec dbms_java.grant_permission ('DBA_ADMIN', 'java.io.FilePermission','/usr/bin/ps', 'execute');
exec dbms_java.grant_permission ('DBA_ADMIN','java.lang.RuntimePermission','*','writeFileDescriptor' );

Create or replace and compile
java source named "Util"
as
import java.io.*;
import java.lang.*;
public class Util extends Object
{
public static int RunThis(String args)
{
Runtime rt = Runtime.getRuntime();
int        rc = -1;
try
{
Process p = rt.exec(args);
int bufSize = 4096;
BufferedInputStream bis =
new BufferedInputStream(p.getInputStream(), bufSize);
int len;
byte buffer[] = new byte[bufSize];
// Echo back what the program spit out
while ((len = bis.read(buffer, 0, bufSize)) != -1)
System.out.write(buffer, 0, len);
rc = p.waitFor();
}
catch (Exception e)
{
e.printStackTrace();
rc = -1;
}
finally
{
return rc;
}
}
}
/
Java created.
create or replace
function RUN_CMD(p_cmd in varchar2) return number
as
language java
name 'Util.RunThis(java.lang.String) return integer';
/
create or replace procedure RC(p_cmd in varchar2)
as
x number;
begin
x := run_cmd(p_cmd);
end;
/
variable x number;
set serveroutput on
exec dbms_java.set_output(100000);
exec :x := RUN_CMD('/usr/bin/ls /tmp');

SQL> exec :x := RUN_CMD('/usr/bin/ls /tmp');
10439.err
10757.err
11319.err
13399.err
13654.err
1429.err
14792.err
20648.err
20994.err
2139.err
21690.err

What is a database EVENT and how does one set events?

Posted by Sagar Patil

Oracle trace events are useful for debugging the Oracle database server.

The following two examples are simply to demonstrate syntax. Refer to later notes on this page for an explanation of what these particular events do.

Events can be activated either by adding them to the INIT.ORA parameter file, e.g.:

event='1401 trace name errorstack, level 12'

or by issuing an ALTER SESSION SET EVENTS command, e.g.:

alter session set events '10046 trace name context forever, level 4';
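
To switch a context-type event off again for the current session, the level clause is replaced with "off":

alter session set events '10046 trace name context off';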

Frequently Used Events:

10046 trace name context forever, level 4 - Trace SQL statements and show bind variables in trace output.
10046 trace name context forever, level 8 - Shows wait events in the SQL trace files.
10046 trace name context forever, level 12 - Shows both bind variables and wait events in the SQL trace files.

1401 trace name errorstack, level 12 / 1401 trace name errorstack, level 4 / 1401 trace name processstate - Dumps out trace information if an ORA-1401 "inserted value too large for column" error occurs. The 1401 can be replaced by any other Oracle Server error code that you want to trace.

60 trace name errorstack level 10 - Show where in the code Oracle gets a deadlock (ORA-60), and may help to diagnose the problem.

10210 trace name context forever, level 10 / 10211 trace name context forever, level 10 / 10231 trace name context forever, level 10 - These events prevent database block corruptions.

10049 trace name context forever, level 2 - Memory protect cursor.
10210 trace name context forever, level 2 - Data block check.
10211 trace name context forever, level 2 - Index block check.
10235 trace name context forever, level 1 - Memory heap check.
10262 trace name context forever, level 300 - Allow 300 bytes of memory leak per connection.

Where to find Event Numbers ?

Event numbers can be found in $ORACLE_HOME/rdbms/mesg/oraus.msg

cd $ORACLE_HOME/rdbms/mesg
vi oraus.msg
/10015

10015, 00000, “Undo Segment Recovery”
// *Cause:
// *Action:

Rman backup script to backup entire database : level 0

Posted by Sagar Patil

There are two Oracle databases, A and B. Server A is live and needs to be replicated on server B. We will use RMAN to back up the datafiles, including the control file and archive logs.

RMAN script

connect rcvcat rmanc/obsequ10us@rcvcat
connect target rmant/obstreper0us@tknwp.yell
run {
 allocate channel tp1 type "sbt_tape";
 allocate channel tp2 type "sbt_tape";
 allocate channel tp3 type "sbt_tape";
 backup current controlfile for standby;
 backup incremental level = 0 database plus archivelog;
 change archivelog until time 'sysdate - 3' delete;
 release channel tp1;
 release channel tp2;
 release channel tp3;
}

Spool of rman script
C:\Documents and Settings\oracle>set oracle_sid=TKNWP
C:\Documents and Settings\oracle>cd S:\nsr
C:\Documents and Settings\oracle>s:
S:\nsr>rman
Recovery Manager: Release 9.2.0.8.0 – Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
RMAN> @tknwp_full.rman
RMAN> connect rcvcat rmanc/obsequ10us@rcvcat
connected to recovery catalog database
RMAN> connect target rmant/obstreper0us@tknwp.yell
connected to target database: TKNWP (DBID=38387564)
RMAN> run {
2> allocate channel tp1 type "sbt_tape";
3> allocate channel tp2 type "sbt_tape";
4> allocate channel tp3 type "sbt_tape";
5> send 'NSR_ENV=(NSR_SERVER=bsbk10,NSR_DATA_VOLUME_POOL=Default)';
6> backup current controlfile for standby;
7> backup incremental level = 0 database plus archivelog;
8> change archivelog until time 'sysdate - 3' delete;
9> release channel tp1;
10> release channel tp2;
11> release channel tp3;
12> }
allocated channel: tp1
channel tp1: sid=17 devtype=SBT_TAPE
channel tp1: NMO v4.1.0.0
allocated channel: tp2
channel tp2: sid=18 devtype=SBT_TAPE
channel tp2: NMO v4.1.0.0
allocated channel: tp3
channel tp3: sid=22 devtype=SBT_TAPE
channel tp3: NMO v4.1.0.0
sent command to channel: tp1
sent command to channel: tp2
sent command to channel: tp3
Starting backup at 28-FEB-2007:11:55:28
channel tp1: starting full datafile backupset
channel tp1: specifying datafile(s) in backupset
including standby controlfile in backupset
channel tp1: starting piece 1 at 28-FEB-2007:11:55:29
channel tp1: finished piece 1 at 28-FEB-2007:11:55:36
piece handle=01ib6i5g_1_1 comment=API Version 2.0,MMS Version 4.1.0.0
channel tp1: backup set complete, elapsed time: 00:00:08
Finished backup at 28-FEB-2007:11:55:36
Starting backup at 28-FEB-2007:11:55:37
current log archived
channel tp1: starting archive log backupset
channel tp1: specifying archive log(s) in backup set
input archive log thread=1 sequence=14 recid=1 stamp=614526721
input archive log thread=1 sequence=15 recid=2 stamp=614526722
input archive log thread=1 sequence=16 recid=3 stamp=614528129
input archive log thread=1 sequence=17 recid=4 stamp=614536625
input archive log thread=1 sequence=18 recid=6 stamp=614536632
input archive log thread=1 sequence=19 recid=8 stamp=614833726
input archive log thread=1 sequence=20 recid=9 stamp=615033366
input archive log thread=1 sequence=21 recid=10 stamp=615042001
input archive log thread=1 sequence=22 recid=11 stamp=615117605
channel tp1: starting piece 1 at 28-FEB-2007:11:55:40
channel tp2: starting archive log backupset
channel tp2: specifying archive log(s) in backup set
input archive log thread=1 sequence=23 recid=12 stamp=615189602
input archive log thread=1 sequence=24 recid=13 stamp=615265205
input archive log thread=1 sequence=25 recid=14 stamp=615337204
input archive log thread=1 sequence=26 recid=15 stamp=615404575
channel tp2: starting piece 1 at 28-FEB-2007:11:55:40
channel tp3: starting archive log backupset
channel tp3: specifying archive log(s) in backup set
input archive log thread=1 sequence=27 recid=16 stamp=615404584
input archive log thread=1 sequence=28 recid=26 stamp=615477606
input archive log thread=1 sequence=29 recid=28 stamp=615553206
input archive log thread=1 sequence=30 recid=29 stamp=615621602
input archive log thread=1 sequence=31 recid=30 stamp=615639537
input archive log thread=1 sequence=32 recid=31 stamp=615639544
input archive log thread=1 sequence=33 recid=32 stamp=615639606
input archive log thread=1 sequence=34 recid=33 stamp=615640180
input archive log thread=1 sequence=35 recid=34 stamp=615640586
channel tp3: starting piece 1 at 28-FEB-2007:11:55:40
channel tp1: finished piece 1 at 28-FEB-2007:11:55:55
piece handle=02ib6i5r_1_1 comment=API Version 2.0,MMS Version 4.1.0.0
channel tp1: backup set complete, elapsed time: 00:00:16
channel tp2: finished piece 1 at 28-FEB-2007:11:55:55
piece handle=03ib6i5r_1_1 comment=API Version 2.0,MMS Version 4.1.0.0
channel tp2: backup set complete, elapsed time: 00:00:16
channel tp3: finished piece 1 at 28-FEB-2007:11:55:55
piece handle=04ib6i5r_1_1 comment=API Version 2.0,MMS Version 4.1.0.0
channel tp3: backup set complete, elapsed time: 00:00:16
channel tp1: starting archive log backupset
channel tp1: specifying archive log(s) in backup set
input archive log thread=1 sequence=36 recid=36 stamp=615640591
input archive log thread=1 sequence=37 recid=44 stamp=615640774
input archive log thread=1 sequence=38 recid=46 stamp=615658800
input archive log thread=1 sequence=39 recid=47 stamp=615658801
input archive log thread=1 sequence=40 recid=48 stamp=615729337
channel tp1: starting piece 1 at 28-FEB-2007:11:55:55
channel tp1: finished piece 1 at 28-FEB-2007:11:56:02
piece handle=05ib6i6b_1_1 comment=API Version 2.0,MMS Version 4.1.0.0
channel tp1: backup set complete, elapsed time: 00:00:07
Finished backup at 28-FEB-2007:11:56:02
Starting backup at 28-FEB-2007:11:56:03
channel tp1: starting incremental level 0 datafile backupset
channel tp1: specifying datafile(s) in backupset
input datafile fno=00007 name=G:\ORACLE\ORADATA\TKNWP\PERFSTAT_01.DBF
input datafile fno=00005 name=H:\ORACLE\ORADATA\TKNWP\TOOLS_01.DBF
input datafile fno=00006 name=H:\ORACLE\ORADATA\TKNWP\USERS_01.DBF
channel tp1: starting piece 1 at 28-FEB-2007:11:56:03
channel tp2: starting incremental level 0 datafile backupset
channel tp2: specifying datafile(s) in backupset
including current controlfile in backupset
input datafile fno=00001 name=G:\ORACLE\ORADATA\TKNWP\SYSTEM01.DBF
input datafile fno=00004 name=G:\ORACLE\ORADATA\TKNWP\GENESYS_LOGS_DATA_01.DBF
channel tp2: starting piece 1 at 28-FEB-2007:11:56:03
channel tp3: starting incremental level 0 datafile backupset
channel tp3: specifying datafile(s) in backupset
input datafile fno=00002 name=G:\ORACLE\ORADATA\TKNWP\UNDOTBS01.DBF
input datafile fno=00003 name=I:\ORACLE\ORADATA\TKNWP\GENESYS_CONFIG_DATA_01.DBF
channel tp3: starting piece 1 at 28-FEB-2007:11:56:04
channel tp1: finished piece 1 at 28-FEB-2007:11:56:29
piece handle=06ib6i6j_1_1 comment=API Version 2.0,MMS Version 4.1.0.0
channel tp1: backup set complete, elapsed time: 00:00:26
channel tp2: finished piece 1 at 28-FEB-2007:11:56:54
piece handle=07ib6i6j_1_1 comment=API Version 2.0,MMS Version 4.1.0.0
channel tp2: backup set complete, elapsed time: 00:00:51
channel tp3: finished piece 1 at 28-FEB-2007:11:56:54
piece handle=08ib6i6j_1_1 comment=API Version 2.0,MMS Version 4.1.0.0
channel tp3: backup set complete, elapsed time: 00:00:51
Finished backup at 28-FEB-2007:11:56:54
Starting backup at 28-FEB-2007:11:56:55
current log archived
channel tp1: starting archive log backupset
channel tp1: specifying archive log(s) in backup set
input archive log thread=1 sequence=41 recid=49 stamp=615729415
channel tp1: starting piece 1 at 28-FEB-2007:11:56:57
channel tp1: finished piece 1 at 28-FEB-2007:11:57:04
piece handle=09ib6i88_1_1 comment=API Version 2.0,MMS Version 4.1.0.0
channel tp1: backup set complete, elapsed time: 00:00:08
Finished backup at 28-FEB-2007:11:57:04
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00014.ARC recid=1 stamp=614526721
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00015.ARC recid=2 stamp=614526722
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00016.ARC recid=3 stamp=614528129
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00017.ARC recid=4 stamp=614536625
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00018.ARC recid=6 stamp=614536632
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00019.ARC recid=8 stamp=614833726
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00020.ARC recid=9 stamp=615033366
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00021.ARC recid=10 stamp=615042001
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00022.ARC recid=11 stamp=615117605
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00023.ARC recid=12 stamp=615189602
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00024.ARC recid=13 stamp=615265205
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00025.ARC recid=14 stamp=615337204
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00026.ARC recid=15 stamp=615404575
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00027.ARC recid=16 stamp=615404584
deleted archive log
archive log filename=F:\ORACLE\ORADATA\TKNWP\TKNWP_00028.ARC recid=26 stamp=615477606
Deleted 15 objects
released channel: tp1
released channel: tp2
released channel: tp3
RMAN> **end-of-file**
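
Once the run completes, a quick sanity check from the target database is to list the backup sets just created (the same information is also available through the recovery catalog views):

SELECT recid, backup_type, incremental_level, pieces,
       TO_CHAR(completion_time,'DD-MON-YYYY HH24:MI:SS') completed
FROM   v$backup_set
ORDER  BY completion_time;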

How to run rda on cluster

Posted by Sagar Patil

To setup rda collection at cluster

Read more…

What is causing Redo / Writes/ Log Creation- Statspack 8i/9i

Posted by Sagar Patil

Physical WRITES from the time Instance Started

Read more…

All OCFS volumes were not available at ocfsconsole config tool

Posted by Sagar Patil

Read more…

RAC Backup : Copy 1 set of backup at FLASH & other to ext3 Disk

Posted by Sagar Patil

Read more…

EM grid console active only at RAC 1 Instance

Posted by Sagar Patil

Case 1: EM console is working on Node 1. Node 1 is shut down and services fail over to Node 2, but the EM dbconsole does not fail over to Node 2.

[oracle@wygora02 ~]$ showcrs
HA Resource                                   Target     State
-----------                                   ------     -----
ora.wygora01.ASM1.asm                         ONLINE     OFFLINE
ora.wygora01.LISTENER_WYGORA01.lsnr           ONLINE     OFFLINE
ora.wygora01.gsd                              ONLINE     OFFLINE
ora.wygora01.ons                              ONLINE     OFFLINE
ora.wygora01.vip                              ONLINE     OFFLINE
ora.wygora02.ASM2.asm                         ONLINE     ONLINE on wygora02
ora.wygora02.LISTENER_WYGORA02.lsnr           ONLINE     ONLINE on wygora02
ora.wygora02.gsd                              ONLINE     UNKNOWN on wygora02
ora.wygora02.ons                              ONLINE     UNKNOWN on wygora02
ora.wygora02.vip                              ONLINE     ONLINE on wygora02
ora.wygprod.db                                ONLINE     ONLINE on wygora02
ora.wygprod.wygprod.cs                        ONLINE     ONLINE on wygora02
ora.wygprod.wygprod.wygprod1.srv              ONLINE     OFFLINE
ora.wygprod.wygprod.wygprod2.srv              ONLINE     ONLINE on wygora02
ora.wygprod.wygprod1.inst                     OFFLINE    OFFLINE
ora.wygprod.wygprod2.inst                     ONLINE     ONLINE on wygora02
emctl start dbconsole
TZ set to GB-Eire
Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation.  All rights reserved.
http://wygora01.wyg-asp.com:1158/em/console/aboutApplication
Agent Version     : 10.1.0.4.1
OMS Version       : 10.1.0.4.0
Protocol Version  : 10.1.0.2.0
Agent Home        : /u01/app/oracle/product/10.2.0/db_1/wygora02_wygprod2
Agent binaries    : /u01/app/oracle/product/10.2.0/db_1
Agent Process ID  : 26599
Parent Process ID : 26554
Agent URL         : http://wygora02.wyg-asp.com:3938/emd/main
Started at        : 2008-03-13 15:58:50
Started by user   : oracle
Last Reload       : 2008-03-13 15:58:50
Last successful upload                       : 2008-03-13 16:43:03
Last attempted upload                        : 2008-03-13 16:44:54
Total Megabytes of XML files uploaded so far :     6.40
Number of XML files pending upload           :        1
Size of XML files pending upload(MB)         :     0.00
Available disk space on upload filesystem    :    65.82%
Agent is already started. Will restart the agent
This will stop the Oracle Enterprise Manager 10g Database Control process. Continue [y/n] :y
Stopping Oracle Enterprise Manager 10g Database Control ...
...  Stopped.
Agent is not running.
Starting Oracle Enterprise Manager 10g Database Control ..... started.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/10.2.0/db_1/wygora02_wygprod2/sysman/log
=> No grid console running
Case 2: EM console is working on Node 1. Instance 1 is shut down (note: just the instance, not the server), and the EM console keeps working fine.
[oracle@wygora01 ~]$ showcrs
HA Resource                                   Target     State
-----------                                   ------     -----
ora.wygprod.db                                ONLINE     ONLINE on wygora01
ora.wygprod.wygprod.cs                        ONLINE     ONLINE on wygora02
ora.wygprod.wygprod.wygprod1.srv              ONLINE     OFFLINE
ora.wygprod.wygprod.wygprod2.srv              ONLINE     ONLINE on wygora02
ora.wygprod.wygprod1.inst                     OFFLINE    OFFLINE
ora.wygprod.wygprod2.inst                     ONLINE     ONLINE on wygora02

Convert Oracle Enterprise Edition to Standard (DownGrade)

Posted by Sagar Patil

Oracle Support referred me to Metalink Note 139642.1 to downgrade Enterprise Edition (EE) to Standard Edition (SE). They said we cannot simply downgrade the software, as that will not get rid of EE; I MUST export/import in order to go from EE to SE.

The database was just 30 GB, so I decided to use the old export/import mechanism.

1. Shutdown current Live database (15 mins) & put database services in Manual mode.

2. Startup database in restricted mode so that only DBA users can connect (15 mins)

3. List all invalid objects and spool them (30 mins)

4. Export all individual users from this database in DMP format (probably 3 hours)

5. Install new Standard Edition at different ORACLE_HOME (1 hour)

6. Create a new blank database (1 hour) & configure listener/tnsnnames files for it.

7. Configure init parameters for running import quicker (30 mins)

8. Import Individual Users ( normally double the time of export )

9. Run utlrp.sql to compile all objects at new instance (30 mins)

10. Compare invalid list of objects with spool received at step 3 ( 1 hour)

11. Carry out Functional Testing ( 3 hours)

12. If tests go OK, delete old database instance + software ( 2 hour)

13. Rename the new instance to the live name, i.e. testers (1 hour)

Change of Plan

Initially I went for the safe export/import option. I thought Oracle 10g Standard Edition did not have the transportable tablespace option enabled, but to my surprise Standard Edition can import (plug in) a transportable tablespace; it just cannot export one.

I was also worried about dependencies between objects and the time it would take to export and import the 24 GB of data, so I decided to try another method using the following plan.

The Process

 

1. First, Identify the self-contained tablespace(s) and alter them read only.

2. Second, you must generate the transportable tablespace set, containing the datafiles associated with the tablespace and a file containing the tablespace metadata in the form of an export file.

3. Third, ensure that the receiving disks have sufficient room to copy the datafiles and the export files. Use operating system commands.

4. Finally, plug the tablespace into the target database(s) and alter the tablespace online.

Step by Step

1. Connect to the database as sys sysdba

2. Identify the users associated to the tablespace in question.

Select Username, Default_Tablespace, Temporary_Tablespace
From Dba_Users
Where (Default_Tablespace = <Tablespace_Name>
Or Temporary_Tablespace = <Tablespace_Name>);

3. Using Sql/Plus or Worksheet connect as sys and compile script dbmsplts.sql

4. Grant execute on dbms_tts to public;

5. Create public synonym dbms_tts for sys.dbms_tts;

6. Execute dbms_tts.transport_set_check(<tablespace_name>,TRUE);
The TRUE parameter ensures the constraints relating to the objects are all in the same tablespace in question.

7. Select * from transport_set_violations;
There should be no rows returned, which indicates the tablespace is self-contained. If rows are returned then consider transporting the tablespace where the violated object(s) reside, this due to referential constraints linked to objects not in the same tablespace in question.

8.Select the file_name from dba_data_files. ***
Select File_name From dba_data_files Where tablespace_name in (<Tablespace_Name>,<Tablespace_Name>);

9.Set the tablespace in question to read only.
Alter Tablespace <Tablespace_name> Read Only;

10. Generate the transportable tablespace set containing the datafiles associated with the tablespace and a file containing the tablespace metadata in the form of an export file. The export is carried out interactively; when prompted for the username, connect as sys as sysdba with the SYS password.
The following parameter file is used for the transportable tablespace export:
File name: Exp_TTS.par
Transport_Tablespace=y
Tablespaces=(testers)
Triggers=y
Constraints=y
Grants=y
File=D:\testers\tts.dmp
Log=D:\testers\tts.log

11.  Copy the datafiles using operating system commands identified from the earlier step *** to the new location of the database renaming the datafiles to ODBA standards.

12. Once the tablespace's datafiles are copied to their new location, run the import utility to move the tablespace metadata into the new database.
Note the target database must not already have a tablespace of the same name.
File name: Imp_TTS.par
Tablespaces=(testers)
Datafiles=(<full paths of the datafiles copied in step 11>)
tts_owners=(testers)
File=D:\testers\tts.dmp
Log=s:\testers.log
Datafiles lists the copied datafiles associated with the tablespace; tts_owners indicates the owners of the objects in the tablespace, and these owners must already exist in the target database; File points to the metadata export file produced in step 10.

13. Finally, alter the tablespace out of read-only mode.
Alter tablespace <Tablespace_Name> Read Write;
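
Pulling the database-side pieces together, the SQL*Plus portion of the above looks roughly like this for the example tablespace TESTERS (steps 6 to 9, then step 13 after the import):

-- Steps 6/7: check the tablespace is self-contained
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('TESTERS', TRUE);
SELECT * FROM transport_set_violations;

-- Step 8: note the datafiles to copy
SELECT file_name FROM dba_data_files WHERE tablespace_name = 'TESTERS';

-- Step 9: freeze the tablespace before the metadata export and datafile copy
ALTER TABLESPACE testers READ ONLY;

-- ... export the metadata, copy the datafiles, import into the target ...

-- Step 13: back to read write
ALTER TABLESPACE testers READ WRITE;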

10g RAC Management Commands

Posted by Sagar Patil

Read more…

AWR : How to purge old Snapshots?

Posted by Sagar Patil

Read more…

ORA-29701 : Unable to connect to Cluster Manager while creating new ASM Instance

Posted by Sagar Patil

While installing an ASM instance I came across ORA-29701.

Reason: the CSS daemon is not running.

Solution (run as root):

cd $ORACLE_HOME/bin
localconfig delete
localconfig add

ons gsd vip listed OFFLINE at crs_stat

Posted by Sagar Patil

For more details see Metalink Note 259301.1 and download the showcrs shell script.
[oracle@ora02 ~]$ crs_stat -t
Name           Type         Target    State     Host
------------------------------------------------------------
ora....SM1.asm application  ONLINE    ONLINE    ora01
ora....01.lsnr application  ONLINE    ONLINE    ora01
ora....a01.gsd application  ONLINE    UNKNOWN   ora01
ora....a01.ons application  ONLINE    UNKNOWN   ora01
ora....a01.vip application  ONLINE    ONLINE    ora01
ora....SM2.asm application  ONLINE    ONLINE    ora02
ora....02.lsnr application  ONLINE    ONLINE    ora02
ora....a02.gsd application  ONLINE    UNKNOWN   ora02
ora....a02.ons application  ONLINE    UNKNOWN   ora02
ora....a02.vip application  ONLINE    ONLINE    ora02
ora.prod.db    application  ONLINE    ONLINE    ora01
ora....prod.cs application  OFFLINE   OFFLINE
ora....od1.srv application  ONLINE    UNKNOWN   ora01
ora....od2.srv application  ONLINE    UNKNOWN   ora02
ora....d1.inst application  ONLINE    ONLINE    ora01
ora....d2.inst application  ONLINE    ONLINE    ora02

[oracle@ora02 ~]$ srvctl config nodeapps -n ora02
ora02 prod2 /u01/app/oracle/product/10.2.0/db_1
Check VIP [oracle@ora02 ~]$ srvctl config nodeapps -n ora02 -a
VIP exists.: /oravip02.wyg-asp.com/10.13.100.14/255.255.255.0/bond0
Check GSD [oracle@ora02 ~]$ srvctl config nodeapps -n ora02 -g
GSD exists.
Check ONS [oracle@ora02 ~]$ srvctl config nodeapps -n ora02 -s
ONS daemon exists.
Check Listener [oracle@ora02 ~]$ srvctl config nodeapps -n ora02 -l
Listener exists.

[oracle@ora01 bin]$ srvctl start nodeapps -n ora01
CRS-1028: Dependency analysis failed because of:
CRS-0223: Resource 'ora.ora01.gsd' has placement error.
CRS-1028: Dependency analysis failed because of:
CRS-0223: Resource 'ora.ora01.ons' has placement error.
[oracle@ora01 bin]$ crs_stop ora.ora01.gsd -f
Attempting to stop `ora.ora01.gsd` on member `ora01`
Stop of `ora.ora01.gsd` on member `ora01` succeeded.
[oracle@ora01 bin]$ crs_stop ora.ora01.ons -f
Attempting to stop `ora.ora01.ons` on member `ora01`
Stop of `ora.ora01.ons` on member `ora01` succeeded.

To clear the state of the resource you have to use the force option in the crs_stop command

[oracle@ora01 bin]$ crs_stop ora.ora02.ons -f
Attempting to stop `ora.ora02.ons` on member `ora02`
Stop of `ora.ora02.ons` on member `ora02` succeeded.
[oracle@ora01 bin]$ crs_stop ora.ora02.gsd -f
Attempting to stop `ora.ora02.gsd` on member `ora02`
Stop of `ora.ora02.gsd` on member `ora02` succeeded.
[oracle@ora01 bin]$ srvctl start nodeapps -n ora02

[oracle@ora01 bin]$ srvctl start nodeapps -n ora01

[oracle@ora01 bin]$ srvctl start nodeapps -n ora02
[oracle@ora01 bin]$ showcrs
HA Resource Target State
----------- ------ -----
ora.ora01.ASM1.asm ONLINE ONLINE on ora01
ora.ora01.LISTENER_ora01.lsnr ONLINE ONLINE on ora01
ora.ora01.gsd ONLINE ONLINE on ora01
ora.ora01.ons ONLINE ONLINE on ora01
ora.ora01.vip ONLINE ONLINE on ora01
ora.ora02.ASM2.asm ONLINE ONLINE on ora02
ora.ora02.LISTENER_ora02.lsnr ONLINE ONLINE on ora02
ora.ora02.gsd ONLINE ONLINE on ora02
ora.ora02.ons ONLINE ONLINE on ora02
ora.ora02.vip ONLINE ONLINE on ora02
ora.prod.db ONLINE ONLINE on ora01
ora.prod.prod.cs OFFLINE OFFLINE
ora.prod.prod.prod1.srv ONLINE UNKNOWN on ora01
ora.prod.prod.prod2.srv ONLINE UNKNOWN on ora02
ora.prod.prod1.inst ONLINE ONLINE on ora01
ora.prod.prod2.inst ONLINE ONLINE on ora02

CRSCTL : Oracle Clusterware Service Administration

Posted by Sagar Patil

Read more…
