What is Oracle OPatch, How to use OPatch & List patches

Posted by Sagar Patil

OPatch is an Oracle-supplied utility that assists with applying interim patches to Oracle software. It is a Java-based utility that requires the Oracle Universal Installer to be installed; it is platform independent and runs on all supported operating systems.

OPatch supports the following:
* Applying an interim patch.
* Rolling back the application of an interim patch.
* Detecting conflict when applying an interim patch after previous interim patches have been applied. It also suggests the best options to resolve a conflict.
* Reporting on installed products and interim patches.

Prior to release 10.2 (OPatch for 10.2 is only compatible with 10.2 and nothing earlier), OPatch was available from MetaLink as a patch in and of itself (p2617419_10102_GENERIC.zip for release 10.1.0.2). With 10.2, OPatch is installed as part of the RDBMS software.

OPatch has several execution options:
* "lsinventory" = produces a report listing the patches that have already been applied.
* "-report" = reports to screen and will NOT actually apply the patch. Using "-report" is a good way of performing non-destructive testing of the patch installation process.
* "rollback" = undoes a patch that was applied. Example: opatch rollback -id 4667809

Applying a patch is as simple as:
* opatch lsinventory -> to see the list of patches already installed
* opatch apply <patchid> -> to actually apply the patch
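A typical session, sketched end to end (the staging path is illustrative; -report is the dry-run option described above):

$ export PATH=$PATH:$ORACLE_HOME/OPatch
$ cd /u01/staging/4751921      # directory where the patch zip was extracted
$ opatch apply -report         # dry run: report actions without applying anything
$ opatch apply                 # apply the patch for real
$ opatch lsinventory           # confirm the patch now appears in the inventory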

Example: Applying Patch 4751921

After the patch is installed:
1. Log in as sys as sysdba.
2. cd $ORACLE_HOME/rdbms/admin
3. spool catpatch_database_name
4. shutdown immediate
5. startup migrate
6. @catpatch.sql (this takes at least 1 hour)
7. After catpatch completes: select object_name, owner from dba_objects where status='INVALID';
(YOU WILL GET BETWEEN 230-3300 INVALID OBJECTS, DON'T PANIC)
8. @utlrp.sql
9. select object_name, owner from dba_objects where status='INVALID'; (you should now see close to 0 invalid objects)
10. shutdown immediate;
11. startup
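Put together as one SQL*Plus session, the post-patch steps look roughly like this (a sketch; the spool file name is illustrative, and ? expands to $ORACLE_HOME in SQL*Plus):

$ sqlplus / as sysdba
SQL> spool catpatch_mydb.log
SQL> shutdown immediate
SQL> startup migrate
SQL> @?/rdbms/admin/catpatch.sql
SQL> select owner, object_name from dba_objects where status='INVALID';
SQL> @?/rdbms/admin/utlrp.sql
SQL> select owner, object_name from dba_objects where status='INVALID';
SQL> spool off
SQL> shutdown immediate
SQL> startup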


Listing Patches

All patches that are installed with Oracle’s OPatch Utility (Oracle’s Interim Patch Installer) can be listed by invoking the opatch command with the lsinventory option. Here is an example:

$ cd $ORACLE_HOME/OPatch
$ opatch lsinventory
Invoking OPatch 10.2.0.1.0
Oracle interim Patch Installer version 10.2.0.1.0
Copyright (c) 2005, Oracle Corporation.  All rights reserved.

Installed Top-level Products (1):
Oracle Database 10g                                           10.2.0.1.0
There are 1 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
OPatch succeeded.

Another Method using SYS.REGISTRY$HISTORY Table

Since January 2006, SYS.REGISTRY$HISTORY contains a row for the most recent CPU patch applied, which gives a quick way of determining whether a CPU patch has been applied:

SELECT comments, action_time, id "PATCH_NUMBER", version FROM sys.registry$history WHERE action = 'CPU';

COMMENTS            ACTION_TIME  PATCH_NUMBER  VERSION
------------------  -----------  ------------  -------
view recompilation  42:39.2      6452863
view recompilation  59:20.3      6452863
view recompilation  23:58.7      6452863

SELECT comments, action_time, id "PATCH_NUMBER", version FROM sys.registry$history;

COMMENTS                  ACTION_TIME  PATCH_NUMBER  VERSION
------------------------  -----------  ------------  ----------
Upgraded from 10.2.0.1.0  40:28.8                    10.2.0.4.0
CPUApr2009                46:06.0      4             10.2.0.4
view recompilation        42:39.2      6452863
CPUOct2009                56:35.7      6             10.2.0.4
view recompilation        59:20.3      6452863
CPUJan2010                01:47.4      6             10.2.0.4
view recompilation        23:58.7      6452863

One other useful OPatch feature

Along with the log and inventory files, OPatch keeps a history file containing the date and action performed for each invocation. The history file is named opatch_history.txt and is located in the $ORACLE_HOME/cfgtoollogs/opatch directory, alongside the per-session log files. As an example of the kind of record kept there, a patch apply session was logged as:

[oracle@ opatch]$ pwd
/u01/app/oracle/product/10.2.0/cfgtoollogs/opatch
[oracle@opatch]$ ls -lrt
-rw-r--r-- 1 oracle oracle  98608 May 29  2009 opatch2009-05-29_11-37-50AM.log
-rw-r--r-- 1 oracle oracle 103814 Dec 14  2009 opatch2009-12-14_20-49-31PM.log
-rw-r--r-- 1 oracle oracle   5838 Mar 11  2010 opatch2010-03-11_16-01-00PM.log
-rw-r--r-- 1 oracle oracle  33878 Mar 29  2010 opatch2010-03-29_19-53-07PM.log

vi opatch2010-03-29_19-53-07PM.log

Applying patch 9173244…
INFO:Starting Apply Session at Mon Mar 29 19:53:42 BST 2010
INFO:ApplySession applying interim patch '9173244' to OH '/u01/app/oracle/product/10.2.0'
INFO:Starting to apply patch to local system at Mon Mar 29 19:53:42 BST 2010
INFO:Start the Apply initScript at Mon Mar 29 19:53:42 BST 2010
INFO:Finish the Apply initScript at Mon Mar 29 19:53:42 BST 2010
INFO:OPatch detected ARU_ID/Platform_ID as 226
INFO:Start saving patch at Mon Mar 29 19:53:44 BST 2010

Oracle Data Pump

Posted by Sagar Patil

– Data Pump runs only on the server side.
– You may initiate the export from a client but the job(s) themselves will run inside an Oracle server.
– No dump files (expdat.dmp) or log files will be created on your local machine.

How to use Data Pump?

To export the entire database use FULL=y; for a schema-level export use SCHEMAS=<USERNAMES>.

FULL: expdp system/XXX FULL=y DIRECTORY=dexport DUMPFILE=expdata.dmp LOGFILE=expdata.log
Schema: expdp system SCHEMAS=DOTCOM DIRECTORY=dexport DUMPFILE=expdata.dmp LOGFILE=expdata.log
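Both commands assume the directory object already exists and that the exporting user can write to it; a minimal setup sketch (the path is illustrative):

SQL> CREATE OR REPLACE DIRECTORY dexport AS '/u01/app/oracle/export';
SQL> GRANT READ, WRITE ON DIRECTORY dexport TO system;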

Data Pump can also be used over a database link, as in the example below:
SQL-S2> CREATE DATABASE LINK S1 CONNECT TO JKOOP IDENTIFIED BY PWD USING 'S1';

SQL-S2> CREATE DIRECTORY mydump AS '/app/oracle/admin/itctvt/export';

E:> expdp jkoop/pwd FULL=y DIRECTORY=mydump NETWORK_LINK=S1 DUMPFILE=byDB.dmp LOGFILE=byDB.log

Introduction to Monitoring Data Pump
DBA_DATAPUMP_JOBS : This view will show the active Data Pump jobs, their state, degree of parallelism, and the number of sessions attached.

SQL> select * from dba_datapump_jobs

OWNER_NAME JOB_NAME             OPERATION JOB_MODE STATE     DEGREE ATTACHED_SESSIONS
---------- -------------------- --------- -------- --------- ------ -----------------
JKOOP      SYS_EXPORT_FULL_01   EXPORT    FULL     EXECUTING      1                 1
JKOOP      SYS_EXPORT_SCHEMA_01 EXPORT    SCHEMA   EXECUTING      1                 1

DBA_DATAPUMP_SESSIONS : This view gives the SADDR values that assist in determining why a Data Pump session may be having problems. Join it to the V$SESSION view for further information (see the join sketch below).
SQL> SELECT * FROM DBA_DATAPUMP_SESSIONS

OWNER_NAME JOB_NAME             SADDR
---------- -------------------- --------
JKOOPMANN  SYS_EXPORT_FULL_01   225BDEDC
JKOOPMANN  SYS_EXPORT_SCHEMA_01 225B2B7C
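For example, a sketch of that join, showing the state of each attached session:

SELECT dp.owner_name, dp.job_name, s.sid, s.serial#, s.status
FROM dba_datapump_sessions dp, v$session s
WHERE dp.saddr = s.saddr;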

V$SESSION_LONGOPS : This view helps determine how well a Data Pump export is doing, essentially giving you a progress indicator through the MESSAGE column.
SQL> select username,opname,target_desc,sofar,totalwork,message from V$SESSION_LONGOPS

USERNAME OPNAME               TARGET_DES SOFAR TOTALWORK MESSAGE
-------- -------------------- ---------- ----- --------- ------------------------------------------------
JKOOP    SYS_EXPORT_FULL_01   EXPORT       132       132 SYS_EXPORT_FULL_01:EXPORT:132 out of 132 MB done
JKOOP    SYS_EXPORT_FULL_01   EXPORT        90       132 SYS_EXPORT_FULL_01:EXPORT:90 out of 132 MB done
JKOOP    SYS_EXPORT_SCHEMA_01 EXPORT        17        17 SYS_EXPORT_SCHEMA_01:EXPORT:17 out of 17 MB done
JKOOP    SYS_EXPORT_SCHEMA_01 EXPORT        19        19 SYS_EXPORT_SCHEMA_01:EXPORT:19 out of 19 MB done

DDL create commands for Objects (DBMS_METADATA)

Posted by Sagar Patil

Metadata API (DBMS_METADATA)
Prior to Oracle9i metadata could only be extracted using SQL statements, export utilities and the OCIDescribeAny interface, all of which are limited:
* The SQL approach is limited in that as versions change, so must your scripts.
* Using export with ROWS=N and an import with SHOW=Y will produce the text to allow you to recreate a schema, but it can require considerable editing.
* The OCIDescribeAny interface is limited in the objects it supports.
To solve these issues, Oracle9i introduced the DBMS_METADATA package, which can be used to retrieve object definitions as XML or SQL DDL:

FUNCTION Get_XML (
object_type IN VARCHAR2,
name IN VARCHAR2,
schema IN VARCHAR2 DEFAULT NULL,
version IN VARCHAR2 DEFAULT 'COMPATIBLE',
model IN VARCHAR2 DEFAULT 'ORACLE',
transform IN VARCHAR2 DEFAULT NULL)
RETURN CLOB;
FUNCTION Get_DDL (
object_type IN VARCHAR2,
name IN VARCHAR2,
schema IN VARCHAR2 DEFAULT NULL,
version IN VARCHAR2 DEFAULT 'COMPATIBLE',
model IN VARCHAR2 DEFAULT 'ORACLE',
transform IN VARCHAR2 DEFAULT NULL)
RETURN CLOB; 

These functions can be used as part of a select statement to produce output for groups of objects:

SET LONG 20000
SET PAGESIZE 0
SELECT DBMS_METADATA.get_ddl ('TABLE','EMP','SCOTT')
FROM dual;
CREATE TABLE "SCOTT"."EMP"
( "EMPNO" NUMBER(4,0),
"ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9),
"MGR" NUMBER(4,0),
"HIREDATE" DATE,
"SAL" NUMBER(7,2),
"COMM" NUMBER(7,2),
"DEPTNO" NUMBER(2,0),
CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ENABLE,
CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")
REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"
1 row selected.

The results can be spooled into a file for editing.
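The generated DDL above carries full storage and segment clauses. If you want more portable DDL, the session-level transform parameters of DBMS_METADATA can switch these off; a sketch:

BEGIN
DBMS_METADATA.set_transform_param(DBMS_METADATA.session_transform, 'STORAGE', FALSE);
DBMS_METADATA.set_transform_param(DBMS_METADATA.session_transform, 'SEGMENT_ATTRIBUTES', FALSE);
DBMS_METADATA.set_transform_param(DBMS_METADATA.session_transform, 'SQLTERMINATOR', TRUE); -- append a ; to each statement
END;
/
SELECT DBMS_METADATA.get_ddl('TABLE', 'EMP', 'SCOTT') FROM dual;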

Shell or UNIX OS Commands From PL/SQL

Posted by Sagar Patil

http://www.oracle-base.com/articles/8i/ShellCommandsFromPLSQL.php

Main procedure to kill a latch holder: locate_latchholder

Host Java procedure to fire OS commands: host_command

Identifying a Poor performing SQL

Posted by Sagar Patil

Our preferred v$sqlarea query reports physical disk I/O per statement execution. Hit ratios are informative but sometimes misleading, and logical I/O is less relevant: if a statement executes 1,000,000 logical I/Os but still takes less than one-tenth of a second, who cares? It is the total physical I/O that consumes nearly all the time and identifies the potentially problematic SQL. For example:

SELECT sql_text, executions,
ROUND(disk_reads / executions, 2) reads_per_run,
disk_reads, buffer_gets,
ROUND((buffer_gets - disk_reads)/
buffer_gets, 2) hit_ratio
FROM v$sqlarea
WHERE executions > 0
AND buffer_gets > 0
AND (buffer_gets - disk_reads) / buffer_gets < 0.80
ORDER by 3 desc ;

For example, two such statements might report:

Executions Reads_Per_Run Disk_Reads Buffer_Gets Hit_Ratio Sql_Text
---------- ------------- ---------- ----------- --------- --------
         2             3          6          19      0.68 SELECT ...
         2        1812.5       3625      178777      0.98 SELECT ...
From this view of the v$sqlarea table, we can immediately isolate all statements that are performing high numbers of physical reads. These statements might not necessarily be inefficient or poorly written, but they are prime candidates for investigation and further tuning.

How to find out hidden and system Parameters?

Posted by Sagar Patil

System Parameters. Purpose: reports all system parameters, including the hidden ones. Run it as SYS, since the X$ tables it reads are not visible to ordinary users.


select
nam.indx+1 numb,
nam.ksppinm name,
val.ksppstvl value,
nam.ksppity type,
val.ksppstdf is_default,
decode(bitand(nam.ksppiflg/256,1),
1,'True',
'False'
) is_session_modifiable,
decode(bitand(nam.ksppiflg/65536,3),
1,'Immediate',
2,'Deferred' ,
3,'Immediate',
'False'
) is_system_modifiable,
decode(bitand(val.ksppstvf,7),
1,'Modified',
4,'System Modified',
'False'
) is_modified,
decode(bitand(val.ksppstvf,2),
2,'True',
'False'
) is_adjusted,
nam.ksppdesc description
from
x$ksppi nam,
x$ksppsv val
where
nam.indx = val.indx
and
nam.ksppinm like '&&PARAMETER%';
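A usage sketch, assuming the query above has been saved as sysparams.sql (the file name is illustrative):

SQL> CONNECT / AS SYSDBA
SQL> @sysparams.sql
Enter value for parameter: _smu_debug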

Session Parameters


select
nam.indx+1 numb,
nam.ksppinm name,
val.ksppstvl value,
nam.ksppity type,
val.ksppstdf is_default,
decode(bitand(nam.ksppiflg/256,1),
1,'True',
'False'
) is_session_modifiable,
decode(bitand(nam.ksppiflg/65536,3),
1,'Immediate',
2,'Deferred' ,
3,'Immediate',
'False'
) is_system_modifiable,
decode(bitand(val.ksppstvf,7),
1,'Modified',
4,'System Modified',
'False'
) is_modified,
decode(bitand(val.ksppstvf,2),
2,'True',
'False'
) is_adjusted,
nam.ksppdesc description
from
x$ksppi nam,
x$ksppcv val
where
nam.indx = val.indx;

ORA 600 Corrupt UNDO/INDEX/TABLE Segment

Posted by Sagar Patil

I have seen this issue on a number of SAP systems. Here is the procedure I followed. First, map the file and block numbers reported in the alert log to a segment:

select segment_name,
segment_type,
owner,
tablespace_name
from dba_extents
where file_id = <file_id>
and <block> between block_id and block_id + blocks - 1;

For example, given this information:
Corrupt Block Found
TSN = 24, TSNAME = STATS_INDEXES_06
RFN = 23, BLK = 211531, RDBA = 96680523
OBJN = -1, OBJD = 102711, OBJECT = , SUBOBJECT =
I would execute:
select segment_name,
segment_type,
owner,
tablespace_name
from dba_extents
where file_id = 23
and 211531 between block_id and block_id + blocks - 1;

Alert log messages
Corrupt Block Found
TSN = 24, TSNAME = STATS_INDEXES_06
RFN = 23, BLK = 293131, RDBA = 96762123
OBJN = -1, OBJD = 103744, OBJECT = , SUBOBJECT =
SEGMENT OWNER = , SEGMENT TYPE =
ORA-00600: INTERNAL ERROR CODE, ARGUMENTS: [4194], [29], [42],
Alert log errors:
Errors in file /oracle/BPA/saptrace/background/bpa_smon_19615.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [4194], [29], [42], [], [], [], [], []
Errors in file /oracle/BPA/saptrace/background/bpa_smon_19615.trc:
ORA-00600: internal error code, arguments: [4193], [21415], [64890], [], [], [], [], []
Tue Aug 7 10:10:10 2007
Errors in file /oracle/BPA/saptrace/usertrace/bpa_ora_19821.trc:
ORA-00600: internal error code, arguments: [4194], [29], [42], [], [], [], [], []

Oracle Support: as this is undo corruption, restore from backup may be the best and fastest solution. Can this be an option here?
– Restoring from the last backup is not an easy option for us. It's a 3.5 terabyte database; it costs time and resources which I didn't have here.

DBVERIFY
It can be used to perform a physical data structure integrity check on data files, whether the database is online or offline.

dbv file=system_ts_01.dbf blocksize=8192 logfile=dbvsys_ts.log feedback=1000

file      = data file to verify (here system_ts_01.dbf)
start     = first block to check (e.g. 9; defaults to the start of the file)
end       = last block to check (e.g. 25; defaults to the end of the file)
blocksize = block size of the file (8192)
logfile   = file to which logging information is written
feedback  = display one dot on the screen for every 1000 blocks processed
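For example, to rescan just the suspect range around the corrupt block reported in the alert log above (block 211531 in file 23), a sketch with an illustrative datafile path:

dbv file=/oracle/BPA/sapdata1/stats_indexes_06.dbf blocksize=8192 start=211500 end=211600 logfile=dbv_blk.log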

Oracle Support Communication:
This database was restored from the live system last Monday. Everything went very well and the database was running fine until Monday morning. Yesterday we reported this error for the first time, after running the system for 5/6 days. A restart in the afternoon fixed those ORA-600 errors, until this morning when we are getting ORA-600 at all times. It is necessary to identify what went wrong before thinking of a restore. Personally I don't see the point of restoring the entire database when there is no guarantee that these errors will go away after the restore.
As this is undo corruption, restore from backup may be the best and fastest solution. Can this be an option here?
– Restoring from the last backup is not an easy option for us. It's a 3.5 terabyte database; it costs time and resources which we don't have available.
ACTION PLAN
============
Let’s start checking objects reported on recovery processes:
fno: 81 blk: 322532
fno: 18 blk: 125593
fno: 18 blk: 125673
fno: 116 blk: 266003

SELECT tablespace_name, segment_type, owner, segment_name
FROM dba_extents
WHERE file_id = &fno
and &blk between block_id AND block_id + blocks - 1;
TABLESPACE_NAME  SEGMENT_TYPE  OWNER  SEGMENT_NAME
---------------  ------------  -----  ------------
PSAPUNDO         TYPE2 UNDO    SYS    _SYSSMU37$
no rows selected
no rows selected
PSAPUNDO         TYPE2 UNDO    SYS    _SYSSMU42$

Oracle Support response: We're facing undo corruption here. Using the _corrupted or _offline parameters will be helpful to export the full database, but if there are active transactions (as I guess) this database will be desupported after setting them. Let's try to dump the undo headers to check whether there are active transactions:

alter system dump undo header <undo>;
Depending on what the header dump in the trace file tells us, we can decide what to do. If there are no active transactions, we can try to drop these undo segments. Otherwise, we can try to restore just these datafiles first (the three identified above) and apply recovery to them.
== Check if there are active transactions before dropping UNDO segments ==
In the trace file produced by the dump, locate the section starting with
TRN TBL::
which is the beginning of the transaction table dump. The state column shows the transaction status: 9 means committed, 10 means active. In our case, all rows were 9s.
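A sketch of that check (the segment name comes from the dba_extents output above; the trace file appears in user_dump_dest):

SQL> ALTER SYSTEM DUMP UNDO HEADER '_SYSSMU37$';
-- then inspect the state column under TRN TBL:: in the trace file, as described above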

ACTION PLAN
============
Since all transactions are committed, we can try to drop these undo segments. Because we are using AUM (Automatic Undo Management), we need to set _smu_debug_mode to allow actions on undo segments, or switch undo management to manual.
SQL> alter session set "_smu_debug_mode"=4;
SQL> alter rollback segment "_SYSSMU37$" offline;
-> Check if the undo segment went offline in DBA_ROLLBACK_SEGS. If so:
SQL> drop rollback segment "_SYSSMU37$";
SQL> alter rollback segment "_SYSSMU42$" offline;
-> Check if the undo segment went offline in DBA_ROLLBACK_SEGS. If so:
SQL> drop rollback segment "_SYSSMU42$";
After a clean database restart, things were sorted.

Estimate Undo Retention , Undo Optimization

Posted by Sagar Patil

The undo_retention parameter will not guarantee retention unless you define the RETENTION GUARANTEE clause at the tablespace level.
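For example (tablespace name illustrative):

ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
-- and to revert:
ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;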

Actual Undo Size

SELECT SUM(a.bytes) "UNDO_SIZE"
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#;

UNDO_SIZE
----------
209715200

Undo Blocks per Second

SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
"UNDO_BLOCK_PER_SEC"
FROM v$undostat;

UNDO_BLOCK_PER_SEC
------------------
3.12166667

DB Block Size
SELECT TO_NUMBER(value) "DB_BLOCK_SIZE [Byte]"
FROM v$parameter
WHERE name = 'db_block_size';

DB_BLOCK_SIZE [Byte]
--------------------
4096

Calculation for Optimal Undo Retention
209,715,200 / (3.12166667 * 4,096) = 16,401 [Sec]
Using inline views, you can do it all in one query:

SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
ROUND((d.undo_size / (to_number(f.value) *
g.undo_block_per_sec))) "OPTIMAL UNDO RETENTION [Sec]"
FROM (
SELECT SUM(a.bytes) undo_size
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#
) d,
v$parameter e,
v$parameter f,
(
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
undo_block_per_sec
FROM v$undostat
) g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size'
/
ACTUAL UNDO SIZE [MByte]
------------------------
200
UNDO RETENTION [Sec]
--------------------
10800
OPTIMAL UNDO RETENTION [Sec]
----------------------------
16401

Calculate Needed UNDO Size for given Database Activity

SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
(TO_NUMBER(e.value) * TO_NUMBER(f.value) *
g.undo_block_per_sec) / (1024*1024)
"NEEDED UNDO SIZE [MByte]"
FROM (
SELECT SUM(a.bytes) undo_size
FROM v$datafile a,
v$tablespace b,
dba_tablespaces c
WHERE c.contents = 'UNDO'
AND c.status = 'ONLINE'
AND b.name = c.tablespace_name
AND a.ts# = b.ts#
) d,
v$parameter e,
v$parameter f,
(
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
undo_block_per_sec
FROM v$undostat
) g
WHERE e.name = 'undo_retention'
AND f.name = 'db_block_size'
/
ACTUAL UNDO SIZE [MByte]
------------------------
200
UNDO RETENTION [Sec]
--------------------
10800
NEEDED UNDO SIZE [MByte]
------------------------
131.695313

Complete (recursive) object dependency

Posted by Sagar Patil

The easiest approach is to use the Oracle utldtree.sql script, located in the %ORACLE_HOME%\rdbms\admin\ directory.

For each schema that you want to be able to get a dependency list, you will open up a SQL*Plus session and execute the script
scott@> @d:\oracle\ora92\rdbms\admin\utldtree.sql
Don't worry about the ORA-00942 errors you will get; this script tries to drop objects before it re-creates them, so you'll get errors when the objects don't exist yet.
Once the script has been run, you can get a listing of dependent objects for a particular object by calling

scott@jcave > exec deptree_fill('table', 'scott', 'emp' );
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.01
scott@jcave > select * from ideptree;
DEPENDENCIES
-------------------------------------------------------------------------------- 
TABLE SCOTT.EMP
   VIEW SCOTT.V2
   PROCEDURE SCOTT.INCREASE_SALARY
   VIEW SCOTT.V3
   PROCEDURE SCOTT.CALL_ME
   TRIGGER SCOTT.ODBC_EMP_TST
   <no>
7 rows selected.
Elapsed: 00:00:08.03

Resumable Space Allocation / SUSPEND RESUME transactions

Posted by Sagar Patil

Long running operations such as imports and batch processes sometimes fail because the server is unable to allocate more extents for an object. This may be because the object has reached max_extents or because there isn't sufficient room in the tablespace for the object to expand. In previous releases the operation would have to be rerun, possibly with some manual cleanup necessary. In Oracle9i, operations that would fail due to space allocation problems can be suspended and restarted once the problem is fixed.
* Resumable Mode
* Timeout Period
* AFTER SUSPEND Trigger
* Views
* DBMS_RESUMABLE Package
* Restrictions

Resumable Mode
Operations can be made resumable by explicitly switching the session mode using:
ALTER SESSION ENABLE RESUMABLE;
ALTER SESSION DISABLE RESUMABLE;

When the session is in resumable mode, any operations that result in the following errors will be suspended:
* Out of space condition.
* Maximum extents reached condition.
* Space quota exceeded condition.
Once the error is corrected the operations will automatically resume.
Timeout Period

Operations will remain in a suspended state until the timeout period, 2 hours by default, is reached. The timeout period can be modified using any of the following commands:
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600;
EXECUTE Dbms_Resumable.Set_Timeout(3600);
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600 NAME 'insert into table';

The final example can be used to assign a name to the suspended session.
AFTER SUSPEND Trigger
Since suspended operations do not produce error messages, an alternative method is required to notify users/DBAs so that the problem can be corrected. Typically, these procedures are initiated using the AFTER SUSPEND trigger, which always fires as an autonomous transaction. This trigger can be used to insert rows into an error table or to email an operator using the UTL_SMTP package:

CREATE OR REPLACE TRIGGER resumable_default
AFTER SUSPEND
ON DATABASE
DECLARE
-- Declare any variables 
BEGIN
-- Alter default timeout period. 
Dbms_Resumable.Set_Timeout(3600);
-- Perform resumable space allocation 
-- notification code here. 
COMMIT;
END;
/

Views
Information about suspended sessions can be viewed via the USER_ and DBA_RESUMABLE views. When a session is suspended a row is added to the V$SESSION_WAIT view with the EVENT column containing “suspended on space error”.
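For example, a sketch that lists what is currently suspended and why:

SELECT name, status, error_number, error_msg, suspend_time
FROM dba_resumable
WHERE status = 'SUSPENDED';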
DBMS_RESUMABLE Package
* ABORT(sessionID) – Ends the specified suspended session. Caller must be the owner of the session with sessionID, have ALTER SYSTEM privilege, or have DBA privileges.
* GET_SESSION_TIMEOUT(sessionID) – Returns the timeout period in seconds for the specified session, or -1 if the session does not exist.
* SET_SESSION_TIMEOUT(sessionID, timeout) – Sets the timeout in seconds of the specified session with immediate effect.
* GET_TIMEOUT() – Returns the timeout period in seconds for the current session.
* SET_TIMEOUT(timeout) – Sets the timeout in seconds of the current session with immediate effect.
* DBMS_RESUMABLE.SPACE_ERROR_INFO(…) – Returns information about the space error when called from within an AFTER SUSPEND trigger.
Restrictions
* When using dictionary managed tablespaces, DDL CREATE operations with explicit MAXEXTENTS which run out of extents are aborted because the MAXEXTENTS parameter cannot be modified until the object is created. Extension of existing objects is resumable.
* If rollback segments are placed in dictionary managed tablespaces any RBS errors will not be resumable. Oracle recommends that automatic undo management is configured or rollback segments are placed in a locally managed tablespace.
* Remote operations are not supported in resumable mode.
* Individual parallel server processes may be suspended while others proceed without any problem. When the error is corrected any suspended operations will resume normally. If aborted, the whole transaction is aborted.

Running SHELL script from Oracle PL/SQL

Posted by Sagar Patil
exec dbms_java.grant_permission ('DBA_ADMIN', 'java.io.FilePermission','/usr/bin/ps', 'execute');
exec dbms_java.grant_permission ('DBA_ADMIN','java.lang.RuntimePermission','*','writeFileDescriptor' );

Create or replace and compile
java source named "Util"
as
import java.io.*;
import java.lang.*;
public class Util extends Object
{
public static int RunThis(String args)
{
Runtime rt = Runtime.getRuntime();
int        rc = -1;
try
{
Process p = rt.exec(args);
int bufSize = 4096;
BufferedInputStream bis =
new BufferedInputStream(p.getInputStream(), bufSize);
int len;
byte buffer[] = new byte[bufSize];
// Echo back what the program spit out
while ((len = bis.read(buffer, 0, bufSize)) != -1)
System.out.write(buffer, 0, len);
rc = p.waitFor();
}
catch (Exception e)
{
e.printStackTrace();
rc = -1;
}
finally
{
return rc;
}
}
}
/
Java created.
create or replace
function RUN_CMD(p_cmd in varchar2) return number
as
language java
name 'Util.RunThis(java.lang.String) return integer';
/
create or replace procedure RC(p_cmd in varchar2)
as
x number;
begin
x := run_cmd(p_cmd);
end;
/
variable x number;
set serveroutput on
exec dbms_java.set_output(100000);
exec :x := RUN_CMD('/usr/bin/ls /tmp');

SQL> exec :x := RUN_CMD('/usr/bin/ls /tmp');
10439.err
10757.err
11319.err
13399.err
13654.err
1429.err
14792.err
20648.err
20994.err
2139.err
21690.err

What is a database EVENT and how does one set events?

Posted by Sagar Patil

Oracle trace events are useful for debugging the Oracle database server.

The following two examples are simply to demonstrate syntax. Refer to later notes on this page for an explanation of what these particular events do.

Events can be activated either by adding them to the INIT.ORA parameter file, e.g.:
event='1401 trace name errorstack, level 12'
or by issuing an ALTER SESSION SET EVENTS command, e.g.:
alter session set events '10046 trace name context forever, level 4';
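For example, to trace the current session including binds and waits and then switch tracing off again (the trace file is written to the directory pointed at by user_dump_dest):

alter session set events '10046 trace name context forever, level 12';
-- ... run the statements to be traced ...
alter session set events '10046 trace name context off';
select value from v$parameter where name = 'user_dump_dest';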

Frequently Used Events:

* 10046 trace name context forever, level 4 = traces SQL statements and shows bind variables in the trace output.
* 10046 trace name context forever, level 8 = shows wait events in the SQL trace files.
* 10046 trace name context forever, level 12 = shows both bind variables and wait events in the SQL trace files.
* 1401 trace name errorstack, level 12 / 1401 trace name errorstack, level 4 / 1401 trace name processstate = dumps out trace information if an ORA-1401 "inserted value too large for column" error occurs. The 1401 can be replaced by any other Oracle Server error code that you want to trace.
* 60 trace name errorstack level 10 = shows where in the code Oracle gets a deadlock (ORA-60), and may help to diagnose the problem.
* 10210 / 10211 / 10231 trace name context forever, level 10 = enable extra block checking that helps detect database block corruptions.
* 10049 trace name context forever, level 2 = memory protect cursor.
* 10210 trace name context forever, level 2 = data block check.
* 10211 trace name context forever, level 2 = index block check.
* 10235 trace name context forever, level 1 = memory heap check.
* 10262 trace name context forever, level 300 = allow 300 bytes of memory leak per connection.

Where to find Event Numbers?

Event numbers can be found in $ORACLE_HOME/rdbms/mesg/oraus.msg

cd $ORACLE_HOME/rdbms/mesg
vi oraus.msg
/10015

10015, 00000, "Undo Segment Recovery"
// *Cause:
// *Action:

Convert Oracle Enterprise Edition to Standard (DownGrade)

Posted by Sagar Patil

Oracle support referred me to Metalink Note 139642.1 to downgrade Enterprise Edition (EE) to Standard Edition (SE). They said we cannot simply downgrade the software, as that will not get rid of EE; I MUST export/import in order to go from EE to SE.

The database was just 30GB, so I decided to use the old export/import mechanism.

1. Shutdown current Live database (15 mins) & put database services in Manual mode.

2. Startup database in restricted mode so that only DBA users can connect (15 mins)

3. List all invalid objects and spool them (30 mins)

4. Export all individual users from this database in DMP format (probably 3 hours)

5. Install new Standard Edition at different ORACLE_HOME (1 hour)

6. Create a new blank database (1 hour) & configure listener/tnsnnames files for it.

7. Configure init parameters for running import quicker (30 mins)

8. Import Individual Users ( normally double the time of export )

9. Run utlrp.sql to compile all objects at new instance (30 mins)

10. Compare invalid list of objects with spool received at step 3 ( 1 hour)

11. Carry out Functional Testing ( 3 hours)

12. If tests go OK, delete old database instance + software ( 2 hour)

13. Rename the new instance to the Live name, i.e. testers (1 hour)

Change of Plan

Initially I went for the safe option, export/import. I thought Oracle 10g Standard Edition doesn't have the transportable tablespace option enabled, but to my surprise you can import a transportable tablespace into Standard Edition; you just can't export one out.

I was also worried about the dependencies between objects and the time it would take to export and import the 24GB of data. I decided to try another method, through the following plan.

The Process

 

1. First, Identify the self-contained tablespace(s) and alter them read only.

2. Second, you must generate the transportable tablespace set containing the datafiles associated to the tablespace and a file containing the table space metadata in the form of an export file.

3. Third, ensure that the receiving disks have sufficient room to copy the datafiles and the export files. Use operating system commands.

4. Finally, plug the tablespace into the target database(s) and alter the tablespace online.

Step by Step

1. Connect to the database as sys sysdba

2. Identify the users associated to the tablespace in question.

Select Username, Default_Tablespace, Temporary_Tablespace
From Dba_Users
Where (Default_Tablespace = <Tablespace_Name>
Or Temporary_Tablespace = <Tablespace_Name>);

3. Using SQL*Plus or Worksheet, connect as sys and compile the script dbmsplts.sql

4. Grant execute on dbms_tts to public;

5. Create public synonym dbms_tts for sys.dbms_tts;

6. Execute dbms_tts.transport_set_check(<tablespace_name>,TRUE);
The TRUE parameter ensures the constraints relating to the objects are all in the same tablespace in question.

7. Select * from transport_set_violations;
There should be no rows returned, which indicates the tablespace is self-contained. If rows are returned then consider transporting the tablespace where the violated object(s) reside, this due to referential constraints linked to objects not in the same tablespace in question.

8. Select the file_name from dba_data_files. ***
Select File_name From dba_data_files Where tablespace_name in (<Tablespace_Name>,<Tablespace_Name>);

9. Set the tablespace in question to read only.
Alter Tablespace <Tablespace_name> Read Only;

10. Generate the transportable tablespace set containing the datafiles associated to the tablespace and a file containing the tablespace metadata in the form of an export file. The export is carried out interactively; when prompted for the username, it must be sys as sysdba, with whatever password is set for SYS.
The following identifies the parameters used for the transportable tablespace export:
File Name: Exp_TTS.par
Transport_Tablespace=y
Tablespaces=(testers)
Triggers=y
Constraints=y
Grants=y
File=D:\testers\tts.dmp
Log=D:\testers\tts.log

11. Copy the datafiles identified in the earlier step *** to the new location of the database using operating system commands, renaming the datafiles to ODBA standards.

12. Once the tablespace's datafiles are copied to their new location, run the import utility to move the tablespace metadata into the new database.
Note: the target database must not already have a tablespace of the same name.
File Name: Imp_TTS.par
Transport_Tablespace=y
Tablespaces=(testers)
Datafiles=(<paths of the datafiles copied in step 11>)
tts_owners=(testers) : the owners of objects in the tablespace; these users must exist in the target database
File=s:\testers.dmp
Log=s:\testers.log

13. Finally, alter the tablespace back out of read-only mode (the complete flow is sketched after this step).
Alter tablespace <Tablespace_Name> Read Write;
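Pulled together, the whole flow looks roughly like this (a sketch; tablespace name, paths and credentials are illustrative):

-- on the source (EE) database:
SQL> EXECUTE dbms_tts.transport_set_check('TESTERS', TRUE);
SQL> SELECT * FROM transport_set_violations;    -- expect no rows
SQL> ALTER TABLESPACE testers READ ONLY;
$ exp parfile=Exp_TTS.par                       -- log in as "sys as sysdba" when prompted
$ cp /oradata/src/testers01.dbf /oradata/tgt/   -- copy the datafile(s)
-- on the target (SE) database:
$ imp parfile=Imp_TTS.par                       -- again as "sys as sysdba"
SQL> ALTER TABLESPACE testers READ WRITE;       -- and likewise on the source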

Speeding-Up Oracle Export/Import Migration

Posted by Sagar Patil

Step 1 – export
Step 2 – import data
Step 3 – import everything else

1. Assuming a full export, use direct=y if you're not using any predicates in the export. Set your buffer to be big (at least 10MB).
2. Alter all your constraints NOVALIDATE prior to export if you can bring the app down and take a consistent export. This helps reinstate the constraints instantly on import, without forcing Oracle to validate constraints on massive tables.
3. Set workarea_size_policy=manual.
4. Set sort_area_size=6GB (yes, 6 GB). Combined with 3, this lets Oracle build indexes with more space in memory for the sorts, without the need to spill to disk.
5. Set massive online redo logs: 2GB each, 2 members each, 6 groups. After the import, reset everything back down to normal. (These settings are sketched below.)
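A sketch of those settings (values taken from the list above; revert them once the import completes):

# init.ora entries for the duration of the import
workarea_size_policy = manual
sort_area_size = 6442450944        # about 6 GB, per point 4 above

-- and big redo log groups added before the import (paths illustrative):
ALTER DATABASE ADD LOGFILE GROUP 4
('/u01/oradata/redo4a.log', '/u02/oradata/redo4b.log') SIZE 2G;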

 

You can estimate the time remaining for an export to finish using:

select * from (
select opname, target, sofar, totalwork,
units, elapsed_seconds,
round(elapsed_seconds * (totalwork/sofar - 1)) time_remaining_seconds, message
from v$session_longops order by start_time desc)
where rownum <= 1;

Alternatively, use TOAD and click on "Session Browser".

Look at the Time Remaining column to get a rough estimate.
