Replication Error / "QUIESCED" Group: How to Use DBMS_DEFER_SYS.EXECUTE_ERROR & DBMS_DEFER_SYS.DELETE_ERROR

Posted by Sagar Patil

Delete all errors from replication sites

Check whether there are any errors: "select * from deferror;"

select sname, master, status from sys.dba_repcat;  -- make sure the status is not QUIESCED
select id, timestamp, master, status from dba_repcatlog;

If any rows are returned, use the following API, replacing &ID with the IDs returned above:

repA> exec dbms_repcat.purge_master_log(&ID,'repa','scott_repg');
repB> exec dbms_repcat.purge_master_log(&ID,'repa','scott_repg');

Errors in administration tasks, such as adding groups, are found in DBA_REPCATLOG. This view also holds admin tasks that are still pending. For example:

SELECT oname, request, message
FROM dba_repcatlog
WHERE status = 'ERROR';

Once transactions are submitted, any errors are logged in the DEFERROR view. There is a procedure to reapply the error, DBMS_DEFER_SYS.EXECUTE_ERROR. Or you can delete the error using DBMS_DEFER_SYS.DELETE_ERROR. As a last resort, you can remove errors by deleting entries from the table SYSTEM.DEF$_ERROR.
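For example, to re-attempt everything currently in DEFERROR, you can loop over the view (a hedged sketch; run as the replication administrator, and swap in DELETE_ERROR to purge instead, which is destructive):

```sql
-- Sketch: re-execute every errored deferred transaction recorded in DEFERROR.
-- Replace EXECUTE_ERROR with DELETE_ERROR to discard the errors instead.
BEGIN
  FOR e IN (SELECT deferred_tran_id, destination FROM deferror) LOOP
    DBMS_DEFER_SYS.EXECUTE_ERROR(
      deferred_tran_id => e.deferred_tran_id,
      destination      => e.destination);
  END LOOP;
  COMMIT;
END;
/
```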

The EXECUTE_ERROR procedure forces execution of a transaction that originally failed, leaving a record in DEFERROR. You might call this procedure if you have repaired the error (for example, a conflict in the advanced replication option) and you now wish to re-attempt the transaction. If another error occurs during EXECUTE_ERROR, the attempt is aborted and the last error encountered is returned as an exception. Upon successful completion, the procedure deletes the entries from the DEFERROR data dictionary view. If the original call has been applied to all other destinations, then the procedure also removes the entries from DEFCALL and DEFTRAN.

As with the DELETE_ERROR and DELETE_TRAN procedures, you may pass NULLs to indicate wildcards.

DBMS_DEFER_SYS.EXECUTE_ERROR (
deferred_tran_id IN VARCHAR2,
destination IN VARCHAR2)

deferred_tran_id
The ID number from the DefError view of the deferred transaction that you want to re-execute. If this parameter is null, all transactions queued for DESTINATION that originated from the DEFERRED_TRAN_DB are re-executed.

destination
The fully qualified database name from the DefError view of the database to which the transaction was originally queued. This parameter must not be null.

Restrictions
Note the following restrictions on calling EXECUTE_ERROR:
The destination parameter may not be NULL.
The deferred_tran_id and deferred_tran_db parameters must either both be NULL or both be NOT NULL. If they are NULL, all transactions in DEFERROR destined for destination are applied.
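A hedged example of a single call (the transaction ID and destination below are illustrative, not taken from a real DEFERROR entry):

```sql
-- Re-attempt one failed transaction against its original destination:
EXEC DBMS_DEFER_SYS.EXECUTE_ERROR('1.12.2904', 'REPB.US.ORACLE.COM');

-- With a NULL transaction ID, all errored transactions queued for that
-- destination are re-executed:
EXEC DBMS_DEFER_SYS.EXECUTE_ERROR(NULL, 'REPB.US.ORACLE.COM');
```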

DELETE_ERROR deletes a transaction from the DefError view. If there are no other DefTranDest or DefError entries for the transaction, it is deleted from the DefTran and DefCall views as well.

DBMS_DEFER_SYS.DELETE_ERROR(
deferred_tran_id IN VARCHAR2,
destination IN VARCHAR2)

deferred_tran_id
The ID number from the DefError view of the deferred transaction that you want to remove from the DefError view. If this parameter is null, all transactions meeting the requirements of the other parameters are removed.

destination
The fully qualified database name from the DefError view of the database to which the transaction was originally queued. If this parameter is null, all transactions meeting the requirements of the other parameters are removed from the DefError view.
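Usage follows the same pattern as EXECUTE_ERROR (values illustrative; deleting an error discards those changes for that destination, so use with care):

```sql
-- Remove one errored transaction from DEFERROR:
EXEC DBMS_DEFER_SYS.DELETE_ERROR('1.12.2904', 'REPB.US.ORACLE.COM');

-- NULL acts as a wildcard: remove every error queued for REPB
EXEC DBMS_DEFER_SYS.DELETE_ERROR(NULL, 'REPB.US.ORACLE.COM');
```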

Oracle Advanced Replication | Relocating Master Definition Site

Posted by Sagar Patil

I have a live 4-node, 4-way replication cluster between the repA, repAA, repB and repBB instances. repA is the master definition site, which I want to move to repAA.

Database release: 9.2.0.8 on HP-UX

All Master Sites Are Available

DBMS_REPCAT.RELOCATE_MASTERDEF  was running for more than 2 hours

Observations:

  • Locate which DBMS jobs are running in the system:

    select /*+ RULE */ what from dba_jobs
    where job in (select /*+ RULE */ job from dba_jobs_running);

    dbms_repcat.do_deferred_repcat_admin('"O5_A-B"', FALSE);

  • A SQLPLUS/TOAD session showed the following SQL waiting on an enqueue:

    SELECT master FROM system.repcat$_repcat
    WHERE sname = :b1 AND gowner = :b2 FOR UPDATE;

  • I wasted half a day before noticing that "Not all dbms_repcat procedures have an auto-commit", as described in Note:1026571.6. A COMMIT after DBMS_REPCAT.RELOCATE_MASTERDEF solved my problem.

    repA>SELECT gname, substr(dblink,1,20), masterdef "Is MasterDefinition Site"
    FROM dba_repsites
    WHERE gname in ('O1_A-B_ALL','O1_ERROR_A-B')
    order by 3 desc;

    GNAME SUBSTR(DBLINK,1,20) I
    —————————— ——————– –
    O1_ERROR_A-B REPA.US.ORACLE.COM Y
    O1_A-B_ALL REPA.US.ORACLE.COM Y
    O1_ERROR_A-B REPB.US.ORACLE.COM N
    O1_A-B_ALL REPB.US.ORACLE.COM N
    O1_A-B_ALL REPBB.US.ORACLE.COM N
    O1_A-B_ALL REPAA.US.ORACLE.COM N
    O1_ERROR_A-B REPAA.US.ORACLE.COM N
    O1_ERROR_A-B REPBB.US.ORACLE.COM N

    repA>show user
    USER is "REPADMIN"
    repA>BEGIN
    DBMS_REPCAT.RELOCATE_MASTERDEF (
    gname => 'O1_ERROR_A-B',
    old_masterdef => 'repA',
    new_masterdef => 'repAA',
    notify_masters => TRUE,
    include_old_masterdef => TRUE);
    END;
    /

    COMMIT;

    PL/SQL procedure successfully completed.

    repA>SELECT gname, substr(dblink,1,20), masterdef "Is MasterDefinition Site"
    FROM dba_repsites
    WHERE gname in ('O1_A-B_ALL','O1_ERROR_A-B')
    order by 3 desc;

    GNAME SUBSTR(DBLINK,1,20) I
    —————————— ——————– –
    O1_ERROR_A-B REPAA.US.ORACLE.COM Y
    O1_A-B_ALL REPA.US.ORACLE.COM Y
    O1_ERROR_A-B REPA.US.ORACLE.COM N
    O1_A-B_ALL REPB.US.ORACLE.COM N
    O1_A-B_ALL REPBB.US.ORACLE.COM N
    O1_A-B_ALL REPAA.US.ORACLE.COM N
    O1_ERROR_A-B REPBB.US.ORACLE.COM N
    O1_ERROR_A-B REPB.US.ORACLE.COM N

    8 rows selected.

    Relocate the master definition site (master definition site available)

    BEGIN
    DBMS_REPCAT.RELOCATE_MASTERDEF (
    gname => 'hr_repg',
    old_masterdef => 'orc1.world',
    new_masterdef => 'orc2.world',
    notify_masters => TRUE,
    include_old_masterdef => TRUE);
    END;
    /

    Relocate the master definition site (the old master definition site not available)

    BEGIN
    DBMS_REPCAT.RELOCATE_MASTERDEF (
    gname => 'hr_repg',
    old_masterdef => 'orc1.world',
    new_masterdef => 'orc2.world',
    notify_masters => TRUE,
    include_old_masterdef => FALSE);
    END;
    /

    Reference :

    DBMS_REPCAT.RELOCATE_MASTERDEF (
    gname IN VARCHAR2,
    old_masterdef IN VARCHAR2,
    new_masterdef IN VARCHAR2,
    notify_masters IN BOOLEAN := TRUE,
    include_old_masterdef IN BOOLEAN := TRUE)

    gname The name of the object group whose master definition you want to relocate.

    old_masterdef The fully qualified database name of the current master definition site.

    new_masterdef The fully qualified database name of the existing master site that you want to make the new master definition site.

    notify_masters If NOTIFY_MASTERS is TRUE, the procedure synchronously multicasts the change to all masters (including OLD_MASTERDEF only if INCLUDE_OLD_MASTERDEF is TRUE). If any master does not make the change, the changes are rolled back at all masters.

    include_old_masterdef If NOTIFY_MASTERS is TRUE and INCLUDE_OLD_MASTERDEF is also TRUE, the old master definition site is also notified of the change.

Replication Views / Prerequisites for Advanced Replication

Posted by Sagar Patil

Prerequisites for Advanced Replication

1. Must have the distributed and replication options installed
Must run $ORACLE_HOME/rdbms/admin/catrep.sql as INTERNAL

2. Password for repsys, repadmin, and ACME should be the same on master and master definition site

3. Mandatory INIT.ORA parameters on both databases:
shared_pool_size = 25M
global_names = true
job_queue_processes = 2
open_links = 6
job_queue_interval = <less than interval for dba_jobs submitted>
job_queue_keep_connections = false

Master Definition Site
The database that has all the objects that you wish to replicate. All suspend and resume commands are to be entered from here, and will be automatically propagated to the master sites.

Master Site
The database that will hold the replicated objects. One master definition site can be replicated to multiple master sites.

Note that as this is only a one-way replication, inserts, updates and deletes should only be performed on the master definition site. Otherwise, a conflict resolution strategy must be implemented.

Deferred Transactions and Remote Procedure Calls
Oracle’s advanced replication option relies primarily on deferred transactions and remote procedure calls (RPCs). When you commit a transaction against a replicated table, for example, the replication support triggers queue a deferred transaction to do your bidding in one or more remote databases.
In addition to providing many of the underpinnings for the advanced replication option, the packages used to create and manipulate deferred calls are available for direct use in your applications.

Package Description
DBMS_DEFER_SYS Performs administrative tasks such as scheduling, executing, and deleting queued transactions.
DBMS_DEFER Builds deferred calls.
DBMS_DEFER_QUERY Provides access to parameters passed to deferred calls, primarily for diagnostic purposes.

There are eight data dictionary views that contain data about deferred transactions and RPCs. You can query these views to determine information such as the destination of RPC calls, error messages, and scheduled execution times. Most of the packages associated with deferred calls reference and/or modify the data in these views.

View

Description

DEFCALL

Contains information about all deferred RPCs. Queries SYSTEM.DEF$_CALL table.

DEFCALLDEST

Contains the destination database(s) for each deferred RPC in DEFCALL. Queries SYSTEM.DEF$_CALL, SYSTEM.DEF$_DESTINATION, SYSTEM.DEF$_CALLDEST, SYSTEM.DEF$_ERROR, and SYSTEM.REPCAT$_REPPROP.

DEFDEFAULTDEST

Contains the default destinations for deferred RPCs. Queries SYSTEM.DEF$_DEFAULTDEST.

DEFERROR

Contains error information for deferred calls that could not be applied at their destination. Queries SYSTEM.DEF$_ERROR.

DEFERRORCOUNT

Contains the count of errors for each destination. Queries SYSTEM.DEF$_ERROR.

DEFSCHEDULE

Contains information about the scheduling of deferred jobs. Queries SYSTEM.DEF$_DESTINATION and SYS.JOB$.

DEFTRAN

Contains information about all deferred transactions. Queries SYSTEM.DEF$_CALL and SYS.USER$.

DEFTRANDEST

Contains the destination database(s) for each deferred transaction. Queries SYSTEM.DEF$_CALL, SYSTEM.DEF$_DESTINATION, SYSTEM.DEF$_ERROR, SYSTEM.DEF$_CALLDEST, and SYSTEM.REPCAT$_REPPROP
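As a quick health check, the error and destination views can be summarized per destination (a sketch; run from any master site):

```sql
-- Errors per destination (should normally return no rows):
SELECT destination, COUNT(*) AS errors
FROM   deferror
GROUP  BY destination;

-- Deferred transactions still queued per destination:
SELECT dblink, COUNT(DISTINCT deferred_tran_id) AS pending
FROM   deftrandest
GROUP  BY dblink;
```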

Add new Node without Quiescing (RMAN SCN Method)

Posted by Sagar Patil

Task: We have a working setup of a 4-node, 4-way replication cluster built on Advanced/MultiMaster replication. 2 of these 4 nodes should be moved to a different physical location without downtime.

Solution: Add 2 more nodes into the system and run a 6-node, 6-way replication for some time. Once things are stable, remove 2 nodes from the cluster.

Please see Metalink NOTE:296473.1 (Adding Master Sites).

Step 1 at Each New Master Site

Set up replication at the first site if it does not already exist.

Pre-requisite

  • The replication administrator at each new master site
  • A scheduled link from each existing master site to each new master site
  • A scheduled link from each new master site to each existing master site
  • A scheduled purge job at each new master site

Step 2 at Master Definition Site

Take RMAN backup of the database including the archive logs of master definition database.

Step 3. Run the SPECIFY_NEW_MASTERS procedure on the first site

repA>begin
DBMS_REPCAT.SPECIFY_NEW_MASTERS ('scott_repg','REPC.US.ORACLE.COM');
end;
/

This procedure specifies the master sites you intend to add to an existing replication group without quiescing the group. This procedure must be run at the master definition site of the specified master group.

If necessary, this procedure creates an extension_id that tracks the process of adding new master sites to a master group. You use this extension_id in the other procedures that you run at various stages in the process. You can view information about the extension_id in the DBA_REPSITES_NEW and DBA_REPEXTENSIONS data dictionary views.

DBMS_REPCAT.SPECIFY_NEW_MASTERS (
gname IN VARCHAR2,
{ master_list IN VARCHAR2
| master_table IN DBMS_UTILITY.DBLINK_ARRAY});

Step 4 Execute the ADD_NEW_MASTERS procedure on master definition site

repA> VARIABLE masterdef_flashback_scn NUMBER;
VARIABLE extension_id VARCHAR2(32);
begin
DBMS_REPCAT.ADD_NEW_MASTERS (
export_required => false,
available_master_list => null,
masterdef_flashback_scn => :masterdef_flashback_scn,
extension_id => :extension_id,
break_trans_to_masterdef => false,
break_trans_to_new_masters => false,
percentage_for_catchup_mdef => 80,
cycle_seconds_mdef => 60,
percentage_for_catchup_new => 80,
cycle_seconds_new => 60);
end;

repA>print masterdef_flashback_scn;

MASTERDEF_FLASHBACK_SCN
———————–
1666794

repA>print extension_id ;

EXTENSION_ID
——————————–
57A6AC50B2801525E0440060B0C193C6

repA>select sname, master, status from sys.dba_repcat;

SNAME M STATUS
—————————— – ———
SCOTT_REPG Y QUIESCED

This procedure adds the master sites in the DBA_REPSITES_NEW data dictionary view to the master groups specified when the SPECIFY_NEW_MASTERS procedure was run. Information about these new master sites is added to the replication catalog at all available master sites.

Caution: After running this procedure, do not disable or enable propagation of the deferred transactions queue until after the new master sites are added. The DBA_REPEXTENSIONS data dictionary view must be clear before you disable or enable propagation. You can use the Replication Management tool or the SET_DISABLED procedure in the DBMS_DEFER_SYS package to disable or enable propagation.
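Once DBA_REPEXTENSIONS is clear, propagation can be toggled with SET_DISABLED; a hedged sketch (the destination name below is illustrative):

```sql
-- Disable propagation of the deferred transaction queue to one destination:
EXEC DBMS_DEFER_SYS.SET_DISABLED(destination => 'REPC.US.ORACLE.COM', disabled => TRUE);

-- Re-enable it once the extension request has completed:
EXEC DBMS_DEFER_SYS.SET_DISABLED(destination => 'REPC.US.ORACLE.COM', disabled => FALSE);
```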

Note:

  • This step provides the extension_id and the flashback_scn.
  • The flashback_scn is needed to duplicate the database in the next step.
  • The extension_id can be used to find out whether the steps are proceeding correctly.
  • You can query dba_repsites_new and dba_repextensions at both sites with this extension_id.

We need to use the system change number (SCN) returned in the masterdef_flashback_scn parameter for the FLASHBACK_SCN export parameter. At this point the master definition site will start building up transactions for the new master site.

Step 5. Create a duplicate database until the flashback_scn returned in the above step using RMAN.

RMAN>DUPLICATE TARGET DATABASE TO repC UNTIL SCN 1666794;

We are using disk-to-disk replication, so the files are transferred between sites and the disks resynced to the new master site.

Step 6. Change the global name of the new database

ALTER DATABASE RENAME GLOBAL_NAME TO repC;

Step 7. Create database links between the two databases as per the replication requirement

Step 8. Execute the PREPARE_INSTANTIATED_MASTER procedure on the new master site (new database)

Note:
This procedure enables the propagation of deferred transactions from other prepared new master sites and existing master sites to the invocation master site. This procedure also enables the propagation of deferred transactions from the invocation master site to the other new master sites and existing master sites.

begin
DBMS_REPCAT.PREPARE_INSTANTIATED_MASTER (
extension_id => :extension_id);
end;

The extension_id value returned from ADD_NEW_MASTERS is used here.

This method is not without drawbacks: "The master definition site must be the same for all of the master groups".

————– CASE STUDY- USING DISK REPLICATION INSTEAD OF RMAN NO SCN BASED RECOVERY ——————-

We have a very large database which needs to be attached to the 4-way replication cluster. We could use RMAN, but backing up and restoring the database would take too long, so we decided on disk replication, which can copy terabytes in a matter of hours rather than days.

I have 2 DB instances, repA and repB, built with 2-way replication using http://www.oracledbasupport.co.uk/2008/09/11/how-to-setup-master-sites-in-multimaster-replication/

Pre-requisite

  • The replication administrator at each new master site
  • A scheduled link from each existing master site to each new master site
  • A scheduled link from each new master site to each existing master site
  • A scheduled purge job at each new master site

repA>begin
DBMS_REPCAT.SPECIFY_NEW_MASTERS ('scott_repg','REPC.US.ORACLE.COM');
end;
/

Since we need a master definition site to build the new instance repC, I took an offline backup of repA. Before backing up repA, I put scott_repg into the QUIESCED state.

repA>select sname, master, status from sys.dba_repcat;

SNAME M STATUS
—————————— – ———
SCOTT_REPG Y QUIESCED

repA> VARIABLE masterdef_flashback_scn NUMBER;
VARIABLE extension_id VARCHAR2(32);
begin
DBMS_REPCAT.ADD_NEW_MASTERS (
export_required => false,
available_master_list => null,
masterdef_flashback_scn => :masterdef_flashback_scn,
extension_id => :extension_id,
break_trans_to_masterdef => false,
break_trans_to_new_masters => false,
percentage_for_catchup_mdef => 80,
cycle_seconds_mdef => 60,
percentage_for_catchup_new => 80,
cycle_seconds_new => 60);
end;

repA>print masterdef_flashback_scn;

MASTERDEF_FLASHBACK_SCN
———————–
1666794

repA>print extension_id ;

EXTENSION_ID
——————————–
57A6AC50B2801525E0440060B0C193C6

I copied repA as repC and changed the database SID to repC: "ALTER DATABASE RENAME GLOBAL_NAME TO repC;"

repC> begin
DBMS_REPCAT.PREPARE_INSTANTIATED_MASTER (
extension_id => '57A6AC50B2801525E0440060B0C193C6');
end;
/

After executing the above API I could see repA, repB and repC present at repA and repB, but repC wouldn't update like repA.

repA>select extension_id, request, masterdef, extension_status, flashback_scn from dba_repextensions;

EXTENSION_ID                     REQUEST         MASTERDEF          EXTENSION_STA FLASHBACK_SCN
-------------------------------- --------------- ------------------ ------------- -------------
57A6AC50B2801525E0440060B0C193C6 ADD_NEW_MASTERS REPA.US.ORACLE.COM INSTANTIATING       1666794

The system has been in INSTANTIATING status for the last 12 hours.

In the internal view dba_repextensions ("select * from dba_repextensions") I can see a flashback_scn captured, and I am sure it is tracked while building up transactions at the master definition site. I built a test setup with 2 tables of 10 rows, and the system has not responded for the last 12 hours. It seems to have gone into some sort of endless loop, not sure why, and none of the Oracle alert logs have any detailed information.

repA>SELECT DBLINK DEST, COUNT(*) TRANS
FROM DEFTRANDEST D
GROUP BY DBLINK;

no rows selected

repA>SELECT COUNT(DISTINCT DEFERRED_TRAN_ID) “Transactions Queued” FROM DEFTRANDEST;

Transactions Queued
——————-
0

Why did this happen?

If you look at the steps above you will see that I didn't really carry out an SCN-based recovery. Also, before the backup was taken, scott_repg was put into QUIESCED mode, so no transactions were allowed anyway.

I tried getting scott_repg out of QUIESCED mode, but it didn't work.

repA>select sname, master, status from sys.dba_repcat;

SNAME M STATUS
—————————— – ———
SCOTT_REPG Y QUIESCED

repA> BEGIN DBMS_REPCAT.RESUME_MASTER_ACTIVITY (gname => 'scott_repg'); END;
/

PL/SQL procedure successfully completed.

repA>select sname, master, status from sys.dba_repcat;

SNAME M STATUS
—————————— – ———
SCOTT_REPG Y QUIESCED

SELECT GNAME, DBLINK, MASTERDEF
FROM DBA_REPSITES
WHERE MASTER = 'Y'
AND GNAME NOT IN (SELECT GNAME FROM DBA_REPSITES WHERE SNAPMASTER = 'Y')
ORDER BY GNAME;

————– CASE STUDY- USING SCN BASED RECOVERY with DISK REPLICATION ——————-

Make sure the replicated clustered Databases repA,repB are in ARCHIVELOG mode.

Use posts http://www.oracledbasupport.co.uk/enable-archivelog-put-database-into-archive-mode/ & http://www.oracledbasupport.co.uk/how-to-setup-2-way-multimaster-replication/

repB>select sname, master, status from sys.dba_repcat;

SNAME M STATUS
—————————— – ———
SCOTT_REPG Y NORMAL

repA>conn system/repa@repa
Connected.
repA>create public database link repc using 'repc';

Database link created.

repA>conn repadmin/repadmin@repa
Connected.
repA>create database link repc connect to repadmin identified by repadmin;

repB>conn system/repb@repb
Connected.
repB>create public database link repc using 'repc';

Database link created.

repB>conn repadmin/repadmin@repb
Connected.
repB>create database link repc connect to repadmin identified by repadmin;

repB>select db_link from dba_db_links;

DB_LINK
——————————————————————————–
REPA.US.ORACLE.COM
REPC.US.ORACLE.COM
REPA.US.ORACLE.COM
REPC.US.ORACLE.COM

repA>SELECT LOG_MODE FROM SYS.V$DATABASE;

LOG_MODE
————
ARCHIVELOG

repA>begin
DBMS_REPCAT.SPECIFY_NEW_MASTERS ('scott_repg','REPC');
end;
/

repA & repB>select sname, master, status from sys.dba_repcat;

SNAME                          M STATUS
—————————— – ———
SCOTT_REPG                     Y NORMAL

repA> VARIABLE masterdef_flashback_scn NUMBER;
VARIABLE extension_id VARCHAR2(32);
begin
DBMS_REPCAT.ADD_NEW_MASTERS (
export_required => false,
available_master_list => null,
masterdef_flashback_scn => :masterdef_flashback_scn,
extension_id => :extension_id,
break_trans_to_masterdef => false,
break_trans_to_new_masters => false,
percentage_for_catchup_mdef => 80,
cycle_seconds_mdef => 60,
percentage_for_catchup_new => 80,
cycle_seconds_new => 60);
end;

repA>print masterdef_flashback_scn;

MASTERDEF_FLASHBACK_SCN
———————–
232079

repA>print extension_id ;

EXTENSION_ID
——————————–
582FD00C0B00208CE0440060B0C193C6
repA>select sname, master, status from sys.dba_repcat;

SNAME                          M STATUS
—————————— – ———
SCOTT_REPG                     Y NORMAL
SQL> conn / as sysdba
Connected.
SQL> alter system switch logfile;

System altered.

repA>SELECT GNAME, substr(DBLINK,1,20), MASTERDEF
FROM DBA_REPSITES
WHERE MASTER = 'Y'
AND GNAME NOT IN (SELECT GNAME FROM DBA_REPSITES WHERE SNAPMASTER = 'Y')
ORDER BY GNAME;

GNAME                          SUBSTR(DBLINK,1,20)  MASTERDEF
------------------------------ -------------------- ---------
SCOTT_REPG                     REPA.US.ORACLE.COM   Y
SCOTT_REPG                     REPB.US.ORACLE.COM   N
SCOTT_REPG                     REPC.US.ORACLE.COM   N

repB>SELECT GNAME, DBLINK, MASTERDEF
FROM DBA_REPSITES
WHERE MASTER = 'Y'
AND GNAME NOT IN (SELECT GNAME FROM DBA_REPSITES WHERE SNAPMASTER = 'Y')
ORDER BY GNAME;

GNAME                          DBLINK               MASTERDEF
------------------------------ -------------------- ---------
SCOTT_REPG                     REPB.US.ORACLE.COM   N
SCOTT_REPG                     REPA.US.ORACLE.COM   Y
SCOTT_REPG                     REPC.US.ORACLE.COM   N

repA>select extension_id, request, masterdef, extension_status,flashback_scn from dba_repextensions;

EXTENSION_ID                     REQUEST
——————————– —————
MASTERDEF
——————————————————————————–
EXTENSION_STA FLASHBACK_SCN
————- ————-
582FD00C0B00208CE0440060B0C193C6 ADD_NEW_MASTERS
REPA.US.ORACLE.COM
INSTANTIATING        232079

repB>select extension_id, request, masterdef, extension_status,flashback_scn from dba_repextensions;

EXTENSION_ID                     REQUEST
——————————– —————
MASTERDEF
——————————————————————————–
EXTENSION_STA FLASHBACK_SCN
————- ————-
582FD00C0B00208CE0440060B0C193C6 ADD_NEW_MASTERS
REPA.US.ORACLE.COM
INSTANTIATING

repA>SELECT COUNT(*) FROM DBA_REPCATLOG;

COUNT(*)
———-
1

repB>SELECT COUNT(*) FROM DBA_REPCATLOG;

COUNT(*)
———-
0
repA>select substr(extension_id,1,35),gname,substr(DBlink,1,20), full_instantiation  from DBA_REPSITES_NEW;

SUBSTR(EXTENSION_ID,1,35)        GNAME
——————————– ——————————
SUBSTR(DBLINK,1,20)  F
——————– –
582FD00C0B00208CE0440060B0C193C6 SCOTT_REPG
REPC.US.ORACLE.COM   Y

repB>select substr(extension_id,1,35),gname,substr(DBlink,1,20), full_instantiation  from DBA_REPSITES_NEW;

SUBSTR(EXTENSION_ID,1,35)        GNAME
——————————– ——————————
SUBSTR(DBLINK,1,20)  F
——————– –
582FD00C0B00208CE0440060B0C193C6 SCOTT_REPG
REPC.US.ORACLE.COM   Y

All master sites instantiated with object-level export/import must be accessible at this time. Their new replication groups are added in the quiesced state. Master sites instantiated through full database export/import or through change-based recovery do not need to be accessible.

repA> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
repA is now shut down and copied as repC:

repA> cp -Rf repA repC

I will now carry out an SCN-based recovery of the database up to the SCN returned earlier (232079).

SQL> STARTUP NOMOUNT pfile='/opt/oracle/admin/repC/pfile/initrepC.ora';
ORACLE instance started.

Total System Global Area  320562952 bytes
Fixed Size                   734984 bytes
Variable Size             285212672 bytes
Database Buffers           33554432 bytes
Redo Buffers                1060864 bytes
SQL> CREATE CONTROLFILE reuse DATABASE "repA" NORESETLOGS ARCHIVELOG
     MAXLOGFILES 50
     MAXLOGMEMBERS 5
     MAXDATAFILES 100
     MAXINSTANCES 1
     MAXLOGHISTORY 113
     LOGFILE
       GROUP 1 '/opt/oracle/oradata/repC/redo01.log' SIZE 100M,
       GROUP 2 '/opt/oracle/oradata/repC/redo02.log' SIZE 100M,
       GROUP 3 '/opt/oracle/oradata/repC/redo03.log' SIZE 100M
     DATAFILE
       '/opt/oracle/oradata/repC/system01.dbf',
       '/opt/oracle/oradata/repC/undotbs01.dbf',
       '/opt/oracle/oradata/repC/drsys01.dbf',
       '/opt/oracle/oradata/repC/example01.dbf',
       '/opt/oracle/oradata/repC/indx01.dbf',
       '/opt/oracle/oradata/repC/odm01.dbf',
       '/opt/oracle/oradata/repC/tools01.dbf',
       '/opt/oracle/oradata/repC/users01.dbf',
       '/opt/oracle/oradata/repC/xdb01.dbf'
     CHARACTER SET WE8ISO8859P1;

Control file created.

repC> select first_change#-1 from v$log where status='CURRENT';

no rows selected

repC>  recover database until CHANGE 232079;
Media recovery complete.

Let's drop the DB link at the new repC instance and build links from repC to repA.

SQL> drop database link REPC;
drop database link REPC
*
ERROR at line 1:
ORA-02082: a loopback database link must have a connection qualifier

repC>  alter database rename global_name to test;

Database altered.

repC> drop public database link repc;

Database link dropped.

repC>  alter database rename global_name to repC;

Database altered.

SQL> create public database link repa using 'repa';

Database link created.

SQL> conn repadmin/repadmin@repc
Connected.
SQL> create database link repa connect to repadmin identified by repadmin;

repC>select count(*) from tab@repb;

COUNT(*)
———-
0

repC>select count(*) from tab@repa;

COUNT(*)
———-
0
repC>EXECUTE Dbms_Repcat.Drop_Master_Repgroup('scott_repg');

PL/SQL procedure successfully completed.
repC>BEGIN
DBMS_REPCAT.PREPARE_INSTANTIATED_MASTER (
extension_id  => '582FD00C0B00208CE0440060B0C193C6');
END;
/
BEGIN
*
ERROR at line 1:
ORA-23493: "REPC.US.ORACLE.COM" is not a new site for extension request
"582FD00C0B00208CE0440060B0C193C6"
ORA-01403: no data found
ORA-06512: at "SYS.DBMS_REPCAT_ADD_MASTER", line 2640
ORA-06512: at "SYS.DBMS_REPCAT", line 1200
ORA-06512: at line 2

Opened up a TAR with Oracle Support and looking for a resolution.

How Optimizer Stats with Histograms Can Change the Execution Plan

Posted by Sagar Patil

Optimizer statistics can play a key part in deciding the execution plan. Here is an example.

Table "RSODSACTREQ" has 313,783 rows in total.

Database with NO histograms:
For the following 4 statements the plan is always the same, i.e. a FULL TABLE SCAN:

1. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZPNPCAMP';
2. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZCUS_ACT';
3. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZTCIT_OH';
4. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZPA0037A';

Execution Plan :
SELECT STATEMENT Optimizer Mode=CHOOSE 1 350
SORT AGGREGATE 1 16
TABLE ACCESS FULL SAPLIVE.RSODSACTREQ 2 K 39 K 350

Database with histograms:
1. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZPNPCAMP';

Here Oracle predicts the resulting number of rows is a lot higher and prefers a full table scan:

SELECT STATEMENT Optimizer Mode=CHOOSE 1 350
SORT AGGREGATE 1 16
TABLE ACCESS FULL SAPLIVE.RSODSACTREQ 262 K 3 M 350

2. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZCUS_ACT';
For this value it predicts fewer rows will be returned and uses an index scan:

SELECT STATEMENT Optimizer Mode=CHOOSE 1 19
SORT AGGREGATE 1 16
TABLE ACCESS BY INDEX ROWID SAPLIVE.RSODSACTREQ 93 1 K 19
INDEX RANGE SCAN SAPLIVE.RSODSACTREQ~STP 93 3

 

3. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZTCIT_OH';
For this value it predicts fewer rows will be returned and uses an index scan:

SELECT STATEMENT Optimizer Mode=CHOOSE 1 9
SORT AGGREGATE 1 16
TABLE ACCESS BY INDEX ROWID SAPLIVE.RSODSACTREQ 35 560 9
INDEX RANGE SCAN SAPLIVE.RSODSACTREQ~STP 35 3

4. SELECT MAX("TIMESTAMP") FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZPA0037A';
For this value it predicts fewer rows will be returned and uses an index scan:

SELECT STATEMENT Optimizer Mode=CHOOSE 1 4
SORT AGGREGATE 1 16
TABLE ACCESS BY INDEX ROWID SAPLIVE.RSODSACTREQ 1 16 4
INDEX RANGE SCAN SAPLIVE.RSODSACTREQ~0 1 3
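The index plans above rely on column histograms. A hedged sketch of how such histograms could be gathered with DBMS_STATS (owner and table names from the example; the SIZE value is an assumption):

```sql
-- Gather statistics with a histogram on the skewed ODSOBJECT column,
-- so the optimizer can tell popular values from rare ones:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SAPLIVE',
    tabname    => 'RSODSACTREQ',
    method_opt => 'FOR COLUMNS ODSOBJECT SIZE 254',
    cascade    => TRUE);
END;
/
```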

 


AWR: How to Locate Resource Limits

Posted by Sagar Patil

select A.snap_id AWR_SNAP_ID, A.INSTANCE_NUMBER,
to_char(B.BEGIN_INTERVAL_TIME,'DD-MON-YYYY HH24:MI:SS') AWR_START_TIME,
to_char(B.END_INTERVAL_TIME,'DD-MON-YYYY HH24:MI:SS') AWR_END_TIME,
A.RESOURCE_NAME, MAX_UTILIZATION
from sys.wrh$_resource_limit A, sys.wrm$_snapshot B
where A.resource_name like '%processes%'
and A.snap_id = B.snap_id
and A.INSTANCE_NUMBER = B.INSTANCE_NUMBER
and A.INSTANCE_NUMBER = 1
and B.BEGIN_INTERVAL_TIME > sysdate - 12/24;


The different resources recorded are as below :

RESOURCE_NAME
——————————
gcs_resources
processes
enqueue_locks
max_rollback_segments
parallel_max_servers
ges_procs
sessions
gcs_shadows

AWR_SNAP_ID INSTANCE_NUMBER AWR_START_TIME AWR_END_TIME RESOURCE_NAME MAX_UTILIZATION
7964 1 28/05/2010 16:00 28/05/2010 16:56 processes 146
7963 1 28/05/2010 15:00 28/05/2010 16:00 processes 146
7962 1 28/05/2010 14:00 28/05/2010 15:00 processes 146
7961 1 28/05/2010 13:00 28/05/2010 14:00 processes 146
7960 1 28/05/2010 12:00 28/05/2010 13:00 processes 146
7959 1 28/05/2010 11:00 28/05/2010 12:00 processes 146
7958 1 28/05/2010 10:00 28/05/2010 11:00 processes 146
7957 1 28/05/2010 09:00 28/05/2010 10:00 processes 146
7956 1 28/05/2010 08:00 28/05/2010 09:00 processes 146
7955 1 28/05/2010 07:00 28/05/2010 08:00 processes 146
7954 1 28/05/2010 06:00 28/05/2010 07:00 processes 146
7953 1 28/05/2010 05:00 28/05/2010 06:00 processes 146
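For current rather than historical figures, V$RESOURCE_LIMIT can be queried directly (a sketch):

```sql
-- Current utilization against configured limits:
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');
```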

Which Sessions Are Consuming I/O Bandwidth (returns a SID list)

Posted by Sagar Patil

select
sid,
username,
round(100 * total_user_io/total_io,2) tot_io_pct
from
(select
b.sid sid,
nvl(b.username,p.name) username,
sum(value) total_user_io
from
sys.v_$statname c,
sys.v_$sesstat a,
sys.v_$session b,
sys.v_$bgprocess p
where
a.statistic#=c.statistic# and
p.paddr (+) = b.paddr and
b.sid=a.sid and
c.name in ('physical reads',
'physical writes',
'physical writes direct',
'physical reads direct',
'physical writes direct (lob)',
'physical reads direct (lob)')
group by
b.sid, nvl(b.username,p.name)),
(select
sum(value) total_io
from
sys.v_$statname c,
sys.v_$sesstat a
where
a.statistic#=c.statistic# and
c.name in ('physical reads',
'physical writes',
'physical writes direct',
'physical reads direct',
'physical writes direct (lob)',
'physical reads direct (lob)'))
order by
3 desc;

Identify Master node in RAC cluster

Posted by Sagar Patil

1. Grep the ocssd log file
[oracle @ tadrac1]: /u1/app/../cssd >grep -i "master node" ocssd.log | tail -1
[CSSD]CLSS-3001: local node number 1, master node number 1

2. Grep crsd log file
[oracle @ tadrac1]: /u1/app/../crsd>grep MASTER crsd.log | tail -1

3. Query V$GES_RESOURCE view

4. ocrconfig -showbackup
The node that stores OCR backups is the master node.

The role of the master node is essentially that it is the node that other nodes ("clients") contact to get information about node status, etc.

The rules are:
-> The first node in a cluster to come up becomes the master node.
-> And/or the node with the lowest node number becomes the master node.
Information about which node is the master can be found in the cssd logfile; check the reconfiguration information at the end of a reconfiguration.

Bring Cluster Online/Offline

Posted by Sagar Patil

Step A> Sequence of events to pull the cluster database down:

1. Bring down load balanced/TAF service
srvctl stop service -d orcl -s RAC

2. Stop RAC instances using
srvctl stop instance -d (database) -i (instance)

3. If needed stop the ASM instance using
srvctl stop asm -n (node)

4. Stop node applications using
srvctl stop nodeapps -n (node)

Step B> Sequence of events to bring the cluster database back up:

1. Start node applications using
srvctl start nodeapps -n (node)

2. Start the ASM instance using
srvctl start asm -n (node)

3. Start RAC instances using
srvctl start instance -d (database) -i (instance)

4. Finish up by bringing our load balanced/TAF service online
srvctl start service -d orcl -s RAC
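Note that the startup sequence in Step B is simply the shutdown sequence of Step A in reverse. A small Python sketch makes that ordering explicit (the node and instance names are placeholders; the commands are only built as strings here, not executed):

```python
# Shutdown order: TAF service -> instances -> ASM -> node applications
shutdown = [
    "srvctl stop service -d orcl -s RAC",
    "srvctl stop instance -d orcl -i orcl1",
    "srvctl stop asm -n node1",
    "srvctl stop nodeapps -n node1",
]

# Startup is the exact reverse, with 'stop' swapped for 'start'
startup = [cmd.replace("stop", "start") for cmd in reversed(shutdown)]
for cmd in startup:
    print(cmd)
```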

Managing CRS/ Commands

Posted by Sagar Patil

CRS DAEMON FUNCTIONALITY

CRSD: Performs high availability recovery and management operations, such as maintaining the OCR and managing application resources.
- Engine for HA operations
- Manages 'application resources'
- Starts, stops, and fails over 'application resources'
- Spawns separate 'actions' to start/stop/check application resources
- Maintains configuration profiles in the OCR (Oracle Cluster Registry)
- Stores the current known state in the OCR
- Runs as root
- Is restarted automatically on failure

OCSSD:
- OCSSD is part of RAC and single instance with ASM
- Provides access to node membership
- Provides group services
- Provides basic cluster locking
- Integrates with existing vendor clusterware, when present
- Can also run without integration with vendor clusterware
- Runs as oracle
- Failure exit causes machine reboot
-- This is a feature to prevent data corruption in the event of a split brain

EVMD: Event manager daemon. This process also starts the racgevt process to manage FAN server callouts.
- Generates events when things happen
- Spawns a permanent child evmlogger
- Evmlogger, on demand, spawns children
- Scans the callout directory and invokes callouts
- Runs as oracle
- Restarted automatically on failure

RESOURCE STATUS
Status of the database, all instances and all services

srvctl status database -d ORACLE -v

Status of named instances with their current services.

srvctl status instance -d ORACLE -i RAC01,RAC02 -v

Status of a named services

srvctl status service -d ORACLE -s ERP -v

Status of all nodes supporting database applications

srvctl status node

START RESOURCES
Start the database with all enabled instances

srvctl start database -d ORACLE

Start named instances

srvctl start instance -d ORACLE -i RAC03,RAC04

Start named services. Dependent instances are started as needed

srvctl start service -d ORACLE -s CRM

Start a service at the named instance

srvctl start service -d ORACLE -s CRM -i RAC04

Start node applications

srvctl start nodeapps -n myclust-4

STOP RESOURCES
Stop the database, all instances and all services

srvctl stop database -d ORACLE

Stop named instances, first relocating all existing services

srvctl stop instance -d ORACLE -i RAC03,RAC04

Stop the service

srvctl stop service -d ORACLE -s CRM

Stop the service at the named instances

srvctl stop service -d ORACLE -s CRM -i RAC04

Stop node applications. Note that instances and services also stop

srvctl stop nodeapps -n myclust-4

ADD RESOURCES

Add a new node

srvctl add nodeapps -n myclust-1 -o $ORACLE_HOME -A 139.184.201.1/255.255.255.0/hme0

Add a new database

srvctl add database -d ORACLE -o $ORACLE_HOME

Add named instances to an existing database

srvctl add instance -d ORACLE -i RAC01 -n myclust-1
srvctl add instance -d ORACLE -i RAC02 -n myclust-2
srvctl add instance -d ORACLE -i RAC03 -n myclust-3

Add a service to an existing database with preferred instances (-r) and available instances (-a). Use basic failover to the available instances

srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04

Add a service to an existing database with preferred instances in list one and available instances in list two. Use preconnect at the available instances

srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04 -P PRECONNECT

REMOVE RESOURCES
Remove the applications for a database.

srvctl remove database -d ORACLE

Remove the applications for named instances of an existing database.

srvctl remove instance -d ORACLE -i RAC03
srvctl remove instance -d ORACLE -i RAC04

Remove the service.

srvctl remove service -d ORACLE -s STD_BATCH

Remove the service from the instances.

srvctl remove service -d ORACLE -s STD_BATCH -i RAC03,RAC04

Remove all node applications from a node.

srvctl remove nodeapps -n myclust-4

MODIFY RESOURCES
Modify an instance to execute on another node.
srvctl modify instance -d ORACLE -n my

Oracle Standby Recovery Rate Monitoring

Posted by Sagar Patil

Why does the standby periodically lag during the day?
The script reports the time it took to apply the log, the size of the log, and the redo apply rate for that log.

Diff(sec) reports the actual time difference between redo logs applied on the standby.
Lag(sec) reports the time difference between when the archive log was completed on the primary and when it was applied on the standby.

If you come across issues with the script below, please download it from here.

rem Reports standby apply rate with lag

SELECT   TIMESTAMP,
         completion_time "ArchTime",
         SEQUENCE#,
         ROUND ((blocks * block_size) / (1024 * 1024), 1) "SizeM",
         ROUND (
            (TIMESTAMP
             - LAG (TIMESTAMP, 1, TIMESTAMP) OVER (ORDER BY TIMESTAMP))
            * 24 * 60 * 60,
            1)
            "Diff(sec)",
         ROUND (
            (blocks * block_size) / 1024
            / DECODE (
                 ((TIMESTAMP
                   - LAG (TIMESTAMP, 1, TIMESTAMP) OVER (ORDER BY TIMESTAMP))
                  * 24 * 60 * 60),
                 0, 1,
                 (TIMESTAMP
                  - LAG (TIMESTAMP, 1, TIMESTAMP) OVER (ORDER BY TIMESTAMP))
                 * 24 * 60 * 60),
            1)
            "KB/sec",
         ROUND (
            (blocks * block_size) / (1024 * 1024)
            / DECODE (
                 ((TIMESTAMP
                   - LAG (TIMESTAMP, 1, TIMESTAMP) OVER (ORDER BY TIMESTAMP))
                  * 24 * 60 * 60),
                 0, 1,
                 (TIMESTAMP
                  - LAG (TIMESTAMP, 1, TIMESTAMP) OVER (ORDER BY TIMESTAMP))
                 * 24 * 60 * 60),
            3)
            "MB/sec",
         ROUND (
            ((LEAD (TIMESTAMP, 1, TIMESTAMP) OVER (ORDER BY TIMESTAMP))
             - completion_time)
            * 24 * 60 * 60,
            1)
            "Lag(sec)"
  FROM   v$archived_log a, v$dataguard_status dgs
 WHERE   a.name = REPLACE (dgs.MESSAGE, 'Media Recovery Log')
   AND   dgs.FACILITY = 'Log Apply Services'
ORDER BY TIMESTAMP DESC;
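The DECODE in the query guards against division by zero when two logs carry identical timestamps. A Python sketch of the per-log rate arithmetic (the block count, block size, and timestamps are invented example values):

```python
from datetime import datetime

# Invented values standing in for one v$archived_log row and its predecessor
blocks, block_size = 204800, 512           # ~100 MB of redo
prev_ts = datetime(2010, 5, 28, 15, 0, 0)  # LAG(timestamp) in the query
this_ts = datetime(2010, 5, 28, 15, 0, 40)

diff_sec = (this_ts - prev_ts).total_seconds()
# DECODE(diff, 0, 1, diff): substitute 1 second for same-timestamp logs
safe_diff = diff_sec if diff_sec != 0 else 1

size_mb = round(blocks * block_size / (1024 * 1024), 1)
kb_per_sec = round(blocks * block_size / 1024 / safe_diff, 1)
mb_per_sec = round(blocks * block_size / (1024 * 1024) / safe_diff, 3)
print(size_mb, kb_per_sec, mb_per_sec)  # 100.0 2560.0 2.5
```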

Another quickie SQL

@Standby> select snapshot_time,thread#,sequence#,applied_scn from v$standby_apply_snapshot;

If your log application is running slowly, consider increasing the PARALLEL_EXECUTION_MESSAGE_SIZE parameter to enlarge the buffer size.

From 10G DataGuard:  Set PARALLEL_EXECUTION_MESSAGE_SIZE = 4096

When using parallel media recovery or parallel standby recovery, increasing the PARALLEL_EXECUTION_MESSAGE_SIZE database parameter to 4K (4096) can improve parallel recovery by as much as 20 percent. Set this parameter on
both the primary and standby databases in preparation for switchover operations. Increasing this parameter requires more memory from the shared pool by each parallel execution slave process.

The PARALLEL_EXECUTION_MESSAGE_SIZE parameter is also used by parallel query operations and should be tested with any parallel query operations to ensure there is sufficient memory on the system. A large number of parallel query
slaves on a 32-bit installation may reach memory limits and prohibit increasing the PARALLEL_EXECUTION_MESSAGE_SIZE from the default 2K.


Oracle Data Pump

Posted by Sagar Patil

- Data Pump runs only on the server side.
- You may initiate the export from a client, but the job(s) themselves will run inside an Oracle server.
- There are no dump files (expdat.dmp) or log files created on your local machine.

How to use Data Pump?

To export the entire database use FULL=y; for a schema export use SCHEMAS=<USERNAMES>

FULL expdp system/XXX FULL=y DIRECTORY=dexport DUMPFILE=expdata.dmp LOGFILE=expdata.log
Schema expdp system SCHEMAS=DOTCOM DIRECTORY=export DUMPFILE=expdata.dmp LOGFILE=expdata.log

Data Pump can also be used over a database link, as in the example below:
SQL-S2> CREATE DATABASE LINK S1 CONNECT TO JKOOP IDENTIFIED BY PWD USING 'S1';

SQL-S2> CREATE DIRECTORY mydump AS '/app/oracle/admin/itctvt/export';

E:> expdp jkoop/pwd FULL=y DIRECTORY=mydump NETWORK_LINK=S1 DUMPFILE=byDB.dmp LOGFILE=byDB.log

Introduction to Monitoring Data Pump
DBA_DATAPUMP_JOBS : This view will show the active Data Pump jobs, their state, degree of parallelism, and the number of sessions attached.

SQL> select * from dba_datapump_jobs

OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE DEGREE ATTACHED_SESSIONS
———- ———————- ———- ———- ————- ——— —————–
JKOOP SYS_EXPORT_FULL_01 EXPORT FULL EXECUTING 1 1
JKOOP SYS_EXPORT_SCHEMA_01 EXPORT SCHEMA EXECUTING 1 1

DBA_DATAPUMP_SESSIONS : This view gives the SADDR, which assists in determining why a Data Pump session may be having problems. Join it to the V$SESSION view for further information.
SQL> SELECT * FROM DBA_DATAPUMP_SESSIONS

OWNER_NAME JOB_NAME SADDR
———- —————————— ——–
JKOOPMANN SYS_EXPORT_FULL_01 225BDEDC
JKOOPMANN SYS_EXPORT_SCHEMA_01 225B2B7C

V$SESSION_LONGOPS :This view helps determine how well a Data Pump export is doing. Basically gives you a progress indicator through the MESSAGE column.
SQL> select username,opname,target_desc,sofar,totalwork,message from V$SESSION_LONGOPS

USERNAME OPNAME TARGET_DES SOFAR TOTALWORK MESSAGE
——– ——————– ———- —– ———- ————————————————
JKOOP SYS_EXPORT_FULL_01 EXPORT 132 132 SYS_EXPORT_FULL_01:EXPORT:132 out of 132 MB done
JKOOP SYS_EXPORT_FULL_01 EXPORT 90 132 SYS_EXPORT_FULL_01:EXPORT:90 out of 132 MB done
JKOOP SYS_EXPORT_SCHEMA_01 EXPORT 17 17 SYS_EXPORT_SCHEMA_01:EXPORT:17 out of 17 MB done
JKOOP SYS_EXPORT_SCHEMA_01 EXPORT 19 19 SYS_EXPORT_SCHEMA_01:EXPORT:19 out of 19 MB done
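The SOFAR and TOTALWORK columns give a simple completion percentage for each job. A minimal sketch of that progress arithmetic, using values from the sample rows above:

```python
# (sofar, totalwork) pairs from V$SESSION_LONGOPS, as in the sample output
jobs = {
    "SYS_EXPORT_FULL_01": (90, 132),
    "SYS_EXPORT_SCHEMA_01": (17, 17),
}

# Percent complete per job, rounded to one decimal place
progress = {job: round(100 * sofar / total, 1)
            for job, (sofar, total) in jobs.items()}

for job, pct in progress.items():
    sofar, total = jobs[job]
    print(f"{job}: {sofar} of {total} MB done ({pct}%)")
```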

Summary report of ASM disk groups and Space Utilised

Posted by Sagar Patil

PURPOSE : Provide a summary report of all disk groups.

SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY off

COLUMN group_name FORMAT a20 HEAD 'Disk Group|Name'
COLUMN sector_size FORMAT 99,999 HEAD 'Sector|Size'
COLUMN block_size FORMAT 99,999 HEAD 'Block|Size'
COLUMN allocation_unit_size FORMAT 999,999,999 HEAD 'Allocation|Unit Size'
COLUMN state FORMAT a11 HEAD 'State'
COLUMN type FORMAT a6 HEAD 'Type'
COLUMN total_mb FORMAT 999,999,999 HEAD 'Total Size (MB)'
COLUMN used_mb FORMAT 999,999,999 HEAD 'Used Size (MB)'
COLUMN pct_used FORMAT 999.99 HEAD 'Pct. Used'

break on report on disk_group_name skip 1
compute sum label "Grand Total: " of total_mb used_mb on report

SELECT   name group_name,
sector_size sector_size,
block_size block_size,
allocation_unit_size allocation_unit_size,
state state,
TYPE TYPE,
total_mb total_mb,
(total_mb - free_mb) used_mb,
ROUND ( (1 – (free_mb / total_mb)) * 100, 2) pct_used
FROM   v$asm_diskgroup
ORDER BY   name;

Sample Report

Disk Group Sector Block Allocation
Name Size Size Unit Size State Type Total Size (MB) Used Size (MB) Pct. Used
——————– ——- ——- ———— ———– —— ————— ————– ———
XYZ_REDO_DG01 512 4,096 16,777,216 MOUNTED EXTERN 28,144 9,424 33.48
ABC_ARCH_DG00 512 4,096 16,777,216 MOUNTED EXTERN 225,216 28,656 12.72
ABC_DATA_DG00 512 4,096 16,777,216 MOUNTED EXTERN 450,432 88,800 19.71
ABC_FLBK_DG00 512 4,096 16,777,216 MOUNTED EXTERN 112,608 4,848 4.31
ABC_REDO_DG00 512 4,096 16,777,216 MOUNTED EXTERN 28,128 9,584 34.07
ABC_REDO_DG01 512 4,096 16,777,216 MOUNTED EXTERN 28,128 9,456 33.62
————— ————–
Grand Total: 4,448,192 2,110,496
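The percent-used figure is just ROUND((1 - free_mb/total_mb) * 100, 2). A quick Python check against the first sample row (XYZ_REDO_DG01, with free_mb derived from the 28,144 MB total and 9,424 MB used):

```python
total_mb = 28144
used_mb = 9424
free_mb = total_mb - used_mb  # (total_mb - free_mb) = used_mb in the query

# ROUND((1 - free_mb/total_mb) * 100, 2) from the query above
pct_used = round((1 - free_mb / total_mb) * 100, 2)
print(pct_used)  # 33.48, matching the sample report
```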

Performance summary report of all disks contained within all ASM DiskGroups

Posted by Sagar Patil

-- +----------------------------------------------------------------------------+
-- | Jeffrey M. Hunter                                                          |
-- |----------------------------------------------------------------------------|
-- | PURPOSE : Provide a summary report of all disks contained within all ASM   |
-- |           disk groups along with their performance metrics.                |
-- | NOTE    : As with any code, ensure to test this script in a development    |
-- |           environment before attempting to run it in production.           |
-- +----------------------------------------------------------------------------+

SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY off

COLUMN disk_group_name FORMAT a20 HEAD 'Disk Group Name'
COLUMN disk_path FORMAT a20 HEAD 'Disk Path'
COLUMN reads FORMAT 999,999,999 HEAD 'Reads'
COLUMN writes FORMAT 999,999,999 HEAD 'Writes'
COLUMN read_errs FORMAT 999,999 HEAD 'Read|Errors'
COLUMN write_errs FORMAT 999,999 HEAD 'Write|Errors'
COLUMN read_time FORMAT 999,999,999 HEAD 'Read|Time'
COLUMN write_time FORMAT 999,999,999 HEAD 'Write|Time'
COLUMN bytes_read FORMAT 999,999,999,999 HEAD 'Bytes|Read'
COLUMN bytes_written FORMAT 999,999,999,999 HEAD 'Bytes|Written'

break on report on disk_group_name skip 2

compute sum label "" of reads writes read_errs write_errs read_time write_time bytes_read bytes_written on disk_group_name
compute sum label "Grand Total: " of reads writes read_errs write_errs read_time write_time bytes_read bytes_written on report

SELECT
a.name disk_group_name
, b.path disk_path
, b.reads reads
, b.writes writes
, b.read_errs read_errs
, b.write_errs write_errs
, b.read_time read_time
, b.write_time write_time
, b.bytes_read bytes_read
, b.bytes_written bytes_written
FROM
v$asm_diskgroup a JOIN v$asm_disk b USING (group_number)
ORDER BY
a.name
/

Mastering ASMCMD

Posted by Sagar Patil

cd Changes the current directory to the specified directory.

du Displays the total disk space occupied by ASM files in the specified ASM directory and all its subdirectories, recursively.

exit Exits ASMCMD.

find Lists the paths of all occurrences of the specified name (with wildcards) under the specified directory.
ASMCMD> find +dgroup1 undo*
+dgroup1/SAMPLE/DATAFILE/UNDOTBS1.258.555341963
+dgroup1/SAMPLE/DATAFILE/UNDOTBS1.272.557429239

The following example returns the absolute path of all the control files in the +dgroup1/sample directory.
ASMCMD> find -t CONTROLFILE +dgroup1/sample *
+dgroup1/sample/CONTROLFILE/Current.260.555342185
+dgroup1/sample/CONTROLFILE/Current.261.555342183

ls Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.

lsct Lists information about current ASM clients.

lsdg Lists all disk groups and their attributes.

mkalias Creates an alias for a system-generated filename.

mkdir Creates ASM directories.

pwd Displays the path of the current ASM directory.

rm Deletes the specified ASM files or directories.

rmalias Deletes the specified alias, retaining the file that the alias points to.

Display tablespace usage

Posted by Sagar Patil

column tsname format a30 heading 'Tablespace Name'
column tbs_size_mb format 99999,999 heading 'Size|(MB)'
column used format 99999,999 heading 'Used|(MB)'
column avail format 99999,999 heading 'Free|(MB)'
column used_visual format a11 heading 'Used'
column pct_used format 999 heading '% Used'

set linesize 1000;
set trimspool on;
set pagesize 32000;
set verify off;
set feedback off;

PROMPT
PROMPT *************************
PROMPT *** TABLESPACE STATUS ***
PROMPT *************************

SELECT df.tablespace_name tsname
, round(sum(df.bytes)/1024/1024) tbs_size_mb
, round(nvl(sum(e.used_bytes)/1024/1024,0)) used
, round(nvl(sum(f.free_bytes)/1024/1024,0)) avail
, rpad(' '||rpad('X',round(sum(e.used_bytes)
*10/sum(df.bytes),0), 'X'),11,'-') used_visual
, nvl((sum(e.used_bytes)*100)/sum(df.bytes),0) pct_used
FROM sys.dba_data_files df
, (SELECT file_id
, sum(nvl(bytes,0)) used_bytes
FROM sys.dba_extents
GROUP BY file_id) e
, (SELECT max(bytes) free_bytes
, file_id
FROM dba_free_space
GROUP BY file_id) f
WHERE e.file_id(+) = df.file_id
AND df.file_id = f.file_id(+)
GROUP BY df.tablespace_name
ORDER BY 6;

Will produce results like

XYZ Live Database
===================
Size Used Free
Tablespace Name (MB) (MB) (MB) Used % Used
------------------------------ ---------- ---------- ---------- ----------- ------
STATSPACK 2,048 0 2,047 ---------- 0
TOOLS 1,024 0 1,024 ---------- 0
ACF_XYZ 2,048 0 2,048 ---------- 0
ACF_IABC 2,048 3 2,045 ---------- 0
UNDOTBS1 1,024 337 449 XXX------- 33
SYSTEM 1,024 557 467 XXXXX----- 54
SYSAUX 5,000 2,738 1,032 XXXXX----- 55
USERS 14,000 9,210 2,678 XXXXXXX--- 66
UNDOTBS2 1,024 703 20 XXXXXXX--- 69
UNDOTBS3 1,024 740 5 XXXXXXX--- 72
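The used_visual column builds a ten-character bar with nested RPAD calls. An equivalent Python sketch, using the sizes from the UNDOTBS1 sample row:

```python
tbs_size_mb, used_mb = 1024, 337  # UNDOTBS1 from the sample report

# rpad(' ' || rpad('X', round(used*10/size), 'X'), 11, '-')
x_count = round(used_mb * 10 / tbs_size_mb)  # 'X' marks out of 10
bar = (" " + "X" * x_count).ljust(11, "-")
print(repr(bar))  # ' XXX-------'
```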

Enabling ArchiveLog Mode in a RAC Environment

Posted by Sagar Patil

Login to one of the nodes (i.e. linux1) and disable the cluster instance parameter by setting cluster_database to FALSE from the current instance:
$ sqlplus "/ as sysdba"
SQL> alter system set cluster_database=false scope=spfile sid='orcl1';

Shutdown all instances accessing the clustered database:
$ srvctl stop database -d orcl

Using the local instance, MOUNT the database:
$ sqlplus "/ as sysdba"
SQL> startup mount
Enable archiving:
SQL> alter database archivelog;

Re-enable support for clustering by modifying the instance parameter cluster_database to TRUE from the current instance:
SQL> alter system set cluster_database=true scope=spfile sid='orcl1';
Shutdown the local instance:
SQL> shutdown immediate

Bring all instance back up using srvctl:
$ srvctl start database -d orcl
(Optional) Bring any services (i.e. TAF) back up using srvctl:
$ srvctl start service -d orcl

Login to the local instance and verify Archive Log Mode is enabled:
$ sqlplus "/ as sysdba"
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 83
Next log sequence to archive 84
Current log sequence 84

Oracle Clusterware Administration Quick Reference

Posted by Sagar Patil

Sequence of events to bring the cluster database back up:

1.    Start node applications using "srvctl start nodeapps -n (node)"
2.    Start the ASM instance using "srvctl start asm -n (node)"
3.    Start RAC instances using "srvctl start instance -d (database) -i (instance)"
4.    Finish up by bringing our load balanced/TAF service online using "srvctl start service -d orcl -s RAC"

List of nodes and other information for all nodes participating in the cluster:

[oracle@oradb4 oracle]$ olsnodes
oradb4
oradb3
oradb2
oradb1

List all nodes participating in the cluster with their assigned node numbers:

[oracle@oradb4 tmp]$ olsnodes -n
oradb4 1
oradb3 2
oradb2 3
oradb1 4

List all nodes participating in the cluster with the private interconnect assigned to each node:

[oracle@oradb4 tmp]$ olsnodes -p
oradb4 oradb4-priv
oradb3 oradb3-priv
oradb2 oradb2-priv
oradb1 oradb1-pr

Check the health of the Oracle Clusterware daemon processes:

[oracle@oradb4 oracle]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

Query and administer css vote disks :

[root@oradb4 root]# crsctl add css votedisk /u03/oradata/CssVoteDisk.dbf
Now formatting voting disk: /u03/oradata/CssVoteDisk.dbf
Read -1 bytes of 512 at offset 0 in voting device (CssVoteDisk.dbf)
successful addition of votedisk /u03/oradata/CssVoteDisk.dbf

For dynamic state dump of the CRS:

[root@oradb4 root]# crsctl debug statedump crs
dumping State for crs objects
The dynamic state dump information is appended to the crsd log file located in the $ORA_CRS_HOME/log/oradb4/crsd directory.

Verify the Oracle Clusterware version:

[oracle@oradb4 log]$ crsctl query crs softwareversion
CRS software version on node [oradb4] is [10.2.0.0.0]

Verify the current version of Oracle Clusterware being used:

[oracle@oradb4 log]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.0.0]

Analyze database for right statistics

Posted by Sagar Patil

Different DBAs have different views on what percentage to analyze. The Oracle documentation recommends a full analyze of the entire database, which is not possible for most live systems running into terabytes.
In the past I had a performance issue on a database of just over 300 GB. One table, ORDER_DATA, held 400 million rows. That one table pulled the entire system down a number of times, simply because it wasn't properly analyzed and Oracle didn't know the data distribution in the table.
I was struggling to understand where things were going wrong: we were analyzing the entire table every night, just not with hash buckets (histograms), and surprisingly the SQL execution was using the right index.

After spending days and weeks investigating the issue, I reanalyzed the table with the new Oracle API for histograms, and SQL that used to take between 15 and 60 minutes started running in less than 100 milliseconds.

What to look for?
- First check STATSPACK and find the most active tables.
- Analyze the most active tables once a week with 10-15% sampling.
- For BIG tables start with 1% sampling and build up over a period of time.
- I also observed that adding the parallel option to ANALYZE can reduce the time taken significantly.

-- Added to 9i init.ora
-- parallel_automatic_tuning=true
-- parallel_max_servers=16
-- parallel_min_servers=4
-- Changed percent to 1, all idx cols changed degree to 16 from 10
begin
dbms_stats.gather_table_stats(ownname=>'USER',tabname =>'TABLE_NAME',
estimate_percent => 1,method_opt=>'for all indexed columns',
degree=>16,CASCADE=>TRUE);
end;

begin
dbms_stats.gather_schema_stats(ownname=>'USER',
estimate_percent => 20,method_opt=>'for all indexed columns',
degree=>16,CASCADE=>TRUE);
end;

Other Examples         

GATHER_DATABASE_STATS(estimate_percent,block_sample,method_opt,degree, granularity,cascade,stattab,statid, options,statown,gather_sys,no_invalidate,gather_temp,gather_fixed,stattype);
GATHER_INDEX_STATS (ownname,indname,partname,estimate_percent,stattab,statid,statown,degree,granularity,no_invalidate,stattype);
GENERATE_STATS(ownname,objname,organized);
GATHER_SYSTEM_STATS (gathering_mode,interval,stattab,statid,statown);
GATHER_TABLE_STATS (ownname,tabname,partname,estimate_percent,block_sample,method_opt,degree,granularity,cascade,stattab,statid,statown,no_invalidate,stattype);
GATHER_SCHEMA_STATS(ownname,estimate_percent,block_sample,method_opt,degree,granularity,cascade, stattab,statid,options,statown,no_invalidate,gather_temp,gather_fixed);

How to Backup/Export Oracle Optimizer Statistics into Table

Posted by Sagar Patil

Exporting and Importing Statistics

Caveat: Always run the imp/exp utility as the schema user who owns the tables. I wasted a week exporting as DBA for an XYZ user and then importing into a different system under a different username.

Statistics can be exported from the data dictionary to user-owned tables and imported back. This enables you to create multiple versions of statistics for the same schema. You can also copy statistics from one database to another database.

You may want to copy statistics from a production database to a scaled-down test database to look at SQL execution plans.

Note:
Exporting and importing statistics is a distinct concept from the EXP and IMP utilities of the database.

The DBMS_STATS export and import procedures do not use EXP and IMP dump files; they move statistics through a user-created statistics table.

Before exporting statistics, you first need to create a table for holding the statistics.

This statistics table is created using the procedure  DBMS_STATS.CREATE_STAT_TABLE.

After this table is created, then you can export statistics from the data dictionary into your statistics table using the DBMS_STATS.EXPORT_*_STATS procedures.

The statistics can then be imported using the DBMS_STATS.IMPORT_*_STATS procedures.

Note that the optimizer does not use statistics stored in a user-owned table. The only statistics used by the optimizer are the statistics stored in the data dictionary.

In order to have the optimizer use the statistics in a user-owned tables, you must import those statistics into the data dictionary using the statistics import procedures.

In order to move statistics from one database to another, you must first export the statistics on the first database, then copy the statistics table to the second database, using the EXP and IMP utilities or other mechanisms, and finally import the statistics into the second database.

The EXP and IMP utilities export and import optimizer statistics from the database along with the table. One exception is that statistics are not exported with the data if a table has columns with system-generated names.

Restoring Statistics Versus Importing or Exporting Statistics

The functionality for restoring statistics is similar in some respects to the functionality of importing and exporting statistics.

In general, you should use the restore capability when:

  • You want to recover older versions of the statistics. For example, to restore the optimizer behavior to an earlier date.
  • You want the database to manage the retention and purging of statistics histories.

You should use EXPORT/IMPORT_*_STATS procedures when:

  • You want to experiment with multiple sets of statistics and change the values back and forth.
  • You want to move the statistics from one database to another database. For example, moving statistics from a production system to a test system.
  • You want to preserve a known set of statistics for a longer period of time than the desired retention date for restoring statistics.
1. Create the Statistics table.
exec DBMS_STATS.CREATE_STAT_TABLE(ownname =>'SCHEMA_NAME' ,stat_tab => 'STATS_TABLE' ,
tblspace => 'STATS_TABLESPACE');
>>>>>>>> For 10G
exec DBMS_STATS.CREATE_STAT_TABLE(ownname =>'SYSTEM',stat_tab => 'STATS_TABLE');
>>>>>>>>  For 9i and earlier
begin
 DBMS_STATS.CREATE_STAT_TABLE('dba_admin','STATS_TABLE');
end;

2. Export statistics to statistics table

EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('ORIGINAL_SCHEMA' ,'STATS_TABLE',NULL,'SYSTEM');

3. Import statistics into the data dictionary.
exec DBMS_STATS.IMPORT_SCHEMA_STATS('NEW_SCHEMA','STATS_TABLE',NULL,'SYSTEM');

4. Drop the statistics table.
exec DBMS_STATS.DROP_STAT_TABLE('SYSTEM','STATS_TABLE');
FOR 9i
begin
DBMS_STATS.CREATE_STAT_TABLE('dba_admin','STATISTICS_TABLE_060307');
end;
begin
DBMS_STATS.EXPORT_SCHEMA_STATS('SAPBP2' ,'STATISTICS_TABLE_060307',NULL,'DBA_ADMIN');
end;
SQL> exec DBMS_STATS.IMPORT_SCHEMA_STATS('SAGAR','STATISTICS_TABLE_060307',NULL,'SAGAR');
PL/SQL procedure successfully completed.

Monitor export Process >>>>>>>>
select count(*) from &STATS_NAME;
The stats table can grow exponentially, so keep an eye on the table size while the export is active.
select sum(bytes)/1000000 from dba_extents where segment_name='&TABLE_NAME';
Sample statistics from a SAP BW system 4.2 terabytes in size:
Time elapsed for export : 40 mins
Total stats table size : 2 GB
Time elapsed for import :

How to Validate that Stats are reflected after exp/imp

select TABLE_NAME, NUM_ROWS, BLOCKS, EMPTY_BLOCKS,
AVG_SPACE, CHAIN_CNT, AVG_ROW_LEN
from dba_tables where owner='&USER'
Run it at both databases and check that the figures are very similar.

Locate Hard hitting SQL from Statpack Repository

Posted by Sagar Patil

1. Log in to the database as the PERFSTAT user. The queries won't work unless you log in as PERFSTAT.

2. Find the DBID using "select dbid from stats$sql_summary"

3. Locate MIN(SNAP_ID) pBgnSnap & MAX(SNAP_ID) pEndSnap

SELECT   MIN (snap_id),
 MAX (snap_id),
 MIN (snap_time),
 MAX (snap_time)
 FROM   stats$snapshot
 WHERE       TO_NUMBER (TO_CHAR (snap_time, 'HH24')) > 10
 AND TO_NUMBER (TO_CHAR (snap_time, 'HH24')) < 13
 AND TRUNC (snap_time) = TRUNC (SYSDATE)
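The predicates simply keep today's snapshots taken between 10:00 and 13:00 and return the boundary snap IDs. A small Python equivalent of that filter (the snapshot IDs and times are invented):

```python
from datetime import datetime

snaps = {  # snap_id -> snap_time, invented for illustration
    101: datetime(2010, 5, 28, 9, 30),
    102: datetime(2010, 5, 28, 11, 0),
    103: datetime(2010, 5, 28, 12, 45),
}

today = datetime(2010, 5, 28).date()
# TO_CHAR(snap_time,'HH24') > 10 AND < 13, TRUNC(snap_time) = TRUNC(SYSDATE)
window = [sid for sid, t in snaps.items()
          if t.date() == today and 10 < t.hour < 13]
print(min(window), max(window))  # pBgnSnap / pEndSnap
```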

Show All SQL Stmts ordered by Logical Reads

SELECT   e.hash_value "E.HASH_VALUE",
 e.module "Module",
 e.buffer_gets - NVL (b.buffer_gets, 0) "Buffer Gets",
 e.executions - NVL (b.executions, 0) "Executions",
 ROUND (
 DECODE (
 (e.executions - NVL (b.executions, 0)),
 0,
 TO_NUMBER (NULL),
 (e.buffer_gets - NVL (b.buffer_gets, 0))
 / (e.executions - NVL (b.executions, 0))
 ),
 3
 )
 "Gets / Execution",
 ROUND (  100
 * (e.buffer_gets - NVL (b.buffer_gets, 0))
 / sp920.getGets (:pDbID,
 :pInstNum,
 :pBgnSnap,
 :pEndSnap,
 'NO'), 3)
 "Percent of Total",
 ROUND ( (e.cpu_time - NVL (b.cpu_time, 0)) / 1000000, 3) "CPU (s)",
 ROUND ( (e.elapsed_time - NVL (b.elapsed_time, 0)) / 1000000, 3)
 "Elapsed (s)",
 ROUND (e.fetches - NVL (b.fetches, 0)) "Fetches",
 sp920.getSQLText (e.hash_value, 400) "SQL Statement"
 FROM   stats$sql_summary e, stats$sql_summary b
 WHERE       b.snap_id(+) = :pBgnSnap
 AND b.dbid(+) = e.dbid
 AND b.instance_number(+) = e.instance_number
 AND b.hash_value(+) = e.hash_value
 AND b.address(+) = e.address
 AND b.text_subset(+) = e.text_subset
 AND e.snap_id = :pEndSnap
 AND e.dbid = :pDbId
 AND e.instance_number = :pInstNum
ORDER BY   3 DESC

Show SQL Stmts where SQL_TEXT like '%'

SELECT   e.hash_value "E.HASH_VALUE",
 e.module "Module",
 e.buffer_gets - NVL (b.buffer_gets, 0) "Buffer Gets",
 e.executions - NVL (b.executions, 0) "Executions",
 ROUND (
 DECODE (
 (e.executions - NVL (b.executions, 0)),
 0,
 TO_NUMBER (NULL),
 (e.buffer_gets - NVL (b.buffer_gets, 0))
 / (e.executions - NVL (b.executions, 0))
 ),
 3
 )
 "Gets / Execution",
 ROUND (  100
 * (e.buffer_gets - NVL (b.buffer_gets, 0))
 / sp920.getGets (:pDbID,
 :pInstNum,
 :pBgnSnap,
 :pEndSnap,
 'NO'), 3)
 "Percent of Total",
 ROUND ( (e.cpu_time - NVL (b.cpu_time, 0)) / 1000000, 3) "CPU (s)",
 ROUND ( (e.elapsed_time - NVL (b.elapsed_time, 0)) / 1000000, 3)
 "Elapsed (s)",
 ROUND (e.fetches - NVL (b.fetches, 0)) "Fetches",
 sp920.getSQLText (e.hash_value, 400) "SQL Statement"
 FROM   stats$sql_summary e, stats$sql_summary b
 WHERE       b.snap_id(+) = :pBgnSnap
 AND b.dbid(+) = e.dbid
 AND b.instance_number(+) = e.instance_number
 AND b.hash_value(+) = e.hash_value
 AND b.address(+) = e.address
 AND b.text_subset(+) = e.text_subset
 AND e.snap_id = :pEndSnap
 AND e.dbid = 2863128100
 AND e.instance_number = :pInstNum
 AND sp920.getSQLText (e.hash_value, 400) LIKE '%ZPV_DATA%'
ORDER BY   3 DESC

Locate Server Workload from Statspack for days in Past

Posted by Sagar Patil

Change a.statistic# to the respective value.

Stats for Working Hours

select to_char(trunc(b.snap_time),'DD-MM-YYYY') ,statistic#,name, sum(value) from STATS$SYSSTAT A, stats$snapshot B
where a.snap_id=b.snap_id
and trunc(b.snap_time) > trunc(sysdate -30)
and to_char(b.SNAP_TIME,'HH24') > 8
and to_char(b.SNAP_TIME,'HH24') <18
and a.statistic#=54
group by trunc(b.snap_time) ,statistic#,name
order by trunc(b.snap_time)

Locate kind of stats you want to pull from statspack

(select * from STATS$SYSSTAT where name like '%XXX%' )
9 session logical reads
Physical Reads
54 Physical Reads
56 Physical reads direct
58 physical read bytes
39 physical read total bytes
42 physical write total bytes
66 physical write bytes
66 physical writes
CPU Related
355 OS Wait-cpu (latency) time
328 parse time cpu
8 recursive cpu usage
Rollback Related -
176 transaction tables consistent read rollbacks
180 rollbacks only - consistent read gets
181 cleanouts and rollbacks - consistent read gets
187 transaction rollbacks
5 user rollbacks
239 IMU CR rollbacks
186 rollback changes - undo records applied
Sample Report built using SQL in post Stats_Report

Stats for Entire Day

select to_char(trunc(b.snap_time),'DD-MM-YYYY') ,statistic#,name, sum(value) from STATS$SYSSTAT A, stats$snapshot B
where a.snap_id=b.snap_id
and trunc(b.snap_time) > trunc(sysdate -30)
and a.statistic#=54
group by trunc(b.snap_time) ,statistic#,name
order by trunc(b.snap_time)

How to set up RMAN under Windows

Posted by Sagar Patil

Create RMAN Catalogue steps
• Select database for catalogue.
• Create catalogue owner RMANC, default tablespace TOOLS.
• Grant RECOVERY_CATALOG_OWNER role to RMANC.
• Run the RMAN executable, being the same version as the catalogue database codeset and connect to the catalogue as the RMANC user.
• Issue the CREATE CATALOG command.

At RMAN repository Database

SQL>   create user rmanc identified by xxxxxxxx
   Temporary tablespace TEMP
   Default tablespace TOOLS quota unlimited on TOOLS;
SQL>   grant RECOVERY_CATALOG_OWNER to RMANC;
% rman catalog rmanc/xxxxxxxx@catdb
rman>   CREATE CATALOG;

 

At the Target Database which needs to be backed up through RMAN
Before registering a target database, create the target RMAN user in the target database.

SQL>   connect / as sysdba
SQL>   create user RMANT identified by xxxxxxxx
   Default tablespace TOOLS
   Temporary tablespace TEMP;
SQL>   grant SYSDBA,CREATE SESSION to RMANT;

 

Initialisation Parameters
To improve backup performance, the following parameters should be set:
• BACKUP_TAPE_IO_SLAVES = TRUE
For 8i:
• LARGE_POOL_SIZE = (4 * <number> * DB_BLOCK_SIZE * DB_FILE_DIRECT_IO_COUNT * <level>) + (4 * <number> * <size>)
For 9i:
• LARGE_POOL_SIZE = <number> * (16MB + <size>)
The size of a tape buffer is determined by the RMAN parameter BLKSIZE.
The default BLKSIZE is OS-dependent, but performance benefits can be gained by explicitly setting it higher; 4M seems to offer improved performance. When testing a new backup, try differing values, in conjunction with the above calculation for LARGE_POOL_SIZE, to determine the optimal setting.
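The 9i sizing rule above can be turned into a quick sanity-check calculation. A minimal sketch, assuming 4 allocated channels for <number> and a 4 MB tape buffer (the BLKSIZE value suggested above) for <size>; the function name is mine, not Oracle's:

```python
MB = 1024 * 1024

def large_pool_size_9i(channels, tape_buffer_bytes):
    # 9i rule from above: LARGE_POOL_SIZE = <number> * (16MB + <size>)
    return channels * (16 * MB + tape_buffer_bytes)

# Assumed values: 4 channels, 4 MB BLKSIZE tape buffer
print(large_pool_size_9i(4, 4 * MB))  # 83886080 bytes = 80 MB
```

Plug in your own channel count and tape buffer size; the result is the byte value to set for LARGE_POOL_SIZE.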
Net Requirements

RMAN requires a dedicated server connection to the target database.
Targetdb_rman.domain =
   (DESCRIPTION=
      (ADDRESS=(PROTOCOL=TCP)(HOST=host)(PORT=1521))
      (CONNECT_DATA=(SERVICE_NAME=dbname.db_domain)(SERVER=DEDICATED))
   )

The above should be configured on both the target and catalogue servers. The server hosting the target database also requires a Net configuration for the catalogue database, as this is where the RMAN commands are normally issued.
Database registration

To register the target database, issue the following commands.

> rman TARGET rmant/xxxxxxxx@targetdb CATALOG rmanc/xxxxxxxx@cataloguedb
rman> REGISTER DATABASE;
To check successful registration, issue the following command whilst still in RMAN:
RMAN> REPORT SCHEMA;
Report of database schema
File K-bytes    Tablespace           RB segs Datafile Name
---- ---------- -------------------- ------- -------------------
1        524288 SYSTEM               YES     G:\ORACLE\ORADATA\TKNWP\SYSTEM01.DBF
2        524288 UNDOTBS              YES     G:\ORACLE\ORADATA\TKNWP\UNDOTBS01.DBF
3         20480 GENESYS_CONFIG_DATA  NO      I:\ORACLE\ORADATA\TKNWP\GENESYS_CONFIG_DATA_01.DBF
4         20480 GENESYS_LOGS_DATA    NO      G:\ORACLE\ORADATA\TKNWP\GENESYS_LOGS_DATA_01.DBF
5        131072 TOOLS                NO      H:\ORACLE\ORADATA\TKNWP\TOOLS_01.DBF
6         25600 USERS                NO      H:\ORACLE\ORADATA\TKNWP\USERS_01.DBF
7        256000 PERFSTAT             NO      G:\ORACLE\ORADATA\TKNWP\PERFSTAT_01.DBF

 

Post-Recovery/Standby Failover Tasks
If a RESETLOGS has been issued, or a standby database has been activated, this needs to be reflected in the recovery catalogue. Any RMAN operations after these events will fail with the 'target database incarnation not found in recovery catalog' error, because RESETLOGS resets the SCNs. As the database ID is still the same, issue the RESET DATABASE command to align the recovery catalogue with the database.
> rman TARGET rmant/xxxxxxxx@targetdb CATALOG rmanc/xxxxxxxx@cataloguedb
rman> RESET DATABASE;
To check:
rman> list incarnation of database;
Monitoring an RMAN Backup
To view the progress of a backup, the v$ views can be queried. Here is an example of a monitoring script:
Execute this whilst the backup is processing:

SELECT sid, serial#, context, sofar, totalwork,
round(sofar/totalwork*100,2) "% Complete"
FROM v$session_longops
WHERE opname LIKE 'RMAN%'
AND opname NOT LIKE '%aggregate%'
AND totalwork != 0
AND sofar < totalwork
/
SID SERIAL# CONTEXT SOFAR TOTALWORK % Complete
------ ---------- ---------- ---------- ---------- ---------- 
8 19 1 10377 36617 28.34
SQL>/
SID SERIAL# CONTEXT SOFAR TOTALWORK % Complete
---- ---------- ---------- ---------- ---------- ---------- 
8 19 1 21513 36617 58.75
SQL>/
SID SERIAL# CONTEXT SOFAR TOTALWORK % Complete
----- ---------- ---------- ---------- ---------- ---------- 
8 19 1 29641 36617 80.95
SQL>/
SID SERIAL# CONTEXT SOFAR TOTALWORK % Complete
----- ---------- ---------- ---------- ---------- ---------- 
8 19 1 35849 36617 97.9
SQL>/
no rows selected
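The "% Complete" column is plain arithmetic on SOFAR and TOTALWORK; reproducing it outside the database makes the rounding explicit. A small sketch (the function name is illustrative, not part of any Oracle API):

```python
def pct_complete(sofar, totalwork):
    # Same expression as the query: round(sofar/totalwork*100, 2)
    return round(sofar / totalwork * 100, 2)

print(pct_complete(10377, 36617))  # 28.34, the first sample row above
```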

 

The views V$BACKUP_ASYNC_IO and V$BACKUP_SYNC_IO contain detailed information on backup operations. This data is not persistent across instance restarts. If there is data in the V$BACKUP_SYNC_IO view, investigate why the backups are not running asynchronously; check that BACKUP_TAPE_IO_SLAVES is set to TRUE.

How to Monitor an RMAN Backup

Posted by Sagar Patil

Datafiles Backed Up During the Past 24 Hours

SELECT dbfiles||' from '||numfiles "Datafiles backed up",
cfiles "Control Files backed up", spfiles "SPFiles backed up"
FROM (select count(*) numfiles from sys.v_$datafile),
(select count(*) dbfiles
from sys.v_$backup_datafile a, sys.v_$datafile b
where a.file# = b.file#
and a.completion_time > sysdate - 1),
(select count(*) cfiles from sys.v_$backup_datafile
where file# = 0 and completion_time > sysdate - 1),
(select count(*) spfiles from sys.v_$backup_spfile
where completion_time > sysdate - 1) 

Archlog Files Backed Up During the Past 24 Hours

SELECT backedup||' from '||archived "Archlog files backed up",
ondisk "Archlog files still on disk"
FROM (select count(*) archived
from sys.v_$archived_log where completion_time > sysdate - 1),
(select count(*) backedup from sys.v_$archived_log
where backup_count > 0
and completion_time > sysdate - 1),
(select count(*) ondisk from sys.v_$archived_log
where archived = 'YES' and deleted = 'NO')

RMAN Backups Still Running:

SELECT to_char(start_time,'DD-MON-YY HH24:MI') "BACKUP STARTED",
sofar, totalwork,
elapsed_seconds/60 "ELAPSE (Min)",
round(sofar/totalwork*100,2) "Complete%"
FROM sys.v_$session_longops
WHERE compnam = 'dbms_backup_restore'
/

BACKUP STARTED,SOFAR,TOTALWORK,ELAPSE (Min),Complete%
27-JUN-07 09:54,755,45683,2.73333333333333,1.65
27-JUN-07 09:52,1283,10947,4.36666666666667,11.72
27-JUN-07 09:46,11275,11275,0.783333333333333,100
27-JUN-07 09:46,58723,58723,5.73333333333333,100
27-JUN-07 09:46,12363,12363,0.333333333333333,100
27-JUN-07 09:44,11115,11115,4.53333333333333,100
27-JUN-07 09:44,12371,12371,0.183333333333333,100
27-JUN-07 07:34,4706,4706,0.166666666666667,100
27-JUN-07 07:34,83729,83729,118.35,100
27-JUN-07 05:21,8433,8433,0.333333333333333,100
27-JUN-07 05:21,83729,83729,132.25,100
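Assuming progress is roughly linear, SOFAR, TOTALWORK, and the elapsed time from the rows above give a rough time-to-completion estimate for the in-flight backups (the helper name is mine):

```python
def eta_minutes(sofar, totalwork, elapsed_min):
    # Remaining work scaled by the observed rate; assumes linear progress.
    return round(elapsed_min * (totalwork - sofar) / sofar, 1)

# First running backup above: 755 of 45683 done after ~2.73 minutes (164 s).
print(eta_minutes(755, 45683, 164 / 60))  # roughly 162.7 minutes remaining
```

The estimate is crude (tape streaming rates are rarely constant), but it is handy for deciding whether a long-running backup is worth waiting for.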

RAC on Windows/Linux with VMware, FireWire, NFS

Posted by Sagar Patil

Some cheap and easy ways to install RAC on inexpensive hardware

Step-By-Step Installation of RAC on Linux – Single Node (Oracle9i 9.2.0 with OCFS) single_node_oracle9i_920_with_ocfs

RAC Different Test Environments Made Easy.pdf rac_different_test_environments_made_easy_208 from “Plamen Zyumbyulev”
