Streams Performance Data Collection Tools for 10g & 11g

Posted by Sagar Patil

There are different ways to collect performance-related data:

  • Use the Automatic Workload Repository (AWR)/Active Session History (ASH). Take AWR snapshots every hour, or at the desired interval.
  • Install and configure the STRMMON utility if you are on 10gR2 (not available in 11g); see Metalink article 290605.1.
  • Use UTL_SPADV as an alternative to STRMMON on 11gR1 and later.
  • Run the Streams Healthcheck at each database before, during, and after the performance period of interest; available from Metalink article 273674.1.

Use AWR together with STRMMON (10g)/UTL_SPADV (11g) during the performance period of interest.
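
For example, snapshots can be forced manually at the start and end of the interval with the DBMS_WORKLOAD_REPOSITORY package (a minimal sketch, run as a DBA user):

SQL> EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- ... performance period of interest ...
SQL> EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;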

10g : STRMMON Utility

  • Install and Configure STRMMON (10gR2) using “290605.1”
  • Collect STRMMON information regularly, at least once every hour to align with AWR snapshots.

Here is a STRMMON command-line example
– To collect 10 minutes of output from the source and target databases in a single line:

10-second interval x 60 iterations = 600 seconds, or 10 minutes
$ORACLE_HOME/demo$ strmmon -interval 10 -count 60 -user sys -passw <sys_password> -dbname <source database> -sysdba \
-user sys -passw <sys_password> -dbname <target database> -sysdba > strmmon.log

11g : Using the Streams Performance Advisor, i.e. the UTL_SPADV package (replacement for STRMMON)

Install this optional package as the Streams admin user by compiling: SQL> @?/rdbms/admin/utlspadv.sql

How to run UTL_SPADV?
Run the COLLECT_STATS procedure; it will wait until the interval and count are satisfied (default interval = 60 seconds, default count = 10 iterations).
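
A minimal sketch (run as the Streams administrator; the defaults are passed explicitly as named parameters):

SET SERVEROUTPUT ON SIZE UNLIMITED
BEGIN
  UTL_SPADV.COLLECT_STATS(interval => 60,   -- seconds between snapshots
                          num_runs => 10);  -- number of snapshots
END;
/
-- Display the collected statistics
EXEC UTL_SPADV.SHOW_STATS;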

11gR2 RAC Data Guard Sync Issue Between Primary & Standby

Posted by Sagar Patil

I have a two-node RAC Data Guard setup between two remote data centres. After building Data Guard between them I am now coming across strange latency stats.

set pagesize 2000;
COLUMN NAME FORMAT A30;
COLUMN value FORMAT A20;
COLUMN UNIT FORMAT A20;
COLUMN time_computed FORMAT A20;
select name
 , value
 , unit
 , time_computed
 from v$dataguard_stats;

NAME                           VALUE                UNIT                 TIME_COMPUTED
------------------------------ -------------------- -------------------- --------------------
transport lag                  +00 00:02:21         day(2) to second(0)  06/15/2010 13:12:36
 interval
apply lag                      +00 00:02:23         day(2) to second(0)  06/15/2010 13:12:36
 interval
apply finish time              +00 00:00:01.846     day(2) to second(3)  06/15/2010 13:12:36
 interval
estimated startup time         16                   second               06/15/2010 13:12:36

The lag stats range from 2 minutes up to 3 hours.
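
To rule out a plain transport error on the primary, the archive destinations can also be checked directly (a quick sketch; dest_id 2 is assumed to be the standby destination):

select dest_id, status, error
 from v$archive_dest
 where dest_id in (1,2);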

select name
 , value
 , unit
 , time_computed
 from v$dataguard_stats;
 NAME                           VALUE                UNIT                 TIME_COMPUTED
------------------------------ -------------------- -------------------- --------------------
transport lag                  +00 00:53:10         day(2) to second(0)  06/15/2010 14:03:16
 interval
apply lag                      +00 00:53:12         day(2) to second(0)  06/15/2010 14:03:16
 interval
apply finish time              +00 00:00:01.846     day(2) to second(3)  06/15/2010 14:03:16
 interval
estimated startup time         16                   second               06/15/2010 14:03:16

DGMGRL> show database 'STDBY';
Database - STDBY
 Role:            PHYSICAL STANDBY
 Intended State:  APPLY-ON
 Transport Lag:   52 minutes 21 seconds
 Apply Lag:       52 minutes 23 seconds
 Real Time Query: OFF
 Instance(s):
 STDBY1 (apply instance)
 STDBY2
Database Status:
SUCCESS

DGMGRL> show database 'PROD' statusreport;
STATUS REPORT
 INSTANCE_NAME   SEVERITY ERROR_TEXT

DGMGRL> show database 'STDBY' statusreport;
STATUS REPORT
 INSTANCE_NAME   SEVERITY ERROR_TEXT

DGMGRL> show configuration verbose;
Configuration - dataguard
 Protection Mode: MaxPerformance
 Databases:
 PROD  - Primary database
 STDBY - Physical standby database
 Properties:
 FastStartFailoverThreshold      = '30'
 OperationTimeout                = '30'
 FastStartFailoverLagLimit       = '30'
 CommunicationTimeout            = '180'
 FastStartFailoverAutoReinstate  = 'TRUE'
 FastStartFailoverPmyShutdown    = 'TRUE'
 BystandersFollowRoleChange      = 'ALL'
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS

DGMGRL> show database verbose 'PROD';
show database verbose 'STDBY';
Database - PROD
 Role:            PRIMARY
 Intended State:  TRANSPORT-ON
 Instance(s):
 PROD1
 PROD2
 Properties:
 DGConnectIdentifier             = 'PROD'
 ObserverConnectIdentifier       = ''
 LogXptMode                      = 'ASYNC'
 DelayMins                       = '0'
 Binding                         = 'optional'
 MaxFailure                      = '0'
 MaxConnections                  = '1'
 ReopenSecs                      = '300'
 NetTimeout                      = '30'
 RedoCompression                 = 'DISABLE'
 LogShipping                     = 'ON'
 PreferredApplyInstance          = ''
 ApplyInstanceTimeout            = '0'
 ApplyParallel                   = 'AUTO'
 StandbyFileManagement           = 'AUTO'
 ArchiveLagTarget                = '0'
 LogArchiveMaxProcesses          = '5'
 LogArchiveMinSucceedDest        = '1'
 DbFileNameConvert               = ''
 LogFileNameConvert              = ''
 FastStartFailoverTarget         = ''
 InconsistentProperties          = '(monitor)'
 InconsistentLogXptProps         = '(monitor)'
 SendQEntries                    = '(monitor)'
 LogXptStatus                    = '(monitor)'
 RecvQEntries                    = '(monitor)'
 SidName(*)
 StaticConnectIdentifier(*)
 StandbyArchiveLocation(*)
 AlternateLocation(*)
 LogArchiveTrace(*)
 LogArchiveFormat(*)
 TopWaitEvents(*)
 (*) - Please check specific instance for the property value
Database Status:
SUCCESS

DGMGRL>
Database - STDBY
 Role:            PHYSICAL STANDBY
 Intended State:  APPLY-ON
 Transport Lag:   28 minutes 40 seconds
 Apply Lag:       1 hour(s) 13 minutes 18 seconds
 Real Time Query: OFF
 Instance(s):
 STDBY1 (apply instance)
 STDBY2
 Properties:
 DGConnectIdentifier             = 'STDBY'
 ObserverConnectIdentifier       = ''
 LogXptMode                      = 'ARCH'
 DelayMins                       = '0'
 Binding                         = 'OPTIONAL'
 MaxFailure                      = '0'
 MaxConnections                  = '1'
 ReopenSecs                      = '300'
 NetTimeout                      = '30'
 RedoCompression                 = 'DISABLE'
 LogShipping                     = 'ON'
 PreferredApplyInstance          = ''
 ApplyInstanceTimeout            = '0'
 ApplyParallel                   = 'AUTO'
 StandbyFileManagement           = 'AUTO'
 ArchiveLagTarget                = '0'
 LogArchiveMaxProcesses          = '5'
 LogArchiveMinSucceedDest        = '1'
 DbFileNameConvert               = ''
 LogFileNameConvert              = ''
 FastStartFailoverTarget         = ''
 InconsistentProperties          = '(monitor)'
 InconsistentLogXptProps         = '(monitor)'
 SendQEntries                    = '(monitor)'
 LogXptStatus                    = '(monitor)'
 RecvQEntries                    = '(monitor)'
 SidName(*)
 StaticConnectIdentifier(*)
 StandbyArchiveLocation(*)
 AlternateLocation(*)
 LogArchiveTrace(*)
 LogArchiveFormat(*)
 TopWaitEvents(*)
 (*) - Please check specific instance for the property value
Database Status:
SUCCESS

DGMGRL> show database 'PROD' StatusReport
show database 'STDBY' StatusReport
STATUS REPORT
 INSTANCE_NAME   SEVERITY ERROR_TEXT

DGMGRL> STATUS REPORT
 INSTANCE_NAME   SEVERITY ERROR_TEXT

DGMGRL> show database 'PROD' InconsistentProperties
show database 'STDBY' InconsistentProperties
INCONSISTENT PROPERTIES
 INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE

DGMGRL>
INCONSISTENT PROPERTIES
 INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE

DGMGRL> show database 'PROD' InconsistentLogXptProps
show database 'STDBY' InconsistentLogXptProps
INCONSISTENT LOG TRANSPORT PROPERTIES
 INSTANCE_NAME         STANDBY_NAME        PROPERTY_NAME         MEMORY_VALUE         BROKER_VALUE

DGMGRL> show database 'PROD' SendQEntries
PRIMARY_SEND_QUEUE
 STANDBY_NAME       STATUS     RESETLOGS_ID           THREAD              LOG_SEQ       TIME_GENERATED       TIME_COMPLETED    FIRST_CHANGE#     NEXT_CHANGE#       SIZE (KBs)
 STDBY     ARCHIVED        732278995                1                 4924  11/15/2010 09:29:54  11/15/2010 11:00:15        405196462        405290625            25968
 CURRENT        732278995                1                 4925  11/15/2010 11:00:15                             405290625                              6667
 CURRENT        732278995                2                 5016  11/15/2010 10:15:37                             405250777                             14538

DGMGRL> show database 'PROD' LogXptStatus
LOG TRANSPORT STATUS
PRIMARY_INSTANCE_NAME STANDBY_DATABASE_NAME               STATUS
 PROD1              STDBY
 PROD2              STDBY

DGMGRL> show database 'STDBY' RecvQEntries
STANDBY_RECEIVE_QUEUE
 STATUS     RESETLOGS_ID           THREAD              LOG_SEQ       TIME_GENERATED       TIME_COMPLETED    FIRST_CHANGE#     NEXT_CHANGE#       SIZE (KBs)
 PARTIALLY_APPLIED        732278995                1                 4924  11/15/2010 09:29:54  11/15/2010 11:00:15        405196462        405290625            25968

Conclusion:
The stats above show archived logs are sent over immediately, so there is no issue with the transport or the apply on the standby.
Archiving to the standby is done by the archiver (LogXptMode = 'ARCH' on STDBY), so redo is shipped only when a log switch occurs. This is why there is a ‘lag’:
using LGWR-based transport (ASYNC) instead of ARCH will minimize the ‘Lag’ values.
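
A minimal sketch of that change through the broker (assuming STDBY is the destination whose transport mode is being changed; ASYNC keeps MaxPerformance semantics):

DGMGRL> EDIT DATABASE 'STDBY' SET PROPERTY LogXptMode='ASYNC';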

Setting up Bidirectional Streams SCHEMA Replication

Posted by Sagar Patil

The details below configure schema-level replication using the DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS API.

1. Verify Source & Target System for Streams Configuration

CONNECT system/manager@ {STREAMS/STRMREPL}
SET SQLPROMPT "STREAMS> "
set pages 0;
set linesize 2000;
SELECT * FROM global_name; 
SHOW PARAMETER service_names
SHOW PARAMETER global_names

-- Verify database parameters
col name HEADING 'Parameter|Name' format a30
col value HEADING 'Parameter|Value' format a15
col description HEADING 'Description' format a60 word
select name,value,description from v$parameter where name in
 ('aq_tm_processes', 'archive_lag_target', 'job_queue_processes','_job_queue_interval',
 'shared_pool_size', 'sga_max_size', 'sga_target','streams_pool_size',
 'global_names', 'compatible','log_parallelism', 'logmnr_max_persistent_sessions',
 'parallel_max_servers', 'processes', 'sessions'
 );

-- Verify ARCHIVELOG mode and archive log settings
ARCHIVE LOG LIST
SHOW PARAMETER log_archive_format
SHOW PARAMETER log_archive_dest_1
SHOW PARAMETER log_archive_dest_state_1

-- Verify Streams administrator
SELECT username, account_status, default_tablespace
 FROM DBA_USERS
 WHERE username = 'STRMADMIN';
SELECT *
 FROM DBA_STREAMS_ADMINISTRATOR;

-- Verify database link as Streams administrator
CONNECT strmadmin/strmadmin@STREAMS
SELECT db_link
 FROM DBA_DB_LINKS;

-- Verify directory objects
SELECT directory_name, directory_path
 FROM   dba_directories;

2. Setting Up SCOTT Schema Replication

STREAMS>>conn scott/tiger
Connected.
STREAMS>>select * from tab;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
DEPT                           TABLE
EMP                            TABLE
BONUS                          TABLE
SALGRADE                       TABLE
SALES                          TABLE
COSTS                          TABLE
TIMES                          TABLE
CHANNELS                       TABLE
PROMOTIONS                     TABLE
COUNTRIES                      TABLE
PRODUCTS                       TABLE
CATEGORIES_TAB                 TABLE
PRODUCT_REF_LIST_NESTEDTAB     TABLE
SUBCATEGORY_REF_LIST_NESTEDTAB TABLE
CUSTOMERS                      TABLE
EMP_REPL                       TABLE
16 rows selected.

STRMREPL>>conn scott/tiger
Connected.
STRMREPL>>select * from tab;
no rows selected

There are no tables in the destination schema SCOTT@STRMREPL.

set serveroutput on;
set term on;
set lines 200;
BEGIN
 DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
 schema_names => 'SCOTT',
 source_directory_object => 'STREAMS_DIR',
 destination_directory_object => 'STRMREPL_DIR',
 source_database => 'STREAMS',
 destination_database => 'STRMREPL',
 perform_actions => TRUE,
 dump_file_name => 'scott_exp.dmp',
 capture_name => 'SCOTT_CAP',
 capture_queue_table => 'SCOTT_SCHEMA_CAPTURE_QT',
 capture_queue_name => 'SCOTT_SCHEMA_CAPTURE_Q',
 propagation_name => 'SCOTT_SCHEMA_PROPAGATION',
 apply_name => 'SCOTT_SCHEMA_APPLY',
 apply_queue_table => 'strmadmin.SCOTT_SCHEMA_APPLY_QT',
 apply_queue_name => 'strmadmin.SCOTT_SCHEMA_APPLY_Q',
 bi_directional=> TRUE,
 include_ddl => FALSE,
 instantiation=>DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA);
 END;
 /
job finished
PL/SQL procedure successfully completed.

This can take a long time depending on the schema to be replicated.
To monitor progress, have a look at the file size of the expdp dump file "scott_exp.dmp" in STREAMS_DIR & STRMREPL_DIR.

Query dba_recoverable_script_errors to confirm there were no errors:
STREAMS>> select error_message,block_num from dba_recoverable_script_errors;
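
You can also track the overall progress of the generated script (a sketch; assumes the STATUS, DONE_BLOCK_NUM and TOTAL_BLOCKS columns of the DBA_RECOVERABLE_SCRIPT view):

STREAMS>> select script_id, status, done_block_num, total_blocks from dba_recoverable_script;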

STRMREPL>>conn scott/tiger
Connected.
STRMREPL>>select * from tab;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
DEPT                           TABLE
EMP                            TABLE
BONUS                          TABLE
SALGRADE                       TABLE
SALES                          TABLE
COSTS                          TABLE
TIMES                          TABLE
CHANNELS                       TABLE
PROMOTIONS                     TABLE
COUNTRIES                      TABLE
PRODUCTS                       TABLE
CATEGORIES_TAB                 TABLE
PRODUCT_REF_LIST_NESTEDTAB     TABLE
SUBCATEGORY_REF_LIST_NESTEDTAB TABLE
CUSTOMERS                      TABLE
EMP_REPL                       TABLE
16 rows selected.

3. Let’s verify DML changes executed in STREAMS are propagated to the STRMREPL database

STREAMS>>delete from scott.promotions;
2 rows deleted.
STREAMS>>commit;
Commit complete.
STREAMS>>select count(*) from scott.promotions;
 COUNT(*)
----------
 0

At the replicated site:
STRMREPL>>select count(*) from scott.promotions;
 COUNT(*)
----------
 0

Note that DDL changes are not replicated because I set "include_ddl => FALSE" in "MAINTAIN_SCHEMAS".

STREAMS>>drop table scott.emp;
Table dropped.
STREAMS>>desc scott.emp;
ERROR:
ORA-04043: object scott.emp does not exist

STRMREPL>>desc scott.emp;
 Name                                                                                                    Null?    Type
 ----------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------
 EMPNO                                                                                                   NOT NULL NUMBER(4)
 ENAME                                                                                                    VARCHAR2(10)
 JOB                                                                                                      VARCHAR2(9)
 MGR                                                                                                      NUMBER(4)
 HIREDATE                                                                                                 DATE
 SAL                                                                                                      NUMBER(7,2)
 COMM                                                                                                     NUMBER(7,2)
 DEPTNO                                                                                                   NUMBER(2)

Issues Noticed

STREAMS>>select error_message,block_num from dba_recoverable_script_errors;

ERROR_MESSAGE
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 BLOCK_NUM
----------
ORA-24035: AQ agent  is not a subscriber for queue STRMADMIN.SCOTT_SCHEMA_CAPTURE_Q

I fixed it by deleting rows from the following two tables.

SQL> conn / as sysdba
Connected.
STREAMS>> delete from dba_recoverable_script_errors;
STREAMS>> delete from dba_recoverable_script;
STREAMS>> commit;

If needed, you can delete all Streams configuration details except db_links and parameters using the APIs below.

A) Disable propagation schedule
STREAMS>> exec dbms_aqadm.disable_propagation_schedule(queue_name => 'SCOTT_SCHEMA_CAPTURE_Q',destination => 'STRMREPL',destination_queue => 'strmadmin.SCOTT_SCHEMA_APPLY_Q');

B) Drop propagation
STREAMS>> exec dbms_propagation_adm.drop_propagation(propagation_name => 'SCOTT_SCHEMA_PROPAGATION',drop_unused_rule_sets => true);
PL/SQL procedure successfully completed.

C) Remove Streams
STREAMS>> exec  dbms_streams_adm.remove_streams_configuration;
PL/SQL procedure successfully completed.

D) Drop Queues for capture and apply
– drop capture queue
STREAMS>>exec  DBMS_STREAMS_ADM.REMOVE_QUEUE('SCOTT_SCHEMA_CAPTURE_Q',TRUE);
PL/SQL procedure successfully completed.

– drop apply queue
STREAMS>>Exec DBMS_STREAMS_ADM.REMOVE_QUEUE('strmadmin.SCOTT_SCHEMA_APPLY_Q',TRUE);
PL/SQL procedure successfully completed.

Finally, delete any Streams errors reported:
STREAMS>> delete from dba_recoverable_script_errors;
STREAMS>> delete from dba_recoverable_script;
STREAMS>> commit;

Building Tablespace Growth Report from Grid Data

Posted by Sagar Patil

Use the SQL query below against the Grid Control sysman repository database to track the growth of tablespaces over time. The returned data can be loaded into Excel to generate graphs.

SELECT   target_name,
 KEY_VALUE,
 ROLLUP_TIMESTAMP,
 METRIC_COLUMN,
 sample_count,
 ROUND (AVERAGE / 1000) Allocated_GB
 FROM   sysman.MGMT$METRIC_DAILY
 WHERE       metric_name = 'tbspAllocation'
 AND LOWER (TARGET_NAME) LIKE '%oracle_db_name%'  -- placeholder: your database name, in lowercase
 AND KEY_VALUE = 'DATA'
 AND LOWER (TARGET_TYPE) = 'oracle_database'
 AND Metric_column = 'spaceAllocated'
ORDER BY   ROLLUP_TIMESTAMP DESC;

TARGET_NAME    KEY_VALUE    ROLLUP_TIMESTAMP    METRIC_COLUMN    SAMPLE_COUNT    ALLOCATED_GB
Prod.oracle.db.uk       PROD_ARCHIVE_DATA_001    31/10/2010    spaceAllocated    1    395
Prod.oracle.db.uk       PROD_ARCHIVE_DATA_001    19/10/2010    spaceAllocated    1    394
Prod.oracle.db.uk       PROD_ARCHIVE_DATA_001    18/10/2010    spaceAllocated    1    394
Prod.oracle.db.uk       PROD_ARCHIVE_DATA_001    21/09/2010    spaceAllocated    1    378
Prod.oracle.db.uk       PROD_ARCHIVE_DATA_001    09/08/2010    spaceAllocated    1    371
Prod.oracle.db.uk       PROD_ARCHIVE_DATA_001    25/08/2010    spaceAllocated    1    365
Prod.oracle.db.uk       PROD_ARCHIVE_DATA_001    14/08/2010    spaceAllocated    1    357
Prod.oracle.db.uk       PROD_ARCHIVE_DATA_001    08/07/2010    spaceAllocated    1    353
Prod.oracle.db.uk       PROD_ARCHIVE_DATA_001    31/07/2010    spaceAllocated    1    349
Prod.oracle.db.uk       PROD_ARCHIVE_DATA_001    26/06/2010    spaceAllocated    1    330

The following metric_name values are available for target_type=’host’

CPUUsage
DiskActivity
EMDUploadStats
FileMonitoring
Filesystems
Load
Network
PagingActivity
ProcessInfo
ProgramResourceUtilization
Response
Swap_Area_Status
ThreadCPUUsageSummary
TotalDiskUsage
host_storage_history
proc_zombie

The following metric_name values are available for target_type=’oracle_database’

DATABASE_SIZE
Database_Resource_Usage
DeferredTrans
OCMInstrumentation
Recovery_Area
Response
UserAudit
UserBlock
UserLocks
alertLogStatus
all_invalid_objects
archFull
dbjob_status
dumpFull
ha_flashrecovery
ha_recovery
instance_efficiency
instance_throughput
invalid_objects
problemTbsp
segment_advisor_count
service
sga_pool_wastage
sql_response
streams_processes_count
system_response_time_per_call
tbspAllocation
wait_bottlenecks
wait_sess_cls
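
The same pattern works for any of the metrics above; for example, total database size over time (a sketch; the exact metric_column values vary, so inspect sysman.MGMT$METRIC_DAILY first):

SELECT target_name, metric_column, rollup_timestamp, ROUND (average) avg_value
 FROM sysman.MGMT$METRIC_DAILY
 WHERE metric_name = 'DATABASE_SIZE'
 AND LOWER (target_type) = 'oracle_database'
ORDER BY rollup_timestamp DESC;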
Oracle Roles and Users audit report

Posted by Sagar Patil

I have often used the following package to create a list of roles and the users mapped to them. The output was then emailed to business owners and used for user access auditing.

-- Package Name : users_granted_role
-- This package spools users and their respective privileges from the Oracle data dictionary.
CREATE OR REPLACE PACKAGE users_granted_role
IS
 procedure write_op (pv_str in varchar2);
 function user_or_role(pv_grantee in dba_users.username%type) return varchar2;
 function role_pwd(pv_role in dba_roles.role%type)   return varchar2;
 procedure get_users(pv_grantee in dba_roles.role%type,pv_tabstop in out number);
 procedure get_role (pv_role in varchar2);
 Procedure extract_user_role_details;
END users_granted_role;
/
create or replace package body users_granted_role
IS
 output_method varchar2(1) :='S';
 skip_user varchar2(1) := 'N';
 user_to_skip varchar2(20) :='TEST%';
 --
 lg_fptr utl_file.file_type;
 lv_file_or_screen varchar2(1):='S';
 v_tag      VARCHAR2(8);
 v_filename VARCHAR2(30);
 v_today    VARCHAR2(8);
 --
 cursor find_all_roles is
 Select    role
 from    dba_roles;
 --
procedure write_op (pv_str in varchar2) is
 begin
 v_today    := TO_CHAR(sysdate,'YYYYMMDD');
 v_tag      := 'UserPriv';
 v_filename := 'User_Privileges'|| v_today;

 if lv_file_or_screen='S' then
 dbms_output.put_line(v_tag || v_filename||' '||pv_str);
 else
 utl_file.put_line(lg_fptr,pv_str);
 end if;
 exception
 when utl_file.invalid_path  then
 dbms_output.put_line('invalid path');
 when utl_file.invalid_mode  then
 dbms_output.put_line('invalid mode');
 when utl_file.invalid_filehandle  then
 dbms_output.put_line('invalid filehandle');
 when utl_file.invalid_operation  then
 dbms_output.put_line('invalid operation');
 when utl_file.read_error  then
 dbms_output.put_line('read error');
 when utl_file.write_error  then
 dbms_output.put_line('write error');
 when utl_file.internal_error  then
 dbms_output.put_line('internal error');
 when others then
 dbms_output.put_line('ERROR (write_op) => '||sqlcode);
 dbms_output.put_line('MSG (write_op) => '||sqlerrm);

end write_op;
 --
function user_or_role(pv_grantee in dba_users.username%type) return varchar2 is
 --
 cursor c_use (cp_grantee in dba_users.username%type) is
 select  'USER' userrole
 from    dba_users u
 where   u.username=cp_grantee
 union
 select  'ROLE' userrole
 from    dba_roles r
 where   r.role=cp_grantee;
 --
 lv_use c_use%rowtype;
 --
 begin
 open c_use(pv_grantee);
 fetch c_use into lv_use;
 close c_use;
 return lv_use.userrole;
 exception
 when others then
 dbms_output.put_line('ERROR (user_or_role) => '||sqlcode);
 dbms_output.put_line('MSG (user_or_role) => '||sqlerrm);
end user_or_role;
 -----------------
function role_pwd(pv_role in dba_roles.role%type)   return varchar2 is
 --
 cursor c_role(cp_role in dba_roles.role%type) is
 select    r.password_required
 from    dba_roles r
 where    r.role=cp_role;
 --
 lv_role c_role%rowtype;
 --
 begin
 open c_role(pv_role);
 fetch c_role into lv_role;
 close c_role;
 return lv_role.password_required;
 exception       
 when others then null;
 --dbms_output.put_line('ERROR (role_pwd) => '||sqlcode);
 --dbms_output.put_line('MSG (role_pwd) => '||sqlerrm);
end role_pwd;
 --
procedure get_users(pv_grantee in dba_roles.role%type, pv_tabstop in out number) is
  --
  lv_tab          varchar2(50) := '';
  lv_loop         number;
  lv_user_or_role dba_users.username%type;
  --
  cursor c_user (cp_username in dba_role_privs.grantee%type) is
    select d.grantee,
           d.admin_option
    from   dba_role_privs d
    where  d.granted_role = cp_username;
  --
begin
  pv_tabstop := pv_tabstop + 1;
  for lv_loop in 1 .. pv_tabstop loop
    lv_tab := lv_tab || chr(9);
  end loop;

  for lv_user in c_user(pv_grantee) loop
    lv_user_or_role := user_or_role(lv_user.grantee);
    if lv_user_or_role = 'ROLE' then
      if lv_user.grantee = 'PUBLIC' then
        write_op(lv_tab||'Role => '||lv_user.grantee
                 ||' (ADM = '||lv_user.admin_option
                 ||'|PWD = '||role_pwd(lv_user.grantee)||')');
      else
        write_op(lv_tab||'Role => '||lv_user.grantee
                 ||' (ADM = '||lv_user.admin_option
                 ||'|PWD = '||role_pwd(lv_user.grantee)||')'
                 ||' which is granted to =>');
      end if;
      get_users(lv_user.grantee, pv_tabstop);
    else
      if upper(skip_user) = 'Y' and lv_user.grantee like upper(user_to_skip) then
        null;
      else
        write_op(lv_tab||'User => '||lv_user.grantee
                 ||' (ADM = '||lv_user.admin_option||')');
      end if;
    end if;
  end loop;
  pv_tabstop := pv_tabstop - 1;
  lv_tab := '';
exception
  when others then
    dbms_output.put_line('ERROR (get_users) => '||sqlcode);
    dbms_output.put_line('MSG (get_users) => '||sqlerrm);
end get_users;
----
procedure get_role (pv_role in varchar2) is
  --
  cursor c_main (cp_role in varchar2) is
    select p.grantee,
           p.admin_option
    from   dba_role_privs p
    where  p.granted_role = cp_role;
  --
  lv_userrole dba_users.username%type;
  lv_tabstop  number;
  --
begin
  lv_tabstop := 1;
  for lv_main in c_main(pv_role) loop
    lv_userrole := user_or_role(lv_main.grantee);
    if lv_userrole = 'USER' then
      if upper(skip_user) = 'Y' and lv_main.grantee like upper(user_to_skip) then
        null;
      else
        write_op(chr(9)||'User => '||lv_main.grantee
                 ||' (ADM = '||lv_main.admin_option||')');
      end if;
    else
      if lv_main.grantee = 'PUBLIC' then
        write_op(chr(9)||'Role => '||lv_main.grantee
                 ||' (ADM = '||lv_main.admin_option
                 ||'|PWD = '||role_pwd(lv_main.grantee)||')');
      else
        write_op(chr(9)||'Role => '||lv_main.grantee
                 ||' (ADM = '||lv_main.admin_option
                 ||'|PWD = '||role_pwd(lv_main.grantee)||')'
                 ||' which is granted to =>');
      end if;
      get_users(lv_main.grantee, lv_tabstop);
    end if;
  end loop;
exception
  when others then
    dbms_output.put_line('ERROR (get_role) => '||sqlcode);
    dbms_output.put_line('MSG (get_role) => '||sqlerrm);
end get_role;

Procedure extract_user_role_details is
begin
 write_op('Users_granted_role: Release 1.0 - Author : Sagar PATIL on '|| sysdate);
 for role_to_find in find_all_roles loop
 lv_file_or_screen:= upper(output_method);
 write_op(chr(10));
 write_op('Investigating Role => '||upper(role_to_find.role)||' (PWD = '
 ||role_pwd(upper(role_to_find.role))||') which is granted to =>');
 write_op('====================================================================');
 get_role(upper(role_to_find.role));
 end loop;
exception
 when others then
 dbms_output.put_line('ERROR (main) => '||sqlcode);
 dbms_output.put_line('MSG (main) => '||sqlerrm);
end extract_user_role_details;
end;
/

Run it as below

SQL> spool list_of_users.lst
SQL> set serveroutput on size 20000;
SQL> exec users_granted_role.extract_user_role_details;

UserPrivUser_Privileges20101026 Investigating Role => CONNECT (PWD = NO) which is granted to =>
====================================================================
UserPrivUser_Privileges20101026         User => WMSYS (ADM = NO)
UserPrivUser_Privileges20101026         User => SPATIAL_CSW_ADMIN_USR (ADM = NO)
UserPrivUser_Privileges20101026         User => SYSMAN (ADM = NO)
UserPrivUser_Privileges20101026         User => GRIDCONTROL (ADM = NO)
UserPrivUser_Privileges20101026         User => RMAN (ADM = NO)
UserPrivUser_Privileges20101026         User => MDDATA (ADM = NO)
UserPrivUser_Privileges20101026         User => OWBSYS (ADM = YES)
UserPrivUser_Privileges20101026         User => SYSMAN_MDS (ADM = NO)
UserPrivUser_Privileges20101026         User => SYS (ADM = YES)
UserPrivUser_Privileges20101026         User => MDSYS (ADM = NO)
UserPrivUser_Privileges20101026         User => SPATIAL_WFS_ADMIN_USR (ADM = NO)
UserPrivUser_Privileges20101026         User => APEX_030200 (ADM = YES)
UserPrivUser_Privileges20101026         User => SCOTT (ADM = NO)
UserPrivUser_Privileges20101026

UserPrivUser_Privileges20101026 Investigating Role => RESOURCE (PWD = NO) which is granted to =>
====================================================================
UserPrivUser_Privileges20101026         User => WMSYS (ADM = NO)
UserPrivUser_Privileges20101026         User => SCOTT (ADM = NO)
UserPrivUser_Privileges20101026         User => SPATIAL_CSW_ADMIN_USR (ADM = NO)
UserPrivUser_Privileges20101026         User => RMAN (ADM = NO)
UserPrivUser_Privileges20101026         Role => LOGSTDBY_ADMINISTRATOR (ADM = NO|PWD =NO) which is granted to =>
UserPrivUser_Privileges20101026                 User => SYS (ADM = YES)
UserPrivUser_Privileges20101026         User => EXFSYS (ADM = NO)
UserPrivUser_Privileges20101026         User => SPATIAL_WFS_ADMIN_USR (ADM = NO)
UserPrivUser_Privileges20101026         User => CTXSYS (ADM = NO)
UserPrivUser_Privileges20101026         User => OLAPSYS (ADM = NO)
UserPrivUser_Privileges20101026         User => MDSYS (ADM = NO)
UserPrivUser_Privileges20101026         User => SYSMAN_MDS (ADM = NO)
UserPrivUser_Privileges20101026         User => XDB (ADM = NO)
UserPrivUser_Privileges20101026         User => APEX_030200 (ADM = YES)
UserPrivUser_Privileges20101026         User => SYS (ADM = YES)
UserPrivUser_Privileges20101026         User => SYSMAN (ADM = NO)
UserPrivUser_Privileges20101026         User => OUTLN (ADM = NO)
UserPrivUser_Privileges20101026         User => MDDATA (ADM = NO)
PL/SQL procedure successfully completed.
SQL> spool off;

Streams Replication | Monitoring Streams Capture Processes

Posted by Sagar Patil

A capture process is an optional Streams client: an Oracle background process that scans the database redo log to capture DML and DDL changes made to database objects.

1)  Displaying the Queue, Rule Sets, and Status of Each Capture Process
2)  Displaying Change Capture Information About Each Capture Process
3)  Displaying State Change and Message Creation Time for Each Capture Process
4)  Displaying Elapsed Time Performing Capture Operations for Each Capture Process
5)  Displaying Information About Each Downstream Capture Process
6)  Displaying the Registered Redo Log Files for Each Capture Process
7)  Displaying the Redo Log Files that Are Required by Each Capture Process
8)  Displaying SCN Values for Each Redo Log File Used by Each Capture Process
9)  Displaying the Last Archived Redo Entry Available to Each Capture Process
10) Listing the Parameter Settings for Each Capture Process
11) Viewing the Extra Attributes Captured by Each Capture Process
12) Determining the Applied SCN for All Capture Processes in a Database
13) Determining Redo Log Scanning Latency for Each Capture Process
14) Determining Message Enqueuing Latency for Each Capture Process
15) Displaying Information About Rule Evaluations for Each Capture Process

1)    Displaying the Queue, Rule Sets, and Status of Each Capture Process
To display this general information about each capture process in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Capture|Process|Queue' FORMAT A15
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A15
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A15
COLUMN STATUS HEADING 'Capture|Process|Status' FORMAT A15

SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME, NEGATIVE_RULE_SET_NAME, STATUS
FROM DBA_CAPTURE;

Capture         Capture                                         Capture
Process         Process         Positive        Negative        Process
Name            Queue           Rule Set        Rule Set        Status
--------------- --------------- --------------- --------------- ---------------
STREAMS_DATA_C STREAMS_DATA_C RULESET$_54                     ENABLED
 _Q
If the status of a capture process is ABORTED, then you can query the ERROR_NUMBER and ERROR_MESSAGE columns in the DBA_CAPTURE data dictionary view to determine the error.
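
For example (a quick sketch):

COLUMN ERROR_MESSAGE FORMAT A60
SELECT CAPTURE_NAME, ERROR_NUMBER, ERROR_MESSAGE
 FROM DBA_CAPTURE
 WHERE STATUS = 'ABORTED';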

2)    Displaying Change Capture Information About Each Capture Process
The query in this section displays the following information about each capture process in a database:
The current state of the capture process: INITIALIZING, WAITING FOR DICTIONARY REDO, DICTIONARY INITIALIZATION, MINING, LOADING, CAPTURING CHANGES, WAITING FOR REDO, EVALUATING RULE, CREATING LCR, ENQUEUING MESSAGE, PAUSED FOR FLOW CONTROL, SHUTTING DOWN.
The total number of redo entries passed by LogMiner to the capture process for detailed rule evaluation. A capture process converts a redo entry into a message and performs detailed rule evaluation on the message when capture process prefiltering cannot discard the change.
COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
COLUMN PROCESS_NAME HEADING 'Capture|Process|Number' FORMAT A7
COLUMN SID HEADING 'Session|ID' FORMAT 9999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 9999
COLUMN STATE HEADING 'State' FORMAT A27
COLUMN TOTAL_MESSAGES_CAPTURED HEADING 'Redo|Entries|Evaluated|In Detail' FORMAT 9999999
COLUMN TOTAL_MESSAGES_ENQUEUED HEADING 'Total|LCRs|Enqueued' FORMAT 999999

SELECT c.CAPTURE_NAME,
 SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
 c.SID,
 c.SERIAL#,
 c.STATE,
 c.TOTAL_MESSAGES_CAPTURED,
 c.TOTAL_MESSAGES_ENQUEUED
 FROM V$STREAMS_CAPTURE c, V$SESSION s
 WHERE c.SID = s.SID AND
 c.SERIAL# = s.SERIAL#;
Redo
 Capture         Session                               Entries
Capture         Process Session  Serial                             Evaluated
Name            Number       ID  Number State                       In Detail
--------------- ------- ------- ------- --------------------------- ---------
 Total
 LCRs
Enqueued
--------
STREAMS_DATA_C CP01        217    2119 WAITING FOR TRANSACTION      ########
 #######

3)    Displaying State Change and Message Creation Time for Each Capture Process

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
COLUMN STATE HEADING 'State' FORMAT A27
COLUMN STATE_CHANGED HEADING 'State|Change Time'
COLUMN CREATE_MESSAGE HEADING 'Last Message|Create Time'

SELECT CAPTURE_NAME,
 STATE,
 TO_CHAR(STATE_CHANGED_TIME, 'HH24:MI:SS MM/DD/YY') STATE_CHANGED,
 TO_CHAR(CAPTURE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_MESSAGE
 FROM V$STREAMS_CAPTURE;

Capture                                     State             Last Message
Name            State                       Change Time       Create Time
--------------- --------------------------- ----------------- -----------------
STREAMS_DATA_C WAITING FOR TRANSACTION     13:03:11 01/26/10 13:03:13 01/26/10

4)    Displaying Elapsed Time Performing Capture Operations for Each Capture Process
To display this information for each capture process in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
COLUMN ELAPSED_CAPTURE_TIME HEADING 'Elapsed|Capture|Time' FORMAT 99999999.99
COLUMN ELAPSED_RULE_TIME HEADING 'Elapsed|Rule|Evaluation|Time' FORMAT 99999999.99
COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Elapsed|Enqueue|Time' FORMAT 99999999.99
COLUMN ELAPSED_LCR_TIME HEADING 'Elapsed|LCR|Creation|Time' FORMAT 99999999.99
COLUMN ELAPSED_PAUSE_TIME HEADING 'Elapsed|Pause|Time' FORMAT 99999999.99

SELECT CAPTURE_NAME,
 (ELAPSED_CAPTURE_TIME/100) ELAPSED_CAPTURE_TIME,
 (ELAPSED_RULE_TIME/100) ELAPSED_RULE_TIME,
 (ELAPSED_ENQUEUE_TIME/100) ELAPSED_ENQUEUE_TIME,
 (ELAPSED_LCR_TIME/100) ELAPSED_LCR_TIME,
 (ELAPSED_PAUSE_TIME/100) ELAPSED_PAUSE_TIME
 FROM V$STREAMS_CAPTURE;

Elapsed                   Elapsed
 Elapsed         Rule      Elapsed          LCR      Elapsed
Capture              Capture   Evaluation      Enqueue     Creation        Pause
Name                    Time         Time         Time         Time         Time
--------------- ------------ ------------ ------------ ------------ ------------
STREAMS_DATA_C       279.21          .26          .43          .26          .00

5)    Displaying Information About Each Downstream Capture Process
A downstream capture is a capture process that runs on a database other than the source database.
You can display the following information about each downstream capture process in a database by running the query in this section:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Capture|Process|Queue' FORMAT A15
COLUMN STATUS HEADING 'Capture|Process|Status' FORMAT A15
COLUMN USE_DATABASE_LINK HEADING 'Uses|Database|Link?' FORMAT A8

SELECT CAPTURE_NAME,
 SOURCE_DATABASE,
 QUEUE_NAME,
 STATUS,
 USE_DATABASE_LINK
 FROM DBA_CAPTURE
 WHERE CAPTURE_TYPE = 'DOWNSTREAM';

6)    Displaying the Registered Redo Log Files for Each Capture Process
This query displays information about these files for both local capture processes and downstream capture processes.

COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 99999
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A20
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' FORMAT A10
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' FORMAT A10

SELECT r.CONSUMER_NAME,
 r.SOURCE_DATABASE,
 r.SEQUENCE#,
 r.NAME,
 r.DICTIONARY_BEGIN,
 r.DICTIONARY_END
 FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
 WHERE r.CONSUMER_NAME = c.CAPTURE_NAME;

Capture                                                  Dictionary Dictionary
Process         Source     Sequence Archived Redo Log    Build      Build
Name            Database     Number File Name            Begin      End
--------------- ---------- -------- -------------------- ---------- ----------
STREAMS_DATA_C   PROD         7175 /mnt/logs/oradata/   NO         NO
 PROD/arch/1_7175_732278995.arc

STREAMS_DATA_C   PROD         7176 /mnt/logs/oradata/   NO         NO
 PROD/arch/1_7176_732278995.arc

STREAMS_DATA_C   PROD         7177 /mnt/logs/oradata/   NO         NO
 PROD/arch/1_7177_732278995.arc

7)    Displaying the Redo Log Files that Are Required by Each Capture Process
A capture process needs the redo log file that includes the required checkpoint SCN, and all subsequent redo log files.
You can query the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view to determine the required checkpoint SCN for a capture process. Redo log files prior to the redo log file that contains the required checkpoint SCN are no longer needed by the capture process. These redo log files can be stored offline if they are no longer needed for any other purpose. If you reset the start SCN for a capture process to a lower value in the future, then these redo log files might be needed.
To display this information about each required archive redo log file in a database, run the following query:

COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 99999
COLUMN NAME HEADING 'Required|Archived Redo Log|File Name' FORMAT A40

SELECT r.CONSUMER_NAME,
 r.SOURCE_DATABASE,
 r.SEQUENCE#,
 r.NAME
 FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
 WHERE r.CONSUMER_NAME =  c.CAPTURE_NAME AND
 r.NEXT_SCN      >= c.REQUIRED_CHECKPOINT_SCN;

Capture                             Required
Process         Source     Sequence Archived Redo Log
Name            Database     Number File Name
--------------- ---------- -------- ----------------------------------------
STREAMS_DATA_C PROD        13418 /mnt/logs/oradata/PROD/arch/2_13418_732278995.arc
STREAMS_DATA_C PROD        10762 /mnt/logs/oradata/PROD/arch/1_10762_732278995.arc

8)    Displaying SCN Values for Each Redo Log File Used by Each Capture Process:

You can display information about the SCN values for archived redo log files that are registered for each capture process in a database by running the query in this section.
This query displays the SCN values for these files for both local capture processes and downstream capture processes, and it also identifies redo log files that are no longer needed by any capture process at the local database.
To display this information about each registered archive redo log file in a database, run the following query:

COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A25
COLUMN FIRST_SCN HEADING 'First SCN' FORMAT 99999999999
COLUMN NEXT_SCN HEADING 'Next SCN' FORMAT 99999999999
COLUMN PURGEABLE HEADING 'Purgeable?' FORMAT A10

SELECT r.CONSUMER_NAME,
 r.NAME,
 r.FIRST_SCN,
 r.NEXT_SCN,
 r.PURGEABLE
 FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
 WHERE r.CONSUMER_NAME = c.CAPTURE_NAME;

Capture
Process         Archived Redo Log
Name            File Name                    First SCN     Next SCN Purgeable?
--------------- ------------------------- ------------ ------------ ----------
STREAMS_DATA_C /mnt/logs/oradata/PROD/           7499664298   7499908128 NO
 arch/1_7175_732278995.arc

STREAMS_DATA_C /mnt/logs/oradata/PROD/           7499908128   7500009109 NO
 arch/1_7176_732278995.arc

9) Displaying the Last Archived Redo Entry Available to Each Capture Process
For a local capture process, the last archived redo entry available is the last entry from the online redo log flushed to an archived log file.
For a downstream capture process, the last archived redo entry available is the redo entry with the most recent SCN in the last archived log file added to the LogMiner session used by the capture process.

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A20
COLUMN LOGMINER_ID HEADING 'LogMiner ID' FORMAT 9999
COLUMN AVAILABLE_MESSAGE_NUMBER HEADING 'Last Redo SCN' FORMAT 9999999999
COLUMN AVAILABLE_MESSAGE_CREATE_TIME HEADING 'Time of|Last Redo SCN'

SELECT CAPTURE_NAME,
 LOGMINER_ID,
 AVAILABLE_MESSAGE_NUMBER,
 TO_CHAR(AVAILABLE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY')
 AVAILABLE_MESSAGE_CREATE_TIME
 FROM V$STREAMS_CAPTURE;

Capture                                        Time of
Name                 LogMiner ID Last Redo SCN Last Redo SCN
-------------------- ----------- ------------- -----------------
STREAMS_DATA_C                9    8087814747 23:13:18 10/26/10

10)    Listing the Parameter Settings for Each Capture Process
The following query displays the current setting for each capture process parameter for each capture process in a database:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A25
COLUMN PARAMETER HEADING 'Parameter' FORMAT A25
COLUMN VALUE HEADING 'Value' FORMAT A10
COLUMN SET_BY_USER HEADING 'Set by User?' FORMAT A15

SELECT CAPTURE_NAME,
 PARAMETER,
 VALUE,
 SET_BY_USER 
 FROM DBA_CAPTURE_PARAMETERS;

Capture
Process
Name                      Parameter                 Value      Set by User?
------------------------- ------------------------- ---------- ---------------
STREAMS_DATA_C           PARALLELISM               1          NO
STREAMS_DATA_C           STARTUP_SECONDS           0          NO
STREAMS_DATA_C           TRACE_LEVEL               0          NO
STREAMS_DATA_C           TIME_LIMIT                INFINITE   NO
STREAMS_DATA_C           MESSAGE_LIMIT             INFINITE   NO
STREAMS_DATA_C           MAXIMUM_SCN               INFINITE   NO
STREAMS_DATA_C           WRITE_ALERT_LOG           Y          NO
STREAMS_DATA_C           DISABLE_ON_LIMIT          N          NO
STREAMS_DATA_C           DOWNSTREAM_REAL_TIME_MINE Y          NO
STREAMS_DATA_C           IGNORE_TRANSACTION                   NO
STREAMS_DATA_C           MESSAGE_TRACKING_FREQUENC 2000000    NO
 Y

STREAMS_DATA_C           IGNORE_UNSUPPORTED_TABLE  *          NO
STREAMS_DATA_C           SKIP_AUTOFILTERED_TABLE_D Y          NO
 DL

STREAMS_DATA_C           SPLIT_THRESHOLD           1800       NO
STREAMS_DATA_C           MERGE_THRESHOLD           60         NO
STREAMS_DATA_C           CAPTURE_SEQUENCE_NEXTVAL  N          NO
STREAMS_DATA_C           XOUT_CLIENT_EXISTS        N          NO
STREAMS_DATA_C           EXCLUDEUSER                          NO
STREAMS_DATA_C           EXCLUDEUSERID                        NO
STREAMS_DATA_C           EXCLUDETRANS                         NO
STREAMS_DATA_C           GETREPLICATES             N          NO
STREAMS_DATA_C           GETAPPLOPS                Y          NO
STREAMS_DATA_C           CAPTURE_IDKEY_OBJECTS     N          NO
STREAMS_DATA_C           MAX_SGA_SIZE              INFINITE   NO

11)    Viewing the Extra Attributes Captured by Each Capture Process
You can use the INCLUDE_EXTRA_ATTRIBUTE procedure in the DBMS_CAPTURE_ADM package to instruct a capture process to capture one or more extra attributes from the redo log.
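For example, to instruct a capture process to capture the username with each LCR (a sketch; the capture process name is the one used throughout this post):

BEGIN
  DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
    capture_name   => 'STREAMS_DATA_C',
    attribute_name => 'username',
    include        => TRUE);
END;
/
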
The following query displays the extra attributes included in the LCRs captured by each capture process in the local database:

COLUMN CAPTURE_NAME HEADING 'Capture Process' FORMAT A20
COLUMN ATTRIBUTE_NAME HEADING 'Attribute Name' FORMAT A15
COLUMN INCLUDE HEADING 'Include Attribute in LCRs?' FORMAT A30

SELECT CAPTURE_NAME, ATTRIBUTE_NAME, INCLUDE
 FROM DBA_CAPTURE_EXTRA_ATTRIBUTES
 ORDER BY CAPTURE_NAME;

Capture Process      Attribute Name  Include Attribute in LCRs?
-------------------- --------------- ------------------------------
STREAMS_DATA_C      THREAD#         NO
STREAMS_DATA_C      ROW_ID          NO
STREAMS_DATA_C      TX_NAME         NO
STREAMS_DATA_C      SERIAL#         NO
STREAMS_DATA_C      USERNAME        NO
STREAMS_DATA_C      SESSION#        NO

12) Determining the Applied SCN for All Capture Processes in a Database
The applied system change number (SCN) for a capture process is the SCN of the most recent message dequeued by the relevant apply processes. All changes below this applied SCN have been dequeued by all apply processes that apply changes captured by the capture process.

COLUMN CAPTURE_NAME HEADING 'Capture Process Name' FORMAT A30
COLUMN APPLIED_SCN HEADING 'Applied SCN' FORMAT 99999999999

SELECT   CAPTURE_NAME, APPLIED_SCN FROM DBA_CAPTURE;

Capture Process      APPLIED_SCN
-------------------- -----------
STREAMS_DATA_C       8087783293

13) Determining Redo Log Scanning Latency for Each Capture Process

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN LAST_STATUS HEADING 'Seconds Since|Last Status' FORMAT 999999
COLUMN CAPTURE_TIME HEADING 'Current|Process|Time'
COLUMN CREATE_TIME HEADING 'Message|Creation Time' FORMAT 999999

SELECT CAPTURE_NAME,
 ((SYSDATE - CAPTURE_MESSAGE_CREATE_TIME)*86400) LATENCY_SECONDS,
 ((SYSDATE - CAPTURE_TIME)*86400) LAST_STATUS,
 TO_CHAR(CAPTURE_TIME, 'HH24:MI:SS MM/DD/YY') CAPTURE_TIME,     
 TO_CHAR(CAPTURE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_TIME
 FROM V$STREAMS_CAPTURE;

Capture    Latency               Current
Process         in Seconds Since Process           Message
Name       Seconds   Last Status Time              Creation Time
---------- ------- ------------- ----------------- -----------------
INFO_CONTR       2             1 23:21:36 10/26/10 23:21:35 10/26/10
ACT_C

The “Latency in Seconds” returned by this query is the difference between the current time (SYSDATE) and the “Message Creation Time.” The “Seconds Since Last Status” returned by this query is the difference between the current time (SYSDATE) and the “Current Process Time.”

14)    Determining Message Enqueuing Latency for Each Capture Process
Run the following query to determine the message enqueuing latency for each capture process:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN CREATE_TIME HEADING 'Message Creation|Time' FORMAT A20
COLUMN ENQUEUE_TIME HEADING 'Enqueue Time' FORMAT A20
COLUMN ENQUEUE_MESSAGE_NUMBER HEADING 'Message|Number' FORMAT 999999

SELECT CAPTURE_NAME,
 (ENQUEUE_TIME-ENQUEUE_MESSAGE_CREATE_TIME)*86400 LATENCY_SECONDS,
 TO_CHAR(ENQUEUE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_TIME,
 TO_CHAR(ENQUEUE_TIME, 'HH24:MI:SS MM/DD/YY') ENQUEUE_TIME,
 ENQUEUE_MESSAGE_NUMBER
 FROM V$STREAMS_CAPTURE;

Capture    Latency
Process         in Message Creation                          Message
Name       Seconds Time                 Enqueue Time          Number
---------- ------- -------------------- -------------------- -------
STREAMS_DATA   -3 23:22:25 10/26/10    23:22:22 10/26/10    #######

15)    Displaying Information About Rule Evaluations for Each Capture Process

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
COLUMN TOTAL_PREFILTER_DISCARDED HEADING 'Prefilter|Changes|Discarded'
 FORMAT 9999999999
COLUMN TOTAL_PREFILTER_KEPT HEADING 'Prefilter|Changes|Kept' FORMAT 9999999999
COLUMN TOTAL_PREFILTER_EVALUATIONS HEADING 'Prefilter|Evaluations'
 FORMAT 9999999999
COLUMN UNDECIDED HEADING 'Undecided|After|Prefilter' FORMAT 9999999999
COLUMN TOTAL_FULL_EVALUATIONS HEADING 'Full|Evaluations' FORMAT 9999999999

SELECT CAPTURE_NAME,
 TOTAL_PREFILTER_DISCARDED,
 TOTAL_PREFILTER_KEPT,
 TOTAL_PREFILTER_EVALUATIONS,
 (TOTAL_PREFILTER_EVALUATIONS -
 (TOTAL_PREFILTER_KEPT + TOTAL_PREFILTER_DISCARDED)) UNDECIDED,
 TOTAL_FULL_EVALUATIONS
 FROM V$STREAMS_CAPTURE;

Prefilter   Prefilter               Undecided
Capture            Changes     Changes   Prefilter       After        Full
Name             Discarded        Kept Evaluations   Prefilter Evaluations
--------------- ---------- ----------- ----------- ----------- -----------
STREAMS_DATA_C   53128265     1662026    54790291           0         801

The total number of prefilter evaluations equals the sum of the prefilter changes discarded, the prefilter changes kept, and the undecided changes.

Streams Replication | Monitoring Streams Queues and Propagations

Posted by Sagar Patil

This post lists sample queries to monitor Streams queues and propagations.

1)       Monitoring ANYDATA Queues and Messaging
2)       Monitoring Buffered Queues
3)       Monitoring Streams Propagations and Propagation Jobs

1)        Monitoring ANYDATA Queues and Messaging

a)     Displaying the ANYDATA Queues in a Database
b)     Viewing the Messaging Clients in a Database
c)     Viewing Message Notifications
d)     Determining the Consumer of Each User-Enqueued Message in a Queue
e)     Viewing the Contents of User-Enqueued Messages in a Queue

a)     Displaying the ANYDATA Queues in a Database : To display all of the ANYDATA queues in a database, run the following query:

COLUMN OWNER HEADING 'Owner' FORMAT A10
COLUMN NAME HEADING 'Queue Name' FORMAT A28
COLUMN QUEUE_TABLE HEADING 'Queue Table' FORMAT A22
COLUMN USER_COMMENT HEADING 'Comment' FORMAT A15

SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, q.USER_COMMENT
 FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
 WHERE t.OBJECT_TYPE = 'SYS.ANYDATA' AND
 q.QUEUE_TABLE = t.QUEUE_TABLE AND
 q.OWNER       = t.OWNER;

An exception queue is created automatically when you create an ANYDATA queue.
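
To list those exception queues as well (a sketch; exception queues have QUEUE_TYPE = 'EXCEPTION_QUEUE' in DBA_QUEUES):

SELECT OWNER, NAME, QUEUE_TABLE
 FROM DBA_QUEUES
 WHERE QUEUE_TYPE = 'EXCEPTION_QUEUE';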

b) Viewing the Messaging Clients in a Database : Run the following query to view the messaging clients in a database:

COLUMN STREAMS_NAME HEADING 'Messaging|Client' FORMAT A25
COLUMN QUEUE_OWNER HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A18
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A11
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A11

SELECT STREAMS_NAME,
 QUEUE_OWNER,
 QUEUE_NAME,
 RULE_SET_NAME,
 NEGATIVE_RULE_SET_NAME
 FROM DBA_STREAMS_MESSAGE_CONSUMERS;

c) Viewing Message Notifications : You can configure a message notification to send a notification when a message that can be dequeued by a messaging client is enqueued into a queue.
The notification can be sent to an email address, to an HTTP URL, or to a PL/SQL procedure. Run the following query to view the message notifications configured in a database:

COLUMN STREAMS_NAME HEADING 'Messaging|Client' FORMAT A10
COLUMN QUEUE_OWNER HEADING 'Queue|Owner' FORMAT A5
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A20
COLUMN NOTIFICATION_TYPE HEADING 'Notification|Type' FORMAT A15
COLUMN NOTIFICATION_ACTION HEADING 'Notification|Action' FORMAT A25

SELECT STREAMS_NAME,
 QUEUE_OWNER,
 QUEUE_NAME,
 NOTIFICATION_TYPE,
 NOTIFICATION_ACTION
 FROM DBA_STREAMS_MESSAGE_CONSUMERS
 WHERE NOTIFICATION_TYPE IS NOT NULL;

d) Determining the Consumer of Each User-Enqueued Message in a Queue : To determine the consumer for each user-enqueued message in a queue, query AQ$queue_table_name in the queue owner’s schema, where queue_table_name is the name of the queue table.  For example, to find the consumers of the user-enqueued messages in the oe_q_table_any queue table, run the following query:

COLUMN MSG_ID HEADING 'Message ID' FORMAT 9999
COLUMN MSG_STATE HEADING 'Message State' FORMAT A13
COLUMN CONSUMER_NAME HEADING 'Consumer' FORMAT A30

SELECT MSG_ID, MSG_STATE, CONSUMER_NAME
 FROM AQ$OE_Q_TABLE_ANY;

e) Viewing the Contents of User-Enqueued Messages in a Queue : In an ANYDATA queue, to view the contents of a payload that is encapsulated within an ANYDATA payload, you query the queue table using the Access<data_type> member functions of the ANYDATA type (AccessNumber, AccessVarchar2, and so on), where <data_type> is the type of payload to view. For example, to view the contents of a payload of type NUMBER in a queue with a queue table named oe_q_table_any, run the following query as the queue owner:

SELECT qt.user_data.AccessNumber() "Numbers in Queue"
 FROM strmadmin.oe_q_table_any qt;

Similarly, to view the contents of a payload of type VARCHAR2 in a queue with a queue table named oe_q_table_any, run the following query:

SELECT qt.user_data.AccessVarchar2() "Varchar2s in Queue"
 FROM strmadmin.oe_q_table_any qt;

2)       Monitoring Buffered Queues

Buffered queues are stored in the Streams pool, a portion of memory in the System Global Area (SGA) used by Streams. In a Streams environment, LCRs captured by a capture process are always stored in the buffered queue of an ANYDATA queue. Buffered queues enable Oracle databases to optimize messaging by staging messages in the SGA instead of always storing them in a queue table. Messages in a buffered queue can spill from memory if they have been staged in the buffered queue for a period of time without being dequeued, or if there is not enough space in memory to hold all of the messages. Messages that spill from memory are stored in the appropriate queue table.
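
Because buffered queues live in the Streams pool, it is worth confirming how much memory the pool currently has (a quick sketch against v$sgastat):

SELECT pool, name, ROUND (bytes/1024/1024) size_mb
 FROM v$sgastat
 WHERE pool = 'streams pool';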

The following sections describe queries that monitor buffered queues:

i) Determining the Number of Messages in Each Buffered Queue

COLUMN QUEUE_SCHEMA HEADING 'Queue Owner' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999

SELECT QUEUE_SCHEMA,
 QUEUE_NAME,
 (NUM_MSGS - SPILL_MSGS) MEM_MSG,
 SPILL_MSGS,
 NUM_MSGS
 FROM V$BUFFERED_QUEUES;

ii) Viewing the Capture Processes for the LCRs in Each Buffered Queue
A capture process is a queue publisher that enqueues captured messages into a buffered queue. These LCRs can be propagated to other queues subsequently.
By querying the V$BUFFERED_PUBLISHERS dynamic performance view, you can display each capture process that captured the LCRs in the buffered queue. These LCRs might have been captured at the local database, or they might have been captured at a remote database and propagated to the queue specified in the query.

COLUMN SENDER_NAME HEADING 'Capture|Process' FORMAT A13
COLUMN SENDER_ADDRESS HEADING 'Sender Queue' FORMAT A27
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN CNUM_MSGS HEADING 'Number|of LCRs|Enqueued' FORMAT 99999999
COLUMN LAST_ENQUEUED_MSG HEADING 'Last|Enqueued|LCR' FORMAT 99999999

SELECT SENDER_NAME,
 SENDER_ADDRESS,
 QUEUE_NAME,       
 CNUM_MSGS,
 LAST_ENQUEUED_MSG
 FROM V$BUFFERED_PUBLISHERS;

iii) Displaying Information About Propagations that Send Buffered Messages :
The query in this section displays information about each propagation that sends buffered messages from a buffered queue in the local database:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A10
COLUMN SCHEDULE_STATUS HEADING 'Schedule Status' FORMAT A20
SELECT p.PROPAGATION_NAME,
 s.QUEUE_SCHEMA,
 s.QUEUE_NAME,
 s.DBLINK,
 s.SCHEDULE_STATUS
 FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
 WHERE p.DESTINATION_DBLINK = s.DBLINK AND
 p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
 p.SOURCE_QUEUE_NAME  = s.QUEUE_NAME;

iv) Displaying the Number of Messages and Bytes Sent By Propagations
The query in this section displays the number of messages and the number of bytes sent by each propagation that sends buffered messages from a buffered queue in the local database:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A10
COLUMN TOTAL_MSGS HEADING 'Total|Messages' FORMAT 99999999
COLUMN TOTAL_BYTES HEADING 'Total|Bytes' FORMAT 99999999

SELECT p.PROPAGATION_NAME,
 s.QUEUE_NAME,
 s.DBLINK,
 s.TOTAL_MSGS,
 s.TOTAL_BYTES
 FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
 WHERE p.DESTINATION_DBLINK = s.DBLINK AND
 p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
 p.SOURCE_QUEUE_NAME  = s.QUEUE_NAME;

v) Displaying Performance Statistics for Propagations that Send Buffered Messages
The query in this section displays the amount of time that a propagation sending buffered messages spends performing various tasks. Each propagation sends messages from the source queue to the destination queue. Specifically, the query displays the dequeue time, the pickle time (time spent converting messages into a format suitable for network transfer), and the total propagation time, each in seconds:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A13
COLUMN DBLINK HEADING 'Database|Link' FORMAT A9
COLUMN ELAPSED_DEQUEUE_TIME HEADING 'Dequeue|Time' FORMAT 99999999.99
COLUMN ELAPSED_PICKLE_TIME HEADING 'Pickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_PROPAGATION_TIME HEADING 'Propagation|Time' FORMAT 99999999.99

SELECT p.PROPAGATION_NAME,
 s.QUEUE_NAME,
 s.DBLINK,
 (s.ELAPSED_DEQUEUE_TIME / 100) ELAPSED_DEQUEUE_TIME,
 (s.ELAPSED_PICKLE_TIME / 100) ELAPSED_PICKLE_TIME,
 (s.ELAPSED_PROPAGATION_TIME / 100) ELAPSED_PROPAGATION_TIME
 FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
 WHERE p.DESTINATION_DBLINK = s.DBLINK AND
 p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
 p.SOURCE_QUEUE_NAME  = s.QUEUE_NAME;

vi) Viewing the Propagations Dequeuing Messages from Each Buffered Queue
Propagations are queue subscribers that can dequeue messages from a queue. By querying the V$BUFFERED_SUBSCRIBERS dynamic performance view, you can display all the propagations that can dequeue buffered messages from a queue.

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN SUBSCRIBER_ADDRESS HEADING 'Destination|Database' FORMAT A11
COLUMN CURRENT_ENQ_SEQ HEADING 'Current|Enqueued|Sequence' FORMAT 99999999
COLUMN LAST_BROWSED_SEQ HEADING 'Last|Browsed|Sequence' FORMAT 99999999
COLUMN LAST_DEQUEUED_SEQ HEADING 'Last|Dequeued|Sequence' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Number of|Messages|in Queue|(Current)' FORMAT 99999999
COLUMN TOTAL_SPILLED_MSG HEADING 'Number of|Spilled|Messages|(Cumulative)' FORMAT 99999999

SELECT p.PROPAGATION_NAME,
 s.SUBSCRIBER_ADDRESS,
 s.CURRENT_ENQ_SEQ,
 s.LAST_BROWSED_SEQ,    
 s.LAST_DEQUEUED_SEQ,
 s.NUM_MSGS, 
 s.TOTAL_SPILLED_MSG
FROM DBA_PROPAGATION p, V$BUFFERED_SUBSCRIBERS s, V$BUFFERED_QUEUES q
WHERE q.QUEUE_ID = s.QUEUE_ID AND
 p.SOURCE_QUEUE_OWNER = q.QUEUE_SCHEMA AND
 p.SOURCE_QUEUE_NAME = q.QUEUE_NAME AND
 p.DESTINATION_DBLINK = s.SUBSCRIBER_ADDRESS;

vii) Displaying Performance Statistics for Propagations that Receive Buffered Messages
The query in this section displays the amount of time that each propagation receiving buffered messages spends performing various tasks. Each propagation receives the messages and enqueues them into the destination queue for the propagation. Specifically, the query displays the unpickle time, the rule evaluation time, and the enqueue time, each in seconds:

COLUMN SRC_QUEUE_NAME HEADING 'Source|Queue|Name' FORMAT A20
COLUMN SRC_DBNAME HEADING 'Source|Database' FORMAT A15
COLUMN ELAPSED_UNPICKLE_TIME HEADING 'Unpickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_RULE_TIME HEADING 'Rule|Evaluation|Time' FORMAT 99999999.99
COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Enqueue|Time' FORMAT 99999999.99

SELECT SRC_QUEUE_NAME,
 SRC_DBNAME,
 (ELAPSED_UNPICKLE_TIME / 100) ELAPSED_UNPICKLE_TIME,
 (ELAPSED_RULE_TIME / 100) ELAPSED_RULE_TIME,
 (ELAPSED_ENQUEUE_TIME / 100) ELAPSED_ENQUEUE_TIME
 FROM V$PROPAGATION_RECEIVER;

viii) Viewing the Apply Processes Dequeuing Messages from Each Buffered Queue
Apply processes are queue subscribers that can dequeue messages from a queue. By querying the V$BUFFERED_SUBSCRIBERS dynamic performance view, you can display all the apply processes that can dequeue messages from a queue.

COLUMN SUBSCRIBER_NAME HEADING 'Apply Process' FORMAT A16
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN LAST_DEQUEUED_SEQ HEADING 'Last|Dequeued|Sequence' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Number of|Messages|in Queue|(Current)' FORMAT 99999999
COLUMN TOTAL_SPILLED_MSG HEADING 'Number of|Spilled|Messages|(Cumulative)' FORMAT 99999999

SELECT s.SUBSCRIBER_NAME,
 q.QUEUE_SCHEMA,
 q.QUEUE_NAME,
 s.LAST_DEQUEUED_SEQ,
 s.NUM_MSGS,
 s.TOTAL_SPILLED_MSG
FROM V$BUFFERED_QUEUES q, V$BUFFERED_SUBSCRIBERS s, DBA_APPLY a
WHERE q.QUEUE_ID = s.QUEUE_ID AND
 s.SUBSCRIBER_ADDRESS IS NULL AND
 s.SUBSCRIBER_NAME = a.APPLY_NAME;

3)       Monitoring Streams Propagations and Propagation Jobs

The following sections contain queries that you can run to display information about propagations and propagation jobs:
a) Displaying the Queues and Database Link for Each Propagation
b) Determining the Source Queue and Destination Queue for Each Propagation
c) Determining the Rule Sets for Each Propagation
d) Displaying the Schedule for a Propagation Job
e) Determining the Total Number of Messages and Bytes Propagated

a) Displaying the Queues and Database Link for Each Propagation
You can display information about each propagation by querying the DBA_PROPAGATION data dictionary view. This view contains information about each propagation whose source queue is at the local database.

COLUMN PROPAGATION_NAME        HEADING 'Propagation|Name'   FORMAT A19
COLUMN SOURCE_QUEUE_NAME       HEADING 'Source|Queue|Name'  FORMAT A17
COLUMN DESTINATION_DBLINK      HEADING 'Database|Link'      FORMAT A9
COLUMN DESTINATION_QUEUE_NAME  HEADING 'Dest|Queue|Name'    FORMAT A15
COLUMN STATUS                  HEADING 'Status'             FORMAT A8
COLUMN QUEUE_TO_QUEUE          HEADING 'Queue-|to-|Queue?'  FORMAT A6

SELECT PROPAGATION_NAME,
 SOURCE_QUEUE_NAME,
 DESTINATION_DBLINK,
 DESTINATION_QUEUE_NAME,
 STATUS,
 QUEUE_TO_QUEUE
 FROM DBA_PROPAGATION;

b) Determining the Source Queue and Destination Queue for Each Propagation
You can determine the source queue and destination queue for each propagation by querying the DBA_PROPAGATION data dictionary view.

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A20
COLUMN SOURCE_QUEUE_OWNER HEADING 'Source|Queue|Owner' FORMAT A10
COLUMN "Source Queue" HEADING 'Source|Queue' FORMAT A15
COLUMN DESTINATION_QUEUE_OWNER HEADING 'Dest|Queue|Owner'   FORMAT A10
COLUMN "Destination Queue" HEADING 'Destination|Queue' FORMAT A15

SELECT p.PROPAGATION_NAME,
 p.SOURCE_QUEUE_OWNER,
 p.SOURCE_QUEUE_NAME ||'@'||
 g.GLOBAL_NAME "Source Queue",
 p.DESTINATION_QUEUE_OWNER,
 p.DESTINATION_QUEUE_NAME ||'@'||
 p.DESTINATION_DBLINK "Destination Queue"
 FROM DBA_PROPAGATION p, GLOBAL_NAME g;

c)  Determining the Rule Sets for Each Propagation
The query in this section displays the positive and negative rule set owner and name for each propagation:

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A20
COLUMN RULE_SET_OWNER HEADING 'Positive|Rule Set|Owner' FORMAT A10
COLUMN RULE_SET_NAME HEADING 'Positive Rule|Set Name' FORMAT A15
COLUMN NEGATIVE_RULE_SET_OWNER HEADING 'Negative|Rule Set|Owner' FORMAT A10
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative Rule|Set Name' FORMAT A15

SELECT PROPAGATION_NAME,
 RULE_SET_OWNER,
 RULE_SET_NAME,
 NEGATIVE_RULE_SET_OWNER,
 NEGATIVE_RULE_SET_NAME
 FROM DBA_PROPAGATION;

d) Displaying the Schedule for a Propagation Job
The query in this section displays the propagation schedule (start date, duration, next time, latency, status, process name, and failure count) for the propagation job used by a propagation named dbs1_to_dbs2:

COLUMN START_DATE HEADING 'Start Date'
COLUMN PROPAGATION_WINDOW HEADING 'Duration|in Seconds' FORMAT 99999
COLUMN NEXT_TIME HEADING 'Next|Time' FORMAT A8
COLUMN LATENCY HEADING 'Latency|in Seconds' FORMAT 99999
COLUMN SCHEDULE_DISABLED HEADING 'Status' FORMAT A8
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A8
COLUMN FAILURES HEADING 'Number of|Failures' FORMAT 99

SELECT DISTINCT TO_CHAR(s.START_DATE, 'HH24:MI:SS MM/DD/YY') START_DATE,
 s.PROPAGATION_WINDOW,
 s.NEXT_TIME,
 s.LATENCY,
 DECODE(s.SCHEDULE_DISABLED,
 'Y', 'Disabled',
 'N', 'Enabled') SCHEDULE_DISABLED,
 s.PROCESS_NAME,
 s.FAILURES
 FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
 WHERE p.PROPAGATION_NAME = 'DBS1_TO_DBS2'
 AND p.DESTINATION_DBLINK = s.DESTINATION
 AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
 AND s.QNAME = p.SOURCE_QUEUE_NAME;

e)  Determining the Total Number of Messages and Bytes Propagated
All propagation jobs from a source queue that share the same database link have a single propagation schedule. The query in this section displays the total execution time, total number of messages, and total bytes propagated for each propagation:

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A20
COLUMN TOTAL_TIME HEADING 'Total Time|Executing|in Seconds' FORMAT 999999
COLUMN TOTAL_NUMBER HEADING 'Total Messages|Propagated' FORMAT 999999999
COLUMN TOTAL_BYTES HEADING 'Total Bytes|Propagated' FORMAT 9999999999999

SELECT p.PROPAGATION_NAME, s.TOTAL_TIME, s.TOTAL_NUMBER, s.TOTAL_BYTES
 FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
 WHERE p.DESTINATION_DBLINK = s.DESTINATION
 AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
 AND s.QNAME = p.SOURCE_QUEUE_NAME;

Streams Replication| Monitoring Oracle Streams Apply Processes at Destination Server

Posted by Sagar Patil

The following shell script will be handy when you are working with Streams replication. There are two versions, one for 10g (monitor_streams_applyprocess_10g) and another for 11g (monitor_streams_applyprocess_11g).

#!/bin/bash
## Monitoring Oracle Streams Apply Processes at Destination Server
export STREAMS_ADMIN_SCHEMA=STRMADMIN
export STREAMS_ADMIN_SCHEMA_PWD=STRMADMIN
export DEST_DB=streams1

echo "-------------------Menu---------------------------"
echo " **** This script was tested against RHEL/Oracle 10gR2. Please update the script variables with the right database values ***"
echo "To run option 16 you will need some plsql procedures in place, detailed at http://www.oracledbasupport.co.uk/streams-displaying-detailed-information-about-apply-errors/"
echo ""
flag=0
while [ $flag -ne 17 ]
do
echo "1.  Determining the Queue, Rule Sets, and Status for Each Apply Process"
echo "2.  Displaying General Information About Each Apply Process"
echo "3.  Listing the Parameter Settings for Each Apply Process"
echo "4.  Displaying Information About Apply Handlers"
echo "5.  Displaying Information About the Reader Server for Each Apply Process"
echo "6.  Monitoring Transactions and Messages Spilled by Each Apply Process"
echo "7.  Determining Capture to Dequeue Latency for a Message"
echo "8.  Displaying General Information About Each Coordinator Process"
echo "9.  Displaying Information About Transactions Received and Applied"
echo "10. Determining the Capture to Apply Latency for a Message for Each Apply Process"
echo "11. Displaying Information About the Apply Servers for Each Apply Process"
echo "12. Displaying Effective Apply Parallelism for an Apply Process"
echo "13. Viewing Rules that Specify a Destination Queue on Apply"
echo "14. Viewing Rules that Specify No Execution on Apply"
echo "15. Checking for Apply Errors"
echo "16. Displaying Detailed Information About Apply Errors"
echo "17. Exit"

echo "Enter the option:"

read flag

echo "*******************************************************"
if [ $flag -eq 1 ]
then
#### Determine the queue, rule sets, and status for each apply process:
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Apply|Process|Queue' FORMAT A15
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A15
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A15
COLUMN STATUS HEADING 'Apply|Process|Status' FORMAT A15

SELECT APPLY_NAME,
       QUEUE_NAME,
       RULE_SET_NAME,
       NEGATIVE_RULE_SET_NAME,
       STATUS
  FROM DBA_APPLY;
exit
EOF
fi

if [ $flag -eq 2 ]
then
#### To display this general information about each apply process in a database, run the following query:
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN APPLY_CAPTURED HEADING 'Type of Messages Applied' FORMAT A25
COLUMN APPLY_USER HEADING 'Apply User' FORMAT A30

SELECT APPLY_NAME,
       DECODE(APPLY_CAPTURED,
              'YES', 'Captured',
              'NO',  'User-Enqueued') APPLY_CAPTURED,
       APPLY_USER
  FROM DBA_APPLY;
exit
EOF
fi

if [ $flag -eq 3 ]
then
####   The following query displays the current setting for each apply process parameter for each apply process in a database:
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15
COLUMN PARAMETER HEADING 'Parameter' FORMAT A25
COLUMN VALUE HEADING 'Value' FORMAT A20
COLUMN SET_BY_USER HEADING 'Set by User?' FORMAT A15

SELECT APPLY_NAME,
       PARAMETER,
       VALUE,
       SET_BY_USER
  FROM DBA_APPLY_PARAMETERS;
exit
EOF
fi

if [ $flag -eq 4 ]
then
#### To display the error handler for each apply process that applies changes locally in a database, run the following query:
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN OBJECT_OWNER HEADING 'Table|Owner' FORMAT A5
COLUMN OBJECT_NAME HEADING 'Table Name' FORMAT A10
COLUMN OPERATION_NAME HEADING 'Operation' FORMAT A10
COLUMN USER_PROCEDURE HEADING 'Handler Procedure' FORMAT A30
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15

SELECT OBJECT_OWNER,
       OBJECT_NAME,
       OPERATION_NAME,
       USER_PROCEDURE,
       APPLY_NAME
  FROM DBA_APPLY_DML_HANDLERS
  WHERE ERROR_HANDLER = 'Y'
  ORDER BY OBJECT_OWNER, OBJECT_NAME;

-- To display each message handler in a database, run the following query:
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20;
COLUMN MESSAGE_HANDLER HEADING 'Message Handler' FORMAT A20;

SELECT APPLY_NAME, MESSAGE_HANDLER FROM DBA_APPLY
  WHERE MESSAGE_HANDLER IS NOT NULL;

-- To display each precommit handler in a database, run the following query:
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN PRECOMMIT_HANDLER HEADING 'Precommit Handler' FORMAT A30
COLUMN APPLY_CAPTURED HEADING 'Type of|Messages|Applied' FORMAT A15

SELECT APPLY_NAME,
       PRECOMMIT_HANDLER,
       DECODE(APPLY_CAPTURED,
              'YES', 'Captured',
              'NO',  'User-Enqueued') APPLY_CAPTURED
  FROM DBA_APPLY
  WHERE PRECOMMIT_HANDLER IS NOT NULL;

exit
EOF
fi

if [ $flag -eq 5 ]
then
#### The information displayed by this query is valid only for an enabled apply process.
#### Run the following query to display this information for each apply process:
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15
COLUMN APPLY_CAPTURED HEADING 'Dequeues Captured|Messages?' FORMAT A17
COLUMN PROCESS_NAME HEADING 'Process|Name' FORMAT A7
COLUMN STATE HEADING 'State' FORMAT A17
COLUMN TOTAL_MESSAGES_DEQUEUED HEADING 'Total Messages|Dequeued' FORMAT 99999999

SELECT r.APPLY_NAME,
       ap.APPLY_CAPTURED,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_MESSAGES_DEQUEUED
       FROM V\$STREAMS_APPLY_READER r, V\$SESSION s, DBA_APPLY ap
       WHERE r.SID = s.SID AND
             r.SERIAL# = s.SERIAL# AND
             r.APPLY_NAME = ap.APPLY_NAME;

exit
EOF
fi

if [ $flag -eq 6 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
-- select "Display information for each apply process in a database" from dual;
COLUMN APPLY_NAME HEADING 'Apply Name' FORMAT A20
COLUMN "Transaction ID" HEADING 'Transaction ID' FORMAT A15
COLUMN FIRST_SCN HEADING 'First SCN'   FORMAT 99999999
COLUMN MESSAGE_COUNT HEADING 'Message Count' FORMAT 99999999

SELECT APPLY_NAME,
       XIDUSN ||'.'||
       XIDSLT ||'.'||
       XIDSQN "Transaction ID",
       FIRST_SCN,
       MESSAGE_COUNT
  FROM DBA_APPLY_SPILL_TXN;

COLUMN APPLY_NAME HEADING 'Apply Name' FORMAT A15
COLUMN TOTAL_MESSAGES_SPILLED HEADING 'Total|Spilled Messages' FORMAT 99999999
COLUMN ELAPSED_SPILL_TIME HEADING 'Elapsed Time|Spilling Messages' FORMAT 99999999.99

SELECT APPLY_NAME,
       TOTAL_MESSAGES_SPILLED,
       (ELAPSED_SPILL_TIME/100) ELAPSED_SPILL_TIME
  FROM V\$STREAMS_APPLY_READER;
exit
EOF
fi

if [ $flag -eq 7 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN LATENCY HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN CREATION HEADING 'Message Creation' FORMAT A17
COLUMN LAST_DEQUEUE HEADING 'Last Dequeue Time' FORMAT A20
COLUMN DEQUEUED_MESSAGE_NUMBER HEADING 'Dequeued|Message Number' FORMAT 9999999999

SELECT APPLY_NAME,
     (DEQUEUE_TIME-DEQUEUED_MESSAGE_CREATE_TIME)*86400 LATENCY,
     TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') CREATION,
     TO_CHAR(DEQUEUE_TIME,'HH24:MI:SS MM/DD/YY') LAST_DEQUEUE,
     DEQUEUED_MESSAGE_NUMBER
  FROM V\$STREAMS_APPLY_READER;
exit
EOF
fi

if [ $flag -eq 8 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN PROCESS_NAME HEADING 'Coordinator|Process|Name' FORMAT A11
COLUMN SID HEADING 'Session|ID' FORMAT 9999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 9999
COLUMN STATE HEADING 'State' FORMAT A21

SELECT c.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       c.SID,
       c.SERIAL#,
       c.STATE
       FROM V\$STREAMS_APPLY_COORDINATOR c, V\$SESSION s
       WHERE c.SID = s.SID AND
             c.SERIAL# = s.SERIAL#;
exit
EOF
fi

if [ $flag -eq 9 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN TOTAL_RECEIVED HEADING 'Total|Trans|Received' FORMAT 99999999
COLUMN TOTAL_APPLIED HEADING 'Total|Trans|Applied' FORMAT 99999999
COLUMN TOTAL_ERRORS HEADING 'Total|Apply|Errors' FORMAT 9999
COLUMN BEING_APPLIED HEADING 'Total|Trans Being|Applied' FORMAT 99999999
COLUMN UNASSIGNED_COMPLETE_TXNS HEADING 'Total|Unassigned|Trans' FORMAT 99999999
COLUMN TOTAL_IGNORED HEADING 'Total|Trans|Ignored' FORMAT 99999999

SELECT APPLY_NAME,
       TOTAL_RECEIVED,
       TOTAL_APPLIED,
       TOTAL_ERRORS,
       (TOTAL_ASSIGNED - (TOTAL_ROLLBACKS + TOTAL_APPLIED)) BEING_APPLIED,
       TOTAL_IGNORED
       FROM V\$STREAMS_APPLY_COORDINATOR;
exit
EOF
fi

if [ $flag -eq 10 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN "Latency in Seconds" FORMAT 999999
COLUMN "Message Creation" FORMAT A17
COLUMN "Apply Time" FORMAT A17
COLUMN HWM_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 9999999999

SELECT APPLY_NAME,
     (HWM_TIME-HWM_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
     TO_CHAR(HWM_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY')
        "Message Creation",
     TO_CHAR(HWM_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
     HWM_MESSAGE_NUMBER
  FROM V\$STREAMS_APPLY_COORDINATOR;

-- select "Following query will display capture to apply latency using DBA_APPLY_PROGRESS view for a captured LCR for each apply process" from dual;
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN "Latency in Seconds" FORMAT 999999
COLUMN "Message Creation" FORMAT A17
COLUMN "Apply Time" FORMAT A17
COLUMN APPLIED_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 9999999999

SELECT APPLY_NAME,
     (APPLY_TIME-APPLIED_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
     TO_CHAR(APPLIED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY')
        "Message Creation",
     TO_CHAR(APPLY_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
     APPLIED_MESSAGE_NUMBER
  FROM DBA_APPLY_PROGRESS;

exit
EOF
fi

if [ $flag -eq 11 ]
then
#### Run the following query to display information about the apply servers for each apply process:
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A22
COLUMN PROCESS_NAME HEADING 'Process Name' FORMAT A12
COLUMN STATE HEADING 'State' FORMAT A17
COLUMN TOTAL_ASSIGNED HEADING 'Total|Transactions|Assigned' FORMAT 99999999
COLUMN TOTAL_MESSAGES_APPLIED HEADING 'Total|Messages|Applied' FORMAT 99999999

SELECT r.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_ASSIGNED,
       r.TOTAL_MESSAGES_APPLIED
  FROM V\$STREAMS_APPLY_SERVER R, V\$SESSION S
  WHERE r.SID = s.SID AND
        r.SERIAL# = s.SERIAL#
  ORDER BY r.APPLY_NAME, r.SERVER_ID;
exit
EOF
fi

if [ $flag -eq 12 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
-- select "The following query displays the effective apply parallelism for an apply process named apply:" from dual;

SELECT COUNT(SERVER_ID) "Effective Parallelism"
  FROM V\$STREAMS_APPLY_SERVER
  WHERE APPLY_NAME = 'APPLY' AND
        TOTAL_MESSAGES_APPLIED > 0;

-- select "You can display the total number of messages applied by each apply server by running the following query" from dual;
COLUMN SERVER_ID HEADING 'Apply Server ID' FORMAT 99
COLUMN TOTAL_MESSAGES_APPLIED HEADING 'Total Messages Applied' FORMAT 999999
SELECT SERVER_ID, TOTAL_MESSAGES_APPLIED
  FROM V\$STREAMS_APPLY_SERVER
  WHERE APPLY_NAME = 'APPLY'
  ORDER BY SERVER_ID;
exit
EOF
fi

if [ $flag -eq 13 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
-- select "To view destination queue settings for rules, run the following query:" from dual;
COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN DESTINATION_QUEUE_NAME HEADING 'Destination Queue' FORMAT A30
SELECT RULE_OWNER, RULE_NAME, DESTINATION_QUEUE_NAME
  FROM DBA_APPLY_ENQUEUE;
exit
EOF
fi

if [ $flag -eq 14 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
-- select "To view each rule with NO for its execution directive, run the following query:" from dual;
COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A20
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
SELECT RULE_OWNER, RULE_NAME
  FROM DBA_APPLY_EXECUTE
  WHERE EXECUTE_EVENT = 'NO';
exit
EOF
fi

## Checking for Apply Errors
if [ $flag -eq 15 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
-- select "To check for apply errors, run the following query:" from dual;
COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A10
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN LOCAL_TRANSACTION_ID HEADING 'Local|Transaction|ID' FORMAT A11
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A20
COLUMN MESSAGE_COUNT HEADING 'Messages in|Error|Transaction' FORMAT 99999999

SELECT APPLY_NAME,
       SOURCE_DATABASE,
       LOCAL_TRANSACTION_ID,
       ERROR_NUMBER,
       ERROR_MESSAGE,
       MESSAGE_COUNT
  FROM DBA_APPLY_ERROR;

exit
EOF
fi
done
echo "exit"

Monitoring Streams Replication| How to Display Detailed Information About Apply Errors?

Posted by Sagar Patil

SQL scripts below can display detailed information about error transactions in the error queue in a database.
These scripts are designed to display information about LCRs, but you can extend them to display information about any non-LCR messages used.

To use these scripts, complete the following steps:

Step 1) Grant Explicit SELECT Privilege on the DBA_APPLY_ERROR View
Step 2) Create a Procedure that Prints the Value in an ANYDATA Object
Step 3) Create a Procedure that Prints a Specified LCR
Step 4) Create a Procedure that Prints All the LCRs in the Error Queue
Step 5) Create a Procedure that Prints All the Error LCRs for a Transaction

Step 1   Grant Explicit SELECT Privilege on the DBA_APPLY_ERROR View

The user who creates and runs the print_errors and print_transaction procedures described in the following sections must be granted explicit SELECT privilege on the DBA_APPLY_ERROR data dictionary view. This privilege cannot be granted through a role. Running the GRANT_ADMIN_PRIVILEGE procedure in the DBMS_STREAMS_AUTH package on a user grants this privilege to the user. To grant this privilege to a user directly, run the following statements:

GRANT SELECT ON DBA_APPLY_ERROR TO strmadmin;
GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
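Alternatively, a minimal sketch using the package route mentioned above (strmadmin being the example Streams administrator):

BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'strmadmin');
END;
/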

Step 2   Create a Procedure that Prints the Value in an ANYDATA Object
The following procedure prints the value in a specified ANYDATA object for some selected data types. Optionally, you can add more data types to this procedure.

CREATE OR REPLACE PROCEDURE print_any(data IN ANYDATA) IS
  tn  VARCHAR2(61);
  str VARCHAR2(4000);
  chr VARCHAR2(1000);
  num NUMBER;
  dat DATE;
  rw  RAW(4000);
  res NUMBER;
BEGIN
  IF data IS NULL THEN
    DBMS_OUTPUT.PUT_LINE('NULL value');
    RETURN;
  END IF;
  tn := data.GETTYPENAME();
  IF tn = 'SYS.VARCHAR2' THEN
    res := data.GETVARCHAR2(str);
    DBMS_OUTPUT.PUT_LINE(SUBSTR(str,0,253));
  ELSIF tn = 'SYS.CHAR' THEN
    res := data.GETCHAR(chr);
    DBMS_OUTPUT.PUT_LINE(SUBSTR(chr,0,253));
  ELSIF tn = 'SYS.VARCHAR' THEN
    res := data.GETVARCHAR(chr);
    DBMS_OUTPUT.PUT_LINE(chr);
  ELSIF tn = 'SYS.NUMBER' THEN
    res := data.GETNUMBER(num);
    DBMS_OUTPUT.PUT_LINE(num);
  ELSIF tn = 'SYS.DATE' THEN
    res := data.GETDATE(dat);
    DBMS_OUTPUT.PUT_LINE(dat);
  ELSIF tn = 'SYS.RAW' THEN
    -- res := data.GETRAW(rw);
    -- DBMS_OUTPUT.PUT_LINE(SUBSTR(DBMS_LOB.SUBSTR(rw),0,253));
    DBMS_OUTPUT.PUT_LINE('BLOB Value');
  ELSIF tn = 'SYS.BLOB' THEN
    DBMS_OUTPUT.PUT_LINE('BLOB Found');
  ELSE
    DBMS_OUTPUT.PUT_LINE('typename is ' || tn);
  END IF;
END print_any;
/

Step 3   Create a Procedure that Prints a Specified LCR
The following procedure prints a specified LCR. It calls the print_any procedure created in “Create a Procedure that Prints the Value in an ANYDATA Object”.

CREATE OR REPLACE PROCEDURE print_lcr(lcr IN ANYDATA) IS
  typenm    VARCHAR2(61);
  ddllcr    SYS.LCR$_DDL_RECORD;
  proclcr   SYS.LCR$_PROCEDURE_RECORD;
  rowlcr    SYS.LCR$_ROW_RECORD;
  res       NUMBER;
  newlist   SYS.LCR$_ROW_LIST;
  oldlist   SYS.LCR$_ROW_LIST;
  ddl_text  CLOB;
  ext_attr  ANYDATA;
BEGIN
  typenm := lcr.GETTYPENAME();
  DBMS_OUTPUT.PUT_LINE('type name: ' || typenm);
  IF (typenm = 'SYS.LCR$_DDL_RECORD') THEN
    res := lcr.GETOBJECT(ddllcr);
    DBMS_OUTPUT.PUT_LINE('source database: ' ||
                         ddllcr.GET_SOURCE_DATABASE_NAME);
    DBMS_OUTPUT.PUT_LINE('owner: ' || ddllcr.GET_OBJECT_OWNER);
    DBMS_OUTPUT.PUT_LINE('object: ' || ddllcr.GET_OBJECT_NAME);
    DBMS_OUTPUT.PUT_LINE('is tag null: ' || ddllcr.IS_NULL_TAG);
    DBMS_LOB.CREATETEMPORARY(ddl_text, TRUE);
    ddllcr.GET_DDL_TEXT(ddl_text);
    DBMS_OUTPUT.PUT_LINE('ddl: ' || ddl_text);
    -- Print extra attributes in DDL LCR
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('serial#');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('serial#: ' || ext_attr.ACCESSNUMBER());
    END IF;
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('session#');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('session#: ' || ext_attr.ACCESSNUMBER());
    END IF;
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('thread#');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('thread#: ' || ext_attr.ACCESSNUMBER());
    END IF;
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('tx_name');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('transaction name: ' || ext_attr.ACCESSVARCHAR2());
    END IF;
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('username');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('username: ' || ext_attr.ACCESSVARCHAR2());
    END IF;
    DBMS_LOB.FREETEMPORARY(ddl_text);
  ELSIF (typenm = 'SYS.LCR$_ROW_RECORD') THEN
    res := lcr.GETOBJECT(rowlcr);
    DBMS_OUTPUT.PUT_LINE('source database: ' ||
                         rowlcr.GET_SOURCE_DATABASE_NAME);
    DBMS_OUTPUT.PUT_LINE('owner: ' || rowlcr.GET_OBJECT_OWNER);
    DBMS_OUTPUT.PUT_LINE('object: ' || rowlcr.GET_OBJECT_NAME);
    DBMS_OUTPUT.PUT_LINE('is tag null: ' || rowlcr.IS_NULL_TAG);
    DBMS_OUTPUT.PUT_LINE('command_type: ' || rowlcr.GET_COMMAND_TYPE);
    oldlist := rowlcr.GET_VALUES('old');
    FOR i IN 1..oldlist.COUNT LOOP
      IF oldlist(i) IS NOT NULL THEN
        DBMS_OUTPUT.PUT_LINE('old(' || i || '): ' || oldlist(i).column_name);
        print_any(oldlist(i).data);
      END IF;
    END LOOP;
    newlist := rowlcr.GET_VALUES('new', 'n');
    FOR i IN 1..newlist.COUNT LOOP
      IF newlist(i) IS NOT NULL THEN
        DBMS_OUTPUT.PUT_LINE('new(' || i || '): ' || newlist(i).column_name);
        print_any(newlist(i).data);
      END IF;
    END LOOP;
    -- Print extra attributes in row LCR
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('row_id');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('row_id: ' || ext_attr.ACCESSUROWID());
    END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('serial#');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('serial#: ' || ext_attr.ACCESSNUMBER());
    END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('session#');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('session#: ' || ext_attr.ACCESSNUMBER());
    END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('thread#');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('thread#: ' || ext_attr.ACCESSNUMBER());
    END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('tx_name');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('transaction name: ' || ext_attr.ACCESSVARCHAR2());
    END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('username');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('username: ' || ext_attr.ACCESSVARCHAR2());
    END IF;
  ELSE
    DBMS_OUTPUT.PUT_LINE('Non-LCR Message with type ' || typenm);
  END IF;
END print_lcr;
/
/

Step 4   Create a Procedure that Prints All the LCRs in the Error Queue

The following procedure prints all of the LCRs in all of the error queues. It calls the print_lcr procedure created in ” Step3) Create a Procedure that Prints a Specified LCR”.

CREATE OR REPLACE PROCEDURE print_errors IS
  CURSOR c IS
    SELECT LOCAL_TRANSACTION_ID,
           SOURCE_DATABASE,
           MESSAGE_NUMBER,
           MESSAGE_COUNT,
           ERROR_NUMBER,
           ERROR_MESSAGE
      FROM DBA_APPLY_ERROR
     ORDER BY SOURCE_DATABASE, SOURCE_COMMIT_SCN;
  i      NUMBER;
  txnid  VARCHAR2(30);
  source VARCHAR2(128);
  msgno  NUMBER;
  msgcnt NUMBER;
  errnum NUMBER := 0;
  errno  NUMBER;
  errmsg VARCHAR2(2000);
  lcr    ANYDATA;
  r      NUMBER;
BEGIN
  FOR r IN c LOOP
    errnum := errnum + 1;
    msgcnt := r.MESSAGE_COUNT;
    txnid  := r.LOCAL_TRANSACTION_ID;
    source := r.SOURCE_DATABASE;
    msgno  := r.MESSAGE_NUMBER;
    errno  := r.ERROR_NUMBER;
    errmsg := r.ERROR_MESSAGE;
    DBMS_OUTPUT.PUT_LINE('*************************************************');
    DBMS_OUTPUT.PUT_LINE('----- ERROR #' || errnum);
    DBMS_OUTPUT.PUT_LINE('----- Local Transaction ID: ' || txnid);
    DBMS_OUTPUT.PUT_LINE('----- Source Database: ' || source);
    DBMS_OUTPUT.PUT_LINE('----Error in Message: '|| msgno);
    DBMS_OUTPUT.PUT_LINE('----Error Number: '||errno);
    DBMS_OUTPUT.PUT_LINE('----Message Text: '||errmsg);
    FOR i IN 1..msgcnt LOOP
      DBMS_OUTPUT.PUT_LINE('--message: ' || i);
      lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(i, txnid);
      print_lcr(lcr);
    END LOOP;
  END LOOP;
END print_errors;
/

To run this procedure after you create it, enter the following:

SET SERVEROUTPUT ON SIZE 1000000
EXEC print_errors

Step 5   Create a Procedure that Prints All the Error LCRs for a Transaction
The following procedure prints all the LCRs in the error queue for a particular transaction. It calls the print_lcr procedure created in “Create a Procedure that Prints a Specified LCR”.

CREATE OR REPLACE PROCEDURE print_transaction(ltxnid IN VARCHAR2) IS
  i      NUMBER;
  txnid  VARCHAR2(30);
  source VARCHAR2(128);
  msgno  NUMBER;
  msgcnt NUMBER;
  errno  NUMBER;
  errmsg VARCHAR2(2000);
  lcr    ANYDATA;
BEGIN
  SELECT LOCAL_TRANSACTION_ID,
         SOURCE_DATABASE,
         MESSAGE_NUMBER,
         MESSAGE_COUNT,
         ERROR_NUMBER,
         ERROR_MESSAGE
    INTO txnid, source, msgno, msgcnt, errno, errmsg
    FROM DBA_APPLY_ERROR
   WHERE LOCAL_TRANSACTION_ID = ltxnid;
  DBMS_OUTPUT.PUT_LINE('----- Local Transaction ID: ' || txnid);
  DBMS_OUTPUT.PUT_LINE('----- Source Database: ' || source);
  DBMS_OUTPUT.PUT_LINE('----Error in Message: '|| msgno);
  DBMS_OUTPUT.PUT_LINE('----Error Number: '||errno);
  DBMS_OUTPUT.PUT_LINE('----Message Text: '||errmsg);
  FOR i IN 1..msgcnt LOOP
    DBMS_OUTPUT.PUT_LINE('--message: ' || i);
    lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(i, txnid); -- gets the LCR
    print_lcr(lcr);
  END LOOP;
END print_transaction;
/

To run this procedure after you create it, pass to it the local transaction identifier of an error transaction. For example, if the local transaction identifier is 1.17.2485, then enter the following:

SET SERVEROUTPUT ON SIZE 1000000
EXEC print_transaction('1.17.2485')

How do you locate the local transaction identifier of an error transaction? To check for apply errors, run the following query:

COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A10
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN LOCAL_TRANSACTION_ID HEADING 'Local|Transaction|ID' FORMAT A11
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A20
COLUMN MESSAGE_COUNT HEADING 'Messages in|Error|Transaction' FORMAT 99999999

SELECT APPLY_NAME,
       SOURCE_DATABASE,
       LOCAL_TRANSACTION_ID,
       ERROR_NUMBER,
       ERROR_MESSAGE,
       MESSAGE_COUNT
  FROM DBA_APPLY_ERROR;

If there are any apply errors, then your output looks similar to the following:

Apply                 Local                                         Messages in
Process    Source     Transaction                                         Error
Name       Database   ID          Error Number Error Message        Transaction
---------- ---------- ----------- ------------ -------------------- -----------
APPLY_FROM MULT3.EXAM 1.62.948            1403 ORA-01403: no data             1
_MULT3     PLE.COM                             found
APPLY_FROM MULT2.EXAM 1.54.948            1403 ORA-01403: no data             1
_MULT2     PLE.COM                             found

Linux – List file count by Day

Posted by Sagar Patil
ls -lt | grep "^-" | awk '{
# ls shows a time instead of a year in field 8 for recent files;
# substitute the current year (hard-coded here as 2011)
if($8 ~ /:/)
   $8=2011
# build a "Month Day, Year" key and count files per key
Date_count=$6" "$7", "$8
freq[Date_count]++}
END {for (date in freq) printf "%s\t%d\n", date, freq[date] }' | sort

I often use this script when the Oracle archive volume fills up and I want to see how many archive logs were generated each day.

OUTPUT

Oct 1, 2011     3
Oct 2, 2011     3
Oct 3, 2011     4
Oct 4, 2011     24
Oct 5, 2011     48
Oct 6, 2011     101
Oct 7, 2011     39
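When the database is still open, a SQL alternative gives the same per-day counts straight from the controlfile; a sketch against V$ARCHIVED_LOG:

SELECT TRUNC(completion_time) archive_date,
       COUNT(*) archive_count
  FROM V$ARCHIVED_LOG
 GROUP BY TRUNC(completion_time)
 ORDER BY 1;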

Oracle Streams Introduction

Posted by Sagar Patil

Streams consists of the following processes:

  1. Capture
  2. Instantiation
  3. Propagate
  4. Apply

1) Capture process A Capture process can capture a majority of database transactions. The Capture process specifically captures DML and DDL. The Capture process uses both LogMiner and Advanced Queuing to accomplish its task. (Note: Synchronous Capture uses internal triggers instead of LogMiner.)

2) Instantiation process: composed of three steps

  • Creating the object(s) at the destination Site
  • Updating the Streams data dictionary with metadata
  • Setting the Instantiation SCN for the object(s)

Once instantiation is complete, the instantiation SCN will be the same at both the source and destination site(s), indicating to Streams that it is from this SCN forward that changes should be captured, propagated, and applied for the destination.
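As a sketch of the third step, the instantiation SCN for a single table can be recorded at the destination using DBMS_APPLY_ADM, run from the source database (hr.employees, src.example.com, and the dest_db database link are placeholder names):

DECLARE
  iscn NUMBER;  -- current SCN at the source
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@dest_db(
    source_object_name   => 'hr.employees',
    source_database_name => 'src.example.com',
    instantiation_scn    => iscn);
END;
/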

Instantiation Levels include

  • Table Level
  • Schema Level
  • Database (Global) Level
  • Tablespace (this requires special steps)

Instantiation Methods

  • Data Pump
  • Transportable Tablespaces
  • RMAN for entire database
  • Manual method

3) Propagate process The Propagate process does the actual propagation between the source and target queues.  Propagation has two configuration options, queue-to-queue or queue-to-dblink. In the queue-to-queue configuration, each propagation has its own propagation job; queue-to-queue propagation is recommended for Streams in a RAC environment. The default configuration of queue-to-dblink has one shared propagation job. A minimal sketch follows.
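A minimal queue-to-queue propagation sketch (src_to_dest, the strmadmin queues, and the dest.example.com link are placeholder names):

BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'src_to_dest',
    source_queue       => 'strmadmin.src_q',
    destination_queue  => 'strmadmin.dest_q',
    destination_dblink => 'dest.example.com',
    queue_to_queue     => TRUE);  -- each propagation gets its own job
END;
/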

4) Apply process The Apply process is made up of multiple parts, as follows:

  • Reader server: Takes the LCRs and converts them into transactions, preserving transactional order and dependencies.
  • Coordinator process: Takes the transactions from reader server and sends them to Apply server. This process also monitors the Apply server to ensure that the transactions are applied in the correct order.
  • Apply server: Applies the LCR or message to the handler, either an Apply handler or message handler. Apply server also deals with placing the LCR or message into the appropriate error queue if it cannot be applied.

Oracle Streams Restrictions

  1. Capture Process Restrictions
  2. Apply Process Restrictions

Simply query the DBA_STREAMS_UNSUPPORTED view to find out why a particular table cannot be streamed.
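A minimal sketch (SCOTT is a placeholder schema):

COLUMN OWNER FORMAT A15
COLUMN TABLE_NAME FORMAT A30
COLUMN REASON FORMAT A40

SELECT OWNER, TABLE_NAME, REASON
  FROM DBA_STREAMS_UNSUPPORTED
 WHERE OWNER = 'SCOTT'
 ORDER BY TABLE_NAME;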

1) Capture Process Restrictions

Unsupported Data Types for Capture Processes : A capture process does not capture the results of DML changes to columns of the following data types:

  • SecureFile CLOB, NCLOB, and BLOB
  • BFILE
  • ROWID
  • User-defined types (including object types, REFs, varrays, and nested tables)
  • XMLType stored object relationally or as binary XML
  • Oracle-supplied types: Any types, URI types, spatial types, and media types

Unsupported Changes for Capture Processes: A capture process never captures changes made to the following schemas:

CTXSYS,DBSNMP,DMSYS,DVSYS,EXFSYS,LBACSYS,MDDATA,MDSYS,OLAPSYS,ORDPLUGINS,ORDSYS,OUTLN,SI_INFORMTN_SCHEMA,SYS,SYSMAN,SYSTEM,WMSYS,XDB

Unsupported DML Changes for Capture Processes A capture process cannot capture changes made to an index-organized table if the index-organized table contains any columns of the following data types:

  • ROWID
  • User-defined types (including object types, REFs, varrays, and nested tables)
  • XMLType stored object relationally or as binary XML (XMLType stored as CLOB is supported.)
  • The following Oracle-supplied types: Any types, URI types, spatial types, and media types
  • A capture process cannot capture DML changes made to temporary tables, object tables, or tables stored with segment compression enabled. A capture process raises an error if it attempts to capture such changes.
  • Also, if you share a sequence at multiple databases, then sequence values used for individual rows at these databases might vary. Also, changes to actual sequence values are not captured. For example, if a user references a NEXTVAL or sets the sequence, then a capture process does not capture changes resulting from these operations.

Unsupported DDL Changes for Capture Processes A capture process captures the DDL changes that satisfy its rule sets, except for the following types of DDL changes:

  • ALTER DATABASE
  • CREATE CONTROLFILE
  • CREATE DATABASE
  • CREATE PFILE
  • CREATE SPFILE

Some types of DDL changes that are captured by a capture process cannot be applied by an apply process. If an apply process receives a DDL LCR that specifies an operation that cannot be applied, then the apply process ignores the DDL LCR and records information about it in the trace file for the apply process.

Changes Ignored by a Capture Process : A capture process ignores the following types of changes:

  • The session control statements ALTER SESSION and SET ROLE.
  • The system control statement ALTER SYSTEM.
  • CALL, EXPLAIN PLAN, and LOCK TABLE statements.
  • GRANT statements on views.
  • Changes made to a table or schema by online redefinition using the DBMS_REDEFINITION package. Online table redefinition is supported on a table for which a capture process captures changes, but the logical structure of the table before online redefinition must be the same as the logical structure after online redefinition.
  • Invocations of PL/SQL procedures, which means that a call to a PL/SQL procedure is not captured. However, if a call to a PL/SQL procedure causes changes to database objects, then these changes can be captured by a capture process if the changes satisfy the capture process rule sets.

NOLOGGING and UNRECOVERABLE Keywords for SQL Operations If you use the NOLOGGING or UNRECOVERABLE keyword for a SQL operation, then the changes resulting from the SQL operation cannot be captured by a capture process.
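One common safeguard, if this is a concern, is to force redo logging database-wide so that NOLOGGING requests are ignored:

-- prevents NOLOGGING/UNRECOVERABLE operations from skipping redo generation
ALTER DATABASE FORCE LOGGING;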

Supplemental Logging Data Type Restrictions LOB, LONG, LONG RAW, user-defined type, and Oracle-supplied type columns cannot be part of a supplemental log group.

2.  Apply Process Restrictions

Unsupported Data Types for Apply Processes

An apply process does not apply row LCRs containing the results of DML changes in columns of the following data types:

  • SecureFile CLOB, NCLOB, and BLOB
  • BFILE
  • ROWID
  • User-defined types (including object types, REFs, varrays, and nested tables)
  • Oracle-supplied types: Any types, URI types, spatial types, and media types

The following types of DDL changes are not supported by an apply process. These types of DDL changes are not applied:

  • ALTER MATERIALIZED VIEW
  • ALTER MATERIALIZED VIEW LOG
  • CREATE DATABASE LINK
  • CREATE SCHEMA AUTHORIZATION
  • CREATE MATERIALIZED VIEW
  • CREATE MATERIALIZED VIEW LOG
  • DROP DATABASE LINK
  • DROP MATERIALIZED VIEW
  • DROP MATERIALIZED VIEW LOG
  • FLASHBACK DATABASE
  • RENAME

If an apply process receives a DDL LCR that specifies an operation that cannot be applied, then the apply process ignores the DDL LCR and records the following message in the apply process trace file, followed by the DDL text that was ignored:
Apply process ignored the following DDL:

See also: Oracle Support Master Note for Streams Setup Scripts (Doc ID 789445.1)

What is new in Oracle 11g Streams?

Oracle Database 11g enhanced Streams replication to provide a 30 to 50 percent performance improvement for SQL Apply, and to support XMLType (CLOBs) and transparent data encryption.
New manageability improvements include:

  1. Streams performance advisor
  2. Topology views
  3. Automatic Workload Repository (AWR) and ADDM support
  4. Data comparison utility and synchronous capture

New Advanced Queuing (AQ) features include:

  1. Java Message Service (JMS) performance improvements
  2. Direct Streams AQ support in Java Database Connectivity (JDBC)
  3. Scalable event notification

  • Streams in 11g has much better monitoring and tracking than previous versions:

o      LCR tracking
o      Expanded Streams-specific error messages
o      Oracle Streams Performance Advisor
o      Better documentation and examples

  • Distinguishes between implicit and explicit consumption (direct bearing on my work)
  • Streams 11g switches from DBMS_JOB to DBMS_SCHEDULER for propagation

How does Streams work in a RAC environment?

When you create the queue in an instance of the database, the instance assumes the ownership of the associated queue table. The capture and apply processes run in the instance that owns the queue table associated with their respective queues.  In RAC environments, it is possible to specify the ownership of the queue table. You can specify the primary instance and secondary instance for a given queue table.

When the primary instance becomes unavailable, the secondary instance assumes the ownership of the queue, and the Streams processes are restarted on the secondary instance automatically. When the primary instance becomes available, the queue ownership and the Streams processes switch back to the primary instance.

You can use the ALTER_QUEUE_TABLE procedure in the DBMS_AQADM package to define the primary and secondary instances for a queue table, for example:

SQL> begin
  dbms_aqadm.alter_queue_table (
    queue_table        => 'CAP_Q_T',
    primary_instance   => 2,
    secondary_instance => 3);
end;
/

The view DBA_QUEUE_TABLES shows information about the queue table ownership and its current owner instance:

SQL> select queue_table,
  owner_instance,
  primary_instance,
  secondary_instance
  from dba_queue_tables
  where owner = 'STRMADMIN';

Finding Backup details from 11g RMAN repository tables

Posted by Sagar Patil
  • List databases registered with rman repository

select NAME
     , DBID
  from RC_DATABASE;

  • List RMAN backup status details with database name, start time, end time, and processed MB/GB

SELECT db_name, row_type, start_time, end_time, operation, status, mbytes_processed
    FROM   rc_rman_status
    WHERE  db_name LIKE 'PROD%'
      AND  start_time > SYSDATE - 1
    ORDER  BY end_time DESC;

  • List backup details of all registered RMAN databases 

SELECT  db_name
      , start_time
      , row_type
      , end_time
      , operation
      , status
      , mbytes_processed
   FROM rc_rman_status
  WHERE start_time > SYSDATE - 1
  ORDER BY db_name
  , start_time;

  • Show backups which are still RUNNING

SELECT  db_name
      , start_time
      , row_type
      , operation
      , status
  FROM rc_rman_status
  WHERE start_time > SYSDATE - 1
  AND STATUS = 'RUNNING'
  ORDER BY db_name
  , start_time;

  • List db name, tablespace & datafiles backed up with size

select DB_NAME
     , TABLESPACE_NAME
     , FILE#
     , BYTES/1000000000 SIZE_GB
     , NAME
  from RC_DATAFILE
 where DB_NAME = 'PROD';

  • List RMAN backup configuration Parameters

select DB_UNIQUE_NAME
     , NAME
     , VALUE
  from RC_RMAN_CONFIGURATION
 where DB_UNIQUE_NAME = 'PROD'
 order by 1,2,3;

ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE      DISK TO 1
ARCHIVELOG DELETION POLICY                    TO NONE
BACKUP OPTIMIZATION                           OFF
CHANNEL                                       DEVICE TYPE DISK FORMAT   '/mnt/backup/Rman/PROD1/%d_%s_%p_%U'
CONTROLFILE AUTOBACKUP                        ON
CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/mnt/backup/Rman/PROD1/%F'
DATAFILE BACKUP COPIES FOR DEVICE TYPE        DISK TO 1
DEVICE TYPE                                   DISK PARALLELISM 3 BACKUP TYPE TO COMPRESSED BACKUPSET
MAXSETSIZE TO                                 UNLIMITED
SNAPSHOT CONTROLFILE NAME                     TO '/opt/app/oracle/product/11.2/db_1/dbs/snapcf_RMANTST.f'  

  • List the most recent FULL Database backup with STATUS='COMPLETED'

select a.db_name Database
     , db.dbid DBID
     , a.end_time Latest_Backup
     , round ( a.output_bytes / 1000000000 ) GBytes_Processed
     , round ( ( end_time - start_time ) * 60 * 24 ) Minutes
  from rc_rman_status a
     , rc_database db
 where object_type in ( 'DB FULL', 'DB INCR' )
   and status = 'COMPLETED'
   and operation = 'BACKUP'
   and a.db_name = '&&DB_NAME'
   and end_time in ( select end_time
                       from rc_rman_status b
                      where b.db_name = a.db_name
                        and b.db_key = a.db_key
                        and object_type in ( 'DB FULL', 'DB INCR' )
                        and status = 'COMPLETED'
                        and operation = 'BACKUP' )
   and db.db_key = a.db_key
 order by a.db_name, end_time desc;

  • List Backups for a Specific Database Instance '&&DB_NAME'

select a.db_name Database
     , db.dbid DBID
     , a.end_time Latest_Backup
     , round ( a.output_bytes / 1000000000 ) GBytes_Processed
     , round ( ( end_time - start_time ) * 60 * 24 ) Minutes_Taken
  from rman.rc_rman_status a
     , rman.rc_database db
 where object_type in ( 'DB FULL', 'DB INCR' )
   and status = 'COMPLETED'
   and operation = 'BACKUP'
   and a.db_name = '&&DB_NAME'
   and end_time in ( select end_time
                       from rman.rc_rman_status b
                      where b.db_name = a.db_name
                        and b.db_key = a.db_key
                        and object_type in ( 'DB FULL', 'DB INCR' )
                        and status = 'COMPLETED'
                        and operation = 'BACKUP' )
   and db.db_key = a.db_key
 order by a.db_name, end_time desc;

DATABASE       DBID LATEST_BA GBYTES_PROCESSED MINUTES_TAKEN
-------- ---------- --------- ---------------- -------------
PROD   4020163152 22-SEP-11                0            20
PROD   4020163152 21-SEP-11                0            19
PROD   4020163152 20-SEP-11                0            17
PROD   4020163152 19-SEP-11                0            16
PROD   4020163152 18-SEP-11                4            13
PROD   4020163152 17-SEP-11                0            15

  • List Archivelog backups

SELECT a.db_name "Database"
     , db.dbid "DBID"
     , a.end_time "Latest Backup"
     , ROUND ( a.output_bytes / 1000000 ) "MBytes Processed"
     , ROUND ( ( end_time - start_time ) * 60 * 24 ) "Minutes Taken"
  FROM rman.rc_rman_status a
     , rman.rc_database db
 WHERE object_type = 'ARCHIVELOG'
   AND status = 'COMPLETED'
   AND operation = 'BACKUP'
   AND a.db_name = '&&DB_NAME'
   AND end_time IN ( SELECT end_time
                       FROM rman.rc_rman_status b
                      WHERE b.db_name = a.db_name
                        AND b.db_key = a.db_key
                        AND object_type = 'ARCHIVELOG'
                        AND status = 'COMPLETED'
                        AND operation = 'BACKUP' )
   AND db.db_key = a.db_key
 ORDER BY a.db_name, end_time desc;

  • List all Database FULL backups done in last 30 days

SELECT SUBSTR ( TO_CHAR ( start_time, 'DD-MON-YYYY HH24:MI' ), 1, 20 )
     , SUBSTR ( object_type, 1, 20 )
  FROM rman.rc_rman_status
 WHERE start_time > SYSDATE - 30
   AND object_type = 'DB FULL'
 ORDER BY 1 DESC;

  • Is RMAN Backup Still Running?

SELECT TO_CHAR ( start_time, 'DD-MON-YY HH24:MI' ) "BACKUP STARTED"
     , sofar
     , totalwork
     , elapsed_seconds / 60 "ELAPSE (Min)"
     , ROUND ( sofar / totalwork * 100, 2 ) "Complete%"
  FROM sys.v_$session_longops
 WHERE opname = 'dbms_backup_restore';

11g Grid | Monitoring Management Services and Repository

Posted by Sagar Patil

Management Services and Repository:

Lets you monitor system performance and access diagnostic information for the Oracle Management Services and Management Repository.

You can view:

  • The overall health of Enterprise Manager.
  • The status and performance of the Repository DBMS Jobs that handle Enterprise Manager’s maintenance and monitoring functionality.
  • The health and configuration of all Management Services.
  • Performance errors for the DBMS jobs and Management Service components (Repository Metrics).

Click the Setup -> “Management Services and Repository” tab.

Overview:

The General section shows general statistics about the Management Repository such as Availability or the number of administrators for this Management Repository.

The Management Services section provides an overview of the health and configuration of all Management Services, including their status, any errors encountered, the performance (load processing) of the Management Services when loading metric data into the Management Repository, and information on files waiting to be loaded into the Management Repository.

You can easily view how well the Management Services are performing by looking at the Files Pending Load columns.

You can also look at the performance over time by clicking on the metric link and going to the metrics detail page; for example, for the same amount of data processed, a decreasing trend in loader throughput might indicate that resources on the machine are being maxed out and an additional Management Service might need to be added for load balancing.

Click “Management Services” -> Name_of_Management_Service to see detailed statistics for that Management Service.

Repository Operations : The Repository Operations page provides you with an overview of the status and performance of the Repository DBMS Jobs that handle part of Enterprise Manager’s maintenance and monitoring functionality. These DBMS jobs run within the Management Repository and require no user input.

Use the Repository Operations page to view the performance of the Repository DBMS jobs and the errors which have occurred to see how Enterprise Manager is performing.

  • If there are any errors shown on the page, see the specific error.
  • If a job is down and there is no error, view the database alert log for more information.
  • If the Processing Time Percent (Last Hour) is running large and the Throughput is low, check for possible errors in the System Errors page, or alerts in the database alert log.
  • If any DBMS job is down, you must determine what the problem is from the generated error messages. You may have to connect to the database as a super administrator and shut down and restart it in order to restart the job.
  • If you want more information, clicking a link brings you to a metrics detail page.

Errors

The Errors page shows you the performance errors for the DBMS jobs and Management Service components. It also shows the number of duplicate targets. If you click the link, it will bring you to the duplicate targets page. To resolve duplicate target errors, the duplicate target must be renamed on the conflicting Oracle Management Agent.


Grid false alert "Agent to OMS communication is broken"

Posted by Sagar Patil

If you are receiving a flood of “Agent to OMS communication is broken” alerts from Grid Control, here is a simple solution.


Increase max_inactive_time in the sysman.mgmt_emd_ping table; the default timeout is 120 seconds (2 minutes).
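A minimal sketch, a commonly cited workaround (300 seconds is an arbitrary new timeout; verify the column exists in your repository version before running it):

UPDATE sysman.mgmt_emd_ping SET max_inactive_time = 300;
COMMIT;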


How does this parameter work with Grid & Agent?

Each Management Agent sends a periodic signal to an Oracle Management Service (OMS) indicating that the Management Agent is available.

If the Management Service does not receive a signal from a Management Agent within a specified time interval (default of 120 seconds), then the Management Service performs a reverse ping. A reverse ping is when the Management Service attempts to contact the Management Agent using the Management Agent URL. If the reverse ping succeeds, then the Oracle Management Service knows that the Management Agent and host are both available.

If the Management Service reverse ping fails, then all targets monitored by the Management Agent are considered to be in the “Agent Unreachable” state, and the Oracle Management Service attempts a TCP ping of the host on which the Management Agent resides. Based on the results of the TCP ping, one of two messages will be returned:

  • If the Management Service’s TCP ping to the host succeeds, then the Management Service determines that the Management Agent is down, but the host is still up. The notification alert message will indicate this state. The message is as follows: Agent is Unreachable (REASON = Connection refused) but the host is UP. The REASON will be filled with the error that was received while performing the reverse ping.
  • If the Management Service’s TCP ping to the host fails, then, you may conclude one of the following:
    • There are network problems between the Management Service and Management Agent hosts
    • The host itself is down
    • The host cannot be reached using a ping because a firewall exists between the Management Service and the Management Agent hosts that prevents ICMP traffic from passing, or the Management Agent host does not support ICMP packets

    The notification alert message will indicate the problem. If Management Service’s host ping fails, the message is:

    Agent is Unreachable (REASON = Connection refused). Host is unreachable (REASON = unknown host)

    The REASON for Agent Unreachable will be filled with the error we received while performing the reverse ping, and the REASON for host unreachability will be filled with the error we received while performing the host ping.


DataGuard Monitorable (Read-Only) Database Properties

Posted by Sagar Patil
Configuration details below are for a 2-node RAC primary (PROD1, PROD2) and a 2-node RAC physical standby (STDBY1, STDBY2).
DGMGRL> show database 'PROD';
Database - PROD
  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    PROD1
    PROD2
Database Status:
SUCCESS

DGMGRL> show database 'STDBY';
Database - STDBY
  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       6 seconds
  Real Time Query: OFF
  Instance(s):
    STDBY1 (apply instance)
    STDBY2
Database Status:
SUCCESS
DGMGRL> show instance verbose "PROD1";
Instance 'PROD1' of database 'PROD'
  Host Name: Primary-Server
  PFILE:
  Properties:
    SidName                         = 'PROD1'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Primary-Server)(PORT=1529))(CONNECT_DATA=(SERVICE_NAME=PROD_DGMGRL)(INSTANCE_NAME=PROD1)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = '/mnt/logs/oradata/PROD/arch'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.arc'
    TopWaitEvents                   = '(monitor)'
Instance Status:
SUCCESS

DGMGRL> show instance verbose "STDBY2";
Instance 'STDBY2' of database 'STDBY'
  Host Name : Standby-Server
  PFILE:
  Properties:
    SidName                         = 'STDBY2'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Standby-Server)(PORT=1529))(CONNECT_DATA=(SERVICE_NAME=STDBY_DGMGRL)(INSTANCE_NAME=STDBY2)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = '/mnt/logs/oradata/PROD/arch'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.arc'
    TopWaitEvents                   = '(monitor)'
Instance Status:
SUCCESS
Properties for a Physical Standby Database:
1.  InconsistentLogXptProps (Inconsistent Redo Transport Properties)
2.  InconsistentProperties (Inconsistent Database Properties)
3.  LogXptStatus (Redo Transport Status)
4.  RecvQEntries (Receive Queue Entries)
5.  SendQEntries (Send Queue Entries)
6.  StatusReport (Status Report)
7.  TopWaitEvents

Properties for a Logical Standby Database:
1.  LsbyFailedTxnInfo (Logical Standby Failed Transaction Information)
2.  LsbyParameters (Logical Standby Parameters)
3.  LsbySkipTable (Logical Standby Skip Table)
4.  LsbySkipTxnTable (SQL Apply Skip Transaction Table)

1.  InconsistentLogXptProps (Inconsistent Redo Transport Properties)
    The InconsistentLogXptProps monitorable database property returns a table that shows all properties related to redo transport services whose values are inconsistent between the broker configuration file and the runtime value in the database.
	DGMGRL> show instance "PROD1" 'InconsistentLogXptProps';
2.  InconsistentProperties (Inconsistent Database Properties)
DGMGRL> show instance "PROD1" 'InconsistentProperties';
INCONSISTENT PROPERTIES
   INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE
DGMGRL> show instance "STDBY1" 'InconsistentProperties';
INCONSISTENT PROPERTIES
   INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE
3.  LogXptStatus (Redo Transport Status)
    The LogXptStatus monitorable database property returns a table that contains the error status of redo transport services for each of the enabled standby databases. This property pertains to the primary database.
	DGMGRL> show instance "PROD1" 'LogXptStatus';
		LOG TRANSPORT STATUS
		PRIMARY_INSTANCE_NAME STANDBY_DATABASE_NAME               STATUS
             PROD1              STDBY
             PROD2              STDBY
4.  RecvQEntries (Receive Queue Entries)
    The RecvQEntries monitorable database property returns a table indicating all log files that were received by the standby database but have not yet been applied.
    If no rows are returned, it implies all log files received have been applied. This property pertains to a standby database.
	DGMGRL> show instance "STDBY1" 'RecvQEntries';
STANDBY_RECEIVE_QUEUE
              STATUS     RESETLOGS_ID           THREAD              LOG_SEQ       TIME_GENERATED       TIME_COMPLETED    FIRST_CHANGE#     NEXT_CHANGE#       SIZE (KBs)
         NOT_APPLIED        732278995                2                 7733  09/13/2011 17:35:56  09/13/2011 17:36:07       7611896792       7611961172            39968
         NOT_APPLIED        732278995                2                 7734  09/13/2011 17:36:07  09/13/2011 17:36:16       7611961172       7612024484            39969
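Much the same list can be pulled with plain SQL on the standby; a small sketch against v$archived_log, where APPLIED = 'NO' marks logs received but not yet applied:

	select thread#, sequence#, first_change#, next_change#, applied
	  from v$archived_log
	 where applied = 'NO'
	 order by thread#, sequence#;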
5. SendQEntries (Send Queue Entries)
   The SendQEntries monitorable database property returns a table that shows all log files on the primary database that were not successfully archived to one or more standby databases.
   This property pertains to the primary database.
	DGMGRL> show instance "PROD1" 'SendQEntries';
   PRIMARY_SEND_QUEUE
        STANDBY_NAME       STATUS     RESETLOGS_ID           THREAD              LOG_SEQ       TIME_GENERATED       TIME_COMPLETED    FIRST_CHANGE#     NEXT_CHANGE#       SIZE (KBs)
             STDBY     ARCHIVED        732278995                1                 7796  09/13/2011 17:37:31  09/13/2011 17:38:01       7612501278       7612699830              177
             STDBY     ARCHIVED        732278995                2                 7744  09/13/2011 17:37:46  09/13/2011 17:37:55       7612594908       7612657357            39970
             STDBY     ARCHIVED        732278995                2                 7745  09/13/2011 17:37:55  09/13/2011 17:38:05       7612657357       7612719596            39968
                       CURRENT         732278995                1                 7797  09/13/2011 17:38:01                            7612699830                                56
                       CURRENT         732278995                2                 7746  09/13/2011 17:38:05                            7612719596                             10491
6. StatusReport (Status Report)
   DGMGRL> SHOW DATABASE 'PROD' 'StatusReport';
7. TopWaitEvents
   The TopWaitEvents monitorable database property specifies the 5 events with the longest waiting time in the specified instance. The events and their waiting time are retrieved from V$SYSTEM_EVENT.
DGMGRL> show instance "PROD1" 'TopWaitEvents';
TOP SYSTEM WAIT EVENTS
               Event            Wait Time
SQL*Net message from client   202752791770
   rdbms ipc message          65593825038
EMON slave idle wait          12015781443
  gcs remote message          5594728345
      DIAG idle wait          5590025706

DGMGRL> show instance "STDBY1" 'TopWaitEvents';
TOP SYSTEM WAIT EVENTS
               Event            		Wait Time
   rdbms ipc message          			16335113847
parallel recovery slave next change     5446704641
SQL*Net message from client           	4758412882
        PX Idle Wait           			2533146430
  gcs remote message           			1361762939
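Since the broker retrieves these figures from V$SYSTEM_EVENT, they can be cross-checked directly on the instance; a minimal sketch (TIME_WAITED_MICRO is in microseconds, so the units may not match the broker's display):

	select *
	  from ( select event, time_waited_micro
	           from v$system_event
	          order by time_waited_micro desc )
	 where rownum <= 5;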

	

How to Safely Remove a Data Guard Broker Configuration under RAC/NON-RAC setup

Posted by Sagar Patil

1) Remove the Data Guard Broker Configuration

Using the Commandline DGMGRL
SQL> show parameter dg_broker;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      /mnt/data/oradata/PROD/dr1PROD.dat
dg_broker_config_file2               string      /mnt/data/oradata/PROD/dr2PROD.dat
dg_broker_start                      boolean     TRUE

DGMGRL> connect sys/sysgsadm
Connected.
DGMGRL> show configuration;
Configuration - dataguard
Protection Mode: MaxPerformance
Databases:
PROD  - Primary database
STDBY - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS

DGMGRL> remove configuration;
Removed configuration
DGMGRL> show configuration;
ORA-16532: Data Guard broker configuration does not exist
Configuration details cannot be determined by DGMGRL

2) On the primary database set dg_broker_start=false:

SQL> show parameter dg_broker

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      ?/dbs/dr1@.dat
dg_broker_config_file2               string      ?/dbs/dr2@.dat
dg_broker_start                      boolean     TRUE

SQL> alter system set dg_broker_start=false scope=both;

System altered.

SQL> show parameter dg_broker;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      /mnt/data/oradata/PROD/dr1PROD.dat
dg_broker_config_file2               string      /mnt/data/oradata/PROD/dr2PROD.dat
dg_broker_start                      boolean     FALSE

3) On the primary disable any archive destinations that are archiving to the standby:

SQL> column A format 999;
column B format a40;
column C format a10;
column C word_wrapped;
column D word_wrapped;
select dest_id A,destination B,status C
from v$archive_dest
where target='STANDBY';
A B                                        C
---- ---------------------------------------- ----------
   2 dg_PROD                                  VALID

Primary> show parameter log_archive_dest_state_2;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_2             string      ENABLE

Primary> alter system set log_archive_dest_state_2=defer scope=both;
System altered.

Primary> show parameter log_archive_dest_state_2;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_2             string      DEFER

 

4) on the standby set the dg_broker_start parameter to false:

SQL> show parameter dg_broker;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      /mnt/data/oradata/PROD/dr1PROD.dat
dg_broker_config_file2               string      /mnt/data/oradata/PROD/dr2PROD.dat
dg_broker_start                      boolean     TRUE

SQL> alter system set dg_broker_start=false scope=both;
System altered.

5) On both system remove metadata files

oracle@hasunclu1:/u03/oracle/9.2.0/dbs> rm /mnt/data/oradata/PROD/dr1PROD.dat
oracle@hasunclu1:/u03/oracle/9.2.0/dbs> rm /mnt/data/oradata/PROD/dr2PROD.dat

If required you can activate standby database as below :

Standby> Shutdown abort;

Standby> startup;
ORACLE instance started.
Total System Global Area 9152860160 bytes
Fixed Size                  2234056 bytes
Variable Size            6610225464 bytes
Database Buffers         2516582400 bytes
Redo Buffers               23818240 bytes
Database mounted.
ORA-10456: cannot open standby database; media recovery session may be in progress

SQL> alter database activate standby database;
Database altered.

SQL> startup force;
ORACLE instance started.
Total System Global Area 9152860160 bytes
Fixed Size                  2234056 bytes
Variable Size            6610225464 bytes
Database Buffers         2516582400 bytes
Redo Buffers               23818240 bytes
Database mounted.
Database opened.

[oracle@Node2 PROD]$ srvctl stop database -d PROD
[oracle@Node2 PROD]$ srvctl start database -d PROD

[oracle@Node2 PROD]$ srvctl status instance -d PROD -n Node1
Instance PROD1 is running on node Node1
[oracle@Node2 PROD]$ srvctl status instance -d PROD -n Node2
Instance PROD2 is running on node Node2

The database won't be opened by the srvctl commands because it is still registered with a STANDBY/MOUNT start option from the Data Guard setup.
Change the start option from MOUNT to OPEN using srvctl, as sketched below.
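A sketch of that change, assuming an 11gR2 srvctl and the PROD database used above:

$ srvctl modify database -d PROD -s OPEN    # store OPEN as the start option in place of MOUNT
$ srvctl stop database -d PROD
$ srvctl start database -d PROD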

Script to Collect Data Guard Physical Standby Diagnostic Information [ID 241438.1]

Posted by Sagar Patil

I generally don't cut and paste Metalink notes, but if you are using Data Guard, for diagnosis you will need these two Metalink scripts: [ID 241438.1] & [ID 241374.1]

Script to Collect Data Guard Physical Standby Diagnostic Information [ID 241438.1]
 Modified 20-APR-2011     Type SCRIPT     Status PUBLISHED      
Overview
-------
This script is intended to provide an easy method to provide information
necessary to troubleshoot Data Guard issues.

-- NAME: DG_phy_stby_diag.sql    
-- ------------------------------------------------------------------------ 
-- AUTHOR:  
--    Michael Smith - Oracle Support Services - DataServer Group
--    Copyright 2002, Oracle Corporation      
-- ------------------------------------------------------------------------ 
-- PURPOSE: 
--    This script is to be used to assist in collecting information to help
--    troubleshoot Data Guard issues.
-- ------------------------------------------------------------------------ 
-- DISCLAIMER: 
--    This script is provided for educational purposes only. It is NOT  
--    supported by Oracle World Wide Technical Support. 
--    The script has been tested and appears to work as intended. 
--    You should always run new scripts on a test instance initially. 
-- ------------------------------------------------------------------------ 
-- Script output is as follows:

set echo off
set feedback off
column timecol new_value timestamp
column spool_extension new_value suffix
select to_char(sysdate,'Mondd_hhmi') timecol,
'.out' spool_extension from sys.dual;
column output new_value dbname
select value || '_' output
from v$parameter where name = 'db_name';
spool dgdiag_phystby_&&dbname&&timestamp&&suffix
set lines 200
set pagesize 35
set trim on
set trims on
alter session set nls_date_format = 'MON-DD-YYYY HH24:MI:SS';
set feedback on
select to_char(sysdate) time from dual;

set echo on

--
-- ARCHIVER can be (STOPPED | STARTED | FAILED). FAILED means that the archiver
-- failed to archive a log last time, but will try again within 5 minutes.
-- LOG_SWITCH_WAIT is the ARCHIVE LOG/CLEAR LOG/CHECKPOINT event that log
-- switching is waiting for. Note that if ALTER SYSTEM SWITCH LOGFILE is hung,
-- but there is room in the current online redo log, then the value is NULL

column host_name format a20 tru
column version format a9 tru
select instance_name,host_name,version,archiver,log_switch_wait from v$instance;

-- The following select will give us the generic information about how this standby is
-- setup.  The database_role should be standby as that is what this script is intended 
-- to be ran on.  If protection_level is different than protection_mode then for some
-- reason the mode listed in protection_mode experienced a need to downgrade.  Once the
-- error condition has been corrected the protection_level should match the protection_mode
-- after the next log switch.

column ROLE format a7 tru
select name,platform_id,database_role role,log_mode,
 flashback_on flashback,protection_mode,protection_level 
from v$database;

-- Force logging is not mandatory but is recommended.  Supplemental logging should be enabled
-- on the standby if a logical standby is in the configuration. During normal 
-- operations it is acceptable for SWITCHOVER_STATUS to be SESSIONS ACTIVE or NOT ALLOWED.

column force_logging format a13 tru
column remote_archive format a14 tru
column dataguard_broker format a16 tru
select force_logging,remote_archive,supplemental_log_data_pk,supplemental_log_data_ui,
switchover_status,dataguard_broker from v$database; 

-- This query produces a list of all archive destinations and shows if they are enabled,
-- what process is servicing that destination, if the destination is local or remote,
-- and if remote what the current mount ID is. For a physical standby we should have at
-- least one remote destination that points the primary set but it should be deferred.

COLUMN destination FORMAT A35 WRAP
column process format a7
column archiver format a8
column ID format 99

select dest_id "ID",destination,status,target,
archiver,schedule,process,mountid 
from v$archive_dest;

-- If the protection mode of the standby is set to anything higher than max performance
-- then we need to make sure the remote destination that points to the primary is set
-- with the correct options else we will have issues during switchover.

select dest_id,process,transmit_mode,async_blocks,
net_timeout,delay_mins,reopen_secs,register,binding
from v$archive_dest;

-- The following select will show any errors that occurred the last time an
-- attempt was made to archive to the destination.  If ERROR is blank and status is VALID then
-- the archive completed correctly.

column error format a55 tru
select dest_id,status,error from v$archive_dest;

-- Determine if any error conditions have been reached by querying the v$dataguard_status
-- view (view only available in 9.2.0 and above):

column message format a80
select message, timestamp
from v$dataguard_status
where severity in ('Error','Fatal')
order by timestamp;

-- The following query is run to get the status of the SRL's on the standby.  If the
-- primary is archiving with the LGWR process and SRL's are present (in the correct
-- number and size) then we should see a group# active.

select group#,sequence#,bytes,used,archived,status from v$standby_log;

-- The above SRL's should match in number and in size with the ORL's returned below:

select group#,thread#,sequence#,bytes,archived,status from v$log;

-- Query v$managed_standby to see the status of processes involved in the
-- configuration.

select process,status,client_process,sequence#,block#,active_agents,known_agents
from v$managed_standby;

-- Verify that the last sequence# received and the last sequence# applied to standby
-- database.

select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
 from v$archived_log
 where resetlogs_change#=(select resetlogs_change# from v$database)
 group by thread#) al,
 (select thread# thrd, max(sequence#) lhmax
 from v$log_history
 where first_time=(select max(first_time) from v$log_history)
 group by thread#) lh
where al.thrd = lh.thrd;

-- The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next
-- gap that is currently blocking redo apply from continuing. After resolving the
-- identified gap and starting redo apply, query the V$ARCHIVE_GAP fixed view again
-- on the physical standby database to determine the next gap sequence, if there is
-- one.

select * from v$archive_gap;

-- Non-default init parameters.

set numwidth 5
column name format a30 tru
column value format a50 wra
select name, value
from v$parameter
where isdefault = 'FALSE';

spool off
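As with the primary-site script below, this is intended to be run via SQL*Plus as SYS; save it under the name given in its header and run it on the standby:

SQL> @DG_phy_stby_diag.sql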

Script to Collect Data Guard Primary Site Diagnostic Information [ID 241374.1]

Posted by Sagar Patil

I generally don't cut and paste Metalink notes, but if you are using Data Guard, for diagnosis you will need these two Metalink scripts: [ID 241438.1] & [ID 241374.1]

Script to Collect Data Guard Primary Site Diagnostic Information [ID 241374.1]
 Modified 20-APR-2011     Type SCRIPT     Status PUBLISHED      

Overview
--------
This script is intended to provide an easy method to provide information
necessary to troubleshoot Data Guard issues.

Script Notes
-------------

This script is intended to be run via sqlplus as the SYS or Internal user.

Script
-------
 - - - - - - - - - - - - - - - - Script begins here - - - - - - - - - - - - - - - -

-- NAME: dg_prim_diag.sql  (Run on PRIMARY with a LOGICAL or PHYSICAL STANDBY)
-- ------------------------------------------------------------------------ 
--    Copyright 2002, Oracle Corporation      
-- LAST UPDATED: 2/23/04
--
-- Usage: @dg_prim_diag
-- ------------------------------------------------------------------------ 
-- PURPOSE: 
--    This script is to be used to assist in collecting information to help
--    troubleshoot Data Guard issues with an emphasis on Logical Standby.
-- ------------------------------------------------------------------------ 
-- DISCLAIMER: 
--    This script is provided for educational purposes only. It is NOT  
--    supported by Oracle World Wide Technical Support. 
--    The script has been tested and appears to work as intended. 
--    You should always run new scripts on a test instance initially. 
-- ------------------------------------------------------------------------ 
-- Script output is as follows:

set echo off
set feedback off
column timecol new_value timestamp
column spool_extension new_value suffix
select to_char(sysdate,'Mondd_hhmi') timecol,
'.out' spool_extension from sys.dual;
column output new_value dbname
select value || '_' output
from v$parameter where name = 'db_name';
spool dg_prim_diag_&&dbname&&timestamp&&suffix
set linesize 79
set pagesize 35
set trim on
set trims on
alter session set nls_date_format = 'MON-DD-YYYY HH24:MI:SS';
set feedback on
select to_char(sysdate) time from dual;

set echo on

-- In the following the database_role should be primary as that is what
-- this script is intended to be run on.  If protection_level is different
-- than protection_mode then for some reason the mode listed in
-- protection_mode experienced a need to downgrade.  Once the error
-- condition has been corrected the protection_level should match the
-- protection_mode after the next log switch.

column role format a7 tru
column name format a10 wrap

select name,platform_id,database_role role,log_mode,
 flashback_on flashback,protection_mode,protection_level 
from v$database;

-- ARCHIVER can be (STOPPED | STARTED | FAILED). FAILED means that the
-- archiver failed to archive a log last time, but will try again within 5
-- minutes. LOG_SWITCH_WAIT The ARCHIVE LOG/CLEAR LOG/CHECKPOINT event log
-- switching is waiting for.  Note that if ALTER SYSTEM SWITCH LOGFILE is
-- hung, but there is room in the current online redo log, then value is
-- NULL

column host_name format a20 tru
column version format a9 tru

select instance_name,host_name,version,archiver,log_switch_wait
from v$instance;

-- The following query gives us information about catpatch.
-- This way we can tell if the procedure doesn't match the image.

select version, modified, status from dba_registry
where comp_id = 'CATPROC';

-- Force logging is not mandatory but is recommended.  Supplemental
-- logging must be enabled if the standby associated with this primary is
-- a logical standby. During normal operations it is acceptable for
-- SWITCHOVER_STATUS to be SESSIONS ACTIVE or TO STANDBY.

column force_logging format a13 tru
column remote_archive format a14 tru
column dataguard_broker format a16 tru

select force_logging,remote_archive,
 supplemental_log_data_pk,supplemental_log_data_ui,
 switchover_status,dataguard_broker
from v$database; 

-- This query produces a list of all archive destinations.  It shows if
-- they are enabled, what process is servicing that destination, if the
-- destination is local or remote, and if remote what the current mount ID
-- is.

column destination format a35 wrap
column process format a7
column archiver format a8
column ID format 99
column mid format 99

select dest_id "ID",destination,status,target,
 schedule,process,mountid  mid
from v$archive_dest order by dest_id;

-- This select will give further detail on the destinations as to what
-- options have been set.  Register indicates whether or not the archived
-- redo log is registered in the remote destination control file.

set numwidth 8
column ID format 99

select dest_id "ID",archiver,transmit_mode,affirm,async_blocks async,
 net_timeout net_time,delay_mins delay,reopen_secs reopen,
 register,binding
from v$archive_dest order by dest_id;

-- The following select will show any errors that occurred the last time
-- an attempt was made to archive to the destination.  If ERROR is
-- blank and status is VALID then the archive completed correctly.

column error format a55 wrap

select dest_id,status,error from v$archive_dest;

-- The query below will determine if any error conditions have been
-- reached by querying the v$dataguard_status view (view only available in
-- 9.2.0 and above):

column message format a80

select message, timestamp
from v$dataguard_status
where severity in ('Error','Fatal')
order by timestamp;

-- The following query will determine the current sequence number
-- and the last sequence archived.  If you are remotely archiving
-- using the LGWR process then the archived sequence should be one
-- higher than the current sequence.  If remotely archiving using the
-- ARCH process then the archived sequence should be equal to the
-- current sequence.  The applied sequence information is updated at
-- log switch time.

select ads.dest_id,max(sequence#) "Current Sequence",
 max(log_sequence) "Last Archived"
 from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads
 where ad.dest_id=al.dest_id
 and al.dest_id=ads.dest_id
 and al.resetlogs_change#=(select max(resetlogs_change#) from v$archived_log )
 group by ads.dest_id;

-- The following select will attempt to gather as much information as
-- possible from the standby.  SRLs are not supported with Logical Standby
-- until Version 10.1.

set numwidth 8
column ID format 99
column "SRLs" format 99
column Active format 99

select dest_id id,database_mode db_mode,recovery_mode,
 protection_mode,standby_logfile_count "SRLs",
 standby_logfile_active ACTIVE,
 archived_seq#
from v$archive_dest_status;

-- Query v$managed_standby to see the status of processes involved in
-- the shipping redo on this system.  Does not include processes needed to
-- apply redo.

select process,status,client_process,sequence#
from v$managed_standby;

-- The following query is run on the primary to see if SRL's have been
-- created in preparation for switchover.

select group#,sequence#,bytes from v$standby_log;

-- The above SRL's should match in number and in size with the ORL's
-- returned below:

select group#,thread#,sequence#,bytes,archived,status from v$log;

-- Non-default init parameters.

set numwidth 5
column name format a30 tru
column value format a48 wra
select name, value
from v$parameter
where isdefault = 'FALSE';

spool off

 - - - - - - - - - - - - - - - -  Script ends here  - - - - - - - - - - - - - - - -

 

 

Understanding 11g Grid Directory Structure , Config & Log files

Posted by Sagar Patil


$cd /opt/app/oracle/Middleware
[oracle@Middleware]$ tree -L 2
.
|-- WebLogicServer
|   |-- Oracle_WT
|   |-- agent11g
|   |-- domain-registry.xml
|   |-- logs
|   |-- modules
|   |-- oms11g ($OMS_HOME)
|        |-- cfgtoollogs (Install Log)
|            |-- oui
|                |-- installActions2011XXX.log
|                |-- oraInstall2011XXX.log
|        |-- sysman
|            |-- log
|                |-- emrepocminst.log
|            |-- schemamanager
|                |-- emschema.log (Install Log)
|            |--config
|                |-- httpd_em.conf
|                |-- emoms.properties
|   |-- oracle_common
|   |-- patch_wls1032
|   |-- registry.dat
|   |-- registry.xml
|   |-- user_projects
|   |-- utils
|   |-- wlserver_10.3
|        |-- common
|            |-- emnodemanager (Oracle WebLogic Server Logs)
|                |-- nodemanager.log
|                |-- nodemanager.properties
|               
-- gc_inst (EM INSTANCE BASE)
 |-- WebTierIH1
 |   |-- OHS
 |   |-- auditlogs
 |   |-- bin
 |   |-- config
 |   |-- diagnostics
 |       |-- logs
 |           |-- OPMN
 |               |-- opmn  (WebTier logs)
 |                   |-- provision.log
 |                   |-- opmn.out
 |                   |-- debug.log
 |                   |-- opmn.log
 |           |-- OHS
 |               |-- ohs1  (WebTier logs)
 |                   |-- access_log
 |                   |-- mod_wl_ohs.log
 |-- em
 |   |-- EMGC_OMS1 (OMS_NAME)
 |       |-- emgc.properties ( Contains OMS ServerName, OMS PortNumbers)
 |       |-- Sysman
 |           |-- Log
 |               |--    emoms.log : Main log file for the OMS.
 |                 |        Number of files created will be = (log4j.appender.emlogAppender.MaxBackupIndex + 1)
 |                |--    emoms.trc :    Main trace file for the OMS.
 |                |        Number of files created will be = (log4j.appender.emtrcAppender.MaxBackupIndex + 1)
 |                |--    secure.log
 |                |        Contains output from the 'emctl secure oms' commands.
 |                |--    emctl.msg
 |                |        Created / written to by the HealthMonitor thread of the OMS, when it re-starts the OMS due to a critical error.
 |                |--    emctl.log
 |                        Created by the emctl utility, when any commands are executed in the OMS home          
 |-- user_projects
     |-- domains
         |-- GCDomain
             |-- servers
                 |-- EMGC_OMS1 (Oracle WebLogic Server Logs : GRID Logs)
                     |-- logs
                         |-- EMGC_OMS1.log        - JVM Application Log
                          |-- access.log           - Similar to HTTP access_log
                         |-- EMGC_OMS1.out   
                |-- EMGC_ADMINSERVER  (Oracle WebLogic Server Logs : AdminServer Logs)
                         |-- logs
                         |-- EMGC_ADMINSERVER.log       
                         |-- GCDomain.log                  
                         |-- EMGC_ADMINSERVER.out

---------------------------------------------------------------------------------------------------------
Component                                 Location
---------------------------------------------------------------------------------------------------------
Oracle HTTP Server (OHS)       <EM_INSTANCE_BASE>/<webtier_instance_name>/diagnostics/logs/OHS/<ohs_name>
For example,                /u01/app/Oracle/gc_inst/WebTierIH1/diagnostics/logs/OHS/ohs1

OPMN                        <EM_INSTANCE_BASE>/<webtier_instance_name>/diagnostics/logs/OPMN/<opmn_name>
For example,                /u01/app/Oracle/gc_inst/WebTierIH1/diagnostics/logs/OPMN/opmn1

Oracle WebLogic                <EM_INSTANCE_BASE>/user_projects/domains/<domain_name>/servers/<SERVER_NAME>/logs/<SERVER_NAME>.out
For example,                /u01/app/Oracle/gc_inst/user_projects/domains/GCDomain/servers/EMGC_OMS1/logs/EMGC_OMS1.out
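For day-to-day troubleshooting, the file you usually want first is the main OMS trace file; assembling the example paths above (adjust the EM instance base and OMS name for your install):

$ tail -f /u01/app/Oracle/gc_inst/em/EMGC_OMS1/sysman/log/emoms.trc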

Oracle Enterprise Manager to 10g Grid Migration Process

Posted by Sagar Patil

Migrating from previous Oracle Enterprise Manager releases to Grid Control is a two-step process:

1. Deploy the OMA (Agent) to all managed servers.
2. Migrate existing Oracle Enterprise Manager administrator accounts to Grid Control.

1. Deploy the OMA (Agent) to all managed servers.

To simplify and automate Management Agent deployment, a Tcl script is provided that is submitted as a job from an Enterprise Manager Release 2.2, Release 9.0.1, or Release 9.2 Job system. The deployment script (agentInstallJob.tcl) can be found in the Oracle Enterprise Manager 10g home directory at the following location:

%ORACLE_HOME%/sysman/agent_download/agentInstallJob.tcl

2. Migrate existing Oracle Enterprise Manager administrator accounts to Grid Control.

Once the Oracle Enterprise Manager 10g Management Agents have been deployed and configured, the next step is to migrate information about users, privileges, groups, and preferred credentials from the original Management Repository to the Oracle Enterprise Manager 10g Management Repository.

Both Enterprise Manager 9i and Oracle Enterprise Manager 10g save and encrypt all administrator accounts and preferred credentials in the repository. In order to migrate all of these accounts over to Enterprise Manager 10g, you must run the Migration Utility from the Enterprise Manager 10g home.

This command line utility can be found in the following directory: %EM_HOME%/bin/repo_mig

The Migration Utility requires the repository user and password for both the original Management Repository database and for the new Oracle Enterprise Manager 10g Management Repository database.

You execute the utility and specify operational parameters using the following format:

$ repo_mig -migrate EM_user/password@old_repository_host:port:sid sysman/password@OMR_host:port:sid

where:
-preview: Generates a preliminary migration report without carrying out the migration.
-migrate: Performs migration of groups, administrators, target privileges, and preferred credentials of hosts, databases, and listeners.
source_user: Source OEM repository user name
source_pwd: Source OEM repository password
source_service: Source OEM repository service. For example, Host:Port:SID
dest_user: Destination OEM repository user name
dest_pwd: Destination OEM repository password
dest_service: Destination OEM repository service. (Host:Port:SID)
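To produce the preliminary report first, without migrating anything, the same command line can be run with -preview (connect strings are the same placeholders as above):

$ repo_mig -preview EM_user/password@old_repository_host:port:sid sysman/password@OMR_host:port:sid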

Once the migration is complete, the account information is then saved and encrypted. The passwords on all of the accounts will remain the same.

11g Grid | How to manually clear EM Grid Control alerts

Posted by Sagar Patil

List alerts and the number of times they have been raised:

select substr(message_nlsid,1,50),count(*)
 from mgmt_current_severity
 where message_nlsid is not null
 group by message_nlsid
 having count(*) > 10
 order by 2 desc,1 ;

 Message_nlsid                            Count                
-------------------------------------------------------------       
invalid_objects_count_alertmessage               88
service_perf_stat_alertmessage                   37
adrAlertLog_accessViolationErrStack_alertmessage 31
Response_Status_alertmessage                     29
resource_instance_value_alert_message_push       16
adrAlertLog_genericInternalErrStack_alertmessage 15
alert_streams_process_status                     15
TNSERRORS_tnserr_alertmessage                    14

SQL to list all alerts against a given “message_nlsid”

select A.target_name, A.target_type, B.target_guid, B.message
 from mgmt_targets  A, mgmt_current_severity B
 where message_nlsid='invalid_objects_count_alertmessage'
 AND A.target_guid=B.target_guid
 order by TARGET_NAME, TARGET_TYPE;

SQL if you want to list/delete alerts for a known “TARGET”

SELECT A.target_name
, B.target_guid
 , B.metric_guid
 , B.key_value
 FROM mgmt_targets A
 JOIN mgmt_current_severity B
 ON A.target_guid = B.target_guid
 WHERE upper ( A.target_name ) LIKE '%TESTDB%';
DECLARE
CURSOR c1 IS
SELECT A.target_name
, B.target_guid
, B.metric_guid
, B.key_value
FROM mgmt_targets A
JOIN mgmt_current_severity B
ON A.target_guid = B.target_guid
WHERE upper ( A.target_name ) LIKE '%TESTDB%';

BEGIN
FOR r IN c1
LOOP
dbms_output.put_line ( 'R.TARGET_GUID = ' || r.target_guid || '  , r.metric_guid = ' || r.metric_guid || '  , r.key_value = ' || r.key_value );
sysman.em_severity.delete_current_severity ( r.target_guid
, r.metric_guid
, r.key_value );
DELETE from sysman.mgmt_severity
WHERE target_guid = r.target_guid
AND metric_guid = r.metric_guid
AND key_value = r.key_value;
END LOOP;
COMMIT;
END;

List all alerts with a message “invalid_objects_count_alertmessage”

SELECT A.target_name, B.target_guid, B.metric_guid, B.key_value
 FROM mgmt_targets A JOIN mgmt_current_severity B
 ON A.target_guid = B.target_guid
 WHERE message_nlsid='invalid_objects_count_alertmessage';

Delete all alerts for message “invalid_objects_count_alertmessage”

SQL> select severity_code,message
 from mgmt_current_severity
 where message_nlsid='invalid_objects_count_alertmessage'; 
---------------------
88 rows selected

DECLARE
 CURSOR c1 IS
 SELECT B.target_guid,
 B.metric_guid,
 B.key_value
 FROM mgmt_targets A JOIN mgmt_current_severity B
 ON A.target_guid = B.target_guid
 WHERE message_nlsid='invalid_objects_count_alertmessage';
BEGIN
 FOR r IN c1 LOOP
 dbms_output.put_line( 'R.TARGET_GUID = ' ||r.target_guid || '  , r.metric_guid = ' || r.metric_guid || '  , r.key_value = ' || r.key_value);
 sysman.em_severity.delete_current_severity(r.target_guid,r.metric_guid,r.key_value);
 DELETE from sysman.mgmt_severity
 WHERE
 target_guid = r.target_guid AND
 metric_guid = r.metric_guid AND
 key_value = r.key_value;
 END LOOP;
 COMMIT;
END;
SQL> select severity_code,message
 from mgmt_current_severity
 where message_nlsid='invalid_objects_count_alertmessage'; 
no rows selected
If you find it difficult to delete using "message_nlsid", use the message text displayed at the EM console instead.

List Alerts :
SELECT A.target_name,B.target_guid,message,
 B.metric_guid,
 B.key_value
 FROM mgmt_targets A JOIN mgmt_current_severity B
 ON A.target_guid = B.target_guid
 WHERE message like '%An%access%violation%detected%'; 

Delete Alerts :    
DECLARE
 CURSOR c1 IS
 SELECT B.target_guid,
 B.metric_guid,
 B.key_value
 FROM mgmt_targets A JOIN mgmt_current_severity B
 ON A.target_guid = B.target_guid
 WHERE message like '%An%access%violation%detected%';

BEGIN
 FOR r IN c1 LOOP
 dbms_output.put_line( 'R.TARGET_GUID = ' ||r.target_guid || '  , r.metric_guid = ' || r.metric_guid || '  , r.key_value = ' || r.key_value);
 sysman.em_severity.delete_current_severity(r.target_guid,r.metric_guid,r.key_value);
 DELETE from sysman.mgmt_severity
 WHERE
 target_guid = r.target_guid AND
 metric_guid = r.metric_guid AND
 key_value = r.key_value;
 END LOOP;
 COMMIT;
END;

Delete old alerts by date/month

select count ( * )
 , to_char ( trunc ( collection_timestamp
 , 'MONTH' )
 , 'MONTH' )
 from mgmt_current_severity
 group by trunc ( collection_timestamp
 , 'MONTH' )
 order by trunc ( collection_timestamp
 , 'MONTH' );
 COUNT(*) TO_CHAR(TRUNC(COLLECTION_TIMESTAMP,'
---------- ------------------------------------
 153 MARCH
 147 APRIL
 120 MAY

DECLARE
 CURSOR c1 IS
 SELECT B.target_guid
 , B.metric_guid
 , B.key_value
 FROM mgmt_targets A
 JOIN mgmt_current_severity B
 ON A.target_guid = B.target_guid
 WHERE to_char ( trunc ( b.collection_timestamp
 , 'MONTH' )
 , 'MONTH' ) like 'MAR%';
 BEGIN
 FOR r IN c1
 LOOP
 dbms_output.put_line ( 'R.TARGET_GUID = ' || r.target_guid || '  , r.metric_guid = ' || r.metric_guid || '  , r.key_value = ' || r.key_value );
 sysman.em_severity.delete_current_severity ( r.target_guid
 , r.metric_guid
 , r.key_value );
 DELETE from sysman.mgmt_severity
 WHERE target_guid = r.target_guid
 AND metric_guid = r.metric_guid
 AND key_value = r.key_value;
 END LOOP;
 COMMIT;
 END;
 PL/SQL procedure successfully completed.

COUNT(*) TO_CHAR(TRUNC(COLLECTION_TIMESTAMP,'
---------- ------------------------------------
147 APRIL
120 MAY

How to create SSH tunnels to get around Firewall

Posted by Sagar Patil

I normally have access to Unix/Linux systems through ssh (port 22), but firewall access is often disabled for ports like TNS (1521/1526), EM agent (1158), and Grid Control (5500). How do you connect to those ports if you don't have direct access through the firewall? Use SSH tunnelling.

1. Locate the source (Windows desktop) and destination Unix server IP addresses, and the port number you wish to connect to on the destination.

I am trying to connect to the destination Oracle server (192.168.1.100) using TNS port 1529.

2. Open PuTTY and add the following configuration: the destination server and port details for the tunnel.

Before the tunnel is up, the local port refuses connections:

C:\>telnet localhost 1529
Connecting To localhost...Could not open connection to the host, on port 1529: Connect failed
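If you prefer a command-line client to PuTTY, the same tunnel can be built with OpenSSH; a sketch using the server and port from step 1:

$ ssh -N -L 1529:localhost:1529 oracle@192.168.1.100

Here -L binds local port 1529 and forwards it over the ssh session to port 1529 on the remote server, and -N keeps the tunnel open without starting a remote shell.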

3. Now start the PuTTY session and log in to the remote server over ssh.

Run "netstat -an" on the Windows desktop to see whether any sessions on port 1529 are listed.

Now try "telnet localhost 1529" to see if it's all working as it should.

4. Configure your client to connect to the forwarded port on your local machine, which will create the tunnel to the destination server.

I am using TOra, so I have pointed it at port 1529 on my local machine; this is very important.

I am now able to connect to the target Oracle database through my ssh tunnel successfully.

5. Let's use "netstat -anp" at the server to see connections from the source desktop.

192.168.1.100 (Remote Oracle Server)  &  192.168.1.121 (my Desktop)

[oracle@~]$ netstat -anp | grep 192.168.1.121
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp        0      0 ::ffff:192.168.1.100:22    ::ffff:192.168.1.121:3970    ESTABLISHED –
tcp        0      0 ::ffff:192.168.1.100:22    ::ffff:192.168.1.121:3807    ESTABLISHED –

6. At the desktop, run "netstat -an" to see the established sessions.

How Grid Enterprise Manager Determines Host and Management Agent Availability

Posted by Sagar Patil

The availability of the targets you manage is directly affected by the availability of the Management Agents, which gather data about your managed targets, and the host computers on which your targets are installed. As a result, it is important to understand how Enterprise Manager determines host and Management Agent availability.

Each Management Agent sends a periodic signal to an Oracle Management Service (OMS) indicating that the Management Agent is available.

If the Management Service does not receive a signal from a Management Agent within a specified time interval (default of 120 seconds), then the Management Service performs a reverse ping. A reverse ping is when the Management Service attempts to contact the Management Agent using the Management Agent URL. If the reverse ping succeeds, then the Oracle Management Service knows that the Management Agent and host are both available.

If the Management Service reverse ping fails, then all targets monitored by the Management Agent are considered to be in the “Agent Unreachable” state, and the Oracle Management Service attempts a TCP ping of the host on which the Management Agent resides. Based on the results of the TCP ping, one of two messages will be returned:

  • If the Management Service's TCP ping to the host succeeds, then the Management Service determines that the Management Agent is down, but the host is still up. The notification alert message will indicate this state. The message is as follows: Agent is Unreachable (REASON = Connection refused) but the host is UP. The REASON will be filled with the error that was received while performing the reverse ping.
  • If the Management Service's TCP ping to the host fails, then you may conclude one of the following:
    • There are network problems between the Management Service and Management Agent hosts
    • The host itself is down
    • The host cannot be reached using a ping because a firewall exists between the Management Service and the Management Agent hosts that prevents ICMP traffic from passing, or the Management Agent host does not support ICMP packets

    The notification alert message will indicate the problem. If the Management Service's host ping fails, the message is:

    Agent is Unreachable (REASON = Connection refused). Host is unreachable (REASON = unknown host)

    The REASON for Agent Unreachable will be filled with the error we received while performing the reverse ping, and the REASON for host unreachability will be filled with the error we received while performing the host ping.

11g Grid | Creating and Comparing a Baseline

Posted by Sagar Patil

As a DBA, you can't watch the entire application's performance. But there is an easy way to see whether database server load is normal when you hit issues: create and compare baselines/snapshots under 11g Grid. This option doesn't work with 10g, so you will need an 11g database.

This post will cover-

  • Creating a single Baseline
  • Creating a Repeating Baseline
  • Comparing a Baseline to Another Baseline or Pair of Snapshots

Baseline Interval Types

  • Single: a single baseline uses a single fixed time period with specific beginning and ending times and dates. Single baseline periods are useful for establishing baselines over user-defined periods of special processing, such as month-end.
  • Repeating: a repeating baseline uses a fixed time period with specific beginning and ending times, repeated daily or weekly. Repeating baseline periods are useful for observing recurring performance patterns, and you can use their statistics to automatically set metric thresholds for alerting.

The first thing you need to do is locate a normal day and create reference point.

To create a single baseline (a SQL*Plus alternative is sketched after these steps):
From the Database Home page, click Server. The Server page appears.

  1. Under Statistics Management, click AWR Baselines. The AWR Baselines page appears with a list of existing baselines displayed.
  2. Click Create. The Create Baseline: Baseline Interval Type page appears.
  3. Select Single.
  4. Click Continue. The Create Baseline: Single Baseline page appears.
  5. In the Baseline Name field, enter a name for the baseline.
  6. Under Baseline Interval, select whether to use a snapshot range or a time range for the baseline. Do one of the following:
    • To use a snapshot range, select Snapshot Range. Complete the following steps:
      • Under Select Time Period, select a start time for the baseline by selecting Period Start Time and the snapshot icon below the Active Sessions chart that corresponds to the desired start time.
      • Select an end time for the baseline by selecting Period End Time and the snapshot icon below the Active Sessions chart that corresponds to the desired end time.
      • Optionally, to view older snapshots that are not displayed below the Active Sessions chart, expand Change Chart Time Period. Enter the desired start date in the Chart Start Date field and the desired end date in the Chart End Date field, and click Go.

    • To use a time range, select Time Range. Complete the following steps:
      • In the Start Time fields, select a start time for the baseline.
      • In the End Time fields, select an end time for the baseline.
  7. Click Finish.
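The GUI steps above boil down to one documented call; here is a minimal SQL*Plus sketch using the DBMS_WORKLOAD_REPOSITORY package (the snapshot IDs and baseline name are placeholders):

BEGIN
  DBMS_WORKLOAD_REPOSITORY.create_baseline (
    start_snap_id => 100,   -- placeholder: snapshot at the start of the "normal" period
    end_snap_id   => 110,   -- placeholder: snapshot at the end of the "normal" period
    baseline_name => 'NORMAL_LOAD_BASELINE' );
END;
/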

 

Creating a Repeating Baseline

A repeating baseline is a baseline that repeats during a time interval over a specific period. For example, a repeating baseline may repeat every Monday from 8:00 a.m. to 10:00 a.m. from February 6, 2009 to February 6, 2010.

To create a repeating baseline:

  1. From the Database Home page, click Server. The Server page appears.
  2. Under Statistics Management, click AWR Baselines. The AWR Baselines page appears with a list of existing baselines displayed.
  3. Click Create. The Create Baseline: Baseline Interval Type page appears.
  4. Select Repeating and then click Continue. The Create Baseline: Repeating Baseline Template page appears.
  5. In the Baseline Name Prefix field, enter a name prefix for the baseline.
  6. Under Baseline Time Period, specify the time of the day that you want the baseline to begin collecting AWR data and the duration of the baseline collection.
  7. Under Frequency, do one of the following:
    • Select Daily if you want the baseline to repeat on a daily basis.
    • Select Weekly if you want the baseline to repeat on a weekly basis, and then select the day of the week on which the baseline will repeat.
  8. Under Interval of Baseline Creation, complete the following steps:
    1. In the Start Time fields, select a date and time in the future when the data collection should begin.
    2. In the End Time fields, select a date and time in the future when the data collection should end.
  9. Under Purge Policy, enter the number of days to retain captured baselines.
  10. Click Finish. A baseline template with the same name as the baseline name prefix will be created.
  11. A baseline template is a specification that enables Oracle Database to automatically generate a baseline for a future time period; a SQL sketch of the same idea follows below.
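The repeating case can also be scripted with a baseline template; a sketch using DBMS_WORKLOAD_REPOSITORY.create_baseline_template, assuming a Monday 08:00-10:00 window (all values are placeholders):

BEGIN
  DBMS_WORKLOAD_REPOSITORY.create_baseline_template (
    day_of_week          => 'MONDAY',
    hour_in_day          => 8,              -- window opens at 08:00
    duration             => 2,              -- window lasts 2 hours
    start_time           => SYSDATE,
    end_time             => SYSDATE + 365,
    baseline_name_prefix => 'MONDAY_AM_',
    template_name        => 'MONDAY_AM_TEMPLATE',
    expiration           => 35 );           -- keep each generated baseline for 35 days
END;
/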

Comparing a Baseline to Another Baseline or Pair of Snapshots

When performance degradation occurs over time, you can run the AWR Compare Periods report to compare the degraded performance.

To compare a baseline to another baseline

  1. From the Database Home page, click Server.
  2. Under Statistics Management, click Automatic Workload Repository.
  3. Under Manage Snapshots and Baselines, click the link next to Baselines.
  4. Complete the following steps:
    1. Select the baseline to use for the report. At least one existing baseline must be available.
    2. From the Actions list, select Compare Periods and click Go.

    The Compare Periods: Second Period Start page appears. Under First Period, the selected baseline is displayed.

  5. Compare the baseline selected in the first period to another baseline or a pair of snapshots. Do one of the following:
    • To compare to another baseline, select Select a Baseline and the baseline you want to use in the second period, and then click Next. The Compare Periods: Review page appears. Go to Step 7.
    • To compare to a pair of snapshots, select Select Beginning Snapshot and the beginning snapshot to use in the second period, and then click Next. The Compare Periods: Second Period End page appears. Proceed to the next step.
  6. Select the ending snapshot for the snapshot period that will be included in the report and then click Next.
  7. Review the periods to be included in the report and then click Finish.
  8. Click Report to view the report.
  9. One of the first things I look for is "Service Statistics".

To save the report as an HTML file, click Save to File.
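If you prefer SQL*Plus to the GUI, the packaged AWR Compare Periods report produces the same comparison; it prompts for both snapshot/baseline ranges when run:

SQL> @?/rdbms/admin/awrddrpt.sql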

Digging deep in Oracle Grid Repository for Information

Posted by Sagar Patil

How Does the Grid Repository Store Data?

The agents upload data into the MGMT_METRICS_RAW table, where it is retained for 7 days.
Raw data is aggregated by Grid Control, based on a predefined policy, into 1-hour records stored in MGMT_METRICS_1HOUR.
After 31 days (a month), the 1-hour records in MGMT_METRICS_1HOUR are further aggregated into 1-day records.
These 1-day records are stored in MGMT_METRICS_1DAY for 365 days (1 year) before being purged by Grid Control.

Modify these retention policies by updating the mgmt_parameters table in the OMR.
Table Name                   Retention Parameter                  Retention Days
MGMT_METRICS_RAW             mgmt_raw_keep_window                    7
MGMT_METRICS_1HOUR           mgmt_hour_keep_window                   31
MGMT_METRICS_1DAY            mgmt_day_keep_window                    365
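For example, to keep raw metric data for 14 days instead of 7, a sketch of the update (column names assumed from the SYSMAN.MGMT_PARAMETERS table; back up the repository before changing retention):

update sysman.mgmt_parameters
   set parameter_value = '14'
 where parameter_name = 'mgmt_raw_keep_window';
commit;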

To query data in the above tables, we need to know the TARGET_GUID and METRIC_GUID.
For easier access by "target name" or "metric name" we can use the three views MGMT$METRIC_DETAILS, MGMT$METRIC_HOURLY, and MGMT$METRIC_DAILY.
These views, owned by SYSMAN, are based on the three main tables but add columns like 'target_name' and 'metric_name' for easier reference.
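As an example of going through the views rather than the base tables, the sketch below pulls a host's daily CPU history from MGMT$METRIC_DAILY. The metric_name/metric_column pair 'Load'/'cpuUtil' is an assumption and can vary by target type and EM version:

select rollup_timestamp
     , average
     , maximum
  from mgmt$metric_daily
 where target_type   = 'host'
   and metric_name   = 'Load'      -- assumed metric group for host CPU
   and metric_column = 'cpuUtil'   -- assumed CPU utilisation column
 order by rollup_timestamp;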

The following SQL will return the definitions of MGMT$METRIC_DETAILS, MGMT$METRIC_HOURLY & MGMT$METRIC_DAILY

select VIEW_NAME
, TEXT_LENGTH
, TEXT
from dba_views
where OWNER = 'SYSMAN'
and VIEW_NAME in ( 'MGMT$METRIC_DETAILS'
, 'MGMT$METRIC_HOURLY'
, 'MGMT$METRIC_DAILY' );

Let's have a look at the attributes stored in these views which we could use for reporting.
The common attributes are "property_name" & "target_type".

select unique property_name
from mgmt$target_properties
order by 1;

———————————————
CRSHome
CRSSoftwareVersion
CRSVersion
CRSVersionCategory
ClusterName

DBDomain
DBName
DBVersion
InstanceName
SID

DataGuardStatus
DatabaseName
DatabaseType
DatabaseUser
DbBlockSize
OpenMode
OracleHome
StartTime
StatisticsLevel
background_dump_dest
core_dump_dest

ListenAddress
ListenPort
ListenerOraDir
LsnrName

HARDWARE_TYPE
Machine
MachineName
IP_address
CPUCount

OS
OS_patchlevel

RACInstNum
RACOption

select unique target_type
from mgmt$target
order by 1;

———————————————
cluster
composite
host
j2ee_application
metadata_repository
netapp_filer
oracle_csa_collector
oracle_database
oracle_emd
oracle_emrep
oracle_ias_farm
oracle_listener
rac_database
weblogic_domain
weblogic_j2eeserver

Let’s combine these 2 views to locate some interesting facts.

List Targets with TNS Listener ports configured :

SELECT mgmt$target.host_name
 , mgmt$target.target_name
 , mgmt$target.target_type
 , mgmt$target_properties.property_name
 , mgmt$target_properties.property_value
 FROM mgmt$target
 , mgmt$target_properties
 WHERE ( mgmt$target.target_name = mgmt$target_properties.target_name )
 AND ( mgmt$target.target_type = mgmt$target_properties.target_type )
 and ( mgmt$target.target_type = 'oracle_listener' )
 and ( mgmt$target_properties.property_name = 'Port' );

Devora02       LISTENER_ora02                       oracle_listener Port 1529
Devora01       LISTENER_ora01                       oracle_listener Port 1529
Devora04       LISTENER_ora04                       oracle_listener Port 1529

List Machine_Names, CPU Count & Database Version for Licensing

SELECT mgmt$target.host_name
 , mgmt$target_properties.property_name
 , mgmt$target_properties.property_value
 FROM mgmt$target
 , mgmt$target_properties
 WHERE ( mgmt$target.target_name = mgmt$target_properties.target_name )
 AND ( mgmt$target.target_type = mgmt$target_properties.target_type )
 AND ( mgmt$target_properties.property_name in ( 'CPUCount','DBVersion' ) )
 GROUP BY mgmt$target.host_name
 , mgmt$target_properties.property_name
 , mgmt$target_properties.property_value
 order by mgmt$target.host_name;

List Dataguard Instances mounted in APPLY mode

SELECT mgmt$target.host_name
 , mgmt$target.target_name
 , mgmt$target.target_type
 , mgmt$target_properties.property_name
 , mgmt$target_properties.property_value
 FROM mgmt$target
 , mgmt$target_properties
 WHERE ( mgmt$target.target_name = mgmt$target_properties.target_name )
 AND ( mgmt$target.target_type = mgmt$target_properties.target_type )
 and ( mgmt$target.target_type = 'oracle_database' )
 and ( mgmt$target_properties.property_name = 'OpenMode' )
 and PROPERTY_VALUE like 'READ%ONLY%WITH%APPLY%';

List RAC databases and their attributes like ClusterName and DataGuardStatus.
Change the "property_name" values to suit your needs.

SELECT mgmt$target.host_name
 , mgmt$target.target_name
 , mgmt$target.target_type
 , mgmt$target_properties.property_name
 , mgmt$target_properties.property_value
 FROM mgmt$target
 , mgmt$target_properties
 WHERE ( mgmt$target.target_name = mgmt$target_properties.target_name )
 AND ( mgmt$target.target_type = mgmt$target_properties.target_type )
 and ( mgmt$target.target_type = 'rac_database' )
 and ( mgmt$target_properties.property_name in ( 'RACOption'
 , 'DBName'
 , 'DBDomain'
 , 'DBVersion'
 , 'ClusterName'
 , 'DataGuardStatus'
 , 'MachineName'
 , 'Role'
 , 'SID' ) )
 order by mgmt$target.host_name, mgmt$target.target_name,
mgmt$target_properties.property_name;  

RACNode1        DEV   rac_database ClusterName     crs1                          
RACNode1        DEV   rac_database DBDomain        oracledbasupport.co.uk                
RACNode1        DEV   rac_database DBName          DEV                         
RACNode1        DEV   rac_database DBVersion       10.2.0.3.0                    
RACNode1        DEV   rac_database DataGuardStatus                               
RACNode1        DEV   rac_database MachineName     RAC01-vip
RACNode1        DEV   rac_database RACOption       YES                           
RACNode1        DEV   rac_database Role            NORMAL                        
RACNode1        DEV   rac_database SID             DEV1                        
RACDGNode1       PROD rac_database ClusterName     RACDGNodecrs                        
RACDGNode1       PROD rac_database DBDomain        oracledbasupport.co.uk                      
RACDGNode1       PROD rac_database DBName          PROD                              
RACDGNode1       PROD rac_database DBVersion       11.2.0.2.0                          
RACDGNode1       PROD rac_database DataGuardStatus Physical Standby                    
RACDGNode1       PROD rac_database MachineName     RACDGNode-vip
RACDGNode1       PROD rac_database RACOption       YES                                 
RACDGNode1       PROD rac_database Role            NORMAL                              
RACDGNode1       PROD rac_database SID             PROD2

If you are having a performance hit on the Grid database, use the following SQL to locate the most active segments.
You can then consider archiving Grid data, or moving these segments onto faster spindles.

select sum ( B.logical_reads_total )
 , sum ( B.physical_reads_total )
 , sum ( B.physical_writes_total )
 , sum ( buffer_busy_waits_total )
 , c.object_name
 , c.owner
 from DBA_HIST_SNAPSHOT A
 , DBA_HIST_SEG_STAT B
 , dba_objects C
 where A.Snap_id = b.snap_id
 and c.object_id = b.obj#
 and A.BEGIN_INTERVAL_TIME >= to_Date ( '17-May-2011 08:00'
 , 'DD-MON-YYYY HH24:MI' )
 and A.END_INTERVAL_TIME <= to_Date ( '17-May-2011 12:00'
 , 'DD-MON-YYYY HH24:MI' )
 group by c.object_name
 , c.owner
 order by 1 desc;

Change the ORDER BY column position:
1: for the most-read segments
3: for the most writes on a segment
4: for buffer busy waits

SQL to report Oracle init parameters for a Target database

  SELECT   target_name,
           target_type,
           name,
           VALUE
    FROM   MGMT$ECM_VISIBLE_SNAPSHOTS A, SYSMAN.MGMT_DB_INIT_PARAMS_ECM B
   WHERE       A.ECM_SNAPSHOT_ID = B.ECM_SNAPSHOT_ID
              AND TARGET_TYPE = 'rac_database'  -- Choose TARGET_TYPE
           AND name LIKE 'remote_listener%'     -- Look for a relevant Parameter
GROUP BY   target_name,
           target_type,
           name,
           VALUE
ORDER BY   Target_name, name ;

The following SQL will report all database target details monitored through Grid

select t.host_name
 as     host
 , ip.property_value IP
 , t.target_name
 as     name
 , decode ( t.type_qualifier4
 , ' '
 , 'Normal'
 , t.type_qualifier4 )
 as     type
 , dbv.property_value
 as     version
 , port.property_value port
 , SID.property_value SID
 , logmode.property_value
 as     "Log Mode"
 , oh.property_value
 as     "Oracle Home"
 from mgmt$target t
 , ( select p.target_guid
 , p.property_value
 from mgmt$target_properties p
 where p.property_name = 'DBVersion' ) dbv
 , ( select p.target_guid
 , p.property_value
 from mgmt$target_properties p
 where p.property_name = 'Port' ) port
 , ( select p.target_guid
 , p.property_value
 from mgmt$target_properties p
 where p.property_name = 'SID' ) sid
 , ( select p.target_guid
 , p.property_value
 from mgmt$target_properties p
 where p.property_name = 'log_archive_mode' ) logmode
 , ( select p.target_guid
 , p.property_value
 from mgmt$target_properties p
 where p.property_name = 'OracleHome' ) oh
 , ( select tp.target_name
 as     host_name
 , tp.property_value
 from mgmt$target_properties tp
 where tp.target_type = 'host'
 and tp.property_name = 'IP_address' ) ip
 where t.target_guid = port.target_guid
 and port.target_guid = sid.target_guid
 and sid.target_guid = dbv.target_guid
 and dbv.target_guid = logmode.target_guid
 and logmode.target_guid = oh.target_guid
 and t.host_name = ip.host_name
 order by 1, 3;
