Streams | ORA-24082 while disabling a propagation job

Posted by Sagar Patil

The ORA-24082 error occurs when a message has been enqueued onto the AQ$_PROP_NOTIFY_<N> queue to tell the job queue process associated with the propagation schedule to stop propagating messages, and that job queue process has failed to acknowledge the message within a reasonable amount of time.

dbms_aqadm.disable_propagation_schedule(
  queue_name => 'STRMADMIN.SCOTT_PROPAGATION_Q',
  destination => '${DEST_DB}',
  destination_queue => 'STRMADMIN.SCOTT_PROPAGATION_Q');

stopping SCOTT_PROPAGATION
BEGIN
*
ERROR at line 1:
ORA-24082: propagation may still be happening for the schedule for QUEUE
STRMADMIN.SCOTT_C_Q and DESTINATION STREAMS
ORA-06512: at line 11

Use stop_propagation with the force option:

select propagation_name, status from dba_propagation where propagation_name='SCOTT_P';

PROPAGATION_NAME                   STATUS
---------------------------------- --------
SCOTT_P                            ENABLED

SQL> execute dbms_propagation_adm.stop_propagation(propagation_name=>'SCOTT_P',force=>TRUE);
PL/SQL procedure successfully completed.

select propagation_name, status from dba_propagation where propagation_name='SCOTT_P';

PROPAGATION_NAME                   STATUS
---------------------------------- --------
SCOTT_P                            ABORTED

If you have a number of propagation processes to stop within a database, try the bash script below.

#!/bin/bash
# This script connects to each database defined in the array "SIDS",
# disables the capture processes, and
# disables the propagation processes.
#
# To run the SQL commands against ALL instances active on the box, use the line below:
# typeset -a SIDS=($(ps -ef | grep pmon | grep -v grep | awk -F_ '{print $NF}'))
#
# Connect to specific databases defined as SIDS=(test1 test2 test3)
typeset -a SIDS=(ORCL)

for x in $(seq 0 $((${#SIDS[*]}-1)))
do
  export ORACLE_SID=${SIDS[x]}
  export ORACLE_HOME=$(awk -F: "/^${ORACLE_SID}:/ {print \$2}" /etc/oratab)
  typeset -a UIDS=($(
  $ORACLE_HOME/bin/sqlplus -s strmadmin/strmadmin <<EOF | grep -v "SQL>" | grep "M_A_R_K" | awk '{print $1}'
set feedback off
set pages 0
set hea off
select capture_name, 'M_A_R_K' from sys.streams\$_capture_process where status = 1 order by 1;
quit
EOF
  ))
  for ID in ${UIDS[*]}
  do
    #
    # DEST_DB -- set this variable to the target database.
    # If your destination_queue name is different, change the value accordingly.
    #
    $ORACLE_HOME/bin/sqlplus -s strmadmin/strmadmin <<EOF
set pages 0;
set feedback off;
set linesize 200;
BEGIN
  dbms_aqadm.disable_propagation_schedule(
    queue_name => 'STRMADMIN.${ID}_Q',
    destination => '${DEST_DB}',
    destination_queue => 'STRMADMIN.${ID}_Q');
EXCEPTION
  WHEN OTHERS THEN
    IF sqlcode = -24082 THEN
      dbms_propagation_adm.stop_propagation(propagation_name => '${ID}', force => TRUE);
    ELSIF sqlcode = -24064 THEN
      NULL;
    ELSE
      RAISE;
    END IF;
END;
/
exit;
EOF
    echo "Proceed/Cancel (P/C)?"
    read A
    [[ $A != "p" && $A != "P" ]] && exit
    # "break" will skip the remaining capture names for this SID and continue with the next SID.
    # To skip everything (the remaining names for this SID as well as the remaining SIDs), replace "break" with "exit".
  done
done

Streams – Capture Process States

Posted by Sagar Patil

The state of a capture process describes what the capture process is doing currently. You can view the state of a capture process by querying the STATE column in the V$STREAMS_CAPTURE dynamic performance view.
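For instance, the state and the time it last changed can be checked with a query along these lines (STATE and STATE_CHANGED_TIME are columns of V$STREAMS_CAPTURE):

```sql
-- Show what each capture process is currently doing and when its state last changed
SELECT capture_name,
       state,
       TO_CHAR(state_changed_time, 'HH24:MI:SS MM/DD/YY') state_changed
  FROM v$streams_capture;
```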


Useful Tables and Views, for Troubleshooting Streams Issues

Posted by Sagar Patil
CAPTURE PROCESS
  streams$_capture_process      Lists all defined capture processes
  dba_capture                   Basic status, error info
  v$streams_capture             Detailed status info
  dba_capture_parameters        Configuration information

PROPAGATE PROCESS
  streams$_propagation_process  Lists all defined propagate processes
  dba_propagation               Basic status, error info
  v$propagation_sender          Detailed status info
  v$propagation_receiver        Detailed status info

APPLY PROCESS
  streams$_apply_process        Lists all defined apply processes
  dba_apply                     Basic status, error info
  v$streams_apply_reader        Status of the apply reader
  v$streams_apply_server        Status of apply server(s)
  v$streams_apply_coordinator   Overall status, latency info
  dba_apply_parameters          Configuration information

MISCELLANEOUS TABLES
  v$buffered_queues                   Displays the current and cumulative number of messages enqueued and spilled for each buffered queue
  sys.streams$_apply_spill_msgs_part  Table the apply process uses to "spill" messages from large transactions to disk
  system.logmnr_restart_ckpt$         Table that holds capture process "checkpoint" information
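As a sketch of how these views are used, the spill behaviour described for v$buffered_queues can be checked with a query like the following (column names per the standard view definition):

```sql
-- Current vs cumulative message and spill counts per buffered queue
SELECT queue_schema,
       queue_name,
       num_msgs,      -- messages currently in the buffered queue
       spill_msgs,    -- messages currently spilled to disk
       cnum_msgs,     -- cumulative messages enqueued
       cspill_msgs    -- cumulative messages spilled
  FROM v$buffered_queues;
```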

Identify Issues and Fix Streams Capture Process

Posted by Sagar Patil

My capture, propagation and apply processes were working fine until the server failed over. After a restart, even though the capture processes were enabled, they stalled at status "INITIALIZING". The propagation and apply processes were working fine.

The alert log didn't show any Streams error messages, only process start details when the capture processes were enabled.

Streams CAPTURE C004 started with pid=74, OS id=23523
Thu Oct 21 13:07:18 2010
Streams CAPTURE C019 started with pid=84, OS id=27340
Thu Oct 21 13:27:37 2010
Streams CAPTURE C016 started with pid=81, OS id=5580
Thu Oct 21 13:27:50 2010
Streams CAPTURE C014 started with pid=56, OS id=5653
Thu Oct 21 13:27:59 2010
Streams CAPTURE C013 started with pid=68, OS id=5759
Thu Oct 21 13:28:11 2010
Streams CAPTURE C011 started with pid=86, OS id=5881
Thu Oct 21 13:28:35 2010
Streams CAPTURE C007 started with pid=87, OS id=6068
Thu Oct 21 13:28:48 2010
Streams CAPTURE C003 started with pid=88, OS id=6149
Thu Oct 21 13:41:41 2010

Find the captured SCN numbers and then locate the log files needed:

select min(start_scn), min(applied_scn)     from dba_capture;

Find the Archivelog names by using the SCN

SELECT   name,
thread#,
sequence#,
status,
first_time,
next_time,
first_change#,
next_change#
FROM   v$archived_log
WHERE   111694632468 BETWEEN first_change# AND next_change#;

If the STATUS column in the above query returns D (deleted), restore the archived log.

The following query can be used to determine the oldest archived log that will need to be read during the next restart of a capture process.

SELECT a.sequence#, b.name
  FROM v$log_history a, v$archived_log b
 WHERE a.first_change# <= (SELECT required_checkpoint_scn
                             FROM dba_capture
                            WHERE capture_name = 'SCOTT_C')
   AND a.next_change# > (SELECT required_checkpoint_scn
                           FROM dba_capture
                          WHERE capture_name = 'SCOTT_C')
   AND a.sequence# = b.sequence#(+);

If needed, restore the archivelog files using:

$RMAN target / ${CATALOG} log=$LOG_FILE append << EOF
RUN {
# Restore archived logs
allocate channel t1 type disk;
set archivelog destination to '/mnt/logs/oradata/TEST/arch';
# restore archivelog from logseq=9970 until logseq=9984 thread=1;  # use if you know the SEQ numbers needed (select * from V\$ARCHIVED_LOG)
restore archivelog from time 'SYSDATE-3' until time 'SYSDATE-1';   # use if you want to restore day-wise
release channel t1;
}
exit;
EOF

After the log restore, I stopped and restarted the capture processes and everything worked fine.

Streams/RAC and Database Link Problem

Posted by Sagar Patil

While setting up Streams on 10g RAC, everything went well, but the source database propagation process would not work with the target database "strmrepl1" and returned a TNS-12514 error.

Propagation Schedule for (STRMADMIN.SCOTT_C_Q, “STRMADMIN”.”SCOTT_A_Q”@strmrepl) encountered following error:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Strangely, I could see the db links working fine from source (streams) to destination (strmrepl) and vice versa. The DB links worked manually under TOAD and SQL*Plus, but through the Streams propagation process they would not.

I finally hit on the solution when I ran "lsnrctl status LISTENER"
and saw there were 2 instances registered for the same service name:

[Streams]$ lsnrctl status LISTENER | grep str
Service "replication" has 2 instance(s).
Instance "streams1", status READY, has 2 handler(s) for this service...
Instance "strmrepl1", status READY, has 2 handler(s) for this service...

Running SHOW PARAMETER service_names on the database listed both instance details, which seemed wrongly set.

Solution: set the right value for service_names at the database instance.

[oracle@]$ srvctl config service -d streams     -- Source DB
strmtesting PREF: streams1 AVAIL:

[oracle@]$ srvctl config service -d strmrepl    -- Target DB
replication PREF: strmrepl1 AVAIL:

SQL> alter system set service_names='replication';
$ lsnrctl reload LISTENER

[Streams]$ lsnrctl status LISTENER | grep str
Service "replication" has 1 instance(s).
Instance "strmrepl1", status READY, has 2 handler(s) for this service...

Streams Performance Data Collection Tools for 10g & 11g

Posted by Sagar Patil

There are different ways to collect performance-related data:

  • Install Automatic Workload Repository (AWR)/Active Session History (ASH). Take AWR snapshots every hour or at the desired interval.
  • Install and configure the STRMMON utility if you have 10gR2 (not available in 11g). Use Metalink Article 290605.1.
  • Use UTL_SPADV as an alternative to STRMMON for 11gR1 and later.
  • Run the Streams Healthcheck at each database before, during, and after the performance period of interest; available from Metalink Article 273674.1.
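For the AWR option above, a snapshot can also be taken manually at any time with the standard DBMS_WORKLOAD_REPOSITORY call:

```sql
-- Take an AWR snapshot now; repeat hourly around the period of interest
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
```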

Use AWR with STRMMON (10g)/UTL_SPADV(11g) during performance period of interest

10g : STRMMON Utility

  • Install and configure STRMMON (10gR2) using Metalink Article 290605.1.
  • Collect STRMMON information regularly, at least once every hour, to align with AWR snapshots.

Here is a STRMMON command-line example to collect 10 minutes of output from the source and target databases in a single line
(10-second interval x 60 iterations = 600 seconds, or 10 minutes):

$ORACLE_HOME/demo/strmmon -interval 10 -count 60 -user sys -passw <sys_password> -dbname <source database> -sysdba \
-user sys -passw <sys_password> -dbname <target database> -sysdba > strmmon.log

11g : Using the Streams Performance Advisor, i.e. the UTL_SPADV package (replacement for STRMMON)

Install this optional package under the Streams admin user by compiling SQL> @?/rdbms/admin/utlspadv.sql

How to run UTL_SPADV?
The COLLECT_STATS procedure will wait until the interval and count are satisfied. Default interval = 60 sec; default count = 10 iterations.
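A minimal run might look like this, as a sketch; the interval and num_runs parameter names are per the 11g UTL_SPADV documentation:

```sql
-- Collect 10 one-minute samples of Streams performance statistics
EXEC UTL_SPADV.COLLECT_STATS(interval => 60, num_runs => 10);

-- Then print the collected per-component statistics
SET SERVEROUTPUT ON
EXEC UTL_SPADV.SHOW_STATS();
```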

Setting up Bidirectional Streams SCHEMA Replication

Posted by Sagar Patil

Details below are for configuring schema-level replication using the API DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS.

1. Verify Source & Target System for Streams Configuration

CONNECT system/manager@ {STREAMS/STRMREPL}
SET SQLPROMPT "STREAMS> "
set pages 0;
set linesize 2000;
SELECT * FROM global_name; 
SHOW PARAMETER service_names
SHOW PARAMETER global_names

-- Verify database parameters
col name HEADING 'Parameter|Name' format a30
col value HEADING 'Parameter|Value' format a15
col description HEADING 'Description' format a60 word
select name,value,description from v$parameter where name in
 ('aq_tm_processes', 'archive_lag_target', 'job_queue_processes','_job_queue_interval',
 'shared_pool_size', 'sga_max_size', 'sga_target','streams_pool_size',
 'global_names', 'compatible','log_parallelism', 'logmnr_max_persistent_sessions',
 'parallel_max_servers', 'processes', 'sessions'
 );

-- Verify ARCHIVELOG mode and archive log settings
ARCHIVE LOG LIST
SHOW PARAMETER log_archive_format
SHOW PARAMETER log_archive_dest_1
SHOW PARAMETER log_archive_dest_state_1

-- Verify Streams administrator
SELECT username, account_status, default_tablespace
 FROM DBA_USERS
 WHERE username = 'STRMADMIN';
SELECT *
 FROM DBA_STREAMS_ADMINISTRATOR;

-- Verify database link as Streams administrator
CONNECT strmadmin/strmadmin@STREAMS
SELECT db_link
 FROM DBA_DB_LINKS;

-- Verify directory objects
SELECT directory_name, directory_path
 FROM   dba_directories;

2. Setting Up SCOTT Schema Replication

STREAMS>>conn scott/tiger
Connected.
STREAMS>>select * from tab;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
DEPT                           TABLE
EMP                            TABLE
BONUS                          TABLE
SALGRADE                       TABLE
SALES                          TABLE
COSTS                          TABLE
TIMES                          TABLE
CHANNELS                       TABLE
PROMOTIONS                     TABLE
COUNTRIES                      TABLE
PRODUCTS                       TABLE
CATEGORIES_TAB                 TABLE
PRODUCT_REF_LIST_NESTEDTAB     TABLE
SUBCATEGORY_REF_LIST_NESTEDTAB TABLE
CUSTOMERS                      TABLE
EMP_REPL                       TABLE
16 rows selected.

STRMREPL>>conn scott/tiger
Connected.
STRMREPL>>select * from tab;
no rows selected

There are no tables existing at destination schema : SCOTT@STRMREPL

set serveroutput on;
set term on;
set lines 200;
BEGIN
 DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
 schema_names => 'SCOTT',
 source_directory_object => 'STREAMS_DIR',
 destination_directory_object => 'STRMREPL_DIR',
 source_database => 'STREAMS',
 destination_database => 'STRMREPL',
 perform_actions => TRUE,
 dump_file_name => 'scott_exp.dmp',
 capture_name => 'SCOTT_CAP',
 capture_queue_table => 'SCOTT_SCHEMA_CAPTURE_QT',
 capture_queue_name => 'SCOTT_SCHEMA_CAPTURE_Q',
 propagation_name => 'SCOTT_SCHEMA_PROPAGATION',
 apply_name => 'SCOTT_SCHEMA_APPLY',
 apply_queue_table => 'strmadmin.SCOTT_SCHEMA_APPLY_QT',
 apply_queue_name => 'strmadmin.SCOTT_SCHEMA_APPLY_Q',
 bi_directional=> TRUE,
 include_ddl => FALSE,
 instantiation=>DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA);
 END;
 /
job finished
PL/SQL procedure successfully completed.

This can take a long time depending on the schema to be replicated.
To monitor progress, have a look at the file sizes of the expdp dump "scott_exp.dmp" at STREAMS_DIR & STRMREPL_DIR.

Select from dba_recoverable_script_errors to verify there were no errors:
STREAMS>> select error_message,block_num from dba_recoverable_script_errors;

STRMREPL>>conn scott/tiger
Connected.
STRMREPL>>select * from tab;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
DEPT                           TABLE
EMP                            TABLE
BONUS                          TABLE
SALGRADE                       TABLE
SALES                          TABLE
COSTS                          TABLE
TIMES                          TABLE
CHANNELS                       TABLE
PROMOTIONS                     TABLE
COUNTRIES                      TABLE
PRODUCTS                       TABLE
CATEGORIES_TAB                 TABLE
PRODUCT_REF_LIST_NESTEDTAB     TABLE
SUBCATEGORY_REF_LIST_NESTEDTAB TABLE
CUSTOMERS                      TABLE
EMP_REPL                       TABLE
16 rows selected.

3. Let's verify DML changes executed in STREAMS are propagated to the STRMREPL database

STREAMS>>delete from scott.promotions;
2 rows deleted.
STREAMS>>commit;
Commit complete.
STREAMS>>select count(*) from scott.promotions;
 COUNT(*)
----------
 0

At replicated Site 
STRMREPL>>select count(*) from scott.promotions;
 COUNT(*)
----------
 0

Note that DDL changes are not replicated, as I set "include_ddl => FALSE" in "MAINTAIN_SCHEMAS".

STREAMS>>drop table scott.emp;
Table dropped.
STREAMS>>desc scott.emp;
ERROR:
ORA-04043: object scott.emp does not exist

STRMREPL>>desc scott.emp;
 Name                                                                                                    Null?    Type
 ----------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------
 EMPNO                                                                                                   NOT NULL NUMBER(4)
 ENAME                                                                                                    VARCHAR2(10)
 JOB                                                                                                      VARCHAR2(9)
 MGR                                                                                                      NUMBER(4)
 HIREDATE                                                                                                 DATE
 SAL                                                                                                      NUMBER(7,2)
 COMM                                                                                                     NUMBER(7,2)
 DEPTNO                                                                                                   NUMBER(2)

Issues Noticed

STREAMS>>select error_message,block_num from dba_recoverable_script_errors;

ERROR_MESSAGE
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 BLOCK_NUM
----------
ORA-24035: AQ agent  is not a subscriber for queue STRMADMIN.SCOTT_SCHEMA_CAPTURE_Q

I fixed it by deleting rows from the following two tables.

SQL> conn / as sysdba
Connected.
STREAMS>> delete from dba_recoverable_script_errors;
STREAMS>> delete from dba_recoverable_script;
STREAMS>> commit;

If needed, you can delete all Streams configuration details except db_links and parameters using the APIs below.

A) Disable propagation schedule
STREAMS>> exec dbms_aqadm.disable_propagation_schedule(queue_name => 'SCOTT_SCHEMA_CAPTURE_Q',destination => 'STRMREPL',destination_queue => 'strmadmin.SCOTT_SCHEMA_APPLY_Q');

B) Drop propagation
STREAMS>> exec dbms_propagation_adm.drop_propagation(propagation_name => 'SCOTT_SCHEMA_PROPAGATION',drop_unused_rule_sets => true);
PL/SQL procedure successfully completed.

C) Remove Streams
STREAMS>> exec  dbms_streams_adm.remove_streams_configuration;
PL/SQL procedure successfully completed.

D) Drop Queues for capture and apply
-- drop capture queue
STREAMS>>exec  DBMS_STREAMS_ADM.REMOVE_QUEUE('SCOTT_SCHEMA_CAPTURE_Q',TRUE);
PL/SQL procedure successfully completed.

-- drop apply queue
STREAMS>>Exec DBMS_STREAMS_ADM.REMOVE_QUEUE('strmadmin.SCOTT_SCHEMA_APPLY_Q',TRUE);
PL/SQL procedure successfully completed.

Finally delete any STREAMs errors reported
STREAMS>> delete from dba_recoverable_script_errors;
STREAMS>> delete from dba_recoverable_script;
STREAMS>> commit;

Streams Replication| Monitoring Streams Capture Processes

Posted by Sagar Patil

A capture process is an optional Streams client: an Oracle background process that scans the database redo log to capture DML and DDL changes made to database objects.

1)  Displaying the Queue, Rule Sets, and Status of Each Capture Process
2)  Displaying Change Capture Information About Each Capture Process
3)  Displaying State Change and Message Creation Time for Each Capture Process
4)  Displaying Elapsed Time Performing Capture Operations for Each Capture Process
5)  Displaying Information About Each Downstream Capture Process
6)  Displaying the Registered Redo Log Files for Each Capture Process
7)  Displaying the Redo Log Files that Are Required by Each Capture Process
8)  Displaying SCN Values for Each Redo Log File Used by Each Capture Process
9)  Displaying the Last Archived Redo Entry Available to Each Capture Process
10) Listing the Parameter Settings for Each Capture Process
11) Viewing the Extra Attributes Captured by Each Capture Process
12) Determining the Applied SCN for All Capture Processes in a Database
13) Determining Redo Log Scanning Latency for Each Capture Process
14) Determining Message Enqueuing Latency for Each Capture Process
15) Displaying Information About Rule Evaluations for Each Capture Process

1)    Displaying the Queue, Rule Sets, and Status of Each Capture Process
To display this general information about each capture process in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Capture|Process|Queue' FORMAT A15
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A15
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A15
COLUMN STATUS HEADING 'Capture|Process|Status' FORMAT A15

SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME, NEGATIVE_RULE_SET_NAME, STATUS
FROM DBA_CAPTURE;

Capture         Capture                                         Capture
Process         Process         Positive        Negative        Process
Name            Queue           Rule Set        Rule Set        Status
--------------- --------------- --------------- --------------- ---------------
STREAMS_DATA_C STREAMS_DATA_C RULESET$_54                     ENABLED
 _Q

If the status of a capture process is ABORTED, then you can query the ERROR_NUMBER and ERROR_MESSAGE columns in the DBA_CAPTURE data dictionary view to determine the error.
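When the status shows ABORTED, the failure reason can be pulled straight from the same view:

```sql
-- Show why an aborted capture process stopped
SELECT capture_name, error_number, error_message
  FROM dba_capture
 WHERE status = 'ABORTED';
```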

2)    Displaying Change Capture Information About Each Capture Process
The query in this section displays the following information about each capture process in a database:
The current state of the capture process: INITIALIZING, WAITING FOR DICTIONARY REDO, DICTIONARY INITIALIZATION, MINING, LOADING, CAPTURING CHANGES, WAITING FOR REDO, EVALUATING RULE, CREATING LCR, ENQUEUING MESSAGE, PAUSED FOR FLOW CONTROL, SHUTTING DOWN.
The total number of redo entries passed by LogMiner to the capture process for detailed rule evaluation. A capture process converts a redo entry into a message and performs detailed rule evaluation on the message when capture-process prefiltering cannot discard the change.
COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
COLUMN PROCESS_NAME HEADING 'Capture|Process|Number' FORMAT A7
COLUMN SID HEADING 'Session|ID' FORMAT 9999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 9999
COLUMN STATE HEADING 'State' FORMAT A27
COLUMN TOTAL_MESSAGES_CAPTURED HEADING 'Redo|Entries|Evaluated|In Detail' FORMAT 9999999
COLUMN TOTAL_MESSAGES_ENQUEUED HEADING 'Total|LCRs|Enqueued' FORMAT 999999

SELECT c.CAPTURE_NAME,
 SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
 c.SID,
 c.SERIAL#,
 c.STATE,
 c.TOTAL_MESSAGES_CAPTURED,
 c.TOTAL_MESSAGES_ENQUEUED
 FROM V$STREAMS_CAPTURE c, V$SESSION s
 WHERE c.SID = s.SID AND
 c.SERIAL# = s.SERIAL#;
                Capture Session Session                                  Redo    Total
Capture         Process      ID  Serial                               Entries     LCRs
Name            Number           Number State                       Evaluated Enqueued
--------------- ------- ------- ------- --------------------------- --------- --------
STREAMS_DATA_C  CP01        217    2119 WAITING FOR TRANSACTION      ########  #######

3)    Displaying State Change and Message Creation Time for Each Capture Process

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
COLUMN STATE HEADING 'State' FORMAT A27
COLUMN STATE_CHANGED HEADING 'State|Change Time'
COLUMN CREATE_MESSAGE HEADING 'Last Message|Create Time'

SELECT CAPTURE_NAME,
 STATE,
 TO_CHAR(STATE_CHANGED_TIME, 'HH24:MI:SS MM/DD/YY') STATE_CHANGED,
 TO_CHAR(CAPTURE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_MESSAGE
 FROM V$STREAMS_CAPTURE;

Capture                                     State             Last Message
Name            State                       Change Time       Create Time
--------------- --------------------------- ----------------- -----------------
STREAMS_DATA_C WAITING FOR TRANSACTION     13:03:11 01/26/10 13:03:13 01/26/10

4)    Displaying Elapsed Time Performing Capture Operations for Each Capture Process
To display this information for each capture process in a database, run the following query:

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
COLUMN ELAPSED_CAPTURE_TIME HEADING 'Elapsed|Capture|Time' FORMAT 99999999.99
COLUMN ELAPSED_RULE_TIME HEADING 'Elapsed|Rule|Evaluation|Time' FORMAT 99999999.99
COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Elapsed|Enqueue|Time' FORMAT 99999999.99
COLUMN ELAPSED_LCR_TIME HEADING 'Elapsed|LCR|Creation|Time' FORMAT 99999999.99
COLUMN ELAPSED_PAUSE_TIME HEADING 'Elapsed|Pause|Time' FORMAT 99999999.99

SELECT CAPTURE_NAME,
 (ELAPSED_CAPTURE_TIME/100) ELAPSED_CAPTURE_TIME,
 (ELAPSED_RULE_TIME/100) ELAPSED_RULE_TIME,
 (ELAPSED_ENQUEUE_TIME/100) ELAPSED_ENQUEUE_TIME,
 (ELAPSED_LCR_TIME/100) ELAPSED_LCR_TIME,
 (ELAPSED_PAUSE_TIME/100) ELAPSED_PAUSE_TIME
 FROM V$STREAMS_CAPTURE;

Elapsed                   Elapsed
 Elapsed         Rule      Elapsed          LCR      Elapsed
Capture              Capture   Evaluation      Enqueue     Creation        Pause
Name                    Time         Time         Time         Time         Time
--------------- ------------ ------------ ------------ ------------ ------------
STREAMS_DATA_C       279.21          .26          .43          .26          .00

5)    Displaying Information About Each Downstream Capture Process
A downstream capture process is a capture process that runs on a database other than the source database.
You can display the following information about each downstream capture process in a database by running the query in this section:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Capture|Process|Queue' FORMAT A15
COLUMN STATUS HEADING 'Capture|Process|Status' FORMAT A15
COLUMN USE_DATABASE_LINK HEADING 'Uses|Database|Link?' FORMAT A8

SELECT CAPTURE_NAME,
 SOURCE_DATABASE,
 QUEUE_NAME,
 STATUS,
 USE_DATABASE_LINK
 FROM DBA_CAPTURE
 WHERE CAPTURE_TYPE = 'DOWNSTREAM';

6)    Displaying the Registered Redo Log Files for Each Capture Process
This query displays information about these files for both local capture processes and downstream capture processes.

COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 99999
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A20
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' FORMAT A10
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' FORMAT A10

SELECT r.CONSUMER_NAME,
 r.SOURCE_DATABASE,
 r.SEQUENCE#,
 r.NAME,
 r.DICTIONARY_BEGIN,
 r.DICTIONARY_END
 FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
 WHERE r.CONSUMER_NAME = c.CAPTURE_NAME;

Capture                                                  Dictionary Dictionary
Process         Source     Sequence Archived Redo Log    Build      Build
Name            Database     Number File Name            Begin      End
--------------- ---------- -------- -------------------- ---------- ----------
STREAMS_DATA_C   PROD         7175 /mnt/logs/oradata/   NO         NO
 PROD/arch/1_7175_732278995.arc

STREAMS_DATA_C   PROD         7176 /mnt/logs/oradata/   NO         NO
 PROD/arch/1_7176_732278995.arc

STREAMS_DATA_C   PROD         7177 /mnt/logs/oradata/   NO         NO
 PROD/arch/1_7177_732278995.arc

7)    Displaying the Redo Log Files that Are Required by Each Capture Process
A capture process needs the redo log file that includes the required checkpoint SCN, and all subsequent redo log files.
You can query the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view to determine the required checkpoint SCN for a capture process. Redo log files prior to the redo log file that contains the required checkpoint SCN are no longer needed by the capture process. These redo log files can be stored offline if they are no longer needed for any other purpose. If you reset the start SCN for a capture process to a lower value in the future, then these redo log files might be needed.
To display this information about each required archive redo log file in a database, run the following query:

COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 99999
COLUMN NAME HEADING 'Required|Archived Redo Log|File Name' FORMAT A40

SELECT r.CONSUMER_NAME,
 r.SOURCE_DATABASE,
 r.SEQUENCE#,
 r.NAME
 FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
 WHERE r.CONSUMER_NAME =  c.CAPTURE_NAME AND
 r.NEXT_SCN      >= c.REQUIRED_CHECKPOINT_SCN;

Capture                             Required
Process         Source     Sequence Archived Redo Log
Name            Database     Number File Name
--------------- ---------- -------- ----------------------------------------
STREAMS_DATA_C PROD        13418 /mnt/logs/oradata/PROD/arch/2_13418_732278995.arc
STREAMS_DATA_C PROD        10762 /mnt/logs/oradata/PROD/arch/1_10762_732278995.arc

8)    Displaying SCN Values for Each Redo Log File Used by Each Capture Process:

You can display information about the SCN values for archived redo log files that are registered for each capture process in a database by running the query in this section.
The query shows these SCN values for both local capture processes and downstream capture processes, and also identifies redo log files that are no longer needed by any capture process at the local database.
To display this information about each registered archive redo log file in a database, run the following query:

COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A25
COLUMN FIRST_SCN HEADING 'First SCN' FORMAT 99999999999
COLUMN NEXT_SCN HEADING 'Next SCN' FORMAT 99999999999
COLUMN PURGEABLE HEADING 'Purgeable?' FORMAT A10

SELECT r.CONSUMER_NAME,
 r.NAME,
 r.FIRST_SCN,
 r.NEXT_SCN,
 r.PURGEABLE
 FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
 WHERE r.CONSUMER_NAME = c.CAPTURE_NAME;

Capture
Process         Archived Redo Log
Name            File Name                    First SCN     Next SCN Purgeable?
--------------- ------------------------- ------------ ------------ ----------
STREAMS_DATA_C /mnt/logs/oradata/PROD/           7499664298   7499908128 NO
 arch/1_7175_732278995.arc

STREAMS_DATA_C /mnt/logs/oradata/PROD/           7499908128   7500009109 NO
 arch/1_7176_732278995.arc

9) Displaying the Last Archived Redo Entry Available to Each Capture Process
For a local capture process, the last archived redo entry available is the last entry from the online redo log flushed to an archived log file.
For a downstream capture process, the last archived redo entry available is the redo entry with the most recent SCN in the last archived log file added to the LogMiner session used by the capture process.

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A20
COLUMN LOGMINER_ID HEADING 'LogMiner ID' FORMAT 9999
COLUMN AVAILABLE_MESSAGE_NUMBER HEADING 'Last Redo SCN' FORMAT 9999999999
COLUMN AVAILABLE_MESSAGE_CREATE_TIME HEADING 'Time of|Last Redo SCN'

SELECT CAPTURE_NAME,
 LOGMINER_ID,
 AVAILABLE_MESSAGE_NUMBER,
 TO_CHAR(AVAILABLE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY')
 AVAILABLE_MESSAGE_CREATE_TIME
 FROM V$STREAMS_CAPTURE;

Capture                                        Time of
Name                 LogMiner ID Last Redo SCN Last Redo SCN
-------------------- ----------- ------------- -----------------
STREAMS_DATA_C                9    8087814747 23:13:18 10/26/10

10)    Listing the Parameter Settings for Each Capture Process
The following query displays the current setting of each capture process parameter for each capture process in a database:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A25
COLUMN PARAMETER HEADING 'Parameter' FORMAT A25
COLUMN VALUE HEADING 'Value' FORMAT A10
COLUMN SET_BY_USER HEADING 'Set by User?' FORMAT A15

SELECT CAPTURE_NAME,
 PARAMETER,
 VALUE,
 SET_BY_USER 
 FROM DBA_CAPTURE_PARAMETERS;

Capture
Process
Name                      Parameter                 Value      Set by User?
------------------------- ------------------------- ---------- ---------------
STREAMS_DATA_C           PARALLELISM               1          NO
STREAMS_DATA_C           STARTUP_SECONDS           0          NO
STREAMS_DATA_C           TRACE_LEVEL               0          NO
STREAMS_DATA_C           TIME_LIMIT                INFINITE   NO
STREAMS_DATA_C           MESSAGE_LIMIT             INFINITE   NO
STREAMS_DATA_C           MAXIMUM_SCN               INFINITE   NO
STREAMS_DATA_C           WRITE_ALERT_LOG           Y          NO
STREAMS_DATA_C           DISABLE_ON_LIMIT          N          NO
STREAMS_DATA_C           DOWNSTREAM_REAL_TIME_MINE Y          NO
STREAMS_DATA_C           IGNORE_TRANSACTION                   NO
STREAMS_DATA_C           MESSAGE_TRACKING_FREQUENCY 2000000   NO
STREAMS_DATA_C           IGNORE_UNSUPPORTED_TABLE  *          NO
STREAMS_DATA_C           SKIP_AUTOFILTERED_TABLE_DDL Y        NO
STREAMS_DATA_C           SPLIT_THRESHOLD           1800       NO
STREAMS_DATA_C           MERGE_THRESHOLD           60         NO
STREAMS_DATA_C           CAPTURE_SEQUENCE_NEXTVAL  N          NO
STREAMS_DATA_C           XOUT_CLIENT_EXISTS        N          NO
STREAMS_DATA_C           EXCLUDEUSER                          NO
STREAMS_DATA_C           EXCLUDEUSERID                        NO
STREAMS_DATA_C           EXCLUDETRANS                         NO
STREAMS_DATA_C           GETREPLICATES             N          NO
STREAMS_DATA_C           GETAPPLOPS                Y          NO
STREAMS_DATA_C           CAPTURE_IDKEY_OBJECTS     N          NO
STREAMS_DATA_C           MAX_SGA_SIZE              INFINITE   NO

11)    Viewing the Extra Attributes Captured by Each Capture Process
You can use the INCLUDE_EXTRA_ATTRIBUTE procedure in the DBMS_CAPTURE_ADM package to instruct a capture process to capture one or more extra attributes from the redo log.
The following query displays the extra attributes included in the LCRs captured by each capture process in the local database:

COLUMN CAPTURE_NAME HEADING 'Capture Process' FORMAT A20
COLUMN ATTRIBUTE_NAME HEADING 'Attribute Name' FORMAT A15
COLUMN INCLUDE HEADING 'Include Attribute in LCRs?' FORMAT A30

SELECT CAPTURE_NAME, ATTRIBUTE_NAME, INCLUDE
 FROM DBA_CAPTURE_EXTRA_ATTRIBUTES
 ORDER BY CAPTURE_NAME;

Capture Process      Attribute Name  Include Attribute in LCRs?
-------------------- --------------- ------------------------------
STREAMS_DATA_C      THREAD#         NO
STREAMS_DATA_C      ROW_ID          NO
STREAMS_DATA_C      TX_NAME         NO
STREAMS_DATA_C      SERIAL#         NO
STREAMS_DATA_C      USERNAME        NO
STREAMS_DATA_C      SESSION#        NO

12) Determining the Applied SCN for All Capture Processes in a Database
The applied system change number (SCN) for a capture process is the SCN of the most recent message dequeued by the relevant apply processes. All changes below this applied SCN have been dequeued by all apply processes that apply changes captured by the capture process.

COLUMN CAPTURE_NAME HEADING 'Capture Process Name' FORMAT A30
COLUMN APPLIED_SCN HEADING 'Applied SCN' FORMAT 99999999999

SELECT   CAPTURE_NAME, APPLIED_SCN FROM DBA_CAPTURE;

Capture Process      APPLIED_SCN
-------------------- -----------
STREAMS_DATA_C       8087783293

13) Determining Redo Log Scanning Latency for Each Capture Process

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN LAST_STATUS HEADING 'Seconds Since|Last Status' FORMAT 999999
COLUMN CAPTURE_TIME HEADING 'Current|Process|Time'
COLUMN CREATE_TIME HEADING 'Message|Creation Time' FORMAT A17

SELECT CAPTURE_NAME,
 ((SYSDATE - CAPTURE_MESSAGE_CREATE_TIME)*86400) LATENCY_SECONDS,
 ((SYSDATE - CAPTURE_TIME)*86400) LAST_STATUS,
 TO_CHAR(CAPTURE_TIME, 'HH24:MI:SS MM/DD/YY') CAPTURE_TIME,     
 TO_CHAR(CAPTURE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_TIME
 FROM V$STREAMS_CAPTURE;

Capture    Latency               Current
Process         in Seconds Since Process           Message
Name       Seconds   Last Status Time              Creation Time
---------- ------- ------------- ----------------- -----------------
INFO_CONTR       2             1 23:21:36 10/26/10 23:21:35 10/26/10
ACT_C

The “Latency in Seconds” returned by this query is the difference between the current time (SYSDATE) and the “Message Creation Time.” The “Seconds Since Last Status” returned by this query is the difference between the current time (SYSDATE) and the “Current Process Time.”
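Oracle DATE subtraction yields a difference in days, which is why these queries multiply by 86400 (the number of seconds in a day). A minimal sketch of the same arithmetic, assuming SYSDATE was 23:21:37 when the sample query ran (the query reported a latency of 2 seconds):

```python
from datetime import datetime

# Timestamps taken from the sample output above; the SYSDATE value is an
# assumption consistent with the reported 2-second latency.
sysdate = datetime(2010, 10, 26, 23, 21, 37)
capture_message_create_time = datetime(2010, 10, 26, 23, 21, 35)

# Oracle: (SYSDATE - CAPTURE_MESSAGE_CREATE_TIME) * 86400
delta = sysdate - capture_message_create_time
fraction_of_day = delta.total_seconds() / 86400  # DATE arithmetic returns days
latency_seconds = round(fraction_of_day * 86400)
print(latency_seconds)  # 2
```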

14)    Determining Message Enqueuing Latency for Each Capture Process
Run the following query to determine the message enqueuing latency for each capture process:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN CREATE_TIME HEADING 'Message Creation|Time' FORMAT A20
COLUMN ENQUEUE_TIME HEADING 'Enqueue Time' FORMAT A20
COLUMN ENQUEUE_MESSAGE_NUMBER HEADING 'Message|Number' FORMAT 999999

SELECT CAPTURE_NAME,
 (ENQUEUE_TIME-ENQUEUE_MESSAGE_CREATE_TIME)*86400 LATENCY_SECONDS,
 TO_CHAR(ENQUEUE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_TIME,
 TO_CHAR(ENQUEUE_TIME, 'HH24:MI:SS MM/DD/YY') ENQUEUE_TIME,
 ENQUEUE_MESSAGE_NUMBER
 FROM V$STREAMS_CAPTURE;

Capture    Latency
Process         in Message Creation                          Message
Name       Seconds Time                 Enqueue Time          Number
---------- ------- -------------------- -------------------- -------
STREAMS_DATA   -3 23:22:25 10/26/10    23:22:22 10/26/10    #######

The ####### in the Message Number column means the value overflowed its FORMAT mask; widen the mask (for example, FORMAT 9999999999) to see the full SCN. A small negative latency can appear because the two timestamps are recorded at different points and only to second-level granularity.

15)    Displaying Information About Rule Evaluations for Each Capture Process

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
COLUMN TOTAL_PREFILTER_DISCARDED HEADING 'Prefilter|Changes|Discarded'
 FORMAT 9999999999
COLUMN TOTAL_PREFILTER_KEPT HEADING 'Prefilter|Changes|Kept' FORMAT 9999999999
COLUMN TOTAL_PREFILTER_EVALUATIONS HEADING 'Prefilter|Evaluations'
 FORMAT 9999999999
COLUMN UNDECIDED HEADING 'Undecided|After|Prefilter' FORMAT 9999999999
COLUMN TOTAL_FULL_EVALUATIONS HEADING 'Full|Evaluations' FORMAT 9999999999

SELECT CAPTURE_NAME,
 TOTAL_PREFILTER_DISCARDED,
 TOTAL_PREFILTER_KEPT,
 TOTAL_PREFILTER_EVALUATIONS,
 (TOTAL_PREFILTER_EVALUATIONS -
 (TOTAL_PREFILTER_KEPT + TOTAL_PREFILTER_DISCARDED)) UNDECIDED,
 TOTAL_FULL_EVALUATIONS
 FROM V$STREAMS_CAPTURE;

Prefilter   Prefilter               Undecided
Capture            Changes     Changes   Prefilter       After        Full
Name             Discarded        Kept Evaluations   Prefilter Evaluations
--------------- ---------- ----------- ----------- ----------- -----------
STREAMS_DATA_C   53128265     1662026    54790291           0         801

The total number of prefilter evaluations equals the sum of the prefilter changes discarded, the prefilter changes kept, and the undecided changes.
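Using the sample figures above, that identity can be checked directly (the numbers are taken from the query output, not invented):

```python
# Prefilter statistics from the V$STREAMS_CAPTURE sample output above.
total_prefilter_discarded = 53128265
total_prefilter_kept = 1662026
total_prefilter_evaluations = 54790291

# Changes neither kept nor discarded by the prefilter are "undecided"
# and require a full rule evaluation.
undecided = total_prefilter_evaluations - (total_prefilter_kept +
                                           total_prefilter_discarded)
print(undecided)  # 0, matching the UNDECIDED column in the sample output
```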

Streams Replication | Monitoring Streams Queues and Propagations

Posted by Sagar Patil

This post lists sample queries to monitor Streams queues and propagations.

1)       Monitoring ANYDATA Queues and Messaging
2)       Monitoring Buffered Queues
3)       Monitoring Streams Propagations and Propagation Jobs

1)        Monitoring ANYDATA Queues and Messaging

a)     Displaying the ANYDATA Queues in a Database
b)     Viewing the Messaging Clients in a Database
c)     Viewing Message Notifications
d)     Determining the Consumer of Each User-Enqueued Message in a Queue
e)     Viewing the Contents of User-Enqueued Messages in a Queue

a)     Displaying the ANYDATA Queues in a Database : To display all of the ANYDATA queues in a database, run the following query:

COLUMN OWNER HEADING 'Owner' FORMAT A10
COLUMN NAME HEADING 'Queue Name' FORMAT A28
COLUMN QUEUE_TABLE HEADING 'Queue Table' FORMAT A22
COLUMN USER_COMMENT HEADING 'Comment' FORMAT A15

SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, q.USER_COMMENT
 FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
 WHERE t.OBJECT_TYPE = 'SYS.ANYDATA' AND
 q.QUEUE_TABLE = t.QUEUE_TABLE AND
 q.OWNER       = t.OWNER;

An exception queue is created automatically when you create an ANYDATA queue.

b) Viewing the Messaging Clients in a Database : Run the following query to view the messaging clients in a database, along with their queues and rule sets:

COLUMN STREAMS_NAME HEADING 'Messaging|Client' FORMAT A25
COLUMN QUEUE_OWNER HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A18
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A11
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A11

SELECT STREAMS_NAME,
 QUEUE_OWNER,
 QUEUE_NAME,
 RULE_SET_NAME,
 NEGATIVE_RULE_SET_NAME
 FROM DBA_STREAMS_MESSAGE_CONSUMERS;

c) Viewing Message Notifications : You can configure a message notification to send a notification when a message that can be dequeued by a messaging client is enqueued into a queue.
The notification can be sent to an email address, to an HTTP URL, or to a PL/SQL procedure. Run the following query to view the message notifications configured in a database:

COLUMN STREAMS_NAME HEADING 'Messaging|Client' FORMAT A10
COLUMN QUEUE_OWNER HEADING 'Queue|Owner' FORMAT A5
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A20
COLUMN NOTIFICATION_TYPE HEADING 'Notification|Type' FORMAT A15
COLUMN NOTIFICATION_ACTION HEADING 'Notification|Action' FORMAT A25

SELECT STREAMS_NAME,
 QUEUE_OWNER,
 QUEUE_NAME,
 NOTIFICATION_TYPE,
 NOTIFICATION_ACTION
 FROM DBA_STREAMS_MESSAGE_CONSUMERS
 WHERE NOTIFICATION_TYPE IS NOT NULL;

d) Determining the Consumer of Each User-Enqueued Message in a Queue : To determine the consumer of each user-enqueued message in a queue, query AQ$queue_table_name in the queue owner’s schema, where queue_table_name is the name of the queue table.  For example, to find the consumers of the user-enqueued messages in the oe_q_table_any queue table, run the following query:

COLUMN MSG_ID HEADING 'Message ID' FORMAT 9999
COLUMN MSG_STATE HEADING 'Message State' FORMAT A13
COLUMN CONSUMER_NAME HEADING 'Consumer' FORMAT A30

SELECT MSG_ID, MSG_STATE, CONSUMER_NAME
 FROM AQ$OE_Q_TABLE_ANY;

e) Viewing the Contents of User-Enqueued Messages in a Queue : To view the contents of a payload encapsulated within an ANYDATA payload, query the queue table using the Access<data_type> member functions of the ANYDATA type, where data_type is the type of payload to view. For example, to view the contents of payloads of type NUMBER in a queue with a queue table named oe_q_table_any, run the following query as the queue owner:

SELECT qt.user_data.AccessNumber() "Numbers in Queue"
 FROM strmadmin.oe_q_table_any qt;

Similarly, to view the contents of a payload of type VARCHAR2 in a queue with a queue table named oe_q_table_any, run the following query:

SELECT qt.user_data.AccessVarchar2() "Varchar2s in Queue"
 FROM strmadmin.oe_q_table_any qt;

2)       Monitoring Buffered Queues

Buffered queues are stored in the Streams pool, a portion of the System Global Area (SGA) used by Streams.  In a Streams environment, LCRs captured by a capture process are always stored in the buffered queue of an ANYDATA queue.  Buffered queues enable Oracle databases to optimize messaging by keeping messages in the SGA instead of always storing them in a queue table.  Messages in a buffered queue spill from memory if they have been staged in the buffered queue for a period of time without being dequeued, or if there is not enough space in memory to hold all of the messages; spilled messages are stored in the appropriate queue table.
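The spill rule just described can be sketched as a simplified model. The function, its threshold, and its inputs are illustrative assumptions only, not Oracle's actual implementation:

```python
def spills_to_queue_table(age_seconds, msg_bytes, free_streams_pool_bytes,
                          age_threshold_seconds=300):
    """Simplified model: a buffered message spills to its queue table when
    it has aged past a threshold without being dequeued, or when the
    Streams pool cannot hold it. All values here are hypothetical."""
    return (age_seconds > age_threshold_seconds or
            msg_bytes > free_streams_pool_bytes)

# An old message spills even when memory is available:
print(spills_to_queue_table(age_seconds=600, msg_bytes=1024,
                            free_streams_pool_bytes=10_000_000))  # True
```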

The following sections describe queries that monitor buffered queues:

i) Determining the Number of Messages in Each Buffered Queue

COLUMN QUEUE_SCHEMA HEADING 'Queue Owner' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999

SELECT QUEUE_SCHEMA,
 QUEUE_NAME,
 (NUM_MSGS - SPILL_MSGS) MEM_MSG,
 SPILL_MSGS,
 NUM_MSGS
 FROM V$BUFFERED_QUEUES;

ii) Viewing the Capture Processes for the LCRs in Each Buffered Queue
A capture process is a queue publisher that enqueues captured messages into a buffered queue. These LCRs can subsequently be propagated to other queues.
By querying the V$BUFFERED_PUBLISHERS dynamic performance view, you can display each capture process that captured the LCRs in the buffered queue. These LCRs might have been captured at the local database, or they might have been captured at a remote database and propagated to the queue specified in the query.

COLUMN SENDER_NAME HEADING 'Capture|Process' FORMAT A13
COLUMN SENDER_ADDRESS HEADING 'Sender Queue' FORMAT A27
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN CNUM_MSGS HEADING 'Number|of LCRs|Enqueued' FORMAT 99999999
COLUMN LAST_ENQUEUED_MSG HEADING 'Last|Enqueued|LCR' FORMAT 99999999

SELECT SENDER_NAME,
 SENDER_ADDRESS,
 QUEUE_NAME,       
 CNUM_MSGS,
 LAST_ENQUEUED_MSG
 FROM V$BUFFERED_PUBLISHERS;

iii) Displaying Information About Propagations that Send Buffered Messages
The query in this section displays each propagation that sends buffered messages from a buffered queue in the local database, along with its source queue, database link, and schedule status:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A10
COLUMN SCHEDULE_STATUS HEADING 'Schedule Status' FORMAT A20
SELECT p.PROPAGATION_NAME,
 s.QUEUE_SCHEMA,
 s.QUEUE_NAME,
 s.DBLINK,
 s.SCHEDULE_STATUS
 FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
 WHERE p.DESTINATION_DBLINK = s.DBLINK AND
 p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
 p.SOURCE_QUEUE_NAME  = s.QUEUE_NAME;

iv) Displaying the Number of Messages and Bytes Sent By Propagations
The query in this section displays the number of messages and the number of bytes sent by each propagation that sends buffered messages from a buffered queue in the local database:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A10
COLUMN TOTAL_MSGS HEADING 'Total|Messages' FORMAT 99999999
COLUMN TOTAL_BYTES HEADING 'Total|Bytes' FORMAT 99999999

SELECT p.PROPAGATION_NAME,
 s.QUEUE_NAME,
 s.DBLINK,
 s.TOTAL_MSGS,
 s.TOTAL_BYTES
 FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
 WHERE p.DESTINATION_DBLINK = s.DBLINK AND
 p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
 p.SOURCE_QUEUE_NAME  = s.QUEUE_NAME;

v) Displaying Performance Statistics for Propagations that Send Buffered Messages
The query in this section displays the amount of time each propagation sending buffered messages spends dequeuing messages from the source queue, pickling them for transmission over the database link, and propagating them to the destination queue. The ELAPSED_* columns in V$PROPAGATION_SENDER are reported in hundredths of a second, so the query divides by 100 to show seconds:

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A13
COLUMN DBLINK HEADING 'Database|Link' FORMAT A9
COLUMN ELAPSED_DEQUEUE_TIME HEADING 'Dequeue|Time' FORMAT 99999999.99
COLUMN ELAPSED_PICKLE_TIME HEADING 'Pickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_PROPAGATION_TIME HEADING 'Propagation|Time' FORMAT 99999999.99

SELECT p.PROPAGATION_NAME,
 s.QUEUE_NAME,
 s.DBLINK,
 (s.ELAPSED_DEQUEUE_TIME / 100) ELAPSED_DEQUEUE_TIME,
 (s.ELAPSED_PICKLE_TIME / 100) ELAPSED_PICKLE_TIME,
 (s.ELAPSED_PROPAGATION_TIME / 100) ELAPSED_PROPAGATION_TIME
 FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
 WHERE p.DESTINATION_DBLINK = s.DBLINK AND
 p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
 p.SOURCE_QUEUE_NAME  = s.QUEUE_NAME;

vi) Viewing the Propagations Dequeuing Messages from Each Buffered Queue
Propagations are queue subscribers that can dequeue messages from a queue. By querying the V$BUFFERED_SUBSCRIBERS dynamic performance view, you can display all the propagations that can dequeue buffered messages from a queue.

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN SUBSCRIBER_ADDRESS HEADING 'Destination|Database' FORMAT A11
COLUMN CURRENT_ENQ_SEQ HEADING 'Current|Enqueued|Sequence' FORMAT 99999999
COLUMN LAST_BROWSED_SEQ HEADING 'Last|Browsed|Sequence' FORMAT 99999999
COLUMN LAST_DEQUEUED_SEQ HEADING 'Last|Dequeued|Sequence' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Number of|Messages|in Queue|(Current)' FORMAT 99999999
COLUMN TOTAL_SPILLED_MSG HEADING 'Number of|Spilled|Messages|(Cumulative)'
 FORMAT 99999999

SELECT p.PROPAGATION_NAME,
 s.SUBSCRIBER_ADDRESS,
 s.CURRENT_ENQ_SEQ,
 s.LAST_BROWSED_SEQ,    
 s.LAST_DEQUEUED_SEQ,
 s.NUM_MSGS, 
 s.TOTAL_SPILLED_MSG
FROM DBA_PROPAGATION p, V$BUFFERED_SUBSCRIBERS s, V$BUFFERED_QUEUES q
WHERE q.QUEUE_ID = s.QUEUE_ID AND
 p.SOURCE_QUEUE_OWNER = q.QUEUE_SCHEMA AND
 p.SOURCE_QUEUE_NAME = q.QUEUE_NAME AND
 p.DESTINATION_DBLINK = s.SUBSCRIBER_ADDRESS;

vii) Displaying Performance Statistics for Propagations that Receive Buffered Messages
The query in this section displays the amount of time each propagation receiving buffered messages spends unpickling received messages, evaluating rules against them, and enqueuing them into the destination queue. As above, the elapsed times are reported in hundredths of a second:

COLUMN SRC_QUEUE_NAME HEADING 'Source|Queue|Name' FORMAT A20
COLUMN SRC_DBNAME HEADING 'Source|Database' FORMAT A15
COLUMN ELAPSED_UNPICKLE_TIME HEADING 'Unpickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_RULE_TIME HEADING 'Rule|Evaluation|Time' FORMAT 99999999.99
COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Enqueue|Time' FORMAT 99999999.99

SELECT SRC_QUEUE_NAME,
 SRC_DBNAME,
 (ELAPSED_UNPICKLE_TIME / 100) ELAPSED_UNPICKLE_TIME,
 (ELAPSED_RULE_TIME / 100) ELAPSED_RULE_TIME,
 (ELAPSED_ENQUEUE_TIME / 100) ELAPSED_ENQUEUE_TIME
 FROM V$PROPAGATION_RECEIVER;

viii) Viewing the Apply Processes Dequeuing Messages from Each Buffered Queue
Apply processes are queue subscribers that can dequeue messages from a queue. By querying the V$BUFFERED_SUBSCRIBERS dynamic performance view, you can display all the apply processes that can dequeue messages from a queue.

COLUMN SUBSCRIBER_NAME HEADING 'Apply Process' FORMAT A16
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN LAST_DEQUEUED_SEQ HEADING 'Last|Dequeued|Sequence' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Number of|Messages|in Queue|(Current)' FORMAT 99999999
COLUMN TOTAL_SPILLED_MSG HEADING 'Number of|Spilled|Messages|(Cumulative)'
 FORMAT 99999999

SELECT s.SUBSCRIBER_NAME,
 q.QUEUE_SCHEMA,
 q.QUEUE_NAME,
 s.LAST_DEQUEUED_SEQ,
 s.NUM_MSGS,
 s.TOTAL_SPILLED_MSG
FROM V$BUFFERED_QUEUES q, V$BUFFERED_SUBSCRIBERS s, DBA_APPLY a
WHERE q.QUEUE_ID = s.QUEUE_ID AND
 s.SUBSCRIBER_ADDRESS IS NULL AND
 s.SUBSCRIBER_NAME = a.APPLY_NAME;

3)       Monitoring Streams Propagations and Propagation Jobs

The following sections contain queries that you can run to display information about propagations and propagation jobs:
a) Displaying the Queues and Database Link for Each Propagation
b) Determining the Source Queue and Destination Queue for Each Propagation
c) Determining the Rule Sets for Each Propagation
d) Displaying the Schedule for a Propagation Job
e) Determining the Total Number of Messages and Bytes Propagated

a) Displaying the Queues and Database Link for Each Propagation
You can display information about each propagation by querying the DBA_PROPAGATION data dictionary view. This view contains information about each propagation whose source queue is at the local database.

COLUMN PROPAGATION_NAME        HEADING 'Propagation|Name'   FORMAT A19
COLUMN SOURCE_QUEUE_NAME       HEADING 'Source|Queue|Name'  FORMAT A17
COLUMN DESTINATION_DBLINK      HEADING 'Database|Link'      FORMAT A9
COLUMN DESTINATION_QUEUE_NAME  HEADING 'Dest|Queue|Name'    FORMAT A15
COLUMN STATUS                  HEADING 'Status'             FORMAT A8
COLUMN QUEUE_TO_QUEUE          HEADING 'Queue-|to-|Queue?'  FORMAT A6

SELECT PROPAGATION_NAME,
 SOURCE_QUEUE_NAME,
 DESTINATION_DBLINK,
 DESTINATION_QUEUE_NAME,
 STATUS,
 QUEUE_TO_QUEUE
 FROM DBA_PROPAGATION;

b) Determining the Source Queue and Destination Queue for Each Propagation
You can determine the source queue and destination queue for each propagation by querying the DBA_PROPAGATION data dictionary view.

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A20
COLUMN SOURCE_QUEUE_OWNER HEADING 'Source|Queue|Owner' FORMAT A10
COLUMN 'Source Queue' HEADING 'Source|Queue' FORMAT A15
COLUMN DESTINATION_QUEUE_OWNER HEADING 'Dest|Queue|Owner'   FORMAT A10
COLUMN 'Destination Queue' HEADING 'Destination|Queue' FORMAT A15

SELECT p.PROPAGATION_NAME,
 p.SOURCE_QUEUE_OWNER,
 p.SOURCE_QUEUE_NAME ||'@'||
 g.GLOBAL_NAME "Source Queue",
 p.DESTINATION_QUEUE_OWNER,
 p.DESTINATION_QUEUE_NAME ||'@'||
 p.DESTINATION_DBLINK "Destination Queue"
 FROM DBA_PROPAGATION p, GLOBAL_NAME g;

c)  Determining the Rule Sets for Each Propagation
The query in this section displays the positive and negative rule sets, and their owners, for each propagation:

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A20
COLUMN RULE_SET_OWNER HEADING 'Positive|Rule Set|Owner' FORMAT A10
COLUMN RULE_SET_NAME HEADING 'Positive Rule|Set Name' FORMAT A15
COLUMN NEGATIVE_RULE_SET_OWNER HEADING 'Negative|Rule Set|Owner' FORMAT A10
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative Rule|Set Name' FORMAT A15

SELECT PROPAGATION_NAME,
 RULE_SET_OWNER,
 RULE_SET_NAME,
 NEGATIVE_RULE_SET_OWNER,
 NEGATIVE_RULE_SET_NAME
 FROM DBA_PROPAGATION;

d) Displaying the Schedule for a Propagation Job
The query in this section displays the propagation schedule (start date, window duration, latency, status, process name, and failure count) for the propagation job used by a propagation named dbs1_to_dbs2:

COLUMN START_DATE HEADING 'Start Date'
COLUMN PROPAGATION_WINDOW HEADING 'Duration|in Seconds' FORMAT 99999
COLUMN NEXT_TIME HEADING 'Next|Time' FORMAT A8
COLUMN LATENCY HEADING 'Latency|in Seconds' FORMAT 99999
COLUMN SCHEDULE_DISABLED HEADING 'Status' FORMAT A8
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A8
COLUMN FAILURES HEADING 'Number of|Failures' FORMAT 99

SELECT DISTINCT TO_CHAR(s.START_DATE, 'HH24:MI:SS MM/DD/YY') START_DATE,
 s.PROPAGATION_WINDOW,
 s.NEXT_TIME,
 s.LATENCY,
 DECODE(s.SCHEDULE_DISABLED,
 'Y', 'Disabled',
 'N', 'Enabled') SCHEDULE_DISABLED,
 s.PROCESS_NAME,
 s.FAILURES
 FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
 WHERE p.PROPAGATION_NAME = 'DBS1_TO_DBS2'
 AND p.DESTINATION_DBLINK = s.DESTINATION
 AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
 AND s.QNAME = p.SOURCE_QUEUE_NAME;

e)  Determining the Total Number of Messages and Bytes Propagated
All propagation jobs from a source queue that share the same database link have a single propagation schedule. The query in this section displays the total execution time, total number of messages, and total bytes propagated for each propagation:

COLUMN PROPAGATION_NAME HEADING 'Propagation|Name' FORMAT A20
COLUMN TOTAL_TIME HEADING 'Total Time|Executing|in Seconds' FORMAT 999999
COLUMN TOTAL_NUMBER HEADING 'Total Messages|Propagated' FORMAT 999999999
COLUMN TOTAL_BYTES HEADING 'Total Bytes|Propagated' FORMAT 9999999999999

SELECT p.PROPAGATION_NAME, s.TOTAL_TIME, s.TOTAL_NUMBER, s.TOTAL_BYTES
 FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
 WHERE p.DESTINATION_DBLINK = s.DESTINATION
 AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
 AND s.QNAME = p.SOURCE_QUEUE_NAME;
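The totals this query returns lend themselves to a quick throughput calculation; a minimal sketch with hypothetical figures (not taken from the sample queries above):

```python
# Hypothetical figures of the kind returned by DBA_QUEUE_SCHEDULES.
total_time_seconds = 600        # Total Time Executing in Seconds
total_number = 120_000          # Total Messages Propagated
total_bytes = 60_000_000        # Total Bytes Propagated

print(total_number / total_time_seconds)  # 200.0 messages/second
print(total_bytes / total_time_seconds)   # 100000.0 bytes/second
print(total_bytes / total_number)         # 500.0 bytes/message on average
```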

Streams Replication | Monitoring Oracle Streams Apply Processes at Destination Server

Posted by Sagar Patil

The following shell script is handy when you are working with Streams replication.  There are two versions, one for 10g: monitor_streams_applyprocess_10g and another for 11g: monitor_streams_applyprocess_11g

#!/bin/bash
## Monitoring Oracle Streams Apply Processes at Destination Server
export STREAMS_ADMIN_SCHEMA=STRMADMIN
export STREAMS_ADMIN_SCHEMA_PWD=STRMADMIN
export DEST_DB=streams1

echo "-------------------Menu---------------------------"
echo " **** This script was tested against RHEL/Oracle 10gR2, Please update script variables with right database values ***"
echo "To run option 16 you will need some plsql procedures in place, detailed at http://www.oracledbasupport.co.uk/streams-displaying-detailed-information-about-apply-errors/"
echo ""
flag=0
while [ $flag -ne 17 ]
do
echo "1.  Determining the Queue, Rule Sets, and Status for Each Apply Process"
echo "2.  Displaying General Information About Each Apply Process"
echo "3.  Listing the Parameter Settings for Each Apply Process"
echo "4.  Displaying Information About Apply Handlers"
echo "5.  Displaying Information About the Reader Server for Each Apply Process"
echo "6.  Monitoring Transactions and Messages Spilled by Each Apply Process"
echo "7.  Determining Capture to Dequeue Latency for a Message"
echo "8.  Displaying General Information About Each Coordinator Process"
echo "9.  Displaying Information About Transactions Received and Applied"
echo "10. Determining the Capture to Apply Latency for a Message for Each Apply Process"
echo "11. Displaying Information About the Apply Servers for Each Apply Process"
echo "12. Displaying Effective Apply Parallelism for an Apply Process"
echo "13. Viewing Rules that Specify a Destination Queue on Apply"
echo "14. Viewing Rules that Specify No Execution on Apply"
echo "15. Checking for Apply Errors"
echo "16. Displaying Detailed Information About Apply Errors"
echo "17. Exit"

echo "Enter the option:"

read flag

echo "*******************************************************"
if [ $flag -eq 1 ]
then
#### Display this general information about each apply process in a database, run the following query:
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Apply|Process|Queue' FORMAT A15
COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A15
COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A15
COLUMN STATUS HEADING 'Apply|Process|Status' FORMAT A15

SELECT APPLY_NAME,
       QUEUE_NAME,
       RULE_SET_NAME,
       NEGATIVE_RULE_SET_NAME,
       STATUS
  FROM DBA_APPLY;
exit
EOF
fi

if [ $flag -eq 2 ]
then
#### To display this general information about each apply process in a database, run the following query:
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN APPLY_CAPTURED HEADING 'Type of Messages Applied' FORMAT A25
COLUMN APPLY_USER HEADING 'Apply User' FORMAT A30

SELECT APPLY_NAME,
       DECODE(APPLY_CAPTURED,
              'YES', 'Captured',
              'NO',  'User-Enqueued') APPLY_CAPTURED,
       APPLY_USER
  FROM DBA_APPLY;
exit
EOF
fi

if [ $flag -eq 3 ]
then
####   The following query displays the current setting for each apply process parameter for each apply process in a database:
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15
COLUMN PARAMETER HEADING 'Parameter' FORMAT A25
COLUMN VALUE HEADING 'Value' FORMAT A20
COLUMN SET_BY_USER HEADING 'Set by User?' FORMAT A15

SELECT APPLY_NAME,
       PARAMETER,
       VALUE,
       SET_BY_USER
  FROM DBA_APPLY_PARAMETERS;
exit
EOF
fi

if [ $flag -eq 4 ]
then
#### To display the error handler for each apply process that applies changes locally in a database, run the following query:
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN OBJECT_OWNER HEADING 'Table|Owner' FORMAT A5
COLUMN OBJECT_NAME HEADING 'Table Name' FORMAT A10
COLUMN OPERATION_NAME HEADING 'Operation' FORMAT A10
COLUMN USER_PROCEDURE HEADING 'Handler Procedure' FORMAT A30
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15

SELECT OBJECT_OWNER,
       OBJECT_NAME,
       OPERATION_NAME,
       USER_PROCEDURE,
       APPLY_NAME
  FROM DBA_APPLY_DML_HANDLERS
  WHERE ERROR_HANDLER = 'Y'
  ORDER BY OBJECT_OWNER, OBJECT_NAME;

-- To display each message handler in a database, run the following query:
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20;
COLUMN MESSAGE_HANDLER HEADING 'Message Handler' FORMAT A20;

SELECT APPLY_NAME, MESSAGE_HANDLER FROM DBA_APPLY
  WHERE MESSAGE_HANDLER IS NOT NULL;

-- To display each precommit handler in a database, run the following query:
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN PRECOMMIT_HANDLER HEADING 'Precommit Handler' FORMAT A30
COLUMN APPLY_CAPTURED HEADING 'Type of|Messages|Applied' FORMAT A15

SELECT APPLY_NAME,
       PRECOMMIT_HANDLER,
       DECODE(APPLY_CAPTURED,
              'YES', 'Captured',
              'NO',  'User-Enqueued') APPLY_CAPTURED
  FROM DBA_APPLY
  WHERE PRECOMMIT_HANDLER IS NOT NULL;

exit
EOF
fi

if [ $flag -eq 5 ]
then
#### The information displayed by this query is valid only for an enabled apply process.
#### Run the following query to display this information for each apply process:
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A15
COLUMN APPLY_CAPTURED HEADING 'Dequeues Captured|Messages?' FORMAT A17
COLUMN PROCESS_NAME HEADING 'Process|Name' FORMAT A7
COLUMN STATE HEADING 'State' FORMAT A17
COLUMN TOTAL_MESSAGES_DEQUEUED HEADING 'Total Messages|Dequeued' FORMAT 99999999

SELECT r.APPLY_NAME,
       ap.APPLY_CAPTURED,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_MESSAGES_DEQUEUED
       FROM V\$STREAMS_APPLY_READER r, V\$SESSION s, DBA_APPLY ap
       WHERE r.SID = s.SID AND
             r.SERIAL# = s.SERIAL# AND
             r.APPLY_NAME = ap.APPLY_NAME;

exit
EOF
fi

if [ $flag -eq 6 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
-- select "Display information for each apply process in a database" from dual;
COLUMN APPLY_NAME HEADING 'Apply Name' FORMAT A20
COLUMN 'Transaction ID' HEADING 'Transaction ID' FORMAT A15
COLUMN FIRST_SCN HEADING 'First SCN'   FORMAT 99999999
COLUMN MESSAGE_COUNT HEADING 'Message Count' FORMAT 99999999

SELECT APPLY_NAME,
       XIDUSN ||'.'||
       XIDSLT ||'.'||
       XIDSQN "Transaction ID",
       FIRST_SCN,
       MESSAGE_COUNT
  FROM DBA_APPLY_SPILL_TXN;

COLUMN APPLY_NAME HEADING 'Apply Name' FORMAT A15
COLUMN TOTAL_MESSAGES_SPILLED HEADING 'Total|Spilled Messages' FORMAT 99999999
COLUMN ELAPSED_SPILL_TIME HEADING 'Elapsed Time|Spilling Messages' FORMAT 99999999.99

SELECT APPLY_NAME,
       TOTAL_MESSAGES_SPILLED,
       (ELAPSED_SPILL_TIME/100) ELAPSED_SPILL_TIME
  FROM V\$STREAMS_APPLY_READER;
exit
EOF
fi

if [ $flag -eq 7 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN LATENCY HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN CREATION HEADING 'Message Creation' FORMAT A17
COLUMN LAST_DEQUEUE HEADING 'Last Dequeue Time' FORMAT A20
COLUMN DEQUEUED_MESSAGE_NUMBER HEADING 'Dequeued|Message Number' FORMAT 9999999999

SELECT APPLY_NAME,
     (DEQUEUE_TIME-DEQUEUED_MESSAGE_CREATE_TIME)*86400 LATENCY,
     TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') CREATION,
     TO_CHAR(DEQUEUE_TIME,'HH24:MI:SS MM/DD/YY') LAST_DEQUEUE,
     DEQUEUED_MESSAGE_NUMBER
  FROM V\$STREAMS_APPLY_READER;
exit
EOF
fi

if [ $flag -eq 8 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN PROCESS_NAME HEADING 'Coordinator|Process|Name' FORMAT A11
COLUMN SID HEADING 'Session|ID' FORMAT 9999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 9999
COLUMN STATE HEADING 'State' FORMAT A21

SELECT c.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
       c.SID,
       c.SERIAL#,
       c.STATE
       FROM V\$STREAMS_APPLY_COORDINATOR c, V\$SESSION s
       WHERE c.SID = s.SID AND
             c.SERIAL# = s.SERIAL#;
exit
EOF
fi

if [ $flag -eq 9 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A20
COLUMN TOTAL_RECEIVED HEADING 'Total|Trans|Received' FORMAT 99999999
COLUMN TOTAL_APPLIED HEADING 'Total|Trans|Applied' FORMAT 99999999
COLUMN TOTAL_ERRORS HEADING 'Total|Apply|Errors' FORMAT 9999
COLUMN BEING_APPLIED HEADING 'Total|Trans Being|Applied' FORMAT 99999999
COLUMN UNASSIGNED_COMPLETE_TXNS HEADING 'Total|Unassigned|Trans' FORMAT 99999999
COLUMN TOTAL_IGNORED HEADING 'Total|Trans|Ignored' FORMAT 99999999

SELECT APPLY_NAME,
       TOTAL_RECEIVED,
       TOTAL_APPLIED,
       TOTAL_ERRORS,
       (TOTAL_ASSIGNED - (TOTAL_ROLLBACKS + TOTAL_APPLIED)) BEING_APPLIED,
       TOTAL_IGNORED
       FROM V\$STREAMS_APPLY_COORDINATOR;
exit
EOF
fi

if [ $flag -eq 10 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN "Latency in Seconds" FORMAT 999999
COLUMN "Message Creation" FORMAT A17
COLUMN "Apply Time" FORMAT A17
COLUMN HWM_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 9999999999

SELECT APPLY_NAME,
     (HWM_TIME-HWM_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
     TO_CHAR(HWM_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY')
        "Message Creation",
     TO_CHAR(HWM_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
     HWM_MESSAGE_NUMBER
  FROM V\$STREAMS_APPLY_COORDINATOR;

PROMPT Capture-to-apply latency for a captured LCR, per apply process (DBA_APPLY_PROGRESS):
COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN "Latency in Seconds" FORMAT 999999
COLUMN "Message Creation" FORMAT A17
COLUMN "Apply Time" FORMAT A17
COLUMN APPLIED_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 9999999999

SELECT APPLY_NAME,
     (APPLY_TIME-APPLIED_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
     TO_CHAR(APPLIED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY')
        "Message Creation",
     TO_CHAR(APPLY_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
     APPLIED_MESSAGE_NUMBER
  FROM DBA_APPLY_PROGRESS;

exit
EOF
fi

if [ $flag -eq 11 ]
then
#### Run the following query to display information about the apply servers for each apply process:
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A22
COLUMN PROCESS_NAME HEADING 'Process Name' FORMAT A12
COLUMN STATE HEADING 'State' FORMAT A17
COLUMN TOTAL_ASSIGNED HEADING 'Total|Transactions|Assigned' FORMAT 99999999
COLUMN TOTAL_MESSAGES_APPLIED HEADING 'Total|Messages|Applied' FORMAT 99999999

SELECT r.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_ASSIGNED,
       r.TOTAL_MESSAGES_APPLIED
  FROM V\$STREAMS_APPLY_SERVER R, V\$SESSION S
  WHERE r.SID = s.SID AND
        r.SERIAL# = s.SERIAL#
  ORDER BY r.APPLY_NAME, r.SERVER_ID;
exit
EOF
fi

if [ $flag -eq 12 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
PROMPT Effective apply parallelism for an apply process named APPLY:

SELECT COUNT(SERVER_ID) "Effective Parallelism"
  FROM V\$STREAMS_APPLY_SERVER
  WHERE APPLY_NAME = 'APPLY' AND
        TOTAL_MESSAGES_APPLIED > 0;

PROMPT Total number of messages applied by each apply server:
COLUMN SERVER_ID HEADING 'Apply Server ID' FORMAT 99
COLUMN TOTAL_MESSAGES_APPLIED HEADING 'Total Messages Applied' FORMAT 999999
SELECT SERVER_ID, TOTAL_MESSAGES_APPLIED
  FROM V\$STREAMS_APPLY_SERVER
  WHERE APPLY_NAME = 'APPLY'
  ORDER BY SERVER_ID;
exit
EOF
fi

if [ $flag -eq 13 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
PROMPT Destination queue settings for rules:
COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN DESTINATION_QUEUE_NAME HEADING 'Destination Queue' FORMAT A30
SELECT RULE_OWNER, RULE_NAME, DESTINATION_QUEUE_NAME
  FROM DBA_APPLY_ENQUEUE;
exit
EOF
fi

if [ $flag -eq 14 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
PROMPT Rules with execution directive set to NO:
COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A20
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
SELECT RULE_OWNER, RULE_NAME
  FROM DBA_APPLY_EXECUTE
  WHERE EXECUTE_EVENT = 'NO';
exit
EOF
fi

## Checking for Apply Errors
if [ $flag -eq 15 ]
then
sqlplus -s ${STREAMS_ADMIN_SCHEMA}/${STREAMS_ADMIN_SCHEMA_PWD}@${DEST_DB} <<EOF
PROMPT Apply errors:
COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A10
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN LOCAL_TRANSACTION_ID HEADING 'Local|Transaction|ID' FORMAT A11
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A20
COLUMN MESSAGE_COUNT HEADING 'Messages in|Error|Transaction' FORMAT 99999999

SELECT APPLY_NAME,
       SOURCE_DATABASE,
       LOCAL_TRANSACTION_ID,
       ERROR_NUMBER,
       ERROR_MESSAGE,
       MESSAGE_COUNT
  FROM DBA_APPLY_ERROR;

exit
EOF
fi
done
echo "exit"

Monitoring Streams Replication| How to Display Detailed Information About Apply Errors?

Posted by Sagar Patil

The SQL scripts below display detailed information about error transactions in a database's error queue.
They are written for LCRs, but you can extend them to display information about any non-LCR messages you use.

To use these scripts, complete the following steps:

Step 1) Grant Explicit SELECT Privilege on the DBA_APPLY_ERROR View
Step 2) Create a Procedure that Prints the Value in an ANYDATA Object
Step 3) Create a Procedure that Prints a Specified LCR
Step 4) Create a Procedure that Prints All the LCRs in the Error Queue
Step 5) Create a Procedure that Prints All the Error LCRs for a Transaction

Step 1   Grant Explicit SELECT Privilege on the DBA_APPLY_ERROR View

The user who creates and runs the print_errors and print_transaction procedures described in the following sections must be granted explicit SELECT privilege on the DBA_APPLY_ERROR data dictionary view. This privilege cannot be granted through a role. Running the GRANT_ADMIN_PRIVILEGE procedure in the DBMS_STREAMS_AUTH package on a user grants this privilege. To grant the required privileges to a user directly, run the following (EXECUTE on DBMS_APPLY_ADM is also needed to retrieve the error messages):

GRANT SELECT ON DBA_APPLY_ERROR TO strmadmin;
GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;

Step 2   Create a Procedure that Prints the Value in an ANYDATA Object
The following procedure prints the value in a specified ANYDATA object for some selected data types. Optionally, you can add more data types to this procedure.

CREATE OR REPLACE PROCEDURE print_any(data IN ANYDATA) IS
  tn  VARCHAR2(61);
  str VARCHAR2(4000);
  chr VARCHAR2(1000);
  num NUMBER;
  dat DATE;
  rw  RAW(4000);
  res NUMBER;
BEGIN
  IF data IS NULL THEN
    DBMS_OUTPUT.PUT_LINE('NULL value');
    RETURN;
  END IF;
  tn := data.GETTYPENAME();
  IF tn = 'SYS.VARCHAR2' THEN
    res := data.GETVARCHAR2(str);
    DBMS_OUTPUT.PUT_LINE(SUBSTR(str,0,253));
  ELSIF tn = 'SYS.CHAR' THEN
    res := data.GETCHAR(chr);
    DBMS_OUTPUT.PUT_LINE(SUBSTR(chr,0,253));
  ELSIF tn = 'SYS.VARCHAR' THEN
    res := data.GETVARCHAR(chr);
    DBMS_OUTPUT.PUT_LINE(chr);
  ELSIF tn = 'SYS.NUMBER' THEN
    res := data.GETNUMBER(num);
    DBMS_OUTPUT.PUT_LINE(num);
  ELSIF tn = 'SYS.DATE' THEN
    res := data.GETDATE(dat);
    DBMS_OUTPUT.PUT_LINE(dat);
  ELSIF tn = 'SYS.RAW' THEN
    -- res := data.GETRAW(rw);
    -- DBMS_OUTPUT.PUT_LINE(SUBSTR(DBMS_LOB.SUBSTR(rw),0,253));
    DBMS_OUTPUT.PUT_LINE('RAW Value');
  ELSIF tn = 'SYS.BLOB' THEN
    DBMS_OUTPUT.PUT_LINE('BLOB Found');
  ELSE
    DBMS_OUTPUT.PUT_LINE('typename is ' || tn);
  END IF;
END print_any;
/

Step 3   Create a Procedure that Prints a Specified LCR
The following procedure prints a specified LCR. It calls the print_any procedure created in “Create a Procedure that Prints the Value in an ANYDATA Object”.

CREATE OR REPLACE PROCEDURE print_lcr(lcr IN ANYDATA) IS
  typenm   VARCHAR2(61);
  ddllcr   SYS.LCR$_DDL_RECORD;
  proclcr  SYS.LCR$_PROCEDURE_RECORD;
  rowlcr   SYS.LCR$_ROW_RECORD;
  res      NUMBER;
  newlist  SYS.LCR$_ROW_LIST;
  oldlist  SYS.LCR$_ROW_LIST;
  ddl_text CLOB;
  ext_attr ANYDATA;
BEGIN
  typenm := lcr.GETTYPENAME();
  DBMS_OUTPUT.PUT_LINE('type name: ' || typenm);
  IF (typenm = 'SYS.LCR$_DDL_RECORD') THEN
    res := lcr.GETOBJECT(ddllcr);
    DBMS_OUTPUT.PUT_LINE('source database: ' ||
                         ddllcr.GET_SOURCE_DATABASE_NAME);
    DBMS_OUTPUT.PUT_LINE('owner: ' || ddllcr.GET_OBJECT_OWNER);
    DBMS_OUTPUT.PUT_LINE('object: ' || ddllcr.GET_OBJECT_NAME);
    DBMS_OUTPUT.PUT_LINE('is tag null: ' || ddllcr.IS_NULL_TAG);
    DBMS_LOB.CREATETEMPORARY(ddl_text, TRUE);
    ddllcr.GET_DDL_TEXT(ddl_text);
    DBMS_OUTPUT.PUT_LINE('ddl: ' || ddl_text);
    -- Print extra attributes in DDL LCR
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('serial#');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('serial#: ' || ext_attr.ACCESSNUMBER());
    END IF;
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('session#');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('session#: ' || ext_attr.ACCESSNUMBER());
    END IF;
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('thread#');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('thread#: ' || ext_attr.ACCESSNUMBER());
    END IF;
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('tx_name');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('transaction name: ' || ext_attr.ACCESSVARCHAR2());
    END IF;
    ext_attr := ddllcr.GET_EXTRA_ATTRIBUTE('username');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('username: ' || ext_attr.ACCESSVARCHAR2());
    END IF;
    DBMS_LOB.FREETEMPORARY(ddl_text);
  ELSIF (typenm = 'SYS.LCR$_ROW_RECORD') THEN
    res := lcr.GETOBJECT(rowlcr);
    DBMS_OUTPUT.PUT_LINE('source database: ' ||
                         rowlcr.GET_SOURCE_DATABASE_NAME);
    DBMS_OUTPUT.PUT_LINE('owner: ' || rowlcr.GET_OBJECT_OWNER);
    DBMS_OUTPUT.PUT_LINE('object: ' || rowlcr.GET_OBJECT_NAME);
    DBMS_OUTPUT.PUT_LINE('is tag null: ' || rowlcr.IS_NULL_TAG);
    DBMS_OUTPUT.PUT_LINE('command_type: ' || rowlcr.GET_COMMAND_TYPE);
    oldlist := rowlcr.GET_VALUES('old');
    FOR i IN 1..oldlist.COUNT LOOP
      IF oldlist(i) IS NOT NULL THEN
        DBMS_OUTPUT.PUT_LINE('old(' || i || '): ' || oldlist(i).column_name);
        print_any(oldlist(i).data);
      END IF;
    END LOOP;
    newlist := rowlcr.GET_VALUES('new', 'n');
    FOR i IN 1..newlist.COUNT LOOP
      IF newlist(i) IS NOT NULL THEN
        DBMS_OUTPUT.PUT_LINE('new(' || i || '): ' || newlist(i).column_name);
        print_any(newlist(i).data);
      END IF;
    END LOOP;
    -- Print extra attributes in row LCR
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('row_id');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('row_id: ' || ext_attr.ACCESSUROWID());
    END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('serial#');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('serial#: ' || ext_attr.ACCESSNUMBER());
    END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('session#');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('session#: ' || ext_attr.ACCESSNUMBER());
    END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('thread#');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('thread#: ' || ext_attr.ACCESSNUMBER());
    END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('tx_name');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('transaction name: ' || ext_attr.ACCESSVARCHAR2());
    END IF;
    ext_attr := rowlcr.GET_EXTRA_ATTRIBUTE('username');
    IF (ext_attr IS NOT NULL) THEN
      DBMS_OUTPUT.PUT_LINE('username: ' || ext_attr.ACCESSVARCHAR2());
    END IF;
  ELSE
    DBMS_OUTPUT.PUT_LINE('Non-LCR Message with type ' || typenm);
  END IF;
END print_lcr;
/

Step 4   Create a Procedure that Prints All the LCRs in the Error Queue

The following procedure prints all of the LCRs in the error queue. It calls the print_lcr procedure created in Step 3, "Create a Procedure that Prints a Specified LCR".

CREATE OR REPLACE PROCEDURE print_errors IS
  CURSOR c IS
    SELECT LOCAL_TRANSACTION_ID,
           SOURCE_DATABASE,
           MESSAGE_NUMBER,
           MESSAGE_COUNT,
           ERROR_NUMBER,
           ERROR_MESSAGE
      FROM DBA_APPLY_ERROR
      ORDER BY SOURCE_DATABASE, SOURCE_COMMIT_SCN;
  i      NUMBER;
  txnid  VARCHAR2(30);
  source VARCHAR2(128);
  msgno  NUMBER;
  msgcnt NUMBER;
  errnum NUMBER := 0;
  errno  NUMBER;
  errmsg VARCHAR2(2000);
  lcr    ANYDATA;
BEGIN
  FOR r IN c LOOP
    errnum := errnum + 1;
    msgcnt := r.MESSAGE_COUNT;
    txnid  := r.LOCAL_TRANSACTION_ID;
    source := r.SOURCE_DATABASE;
    msgno  := r.MESSAGE_NUMBER;
    errno  := r.ERROR_NUMBER;
    errmsg := r.ERROR_MESSAGE;
    DBMS_OUTPUT.PUT_LINE('*************************************************');
    DBMS_OUTPUT.PUT_LINE('----- ERROR #' || errnum);
    DBMS_OUTPUT.PUT_LINE('----- Local Transaction ID: ' || txnid);
    DBMS_OUTPUT.PUT_LINE('----- Source Database: ' || source);
    DBMS_OUTPUT.PUT_LINE('----- Error in Message: ' || msgno);
    DBMS_OUTPUT.PUT_LINE('----- Error Number: ' || errno);
    DBMS_OUTPUT.PUT_LINE('----- Message Text: ' || errmsg);
    FOR i IN 1..msgcnt LOOP
      DBMS_OUTPUT.PUT_LINE('--message: ' || i);
      lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(i, txnid);
      print_lcr(lcr);
    END LOOP;
  END LOOP;
END print_errors;
/

To run this procedure after you create it, enter the following:

SET SERVEROUTPUT ON SIZE 1000000
EXEC print_errors

Step 5   Create a Procedure that Prints All the Error LCRs for a Transaction
The following procedure prints all the LCRs in the error queue for a particular transaction. It calls the print_lcr procedure created in “Create a Procedure that Prints a Specified LCR”.

CREATE OR REPLACE PROCEDURE print_transaction(ltxnid IN VARCHAR2) IS
  i      NUMBER;
  txnid  VARCHAR2(30);
  source VARCHAR2(128);
  msgno  NUMBER;
  msgcnt NUMBER;
  errno  NUMBER;
  errmsg VARCHAR2(2000);
  lcr    ANYDATA;
BEGIN
  SELECT LOCAL_TRANSACTION_ID,
         SOURCE_DATABASE,
         MESSAGE_NUMBER,
         MESSAGE_COUNT,
         ERROR_NUMBER,
         ERROR_MESSAGE
    INTO txnid, source, msgno, msgcnt, errno, errmsg
    FROM DBA_APPLY_ERROR
    WHERE LOCAL_TRANSACTION_ID = ltxnid;
  DBMS_OUTPUT.PUT_LINE('----- Local Transaction ID: ' || txnid);
  DBMS_OUTPUT.PUT_LINE('----- Source Database: ' || source);
  DBMS_OUTPUT.PUT_LINE('----- Error in Message: ' || msgno);
  DBMS_OUTPUT.PUT_LINE('----- Error Number: ' || errno);
  DBMS_OUTPUT.PUT_LINE('----- Message Text: ' || errmsg);
  FOR i IN 1..msgcnt LOOP
    DBMS_OUTPUT.PUT_LINE('--message: ' || i);
    lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(i, txnid); -- gets the LCR
    print_lcr(lcr);
  END LOOP;
END print_transaction;
/

To run this procedure after you create it, pass it the local transaction identifier of an error transaction. For example, if the local transaction identifier is 1.17.2485, then enter the following:

SET SERVEROUTPUT ON SIZE 1000000
EXEC print_transaction('1.17.2485')

How do you locate the local transaction identifier of an error transaction? Check for apply errors with the following query:

COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A10
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN LOCAL_TRANSACTION_ID HEADING 'Local|Transaction|ID' FORMAT A11
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A20
COLUMN MESSAGE_COUNT HEADING 'Messages in|Error|Transaction' FORMAT 99999999

SELECT APPLY_NAME,
       SOURCE_DATABASE,
       LOCAL_TRANSACTION_ID,
       ERROR_NUMBER,
       ERROR_MESSAGE,
       MESSAGE_COUNT
  FROM DBA_APPLY_ERROR;

If there are any apply errors, then your output looks similar to the following:

Apply                 Local                                         Messages in
Process    Source     Transaction                                         Error
Name       Database   ID          Error Number Error Message        Transaction
---------- ---------- ----------- ------------ -------------------- -----------
APPLY_FROM MULT3.EXAM 1.62.948            1403 ORA-01403: no data             1
_MULT3     PLE.COM                             found
APPLY_FROM MULT2.EXAM 1.54.948            1403 ORA-01403: no data             1
_MULT2     PLE.COM                             found
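Once you have the local transaction identifier, the error transaction can be retried or discarded with the documented DBMS_APPLY_ADM procedures. A minimal sketch; the transaction ID and apply process name below are taken from the sample output above and are illustrative only:

```sql
-- Retry a single error transaction in the error queue
EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '1.62.948');

-- Retry every transaction in the error queue for one apply process
EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'APPLY_FROM_MULT3');

-- Discard a transaction that can never be applied
EXEC DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '1.62.948');
```

Fix the underlying cause (for example, the missing row behind ORA-01403) before re-executing, or the transaction will simply land back in the error queue.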

Oracle Streams Introduction

Posted by Sagar Patil

Streams comprises the following processes:

  1. Capture
  2. Instantiation (not shown in the image above)
  3. Propagate
  4. Apply

1) Capture process A capture process can capture the majority of database transactions; specifically, it captures DML and DDL changes. The capture process uses both LogMiner and Advanced Queuing to accomplish its task. (Note: Synchronous Capture uses internal triggers instead of LogMiner.)

2) Instantiation process Instantiation is composed of three steps:

  • Creating the object(s) at the destination Site
  • Updating the Streams data dictionary with metadata
  • Setting the Instantiation SCN for the object(s)

Once instantiation is complete, the instantiation SCN is the same at both the source and destination site(s), indicating to Streams that changes from this SCN forward should be captured, propagated, and applied for the destination.

Instantiation Levels include

  • Table Level
  • Schema Level
  • Database (Global) Level
  • Tablespace (this requires special steps)

Instantiation Methods

  • Data Pump
  • Transportable Tablespaces
  • RMAN for entire database
  • Manual method
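With the manual method, the instantiation SCN must be set explicitly at the destination. A minimal sketch, run at the destination site, assuming an illustrative source table SCOTT.EMP, a source global name SRC.EXAMPLE.COM, and a database link of that name back to the source (all three names are assumptions, not part of the original post):

```sql
DECLARE
  iscn NUMBER;  -- current SCN, read at the source over the database link
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@SRC.EXAMPLE.COM;
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'SCOTT.EMP',
    source_database_name => 'SRC.EXAMPLE.COM',
    instantiation_scn    => iscn);
END;
/
```

Data Pump and RMAN set the instantiation SCN for you as part of the import; only the manual method requires this explicit call.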

3) Propagate process The propagate process performs the actual propagation between the source and target queues. Propagation has two configuration options: queue-to-queue or queue-to-dblink. In a queue-to-queue configuration, each propagation has its own propagation job; queue-to-queue propagation is the recommended option for Streams in a RAC environment. The default queue-to-dblink configuration uses one shared propagation job.
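The choice between the two options is made when the propagation is created. A hedged sketch using DBMS_PROPAGATION_ADM; the queue, link, and propagation names here are illustrative, not from the original post:

```sql
BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'SCOTT_PROPAGATION',
    source_queue       => 'STRMADMIN.CAPTURE_Q',
    destination_queue  => 'STRMADMIN.APPLY_Q',
    destination_dblink => 'DEST.EXAMPLE.COM',
    queue_to_queue     => TRUE);  -- TRUE: own job, RAC-friendly; FALSE: shared queue-to-dblink job
END;
/
```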

4) Apply process The apply process is made up of the following components:

  • Reader server: Takes the LCRs and converts them into transactions, preserving transactional order and dependencies.
  • Coordinator process: Takes the transactions from reader server and sends them to Apply server. This process also monitors the Apply server to ensure that the transactions are applied in the correct order.
  • Apply server: Applies the LCR or message to the handler, either an Apply handler or message handler. Apply server also deals with placing the LCR or message into the appropriate error queue if it cannot be applied.

Oracle Streams Restrictions

  1. Capture Process Restrictions
  2. Apply Process Restrictions

Simply query the DBA_STREAMS_UNSUPPORTED view to find out why a particular table cannot be streamed.
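For example, to list the unsupported tables in a given schema along with the reason (the schema name is illustrative):

```sql
COLUMN TABLE_NAME FORMAT A30
COLUMN REASON FORMAT A40

SELECT TABLE_NAME, REASON, AUTO_FILTERED
  FROM DBA_STREAMS_UNSUPPORTED
 WHERE OWNER = 'SCOTT';
```

AUTO_FILTERED indicates whether the capture process automatically filters out changes to the table.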

1) Capture Process Restrictions

Unsupported Data Types for Capture Processes : A capture process does not capture the results of DML changes to columns of the following data types:

  • SecureFile CLOB, NCLOB, and BLOB
  • BFILE
  • ROWID
  • User-defined types (including object types, REFs, varrays, and nested tables)
  • XMLType stored object relationally or as binary XML
  • Oracle-supplied types: Any types, URI types, spatial types, and media types

Unsupported Changes for Capture Processes: A capture process never captures changes made to the following schemas:

CTXSYS, DBSNMP, DMSYS, DVSYS, EXFSYS, LBACSYS, MDDATA, MDSYS, OLAPSYS, ORDPLUGINS, ORDSYS, OUTLN, SI_INFORMTN_SCHEMA, SYS, SYSMAN, SYSTEM, WMSYS, XDB

Unsupported DML Changes for Capture Processes A capture process cannot capture changes made to an index-organized table if the index-organized table contains any columns of the following data types:

  • ROWID
  • User-defined types (including object types, REFs, varrays, and nested tables)
  • XMLType stored object relationally or as binary XML (XMLType stored as CLOB is supported.)
  • The following Oracle-supplied types: Any types, URI types, spatial types, and media types
A capture process also cannot capture DML changes made to temporary tables, object tables, or tables stored with segment compression enabled; it raises an error if it attempts to capture such changes.

If you share a sequence at multiple databases, the sequence values used for individual rows at these databases might vary, and changes to actual sequence values are not captured. For example, if a user references NEXTVAL or alters the sequence, a capture process does not capture the changes resulting from these operations.

Unsupported DDL Changes for Capture Processes A capture process captures the DDL changes that satisfy its rule sets, except for the following types of DDL changes:

  • ALTER DATABASE
  • CREATE CONTROLFILE
  • CREATE DATABASE
  • CREATE PFILE
  • CREATE SPFILE

Some types of DDL changes that are captured by a capture process cannot be applied by an apply process. If an apply process receives a DDL LCR that specifies an operation that cannot be applied, then the apply process ignores the DDL LCR and records information about it in the trace file for the apply process.

Changes Ignored by a Capture Process : A capture process ignores the following types of changes:

  • The session control statements ALTER SESSION and SET ROLE.
  • The system control statement ALTER SYSTEM.
  • CALL, EXPLAIN PLAN, and LOCK TABLE statements.
  • GRANT statements on views.
  • Changes made to a table or schema by online redefinition using the DBMS_REDEFINITION package. Online table redefinition is supported on a table for which a capture process captures changes, but the logical structure of the table before online redefinition must be the same as the logical structure after online redefinition.
  • Invocations of PL/SQL procedures, which means that a call to a PL/SQL procedure is not captured. However, if a call to a PL/SQL procedure causes changes to database objects, then these changes can be captured by a capture process if the changes satisfy the capture process rule sets.

NOLOGGING and UNRECOVERABLE Keywords for SQL Operations If you use the NOLOGGING or UNRECOVERABLE keyword for a SQL operation, then the changes resulting from the SQL operation cannot be captured by a capture process.
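To guard against unlogged operations silently slipping past capture, you can force redo generation at the source; a minimal sketch:

```sql
-- Database-wide: redo is generated even for NOLOGGING/UNRECOVERABLE operations
ALTER DATABASE FORCE LOGGING;

-- Or per tablespace, if only some of the data is replicated (tablespace name is illustrative)
ALTER TABLESPACE users FORCE LOGGING;
```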

Supplemental Logging Data Type Restrictions LOB, LONG, LONG RAW, user-defined type, and Oracle-supplied type columns cannot be part of a supplemental log group.
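For supported column types, supplemental log groups are added at the source database, for example (the table and log group names are illustrative):

```sql
-- Unconditional (ALWAYS) supplemental log group on the key column
ALTER TABLE scott.emp
  ADD SUPPLEMENTAL LOG GROUP log_group_emp_pk (empno) ALWAYS;

-- Or database-level supplemental logging of primary key columns
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```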

2.  Apply Process Restrictions

Unsupported Data Types for Apply Processes

An apply process does not apply row LCRs containing the results of DML changes in columns of the following data types:

  • SecureFile CLOB, NCLOB, and BLOB
  • BFILE
  • ROWID
  • User-defined types (including object types, REFs, varrays, and nested tables)
  • Oracle-supplied types: Any types, URI types, spatial types, and media types

The following types of DDL changes are not supported by an apply process. These types of DDL changes are not applied:

  • ALTER MATERIALIZED VIEW
  • ALTER MATERIALIZED VIEW LOG
  • CREATE DATABASE LINK
  • CREATE SCHEMA AUTHORIZATION
  • CREATE MATERIALIZED VIEW
  • CREATE MATERIALIZED VIEW LOG
  • DROP DATABASE LINK
  • DROP MATERIALIZED VIEW
  • DROP MATERIALIZED VIEW LOG
  • FLASHBACK DATABASE
  • RENAME

If an apply process receives a DDL LCR that specifies an operation that cannot be applied, then the apply process ignores the DDL LCR and records the following message in the apply process trace file, followed by the DDL text that was ignored:
Apply process ignored the following DDL:

See also: Oracle Support Master Note for Streams Setup Scripts (Doc ID 789445.1).

What is new at Oracle11g Streams?

Oracle Database 11g enhanced Streams replication with a 30 to 50 percent performance improvement for SQL Apply, and with support for XMLType stored as CLOB and transparent data encryption.
New manageability improvements include:

  1. Streams performance advisor
  2. Topology views
  3. Automatic Workload Repository (AWR) and ADDM support
  4. Data comparison utility and synchronous capture

New Advanced Queuing (AQ) features include:

  1. Java Message Service (JMS) performance improvements
  2. Direct streams AQ support in Java Database Connectivity (JDBC)
  3. Scalable event notification
  • Streams in 11g has much better monitoring and tracking than previous versions:
      o LCR tracking
      o Expanded Streams-specific error messages
      o Oracle Streams Performance Advisor
      o Better documentation and examples

  • Distinguishes between implicit and explicit consumption (direct bearing on my work)
  • Streams 11g switches from DBMS_JOB to DBMS_SCHEDULER for propagation

How does Streams work in a RAC environment?

When you create the queue in an instance of the database, the instance assumes the ownership of the associated queue table. The capture and apply processes run in the instance that owns the queue table associated with their respective queues.  In RAC environments, it is possible to specify the ownership of the queue table. You can specify the primary instance and secondary instance for a given queue table.

When the primary instance becomes unavailable, the secondary instance assumes the ownership of the queue, and the Streams processes are restarted on the secondary instance automatically. When the primary instance becomes available, the queue ownership and the Streams processes switch back
to the primary instance.

You can use the ALTER_QUEUE_TABLE procedure in the DBMS_AQADM package to define the primary and secondary instances for a queue table:

begin
  dbms_aqadm.alter_queue_table (
    queue_table        => 'CAP_Q_T',
    primary_instance   => 2,
    secondary_instance => 3);
end;
/

The view DBA_QUEUE_TABLES shows information about the queue table ownership and its current owner instance:

select queue_table,
       owner_instance,
       primary_instance,
       secondary_instance
  from dba_queue_tables
 where owner = 'STRMADMIN';
