Oracle Undocumented Parameters

Posted by Sagar Patil

select ksppinm
from x$ksppi
where substr(ksppinm,1,1) = '_';
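A common variation joins x$ksppi to x$ksppcv to show each hidden parameter's current value alongside its description. This is a sketch only: the x$ fixed tables are undocumented, version-dependent, and visible only to SYS.

```sql
-- Sketch: hidden parameters with current values and descriptions.
-- Assumes the x$ksppi/x$ksppcv fixed tables (undocumented, SYS only).
select i.ksppinm  parameter,
       v.ksppstvl current_value,
       i.ksppdesc description
from   x$ksppi  i,
       x$ksppcv v
where  i.indx = v.indx
and    substr(i.ksppinm,1,1) = '_'
order  by i.ksppinm;
```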

The following is a list of undocumented parameters.

1. _db_block_cache_protect
On VMS, the DB_BLOCK_CACHE_PROTECT mechanism has been made much faster. During normal use, having it turned on should not be noticeable (the degradation is less than 1%). Developers who link non-share will need the PSWAPM privilege to use this feature.

When DB_BLOCK_CACHE_PROTECT is turned on, developers may either use the VMS mailboxes with the M (MEMORY_LOG) command or examine the ring buffer in the PGA (index SMPREI_, array SMPREB_) to determine what buffer requests have been made recently. DB_BLOCK_CACHE_PROTECT will prevent certain corruption from reaching disk, although it may crash the foreground of the instance. It helps catch stray writes in the cache: an attempt to write past the buffer size in the SGA fails first with a stack violation. The parameter carries a significant performance overhead; preliminary testing showed a single update taking twice as long with the parameter set to TRUE.

2. _db_block_compute_checksums
This init.ora parameter controls whether a checksum is put into every block before the block is written to disk. The default is FALSE. A block read validates an existing checksum whether or not this option is enabled; a block is marked as corrupt if the checksum fails. This helps detect corruption due to hardware problems. The incarnation number and the sequence number are added to the end of the block to help catch corruption, but if the corruption is in the middle of the block this test will not detect it. To detect that case, a checksum may be generated in the block header before every write and verified on every read.
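The difference between the two checks can be sketched outside Oracle. This is a deliberately simplified illustration: the tail marker stands in for the incarnation/sequence check, and the XOR here is not Oracle's actual checksum algorithm, which is internal and port-specific.

```python
# Simplified sketch: end-of-block marker check vs. whole-block checksum.
# (Hypothetical; Oracle's real checksum algorithm is internal.)

def tail_marker_ok(block: bytes, marker: bytes) -> bool:
    # Emulates the incarnation/sequence check: only the block tail is verified.
    return block.endswith(marker)

def checksum(block: bytes) -> int:
    # A trivial XOR checksum over every byte of the block.
    c = 0
    for b in block:
        c ^= b
    return c

marker = b"\x01\x02"
good = b"payload-data" + marker
stored_sum = checksum(good)

# Corrupt a byte in the middle of the block, leaving the tail intact.
bad = b"payloXd-data" + marker

print(tail_marker_ok(bad, marker))    # the tail check misses mid-block damage
print(checksum(bad) == stored_sum)    # the whole-block checksum catches it
```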

3. _db_block_hash_buckets "Number of database block hash buckets"
The number of hash buckets is:
a) set to a prime number by default;
b) coerced to a prime if there is an init.ora parameter setting.
The value, if not a prime number > 2, is rounded up to the next highest prime. I would tend not to change it unless there is latch contention on the hash chains. Raising it to equal the number of buffers would clearly remove any contention (basically, each buffer would live on its own hash chain). Setting it too small means we might have to scan over lots of buffers to find the one we want. The default is believed to be 1/4 of the total number of buffers.
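The rounding behaviour described above can be sketched as follows. This illustrates the documented coercion only; it is not Oracle's internal routine.

```python
# Sketch: round a requested bucket count up to the next prime > 2
# (illustrates the coercion described in the text, not Oracle's code).

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def round_up_to_prime(n: int) -> int:
    n = max(n, 3)  # a prime number > 2 is required
    while not is_prime(n):
        n += 1
    return n

print(round_up_to_prime(1000))  # -> 1009
```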

4. _db_block_multiple_hashchain_latches “Use one latch per hash chain”

5. _db_handles “System-wide simultaneous buffer operations”

6. _db_handles_cached “Buffer handles cached each process”

7. _wait_for_sync "Wait for sync on commit"
WAIT_FOR_SYNC is a generic Oracle parameter which, when set to FALSE, allows the system to complete commits without waiting for the redo log buffer flush to complete.

8. _db_block_max_scan_cnt “Maximum number of buffers to inspect when  looking for free buffer”
DB_BLOCK_MAX_SCAN_CNT is an init.ora parameter which specifies the number of unavailable buffers a process should scan before signaling DBWR to write dirty buffers from the buffer cache to disk.

9. _db_writer_scan_depth
“Number of LRU buffers for dbwr to scan when looking for dirty buffers”

10a. _db_writer_scan_depth_increment
“Add to dbwr scan depth when dbwr is behind”

10b. _db_writer_scan_depth_decrement
Subtract from dbwr scan depth when dbwr is working too hard

11. _db_large_dirty_queue
"Number of buffers which force dirty queue to be written"

12. _db_block_write_batch
"Number of blocks to group in each DB Writer IO"
Specifies the number of blocks to be written to disk in one write operation. It should only be increased until the statistics Write Complete Waits and Write Wait Time start to grow. DBWR Free Low is the number of times DBWR is invoked because a user process found at least DB_BLOCK_WRITE_BATCH/2 buffers on the dirty list. Write Complete Waits is the number of times a process waited for DBWR to write a current block before making a change to a buffer.

13. _db_block_cache_clone
"Always clone data blocks on get (for debugging)"
This parameter setting has a significantly adverse effect on performance, and we were told to run without it.

14. _controlfile_enqueue_timeout
/* use parameter value (default is 900) */
/* System Parameter: control file enqueue timeout in seconds */

15. _db_no_mount_lock
If the hidden init.ora parameter _db_no_mount_lock is set to TRUE then no mount locks are acquired when the database is mounted exclusive. This allows two databases with the same name to be mounted simultaneously. Note that this circumvents the mechanism that prevents two simultaneous startups of the same database, and is thus a dangerous parameter to set. It only affects ports that are compiled with the SYS_DFS option enabled (i.e. VMS only).

With this set, no mount lock is taken on the database name during startup, so two instances with the same name can run on one machine.

16. _log_checkpoint_recovery_check
Check the redo log after checkpoints.
Adds debugging code to check the redo log after a checkpoint; it is intended to help find a problem that generates ORA-600 [3020] during recovery. The code is enabled with the init.ora parameter _log_checkpoint_recovery_check=XXX, where XXX is the number of redo blocks to check. It is called in LGWR after every checkpoint. If the parameter is zero (the default) it does nothing. If it is a positive value, that many blocks of redo are scanned to verify that the data file blocks on disk could be recovered if there were an immediate crash. This code was introduced to catch an elusive bug that occasionally produced OERI(3020) errors during crash recovery.

17. _switch_on_stuck_recovery
Checks redo log contents after a checkpoint: it is called in LGWR after every checkpoint; zero (the default) does nothing, while a positive value causes that many blocks of redo to be scanned to verify that the data file blocks on disk could be recovered after an immediate crash. Like _log_checkpoint_recovery_check, it was introduced to catch an elusive bug that occasionally produced OERI(3020) errors during crash recovery; the value is the number of redo blocks that kcracl will verify after every LGWR-generated checkpoint. This parameter is also used when opening a named offline log for redo application and when recalculating future change thread switching.

18. _log_io_size "redo log IO chunk size (blocks/write)"
/* System Parameter: IO chunk size */
1. The value is O/S dependent.
2. If left at 0, the value will be automatically determined for each log file.

19. _log_buffers_debug
/* debugging: fill redo buffers with [well known] junk after writes */
“debug redo buffers (slows things down)”

20. _log_debug_multi_instance
/* debugging : pretend multi-instance */
“debug redo multi instance code”

21. _log_entry_prebuild_threshold
/* redo entry pre-build threshold */
/* this is a bad idea for a uniprocessor, and is only helpful on a multiprocessor when there is latch contention */
LOG_ENTRY_PREBUILD_THRESHOLD determines the maximum size of a redo entry to prebuild before the copy to the log buffer. Increasing this parameter reduces the time that the redo copy latch is held. It should not be modified in a single-processor environment, where it would only add memory contention.

22. _disable_logging
If this is TRUE, redo records are not generated; no recovery is possible if the instance crashes. It is mainly used for getting good benchmarking results. Default is FALSE.

23. _log_blocks_during_backup
A TRUE value means that before any change is made to a db_block in the buffer cache, a *complete image* of the block is copied to the redo log. (This explains why excessive redo is generated for datafiles in hot backup mode.) This init.ora parameter controls whether block images are written to the redo log during hot backup.
The default is TRUE for VMS, but it is port specific, with the default defined in sparams.h. It may be set to FALSE if the Oracle block size equals the O/S physical sector size, or if it is otherwise ensured that hot backup reads consistent versions of blocks even while those blocks are being written. Put another way, it may be set to FALSE on machines that can guarantee the atomicity of a single block I/O request. The problem it guards against is split blocks.

24. _allow_resetlogs_corruption
A secret init.ora parameter added to override error 1138. When set to TRUE, the resetlogs option is allowed even if there are hot backups that need more redo applied. Unless you are certain that absolutely all redo, including the online logs, has been applied, a full export and import must then be done to ensure the database is internally consistent.

25. _reuse_index_loop
"number of blocks being examined for index block reuse"
/* secret system parameter to control how aggressively we walk the free
** list when attempting to reuse a block - default is 5.
** Set to 0 for fast index operation which is susceptible to growth,
** Set to > 5 for slower index operation but more aggressive reuse of blocks */
Controls the amount of work done when looking for a block to reuse for an index entry. The value determines the number of blocks to check on the freelist when looking for a reusable block.

26. _mts_load_constants
/* hidden init.ora to set server load balancing constants */
/* fill in load balancing parameters (from _mts_load_constants) */
* PID controller - calculate the control on the number of servers using:
* control = Kp * err + Kd * delta(err) + Ki * sum(err)
* where Kp = proportional constant, Kd = derivative constant, Ki = integral constant.
* Kp, Kd and Ki can be changed with the hidden parameter _mts_load_constants
* in order to tune the system as desired.
These values should only be changed after gathering enough information to determine that the MTS is not optimal.
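The control law quoted above is a standard discrete PID step. A minimal sketch follows; the gains and the error history are invented for illustration and have nothing to do with Oracle's actual constants.

```python
# Minimal discrete PID step, matching the quoted formula
#   control = Kp*err + Kd*delta(err) + Ki*sum(err).
# The gains and error values below are invented for illustration.

def pid_control(errors, kp, kd, ki):
    err = errors[-1]                                     # current error
    delta = errors[-1] - errors[-2] if len(errors) > 1 else 0
    total = sum(errors)                                  # accumulated error
    return kp * err + kd * delta + ki * total

# Example: error history (desired server load minus observed load).
history = [4, 3, 1]
print(pid_control(history, kp=0.5, kd=0.2, ki=0.1))
```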

27. _mts_fastpath
/* hidden init.ora to enable dispatcher fastpath */
Default is FALSE.
* Returns TRUE if the system should implement the dispatcher network
* fastpath. When enabled, the dispatcher fastpath causes network i/o
* to be buffered and only flushed when absolutely necessary. When not
* enabled, flushes occur for every dirty virtual circuit buffer.

28. _kgl_multi_instance_lock
"whether KGL to support multi-instance locks"
Only for debugging, as are all the _kgl_multi_instance_xxxx parameters.
Default is 0.

29. _kgl_multi_instance_pin
“whether KGL to support multi-instance pins”
Default is 0.

30. _kgl_multi_instance_invalidation
“whether KGL to support multi-instance invalidations”
Default is 0.

31. _row_cache_instance_locks
Kernel SQL Row cache management component, number of row cache instance
locks
default is 100

32. _row_cache_buffer_size
“size of row cache circular buffer”
default is 200

33. _messages
"message queue resources - dependent on # processes"
The asynchronous message mechanism allows processes to send messages to each other. A process may send a message to a specified other process (always without waiting for a reply), may wait for a message to arrive on its queue, and may obtain the next message. Messages sent to detached processes are reliably delivered. Messages sent to foreground processes are reliably delivered as long as the process is active. The mechanism also permits sending of a simple "reply", which is a one-byte message without queuing. It should normally be used to reply to asynchronous messages, and is a safer technique than regular messages for responding to foreground processes. This mechanism is not used in single process mode.

34. _cpu_count
ksb - Kernel Service Background processes
"number of CPUs for this instance"
CPU_COUNT has to be set on some platforms in order for Oracle to take advantage of a multi-processor system; on others it has no effect on performance, since load balancing between processors is handled by the O/S.

35. _debug_sga
/* Debug SGA: don't make the SGA a global section so we can set watchpoints */

36. _enqueue_locks
ksq1 - Kernel Service enQueues (1)
Definitions for enqueue client objects, "locks for managed enqueues"

37. _enqueue_hash
“enqueue hash table length”

38. _enqueue_debug_multi_instance
"debug enqueue multi instance"
KST is a trace facility used for "realtime" tracing of events. KST macros are defined that enable the tracing of such things as latch operations, memory assignments, etc. Tracing is done to a per-process circular buffer stored in the SGA. Access to these buffers is via fixed tables. Tracing is enabled for classes of events, particular events, and ranges of events.

The tracing state may be dynamically changed with the following syntax:
"ALTER TRACING"
- "ON": processes begin logging events using the currently enabled events
- "OFF": processes stop tracing
- "ENABLE": add the indicated events to those which are being traced
- "DISABLE": no longer trace the events specified

39. _trace_buffers_per_process
Note that by default, tracing is NOT enabled. In order to enable tracing at instance startup time, add _trace_buffers_per_process = 1.

40. _trace_block_size
_trace_xxxxx (xxxxx = buffers_per_process, block_size, archive_start,
flushing, enabled, get_time_every, archive_dest etc.)
These parameters are only there for debugging purposes. Customers
or support will never have to use them.

41. _trace_archive_start
“start trace process on SGA initialization”

42. _trace_flushing
“TRWR should try to keep tracing buffers clean”

43. _trace_enabled
By default, tracing is NOT enabled. In order to enable tracing,
_trace_enabled = true

44. _trace_events
If you wish to enable tracing of waits at instance startup time, you can either add the line _trace_events = "WAT,ALL" to init.ora or execute alter tracing enable "WAT,ALL" in a sqldba session.
If you wish to limit tracing to specific events, you can use the following syntax:
alter tracing enable "WAT,<id>,..."
where <id> is either a specific event number, or an event range (event number 1 - event number 2).

45. _trace_archive_dest “trace archival destination”

46. _trace_file_size “trace file size”  default is 10000 blocks

47. _trace_write_batch_size “trace write batch size”  default is 32

48. _rollback_segment_initial "starting undo segment number"
Default is 1. DO NOT SPECIFY 0, AS THAT HAPPENS TO BE THE SYSTEM ROLLBACK SEGMENT.

49. _rollback_segment_count “number of undo segments”  default is 0

50. _offline_rollback_segments
If a rollback segment is not accessible because the file it is in is offline or corrupted, one can force the system to come up without the rollback segment by naming the rollback segment in the init.ora parameter '_offline_rollback_segments'. The system will come up by estimating the current system commit time, since it cannot access the transaction table in the rollback segment. The system commit number is a conservative guess based on the current time, the database creation time, and an assumed transaction rate of 8000 tps. ONE MUST MAKE SURE THAT THE SYSTEM TIME IS SET CORRECTLY WHEN FORCING THE SYSTEM UP USING '_OFFLINE_ROLLBACK_SEGMENTS'. A trace file is written with information about the estimated system commit number.
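The conservative guess described above amounts to simple arithmetic. A sketch follows; only the 8000 transactions-per-second assumption comes from the text, the rest is a hypothetical illustration of why the estimate is an upper bound.

```python
# Sketch of the conservative SCN estimate described above (hypothetical;
# only the 8000 tps assumption comes from the text).

def estimate_scn(seconds_since_creation: float, tps: int = 8000) -> int:
    # Assume the worst case: the database committed at the assumed rate
    # for its entire lifetime, so the guess is an upper bound on the
    # real system commit number.
    return int(seconds_since_creation * tps)

# e.g. a database created one day ago:
print(estimate_scn(24 * 3600))  # -> 691200000
```

This also shows why the system clock must be correct: an understated clock would produce an estimate below commit numbers already in use.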

51. _corrupted_rollback_segments Mark a rollback segment as corrupted.

52. _label_tag_cache_size
/* hidden size of the SGA label tag comparison cache (bitcache) */
“number of tags in the label tag comparison cache”
default is 200

53. _trace_get_time_every
“Number of trace sequence numbers per call to slgcs()”
default is 200

54. _vms_bg_priority
“VMS process priority to give to background processes”
default is 4

55. _sort_use_os_files_as_temporaries
Use O/S files rather than temp segments for sorting.

56. _log_checkpoints_to_alert
Whether checkpoint messages should be written to alert.log or not. Turned
off in benchmarks.

57. _large_memory_system
Used in internal benchmarks; doesn't concern us.
"Configure system to use memory and save cpu cycles, paging, etc"
Default is FALSE.

58. _wakeup_timeout
This is WMON's sleep time between checks of its queue of processes to wake.

59. _latch_wait_posting
Enables posting of processes that may be waiting for a latch after a process frees the same latch (set this parameter to a value greater than one for it to take effect).

60. _sql_connect_capability_code
Allows database links from a pre-7.0.15 release to work with release 7.1. It is unclear whether this parameter is also necessary for database links from V6.

ORA-00312: Message 312 not found; Solution

Posted by Sagar Patil

Applies to: Oracle Server – Enterprise Edition – Version: 9.2.0.6.0
This problem can occur on any platform.
Symptoms
When NLS_LANGUAGE is set to ENGLISH and NLS_TERRITORY is set to AUSTRALIA, the text related to archival messages is not displayed. The following contents are shown in the alert log:

Errors in file /smartdi/d03/oracle/SMTRMAN/admin/bdump/smtrman_arc0_659582.trc:
ORA-16014: Message 16014 not found; No message file for product=RDBMS,facility=ORA;
ORA-00312: Message 312 not found; No message file for product=RDBMS,facility=ORA;

But when they are set back to NLS_LANGUAGE=AMERICAN and NLS_TERRITORY=AMERICA, it works fine.

Cause
Bug 4600519 "MESSAGE NOT FOUND ERRORS WHEN NLS_LANGUAGE IS SET TO ENGLISH"
The reason is the missing oragb.msg/oragb.msb and oraau.msg/oraau.msb files.

When NLS_LANGUAGE is set to ENGLISH, Oracle looks for the message file oragb.msg/oragb.msb,
which it is not able to find, hence the behaviour.

Solution

Navigate to $ORACLE_HOME/rdbms/mesg and copy the message files oraus.msb/oraus.msg to oragb.msb/oragb.msg:
$ cp oraus.msb oragb.msb
$ cp oraus.msg oragb.msg

Flashback Query

Posted by Sagar Patil

Oracle Flashback Query allows users to see a consistent view of the database as it was at a point in the past. This functionality allows comparative reporting over time and recovery from logical corruptions.

Prerequisites

Oracle Flashback Query can only be used if the server is configured to use Automatic Undo Management rather than traditional rollback segments. The maximum period that can be flashed back to is defined using the UNDO_RETENTION parameter in the init.ora file. Alternatively, this parameter can be set using:

ALTER SYSTEM SET UNDO_RETENTION = <seconds>;

Using Flashback Query

Flashback Query is enabled and disabled using the DBMS_FLASHBACK package. The point in time of the flashback can be specified using the SCN or the actual time:

EXECUTE Dbms_Flashback.Enable_At_System_Change_Number(123);
EXECUTE Dbms_Flashback.Enable_At_Time('28-AUG-01 11:00:00');

Once you've finished performing all your read-only operations, you can turn off Flashback Query using:

EXECUTE Dbms_Flashback.Disable;

The flashback query is turned off when the session ends if there is no explicit call to the disable procedure. The current system change number can be returned using the Get_System_Change_Number function.
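For example, the current SCN can be captured with that function before making changes, so it can later be passed to Enable_At_System_Change_Number. A small sketch:

```sql
-- Capture the current SCN for later use with
-- Dbms_Flashback.Enable_At_System_Change_Number.
SET SERVEROUTPUT ON
DECLARE
  v_scn NUMBER;
BEGIN
  v_scn := Dbms_Flashback.Get_System_Change_Number;
  DBMS_OUTPUT.PUT_LINE('Current SCN: ' || v_scn);
END;
/
```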
Example Data Recovery
Assuming all data was accidentally deleted from the EMPLOYEES table at 9:05AM we could recover it using:

DECLARE
  CURSOR c_emp IS
    SELECT *
    FROM   employees;
  v_row c_emp%ROWTYPE;
BEGIN
  Dbms_Flashback.Enable_At_Time('28-AUG-01 09:00:00');
  OPEN c_emp;
  Dbms_Flashback.Disable;

  LOOP
    FETCH c_emp INTO v_row;
    EXIT WHEN c_emp%NOTFOUND;
    INSERT INTO employees VALUES
      (v_row.employee_id, v_row.first_name,
       v_row.last_name, v_row.email,
       v_row.phone_number, v_row.hire_date,
       v_row.job_id, v_row.salary,
       v_row.commission_pct, v_row.manager_id,
       v_row.department_id, v_row.dn);
  END LOOP;
  CLOSE c_emp;
  COMMIT;
END;
/

Notice that the Flashback Query session is disabled after the cursor is created so that the DML operations can be performed to recover the data.

Restrictions

* The server must be configured to use Automatic Undo Management.
* No DDL or DML can be issued when using Flashback Query.
* Flashback Query does not reverse DDL operations such as DROP, only DML alterations to the data.
* Flashback Query does not apply to code objects (packages, procedures, functions or triggers). If invoked, the current definition will be executed against the flashback data.

Function Based Indexes

Posted by Sagar Patil

Traditionally, performing a function on an indexed column in the where clause of a query guaranteed an index would not be used. Oracle 8i introduced Function Based Indexes to counter this problem. Rather than indexing a column, you index the function on that column, storing the product of the function, not the original column data. When a query is passed to the server that could benefit from that index, the query is rewritten to allow the index to be used. The following code samples give an example of the use of Function Based Indexes:

* Build Test Table
* Build Regular Index
* Build Function Based Index

Build Test Table
First we build a test table and populate it with enough data so that use of an index would be advantageous:

Code:
CREATE TABLE user_data (
id          NUMBER(10)    NOT NULL,
first_name  VARCHAR2(40)  NOT NULL,
last_name   VARCHAR2(40)  NOT NULL);

BEGIN
  FOR cur_rec IN 1 .. 2000 LOOP
    IF MOD(cur_rec, 2) = 0 THEN
      INSERT INTO user_data
      VALUES (cur_rec, 'John' || cur_rec, 'Doe');
    ELSE
      INSERT INTO user_data
      VALUES (cur_rec, 'Jayne' || cur_rec, 'Doe');
    END IF;
    COMMIT;
  END LOOP;
END;
/

ANALYZE TABLE user_data COMPUTE STATISTICS;

At this point the table is not indexed so we would expect a full table scan for any query:

SET AUTOTRACE ON
SELECT *
FROM   user_data
WHERE  Upper(first_name) = 'JOHN2';

Execution Plan
———————————————————-
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=10)
   1    0   TABLE ACCESS (FULL) OF 'USER_DATA' (Cost=1 Card=1 Bytes=10)

Build Regular Index
If we now create a regular index on the FIRST_NAME column we see that the index is not used:

CREATE INDEX first_name_idx ON user_data (first_name);
ANALYZE TABLE user_data COMPUTE STATISTICS;

SET AUTOTRACE ON
SELECT *
FROM   user_data
WHERE  Upper(first_name) = 'JOHN2';

Execution Plan
———————————————————-
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=10)
   1    0   TABLE ACCESS (FULL) OF 'USER_DATA' (Cost=1 Card=1 Bytes=10)

Build Function Based Index
If we now replace the regular index with a function based index on the FIRST_NAME column we see that the index is used:

DROP INDEX first_name_idx;
CREATE INDEX first_name_idx ON user_data (UPPER(first_name));
ANALYZE TABLE user_data COMPUTE STATISTICS;

ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;

SET AUTOTRACE ON
SELECT *
FROM   user_data
WHERE  Upper(first_name) = 'JOHN2';
   
Execution Plan
———————————————————-
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=14)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'USER_DATA' (Cost=2 Card=1 Bytes=14)
   2    1     INDEX (RANGE SCAN) OF 'FIRST_NAME_IDX' (NON-UNIQUE) (Cost=1 Card=1)

The QUERY_REWRITE_INTEGRITY and QUERY_REWRITE_ENABLED parameters must be set or the server will not be able to rewrite the queries, and will therefore not be able to use the new index. These parameters may be set in the init.ora file if they are needed permanently.

When to Rebuild a B-Tree Index

Posted by Sagar Patil

You have to periodically check your indexes to see if they become skewed and, therefore, good candidates for rebuild.

A skewed index has many records clumped close together on the index tree due to their similar indexed values. When an index is skewed, parts of an index are accessed more frequently than others. As a result, disk contention may occur, creating a bottleneck in performance. It is important to periodically examine your indexes to determine if they have become skewed and might need to be rebuilt.

Here is a sample procedure on how to identify the skewed indexes:

1. Gather statistics on your indexes. For large indexes (over one hundred thousand records in the underlying table), use ESTIMATE STATISTICS instead of COMPUTE STATISTICS.

For example:   SQL> analyze index A1_PK compute statistics;
Index analyzed.

2. Run the script given below – index_check.sql – to find out how skewed each index is.

This query checks on all indexes that belong to user SCOTT:

SQL> select index_name, blevel,
decode(blevel,0,'OK BLEVEL',1,'OK BLEVEL',
2,'OK BLEVEL',3,'OK BLEVEL',4,'OK BLEVEL','BLEVEL HIGH') OK
from dba_indexes
where owner='SCOTT';

INDEX_NAME                                BLEVEL  OK
----------------------------------------  ------  -----------
A1_PK                                             BLEVEL HIGH
A1_UK                                             BLEVEL HIGH
BUDVERPORT_BUDVERPORT2_UK                 1       OK BLEVEL
BUDVERPORT_BV_FK_I                        2       OK BLEVEL
BUDVERPORT_CHAR_CL_FK_I                   1       OK BLEVEL
BUDVERPORT_DIRCTE_FK_I                    3       OK BLEVEL
BUDVERPORT_FUND_TYPE_FK_I                 1       OK BLEVEL
BUDVERPORT_OMB_SUBFCT_FK_I                1       OK BLEVEL
BUDVERPORT_ORG_FK_I                       0       OK BLEVEL
BUDVERPORT_PL_TITLE_FK_I                  1       OK BLEVEL
BUDVERPORT_RDC_FK_I                       1       OK BLEVEL
S_INVENTORY_PRODID_WARID_PK                       BLEVEL HIGH
S_ITEM_ORDID_ITEMID_PK                            BLEVEL HIGH
S_ITEM_ORDID_PRODID_UK                            BLEVEL HIGH
S_LONGTEXT_ID_PK                                  BLEVEL HIGH
S_ORD_ID_PK                                       BLEVEL HIGH
S_PRODUCT_ID_PK                                   BLEVEL HIGH
S_PRODUCT_NAME_UK                                 BLEVEL HIGH
S_REGION_ID_PK                                    BLEVEL HIGH
S_REGION_NAME_UK                                  BLEVEL HIGH
S_TITLE_TITLE_PK                                  BLEVEL HIGH
S_WAREHOUSE_ID_PK                                 BLEVEL HIGH

3. The BLEVEL (or branch level) is part of the B-tree index format and relates to the number of times Oracle has to narrow its search on the index while searching for a particular record. In some cases, a separate disk hit is requested for each BLEVEL. Prior to 9i, if the BLEVEL was more than 4 it was recommended to rebuild the index. As databases get bigger and bigger, BLEVEL may no longer be a good indicator of when to rebuild; BLEVEL > 4 may instead suggest evaluating whether the partitioning option could help you.

Note: If you do not analyze the index, the index_check.sql script will
show “BLEVEL HIGH” for such an index.

4. Gather more index statistics using the VALIDATE STRUCTURE option of the ANALYZE command to populate the INDEX_STATS virtual table. This table does not contain an OWNER column and assumes you are looking for statistics for indexes created by your active session only.

SQL> analyze index SCOTT.ORG_PK validate structure;
Index analyzed.

SQL> select DEL_LF_ROWS*100/decode(LF_ROWS, 0, 1, LF_ROWS) PCT_DELETED,
(LF_ROWS-DISTINCT_KEYS)*100/ decode(LF_ROWS,0,1,LF_ROWS) DISTINCTIVENESS
from index_stats  where NAME='&index_name';

Enter value for index_name: ORG_PK

PCT_DELETED DISTINCTIVENESS
----------- ---------------
          0               0

The PCT_DELETED column shows what percent of leaf entries (index entries) have been deleted and remain unfilled. The more deleted entries exist on an index, the more unbalanced the index becomes. If the PCT_DELETED is 20% or higher, the index is candidate for rebuilding. If you can afford to rebuild indexes more frequently, then do so if the value is higher than 10%. Leaving indexes with high PCT_DELETED without rebuild might cause excessive redo allocation on some systems.
The DISTINCTIVENESS column shows how often a value for the column(s) of the index is repeated on average.

For example, if a table has 10000 records and 9000 distinct SSN values, the formula would result in
(10000-9000) x 100 / 10000 = 10. This shows a good distribution of values.

If, however, the table has 10000 records and only 2 distinct SSN values, the formula would result in (10000-2) x 100 /10000 = 99.98. This shows that there are very few distinct values as a percentage of total records in the column. Such columns are not candidates for a rebuild but good candidates for bitmapped indexes.
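The two worked examples above can be checked with quick arithmetic. The sketch below simply mirrors the INDEX_STATS expressions used in the query earlier in this section.

```python
# Mirrors the INDEX_STATS expressions from the query above:
#   PCT_DELETED     = del_lf_rows * 100 / lf_rows
#   DISTINCTIVENESS = (lf_rows - distinct_keys) * 100 / lf_rows
# (max(lf_rows, 1) reproduces the DECODE guard against division by zero)

def pct_deleted(del_lf_rows: int, lf_rows: int) -> float:
    return del_lf_rows * 100 / max(lf_rows, 1)

def distinctiveness(lf_rows: int, distinct_keys: int) -> float:
    return (lf_rows - distinct_keys) * 100 / max(lf_rows, 1)

print(distinctiveness(10000, 9000))  # -> 10.0  (well-distributed values)
print(distinctiveness(10000, 2))     # -> 99.98 (very few distinct values)
```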

Index Monitoring

Posted by Sagar Patil

Index monitoring could be initiated and stopped using ALTER INDEX syntax shown below.

ALTER INDEX my_index_i MONITORING USAGE;
ALTER INDEX my_index_i NOMONITORING USAGE;

Information about the index usage can be displayed using the V$OBJECT_USAGE view.

SELECT index_name,
table_name,
monitoring,
used,
start_monitoring,
end_monitoring
FROM v$object_usage
WHERE index_name = 'MY_INDEX_I'
ORDER BY index_name; 

The V$OBJECT_USAGE view does not contain an OWNER column, so you must log on as the object owner to see the usage data.

Foreign Key Indexes
To quote the Oracle Database Concepts manual:
“You should almost always index foreign keys. The only exception is when the matching unique or primary key is never updated or deleted.”

When a foreign key is unindexed, DML on the parent primary key results in a share row exclusive table lock (or share-subexclusive table lock, SSX) on the child table, preventing DML from other transactions against the child table. If the DML affects several rows in the parent table, the lock on the child table is obtained and released immediately for each row in turn. Despite the speed of the lock-release process, this can cause significant amounts of contention on the child table during periods of heavy update/delete activity on the parent table.

When a foreign key is indexed, DML on the parent primary key results in a row share table lock (or subshare table lock, SS) on the child table. This type of lock prevents other transactions from issuing whole table locks on the child table, but does not block DML on either the parent or the child table. Only the rows relating to the parent primary key are locked in the child table.
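As a sketch (the table and index names here are hypothetical, invented for illustration), indexing the foreign key column is a single extra statement:

```sql
-- Hypothetical parent/child pair: with emp_dept_fk_i in place, DML on
-- dept's primary key takes only a row share (SS) lock on emp instead of
-- a share row exclusive (SSX) lock.
CREATE TABLE dept (
  dept_id NUMBER PRIMARY KEY,
  name    VARCHAR2(30)
);

CREATE TABLE emp (
  emp_id  NUMBER PRIMARY KEY,
  dept_id NUMBER REFERENCES dept (dept_id)
);

CREATE INDEX emp_dept_fk_i ON emp (dept_id);
```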

Building a virtual index using the NOSEGMENT clause.

Posted by Sagar Patil

Let's create a virtual index:

07:59:12 orcl> create index hr.emp2_emp_id_virtual on hr.employees2(employee_id) nosegment;
Index created.

I am going to set the hidden parameter "_use_nosegment_indexes" to TRUE so that our session will recognize the new virtual index.

08:00:09 orcl> alter session set "_use_nosegment_indexes" = true;

Running our statement again to see if it will use the new virtual index. Check out the access path below: the optimizer has chosen the virtual index.

select employee_id, a.department_id, b.department_name from
hr.departments b, hr.employees2 a where
a.department_id = b.department_id
and employee_id = 203

Execution Plan
———————————————————-
Plan hash value: 2516110069
—————————————————————————————————-
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
—————————————————————————————————-
| 0 | SELECT STATEMENT | | 1 | 25 | 3 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 25 | 3 (0)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES2 | 1 | 9 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | EMP2_EMP_ID_VIRTUAL | 1 | | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DEPARTMENTS | 1 | 16 | 1 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN | DEPT_ID_PK | 1 | | 0 (0)| 00:00:01 |
—————————————————————————————————-

If I set "_use_nosegment_indexes" to FALSE, the optimizer does NOT choose the virtual index.

08:01:09 orcl> alter session set "_use_nosegment_indexes" = false;
Session altered.

08:01:33 orcl> select employee_id, a.department_id, b.department_name
08:01:47 2 from
08:01:47 3 hr.departments b, hr.employees2 a
08:01:47 4 where
08:01:47 5 a.department_id = b.department_id
08:01:47 6 and employee_id = 203;

Execution Plan
———————————————————-
Plan hash value: 2641883601
——————————————————————————————–

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
——————————————————————————————–
| 0 | SELECT STATEMENT | | 1 | 25 | 818 (3)| 00:00:10 |
| 1 | NESTED LOOPS | | 1 | 25 | 818 (3)| 00:00:10 |
|* 2 | TABLE ACCESS FULL | EMPLOYEES2 | 1 | 9 | 817 (3)| 00:00:10 |
| 3 | TABLE ACCESS BY INDEX ROWID| DEPARTMENTS | 1 | 16 | 1 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | DEPT_ID_PK | 1 | | 0 (0)| 00:00:01 |
——————————————————————————————–

Executing DBMS_STATS to gather statistics on both the virtual and standard index. I have run tests with statistics and without and it does seem to affect virtual index access paths.

08:21:57 orcl> exec dbms_stats.gather_index_stats('HR', 'EMP2_EMP_ID_NON_VIRTUAL');
PL/SQL procedure successfully completed.

08:23:10 orcl> exec dbms_stats.gather_index_stats('HR', 'EMP2_EMP_ID_VIRTUAL');

PL/SQL procedure successfully completed.

Looking for information on indexes built on the EMPLOYEES2 table.
Oracle returns a row for the standard index but not the virtual index.

08:20:31 orcl> select index_name, last_analyzed from dba_indexes where table_name = 'EMPLOYEES2';

INDEX_NAME LAST_ANAL
—————————— ———
EMP2_EMP_ID_NON_VIRTUAL 31-MAY-07

Determining if we can find the virtual index in DBA_SEGMENTS. No success.

08:26:09 orcl> select segment_name, segment_type from dba_segments where segment_name like 'EMP2%';
SEGMENT_NAME SEGMENT_TYPE
——————– ——————
EMP2_EMP_ID_NON_VIRT INDEX
UAL

Looking for the virtual index in DBA_OBJECTS. Finally, we find some evidence that the virtual index exists in the database!

08:30:21 orcl> col object_name for a30
08:30:29 orcl> r
1 select object_name, object_type, created, status, temporary
2* from dba_objects where object_name like 'EMP2%'

OBJECT_NAME OBJECT_TYPE CREATED STATUS T
—————————— ——————- ——— ——- –
EMP2_EMP_ID_NON_VIRTUAL INDEX 31-MAY-07 VALID N
EMP2_EMP_ID_VIRTUAL INDEX 31-MAY-07 VALID N

Selecting the Best Join Order : Methods Oracle Uses to Join Tables

Posted by Sagar Patil

Choose the join order that gives you the fewest rows to be joined to the other tables. That is, if you are joining three tables, the one with the most restrictive filter should be joined first to one of the other two tables.

When a SQL statement includes a join between two or more tables, the order in which the tables are joined is extremely important.
– The driving table in a join is the first table that comes after the WHERE clause.
– The driving table in the join should contain the filter that will eliminate the most rows.
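For testing, the join order can be forced with the ORDERED hint, which joins the tables in the sequence they appear in the FROM clause (the table names below are illustrative):

SELECT /*+ ORDERED */ e.employee_id, d.department_name
FROM   hr.employees e,     -- driving table: carries the most restrictive filter
       hr.departments d
WHERE  e.department_id = d.department_id
AND    e.employee_id   = 203;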

Nested loops Join
Sort-merge Join
Hash Join
Cluster Join

Nested Loops Joins
For each relevant row in the first table, find all matching rows in the other table.
Suppose you have a list of all public officials in your city and a telephone book for your city. How would you go about getting the names and telephone numbers of each council member?

You could:
– Go down the list of public officials
– Discard all names except council members
– Look up the remaining names in the telephone book one at a time
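A nested loops join can be requested explicitly with the USE_NL hint; a sketch against the HR sample schema:

SELECT /*+ USE_NL(d) */ e.employee_id, d.department_name
FROM   hr.employees e, hr.departments d
WHERE  e.department_id = d.department_id;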

Sort-Merge Joins
Sort all of the relevant rows in the first table by the join key, likewise sort the second table, and then merge the two together.

Suppose you and your fiancée each have a huge compact disc collection. You buy a house together and decide to combine collections. How would you eliminate the duplicate CDs?

You could:
– Alphabetize each collection by artist
– Compare collections side-by-side
– Extract the discs that appear in both
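A sort-merge join can be requested with the USE_MERGE hint (again only a sketch):

SELECT /*+ USE_MERGE(e d) */ e.employee_id, d.department_name
FROM   hr.employees e, hr.departments d
WHERE  e.department_id = d.department_id;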

Hash Join
Build a hash table in memory of all relevant rows from the first table. For each relevant row in the second table, probe the hash table to find the matches.

A hash join is like a nested loops join, except that preparatory work is done to one of the two tables to speed up the join process.

Consider the nested loops join example with the council members' phone numbers, and imagine that the telephone book is not alphabetized by name.
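A hash join can be requested with the USE_HASH hint; the smaller row source is normally the one hashed into memory:

SELECT /*+ USE_HASH(d) */ e.employee_id, d.department_name
FROM   hr.employees e, hr.departments d
WHERE  e.department_id = d.department_id;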

Cluster Joins
For each relevant row in the first table, find all matching rows in the other table by looking at the cluster index. The matching rows will probably be conveniently located in the same data block.

Consider a stack of purchase orders.
Each PO can have any number of items on it. When you pick up one PO, you have easy access to all of its items.
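A cluster join is only possible when both tables are stored in the same cluster, keyed on the join column. A minimal sketch (all names are illustrative):

CREATE CLUSTER po_cluster (po_number NUMBER(10));
CREATE INDEX po_cluster_idx ON CLUSTER po_cluster;
CREATE TABLE purchase_orders (
  po_number NUMBER(10),
  vendor    VARCHAR2(30))
  CLUSTER po_cluster (po_number);
CREATE TABLE po_items (
  po_number NUMBER(10),
  item      VARCHAR2(30))
  CLUSTER po_cluster (po_number);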


Sid from Unix Process ID

Posted by Sagar Patil

Here is a situation: this afternoon I saw CPU usage go through the roof on a live system. There are about six Oracle instances, and one is chewing more than 90% of CPU time.

– I want to know which instance/process is the culprit

  PID USERNAME PRI NICE  SIZE   RES STATE   TIME    CPU COMMAND
638523 oracle    52    0 1360M  503M run   574:22 99.10% oracle
958998 oracle    42    0 1363M  506M run   518:12 46.20% oracle
982071 oracle    42    0 1360M  502M WAIT  618:58 38.40% oracle


select s.username, s.status,  s.sid,     s.serial#,
       p.spid,     s.machine, s.process, s.lockwait
from   v$session s, v$process p
where  p.spid     = &oracle_pid
and    s.paddr    = p.addr;

Replace '&oracle_pid' with 958998

USERNAME    STATUS    SID    SERIAL#    SPID    MACHINE    PROCESS    LOCKWAIT
SYS         ACTIVE    241    38945      958998   

Find out SGA PGA Size

Posted by Sagar Patil

SET LINESIZE 145
SET PAGESIZE 9999

COLUMN sid                     FORMAT 999            HEADING 'SID'
COLUMN oracle_username         FORMAT a12            HEADING 'Oracle User'     JUSTIFY right
COLUMN os_username             FORMAT a9             HEADING 'O/S User'        JUSTIFY right
COLUMN session_program         FORMAT a18            HEADING 'Session Program' TRUNC
COLUMN session_machine         FORMAT a8             HEADING 'Machine'         JUSTIFY right TRUNC
COLUMN session_pga_memory      FORMAT 9,999,999,999  HEADING 'PGA Memory'
COLUMN session_pga_memory_max  FORMAT 9,999,999,999  HEADING 'PGA Memory Max'
COLUMN session_uga_memory      FORMAT 9,999,999,999  HEADING 'UGA Memory'
COLUMN session_uga_memory_max  FORMAT 9,999,999,999  HEADING 'UGA Memory MAX'

SELECT
s.sid                sid
, lpad(s.username,12)  oracle_username
, lpad(s.osuser,9)     os_username
, s.program            session_program
, lpad(s.machine,8)    session_machine
, (select ss.value from v$sesstat ss, v$statname sn
where ss.sid = s.sid and
sn.statistic# = ss.statistic# and
sn.name = 'session pga memory')        session_pga_memory
, (select ss.value from v$sesstat ss, v$statname sn
where ss.sid = s.sid and
sn.statistic# = ss.statistic# and
sn.name = 'session pga memory max')    session_pga_memory_max
, (select ss.value from v$sesstat ss, v$statname sn
where ss.sid = s.sid and
sn.statistic# = ss.statistic# and
sn.name = 'session uga memory')        session_uga_memory
, (select ss.value from v$sesstat ss, v$statname sn
where ss.sid = s.sid and
sn.statistic# = ss.statistic# and
sn.name = 'session uga memory max')    session_uga_memory_max
FROM
v$session  s
ORDER BY session_pga_memory DESC
/

SID  Oracle User  O/S User Session Program     Machine     PGA Memory PGA Memory Max     UGA Memory UGA Memory MAX
—- ———— ——— —————— ——– ————– ————– ————– ————–
14          SYS    oracle sqlplus@bmau32.uk. bmau32.u     14,040,528     14,040,528        209,312        209,312
6                 oracle oracle@bmau32.uk.p bmau32.u     10,670,592     10,670,592         78,496         78,496
5                 oracle oracle@bmau32.uk.p bmau32.u     10,378,816     10,378,816         78,496         78,496
31          SYS                                            5,504,568     26,138,992      4,856,512     24,737,312
13                 oracle oracle@bmau32.uk.p bmau32.u      4,660,064      4,660,064         78,496         78,496
12                 oracle oracle@bmau32.uk.p bmau32.u      4,618,136      4,618,136         78,496         78,496

Statspack Scripts

Posted by Sagar Patil

Installation
The following scripts must be run as a user with SYSDBA privilege:
* SPCREATE.SQL: Creates the entire Statspack environment (calls SPCUSR.SQL, SPCTAB.SQL, SPCPKG.SQL)
* SPDROP.SQL: Drops the entire Statspack environment (calls SPDTAB.SQL, SPDUSR.SQL)

The following scripts are run as a user with SYSDBA privilege by the preceding calling scripts:
* SPDTAB.SQL: Drops Statspack tables
* SPDUSR.SQL: Drops the Statspack user (PERFSTAT)

The following scripts are run as PERFSTAT by the preceding calling scripts:
* SPCUSR.SQL: Creates the Statspack user (PERFSTAT)
* SPCTAB.SQL: Creates Statspack tables
* SPCPKG.SQL: Creates the Statspack package

Scripts for Statspack Reporting and Automation

The following scripts must be run as PERFSTAT:
* SPREPORT.SQL: Generates a Statspack report
* SPREPSQL.SQL: Generates a Statspack SQL report for the specific SQL hash value specified
* SPREPINS.SQL: Generates a Statspack report for the database and instance specified
* SPAUTO.SQL: Automates Statspack statistics collection (using DBMS_JOB)

Scripts for Statspack Performance Data Maintenance
The following scripts must be run as PERFSTAT:
* SPPURGE.SQL: Purges a limited range of Snapshot IDs for a given database instance.
* SPTRUNC.SQL: Truncates all performance data in Statspack tables
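A typical first run from SQL*Plus looks like this (the scripts live under ?/rdbms/admin in a standard installation):

SQL> CONNECT / AS SYSDBA
SQL> @?/rdbms/admin/spcreate.sql
SQL> CONNECT perfstat
SQL> EXECUTE statspack.snap;
-- run your workload, take a second snapshot with statspack.snap, then:
SQL> @?/rdbms/admin/spreport.sql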

API to Retrieve runtime Replication Parameters

Posted by Sagar Patil

We often get thousands of replication conflicts/errors on the current system. The usual resolution is to look at the EM console and resolve each error manually. Please see the examples below.

This approach works well as long as there are no more than a few thousand error transactions in DEFERROR. Anything above that causes a performance hit. DEFERROR/DEFTRANS also become highly fragmented, adding to the problem. A solution is to write a PL/SQL API that retrieves runtime parameters as and when needed, rather than using the EM console to retrieve millions of rows.

SELECT argcount, schemaname, packagename, procname
FROM defcall WHERE  deferred_tran_id in
(select  deferred_tran_id from deftran where rownum <100)

ARGCOUNT	SCHEMANAME	PACKAGENAME	PROCNAME
17	CVD_SYSTEM	VOUCHER	REP_UPDATE
9	CVD_SYSTEM	VOUCHER_EVENT	REP_INSERT
17	CVD_SYSTEM	VOUCHER	REP_UPDATE
9	CVD_SYSTEM	VOUCHER_EVENT	REP_INSERT
17	CVD_SYSTEM	VOUCHER	REP_UPDATE
9	CVD_SYSTEM	VOUCHER_EVENT	REP_INSERT
17	CVD_SYSTEM	VOUCHER	REP_UPDATE
9	CVD_SYSTEM	VOUCHER_EVENT	REP_INSERT
17	CVD_SYSTEM	VOUCHER	REP_UPDATE
9	CVD_SYSTEM	VOUCHER_EVENT	REP_INSERT
17	CVD_SYSTEM	VOUCHER	REP_UPDATE
9	CVD_SYSTEM	VOUCHER_EVENT	REP_INSERT
17	CVD_SYSTEM	VOUCHER	REP_UPDATE
9	CVD_SYSTEM	VOUCHER_EVENT	REP_INSERT
17	CVD_SYSTEM	VOUCHER	REP_UPDATE
9	CVD_SYSTEM	VOUCHER_EVENT	REP_INSERT
17	CVD_SYSTEM	VOUCHER	REP_UPDATE
9	CVD_SYSTEM	VOUCHER_EVENT	REP_INSERT
SELECT e.destination, e.deferred_tran_id, e.origin_tran_db, e.origin_tran_id,
  e.start_time, count(e.callno)
FROM
 sys.deferror e, sys.defcall c WHERE e.deferred_tran_id = c.deferred_tran_id
  GROUP BY e.destination, e.deferred_tran_id, e.origin_tran_db,
  e.origin_tran_id, e.start_time ORDER BY 1
DESTINATION	DEFERRED_TRAN_ID	ORIGIN_TRAN_DB	ORIGIN_TRAN_ID	START_TIME	COUNT(E.CALLNO)
CPI_O5CA.O2	105.45.83730	CPI_O5CB.O2	120.23.60368	09/11/2008 20:24:50	3
CPI_O5CA.O2	107.12.84097	CPI_O5CB.O2	121.41.62034	09/11/2008 20:24:51	1
CPI_O5CA.O2	107.13.84100	CPI_O5CB.O2	123.47.50472	09/11/2008 20:24:52	3
CPI_O5CA.O2	108.1.83596	CPI_O5CB.O2	123.42.50472	09/11/2008 20:24:52	1
SELECT error_msg FROM deferror
    WHERE deferred_tran_id = '107.12.84097' AND callno = 2

ERROR_MSG
ORA-01403: no data found
SELECT t.deferred_tran_id, d.dblink, t.start_time, count(c.callno)
               FROM sys.deftran t, sys.defcall c, sys.deftrandest d
            WHERE d.deferred_tran_id = t.deferred_tran_id
           AND c.deferred_tran_id = t.deferred_tran_id
          GROUP BY t.deferred_tran_id, t.start_time, d.dblink ORDER BY 3 desc
Find out the argument count and call number:
select deferred_tran_id, callno, argcount, procname, packagename, schemaname
             from defcall;
Substitute the transaction ID, argument number, and call number to locate the parameter for that call:

DECLARE
       atype       INTEGER;
       csform          NUMBER;
       arg_no      NUMBER;
       callno      NUMBER;
       tid         VARCHAR2(22);
       outval      VARCHAR2(2000);
BEGIN
    arg_no:=1;
    callno:=0;
    tid:='105.45.83730';
    atype:=dbms_defer_query.get_arg_type(callno, arg_no, tid);
    csform:=dbms_defer_query.get_arg_form(callno, arg_no, tid);
    IF atype = 2 THEN
      outval:=to_char(dbms_defer_query.get_number_arg(callno, arg_no, tid));
    ELSIF atype = 96 and
          csform = 1 THEN
      outval:=dbms_defer_query.get_char_arg(callno, arg_no, tid);
    ELSIF atype = 96 and
          csform = 2 THEN
      outval:=translate(dbms_defer_query.get_nchar_arg(callno, arg_no, tid) USING CHAR_CS);
    ELSIF atype= 1 and
          csform = 1 THEN
      outval:=dbms_defer_query.get_varchar2_arg(callno, arg_no, tid);
    ELSIF atype= 1 and
              csform = 2 THEN
      outval:=translate(dbms_defer_query.get_nvarchar2_arg(callno, arg_no, tid) USING CHAR_CS);
    ELSIF atype = 12 THEN
         outval:=to_char(dbms_defer_query.get_date_arg(callno, arg_no, tid), 'DD-MON-YYYY HH24:MI:SS');
    ELSIF atype = 23 THEN
         outval:=rawtohex(dbms_defer_query.get_raw_arg(callno, arg_no, tid));
    ELSIF atype = 11 THEN
         outval:=dbms_defer_query.get_rowid_arg(callno, arg_no, tid);
    ELSE
         outval:='-Binary Value-';
      -- RAISE NO_DATA_FOUND;
    END IF;
    dbms_output.put_line(outval);
END;

How to enable trace in Oracle

Posted by Sagar Patil

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background
processes

to Start  : sql_trace = TRUE

to Stop  : sql_trace = FALSE

to enable tracing without restarting database run the following command in sqlplus

SQLPLUS> ALTER SYSTEM SET sql_trace = TRUE;

2. Enable trace at session level

to Start  : ALTER SESSION SET sql_trace = TRUE;

to stop   : ALTER SESSION SET sql_trace = FALSE;

– or –

to Start  : EXECUTE dbms_session.set_sql_trace (TRUE);
to Stop   : EXECUTE dbms_session.set_sql_trace (FALSE);

– or –

to Start  : EXECUTE dbms_support.start_trace;
to Stop   : EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

SELECT * FROM v$session WHERE upper(osuser) like '&1';

to start : EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

to stop : EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

– or –

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

4. Enable trace Using ORADEBUG

SQL>  select s.username, p.spid os_process_id, p.pid oracle_process_id
  from v$session s, v$process p
where s.paddr = p.addr
   and s.username = upper('&user_name');
Enter value for user_name: repadmin
old   4:    and s.username = upper('&user_name')
new   4:    and s.username = upper('repadmin')

USERNAME                       OS_PROCESS_I ORACLE_PROCESS_ID
—————————— ———— —————–
REPADMIN                       850297                      19
REPADMIN                       955904                      32
REPADMIN                       911971                      20
REPADMIN                       845029                      18
REPADMIN                       533889                      30

SQL> oradebug setospid 533889;
Oracle pid: 30, Unix process pid: 533889, image: oracle@bmau29.uk.pri.o2.com (TNS V1-V3)

SQL> oradebug event 10046 trace name context forever, level 12;
Statement processed.

FAN (Fast Application Notification) & ONS (Oracle Notification Services)

Posted by Sagar Patil

Read more…

How does the failover mechanism work?

Posted by Sagar Patil

Read more…

Can't convert a Tablespace into Read Only Mode

Posted by Sagar Patil

Reason: active transactions running on the tablespace

Locate the SQL_TEXT and session address running the ALTER TABLESPACE command

SELECT SQL_TEXT, SADDR
FROM V$SQLAREA,V$SESSION
WHERE V$SQLAREA.ADDRESS = V$SESSION.SQL_ADDRESS
AND lower(SQL_TEXT) LIKE 'alter tablespace%';

V$TRANSACTION will show the active transactions in the system

SELECT SES_ADDR, START_SCNB,used_urec,used_ublk
FROM V$TRANSACTION
ORDER BY START_SCNB;

v$transaction View
The important fields of v$transaction are used_ublk and used_urec: they tell how many undo blocks and undo records a transaction's undo consists of. To find the name of the corresponding rollback segment, join the xidusn column with the usn column of v$rollname.
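The join just described looks like this:

SELECT r.name rollback_segment, t.used_ublk, t.used_urec
FROM   v$transaction t, v$rollname r
WHERE  t.xidusn = r.usn;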

List of sessions blocking the tablespace in write mode

select saddr, SID,Serial#,username,Status, Machine,SQL_ID
from v$session where saddr in
(select SES_ADDR FROM V$TRANSACTION where used_urec is not null
and used_ublk is not null) ;

SQL showing the commands fired by those sessions:

select b.SQL_TEXT,saddr, SID,Serial#,username,Status,Machine,a.SQL_ID
from v$session a ,v$SQL b
where a.saddr in (select SES_ADDR FROM V$TRANSACTION
where used_urec is not null and used_ublk is not null )
and a.sql_id=b.sql_id

TAF can be verified by querying the Oracle-provided views

Posted by Sagar Patil

Read more…

Rectify Differences Between Replicated Tables

Posted by Sagar Patil

Using the DIFFERENCES Procedure

The DIFFERENCES procedure compares two replicas of a table, and determines all rows in the first replica that are not in the second and all rows in the second that are not in the first. The output of this procedure is stored in two user-created tables. The first table stores the values of the missing rows, and the second table is used to indicate which site contains each row.

The RECTIFY procedure uses the information generated by the DIFFERENCES procedure to rectify the two tables. Any rows found in the first table and not in the second are inserted into the second table. Any rows found in the second table and not in the first are deleted from the second table.

To restore equivalency between all copies of a replicated table, complete the following steps:

1. Select one copy of the table to be the “reference” table. This copy will be used to update all other replicas of the table as needed.

2. Determine if it is necessary to check all rows and columns in the table for differences, or only a subset.

For example, it may not be necessary to check rows that have not been updated since the last time that you checked for differences. Although it is not necessary to check all columns, your column list must include all columns that make up the primary key (or that you designated as a substitute identity key) for the table.

3. After determining which columns you will be checking in the table, create two tables to hold the results of the comparison.

You must create one table that can hold the data for the columns being compared. For example, if you decide to compare the employee_id, salary, and department_id columns of the employees table, then your CREATE statement would need to be similar to the following:

CREATE TABLE hr.missing_rows_data (
  employee_id     NUMBER(6),
  salary          NUMBER(8,2),
  department_id   NUMBER(4));

You must also create a table that indicates where the row is found. This table must contain three columns with the datatypes shown in the following example:

CREATE TABLE hr.missing_rows_location (
  present     VARCHAR2(128),
  absent      VARCHAR2(128),
  r_id        ROWID);

4. Suspend replication activity for the replication group containing the tables that you want to compare. Although suspending replication activity for the group is not a requirement, rectifying tables that were not quiesced first can result in inconsistencies in your data.

BEGIN
   DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY (
      gname => 'hr_repg');
END;
/

5. At the site containing the “reference” table, call the DIFFERENCES procedure in the DBMS_RECTIFIER_DIFF package.

For example, if you wanted to compare the employees tables at the New York and San Francisco sites, then your procedure call would look similar to the following:

BEGIN
   DBMS_RECTIFIER_DIFF.DIFFERENCES (
      sname1              =>   'hr',
      oname1              =>   'employees',
      reference_site      =>   'ny.world',
      sname2              =>   'hr',
      oname2              =>   'employees',
      comparison_site     =>   'sf.world',
      where_clause        =>   '',
      column_list         =>   'employee_id,salary,department_id',
      missing_rows_sname  =>   'hr',
      missing_rows_oname1 =>   'missing_rows_data',
      missing_rows_oname2 =>   'missing_rows_location',
      missing_rows_site   =>   'ny.world',
      max_missing         =>    500,
      commit_rows         =>    50);
END;

6. Rectify the table at the “comparison” site to be equivalent to the table at the “reference” site by calling the RECTIFY procedure in the DBMS_RECTIFIER_DIFF package, as shown in the following example:

BEGIN
   DBMS_RECTIFIER_DIFF.RECTIFY (
      sname1              =>   'hr',
      oname1              =>   'employees',
      reference_site      =>   'ny.world',
      sname2              =>   'hr',
      oname2              =>   'employees',
      comparison_site     =>   'sf.world',
      column_list         =>   'employee_id,salary,department_id',
      missing_rows_sname  =>   'hr',
      missing_rows_oname1 =>   'missing_rows_data',
      missing_rows_oname2 =>   'missing_rows_location',
      missing_rows_site   =>   'ny.world',
      commit_rows         =>    50);
END;
/

The RECTIFY procedure temporarily disables replication at the “comparison” site while it performs the necessary insertions and deletions, as you would not want to propagate these changes. RECTIFY first performs all of the necessary DELETE operations and then performs all of the INSERT operations. This ensures that there are no violations of a PRIMARY KEY constraint.

After you have successfully executed the RECTIFY procedure, your missing rows tables should be empty.

Tuning SQL to drop execution cost

Posted by Sagar Patil
SELECT n.MSISDN,
(SELECT ptc2.PRIMARY_ACCOUNT_NUMBER
FROM p_topup_cards ptc2
WHERE ptc2.NUMR_MSISDN = n.MSISDN
--AND ptc2.CARD_TYPE = 1
AND ptc2.PRIMARY_ACCOUNT_NUMBER LIKE '894428%'
AND ROWNUM < 2) pan
FROM numbers n  ,p_number_history pnh
WHERE n.MSISDN = pnh.NUMR_MSISDN
AND n.STATUS = 'A'
AND n.Barred = 'N'
AND n.spid_spid = '416'
--AND n.first_transaction IS NOT NULL
--AND pnh.END_TIMESTAMP IS NULL
AND pnh.PLFM_PLTP_CODE = 'IN'
AND ROWNUM <= 2000
Plan
SELECT STATEMENT  FIRST_ROWS Cost: 758,319 Bytes: 72,000  Cardinality: 2,000
3 COUNT STOPKEY
2 TABLE ACCESS BY INDEX ROWID LIVEUSER.PREPAY_TOPUP_CARDS Cost: 6  Bytes: 32  Cardinality: 1
1 INDEX RANGE SCAN UNIQUE LIVEUSER.PPTC_PK Cost: 4  Cardinality: 1
8 COUNT STOPKEY
7 NESTED LOOPS  Cost: 758,319 Bytes: 8,591,616  Cardinality: 238,656
5 TABLE ACCESS BY INDEX ROWID LIVEUSER.NUMBERS Cost: 46,110  Bytes: 4,748,060  Cardinality: 237,403
4 INDEX RANGE SCAN NON-UNIQUE LIVEUSER.NUMR_SPID_FK_I Cost: 3,682  Cardinality: 949,610
6 INDEX RANGE SCAN UNIQUE LIVEUSER.PFM_NUM_HS_PK Cost: 3  Bytes: 16  Cardinality: 1

Initial Analysis
This SQL needs to be rewritten to avoid the join and, above all, to avoid the predicate n.spid_spid = '416'.

Why?
This simple statement, select * from numbers where spid_spid = '416';, on its own shows a cost of 46K+:

Plan
SELECT STATEMENT FIRST_ROWS Cost: 46,110 Bytes: 40,833,230 Cardinality: 949,610
2 TABLE ACCESS BY INDEX ROWID CPI_SYSTEM.NUMBERS Cost: 46,110 Bytes: 40,833,230 Cardinality: 949,610
1 INDEX RANGE SCAN NON-UNIQUE CPI_SYSTEM.NUMR_SPID_FK_I Cost: 3,682 Cardinality: 949,610
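One possible rewrite (a sketch only, not tested against this schema) replaces the p_number_history join with an EXISTS subquery, so the NUMBERS rows are not multiplied and the history table is probed by its key:

SELECT n.msisdn,
       (SELECT ptc2.primary_account_number
        FROM   p_topup_cards ptc2
        WHERE  ptc2.numr_msisdn = n.msisdn
        AND    ptc2.primary_account_number LIKE '894428%'
        AND    ROWNUM < 2) pan
FROM   numbers n
WHERE  n.status    = 'A'
AND    n.barred    = 'N'
AND    n.spid_spid = '416'
AND    EXISTS (SELECT 1
               FROM   p_number_history pnh
               WHERE  pnh.numr_msisdn    = n.msisdn
               AND    pnh.plfm_pltp_code = 'IN')
AND    ROWNUM <= 2000;

Whether this helps depends on the indexes available on p_number_history(numr_msisdn).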

Get the current Session ID

Posted by Sagar Patil

SQL> desc v$mystat
Name Null? Type
—————————————– ——– —————————-
SID NUMBER
STATISTIC# NUMBER
VALUE NUMBER

SQL> select distinct sid from v$mystat;

SID
———-
139

SQL> select sid, serial# from v$session
2 where audsid=SYS_CONTEXT('USERENV','SESSIONID');

SID SERIAL#
———- ———-
139 6

SQL> select dbms_session.unique_session_id from dual;

UNIQUE_SESSION_ID
——————————————————————————–
008B00060001

Loading or Re-Synchronising Data in a Replication Cluster

Posted by Sagar Patil

To stop replication without placing the system into read-only status, execute the following steps.

Execute DBMS_REPUTIL.replication_OFF;

You need to execute this API at each site. This will disable all triggers and you can manually load data at respective sites.

Once the job is done enable Triggers using

Execute DBMS_REPUTIL.replication_On;

Now all further changes would be replicated.

What is the difference between a QUIESCED group and replication off?
A quiesced group puts all sites into read-only status, while the API above can be used on individual replication master sites.

If you want to sync Table Data at Multimaster Replicated Sites please see  this post

Replication Packages/Triggers Missing at new site : ORA-04067: stored procedure OBJECT_NAME$RP does not exist

Posted by Sagar Patil

I built a 6 node cluster using notes at http://www.oracledbasupport.co.uk/adding-a-new-master-node-in-an-existing-multi-master-cluster/

Everything was going OK but then noticed some tables are returning errors like

ORA-04067: not executed, stored procedure "SYSTEM.VOUCHER_NO$RP" does not exist
ORA-01085: preceding errors in deferred rpc to "SYSTEM.VOUCHER_NO$RP.REP_INSERT"
ORA-02063: preceding 2 lines from REPAA

Solution: Run the following SQL to locate which tables are missing the replication packages that should have been generated by the GENERATE_REPLICATION_SUPPORT API. Replication support is pushed by the master definition site when we add a new master or new objects to an existing replication group.

-- List whether replication packages are available at the master as well as the remote node
COLUMN group HEADING 'Replication_Group' FORMAT A20;
COLUMN count(*) HEADING 'NUMBER_OF_TABLES' FORMAT 9999999;
select gname Replication_Group,count(*) NUMBER_OF_TABLES
from REPCAT$_REPOBJECT where type= 2 group by gname order by 1;

REPLICATION_GROUP NUMBER_OF_TABLES
—————————— —————-
O1_A-B_ALL                                   14
O1_ERROR_A-B                                  7

Above SQL will return how many objects have been replicated. Let’s make sure this is consistent with the remote site.

repA> select object_name from dba_objects where object_name like '%$R%'
minus
select object_name from dba_objects@repAA where object_name like '%$R%';

OBJECT_NAME
——————————————————————————–
ACCOUNT$RP
OFFER$RP
OFFER$RP
OFFER$RP
SERVICE$RP
SUBSCRIBE$RP
SUBSCRIBE$RP
TRANSACTION$RP

8 rows selected, so 8 objects are missing replication support.

I can also use "select sname,oname,type,STATUS from DBA_REPOBJECT minus
select sname,oname,type,STATUS from DBA_REPOBJECT@remote_db;"

What does $RP package contain?
This package has the following procedures:
– rep_delete
– rep_insert
– rep_update

Let’s generate the packages missing at remote repAA site

At the master definition site:

Master repA> BEGIN
DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY (gname => 'O1_BROKER_A-B');
END;
/

Master repA> BEGIN
DBMS_REPCAT.generate_replication_package (sname => 'BROKER_SYSTEM',
oname => '&1');
END;
/

This went fine and generated the required packages. The SQL above returned no differences this time, yet the replication status was still "ERROR" in the EM console.

Next I tried generating a complete replication support as below.

Master/Remote DB> analyze table broker_system.account_status validate structure;

Table analyzed.
Master> BEGIN
dbms_repcat.generate_replication_support (
sname => 'BROKER_SYSTEM',
oname => 'ACCOUNT_STATUS',
type => 'TABLE',
distributed => TRUE);
END;
/
BEGIN
*
ERROR at line 1:
ORA-23308: object BROKER_SYSTEM.ACCOUNT_STATUS does not exist or is invalid
ORA-06512: at “SYS.DBMS_SYS_ERROR”, line 95
ORA-06512: at “SYS.DBMS_REPCAT_UTL”, line 2874

I also followed Oracle Note 176913.1 and compared all statistics on these tables. The databases were copied as offline backups and were exact mirror images of each other. I am using three sites A, AA, and AAA; the above error was reported from the remote system AAA, while A and AA were running OK.

At last I used following procedure to get them VALID

Master repA> BEGIN
DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY (gname => 'O1_BROKER_A-B');
END;
/

Master repA> EXECUTE Dbms_Repcat.Drop_Master_Repobject('BROKER_SYSTEM', 'OFFER_CATEGORY', 'TABLE');
Master repA> EXECUTE Dbms_Repcat.Drop_Master_Repobject('BROKER_SYSTEM', 'OFFER_STATUS', 'TABLE');
Master repA> EXECUTE Dbms_Repcat.Drop_Master_Repobject('BROKER_SYSTEM', 'OFFER_SUB_CATEGORY', 'TABLE');
Master repA> EXECUTE Dbms_Repcat.Drop_Master_Repobject('BROKER_SYSTEM', 'SERVICE_PROVIDER', 'TABLE');
Master repA> EXECUTE Dbms_Repcat.Drop_Master_Repobject('BROKER_SYSTEM', 'SUBSCRIBER_OFFER', 'TABLE');
Master repA> EXECUTE Dbms_Repcat.Drop_Master_Repobject('BROKER_SYSTEM', 'SUBSCRIBER_PLAN', 'TABLE');
Master repA> EXECUTE Dbms_Repcat.Drop_Master_Repobject('BROKER_SYSTEM', 'TRANSACTION_TYPE', 'TABLE');

Disable all Foreign key constraints accessing these tables else truncate would fail with error “ORA-02266: unique/primary keys in table referenced by enabled foreign key”.
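The referencing constraints can be listed and disabled with statements along these lines (the constraint and table names below are illustrative, not taken from this system):

SELECT owner, table_name, constraint_name
FROM   dba_constraints
WHERE  constraint_type = 'R'
AND    r_constraint_name IN (SELECT constraint_name
                             FROM   dba_constraints
                             WHERE  table_name = 'OFFER_CATEGORY'
                             AND    constraint_type IN ('P','U'));

ALTER TABLE broker_system.some_child_table DISABLE CONSTRAINT some_fk_name;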

repAA/repAAA>truncate table OFFER_CATEGORY;
repAA/repAAA>truncate table OFFER_STATUS ;
repAA/repAAA>truncate table OFFER_SUB_CATEGORY ;
repAA/repAAA>truncate table SERVICE_PROVIDER ;
repAA/repAAA>truncate table SUBSCRIBER_OFFER ;
repAA/repAAA>truncate table SUBSCRIBER_PLAN ;
repAA/repAAA>truncate table TRANSACTION_TYPE ;

Master repA> BEGIN
DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
gname => 'O1_BROKER_A-B',
type => 'TABLE',
oname => '&1',
sname => 'BROKER_SYSTEM',
use_existing_object => TRUE,
copy_rows => TRUE);
END;
/
Enter value for 1: OFFER_CATEGORY
old 4: oname => '&1',
new 4: oname => 'OFFER_CATEGORY',
Run above API for all tables

BEGIN
DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
sname => 'BROKER_SYSTEM',
oname => '&1',
type => 'TABLE',
min_communication => TRUE);
END;
/
Enter value for 1: OFFER_CATEGORY
old 4: oname => '&1',
new 4: oname => 'OFFER_CATEGORY',
Run above API for all tables

Run the DBMS_JOB dbms_repcat.do_deferred_repcat_admin('"O1_BROKER_A-B"', FALSE); to propagate the administrative requests. I often found Oracle EM doesn't handle this well, so run the job manually at all sites.

Average Amount of Time to Apply Transactions at Remote Sites

Posted by Sagar Patil

SELECT substr(DBLINK,1,25), AVG_LATENCY "Average Latency"
FROM DEFSCHEDULE
WHERE upper(DBLINK) like '&1';

Average latency is the average number of seconds between the first call of a transaction on the current site and the confirmation that the transaction was applied at the remote site.

SQL> SELECT substr(DBLINK,1,25), AVG_LATENCY "Average Latency"
2 FROM DEFSCHEDULE
3 WHERE upper(DBLINK) like '&1';
Enter value for 1: CPI%
old 3: WHERE upper(DBLINK) like '&1'
new 3: WHERE upper(DBLINK) like 'CPI%'

SUBSTR(DBLINK,1,25) Average Latency
————————- —————
CPI_L1_A.O2
CPI_L1_B.O2 546244624
CPI_L2_A.O2 546860293
CPI_L2_B.O2 546860298

Tracking the Rate of Transactions Entering the Deferred Transactions Queue

Posted by Sagar Patil

The following SQL will display the rate of transactions entering the DEFTRAN queue over the last hour:

SQL> select to_char(trunc(start_time,'MI'),'DD-MON-YY HH24:MI:SS'),count(*) from deftran where start_time > sysdate -(1/24)
group by to_char(trunc(start_time,'MI'),'DD-MON-YY HH24:MI:SS')
order by 1 desc;

TO_CHAR(TRUNC(STAR   COUNT(*)
—————— ———-
15-DEC-08 20:32:00       1481
15-DEC-08 20:31:00       2510
15-DEC-08 20:30:00       2482
15-DEC-08 20:29:00       2543
15-DEC-08 20:28:00       2468
15-DEC-08 20:27:00       2458
15-DEC-08 20:26:00       2490
15-DEC-08 20:25:00       2505
15-DEC-08 20:24:00       2488
15-DEC-08 20:23:00       2443
15-DEC-08 20:22:00       2532
15-DEC-08 20:21:00       2555
15-DEC-08 20:20:00       2485
15-DEC-08 20:19:00        757

SQL> SELECT (R.TXNS_ENQUEUED / ((SYSDATE - I.STARTUP_TIME)*24*60*60)) "Average TPS"
2 FROM V$REPLQUEUE R, V$INSTANCE I;

Average TPS
———–
3.01341941

Tracking the Average Number of Row Changes in a Replication Transaction

Posted by Sagar Patil

SELECT DECODE(TXNS_ENQUEUED, 0, 'No Transactions Enqueued',
         (CALLS_ENQUEUED / TXNS_ENQUEUED)) "Average Number of Row Changes"
                  FROM V$REPLQUEUE;

SQL> SELECT (R.TXNS_ENQUEUED / ((SYSDATE - I.STARTUP_TIME)*24*60*60)) "Average TPS"
FROM V$REPLQUEUE R, V$INSTANCE I;

Average TPS
———–
3.01351231
