Display partition information for a specified index

Posted by Sagar Patil

Displays partition information for the specified index, or all indexes.

SET LINESIZE 500
SET PAGESIZE 1000
SET FEEDBACK OFF
SET VERIFY OFF

SELECT a.index_name,
       a.partition_name,
       a.tablespace_name,
       a.initial_extent,
       a.next_extent,
       a.pct_increase,
       a.num_rows
FROM   dba_ind_partitions a
WHERE  a.index_name = DECODE('BRANCH_STATS_IX', 'ALL', a.index_name, 'BRANCH_STATS_IX')
ORDER BY a.index_name, a.partition_name;

Replace 'BRANCH_STATS_IX' with a valid index name, or with 'ALL' to report on all indexes.

SELECT 'Alter Index ' || index_owner || '.' || index_name
       || ' Rebuild Partition ' || partition_name || ' Online;'
FROM   dba_ind_partitions a
WHERE  a.index_name = DECODE('BRANCH_STATS_IX', 'ALL', a.index_name, 'BRANCH_STATS_IX')
ORDER BY a.index_name, a.partition_name;

This should create a script like:
Alter Index STATS.BRANCH_STATS_IX Rebuild Partition BS_20060201 Online;
Alter Index STATS.BRANCH_STATS_IX Rebuild Partition BS_20060202 Online;
Alter Index STATS.BRANCH_STATS_IX Rebuild Partition BS_20060203 Online;
Alter Index STATS.BRANCH_STATS_IX Rebuild Partition BS_20060204 Online;

Problem: Oracle reported block corruption on data files 21 and 22, which map to the ABC and XYZ tablespaces.

Get a list of all index partitions stored in the corrupt tablespaces:

SELECT a.index_name,
       a.partition_name,
       a.tablespace_name,
       a.initial_extent,
       a.next_extent,
       a.pct_increase,
       a.num_rows
FROM   dba_ind_partitions a
WHERE  a.tablespace_name IN ('ABC', 'XYZ')
ORDER BY a.index_name, a.partition_name;

Generate a script to rebuild all these partitions:

SELECT 'Alter Index ' || index_owner || '.' || index_name || ' Rebuild Partition ' || partition_name || ' Online;'
FROM   dba_ind_partitions a
WHERE  a.tablespace_name IN ('ABC', 'XYZ')
ORDER BY a.index_name, a.partition_name;

Function Based Indexes

Posted by Sagar Patil

Traditionally, applying a function to an indexed column in the WHERE clause of a query guaranteed that the index would not be used. Oracle 8i introduced Function Based Indexes to counter this problem. Rather than indexing a column, you index the function on that column, storing the result of the function rather than the original column data. When a query that could benefit from the index is passed to the server, the query is rewritten to allow the index to be used. The following code samples give an example of the use of Function Based Indexes:

* Build Test Table
* Build Regular Index
* Build Function Based Index

Build Test Table
First we build a test table and populate it with enough data so that use of an index would be advantageous:

Code:
CREATE TABLE user_data (
id          NUMBER(10)    NOT NULL,
first_name  VARCHAR2(40)  NOT NULL,
last_name   VARCHAR2(40)  NOT NULL);

BEGIN
  FOR cur_rec IN 1 .. 2000 LOOP
    IF MOD(cur_rec, 2) = 0 THEN
      INSERT INTO user_data
VALUES (cur_rec, 'John' || cur_rec, 'Doe');
    ELSE
      INSERT INTO user_data
VALUES (cur_rec, 'Jayne' || cur_rec, 'Doe');
    END IF;
    COMMIT;
  END LOOP;
END;
/

ANALYZE TABLE user_data COMPUTE STATISTICS;

At this point the table is not indexed so we would expect a full table scan for any query:

SET AUTOTRACE ON
SELECT *
FROM   user_data
WHERE  Upper(first_name) = 'JOHN2';

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=10)
   1    0   TABLE ACCESS (FULL) OF 'USER_DATA' (Cost=1 Card=1 Bytes=10)

Build Regular Index
If we now create a regular index on the FIRST_NAME column we see that the index is not used:

CREATE INDEX first_name_idx ON user_data (first_name);
ANALYZE TABLE user_data COMPUTE STATISTICS;

SET AUTOTRACE ON
SELECT *
FROM   user_data
WHERE  Upper(first_name) = 'JOHN2';

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=10)
   1    0   TABLE ACCESS (FULL) OF 'USER_DATA' (Cost=1 Card=1 Bytes=10)

Build Function Based Index
If we now replace the regular index with a function based index on the FIRST_NAME column we see that the index is used:

DROP INDEX first_name_idx;
CREATE INDEX first_name_idx ON user_data (UPPER(first_name));
ANALYZE TABLE user_data COMPUTE STATISTICS;

ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;

SET AUTOTRACE ON
SELECT *
FROM   user_data
WHERE  Upper(first_name) = 'JOHN2';

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=14)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'USER_DATA' (Cost=2 Card=1 Bytes=14)
   2    1     INDEX (RANGE SCAN) OF 'FIRST_NAME_IDX' (NON-UNIQUE) (Cost=1 Card=1)

The QUERY_REWRITE_INTEGRITY and QUERY_REWRITE_ENABLED parameters must be set or the server will not be able to rewrite the queries, and will therefore not be able to use the new index. These parameters may be set in the Init.ora file if they are needed permanently.
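If the settings need to persist without editing the parameter file by hand, something along these lines can be used. The ALTER SYSTEM form is a sketch that assumes the instance was started with an spfile:

```sql
-- Per-session settings, as used above:
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;

-- Instance-wide and persistent (assumption: spfile-based startup):
ALTER SYSTEM SET QUERY_REWRITE_ENABLED = TRUE SCOPE = BOTH;
ALTER SYSTEM SET QUERY_REWRITE_INTEGRITY = TRUSTED SCOPE = BOTH;
```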

When to Rebuild a B-Tree Index

Posted by Sagar Patil

You should periodically check your indexes to see whether they have become skewed and are therefore good candidates for a rebuild.

A skewed index has many records clumped close together on the index tree due to their similar indexed values. When an index is skewed, parts of an index are accessed more frequently than others. As a result, disk contention may occur, creating a bottleneck in performance. It is important to periodically examine your indexes to determine if they have become skewed and might need to be rebuilt.

Here is a sample procedure on how to identify the skewed indexes:

1. Gather statistics on your indexes. For large indexes (over one hundred thousand records in the underlying table), use ESTIMATE STATISTICS instead of COMPUTE STATISTICS.

For example:

SQL> analyze index A1_PK compute statistics;
Index analyzed.

2. Run the script given below (index_check.sql) to find out how skewed each index is.

This query checks on all indexes that belong to user SCOTT:

SQL> select index_name, blevel,
decode(blevel,0,'OK BLEVEL',1,'OK BLEVEL',
2,'OK BLEVEL',3,'OK BLEVEL',4,'OK BLEVEL','BLEVEL HIGH') OK
from dba_indexes
where owner='SCOTT';

INDEX_NAME                                BLEVEL  OK
----------------------------------------  ------  -----------
A1_PK                                             BLEVEL HIGH
A1_UK                                             BLEVEL HIGH
BUDVERPORT_BUDVERPORT2_UK                      1  OK BLEVEL
BUDVERPORT_BV_FK_I                             2  OK BLEVEL
BUDVERPORT_CHAR_CL_FK_I                        1  OK BLEVEL
BUDVERPORT_DIRCTE_FK_I                         3  OK BLEVEL
BUDVERPORT_FUND_TYPE_FK_I                      1  OK BLEVEL
BUDVERPORT_OMB_SUBFCT_FK_I                     1  OK BLEVEL
BUDVERPORT_ORG_FK_I                            0  OK BLEVEL
BUDVERPORT_PL_TITLE_FK_I                       1  OK BLEVEL
BUDVERPORT_RDC_FK_I                            1  OK BLEVEL
S_INVENTORY_PRODID_WARID_PK                       BLEVEL HIGH
S_ITEM_ORDID_ITEMID_PK                            BLEVEL HIGH
S_ITEM_ORDID_PRODID_UK                            BLEVEL HIGH
S_LONGTEXT_ID_PK                                  BLEVEL HIGH
S_ORD_ID_PK                                       BLEVEL HIGH
S_PRODUCT_ID_PK                                   BLEVEL HIGH
S_PRODUCT_NAME_UK                                 BLEVEL HIGH
S_REGION_ID_PK                                    BLEVEL HIGH
S_REGION_NAME_UK                                  BLEVEL HIGH
S_TITLE_TITLE_PK                                  BLEVEL HIGH
S_WAREHOUSE_ID_PK                                 BLEVEL HIGH

3. The BLEVEL (or branch level) is part of the B-tree index format and relates to the number of times Oracle has to narrow its search on the index while searching for a particular record. In some cases, a separate disk hit is required for each BLEVEL. Prior to 9i, it was recommended to rebuild an index whose BLEVEL exceeded 4. As databases get bigger and bigger, BLEVEL may no longer be a good indicator of when to rebuild the index; a BLEVEL > 4 may instead suggest evaluating whether the partitioning option could help you.

Note: If you do not analyze the index, the index_check.sql script will show "BLEVEL HIGH" for that index.

4. Gather more index statistics using the VALIDATE STRUCTURE option of the ANALYZE command to populate the INDEX_STATS virtual table. This table does not contain an OWNER column and assumes you are looking for statistics for indexes created by your active session only.

SQL> analyze index SCOTT.ORG_PK validate structure;
Index analyzed.

SQL> select DEL_LF_ROWS*100/decode(LF_ROWS, 0, 1, LF_ROWS) PCT_DELETED,
(LF_ROWS-DISTINCT_KEYS)*100/decode(LF_ROWS, 0, 1, LF_ROWS) DISTINCTIVENESS
from index_stats where NAME = '&index_name';

Enter value for index_name: ORG_PK

PCT_DELETED DISTINCTIVENESS
----------- ---------------
          0               0

The PCT_DELETED column shows what percentage of leaf entries (index entries) have been deleted and remain unfilled. The more deleted entries an index contains, the more unbalanced it becomes. If PCT_DELETED is 20% or higher, the index is a candidate for rebuilding. If you can afford to rebuild indexes more frequently, do so when the value is higher than 10%. Leaving indexes with a high PCT_DELETED unrebuilt can cause excessive redo allocation on some systems.
The DISTINCTIVENESS column shows how often a value for the column(s) of the index is repeated on average.

For example, if a table has 10000 records and 9000 distinct SSN values, the formula would result in
(10000-9000) x 100 / 10000 = 10. This shows a good distribution of values.

If, however, the table has 10000 records and only 2 distinct SSN values, the formula would result in (10000-2) x 100 /10000 = 99.98. This shows that there are very few distinct values as a percentage of total records in the column. Such columns are not candidates for a rebuild but good candidates for bitmapped indexes.
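As a quick illustration, the two worked examples above can be checked directly in SQL. The row counts here are the hypothetical figures from the text, not real index statistics:

```sql
-- DISTINCTIVENESS = (LF_ROWS - DISTINCT_KEYS) * 100 / LF_ROWS,
-- evaluated for the two hypothetical cases in the text.
SELECT (10000 - 9000) * 100 / 10000 AS well_distributed,    -- 10
       (10000 - 2)    * 100 / 10000 AS poorly_distributed   -- 99.98
FROM   dual;
```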

Index Monitoring

Posted by Sagar Patil

Index monitoring can be started and stopped using the ALTER INDEX syntax shown below.

ALTER INDEX my_index_i MONITORING USAGE;
ALTER INDEX my_index_i NOMONITORING USAGE;

Information about the index usage can be displayed using the V$OBJECT_USAGE view.

SELECT index_name,
table_name,
monitoring,
used,
start_monitoring,
end_monitoring
FROM v$object_usage
WHERE index_name = 'MY_INDEX_I'
ORDER BY index_name; 

The V$OBJECT_USAGE view does not contain an OWNER column, so you must log on as the object owner to see the usage data.

Foreign Key Indexes
To quote the Oracle Database Concepts manual:
"You should almost always index foreign keys. The only exception is when the matching unique or primary key is never updated or deleted."

When a foreign key is unindexed, DML on the parent primary key results in a share row exclusive table lock (or share-subexclusive table lock, SSX) on the child table, preventing DML from other transactions against the child table. If the DML affects several rows in the parent table, the lock on the child table is obtained and released immediately for each row in turn. Despite the speed of the lock-release process, this can cause significant amounts of contention on the child table during periods of heavy update/delete activity on the parent table.

When a foreign key is indexed, DML on the parent primary key results in a row share table lock (or subshare table lock, SS) on the child table. This type of lock prevents other transactions from issuing whole table locks on the child table, but does not block DML on either the parent or the child table. Only the rows relating to the parent primary key are locked in the child table.
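Unindexed foreign keys can be hunted down from the data dictionary. A minimal sketch, assuming single-column foreign keys (composite keys would need the index's leading columns compared position by position):

```sql
-- Foreign key columns with no index whose leading column matches.
-- Assumption: single-column foreign keys; run as the schema owner.
SELECT c.table_name, cc.column_name, c.constraint_name
FROM   user_constraints  c
       JOIN user_cons_columns cc
         ON cc.constraint_name = c.constraint_name
WHERE  c.constraint_type = 'R'
AND    NOT EXISTS (SELECT 1
                   FROM   user_ind_columns ic
                   WHERE  ic.table_name      = cc.table_name
                   AND    ic.column_name     = cc.column_name
                   AND    ic.column_position = 1)
ORDER BY c.table_name, cc.column_name;
```

Any row returned is a child-table column worth indexing before heavy update/delete activity hits the parent table.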

Building a virtual index using the NOSEGMENT clause

Posted by Sagar Patil

Let's create a virtual index:

07:59:12 orcl> create index hr.emp2_emp_id_virtual on hr.employees2(employee_id) nosegment;
Index created.

I am going to set the hidden parameter "_use_nosegment_indexes" to TRUE so that our session will recognize the new virtual index.

08:00:09 orcl> alter session set "_use_nosegment_indexes" = true;

Running our statement again to see if it will use the new virtual index. Check out the access path below: the optimizer has chosen the virtual index.

select employee_id, a.department_id, b.department_name
from hr.departments b, hr.employees2 a
where a.department_id = b.department_id
and employee_id = 203;

Execution Plan
----------------------------------------------------------
Plan hash value: 2516110069

----------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                | Rows | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                     |    1 |    25 |     3   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |                     |    1 |    25 |     3   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| EMPLOYEES2          |    1 |     9 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | EMP2_EMP_ID_VIRTUAL |    1 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID| DEPARTMENTS         |    1 |    16 |     1   (0)| 00:00:01 |
|*  5 |    INDEX UNIQUE SCAN         | DEPT_ID_PK          |    1 |       |     0   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

If "_use_nosegment_indexes" is set to FALSE, the optimizer does not choose the virtual index.

08:01:09 orcl> alter session set "_use_nosegment_indexes" = false;
Session altered.

08:01:33 orcl> select employee_id, a.department_id, b.department_name
08:01:47 2 from
08:01:47 3 hr.departments b, hr.employees2 a
08:01:47 4 where
08:01:47 5 a.department_id = b.department_id
08:01:47 6 and employee_id = 203;

Execution Plan
----------------------------------------------------------
Plan hash value: 2641883601

--------------------------------------------------------------------------------------------
| Id  | Operation                    | Name        | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |             |    1 |    25 |   818   (3)| 00:00:10 |
|   1 |  NESTED LOOPS                |             |    1 |    25 |   818   (3)| 00:00:10 |
|*  2 |   TABLE ACCESS FULL          | EMPLOYEES2  |    1 |     9 |   817   (3)| 00:00:10 |
|   3 |   TABLE ACCESS BY INDEX ROWID| DEPARTMENTS |    1 |    16 |     1   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | DEPT_ID_PK  |    1 |       |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Next we execute DBMS_STATS to gather statistics on both the virtual and the standard index. I have run tests with and without statistics, and their presence does seem to affect virtual index access paths.

08:21:57 orcl> exec dbms_stats.gather_index_stats('HR', 'EMP2_EMP_ID_NON_VIRTUAL');
PL/SQL procedure successfully completed.

08:23:10 orcl> exec dbms_stats.gather_index_stats('HR', 'EMP2_EMP_ID_VIRTUAL');

PL/SQL procedure successfully completed.

Looking for information on the indexes built on the EMPLOYEES2 table, Oracle returns a row for the standard index but not for the virtual index.

08:20:31 orcl> select index_name, last_analyzed from dba_indexes where table_name = 'EMPLOYEES2';

INDEX_NAME                     LAST_ANAL
------------------------------ ---------
EMP2_EMP_ID_NON_VIRTUAL        31-MAY-07

Determining if we can find the virtual index in DBA_SEGMENTS. No success.

08:26:09 orcl> select segment_name, segment_type from dba_segments where segment_name like 'EMP2%';

SEGMENT_NAME         SEGMENT_TYPE
-------------------- ------------------
EMP2_EMP_ID_NON_VIRT INDEX
UAL

Looking for the virtual index in DBA_OBJECTS. Finally, we find some evidence that the virtual index exists in the database!

08:30:21 orcl> col object_name for a30
08:30:29 orcl> r
  1  select object_name, object_type, created, status, temporary
  2* from dba_objects where object_name like 'EMP2%'

OBJECT_NAME                    OBJECT_TYPE         CREATED   STATUS  T
------------------------------ ------------------- --------- ------- -
EMP2_EMP_ID_NON_VIRTUAL        INDEX               31-MAY-07 VALID   N
EMP2_EMP_ID_VIRTUAL            INDEX               31-MAY-07 VALID   N

Tuning SQL to reduce execution cost

Posted by Sagar Patil

SELECT n.MSISDN,
(SELECT ptc2.PRIMARY_ACCOUNT_NUMBER
FROM p_topup_cards ptc2
WHERE ptc2.NUMR_MSISDN = n.MSISDN
--AND ptc2.CARD_TYPE = 1
AND ptc2.PRIMARY_ACCOUNT_NUMBER LIKE '894428%'
AND ROWNUM < 2) pan
FROM numbers n  ,p_number_history pnh
WHERE n.MSISDN = pnh.NUMR_MSISDN
AND n.STATUS = 'A'
AND n.Barred = 'N'
AND n.spid_spid = '416'
--AND n.first_transaction IS NOT NULL
--AND pnh.END_TIMESTAMP IS NULL
AND pnh.PLFM_PLTP_CODE = 'IN'
AND ROWNUM <= 2000
Plan
SELECT STATEMENT  FIRST_ROWS Cost: 758,319 Bytes: 72,000  Cardinality: 2,000
3 COUNT STOPKEY
2 TABLE ACCESS BY INDEX ROWID LIVEUSER.PREPAY_TOPUP_CARDS Cost: 6  Bytes: 32  Cardinality: 1
1 INDEX RANGE SCAN UNIQUE LIVEUSER.PPTC_PK Cost: 4  Cardinality: 1
8 COUNT STOPKEY
7 NESTED LOOPS  Cost: 758,319 Bytes: 8,591,616  Cardinality: 238,656
5 TABLE ACCESS BY INDEX ROWID LIVEUSER.NUMBERS Cost: 46,110  Bytes: 4,748,060  Cardinality: 237,403
4 INDEX RANGE SCAN NON-UNIQUE LIVEUSER.NUMR_SPID_FK_I Cost: 3,682  Cardinality: 949,610
6 INDEX RANGE SCAN UNIQUE LIVEUSER.PFM_NUM_HS_PK Cost: 3  Bytes: 16  Cardinality: 1

Initial Analysis
This SQL needs to be rewritten to avoid the join and, above all, the predicate n.spid_spid = '416'.

Why?
Even the simple statement select * from numbers where spid_spid = '416'; shows a cost of 46K+:

Plan
SELECT STATEMENT FIRST_ROWS Cost: 46,110 Bytes: 40,833,230 Cardinality: 949,610
2 TABLE ACCESS BY INDEX ROWID CPI_SYSTEM.NUMBERS Cost: 46,110 Bytes: 40,833,230 Cardinality: 949,610
1 INDEX RANGE SCAN NON-UNIQUE CPI_SYSTEM.NUMR_SPID_FK_I Cost: 3,682 Cardinality: 949,610

How optimizer statistics with histograms can change the execution plan

Posted by Sagar Patil

Optimizer statistics can play a key part in deciding the execution plan. Here is an example.

Table "RSODSACTREQ" has 313,783 rows in total.

Database with no histograms:
For the following four statements the plan is always the same, i.e. a FULL TABLE SCAN.

1. SELECT MAX( "TIMESTAMP" ) FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZPNPCAMP';
2. SELECT MAX( "TIMESTAMP" ) FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZCUS_ACT';
3. SELECT MAX( "TIMESTAMP" ) FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZTCIT_OH';
4. SELECT MAX( "TIMESTAMP" ) FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZPA0037A';

Execution Plan:
SELECT STATEMENT Optimizer Mode=CHOOSE 1 350
SORT AGGREGATE 1 16
TABLE ACCESS FULL SAPLIVE.RSODSACTREQ 2 K 39 K 350

Database with histograms:

1. SELECT MAX( "TIMESTAMP" ) FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZPNPCAMP';

Here Oracle predicts that a large number of rows will be returned and prefers a full table scan.

SELECT STATEMENT Optimizer Mode=CHOOSE 1 350
SORT AGGREGATE 1 16
TABLE ACCESS FULL SAPLIVE.RSODSACTREQ 262 K 3 M 350

2. SELECT MAX( "TIMESTAMP" ) FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZCUS_ACT';
For this value it predicts that few rows will be returned and uses an index range scan.

SELECT STATEMENT Optimizer Mode=CHOOSE 1 19
SORT AGGREGATE 1 16
TABLE ACCESS BY INDEX ROWID SAPLIVE.RSODSACTREQ 93 1 K 19
INDEX RANGE SCAN SAPLIVE.RSODSACTREQ~STP 93 3

 

3. SELECT MAX( "TIMESTAMP" ) FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZTCIT_OH';
For this value it predicts that few rows will be returned and uses an index range scan.

SELECT STATEMENT Optimizer Mode=CHOOSE 1 9
SORT AGGREGATE 1 16
TABLE ACCESS BY INDEX ROWID SAPLIVE.RSODSACTREQ 35 560 9
INDEX RANGE SCAN SAPLIVE.RSODSACTREQ~STP 35 3

4. SELECT MAX( "TIMESTAMP" ) FROM SAPLIVE."RSODSACTREQ" WHERE "ODSOBJECT" = 'ZPA0037A';
For this value it predicts that few rows will be returned and uses an index range scan.

SELECT STATEMENT Optimizer Mode=CHOOSE 1 4
SORT AGGREGATE 1 16
TABLE ACCESS BY INDEX ROWID SAPLIVE.RSODSACTREQ 1 16 4
INDEX RANGE SCAN SAPLIVE.RSODSACTREQ~0 1 3

 


How to Back Up/Export Oracle Optimizer Statistics into a Table

Posted by Sagar Patil

Exporting and Importing Statistics

Caveat: Always run the export and import (imp/exp) as the schema user who owns the tables. I wasted a week exporting as DBA on behalf of another user and then importing into a different system under a different username.

Statistics can be exported from the data dictionary into user-owned tables and imported back. This enables you to create multiple versions of statistics for the same schema. You can also copy statistics from one database to another.

You may want to copy statistics from a production database to a scaled-down test database to look at SQL execution plans.

Note:
Exporting and importing statistics is a distinct concept from the EXP and IMP utilities of the database.

The DBMS_STATS export and import procedures do not use EXP and IMP dump files; they read and write a regular statistics table, which can itself be moved between databases with EXP and IMP.

Before exporting statistics, you first need to create a table for holding the statistics.

This statistics table is created using the procedure  DBMS_STATS.CREATE_STAT_TABLE.

After this table is created, then you can export statistics from the data dictionary into your statistics table using the DBMS_STATS.EXPORT_*_STATS procedures.

The statistics can then be imported using the DBMS_STATS.IMPORT_*_STATS procedures.

Note that the optimizer does not use statistics stored in a user-owned table. The only statistics used by the optimizer are the statistics stored in the data dictionary.

In order to have the optimizer use the statistics in a user-owned table, you must import those statistics into the data dictionary using the statistics import procedures.

In order to move statistics from one database to another, you must first export the statistics on the first database, then copy the statistics table to the second database, using the EXP and IMP utilities or other mechanisms, and finally import the statistics into the second database.

The EXP and IMP utilities export and import optimizer statistics from the database along with the table. One exception is that statistics are not exported with the data if a table has columns with system-generated names.

Restoring Statistics Versus Importing or Exporting Statistics

The functionality for restoring statistics is similar in some respects to the functionality of importing and exporting statistics.

In general, you should use the restore capability when:

  • You want to recover older versions of the statistics. For example, to restore the optimizer behavior to an earlier date.
  • You want the database to manage the retention and purging of statistics histories.

You should use EXPORT/IMPORT_*_STATS procedures when:

  • You want to experiment with multiple sets of statistics and change the values back and forth.
  • You want to move the statistics from one database to another database. For example, moving statistics from a production system to a test system.
  • You want to preserve a known set of statistics for a longer period of time than the desired retention date for restoring statistics.

1. Create the statistics table.

exec DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCHEMA_NAME', stat_tab => 'STATS_TABLE', tblspace => 'STATS_TABLESPACE');

For 10g:
exec DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SYSTEM', stat_tab => 'STATS_TABLE');

For 9i and earlier:
begin
  DBMS_STATS.CREATE_STAT_TABLE('dba_admin','STATS_TABLE');
end;
/

2. Export statistics to the statistics table.

EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('ORIGINAL_SCHEMA','STATS_TABLE',NULL,'SYSTEM');

3. Import statistics into the data dictionary.

exec DBMS_STATS.IMPORT_SCHEMA_STATS('NEW_SCHEMA','STATS_TABLE',NULL,'SYSTEM');

4. Drop the statistics table.

exec DBMS_STATS.DROP_STAT_TABLE('SYSTEM','STATS_TABLE');

For 9i:
begin
  DBMS_STATS.CREATE_STAT_TABLE('dba_admin','STATISTICS_TABLE_060307');
end;
/
begin
  DBMS_STATS.EXPORT_SCHEMA_STATS('SAPBP2','STATISTICS_TABLE_060307',NULL,'DBA_ADMIN');
end;
/

SQL> exec DBMS_STATS.IMPORT_SCHEMA_STATS('SAGAR','STATISTICS_TABLE_060307',NULL,'SAGAR');
PL/SQL procedure successfully completed.

Monitoring the export process:

select count(*) from &STATS_NAME;

The stats table can grow exponentially, so keep an eye on its size while the export is active:

select sum(bytes)/1000000 from dba_extents where segment_name = '&TABLE_NAME';

Sample statistics from an SAP BW system of 4.2 terabytes:
Time elapsed for export: 40 mins
Total stats table size: 2 GB
Time elapsed for import:

How to validate that the stats are reflected after exp/imp:

select TABLE_NAME, NUM_ROWS, BLOCKS, EMPTY_BLOCKS,
       AVG_SPACE, CHAIN_CNT, AVG_ROW_LEN
from dba_tables where owner = '&USER';

Run this on both databases and check that the figures are very similar.

Identifying Poorly Performing SQL

Posted by Sagar Patil

Our preferred v$sqlarea query reports physical disk I/O per statement execution. Hit ratios are informative but sometimes misleading, and logical I/O is less relevant: if a statement executes 1,000,000 logical I/Os but still takes less than one-tenth of a second, who cares? It is the total physical I/O that consumes nearly all the time and identifies the potentially offending SQL. For example:

SELECT sql_text, executions,
       ROUND(disk_reads / executions, 2) reads_per_run,
       disk_reads, buffer_gets,
       ROUND((buffer_gets - disk_reads) / buffer_gets, 2) hit_ratio
FROM   v$sqlarea
WHERE  executions > 0
AND    buffer_gets > 0
AND    (buffer_gets - disk_reads) / buffer_gets < 0.80
ORDER BY 3 DESC;

The query reports results like the following:

Executions Reads_Per_Run Disk_Reads Buffer_Gets Hit_Ratio Sql_Text
---------- ------------- ---------- ----------- --------- --------
         2             3          6          19      0.68  SELECT ...
         2        1812.5       3625      178777      0.98  SELECT ...

From this view of the v$sqlarea table, we can immediately isolate all statements performing high numbers of physical reads. These statements are not necessarily inefficient or poorly written, but they are prime candidates for investigation and further tuning.
