Identifying Poorly Performing SQL

Posted by Sagar Patil

Our preferred v$sqlarea query reports physical disk I/O per statement execution. Hit ratios are informative but sometimes misleading, and logical I/O is less relevant: if a statement performs 1,000,000 logical I/Os yet still completes in under a tenth of a second, who cares? It is the total physical I/O that consumes nearly all the time and identifies the potentially problematic SQL. For example:

SELECT sql_text, executions,
       ROUND(disk_reads / executions, 2) reads_per_run,
       disk_reads, buffer_gets,
       ROUND((buffer_gets - disk_reads) / buffer_gets, 2) hit_ratio
  FROM v$sqlarea
 WHERE executions > 0
   AND buffer_gets > 0
   AND (buffer_gets - disk_reads) / buffer_gets < 0.80
 ORDER BY 3 DESC;

Run against two sample statements, the query reports far more enlightening results:
Executions Reads_Per_Run Disk_Reads Buffer_Gets Hit_Ratio Sql_Text
---------- ------------- ---------- ----------- --------- --------
         2             3          6          19      0.68  SELECT ...
         2        1812.5       3625      178777      0.98  SELECT ...
From this query against v$sqlarea, we can immediately isolate all statements that are performing high numbers of physical reads. These statements are not necessarily inefficient or poorly written, but they are prime candidates for investigation and further tuning.
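The derived columns are simple arithmetic; as a sanity check, this short sketch (not from the original post; the function names are mine) reproduces the second sample row of the output above:

```python
# Per-row arithmetic of the v$sqlarea query above.
def reads_per_run(disk_reads, executions):
    # average physical reads per execution, rounded to 2 decimals
    return round(disk_reads / executions, 2)

def hit_ratio(buffer_gets, disk_reads):
    # fraction of gets satisfied from the buffer cache
    return round((buffer_gets - disk_reads) / buffer_gets, 2)

print(reads_per_run(3625, 2))    # 1812.5
print(hit_ratio(178777, 3625))   # 0.98
```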

10g ASH scripts: True Session Wait Activity in Oracle 10g

Posted by Sagar Patil

1> What resource is currently in high demand?

select active_session_history.event
     , sum(active_session_history.wait_time + active_session_history.time_waited) ttl_wait_time
  from v$active_session_history active_session_history
 where active_session_history.sample_time between sysdate - 60 / 2880 and sysdate
 group by active_session_history.event
 order by 2;
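Oracle date arithmetic works in days, so the sample-time predicate "sysdate - 60/2880" in these queries means "the last 30 minutes". A quick check of the arithmetic (plain Python, not part of the original scripts):

```python
# One day is 86400 seconds, so 1/2880 of a day is 86400 / 2880 = 30 seconds,
# and 60/2880 of a day is 60 * 30 seconds = 1800 seconds = 30 minutes.
seconds_per_unit = 86400 / 2880      # 30.0 seconds per 1/2880 day
window_seconds = 60 * seconds_per_unit
print(window_seconds / 60)           # 30.0 minutes
```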
2> Which user is waiting the most?
select sesion.sid
     , sesion.username
     , sum(active_session_history.wait_time + active_session_history.time_waited) ttl_wait_time
  from v$active_session_history active_session_history
     , v$session sesion
 where active_session_history.sample_time between sysdate - 60 / 2880 and sysdate
   and active_session_history.session_id = sesion.sid
 group by sesion.sid
     , sesion.username
 order by 3;

  SID User       TTL_WAIT_TIME
----- ---------- -------------
  135 SCOTT           91167481
  149 SCOTT          107409491
  153 SCOTT          110796799

3> What SQL is currently using the most resources?
select active_session_history.user_id
     , dba_users.username
     , sqlarea.sql_text
     , sum(active_session_history.wait_time + active_session_history.time_waited) ttl_wait_time
  from v$active_session_history active_session_history
     , v$sqlarea sqlarea
     , dba_users
 where active_session_history.sample_time between sysdate - 60 / 2880 and sysdate
   and active_session_history.sql_id = sqlarea.sql_id
   and active_session_history.user_id = dba_users.user_id
 group by active_session_history.user_id
     , sqlarea.sql_text
     , dba_users.username
 order by 4;

USER_ID User   SQL_TEXT                                                 TTL_WAIT_TIME
------- ------ -------------------------------------------------------- -------------
     57 SCOTT  insert into sys.sourcetable (select * from sys.source$)      304169752

4> What object is currently causing the highest resource waits?

select dba_objects.object_name
     , dba_objects.object_type
     , active_session_history.event
     , sum(active_session_history.wait_time + active_session_history.time_waited) ttl_wait_time
  from v$active_session_history active_session_history
     , dba_objects
 where active_session_history.sample_time between sysdate - 60 / 2880 and sysdate
   and active_session_history.current_obj# = dba_objects.object_id
 group by dba_objects.object_name
     , dba_objects.object_type
     , active_session_history.event
 order by 4;

Find execution time for a given SQL from Statspack tables

Posted by Sagar Patil

Please change the "like" string to filter results for a specific SQL statement.

set linesize 150;
set pagesize 1000;
BREAK ON hash_value SKIP 1;
spool 1_sql.lst;
select hash_value, to_char(snap_time, 'DD-MON-YY HH24:MI:SS'), snap_id, piece, sql_text
  from STATS$SQLTEXT a, stats$snapshot b
 where hash_value in (select hash_value
                        from STATS$SQLTEXT
                       where sql_text like 'SELECT%VBELN%KUNNR%ZZCURRENT_OWNER%VBKA%')
   and b.snap_id = a.last_snap_id
 order by snap_time, hash_value, piece;
spool off;

What is causing Redo / Writes / Log Creation? - Statspack 8i/9i

Posted by Sagar Patil

Physical writes since the instance started


AWR : How to purge old Snapshots?

Posted by Sagar Patil


How to Install Statspack (8i/9i/10g)

Posted by Sagar Patil


Locate Server Workload for a Defined Timeframe

Posted by Sagar Patil

Retrieve daily stats for the last 30 days from the Statspack tables

select to_char(trunc(b.snap_time), 'DD-MM-YYYY')
     , statistic#
     , name
     , sum(value)
  from STATS$SYSSTAT a
     , stats$snapshot b
 where a.snap_id = b.snap_id
   and trunc(b.snap_time) > trunc(sysdate - 30)
   and a.statistic# = 54   -- replace with a statistic# from the list below
 group by trunc(b.snap_time)
     , statistic#
     , name
 order by trunc(b.snap_time);

Pick the statistic# you need from the list below; you can look up others with:

select *
  from STATS$SYSSTAT
 where name like '%XXX%';

9    session logical reads

Physical Reads related:
54   physical reads
56   physical reads direct
58   physical read bytes
39   physical read total bytes
42   physical write total bytes
66   physical write bytes
66   physical writes

CPU related:
355  OS Wait-cpu (latency) time
328  parse time cpu
8    recursive cpu usage

Rollback related:
176  transaction tables consistent read rollbacks
180  rollbacks only - consistent read gets
181  cleanouts and rollbacks - consistent read gets
187  transaction rollbacks
5    user rollbacks
239  IMU CR rollbacks
186  rollback changes - undo records applied

How to find current Statspack level?

Posted by Sagar Patil

1. Look at the table PERFSTAT.STATS$SNAPSHOT.
2. Run spreport.sql and you will see the level listed along with the snap IDs.

To change the level:
execute statspack.snap(i_snap_level=>7, i_modify_parameter=>'true');

Levels >= 0 General Performance Statistics
Any level greater than 0 collects general performance statistics, such as wait statistics, system events, system statistics, rollback segment data, row cache, SGA, background events, session events, lock statistics, buffer pool statistics, and parent latch statistics.

Levels >= 5 Additional Data: SQL Statements
This level includes all statistics gathered in the lower level(s), as well as performance data on SQL statements with high resource usage. In a level 5 snapshot (or higher), the time required for the snapshot to complete depends on the SHARED_POOL_SIZE and on the number of SQL statements in the shared pool at the time of the snapshot. The larger the shared pool, the longer it takes to complete the snapshot.

Levels >= 6 Additional Data: SQL Plans and SQL Plan Usage
This level includes all statistics gathered in the lower levels, as well as SQL plans and plan usage data for each of the high-resource SQL statements captured.
A level 6 snapshot gathers valuable information for determining whether the execution plan used for a SQL statement has changed. Therefore, level 6 snapshots should be used whenever a plan might have changed.

Levels >= 10 Additional Statistics: Parent and Child Latches
This level includes all statistics gathered in the lower levels, as well as parent and child latch information. Sometimes data gathered at this level can cause the snapshot to take longer to complete. This level can be resource-intensive, and it should only be used when advised by Oracle personnel.

Statspack threshold parameters:
* Number of executions of the SQL statement (default 100)
* Number of disk reads performed by the SQL statement (default 1,000)
* Number of parse calls performed by the SQL statement (default 1,000)
* Number of buffer gets performed by the SQL statement (default 10,000)
* Size of sharable memory used by the SQL statement (default 1 Mb)
* Version count for the SQL statement (default 20)

The SQL threshold levels used are either those stored in the table STATS$STATSPACK_PARAMETER or those specified when the snapshot is taken.

How to define the Statspack level?
SQL>  EXECUTE STATSPACK.SNAP(i_snap_level=>6);

Gather session statistics and wait events for a particular session
SQL>  EXECUTE STATSPACK.SNAP(i_session_id=>3);

Set the new value as the instance's default
SQL>  EXECUTE STATSPACK.SNAP(i_snap_level=>10, i_modify_parameter=>'true');

A database is creating more than 100 logs every hour. What is happening?

Posted by Sagar Patil

Solution: I tried looking into the statspack report, but it would not flag the insert/update/delete activity. Statspack also does not record some of the system activity that may contribute to redo logs. The only option was to use LogMiner.

1. Start TOAD and click on DBA -> LogMiner

2. Select an FTP directory, or local files, to mine

3. Enter the FTP details of the server

4. Look at the file timestamps and select the required files

5. I have selected the nmst_0000000700.arc file

6. Click on "Options" and select the options to display in the TOAD window

7. Now click on the green arrow and LogMiner will start reading the log files

8. Please be patient for some time and you will see a report

Oracle Performance Tuning | Effect of Optimizer_index_cost_adj parameter on Oracle Execution Plans

Posted by Sagar Patil

I have an SAP system with optimizer_index_cost_adj set to 10. Let's look at the Oracle execution plans and the resulting execution costs.

A value of optimizer_index_cost_adj = 10 will almost always favour index scans over full table scans, yet certain SQL statements would be better off with a full scan (with an increased DB_FILE_MULTIBLOCK_READ_COUNT) than with an index scan.

select a.average_wait c1,
       b.average_wait c2,
       a.total_waits / (a.total_waits + b.total_waits) * 100 c3,
       b.total_waits / (a.total_waits + b.total_waits) * 100 c4
  from v$system_event a,
       v$system_event b
 where a.event = 'db file scattered read'
   and b.event = 'db file sequential read';

These system events indicate that scattered reads (full table scans) average 5.9 ms, while sequential reads (index scans) average 94 ms.
We need to let the Oracle optimizer evaluate which is better, an index scan or a full table scan, but because of the parameter above Oracle will always go for the index even when a full scan would be quicker and less expensive. We are effectively almost using a rule-based optimizer.

I have spooled details of where Oracle is spending most of its time:
Scattered reads - mostly full table scans, which are not happening much
Sequential reads - index scans, which are very frequent

I have also derived an average value for the OPTIMIZER_INDEX_COST_ADJ parameter from the statspack system wait events. The calculation is based purely on the time taken for scattered and sequential reads; looking at the last 4 days of statspack data, its value should be set to 86.
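A common rule of thumb for deriving this value is the ratio of the average single-block ("db file sequential read") wait to the average multi-block ("db file scattered read") wait, expressed as a percentage. The sketch below uses hypothetical wait averages, chosen only to reproduce the value 86; the post's own 4-day statspack figures are not shown.

```python
# Rule-of-thumb estimate for optimizer_index_cost_adj:
# (avg single-block read time / avg multi-block read time) * 100.
avg_sequential_read_ms = 4.3   # hypothetical avg 'db file sequential read' wait
avg_scattered_read_ms = 5.0    # hypothetical avg 'db file scattered read' wait

estimate = round(avg_sequential_read_ms / avg_scattered_read_ms * 100)
print(estimate)  # 86
```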

Example of how a full table scan can be quicker than an index scan

I have picked one of the worst performers for analysis here. The view "VBAP_VAPMA" is based on the VBAP and VAPMA tables, and VBAP is listed in the top wait segments consistently. As we know, optimizer_index_cost_adj is favouring index scans over full table scans even when they perform worse. I have done some calculations below.

SELECT "AEDAT", "AUART", "ERDAT", "ERNAM", "KONDM", "KUNNR", "MATKL", "MATNR",
       "NETWR", "POSNR", "VBELN", "VKORG", "WAERK", "ZZAD_LINE_STATUS",
       "ZZCDO", "ZZCDO_P", "ZZKONDM_P"
  FROM SAPR3."VBAP_VAPMA"
 WHERE "MANDT" = :a0
   AND "AEDAT" > :a1
   AND "AUART" = :a2
   AND "KONDM" = :a3
   AND "VKORG" = :a4
   AND "ZZCDO" >= :a5

Setting optimizer_index_cost_adj = 100 changes the execution plan from the index "VBAP~Z3" to a full table scan.

Optimizer_index_cost_adj=10 (Currently Set)
 SELECT STATEMENT Optimizer Mode=CHOOSE 2 313894
 TABLE ACCESS BY INDEX ROWID SAPR3.VAPMA 1 49 .4
 NESTED LOOPS 2 206 313893.8
 TABLE ACCESS BY INDEX ROWID SAPR3.VBAP 3 K 174 K 312568.2
 INDEX RANGE SCAN SAPR3.VBAP~Z3 15 M 100758
 INDEX RANGE SCAN SAPR3.VAPMA~Z01 1 3

Optimizer_index_cost_adj=100 (Oracle's recommended default value)

SELECT STATEMENT Optimizer Mode=CHOOSE 2 577409
 TABLE ACCESS BY INDEX ROWID SAPR3.VAPMA 1 49 4
 NESTED LOOPS 2 206 577409
 TABLE ACCESS FULL SAPR3.VBAP 3 K 174 K 564153
 INDEX RANGE SCAN SAPR3.VAPMA~Z01 1 3

Let me walk through a simple calculation of how Oracle estimates these execution costs. Please note these are not precise formulas.

Approximate full table scan cost: 484,193 (unadjusted)
The cost here is calculated as "IO + CPU/1000 + NetIO*1.5", but a simple approximation is (number of blocks / DB_FILE_MULTIBLOCK_READ_COUNT):
3,873,549 blocks / 8 = 484,193

If we increase DB_FILE_MULTIBLOCK_READ_COUNT to 32 and reorganize the table, the cost of the full scan will drop to about 82,000, a roughly five-fold reduction in I/O.
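The simplified block-count formula above can be sketched as plain arithmetic (the function name is mine; note that raising the multiblock read count alone gives about 121,000, so the further drop to roughly 82,000 assumes the reorg also shrinks the table's block count):

```python
# Simplified full-table-scan cost model:
# cost ~= number of table blocks / DB_FILE_MULTIBLOCK_READ_COUNT.
def fts_cost(table_blocks, multiblock_read_count):
    return table_blocks // multiblock_read_count  # integer cost estimate

print(fts_cost(3_873_549, 8))   # 484193, matching the figure above
print(fts_cost(3_873_549, 32))  # 121048, before any reorg shrinks the table
```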

Cost of the index scan: 149,483 (adjusted value)
It uses the non-unique index SAPR3."VBAP~Z3", defined on columns MANDT, ZZBU_DIR, ZZBU_EDITION.
There are only 160 distinct values in this index out of 15.9 million rows:

select MANDT, ZZBU_DIR, ZZBU_EDITION from SAPR3.vbap;

Index range scan cost = blevel + (avg leaf blocks per key * (num_rows * selectivity)) = 1,188,451 (actual value), which is greater than the full table scan cost.

Since we have set optimizer_index_cost_adj = 10, the cost the optimizer actually uses is 1,188,451 * 10/100 = 118,845.1, only 10% of the actual overhead.

The final index cost must also include the effort of accessing the data blocks: previous cost + (avg_data_blks_per_key * (clustering_factor / total table blocks)) = 149,483.
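The adjustment step is simple scaling; a quick sketch (the function name is illustrative, the 1,188,451 figure is the unadjusted range-scan cost from above):

```python
# optimizer_index_cost_adj scales the raw index cost down to the value
# the optimizer actually compares against the full-table-scan cost.
def adjusted_index_cost(unadjusted_cost, optimizer_index_cost_adj):
    return unadjusted_cost * optimizer_index_cost_adj / 100

print(adjusted_index_cost(1_188_451, 10))   # 118845.1, i.e. 10% of the real overhead
print(adjusted_index_cost(1_188_451, 100))  # 1188451.0, the unadjusted cost
```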

Conclusion:
We need to let the Oracle optimizer decide the best execution path rather than forcing it to choose indexes all the time. Reverting optimizer_index_cost_adj to its default value must be accompanied by up-to-date statistics, as the cost-based optimizer is heavily dependent on accurate stats.
