Monday, 8 May 2017

Performance Tuning Concepts and AWR Report Analysis and Interpretation - Interview


  

Performance Tuning Concepts

Learn Performance Tuning with Real-Time Scenarios in 30 Days by an Industry Expert

 

This document provides guidance for identifying top SQL, advising on Oracle memory, and dealing with slow database performance.

 

 

Improving performance means:

A) Reducing response time
B) Increasing throughput

How to start tuning?

A) How frequently has the same issue occurred so far?
B) Is the issue reported at the same time every time?
C) Is the issue only from a specific user?
D) Is the issue only from a specific location?
E) Is the issue only from a specific machine?
F) Is the issue only from a specific application?
G) Have there been any changes in the applications, databases, queries,
 O/S, hardware, or network?
H) Does the issue occur in business hours or non-business hours?
I) Are there any backup jobs running at the time of the problem?
J) Have the latest statistics been collected or not?
K) Is there any increase in the number of users/calls/transactions?

Tuning Tools

1) Automated Maintenance Tasks
2) ADDM Report / Regular ADDM Report (pre-12c)
3) ADDM Compare Report (New in 12c)
4) Real-Time ADDM Report (New in 12c)
5) Emergency Monitoring (Enhancement of "Memory Access Mode")
6) ASH Report and AWR Report


Automated Maintenance Tasks

Oracle 11g includes three automated database maintenance tasks:

 Automatic Optimizer Statistics Collection - Gathers stale or missing statistics for all schema objects. The task name is 'auto optimizer stats collection'.
 Automatic Segment Advisor - Identifies segments that could be reorganized to save space. The task name is 'auto space advisor'.
 Automatic SQL Tuning Advisor - Identifies and attempts to tune high-load SQL. The task name is 'sql tuning advisor'.

These tasks run during maintenance windows scheduled to open overnight.

set lines 180 pages 1000
col client_name for a40
col attributes for a60
select client_name, status,attributes,service_name from dba_autotask_client
/
SQL>
CLIENT_NAME                              STATUS   ATTRIBUTES                                                   SERVICE_NAME
---------------------------------------- -------- ------------------------------------------------------------ ------------
auto optimizer stats collection          DISABLED ON BY DEFAULT, VOLATILE, SAFE TO KILL
auto space advisor                       ENABLED  ON BY DEFAULT, VOLATILE, SAFE TO KILL
sql tuning advisor                       ENABLED  ONCE PER WINDOW, ON BY DEFAULT, VOLATILE, SAFE TO KILL


SELECT count(*) FROM dba_autotask_client_history WHERE client_name = 'auto optimizer stats collection' AND window_start_time > (SYSDATE-8);

SELECT CLIENT_NAME,JOBS_STARTED,JOBS_COMPLETED,WINDOW_END_TIME FROM dba_autotask_client_history WHERE client_name = 'auto optimizer stats collection' AND window_start_time > (SYSDATE-8);

CLIENT_NAME                 JOBS_STARTED JOBS_COMPLETED WINDOW_END_TIME
---------------------------------------------------------------- ------------ -------------- ---------------------------------------------------------------------------
auto optimizer stats collection                                             1              1 29-OCT-16 03.00.00.015508 AM +01:00
auto optimizer stats collection                                             1              1 01-NOV-16 02.00.00.020355 AM +00:00
auto optimizer stats collection                                             5              5 30-OCT-16 02.00.00.009613 AM +00:00
auto optimizer stats collection                                             5              5 31-OCT-16 02.00.00.006245 AM +00:00
auto optimizer stats collection                                             1              1 28-OCT-16 03.00.00.011625 AM +01:00
auto optimizer stats collection                                             1              1 26-OCT-16 03.00.00.026329 AM +01:00
auto optimizer stats collection                                             1              1 02-NOV-16 02.00.00.005620 AM +00:00
auto optimizer stats collection                                             1              1 27-OCT-16 03.00.00.031229 AM +01:00

To collect complete information:

ALTER SESSION SET NLS_TIMESTAMP_TZ_FORMAT ='DD/MM/YYYY HH24:MI:SS TZR TZD';
ALTER SESSION SET NLS_DATE_FORMAT ='DD-MM-YYYY HH24:MI:SS';
set pagesize 9999
spool c:\dba_autotask_client.html
set markup html on
select * from DBA_AUTOTASK_CLIENT;
select * from DBA_AUTOTASK_CLIENT_HISTORY;
select * from DBA_AUTOTASK_CLIENT_JOB;
select * from DBA_AUTOTASK_JOB_HISTORY order by JOB_START_TIME;
select * from DBA_AUTOTASK_OPERATION;
select * from DBA_AUTOTASK_SCHEDULE order by START_TIME;
select * from DBA_AUTOTASK_TASK;
select * from DBA_AUTOTASK_WINDOW_CLIENTS;
select * from DBA_AUTOTASK_WINDOW_HISTORY order by WINDOW_START_TIME;
select * from dba_scheduler_windows;
select * from dba_scheduler_window_groups;
select * from dba_scheduler_job_run_details order by ACTUAL_START_DATE;
select * from DBA_SCHEDULER_JOB_LOG;
SELECT program_name, program_action, enabled FROM dba_scheduler_programs;
spool off


ADDM Report / Regular ADDM Report (pre-12c)

 - auto-generated report (system generated)
 - user generated (manually created ADDM)
 - depends on snapshots (AWR-based report)
 - findings fall into two categories
                   - for SQL queries
                   - for waits at the database level (Ex:- memory structures)
 - addmrpt.sql
 - addmrpti.sql
 - dbms_advisor.get_task_report() from the SQL prompt
 - ADDM information can be controlled by statistics_level
 - statistics_level=basic => NO auto ADDM report
 - ADDM's I/O analysis partially depends on the DBIO_EXPECTED parameter


ADDM Compare Report (New in 12c)

 - based on AWR snapshots
 - identifies the major changes with performance impact
 - runs analysis for a base period and a compare period
 - snap1, snap2 => good time
 - snap3, snap4 => bad time
 - dbms_addm is a new PL/SQL package
 - compare report generation is possible


Real-Time ADDM Report (New in 12c)

 - based on ASH data
 - uses the ASH data for the last 10 minutes (available in the SGA)
 - use it when you did NOT find any blockings
 - you are still able to connect as sysdba
 - the "Emergency Monitoring" report is NOT in a position to give the root cause
 - PROD performance is highly sick/slow
 - generating the ADDM report is an emergency


Emergency Monitoring (Enhancement of "Memory Access Mode")

 - useful when PROD is highly sick and you are not able to log in
 - available in 11g R2 under the name "Memory Access Mode"
 - can be enabled/disabled
 - if enabled, collectors are working
 - these collectors collect information directly from the SGA
 - from 12c, it is called "Emergency Monitoring"
 - information is collected directly from the SGA by an "Agent"
 - this information is sent to a single OEM page
 - that page has hang analysis information, blockings,
   a kill option, shutdown options, etc.


ASH Report and AWR Report

ASH Report (Oracle Active Session History) - generate it when performing analysis on sessions that run too frequently, or are too short-lived, to show up in AWR
snapshots; the AWR snapshots are not taken often enough to capture the information that you need. ASH is sampled every second from V$SESSION, so it can show real-time or near real-time session information to assist in doing performance analysis on your database.


Important Summary Sections in AWR Report (or) How to Read AWR Report?

 

Single Instance Report

(report header screenshot not included)

RAC Database AWR Compare Report

(report header screenshot not included)

If DB Time is greater during the bad period, it is likely the database is causing the problem; you have verified the problem is in the database.

    • Release
      Depending on the problem, the database version may be important. If the version is old or is not the latest patchset release then the most up to date fixes may not be applied which has the potential to open the database up to issues.
    • RAC
      If the database is running in a Real Application Cluster configuration then you may need to look at information from the other instances to get a full picture of the database performance
    • Platform
      There may be platform specific issues that have a bearing on the system
    • CPUs/Cores
      In a multi-processor environment, the "wall clock" time is not necessarily a good indicator of how much work the database can do since multiple operations can be pursued simultaneously. You can use cores for an indication of how much CPU work can likely be done at once.
    • Snap Time
      The Snap time shows the times for the starting and ending snapshots for the report period. Does this cover the time of problem that is being encountered?
    • Elapsed time
      The elapsed time indicates the duration of the report between the 2 selected snapshots.
    • DB time
      The DB Time is the time spent in the database for the period of the report. If this is significantly higher than the Elapsed time then this is a good indicator of a heavily loaded system. Remember that on a multi-processor system, you might expect the DB Time to be able to exceed the elapsed time. Additionally, the db time includes the time waiting for the CPU to become available, so this number can be higher than the Elapsed time X Cores. 
      In the example above, the numbers say that the database worked for 2193 minutes in 15 minutes of elapsed time. Whether that is an indication of a problem depends on the capacity and concurrency capabilities of the system. Looking at the numbers, 2193:15 is a ratio of 146:1, so, in this case, if they had significantly less than 146 cpus it is likely that there is some overloading issues.  Remember that the user perception is also a significant factor in whether there is a "performance issue" - if the system delivers what the users want then there might not be a problem!
    • Sessions
      You can use the sessions information along with the DB time to give an average amount of DB time per session. Are there a large number or a small number of connections?
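The DB Time sanity check described above (2193 minutes of DB Time in 15 elapsed minutes, i.e. roughly 146 average active sessions versus the available cores) can be sketched in a few lines. This is a rough rule of thumb, not a precise capacity model; the core counts are illustrative.

```python
# Rough overload check: DB Time / Elapsed Time = average active sessions.
# If that figure greatly exceeds the core count, sessions are likely
# queuing for CPU or waiting on other resources.
def db_load_ratio(db_time_min, elapsed_min):
    """Average number of active sessions over the report period."""
    return db_time_min / elapsed_min

def looks_overloaded(db_time_min, elapsed_min, cores):
    return db_load_ratio(db_time_min, elapsed_min) > cores

ratio = db_load_ratio(2193, 15)        # the example figures from the report
print(round(ratio))                    # 146
print(looks_overloaded(2193, 15, 32))  # True: 146 active sessions >> 32 cores
print(looks_overloaded(2193, 15, 160)) # False: capacity is likely sufficient
```

Remember the caveat from the text: even a high ratio is only a problem if users actually perceive one.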

 

 

 

Important Summary Sections in AWR Report

 

1) Load Profile => load on the instance / instance activities
2) Instance Efficiency Percentages (Target 100%) => overall health of the database
3) Top 5/10 Events => wait events with high performance impact
4) Time Model Statistics => to find the root cause of query issues

Based upon the relevant information in the above sections,
go to the drill-down sections.

Load Profile:-
-------------
Parses
Hard Parses
Physical Reads
Physical Writes

Single Instance Report

(load profile screenshot not included)
Where:

  • DB CPU(s)
    The DB CPU(s) figure shows the amount of the CPU being used by the database. You can use this alongside the actual number of cores to give you an idea of how much of the total available machine CPU is actually being used by this instance.
  • DB Time(s)
    Here the "Per Second" figure gives you another version of the total DB time used, in this case expressed per second as opposed to over the full elapsed period.

 

. Redo size: The amount of redo generated during this report period.

. Logical Reads: Calculated as Consistent Gets + DB Block Gets = Logical Reads.

. Block changes: The number of blocks modified during the sample interval.

. Physical Reads: The number of requests for a block that caused a physical I/O.

. Physical Writes: The number of physical writes issued.

. User Calls: The number of calls (such as logins, parses, and execute/fetch calls) made by user processes.

. Parses: The total of all parses, both hard and soft.

. Hard Parses: Parses requiring a completely new parse of the SQL statement. These consume both latches and shared pool area.

. Soft Parses: Not listed, but derived by subtracting hard parses from parses. A soft parse reuses a previous hard parse and hence consumes far fewer resources.

. Sorts, Logons, Executes and Transactions are all self-explanatory.
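The two derived figures above (Logical Reads and Soft Parses) are simple arithmetic; a quick sketch with made-up numbers, not values from a real report:

```python
# Derived load-profile figures (hypothetical inputs):
#   Logical Reads = Consistent Gets + DB Block Gets
#   Soft Parses   = Parses - Hard Parses (not printed directly in AWR)
def logical_reads(consistent_gets, db_block_gets):
    return consistent_gets + db_block_gets

def soft_parses(parses, hard_parses):
    return parses - hard_parses

print(logical_reads(95_000, 5_000))  # 100000
print(soft_parses(1200, 66))         # 1134
```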

 

 

 

RAC Database Compare Report

 

Instance Efficiency Target (100%):-
----------------------------------

Buffer Hit%
Library Hit%
Soft Parse%
Execute to Parse%
Parse CPU to Parse Elapsed%

 

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           Buffer Nowait %:   99.91       Redo NoWait %:  100.00
           Buffer  Hit   %:   98.14    In-memory Sort %:   99.98
           Library Hit   %:   99.91        Soft Parse %:   94.48
        Execute to Parse %:   99.45         Latch Hit %:   99.97
Parse CPU to Parse Elapsd %:   71.23     % Non-Parse CPU:   99.00


The most important statistic presented here, from the point of view of our example, is '% Non-Parse CPU', because it indicates that almost all of the CPU time we see in the Top Waits section is attributable to execution and not parsing, which means that tuning SQL may help to improve this.

If we were tuning, then the 94.48% soft parse rate would show a small proportion of hard parsing, which is desirable. The high Execute to Parse % indicates good usage of cursors. Generally, we want the statistics here close to 100%, but remember that a few percent may not be relevant depending on the application. For example, in a data warehouse environment, hard parsing may be higher due to usage of materialized views and/or histograms. So, again, comparing to a baseline report from when performance was good is important.
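The parse-related efficiency percentages can be reproduced from raw counts. A sketch with hypothetical totals chosen to land on the 94.48% and 99.45% figures shown in the sample output above:

```python
# Two of the Instance Efficiency formulas, computed from raw counts.
def soft_parse_pct(parses, hard_parses):
    # fraction of parses that were soft (reused an existing cursor)
    return 100.0 * (parses - hard_parses) / parses

def execute_to_parse_pct(executes, parses):
    # how often statements are re-executed without being re-parsed
    return 100.0 * (1.0 - parses / executes)

print(round(soft_parse_pct(10_000, 552), 2))       # 94.48
print(round(execute_to_parse_pct(10_000, 55), 2))  # 99.45
```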

 


Top 5/10 Events:- 
----------------

 

 

 

Buffer Busy Waits
Free Buffer Waits
Latch: Cache Buffers Chains
DB File Scattered Read
DB File Sequential Read
DB File Parallel Read

(in case of Shared Pool or Library Cache Size issues)
Latch: shared pool
Latch: row cache objects

Time Model Statistics:-
----------------------
Parse Time Elapsed
Hard Parse Elapsed Time

 

 


NOTE:- setting shared_pool_reserved_size > 50% of shared_pool_size signals a memory error
=> ORA-04031



Free Buffer Waits:-

Problem 1:- the buffer cache is too small

Possible solutions:-

1) use AMM (memory_target) or ASMM (sga_target)
2) increase the buffer cache size as per
 a) ADDM recommendations
 b) AWR Buffer Pool Advisory results
 c) V$DB_CACHE_ADVICE

Problem 2:- DBWR is slow

Possible solutions:-

1) increase db_writer_processes => increases I/O bandwidth
 (check with the system administrator for a better value)

2) increase I/O bandwidth by striping the datafiles across multiple disks (storage level)

Buffer Busy Waits:-

Find hot blocks / hot segments from

segment statistics
Segments by Buffer Busy Waits

Solutions:-

1) manually keep hot objects in the KEEP buffer pool
2) check the Segment Advisor results
3) check the ADDM recommendations
4) check the tablespace management (whether it is dictionary managed or locally managed)


Latch: cache buffers chains:-
----------------------------

Problem 1:- the cache buffers chains latches are taken when searching for, adding, or deleting
 a buffer on a buffer cache chain

Reason:- multiple users are trying to read codes / descriptions from lookup tables

Solutions:-

a) identify the hot block
b) modify the application to use PL/SQL to read the lookup table once
 and store the codes / descriptions in local variables, which can then be accessed many times

Problem 2:- simultaneous update/select operations on the same block

Solution:- modify the application to commit frequently and reasonably
 => CR clones then need not be created


Read Waits:-
-----------

DB File Scattered Read => waits for a multiblock read from disk during a full table scan

Possible solution:- try to cache frequently used small tables

DB File Sequential Read => waits for a single-block read from disk, typically during index access

Possible solutions:- check the indexes on the table to ensure that the right index is being used;
 check the column order of composite indexes against the WHERE clause

Storage disks
- hard disks
- flash disks (available by default in Exadata servers)


Tuning PGA:-
-----------

What are the other memory structures in the PGA?
 - Hash area (hash join operations for SELECTs)
 - Sort area (sort processing in case of ORDER BY, GROUP BY, DISTINCT, etc.)
 - workarea_size_policy parameter
   - AUTO (default => HASH_AREA_SIZE and SORT_AREA_SIZE are set automatically)
     - by setting PGA_AGGREGATE_TARGET
   - MANUAL

What is a workarea in the PGA?
- a memory area for processing

How to set the optimal size for the PGA?
- based on information about optimal-pass, one-pass and multi-pass operations

Examples:-

1) assume the workarea size is 50 MB
 the SELECT requires 25 MB for processing
 25 MB < 50 MB => optimal pass (very good)


2) assume the workarea size is 50 MB
 the SELECT requires 75 MB for processing
 50 MB + 25 MB = 75 MB => one pass (ok)


3) assume the workarea size is 50 MB
 the SELECT requires 125 MB for processing
 50 MB + 50 MB + 25 MB => more than one pass => multi pass (very bad)
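The three cases above amount to counting how many times the required data must pass through a workarea of a given size. A simplified sketch of that classification (Oracle's real one-pass/multi-pass accounting is more involved; this just mirrors the arithmetic in the examples):

```python
import math

def classify_pass(workarea_mb, required_mb):
    """Classify a sort/hash operation by how it fits the workarea."""
    if required_mb <= workarea_mb:
        return "optimal"  # fits entirely in memory, no spill to temp
    # number of trips through the workarea needed to process the data
    passes = math.ceil(required_mb / workarea_mb)
    return "one-pass" if passes == 2 else "multi-pass"

print(classify_pass(50, 25))   # optimal    (25 MB fits in 50 MB)
print(classify_pass(50, 75))   # one-pass   (50 MB + 25 MB)
print(classify_pass(50, 125))  # multi-pass (50 MB + 50 MB + 25 MB)
```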

sga_target <= sga_max_size

memory_target <= memory_max_target

sga_target = 500 MB => distributed within the SGA only; pga_aggregate_target is set separately

memory_target => SGA + PGA



DB Time is computed by adding the CPU time and wait time of all sessions (excluding waits on idle events).

An AWR report shows the total DB Time for the instance (in the section "Time Model System Stats") during the period covered by the AWR snapshots. If the time model statistic DB CPU consumes most of the database time for the instance, the database was actively processing most of the time. DB Time represents activity on your database: if DB Time exceeds the elapsed time, it denotes a busy database, and if it is a lot higher than the elapsed time, it may mean that some sessions are waiting for resources.


SQL script for getting AWR Report on RAC database:
SQL>@$ORACLE_HOME/rdbms/admin/awrgrpt.sql

SQL script for getting AWR Report for  single instance:
SQL>@$ORACLE_HOME/rdbms/admin/awrrpt.sql

SQL script for getting ASH Report on RAC database:
SQL>@$ORACLE_HOME/rdbms/admin/ashrpti.sql

SQL script for getting ASH Report for single Instance:
SQL>@$ORACLE_HOME/rdbms/admin/ashrpt.sql

Comparing Database Performance Between Two Periods
The AWR Compare Periods report compares performance between two different time periods, with each time
period involving a different pair of snapshots.

 @$ORACLE_HOME/rdbms/admin/awrddrpt.sql




Query Execution Phases (Parse, Bind, Execute and Fetch)

Parse Phase
 - two checks
 - syntax check
 Ex:- select * fromm emp; -----> "FROM" keyword NOT found
 select * emp;

- semantic check
                   Ex: select * from emp; (assume there is NO table with the name "emp")
                                     => "table or view does not exist"
                   update scott.emp set sal=sal+1000; (connected to the "hr" schema)
                                     => "insufficient privileges"
                   select name from emp; (assume the column "name" does NOT exist in "emp")
                                     => "invalid identifier "name""

 
- two types of parses
                   - soft parse
                   - hard parse

-- Hard Parse
If a new SQL statement is issued which does not exist in the shared pool then this has to be parsed fully.
Eg: Oracle has to allocate memory for the statement from the shared pool, check the statement syntactically
and semantically etc... This is referred to as a hard parse and is very expensive in both terms of CPU used
and in the number of latch gets performed.

--Soft Parse
If a session issues a SQL statement which is already in the shared pool AND it can use an existing version
of that statement then this is known as a 'soft parse'.
As far as the application is concerned it has asked to parse the statement.

If two statements are textually identical but cannot be shared, these are called 'versions' of the same statement.
If Oracle matches a statement with many versions, it has to check each version in turn to see
if it is truly identical to the statement currently being parsed. Hence high version counts are best avoided.

shared pool
 - library cache
                   - shared SQL area
                   - holds SQL cursor(s)
 - data dictionary cache
 - result cache


SQL CURSOR?
 - a workarea
 - a memory area
                   - for the workload of a particular SQL statement
                   - holds the information required to run that SQL statement
                   - statistics, selectivity, cardinality, explain plan, access path,
                   optimizer join method, etc.

 - when you issue a particular SQL statement for the first time, a cursor is created in the shared SQL area



Examples:-

    first time:   SELECT * FROM emp;    => store in library cache (Hard Parse)
        2nd time: SELECT * FROM emp;    => soft parse
        3rd time: select * from emp;    => Hard Parse
        4th time: SELECT *  FROM emp;    => Hard Parse
        5th time: SELECT * FROM emp WHERE deptno=10;    => HP

CURSOR_SHARING=EXACT (default)    => character by character, space by space checking

   
    select * from emp where deptno=10;    => hard parse
    select * from emp where deptno=20;    => hard parse
    select * from emp where deptno=30;    => hard parse
    select * from emp where deptno=40;    => hard parse


How to avoid hard parses?

a) follow standards
b) use bind variables in place of literals

    ex: select * from emp where deptno = :dno;    (var command in sqlplus)
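The exact-text matching behaviour above (each literal variant hard parses; a reused bind-variable text soft parses) can be modelled with a toy cache. This is an illustration only, not Oracle's actual library cache logic:

```python
# Toy model of CURSOR_SHARING=EXACT: the library cache is keyed on the
# exact statement text, character by character.
library_cache = set()

def parse(sql_text):
    if sql_text in library_cache:
        return "soft parse"              # existing cursor is reused
    library_cache.add(sql_text)          # new cursor in the shared SQL area
    return "hard parse"

for dno in (10, 20, 30, 40):
    # four distinct texts => four hard parses
    print(parse(f"select * from emp where deptno={dno}"))

print(parse("select * from emp where deptno = :dno"))  # hard parse (first time)
print(parse("select * from emp where deptno = :dno"))  # soft parse (reused)
```

With bind variables, the statement text stays identical across executions, so only the first execution pays the hard-parse cost.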



CURSOR_SHARING
    - EXACT (default)    (good for OLTP) >90% DML operations
    - SIMILAR (deprecated from 12c)
    - FORCE


CURSOR_SHARING=FORCE    (good for OLAP) >90% are SELECT operations

    select * from emp where deptno=10;    => hard parse
    select * from emp where deptno=20;    => soft parse
    select * from emp where deptno=30;    => soft parse
    select * from emp where deptno=40;    => soft parse


   
    select * from emp where deptno = :"SYS_B_0";    (literals replaced by a system-generated bind variable)




Bind Phase
    - optional
    - useful in case of any bind variable in the query
   


every SQL query identified by SQL ID
every execution plan of a Query identified by Plan Hash Value
can I have same SQL ID with different Plan Hash Values?
Ans:  YES



select * from emp where deptno = :dno;

:dno=10    => sqlid1, plan1
:dno=20 => sqlid1, plan2
:dno=30 => sqlid1, plan3


the above is possible from 11g R2 due to Adaptive Cursor Sharing

statistics are an important input to the optimizer for running a query
from 11g R2, selectivity is also an important input to the optimizer

assume emp has 100 rows: dept 10 has 15 rows, dept 20 has 65 rows, dept 30 has 20 rows


selectivity (10th Dept) = 15/100 = 0.15
selectivity (20th dept) = 65/100 = 0.65
selectivity (30th dept) = 20/100 = 0.2


select * from emp where deptno = :dno;

:dno=10    => 0.15, plan1
:dno=20 => 0.65, plan2
:dno=30 => 0.2, plan1
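The selectivity arithmetic above, plus a simplified sketch of how a plan might be chosen per bind value under Adaptive Cursor Sharing. The 0.25 threshold and plan names are purely illustrative; the real optimizer decision is cost-based:

```python
# Selectivity = rows matching the predicate / total rows in the table.
dept_rows = {10: 15, 20: 65, 30: 20}
total_rows = 100

def selectivity(dno):
    return dept_rows[dno] / total_rows

def choose_plan(dno, threshold=0.25):
    # hypothetical cutoff: selective binds favour an index plan,
    # unselective binds favour a full scan
    return "plan2 (full scan)" if selectivity(dno) > threshold else "plan1 (index scan)"

print(selectivity(10))  # 0.15
print(selectivity(20))  # 0.65
print(choose_plan(10))  # plan1 (index scan)
print(choose_plan(20))  # plan2 (full scan)
print(choose_plan(30))  # plan1 (index scan)
```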


Bind Peeking (pre-11g behavior: the optimizer peeks only at the first bind value, so every execution reuses that one plan)



select * from emp where deptno = :dno;

:dno=10    => 0.15, plan1
:dno=20 => 0.65, plan1
:dno=30 => 0.2, plan1


 

 

Case Study

 

1) Check the server load

2) Generate and check the ADDM report, implement findings, re-test

3) Check gathered stats on the database or on particular/relevant tables

4) AWR report covering the problem period

5) AWR report covering a good period of similar load and duration, for comparison

6) AWR Compare Periods report comparing the good and bad periods

7) Collect an ASH report for the same period

8) Collect OSWatcher data

9) Collect the alert log and traces covering the duration of the problem

10) Check the alert log for the period when the issue occurred

11) Find any trace files referenced in the problem period

 

Database Performance issue check

1) Check server load

 topas - shows the top consuming processes and the load on the server

 nmon (then press m) - shows memory and paging utilization

 vmstat 5 5

 lsps -s - shows paging space usage

2) Map OS processes to database processes as below

  select s.sql_id,s.sql_hash_value,s.sid,s.serial#,s.username,s.event,to_char(s.logon_time,'dd-mon:hh24:mi') from v$session s, v$process p where p.addr=s.paddr and p.spid in (1105,4452);

3) Take the explain plan and check the statistics of tables and indexes

select * from table(dbms_xplan.display_cursor('&sql_id',null,'ADVANCED ALLSTATS LAST'));

select a.owner, a.table_name, a.num_rows, b.bytes/1048576 MB, a.last_analyzed from dba_tables a, dba_segments b 

where  a.owner=b.owner and a.table_name=b.segment_name and a.table_name='&table_name' order by b.bytes desc;

4) Check any long running SQL on database

select sid, serial#, context, sofar, totalwork,round(sofar/totalwork*100,2) "%_complete" from v$session_longops where sofar <> totalwork;

5) Check session level and object level lock on database

object level locking

------------------------

select a.sid, a.serial#,OBJECT_NAME from v$session a, v$locked_object b, dba_objects c  where b.object_id = c.object_id  and a.sid = b.session_id;

Blocking Sessions

------------------

SELECT DECODE(request,0,'Holder: ','Waiter: ') || sid sess, id1, id2, lmode, request, type

FROM V$LOCK WHERE (id1, id2, type) IN (SELECT id1, id2, type FROM V$LOCK WHERE request > 0) ORDER BY id1, request;

6) Check module-wise usage and problematic events on the database

select count(1),status,module from v$session group by status,module order by module desc

select count(1),status,module,sql_id from v$session group by status,module,sql_id order by module desc

select count(1),event from v$session_wait group by event;

 

7) Generate AWR,ASH and ADDM Report for further analysis

@ ?/rdbms/admin/awrrpt.sql

@ ?/rdbms/admin/ashrpt.sql

@ ?/rdbms/admin/addmrpt.sql

 @?/rdbms/admin/awrddrpt.sql  -- compare how the database performed during two different periods

JOB slowness check

check job Details

server:dummy10205:/home/oracle $ps -ef|grep uged7551

 tprd001 40698072 28967422   0 00:16:54      -  0:00 /bin/ksh -x /prod/geprod/jobs_exec/uged7551.sh

  oracle 26739026  4325714   0 02:33:01  pts/2  0:00 grep uged7551

server:dummy10205:/home/oracle $ps -ef|grep uged7554

  oracle  3342770  4325714   1 02:35:01  pts/2  0:00 grep uged7554

 tprd001 21496252  6095180   0 00:16:54      -  0:00 /bin/ksh -x /prod/geprod/jobs_exec/uged7554.sh

proctree 21496252

 Server:dummy10205:/home/oracle $proctree 21496252

6095180    p_ctmag

   21496252    /bin/ksh -x /prod/geprod/jobs_exec/uged7554.sh

      58982596    /bin/ksh /prod/geprod/scripts/rungensql.sh ge5790e02 4 10

         14877100    sqlplus   @/prod/geprod/sql/ge5790e02.sql 4 10

            60162290    oracleintclprd (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

Server:dummy10205:/home/oracle $

or

Top users, programs, modules, machines, sql_ids

SELECT * FROM (

 SELECT count(*) AS count, USERNAME, program, module, machine, sql_id

 FROM SYS.DBA_HIST_ACTIVE_SESS_HISTORY, DBA_USERS

 WHERE  DBA_HIST_ACTIVE_SESS_HISTORY.USER_ID=DBA_USERS.USER_ID

 AND session_type='FOREGROUND' and snap_id between 96067 and 96073

 GROUP BY USERNAME, program, module, machine, sql_id

 ORDER BY count(*) DESC) WHERE rownum <= 20

or

SELECT * FROM (

 SELECT count(*) AS count, USERNAME, program, module, machine, sql_id

 FROM SYS.DBA_HIST_ACTIVE_SESS_HISTORY, DBA_USERS

 WHERE  DBA_HIST_ACTIVE_SESS_HISTORY.USER_ID=DBA_USERS.USER_ID

 AND session_type='FOREGROUND'

 AND  sample_time > sysdate-60/1440

 AND sample_time < sysdate

 --AND event = 'library cache: mutex X'

 GROUP BY USERNAME, program, module, machine, sql_id

 ORDER BY count(*) DESC )WHERE rownum <= 20

 

-- Top temp segment consumers above a given threshold

SELECT * FROM (

  SELECT count(*) AS count,username, program, module, machine, sql_id,sum(temp_space_allocated)

  FROM SYS.DBA_HIST_ACTIVE_SESS_HISTORY,DBA_USERS

  WHERE DBA_HIST_ACTIVE_SESS_HISTORY.USER_ID=DBA_USERS.USER_ID

  AND sample_time > sysdate-60/1440

  AND sample_time < sysdate

  AND temp_space_allocated > 100*1024*1024

  GROUP BY USERNAME, program, module, machine, sql_id

  ORDER BY count(*) DESC ) WHERE rownum <= 20

/

 

SET LINESIZE 500

SET PAGESIZE 1000

SET VERIFY OFF

 SELECT b.username,b.sid,b.serial#,b.status,b.sql_id,to_char(b.logon_time,'dd-mon:hh24:mi'),b.machine,b.module

FROM   v$sqltext a,     v$session b  WHERE  a.address = b.sql_address AND    a.hash_value = b.sql_hash_value AND    b.sql_id = 'dt7cud8uwzstp';

or

select sid,serial#,username,program,machine,module from v$session where sql_id='dt7cud8uwzstp';

select count(1),event from v$session  group by event;

select s.sql_id,s.sid,s.serial#,s.username,s.event,to_char(s.logon_time,'dd-mon:hh24:mi')  from v$session s, v$process p

where p.addr=s.paddr and p.spid in(4260298,41418786,5570940,3146020,1507632,4587808,4915642,3080460,2425340);

 

Some important Oracle tools

1) RDA

2) TFA

3) SQLT

4) OS Watcher

5) Various OS commands for identifying bottlenecks

1) Remote Diagnostic Agent (RDA)

RDA is a utility (a set of shell scripts or a Perl script) that can be downloaded from Oracle Support to collect diagnostic information from an Oracle database.

 [oracle@lab rda]$ sh rda.sh

------------------------------------------------------------------------------

RDA Data Collection Started 11-Jun-2016 00:57:32

------------------------------------------------------------------------------

Processing RDA.BEGIN module ...

Enter the password for "SYSTEM" at "cdb1":

Re-enter it to confirm:

Processing OS.PERF module ...

Processing RDA.CONFIG module ...

Processing SAMPLE.SAMPLE module ...

Processing OS.OS module …

Processing OS.PROF module ...

Processing OS.NET module ...

Processing OS.INST module ...

Processing DB.DBA module ...

Processing OFM.WREQ module ...

This can take time. Be patient ...

Processing OFM.IREQ module ...

Processing RDA.LOAD module ...

Processing RDA.END module ...

------------------------------------------------------------------------------

RDA Data Collection Ended 11-Jun-2016 00:58:26

 [oracle@ibmlab rda]$

2) TFA

 

TFA is used for enhanced diagnostic gathering across all cluster nodes (Doc ID 1513912.1).

Examples:

<gi_home>/bin/tfactl diagcollect -from "<MON/DD/YYYY HH:00:00>" -to "<MON/DD/YYYY HH:00:00>"

<gi_home>/bin/tfactl diagcollect -all -since 2d

<gi_home>/bin/tfactl diagcollect -for "<MON/DD/YYYY HH:00:00>"

 

tfactl diagcollect -from "APR/21/2017 14:00:00" -to "APR/21/2017 19:00:00"

 

* If you cannot use TFA, then please collect/upload the following from the problematic node:

 

This note will be obsolete in the future, it's strongly recommended to use TFA to prune and collect files from all nodes:

 

Reference: Document 1513912.1 TFA Collector - Tool for Enhanced Diagnostic Gathering

 

TFA Collector is installed in the GI HOME and comes with 11.2.0.4 GI and higher.  For GI 11.2.0.3 or lower, install the TFA Collector by referring to Document 1513912.1 for instruction on downloading and installing TFA collector.

 

 

$GI_HOME/tfa/bin/tfactl diagcollect -from "MMM/dd/yyyy hh:mm:ss" -to "MMM/dd/yyyy hh:mm:ss"

 

Format example: "Jul/1/2014 21:00:00"

Specify the "from time" to be 4 hours before and the "to time" to be 4 hours after the time of error.

 

 

3) SQLT

SQLT, also known as SQLTXPLAIN is a tool provided by Oracle Server Technologies Center of Expertise that can be used to diagnose why a particular SQL statement is performing poorly. SQLT is very focused and works on one SQL statement at a time. SQLT inputs one SQL statement and outputs a set of diagnostics files. These files can be used to diagnose SQL statements performing poorly.

Starting with Oracle 10.2 there is a script, $ORACLE_HOME/rdbms/admin/sqltrpt.sql, which can be used to run the SQL Tuning Advisor from the command line; it will give recommendations for problematic SQL statements.

[oracle@lab ~]$ sqlplus / as sysdba

SQL> @?/rdbms/admin/sqltrpt.sql

15 Most expensive SQL in the cursor cache

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SQL_ID           ELAPSED SQL_TEXT_FRAGMENT

------------- ---------- -------------------------------------------------------

drnmr4vyabw0b     161.99 CREATE TABLE IDT_ML6121G TABLESPACE TSP_MEDIUM PARALLEL

1z1b4yqc1z9gb     151.69 DECLARE   cnt      NUMBER;   bid      NUMBER;   eid

8bbt6aufcb7uc      91.73 BEGIN PACK_ML6121G02.PC_ADM_ML6121G02(9,10); END;

15 Most expensive SQL in the workload repository

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SQL_ID           ELAPSED SQL_TEXT_FRAGMENT

------------- ---------- -------------------------------------------------------

gay2fcy4gd2ka ########## BEGIN pack_ml8704e.getTriadBatchData_ext(:1, :2, :3, :4

b6usrg82hwsa3 ########## call dbms_stats.gather_database_stats_job_proc (  )

9pask37fn7ktu ########## SELECT CREDIT_LIMIT_DECISION_SID, CRED_LIMIT_DECISION_T

bcdw1bfudc6u0  86,840.00 SELECT FUNDING_TYPE_CODE FROM CARD_TRANSACTION CT, FUND

as1c06b0zckuv  82,945.66 SELECT * FROM ( SELECT CLD.CREDIT_MONETARY_LIMIT, TCLS.

Specify the Sql id

~~~~~~~~~~~~~~~~~~

Enter value for sqlid: 6cg8puc5mz1va

Sql Id specified: 6cg8puc5mz1va

Tune the sql

~~~~~~~~~~~~

GENERAL INFORMATION SECTION

-------------------------------------------------------------------------------

Tuning Task Name                  : TASK_952340

Tuning Task Owner                 : SYS

Scope                             : COMPREHENSIVE

Time Limit(seconds)               : 1800

Completion Status                 : COMPLETED

Started at                        : 05/28/2014 10:02:51

Completed at                      : 05/28/2014 10:03:01

Number of SQL Profile Findings    : 1

-------------------------------------------------------------------------------

Schema Name: PILOT_DBA

 

SQL ID     : 6cg8puc5mz1va

SQL Text   : SELECT COUNT (*) COUNT_ANNOTATIONS FROM ANNOTATION_TYPE ANN_TY,

             ANNOTATION ANN, TRIAD_CRED_STRAT_ANNO_ASSIGN TRD_CS_ANN_ASS

             WHERE ANN.CONTRACT_NUMBER = :B4 AND ANN.ANNOTATION_TYPE_CODE =

             ANN_TY.ANNOTATION_TYPE_CODE AND TRD_CS_ANN_ASS.CREDIT_STRATEGY_RE

             FNO (+)= :B3 AND TRD_CS_ANN_ASS.ANNOTATION_TYPE_CODE (+)=

             ANN.ANNOTATION_TYPE_CODE AND ( (ANN.DATE_VALID_UNTIL >= :B1 ) OR

             ( TRD_CS_ANN_ASS.CREDIT_STRATEGY_REFNO IS NULL AND (

             ANN.DATE_VALID_UNTIL IS NULL OR ANN.DATE_VALID_UNTIL =

             ANN.DATE_VALID_FROM ) AND ANN.DATE_VALID_FROM + :B2 >= :B1 ) OR

             ( TRD_CS_ANN_ASS.CREDIT_STRATEGY_REFNO IS NOT NULL AND (

             ANN.DATE_VALID_UNTIL IS NULL OR ANN.DATE_VALID_UNTIL =

             ANN.DATE_VALID_FROM ) AND ANN.DATE_VALID_FROM +

             PACK_DATE_UTILS.FN_CONVERT_PERIOD (

             TRD_CS_ANN_ASS.PERIOD_ANNOTATION_EFFECTIVE,

             TRD_CS_ANN_ASS.PERIOD_UNIT_ANNOTATION_EFFECT, 'DY') >=:B1 ) )

 

-------------------------------------------------------------------------------

FINDINGS SECTION (1 finding)

-------------------------------------------------------------------------------

1- SQL Profile Finding (see explain plans section below)

--------------------------------------------------------

  A potentially better execution plan was found for this statement.

  Recommendation (estimated benefit: 19.73%)

  ------------------------------------------

  - Consider accepting the recommended SQL profile.

    execute dbms_sqltune.accept_sql_profile(task_name => 'TASK_952340',

            replace => TRUE);

-------------------------------------------------------------------------------

EXPLAIN PLANS SECTION

-------------------------------------------------------------------------------

1- Original With Adjusted Cost

------------------------------

Plan hash value: 1889861245

----------------------------------------------------------------------------------------------------------------

| Id  | Operation                       | Name                         | Rows  | Bytes | Cost (%CPU)| Time     |

----------------------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT                |                              |     1 |    41 |    10   (0)| 00:00:01 |

|   1 |  SORT AGGREGATE                 |                              |     1 |    41 |            |          |

|   2 |   NESTED LOOPS                  |                              |     9 |   369 |    10   (0)| 00:00:01 |

|*  3 |    FILTER                       |                              |       |       |            |          |

|   4 |     NESTED LOOPS OUTER          |                              |     9 |   333 |    10   (0)| 00:00:01 |

|   5 |      TABLE ACCESS BY INDEX ROWID| ANNOTATION                   |    17 |   408 |     7   (0)| 00:00:01 |

|*  6 |       INDEX RANGE SCAN          | PF_ANN_01                    |     3 |       |     4   (0)| 00:00:01 |

|   7 |      TABLE ACCESS BY INDEX ROWID| TRIAD_CRED_STRAT_ANNO_ASSIGN |     1 |    13 |     1   (0)| 00:00:01 |

|*  8 |       INDEX UNIQUE SCAN         | PK_DIC_TRD_CS_ANN_ASS        |     1 |       |     0   (0)| 00:00:01 |

|*  9 |    INDEX UNIQUE SCAN            | PK_ANN_TY                    |     1 |     4 |     0   (0)| 00:00:01 |

----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

 

   3 - filter("ANN"."DATE_VALID_UNTIL">=:B1 OR "TRD_CS_ANN_ASS"."CREDIT_STRATEGY_REFNO" IS NULL AND

              ("ANN"."DATE_VALID_UNTIL" IS NULL OR "ANN"."DATE_VALID_UNTIL"="ANN"."DATE_VALID_FROM") AND

              INTERNAL_FUNCTION("ANN"."DATE_VALID_FROM")+:B2>=:B1 OR ("ANN"."DATE_VALID_UNTIL" IS NULL OR

              "ANN"."DATE_VALID_UNTIL"="ANN"."DATE_VALID_FROM") AND

              INTERNAL_FUNCTION("ANN"."DATE_VALID_FROM")+"PACK_DATE_UTILS"."FN_CONVERT_PERIOD"("TRD_CS_ANN_ASS"."PERIO

              D_ANNOTATION_EFFECTIVE","TRD_CS_ANN_ASS"."PERIOD_UNIT_ANNOTATION_EFFECT",'DY')>=:B1 AND

              "TRD_CS_ANN_ASS"."CREDIT_STRATEGY_REFNO" IS NOT NULL)

   6 - access("ANN"."CONTRACT_NUMBER"=:B4)

 

   8 - access("TRD_CS_ANN_ASS"."CREDIT_STRATEGY_REFNO"(+)=:B3 AND

              "TRD_CS_ANN_ASS"."ANNOTATION_TYPE_CODE"(+)="ANN"."ANNOTATION_TYPE_CODE")

   9 - access("ANN"."ANNOTATION_TYPE_CODE"="ANN_TY"."ANNOTATION_TYPE_CODE")

 

2- Using SQL Profile

--------------------

Plan hash value: 1435743108

----------------------------------------------------------------------------------------------------------------

| Id  | Operation                       | Name                         | Rows  | Bytes | Cost (%CPU)| Time     |

----------------------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT                |                              |     1 |    41 |     8   (0)| 00:00:01 |

|   1 |  SORT AGGREGATE                 |                              |     1 |    41 |            |          |

 

|*  2 |   FILTER                        |                              |       |       |            |          |

|   3 |    NESTED LOOPS OUTER           |                              |     9 |   369 |     8   (0)| 00:00:01 |

|   4 |     NESTED LOOPS                |                              |     3 |    84 |     7   (0)| 00:00:01 |

|   5 |      TABLE ACCESS BY INDEX ROWID| ANNOTATION                   |    17 |   408 |     7   (0)| 00:00:01 |

|*  6 |       INDEX RANGE SCAN          | PF_ANN_01                    |     3 |       |     4   (0)| 00:00:01 |

|*  7 |      INDEX UNIQUE SCAN          | PK_ANN_TY                    |     1 |     4 |     0   (0)| 00:00:01 |

|   8 |     TABLE ACCESS BY INDEX ROWID | TRIAD_CRED_STRAT_ANNO_ASSIGN |     1 |    13 |     1   (0)| 00:00:01 |

|*  9 |      INDEX UNIQUE SCAN          | PK_DIC_TRD_CS_ANN_ASS        |     1 |       |     0   (0)| 00:00:01 |

----------------------------------------------------------------------------------------------------------------

 

Predicate Information (identified by operation id):

---------------------------------------------------

   2 - filter("ANN"."DATE_VALID_UNTIL">=:B1 OR "TRD_CS_ANN_ASS"."CREDIT_STRATEGY_REFNO" IS NULL AND

              ("ANN"."DATE_VALID_UNTIL" IS NULL OR "ANN"."DATE_VALID_UNTIL"="ANN"."DATE_VALID_FROM") AND

              INTERNAL_FUNCTION("ANN"."DATE_VALID_FROM")+:B2>=:B1 OR ("ANN"."DATE_VALID_UNTIL" IS NULL OR

              "ANN"."DATE_VALID_UNTIL"="ANN"."DATE_VALID_FROM") AND

              INTERNAL_FUNCTION("ANN"."DATE_VALID_FROM")+"PACK_DATE_UTILS"."FN_CONVERT_PERIOD"("TRD_CS_ANN_ASS"."PERIO

              D_ANNOTATION_EFFECTIVE","TRD_CS_ANN_ASS"."PERIOD_UNIT_ANNOTATION_EFFECT",'DY')>=:B1 AND

              "TRD_CS_ANN_ASS"."CREDIT_STRATEGY_REFNO" IS NOT NULL)

   6 - access("ANN"."CONTRACT_NUMBER"=:B4)

   7 - access("ANN"."ANNOTATION_TYPE_CODE"="ANN_TY"."ANNOTATION_TYPE_CODE")

   9 - access("TRD_CS_ANN_ASS"."CREDIT_STRATEGY_REFNO"(+)=:B3 AND

              "TRD_CS_ANN_ASS"."ANNOTATION_TYPE_CODE"(+)="ANN"."ANNOTATION_TYPE_CODE")

 

-------------------------------------------------------------------------------

 

This report recommends accepting a SQL profile to improve the performance of the above query:

     execute dbms_sqltune.accept_sql_profile(task_name => 'TASK_952340',

            replace => TRUE);

 

4) OS Watcher - Installation Steps:

OS Watcher is a series of shell scripts that collect specific kinds of data, using operating system diagnostic utilities. Control is passed to individually spawned operating system data collector processes, which in turn collect specific data, time stamp the data output, and append the data to pre-generated and named files. Each data collector will have its own file, created and named by the File Manager process. OSW invokes the distinct operating system utilities listed below as data collectors. OSW will not put any significant performance affecting load on the system. It will have the same impact as running the regular OS command like netstat, ps etc. These utilities will be supported, or their equivalents, as available for each supported target platform:

* ps

* top

* mpstat

* iostat

* netstat

* traceroute

* vmstat

 

The size of the archived files saved during the running of the OSW will be based on the user parameters set at the starting of OSW and the OS information. For example, if each file will contain an hour of data and the collection interval is 10 sec the amount of data will be bigger compared to collecting with an interval of 60 sec.

It is highly recommended that OSW be installed and run continuously on ALL cluster nodes, at all times (Document 301137.1).

Be sure to use separate directories per node for storing OSW output. When using OS Watcher in a RAC environment, each node must write its output files to a separate archive directory. Combining the output files under one archive (on shared storage) is not supported and causes the OSW tool to crash. Shared storage is fine, but each node needs a separate archive directory.

Document 301137.1 - OS Watcher User Guide

OSW for WINDOWS:  OS Watcher for Windows is no longer supported. It has been replaced by the Cluster Health Monitor. Please see Document ID: 736752.1 for more information on, and how to download, the Cluster Health Monitor.

 

[oracle@ibmlab jb]$ pwd

/home/oracle/jb

[oracle@ibmlab jb]$ ls

oswbb733.tar

[oracle@ibmlab jb]$ tar -xvf oswbb733.tar

oswbb/

oswbb/docs/

oswbb/docs/The_Analyzer/

oswbb/docs/The_Analyzer/OSWatcherAnalyzerOverview.pdf

oswbb/docs/The_Analyzer/oswbbaUserGuide.pdf

...

oswbb/oswrds.sh

[oracle@ibmlab jb]$

 

Start OS Watcher using the scripts as the oracle user to capture data every 15 seconds, keeping 300 hours of archive data, compressed with gzip:

nohup ./startOSWbb.sh 15 300 gzip &

 [oracle@ibmlab oswbb]$ ps -ef |grep -i osw

oracle   21464     1  0 19:39 pts/1    00:00:00 /bin/sh ./OSWatcher.sh 15 300 gzip

oracle   21522 21464  0 19:39 pts/1    00:00:00 /bin/sh ./OSWatcherFM.sh 300 /home/oracle/jb/oswbb/archive

oracle   21943 21266  0 19:41 pts/1    00:00:00 grep -i osw

[oracle@ibmlab oswbb]$

To stop OSWatcher:

[oracle@ibmlab oswbb]$ ./stopOSWbb.sh

[oracle@ibmlab oswbb]$

To uninstall OSWatcher:

Simply delete the OSWatcher parent directory.

 

6) Various OS commands to identify bottlenecks:

df -h / df -g

The file system may run out of space, and the database may not even allow logins because it has no space to generate OS-level files. We can use the commands below to investigate and resolve the issue.

 

 

If df shows that a file system such as /opt is above 80% used, that is a warning; we should check and remove the files which are no longer required.

vmstat (virtual memory statistics) helps determine where the system is spending its resources.

/home/oracle $vmstat 5 5

System configuration: lcpu=32 mem=90112MB ent=2.00

kthr    memory              page              faults              cpu

----- ----------- ------------------------ ------------ -----------------------

 r  b   avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa    pc    ec

 5  0 20454530 211691   0   0   0   0    0   0 104 26418 1889  8  6 87  0  0.42  20.9

 4  0 20454530 211691   0   0   0   0    0   0 1856 11360 3806 15 12 73  0  0.86  43.1

 4  0 20454531 211687   0   0   0   0    0   0 418 21436 2441 11  7 82  0  0.58  29.0

 4  0 20454531 211664   0   0   0   0    0   0 108 13954 1947  9  4 87  0  0.43  21.3

 2  0 20456049 210136   0   0   0   0    0   0  81 18128 1872  8  5 88  0  0.40  19.8

 

The r column displays the number of processes waiting for access to a processor. The b column displays the number of processes in a sleep state. These values are usually zero.

 If b (processes sleeping) and r are consistently greater than 0, then you may be using more CPU than is available.

 If po (memory swapped out to disk) and pi (memory swapped in from disk) are consistently greater than 0, you may have a memory bottleneck. Paging and swapping occur when there isn't enough physical memory to accommodate the memory demand.

iostat (I/O Stats)

Reports I/O statistics for devices and partitions.

 

sar (System Activity Reporter) is a legacy system monitoring tool used to report CPU activity, system load averages, memory/paging, LUN load, and network activity.

 

bash-2.05$ sar -p 10 10

SunOS drsmgtp1 5.9 Generic_122300-31 sun4u    06/15/2016

11:31:07  atch/s  pgin/s ppgin/s  pflt/s  vflt/s slock/s

11:31:17    8.00    2.40    6.00    6.40   18.30    0.00

11:31:27    0.70    2.80    2.80    0.00    0.40    0.00

11:31:37    0.40    1.50    1.50    0.10    0.00    0.00

11:31:47    4.30    4.40    4.40    5.90   16.10    0.00

11:31:57    0.70    2.80    2.80    0.00    0.00    0.00

11:32:07    0.40    1.60    1.60    0.70    0.00    0.00

11:32:17    3.50    1.20    1.20    5.90   16.10    0.00

11:32:27    0.40    1.90    5.70    0.10    0.00    0.00

11:32:37    1.20    0.00    0.00    0.00    0.00    0.00

11:32:47    4.40    0.00    0.00    5.90   16.20    0.00

Average     2.40    1.86    2.60    2.50    6.71    0.00

bash-2.05$

top is a command that shows load information along with memory and process utilization details.

 

ps is a command to quickly identify the processes running on Unix and Linux servers.

 [oracle@ibmlab cdump]$ ps -ef |grep pmon

oracle   23212 22307  0 21:04 pts/3    00:00:00 grep pmon

oracle   27788     1  0 Jun10 ?        00:00:20 ora_pmon_cdb1

[oracle@ibmlab cdump]$

free -m is a command to get memory usage information on the Linux operating system.

 [oracle@ibmlab cdump]$ free -m

             total       used       free     shared    buffers     cached

Mem:          3962       3897         65          0        139       3170

-/+ buffers/cache:        587       3375

Swap:        10239         18      10221

[oracle@ibmlab cdump]$

 

NMON

nmon displays usage of memory, CPU, disk, and more, for example:

  1. Paging info
  2. Memory and paging in one output



 ==============SQL Tuning Enhancement in 12c =================
SPM V/S SQL PROFILE

1) SQL profile is reactive
2) SPM is proactive
3) SPM reproduces a specific plan
4) SQL profile corrects CBO estimates
5) SQL profile doesn't guarantee plan stability
6) SPM provides plan stability

SQL Plan Management (SPM)

  • introduced in 11g
  • preserves performance
  • only accepted plans are used
  • new plans are saved for future evaluation
  • enhanced in 12c
  • the whole idea is to avoid untested new plans
SPM BASELINE

  • proactive and conservative approach
  • guarantees execution plan stability
  • only accepted/verified plans are used
  • enforces a specific plan
  • new plans are stored in the plan history
  • purges history according to the retention policy








  • CDB resource plan determines the amount of resources allocated to each PDB.
  • A PDB resource plan determines how the resources allocated to a specific PDB are allocated to consumer groups within that PDB

To understand the parameters that change execution plan behavior:

How it works
-----------------------

The optimizer considers several inputs when generating a low-cost execution plan. It depends on:

          Initialization parameters
          Statistics
          Hints
          SQL profiles
          SQL plan management (plan baselines)


All of these are features from earlier versions.

          Adaptive SQL plan management (12c)
          Adaptive query optimization (12c)

These are new 12c features; we are going to discuss these 12c SQL tuning enhancements in detail.


When we run a SQL statement, the query optimizer creates an execution plan based on the inputs above. An execution plan is the combination of steps Oracle takes to retrieve or modify the data; the optimizer quickly produces several candidate plans and determines which is most efficient.
When troubleshooting SQL performance problems, you must know which of these features are enabled and how they influence query behavior.


1) Initialization parameters - set at the database or session level with an ALTER SYSTEM or ALTER SESSION statement; they influence aspects such as how efficiently query result sets are delivered to the client application.
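As a sketch (the parameter values below are examples only, not recommendations), optimizer-related parameters can be inspected and changed at the session or system level:

```sql
-- Show optimizer-related initialization parameters
SHOW PARAMETER optimizer

-- Example only: change optimizer behavior for the current session
ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;

-- Example only: raise the dynamic sampling level database-wide
ALTER SYSTEM SET optimizer_dynamic_sampling = 4 SCOPE = BOTH;
```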

2) Statistics - provide the optimizer with characteristics of the tables, data, and indexes so that it can generate better execution plans. Statistics are automatically enabled and gathered, and we can also collect them manually.
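A minimal sketch of manual collection with DBMS_STATS (the SCOTT/EMP names are placeholders):

```sql
-- Gather statistics for one table, letting Oracle pick the sample size
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);          -- also gather index statistics
END;
/

-- Check when statistics were last gathered
SELECT table_name, last_analyzed
FROM   dba_tables
WHERE  owner = 'SCOTT';
```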

3) Hints - hints are the only feature that requires a physical modification to the SQL query; all of the other techniques can be used to improve performance without changing the query text. A hint is a suggestion to the optimizer that influences its decisions when generating an execution plan.
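For illustration (the table and index names are hypothetical), a hint is embedded in the statement text itself:

```sql
-- Suggest a specific index to the optimizer
SELECT /*+ INDEX(e emp_dept_idx) */ e.ename, e.sal
FROM   emp e
WHERE  e.deptno = 10;

-- Or request a full table scan instead
SELECT /*+ FULL(e) */ COUNT(*) FROM emp e;
```

A hint is only a suggestion; if it cannot be applied (for example, the named index does not exist), the optimizer silently ignores it.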

4) SQL profiles - a SQL profile is a database object that contains corrections and improvements to optimizer statistics for a particular SQL statement.

1.         Run the SQL Tuning Advisor.
2.         Review the tuning report.

If the advisor recommends a profile, accept it via the DBMS_SQLTUNE package.
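A sketch of the full cycle with DBMS_SQLTUNE, reusing the SQL_ID and task name from the sqltrpt.sql report earlier in this post:

```sql
-- 1. Create and run a tuning task for one statement
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '6cg8puc5mz1va');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/

-- 2. Review the findings
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('TASK_952340') FROM dual;

-- 3. If a profile is recommended, accept it
EXEC DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => 'TASK_952340', replace => TRUE);
```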


5) Plan baselines--

1.         Identify queries.
2.         Enable via the DBMS_SPM package.

A plan baseline contains only a set of accepted execution plans; if a plan baseline exists for the query, the optimizer will choose the lowest-cost accepted plan from the baseline, giving preference to accepted plans. Plan baselines allow us to store and manage execution plans within tables in the database. They consist of one or more stored execution plans that have been accepted for a SQL query. When you run a query, if a valid execution plan exists within a plan baseline for that query, the optimizer will use that execution plan (instead of an execution plan generated on the fly). Plan baselines help ensure that the optimizer consistently chooses the same execution plan, regardless of changes in the database environment. Plan baselines provide the following benefits:

•  Preserving performance when upgrading from one database version to another; in other words, helping ensure that the same execution plan is used for a query before and after the upgrade
•  Keeping performance stable and consistent when data changes, statistics are updated, or new SQL profiles become available
•  Providing a mechanism for adding more efficient execution plans as they become available (for example, when a new index is added or a SQL profile becomes available)
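As a hedged sketch (the SQL_ID is the one used earlier in this post), plans already in the cursor cache can be loaded into a baseline with DBMS_SPM:

```sql
-- Capture the current plan(s) for one statement into a baseline
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '6cg8puc5mz1va');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans);
END;
/

-- Verify which plans are enabled and accepted
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;
```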


6) Adaptive SQL plan management

Starting with Oracle Database 12c, if a plan baseline exists for a query, and a new plan is generated that has a significantly lower cost than an existing plan in the baseline, then Oracle will automatically verify the new plan and, if it performs better, add it to the plan baseline. This is known as adaptive SQL plan management.

7) Adaptive query optimization - instructs the optimizer to optimize execution plans using the following features:

     Adaptive plans
     Automatic re-optimization
      SQL plan directives
      Dynamic statistics enhancements

An adaptive plan is an execution plan that the optimizer can improve through metrics automatically gathered while a query executes. If the actual execution statistics vary significantly from the original plan statistics, the optimizer stores the new statistics.
When a query first executes, the optimizer monitors it, and if the actual execution statistics vary significantly from the original plan statistics, the optimizer will automatically re-optimize the query (and generate a more efficient execution plan) the next time it executes.
As a query executes, the optimizer also collects metrics to determine whether additional statistics might help generate a more efficient execution plan. If so, a SQL plan directive is created that records the desirable statistics.

================SQL Management Base (SMB)=============

SQL Management Base (SMB) - it contains the statement log, plan history, plan baselines, and SQL profiles. The SMB is part of the database dictionary and is stored in the SYSAUX tablespace.
The SMB has automatic space management (for example, periodic purging of unused plans).
You can configure the SMB to change the plan retention policy and set space size limits.

 If the database instance is up but the SYSAUX tablespace is OFFLINE, the optimizer is unable to access SQL management objects. This can affect the performance of some of the SQL workload.
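A short sketch of inspecting and changing the SMB configuration (the retention and space values are examples only):

```sql
-- Current SMB configuration
SELECT parameter_name, parameter_value
FROM   dba_sql_management_config;

-- Example values: keep plans for 10 weeks, cap SMB at 20% of SYSAUX
EXEC DBMS_SPM.CONFIGURE('plan_retention_weeks', 10);
EXEC DBMS_SPM.CONFIGURE('space_budget_percent', 20);
```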


8) Statement log - contains a log of SQL statements that have executed more than once; the optimizer recognizes a repeatable SQL statement by maintaining this log.

9) Plan history - a plan history contains the different plans generated by the optimizer for a SQL statement over time. If a plan baseline exists for a query, and the optimizer's low-cost plan has a lower cost than the plans in the baseline, the low-cost plan is automatically added to the plan history for the query in an unaccepted state. You can choose to move plans from the plan history into the plan baseline so that the optimizer will consider them when choosing an execution plan. This gives you the flexibility to use better plans as they become available (evolving the plan).

For example, a new plan generated by the optimizer is not normally used until it has been verified not to cause a performance regression. Plan verification is done "out of the box" as part of Automatic SQL Tuning, which runs as an automated task in a maintenance window and targets only high-load SQL statements.
The plan history contains both accepted and unaccepted execution plans for a query.
You can manually change the state of an unaccepted plan to accepted (this moves it into the plan baseline). This is known as evolving a plan baseline.


SQL plan management 

With SPM, the optimizer automatically manages execution plans and ensures that only known
or verified plans are used. When a new plan is found for a SQL statement, the plan is not
used until the database verifies that its performance is comparable to or better than the current plans.
The components of SPM are SQL plan baseline capture, SQL plan baseline selection and
SQL plan baseline evolution. There are two ways to seed or populate a SQL management
base (SMB) with execution plans:

Automatic capture of execution plans (available starting with Oracle Database 11g)
Bulk load execution plans or pre-existing SQL plan baselines

From Oracle 12c, the database can use adaptive SPM. With adaptive SQL plan management, DBAs no longer have to manually run the verification or evolve process for non-accepted plans.
They can go back days or weeks later and review which plans were evolved during each nightly maintenance window. When the SPM Evolve Advisor is enabled, it runs a verification/evolve process for all SQL statements that have non-accepted plans during the nightly maintenance window.
All the non-accepted plans are ranked and the evolve process is run for them; the newly found plans are ranked the highest.
If a non-accepted plan performs better than the existing accepted plans in the SQL plan baseline, the plan is automatically accepted and becomes usable by the optimizer.
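A sketch of reviewing and driving the evolve process (the sql_handle is a placeholder):

```sql
-- Report what the automatic evolve task did (12c)
SELECT DBMS_SPM.REPORT_AUTO_EVOLVE_TASK FROM dual;

-- Manually evolve the non-accepted plans for one baseline
SELECT DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(sql_handle => 'SQL_abc123def456')
FROM   dual;
```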



===========================Reoptimization============================

Reoptimization fixes plan problems that dynamic plans cannot solve. Whereas dynamic plans help change relatively local parts of a plan, they are not feasible for some global plan changes. In reoptimization, the optimizer adapts a plan only on subsequent statement executions.
A query may be reoptimized several times, each time creating a larger and more precise set of optimizer adjustments.

For example, a query with an inefficient join order might cause suboptimal
performance, but it is not feasible to adapt the join order during execution. In these cases, the
optimizer automatically considers re-optimization. For reoptimization, the optimizer must
decide which statistics to collect and when.
As of Oracle Database 12c, join statistics are also included. Statements are continually
monitored to verify if statistics fluctuate over different executions. It works with adaptive cursor
sharing for statements with bind variables. This feature improves the ability of the query
processing engine (compilation and execution) to generate better execution plans.

The optimizer uses the statistics in the following ways:

•  The statistics collectors and their associated optimizer estimates determine whether automatic reoptimization is an option. If statistics differ from estimates by more than a threshold, the optimizer looks for a replacement plan.
•  After the optimizer identifies a query as a reoptimization candidate, the database submits all collected statistics to the optimizer.
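To see reoptimization in action, the 12c V$ views expose which cursors are candidates (a sketch; results depend on the workload):

```sql
-- Cursors the optimizer has marked for automatic reoptimization
SELECT sql_id, child_number, is_reoptimizable
FROM   v$sql
WHERE  is_reoptimizable = 'Y';

-- After re-execution, child cursors built using statistics feedback
SELECT sql_id, child_number, use_feedback_stats
FROM   v$sql_shared_cursor
WHERE  use_feedback_stats = 'Y';
```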



=================SQL plan directive=================

In previous releases, the database stored compilation and execution statistics in the shared SQL area, which is not persistent. Starting with Oracle Database 12c, the database can use a SQL plan directive, which is additional information and instructions that the optimizer can use to generate a more optimal plan.

For example, a SQL plan directive might instruct the optimizer to collect missing statistics, create column group statistics, or perform dynamic sampling. During SQL compilation or execution, the database analyzes the query expressions that are missing statistics and cause the optimizer to misestimate cardinality to create SQL plan directives. When the optimizer generates an execution plan, the directives give the optimizer additional information about objects that are referenced in the plan.
SQL plan directives are not tied to a specific SQL statement or SQL ID. The optimizer can
use SQL plan directives for SQL statements that are nearly identical because SQL plan
directives are defined on a query expression. For example, directives can help the optimizer
with queries that use similar patterns, such as web-based queries that are the same except
for a select list item. The database stores SQL plan directives persistently in the SYSAUX
tablespace. When generating an execution plan, the optimizer can use SQL plan directives to
obtain more information about the objects that are accessed in the plan.
Directives are automatically maintained: they are created as needed and purged if not used for a year.
Directives can be monitored through DBA_SQL_PLAN_DIRECTIVES and DBA_SQL_PLAN_DIR_OBJECTS. SQL plan directives improve plan accuracy by persisting both compilation and execution statistics in the SYSAUX tablespace, allowing them to be used by multiple SQL statements.
In conclusion, these features (creating column group statistics, performing dynamic sampling) help the optimizer generate an optimal and accurate plan in cases where, due to missing statistics, cardinality misestimates, complex join predicates, or complex query expressions, the optimizer could not otherwise do so.
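A sketch of monitoring directives through the dictionary views mentioned above:

```sql
-- List SQL plan directives and the objects they apply to (12c)
SELECT d.directive_id, d.type, d.state, d.reason,
       o.owner, o.object_name, o.subobject_name
FROM   dba_sql_plan_directives  d
JOIN   dba_sql_plan_dir_objects o ON o.directive_id = d.directive_id
ORDER  BY d.directive_id;
```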


==================Dynamic sampling======================

Dynamic sampling (DS) was introduced in Oracle Database 9i Release 2
to help the optimizer generate good execution plans. The goal of DS is to augment the optimizer statistics; it is used when regular statistics are not sufficient to produce good-quality cardinality estimates. So how and when is DS used? During the compilation of a SQL statement, the optimizer decides whether to use DS by considering whether the available statistics are sufficient to generate a good execution plan. If they are not, dynamic sampling will be used. It is typically used to compensate for missing or insufficient statistics that would otherwise lead to a very bad plan.
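As an illustration (the table and column names are hypothetical), the sampling level can be checked globally or requested per statement with a hint:

```sql
-- Current dynamic sampling level (the default is 2)
SHOW PARAMETER optimizer_dynamic_sampling

-- Ask for a deeper sample (level 4) for this statement only
SELECT /*+ DYNAMIC_SAMPLING(t 4) */ COUNT(*)
FROM   freshly_loaded_table t
WHERE  status = 'NEW';
```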

 Gathering Extended Query Execution Statistics

Problem

You don't know whether the cardinality estimates in your explain plan are accurate for a specific query, and you want to gather extended execution statistics to validate those estimates.
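A sketch of the standard technique for this (the query itself is a placeholder): run the statement once with the gather_plan_statistics hint, then compare estimated and actual row counts with DBMS_XPLAN:

```sql
-- Execute once, collecting row-source statistics
SELECT /*+ gather_plan_statistics */ COUNT(*)
FROM   emp
WHERE  deptno = 10;

-- E-Rows (estimated) vs A-Rows (actual) per plan step
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Large gaps between E-Rows and A-Rows point at the estimates that need better statistics.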



================Histogram==========================

Histogram Enhancements in Oracle Database 12c Release 1 (12.1)

Data skew in a column can make it difficult for the optimizer to accurately estimate the cardinality of an operation. Without a histogram, the optimizer will assume a uniform distribution among the distinct values of the column. Histograms describe the distribution of data in a column, which helps identify data skew and allows the optimizer to make a better decision.

In previous releases, two types of histograms were supported, "frequency" and "height-balanced" histograms. Oracle database 12c introduces two new types of histogram, a "top frequency" and "hybrid" histogram. This article presents an example of each type of histogram.

Frequency Histograms (Pre-12c)

A frequency histogram is created when the number of distinct values (NDV) for the column is less than or equal to the total number of histogram buckets, which defaults to 254.
This means each distinct value can be represented by its own bucket.

Height-Balanced Histograms (Pre-12c)

In previous releases, when there were more distinct column values than the number of histogram buckets, a height-balanced histogram was created. In this case the histogram buckets are compressed so each bucket represents a range of values, which should contain approximately the same frequency of rows.

In Oracle database 12c, height-balanced histograms are only created if sampling is explicitly used during statistics collection. If there is no explicit sampling, Oracle will perform a full table scan and build a hybrid histogram, or possibly a top frequency histogram depending on the circumstances.

Top Frequency Histograms (12c)

Top frequency histograms are a variant of the frequency histogram in which the histogram focuses only on the popular values, ignoring the less popular values as statistically insignificant. For a top frequency histogram to be useful, the number of distinct popular values must be less than or equal to the number of histogram buckets, while the non-popular values must be significantly non-popular in comparison to the popular values.

Hybrid Histograms (12c)

Hybrid histograms are like a combination of frequency and height-balanced histograms. In the majority of cases, 12c uses hybrid histograms in place of height-balanced histograms. Unlike height-balanced histograms, a single endpoint value cannot span buckets. In addition to the highest value in the bucket, the histogram stores the number of times the highest value is repeated in the bucket, giving an accurate idea of its popularity as well as an indication of the popularity of the other endpoint values in the bucket.


Cardinality – the number of rows that match a particular value.
Skewed columns are columns in which the data is not evenly distributed among the rows.
For example, suppose:
·         You have a table order_lines with 100,000,000 rows
·         The table has a column named customer_id
·         You have 1,000,000 distinct customers
·         Some (very large) customers can have hundreds of thousands or millions of order lines

When the values of the column used in the WHERE clause are uniformly distributed, the optimizer can generate a good execution plan from normal table-level statistics.
But if the column values are skewed (not uniformly distributed), histogram (column-level) statistics are required, because the optimizer needs this additional information to generate a good execution plan.

For Example,

Our test table, DEMO, has 120 rows.
The table has a column ID whose data is skewed:
ID=8 accounts for more than 40% of the rows, making the data extremely skewed.
COLUMN column_name FORMAT A20

select column_name,table_name,histogram from user_TAB_COLUMNS where table_name='DEMO';

COLUMN_NAME TABLE_NAME HISTOGRAM
-------------- ----------- ---------------
RNUM   DEMO    HEIGHT BALANCED
RNAME   DEMO   HEIGHT BALANCED
ID  DEMO   FREQUENCY
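A DEMO table like the one above could be reproduced with a sketch along these lines. The exact layout is an assumption based on the description: roughly 50 of the 120 rows share ID = 8, which makes that value dominate the column.

```sql
-- Build a 120-row table with a heavily skewed ID column.
CREATE TABLE demo AS
SELECT level            AS rnum,
       'row_' || level  AS rname,
       CASE WHEN level <= 50 THEN 8 ELSE level END AS id
FROM   dual
CONNECT BY level <= 120;

-- Gather statistics; SIZE AUTO lets Oracle decide where histograms help.
BEGIN
  DBMS_STATS.gather_table_stats(USER, 'DEMO',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/
```

With the skew on ID, a frequency histogram on that column lets the optimizer estimate the cardinality of ID=8 far more accurately than table-level statistics alone.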



=================Dynamic Sampling===============


 Handling Missing Statistics

Problem

Certain tables in your database are missing statistics because the tables have had data loaded into them outside the nightly batch window. You can’t collect statistics on the table during the day when the database is handling other
workload.

Solution

Oracle uses dynamic statistics to compensate for missing statistics. In earlier releases, the database always gathered dynamic statistics by default when it was confronted by a table with missing optimizer statistics. In Oracle Database 12c, the optimizer determines whether dynamic statistics collection is useful in a particular case, as well as the statistics level to use. In Oracle Database 12c, not only missing statistics but also insufficient statistics can trigger dynamic statistics collection by the optimizer. By default, when optimizer statistics are either missing or insufficient, the database automatically collects dynamic statistics.

The database will scan a random sample of data blocks in a table when you enable dynamic sampling. You enable/disable dynamic sampling by setting the optimizer_dynamic_sampling initialization parameter. Dynamic sampling is enabled by default, as you can see from the following:

SQL> show parameter dynamic


NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_dynamic_sampling           integer     2
SQL>
SQL>

The default level of dynamic sampling is 2; setting it to 0 disables dynamic sampling. You can modify the default value by setting a different sampling level as shown here:

SQL> alter system set optimizer_dynamic_sampling=4 scope=both;
System altered.

SQL>

How It Works

Ideally, you should gather optimizer statistics with the DBMS_STATS package (manually or through automatic statistics collection). In cases where you don’t have a chance to collect statistics for a newly created or newly loaded table, the table won’t have any statistics until the database automatically generates them through its automatic stats collection job or until you schedule a manual statistics collection job. Even if you don’t collect any statistics, the database uses some basic statistics, such as table and index block counts, estimated number of rows, join column, and GROUP BY statistics, to estimate the selectivity of the predicates in a query. Dynamic statistics go a step further, augmenting these basic statistics by dynamically gathering additional statistics at compile time. Dynamic sampling is of particular help when dealing with frequently executed queries that involve tables with no statistics.
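The sampling level can also be set per statement with a hint, overriding optimizer_dynamic_sampling for that query only. A hedged sketch (new_orders is an illustrative, freshly loaded table with no statistics):

```sql
-- Ask the optimizer to dynamically sample this table at level 4
-- for this statement only.
SELECT /*+ DYNAMIC_SAMPLING(o 4) */ o.order_id, o.status
FROM   new_orders o
WHERE  o.status = 'PENDING';
```

This is useful for ad hoc queries against just-loaded tables, without changing the instance-wide parameter.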


==============Configuring the Optimizer===============

There’s a cost to collecting dynamic statistics because the database uses resources to gather statistics during query compilation. If you don’t execute these queries many times, the database incurs an overhead each time it executes a query involving table(s) for which it must dynamically collect statistics. For dynamic sampling to really pay off, it must help queries that are executed frequently.
It’s important to understand the significance of the dynamic statistics levels, which can range from 0 to 11. The dynamic statistics level controls when Oracle collects dynamic statistics as well as the size of the sample. You can set the dynamic statistics level with the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter or with a SQL hint.
Unlike in previous releases when the database collected dynamic statistics only when tables had no statistics, in Oracle Database 12c the optimizer decides automatically whether dynamic statistics are useful and which statistics level to use. So, the main determinant of whether the database collects dynamic statistics isn’t the presence or absence of statistics; it is whether the available statistics are sufficient to generate optimal execution plans. Whenever the optimizer deems that the existing statistics are insufficient, it collects and uses dynamic statistics.

Automatic dynamic statistics are enabled when any of the following is true:

   The OPTIMIZER_DYNAMIC_SAMPLING initialization parameter is set to its default value.
   You set the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter to 11.
   You invoke the gathering of dynamic statistics through a SQL hint.

The optimizer automatically collects dynamic statistics when there are missing statistics, stale statistics, or insufficient statistics. The optimizer decides whether to use dynamic statistics based on factors such as the following:

  SQL statements use parallel execution.
  You’ve created a SQL plan directive.
  The SQL statement was captured in SQL Plan Management or Automatic Workload Repository
    or is currently in the shared SQL area.

Note that the sample size used for dynamic statistics at various sampling levels is in terms of data blocks, not rows. Briefly: level 0 disables dynamic sampling; level 2 (the default) applies dynamic sampling only to tables without statistics; levels 3 to 9 progressively widen the conditions that trigger sampling and increase the sample size; level 10 samples all blocks; and level 11 lets the optimizer itself decide when dynamic statistics are useful and how large a sample to take.


======================================================================
 
Two related 12c features are adaptive SQL plan management and adaptive query optimization.


SQL Plan Management (SPM) provides a mechanism to understand and verify the performance of new execution plans before they are used.

For SQL statements that are executed more than once, the optimizer maintains a plan history
for each statement. The optimizer recognizes a repeatable SQL statement by
maintaining a statement log. A SQL statement is recognized as repeatable when it is parsed
or executed again after it has been logged. After a SQL statement is recognized as
repeatable, the various plans generated by the optimizer are maintained as a plan history
containing relevant information (such as SQL text, outline, bind variables, and compilation
environment) that the optimizer uses to reproduce an execution plan.

A plan history contains different plans generated by the optimizer for a SQL statement over
time. However, only some of the plans in the plan history may be accepted for use. For
example, a new plan generated by the optimizer is not normally used until it has been verified
not to cause a performance regression. Plan verification is done “out of the box” as part of
Automatic SQL Tuning running as an automated task in a maintenance window.

The statement log, plan history, and plan baselines are stored in the SQL Management Base
(SMB), which also contains SQL profiles. The SMB is part of the database dictionary and is
stored in the SYSAUX tablespace. The SMB has automatic space management (for example,
periodic purging of unused plans). You can configure the SMB to change the plan retention
policy and set space size limits.
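The retention policy and space limits mentioned above are changed with DBMS_SPM.CONFIGURE. A hedged sketch; the values chosen are illustrative, not recommendations:

```sql
-- Adjust SMB retention and space budget; current settings are visible
-- in DBA_SQL_MANAGEMENT_CONFIG.
BEGIN
  DBMS_SPM.configure('plan_retention_weeks', 53);   -- keep unused plans ~1 year
  DBMS_SPM.configure('space_budget_percent', 20);   -- max % of SYSAUX for the SMB
END;
/

SELECT parameter_name, parameter_value
FROM   dba_sql_management_config;
```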

SQL Plan Management ensures that runtime performance never degrades due to the change of an execution plan: only known plans are used.
SPM consists of the following phases:
1) SQL plan baseline capture
2) SQL plan baseline selection
3) SQL plan baseline evolution

With SPM, the optimizer automatically manages execution plans and ensures that only known
or verified plans are used. When a new plan is found for a SQL statement, the plan is not
used until the database verifies that its performance is comparable to or better than the
current plans. In Oracle Database 11g, verification is a manual process performed either through
Oracle Enterprise Manager (EM) or by running DBMS_SPM.EVOLVE_SQL_PLAN_BASELINES.
In Oracle Database 12c, verification is an automatic process performed by the SPM Evolve
Advisor task running in the maintenance window.
The components of SPM are SQL plan baseline capture, SQL plan baseline selection, and
SQL plan baseline evolution. There are two ways to seed or populate a SQL management
base (SMB) with execution plans: automatic plan capture and manual plan loading.

Adaptive SQL Plan Management
----------------------------
The new evolve auto task runs in the nightly maintenance window. It ranks all non-accepted plans and runs the evolve process for them. Newly found plans are ranked highest. Poorly performing plans are not retried for 30 days; after that, they are retried only if the statement is active.

SPM Evolve Advisor is a SQL advisor that evolves plans that have recently been added to the SQL plan baseline. The advisor simplifies plan evolution by eliminating the manual chore of evolution. New functions introduced in the DBMS_SPM package work on the advisory task infrastructure.
The new functions such as DBMS_SPM.CREATE_EVOLVE_TASK,
DBMS_SPM.EXECUTE_EVOLVE_TASK, and so on allow database administrators (DBAs) to
create a task, evolve one or more plans in the task, fetch the report of the task, and implement
the results of a task. The task can be executed multiple times, and the report of a given task
can be fetched multiple times. SPM can schedule an evolve task as well as rerun an evolve
task. There is also a function to implement the results of a task, which in this case would be to
accept the currently non-accepted plans.
Using the task infrastructure allows the report of the evolution to be persistently stored in the
advisory framework repository. It also allows Enterprise Manager to schedule SQL plan
baselines evolution and fetch reports on demand. The new function supports HTML and XML
reports in addition to TEXT. In Oracle Database 12c, the new task-based functions in
DBMS_SPM retain the time_limit specification as a task parameter. It has the same default
values as Oracle Database 11g. When ACCEPT_PLANS is true (the default), SQL plan
management automatically accepts all plans recommended by the task. When set to false,
the task verifies the plans and generates a report of its findings, but does not evolve the plans.
Users can view the report by using the appropriate function, and then execute the
implement function to accept the successful plans.
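The task-based flow described above can be sketched as follows. This is a hedged example; the sql_handle is illustrative, and the exact report contents depend on the plans present:

```sql
-- Create, execute, report on, and implement an SPM evolve task.
DECLARE
  l_task   VARCHAR2(64);
  l_exec   VARCHAR2(64);
  l_report CLOB;
BEGIN
  l_task := DBMS_SPM.create_evolve_task(
              sql_handle => 'SQL_540fb108407c78ed');   -- illustrative handle
  l_exec := DBMS_SPM.execute_evolve_task(task_name => l_task);
  l_report := DBMS_SPM.report_evolve_task(task_name => l_task, type => 'TEXT');
  DBMS_OUTPUT.put_line(l_report);
  -- Accept the plans the task verified as performing well:
  DBMS_OUTPUT.put_line(DBMS_SPM.implement_evolve_task(task_name => l_task));
END;
/
```

Because the task persists in the advisor framework, REPORT_EVOLVE_TASK can be called again later to re-fetch the same report.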


Adaptive Execution Plans
A query plan changes during execution because runtime
conditions indicate that optimizer estimates are inaccurate.
All adaptive execution plans rely on statistics that are
collected during query execution.
The two adaptive plan techniques are:
– Dynamic plans
– Reoptimization

The Adaptive Execution Plans feature enables the optimizer to automatically adapt a poorly
performing execution plan at run time and prevent a poor plan from being chosen on
subsequent executions. The optimizer instruments its chosen plan so that at run time it can
detect whether the optimizer’s estimates are not optimal. The plan can then be automatically
adapted to the actual conditions. An adaptive plan is a plan that changes after optimization
when optimizer estimates prove inaccurate.
The optimizer can adapt plans based on statistics that are collected during statement
execution. All adaptive mechanisms can execute a plan that differs from the plan which was
originally determined during hard parse. This improves the ability of the query-processing
engine (compilation and execution) to generate better execution plans.
The two Adaptive Plan techniques are:

• Dynamic plans: A dynamic plan chooses among subplans during statement execution.
For dynamic plans, the optimizer must decide which subplans to include in a dynamic
plan, which statistics to collect to choose a subplan, and thresholds for this choice.

• Re-optimization: In contrast, reoptimization changes a plan for executions after the
current execution. For re-optimization, the optimizer must decide which statistics to
collect at which points in a plan and when re-optimization is feasible.
Note: OPTIMIZER_ADAPTIVE_REPORTING_ONLY controls reporting-only mode for adaptive
optimizations. When set to TRUE, adaptive optimizations run in reporting-only mode where the
information required for an adaptive optimization is gathered, but no action is taken to change
the plan.
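Reporting-only mode and its output can be examined as follows. A hedged sketch: the +REPORT format modifier of DBMS_XPLAN.DISPLAY_CURSOR surfaces what the adaptive optimization would have done:

```sql
-- Gather adaptive-optimization information without changing any plans.
ALTER SYSTEM SET optimizer_adaptive_reporting_only = TRUE SCOPE=BOTH;

-- After running the statement, read the report section of its cursor.
SELECT *
FROM   TABLE(DBMS_XPLAN.display_cursor(
               sql_id => '&sql_id', format => '+REPORT'));
```

This is useful for assessing the impact of adaptive plans before enabling them in production.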


A dynamic plan is an execution plan that has different built-in plan options. During the first
execution, before a specific subplan becomes active, the optimizer makes a final decision
about which option to use. The optimizer bases its choice on observations made during the
execution up to this point. A dynamic plan enables the final plan for a statement to differ from
the default plan, thereby potentially improving query performance.
A subplan is a portion of a plan that the optimizer can switch to as an alternative at run time.
During statement execution, the statistics collector buffers a portion of rows. Portions of the
plan preceding the statistics collector can have alternate subplans, each of which is valid for a
subset of possible values returned by the collector.
Often the set of values for a subplan is a range. If a statistic falls into the valid values of a
subplan that was not the default plan, then the optimizer chooses the alternative subplan.
After the optimizer chooses a subplan, buffering is disabled. The statistics collector stops
collecting rows, and passes them through instead. On subsequent executions of the child
cursor, the optimizer disables buffering, and chooses the same final plan.
With dynamic plans, the execution plan adapts to compensate for the optimizer’s poor initial plan choices, and
correct decisions can be made during the first execution.
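Whether a cursor used an adaptive plan, and whether its final subplan has been resolved, is exposed in V$SQL; the '+ADAPTIVE' format shows the inactive subplan lines. A hedged sketch:

```sql
-- Check whether the final (resolved) adaptive plan has been chosen,
-- and whether the cursor is marked for reoptimization.
SELECT sql_id, child_number, is_resolved_adaptive_plan, is_reoptimizable
FROM   v$sql
WHERE  sql_id = '&sql_id';

-- Display the plan including the switched-off subplan branches.
SELECT *
FROM   TABLE(DBMS_XPLAN.display_cursor('&sql_id', NULL, 'ALLSTATS LAST +ADAPTIVE'));
```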
 

In reoptimization, the optimizer adapts a plan only on subsequent statement executions.
Deferred optimization can fix any suboptimal plan chosen because of incorrect optimizer
estimates, such as a suboptimal distribution method or an incorrect choice of degree of
parallelism. At the end of the first execution, the optimizer has a more complete set of
statistics, and this information can improve plan selection in the future. A query may be
reoptimized several times, each time creating a larger and more precise set of optimizer
adjustments.
Reoptimization fixes plan problems that dynamic plans cannot solve. Whereas dynamic plans
help change relatively local parts of a plan, they are not feasible for some global
plan changes. For example, a query with an inefficient join order might cause suboptimal
performance, but it is not feasible to adapt the join order during execution. In these cases, the
optimizer automatically considers reoptimization. For reoptimization, the optimizer must
decide which statistics to collect and when. As of Oracle Database 12c, join statistics are also included. Statements are continually monitored to verify whether statistics fluctuate over different executions. Reoptimization works with adaptive cursor sharing for statements with bind variables. This feature improves the ability of the query-processing engine (compilation and execution) to generate better execution plans.



How to Create a SQL Baseline to Fix a Query's Execution Plan

Creating a SQL baseline forces a query to use a better execution plan (identified by its plan hash value).

Here we share a scenario where a query that was running fine until yesterday is now suddenly running long, so it is very important to check the plans used for that particular SQL_ID.


Manual Plan Loading

Manual plan loading can be used in conjunction with, or as an alternative to automatic plan capture. The load operations are performed using the DBMS_SPM package, 

which allows SQL plan baselines to be loaded from SQL tuning sets or from specific SQL statements in the cursor cache. Manually loaded statements are flagged as accepted by default. 

If a SQL plan baseline is present for a SQL statement, the plan is added to the baseline, otherwise a new baseline is created.

The following code uses the LOAD_PLANS_FROM_SQLSET function to load all statements in an existing SQL tuning set into SQL baselines. 

A filter can be applied to limit the SQL statements loaded if necessary.


DECLARE
  l_plans_loaded  PLS_INTEGER;
BEGIN
  l_plans_loaded := DBMS_SPM.load_plans_from_sqlset(
    sqlset_name => 'my_sqlset');
END;
/
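The same load can be restricted with the basic_filter parameter, so that only statements matching a predicate over the tuning set's columns are turned into baselines. A hedged sketch; the filter text and schema name are illustrative:

```sql
-- Load only the tuning-set statements parsed by a specific schema.
DECLARE
  l_plans_loaded  PLS_INTEGER;
BEGIN
  l_plans_loaded := DBMS_SPM.load_plans_from_sqlset(
    sqlset_name  => 'my_sqlset',
    basic_filter => q'[parsing_schema_name = 'APP_USER']');
END;
/
```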

The LOAD_PLANS_FROM_CURSOR_CACHE functions allow SQL statements to be loaded from the cursor cache. There are four overloads, allowing statements to be identified by a number of criteria, including: SQL_ID, SQL_TEXT, PARSING_SCHEMA_NAME, MODULE and ACTION. The following example identifies the SQL statement using the SQL_ID.

or

DECLARE
  l_plans_loaded  PLS_INTEGER;
BEGIN
  l_plans_loaded := DBMS_SPM.load_plans_from_cursor_cache(
    sql_id => '1fkh93md0802n');
END;
/


select sql_id,plan_hash_value from v$sql where sql_id='&sql_id';

select count(*) from dba_sql_plan_baselines;

Load plan from cursor cache


DECLARE
  l_plans_loaded  PLS_INTEGER;
BEGIN
  l_plans_loaded := DBMS_SPM.load_plans_from_cursor_cache(
    sql_id          => '1fkh93md0802n',
    plan_hash_value => 4453343434);
END;
/


select count(*) from dba_sql_plan_baselines;

SELECT sql_handle, plan_name, enabled, accepted FROM dba_sql_plan_baselines WHERE  sql_text LIKE '%spm_test_tab%' ;

SELECT sql_handle, plan_name, enabled, accepted FROM dba_sql_plan_baselines WHERE sql_handle = 'SYS_SQL_7b76323ad90440b9';

select * from table(dbms_xplan.display_sql_plan_baseline(sql_handle=>'sqlccdd555555s'));

select sql_id,plan_hash_value,exact_matching_signature,sql_plan_baseline from v$sql where sql_id='&sql_id';

select sql_handle,plan_name,accepted, enabled,optimizer_cost from dba_sql_plan_baselines where SQL_HANDLE='SQL_540fb108407c78ed';


The return value of the LOAD_PLANS_FROM_SQLSET and LOAD_PLANS_FROM_CURSOR_CACHE functions indicates the number of plans loaded by the function call.

Evolving SQL Plan Baselines

Evolving a SQL plan baseline is the process by which the optimizer determines whether non-accepted plans in the baseline should be accepted. As mentioned previously, manually loaded plans are marked as accepted by default, so manual loading effectively bypasses the evolving process. Plans captured automatically are evolved using the EVOLVE_SQL_PLAN_BASELINE function, which returns a CLOB reporting its results.


SET LONG 10000
SELECT DBMS_SPM.evolve_sql_plan_baseline(sql_handle => 'SYS_SQL_7b76323ad90440b9')
FROM   dual;


Note: my DB version is 12.2.0.1.0.


Query to check for multiple plan hash values for a SQL_ID:



select SQL_ID
     , PLAN_HASH_VALUE
     , sum(EXECUTIONS_DELTA) EXECUTIONS
     , sum(ROWS_PROCESSED_DELTA) CROWS
     , trunc(sum(CPU_TIME_DELTA)/1000000/60) CPU_MINS
     , trunc(sum(ELAPSED_TIME_DELTA)/1000000/60) ELA_MINS
from DBA_HIST_SQLSTAT
where SQL_ID in ('&sql_id') -- replace with your sql_id
group by SQL_ID, PLAN_HASH_VALUE
order by SQL_ID, CPU_MINS;

 

 

SQL_ID        PLAN_HASH_VALUE EXECUTIONS      CROWS   CPU_MINS   ELA_MINS
------------- --------------- ---------- ---------- ---------- ----------
99v5172d7sg2r      2819387955          2        167          1          1
99v5172d7sg2r      3788285656          2        184          1          1
99v5172d7sg2r      2728488078          3        218          1          1
99v5172d7sg2r      1528540721          4        332          1          1
99v5172d7sg2r      1565483063          7        699          7          8
99v5172d7sg2r      1370745820         17       1236          8          9
99v5172d7sg2r       221645537          0          0         36         51
99v5172d7sg2r      1573610442          3          0         74         75

8 rows selected.


We find the best execution plan (PLAN_HASH_VALUE) and force the query to use that plan. Below are the steps to create a SQL baseline and fix the badly performing query.


STEP 1: GENERATE ALL PREVIOUS RUN HISTORY OF THE SQL_ID FROM AWR



break off sdate
set lines 2000
set linesize 2000
col SDATE format a10
col STIME format a10
select to_char(begin_interval_time,'YYYY/MM/DD') SDATE,to_char(begin_interval_time,'HH24:MI') STIME,s.snap_id,
       sql_id, plan_hash_value PLAN,
       ROUND(elapsed_time_delta/1000000,2) ET_SECS,
       nvl(executions_delta,0) execs,
       ROUND((elapsed_time_delta/decode(executions_delta,null,1,0,1,executions_delta))/1000000,2) ET_PER_EXEC,
       ROUND((buffer_gets_delta/decode(executions_delta,null,1,0,1,executions_delta)), 2) avg_lio,
       ROUND((CPU_TIME_DELTA/decode(executions_delta,null,1,0,1,executions_delta))/1000, 2) avg_cpu_ms,
       ROUND((IOWAIT_DELTA/decode(executions_delta,null,1,0,1,executions_delta))/1000, 2) avg_iow_ms,
       ROUND((DISK_READS_DELTA/decode(executions_delta,null,1,0,1,executions_delta)), 2) avg_pio,
       ROWS_PROCESSED_DELTA num_rows
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where s.sql_id = '&sql_id'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
order by sdate,stime;



col begin_interval_time for a35
select ss.snap_id, ss.instance_number node, begin_interval_time, s.sql_id, s.plan_hash_value,
nvl(executions_delta,0) execs,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
(buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where s.sql_id = nvl('&sql_id','99v5172d7sg2r')
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0 --and plan_hash_value=3099046451
order by 1, 2, 3
/

 

Enter value for sql_id: 99v5172d7sg2r

   SNAP_ID       NODE BEGIN_INTERVAL_TIME                 SQL_ID        PLAN_HASH_VALUE      EXECS  AVG_ETIME    AVG_LIO
---------- ---------- ----------------------------------- ------------- --------------- ---------- ---------- ----------
    234043          1 31-OCT-22 06.15.10.441 AM           99v5172d7sg2r      1565483063          1  44.977487    1275694
    234724          1 07-NOV-22 08.30.15.419 AM           99v5172d7sg2r      1370745820          1  15.671934     485334
    235398          1 14-NOV-22 09.00.14.042 AM           99v5172d7sg2r      1370745820          1  63.020552     747894
    236069          1 21-NOV-22 08.45.39.541 AM           99v5172d7sg2r      2728488078          1  29.948022     918476
    236740          1 28-NOV-22 08.30.50.784 AM           99v5172d7sg2r      3788285656          1   37.30876    1127835
    236935          1 30-NOV-22 09.15.45.852 AM           99v5172d7sg2r      3788285656          1  41.008219    1206658
    237416          1 05-DEC-22 09.30.56.546 AM           99v5172d7sg2r      1370745820          1  14.640457     439093
    238086          1 12-DEC-22 09.00.56.011 AM           99v5172d7sg2r      1370745820          1  23.707833     719818
    238778          1 19-DEC-22 02.00.28.302 PM           99v5172d7sg2r      1370745820          1  42.164343    1024287
    239436          1 26-DEC-22 10.30.20.445 AM           99v5172d7sg2r      1573610442          1 840.569558   15504539
    239438          1 26-DEC-22 11.00.35.118 AM           99v5172d7sg2r      1573610442          1 978.608431   17804033
    239439          1 26-DEC-22 11.15.42.999 AM           99v5172d7sg2r      1573610442          1 1790.28067   30838064

/* In this scenario sql_id = 99v5172d7sg2r and the plan_hash_value of the good plan that we want to force is 1370745820. */

Follow the steps below to create a SQL baseline for the sql_id.

SELECT sql_handle, plan_name,enabled,accepted,fixed FROM dba_sql_plan_baselines
WHERE signature IN (SELECT exact_matching_signature FROM v$sql WHERE sql_id='&SQL_ID') order by accepted,enabled;

Enter value for sql_id: 99v5172d7sg2r
old   2: WHERE signature IN (SELECT exact_matching_signature FROM v$sql WHERE sql_id='&SQL_ID')
new   2: WHERE signature IN (SELECT exact_matching_signature FROM v$sql WHERE sql_id='99v5172d7sg2r')

no rows selected

 

STEP 2: DROP SQL TUNING SET (STS) IF EXISTS


BEGIN
  DBMS_SQLTUNE.drop_sqlset(
    sqlset_name => 'SQL_FOR_99v5172d7sg2r');
END;
/


STEP 3: CREATE SQL TUNING SET 


BEGIN
  DBMS_SQLTUNE.create_sqlset (
    sqlset_name  => 'SQL_FOR_99v5172d7sg2r',
    description  => 'SQL tuning set for 99v5172d7sg2r');
END;
/


/* Populate the STS from AWR by specifying the snapshot range for the desired plan, which we found
using the above query. In this scenario the snap_ids are 237416 and 239439; change the
plan_hash_value accordingly. */



DECLARE
  l_cursor  DBMS_SQLTUNE.sqlset_cursor;
BEGIN
  OPEN l_cursor FOR
    SELECT VALUE(p)
    FROM   TABLE (DBMS_SQLTUNE.select_workload_repository (
                    237416,  -- begin_snap
                    239439,  -- end_snap
                    q'<sql_id in ('99v5172d7sg2r') and plan_hash_value in (1370745820)>',  -- basic_filter
                    NULL,  -- object_filter
                    NULL,  -- ranking_measure1
                    NULL,  -- ranking_measure2
                    NULL,  -- ranking_measure3
                    NULL,  -- result_percentage
                    100)   -- result_limit
                  ) p;

  DBMS_SQLTUNE.load_sqlset (
    sqlset_name     => 'SQL_FOR_99v5172d7sg2r',
    populate_cursor => l_cursor);
END;
/


STEP 4: CHECK SQL SET DETAILS 


column text format a20
select sqlset_name, sqlset_owner, sqlset_id, sql_id, substr(sql_text,1,20) text, elapsed_time, buffer_gets,
parsing_schema_name, plan_hash_value, bind_data from dba_sqlset_statements where sqlset_name ='SQL_FOR_99v5172d7sg2r';


STEP 5: LOAD DESIRED PLAN FROM STS AS SQL PLAN BASELINE



DECLARE
  l_plans_loaded  PLS_INTEGER;
BEGIN
  l_plans_loaded := DBMS_SPM.load_plans_from_sqlset(
    sqlset_name => 'SQL_FOR_99v5172d7sg2r');
END;
/



STEP 6: CHECK SQL PLAN BASELINE DETAILS 



SELECT sql_handle, plan_name, enabled, accepted, fixed FROM dba_sql_plan_baselines
WHERE signature IN (SELECT exact_matching_signature FROM v$sql WHERE sql_id='&SQL_ID')
order by accepted, enabled;


STEP 7: ENABLE FIXED=YES

var pbsts varchar2(30);
exec :pbsts := dbms_spm.alter_sql_plan_baseline('SQL_64e76e773c55f1c4','SQL_PLAN_99v5172d7sg2r0a6275e','FIXED','YES');



STEP 8: PURGE OLD EXECUTION PLAN FROM SHARED POOL 


Find below two parameter which are required to purge specific sql from the shared pool.

select address||','||hash_value from gv$sqlarea where sql_id ='99v5172d7sg2r ';

ADDRESS||','||HASH_VALUE

---------------------------------------------------------

C00000016FF92618,2591833175

Now use below command to purge sql from shared pool.


exec sys.dbms_shared_pool.purge('C00000016FF92618,2591833175','C',1);



Re-run query or program to test





Summary


Sometimes DBAs or developers might wonder why an execution plan suddenly changed when nothing has changed in the database and only the data has grown.
Here I have shown a very simple example of why not refreshing statistics caused the CBO to suddenly change an execution plan for no apparent reason.

Performance issue check

1) Check wait events at the database level
 select count(1), event from v$session group by event;

2) session and object level lock
     
     SELECT DECODE(request,0,'Holder: ','Waiter: ') ||
sid sess, id1, id2, lmode, request, type
FROM V$LOCK
WHERE (id1, id2, type) IN (SELECT id1, id2, type FROM V$LOCK WHERE request > 0)
ORDER BY id1, request;


select a.sid, a.serial#,OBJECT_NAME
from v$session a, v$locked_object b, dba_objects c 
where b.object_id = c.object_id 
and a.sid = b.session_id;




3)  check session detail ,sql text And explain plan


select sid,serial#,sql_id,machine,program,to_char(logon_time,'dd-Mon-rrrr hh24:mi:ss') as login_time from v$session where sid=&sid;

select sid,serial#,osuser,username,program,status,SQl_id from v$session where sid=&sid;

select SQL_TEXT from v$sqlarea where SQL_ID='99v5172d7sg2r';

select sql_id,rows_processed,upper(sql_fulltext) from v$sqlarea where sql_id='&sql_id';

select * from table(dbms_xplan.display_cursor('&sql_id',null,'ADVANCED ALLSTATS LAST'));

4) check plan change

select ss.snap_id, ss.instance_number node, begin_interval_time, s.sql_id, s.plan_hash_value,
 nvl(executions_delta,0) execs,
 (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
 (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio, version_count
 from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
 where s.sql_id = nvl('&sql_id','99v5172d7sg2r')
 and ss.snap_id = S.snap_id
 and ss.instance_number = S.instance_number
 and executions_delta > 0 --and plan_hash_value=3099046451
 order by 1, 2, 3
/

We can also check if the SQL execution plan has changed by using the Automatic Workload Repository (AWR):

select sql_id, sql_text from gv$sql where lower(sql_fulltext) like '%some unique string%';

SELECT SQL_ID,PLAN_HASH_VALUE,SQL_TEXT FROM V$SQL WHERE SQL_ID='&SQL_ID';


If the plan has changed, you can view both plans with this, and then load the good plan:

select * from table(dbms_xplan.display_awr(sql_id => '&SQL_ID'));

select dbms_sqltune.report_sql_monitor(sql_id => '&SQL_ID', type => 'text') from dual;



5) Check table statistics

select a.owner, a.table_name, a.num_rows, b.bytes/1048576 MB, a.last_analyzed from dba_tables a,
 dba_segments b  where  a.owner=b.owner and a.table_name=b.segment_name and a.table_name='&table_name' 
order by b.bytes desc;
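If LAST_ANALYZED is old relative to the data growth, refreshing the statistics is often the first corrective step. A hedged sketch; the substitution variables are placeholders:

```sql
-- Re-gather statistics for the table and its indexes.
BEGIN
  DBMS_STATS.gather_table_stats(
    ownname          => '&owner',
    tabname          => '&table_name',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);   -- also refresh index statistics
END;
/
```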

6) Run the SQL Tuning Advisor

 @?/rdbms/admin/sqltrpt.sql

select count(*) from dba_sql_plan_baselines;

select sql_id,plan_hash_value,exact_matching_signature,sql_plan_baseline from v$sql where sql_id='&sql_id';

 select sql_handle,plan_name,accepted, enabled,optimizer_cost from dba_sql_plan_baselines where SQL_HANDLE='SQL_540fb108407c78ed';


select sql_id,plan_hash_value,to_char(exact_matching_signature),sql_plan_baseline from v$sql where
to_char(sql_text) like 'select * %';

SELECT SQL_HANDLE, PLAN_NAME, ORIGIN, ENABLED, ACCEPTED, FIXED, MODULE FROM   DBA_SQL_PLAN_BASELINES;

select * from table(dbms_xplan.display_awr(sql_id => '&SQL_ID'));
select * from table(dbms_xplan.display_cursor('&SQL_ID'));


7) Check for long-running SQL on the database

 select sid, serial#, context, sofar, totalwork,round(sofar/totalwork*100,2) "%_complete" from v$session_longops where sofar <> totalwork;