Performance Tuning Concepts
Learn Performance Tuning with Real-Time Scenarios in 30 Days, by an Industry Expert
This document provides guidance on identifying top SQL, advising on Oracle memory, and dealing with slow database performance.
Improving performance means:
A) Reducing response time
B) Increasing throughput
How to start tuning?
A) How frequently has the same issue occurred so far?
B) Is the issue reported at the same time every time?
C) Is the issue only from a specific user?
D) Is the issue only from a specific location?
E) Is the issue only from a specific machine?
F) Is the issue only from a specific application?
G) Have any changes happened in the applications, databases, queries, OS, hardware, or network?
H) Does the issue occur in business hours or non-business hours?
I) Are any backup jobs running at the time of the problem?
J) Have the latest statistics been collected?
K) Is there any increase in the number of users/calls/transactions?
Tuning Tools
1) Automated Maintenance Tasks
2) ADDM Report / Regular ADDM Report (pre-12c)
3) ADDM Compare Report (new in 12c)
4) Real-Time ADDM Report (new in 12c)
5) Emergency Monitoring (enhancement of "Memory Access Mode")
6) ASH Report and AWR Report
Automated Maintenance Tasks

Oracle 11g includes three automated database maintenance tasks:
- Automatic Optimizer Statistics Collection - gathers stale or missing statistics for all schema objects. The task name is 'auto optimizer stats collection'.
- Automatic Segment Advisor - identifies segments that could be reorganized to save space. The task name is 'auto space advisor'.
- Automatic SQL Tuning Advisor - identifies and attempts to tune high-load SQL. The task name is 'sql tuning advisor'.

These tasks run during maintenance windows scheduled to open overnight.
set lines 180 pages 1000
col client_name for a40
col attributes for a60
select client_name, status, attributes, service_name from dba_autotask_client
/

CLIENT_NAME                              STATUS   ATTRIBUTES                                             SERVICE_NAME
---------------------------------------- -------- ------------------------------------------------------
auto optimizer stats collection          DISABLED ON BY DEFAULT, VOLATILE, SAFE TO KILL
auto space advisor                       ENABLED  ON BY DEFAULT, VOLATILE, SAFE TO KILL
sql tuning advisor                       ENABLED  ONCE PER WINDOW, ON BY DEFAULT, VOLATILE, SAFE TO KILL
SELECT count(*) FROM dba_autotask_client_history
WHERE client_name = 'auto optimizer stats collection'
AND window_start_time > (SYSDATE-8);

SELECT client_name, jobs_started, jobs_completed, window_end_time
FROM dba_autotask_client_history
WHERE client_name = 'auto optimizer stats collection'
AND window_start_time > (SYSDATE-8);
CLIENT_NAME                              JOBS_STARTED JOBS_COMPLETED WINDOW_END_TIME
---------------------------------------- ------------ -------------- -----------------------------------
auto optimizer stats collection                     1              1 29-OCT-16 03.00.00.015508 AM +01:00
auto optimizer stats collection                     1              1 01-NOV-16 02.00.00.020355 AM +00:00
auto optimizer stats collection                     5              5 30-OCT-16 02.00.00.009613 AM +00:00
auto optimizer stats collection                     5              5 31-OCT-16 02.00.00.006245 AM +00:00
auto optimizer stats collection                     1              1 28-OCT-16 03.00.00.011625 AM +01:00
auto optimizer stats collection                     1              1 26-OCT-16 03.00.00.026329 AM +01:00
auto optimizer stats collection                     1              1 02-NOV-16 02.00.00.005620 AM +00:00
auto optimizer stats collection                     1              1 27-OCT-16 03.00.00.031229 AM +01:00
To collect complete info:

ALTER SESSION SET NLS_TIMESTAMP_TZ_FORMAT ='DD/MM/YYYY HH24:MI:SS TZR TZD';
ALTER SESSION SET NLS_DATE_FORMAT ='DD-MM-YYYY HH24:MI:SS';
set pagesize 9999
spool c:\dba_autotask_client.html
set markup html on
select * from DBA_AUTOTASK_CLIENT;
select * from DBA_AUTOTASK_CLIENT_HISTORY;
select * from DBA_AUTOTASK_CLIENT_JOB;
select * from DBA_AUTOTASK_JOB_HISTORY order by JOB_START_TIME;
select * from DBA_AUTOTASK_OPERATION;
select * from DBA_AUTOTASK_SCHEDULE order by START_TIME;
select * from DBA_AUTOTASK_TASK;
select * from DBA_AUTOTASK_WINDOW_CLIENTS;
select * from DBA_AUTOTASK_WINDOW_HISTORY order by WINDOW_START_TIME;
select * from dba_scheduler_windows;
select * from dba_scheduler_window_groups;
select * from dba_scheduler_job_run_details order by ACTUAL_START_DATE;
select * from DBA_SCHEDULER_JOB_LOG;
SELECT program_name, program_action, enabled FROM dba_scheduler_programs;
spool off
ADDM Report / Regular ADDM Report (pre-12c)
- auto-generated report (system generated) or user generated (manually created ADDM)
- depends on snapshots (AWR-based report)
- findings fall into two categories:
  - for SQL queries
  - for waits at the database level (e.g., memory structures)
- scripts: addmrpt.sql, addmrpti.sql
- dbms_advisor.get_task_report() from the SQL prompt
- ADDM information is controlled by statistics_level
- statistics_level=basic => no auto ADDM report
- ADDM report I/O analysis partially depends on the DBIO_EXPECTED parameter
ADDM Compare Report (New in 12c)
- based on AWR snapshots
- identifies the major changes with performance impact
- runs analysis for a base period and a compare period
  - snap1, snap2 => good time
  - snap3, snap4 => bad time
- dbms_addm is a new PL/SQL package
- compare report generation is possible
Real-Time ADDM Report (New in 12c)
- based on ASH data (ASH data for the last 10 minutes, available in the SGA)
- able to connect as sysdba
- useful when PROD performance is highly sick/slow and generating an ADDM report is an emergency
- use it when the "Emergency Monitoring" report did not find any blockings and is not in a position to give the root cause
Emergency Monitoring (Enhancement of "Memory Access Mode")
- useful when PROD is highly sick and you are not able to log in
- available in 11g R2 under the name "Memory Access Mode"
- can be enabled/disabled; if enabled, collectors are working
- these collectors collect information directly from the SGA
- from 12c it is called "Emergency Monitoring"; information is collected directly from the SGA by the Agent
- this information is sent to a single OEM page, which shows hang analysis information, blockings, a kill option, shutdown options, etc.
ASH Report and AWR Report

ASH Report (Oracle Active Session History)
Generate an ASH report when analyzing sessions that run too frequently, or are too short-lived, to be visible in AWR snapshots; the AWR snapshots are not taken often enough to capture the information you need. ASH is sampled every second from V$SESSION, so it can show real-time or near real-time session information to assist in performance analysis of your database.
Important Summary Sections in AWR Report
(or) How to Read AWR Report?
Single Instance Report
RAC Database AWR Compare Report
If DB Time is greater during the bad period, the database is likely causing the problem; you have verified that the problem is in the database.
- Release: Depending on the problem, the database version may be important. If the version is old or is not the latest patchset release then the most up-to-date fixes may not be applied, which has the potential to open the database up to issues.
- RAC: If the database is running in a Real Application Clusters configuration then you may need to look at information from the other instances to get a full picture of the database performance.
- Platform: There may be platform-specific issues that have a bearing on the system.
- CPUs/Cores: In a multi-processor environment, the "wall clock" time is not necessarily a good indicator of how much work the database can do, since multiple operations can be pursued simultaneously. You can use cores as an indication of how much CPU work can likely be done at once.
- Snap Time: The Snap Time shows the times of the starting and ending snapshots for the report period. Does this cover the time of the problem being encountered?
- Elapsed time: The elapsed time indicates the duration of the report between the 2 selected snapshots.
- DB time: The DB Time is the time spent in the database for the period of the report. If this is significantly higher than the elapsed time then this is a good indicator of a heavily loaded system. Remember that on a multi-processor system you might expect the DB Time to exceed the elapsed time. Additionally, DB time includes the time waiting for the CPU to become available, so this number can be higher than Elapsed time x Cores. In the example above, the numbers say that the database worked for 2193 minutes in 15 minutes of elapsed time. Whether that is an indication of a problem depends on the capacity and concurrency capabilities of the system. Looking at the numbers, 2193:15 is a ratio of about 146:1, so, in this case, if they had significantly fewer than 146 CPUs it is likely that there are some overloading issues. Remember that user perception is also a significant factor in whether there is a "performance issue" - if the system delivers what the users want then there might not be a problem!
- Sessions: You can use the sessions information along with the DB time to give an average amount of DB time per session. Are there a large number or a small number of connections?
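The DB time sanity check described above can be scripted. This is a sketch only: the 2193-minute / 15-minute figures come from the example report, and the 32-core count is a made-up placeholder for your host.

```python
# Rough load check from AWR header figures (a sketch, not an Oracle API).
def db_time_load_ratio(db_time_min, elapsed_min):
    """Average active sessions implied by DB Time over the elapsed period."""
    return db_time_min / elapsed_min

ratio = db_time_load_ratio(2193, 15)   # ~146 average active sessions
cores = 32                             # hypothetical core count for this host
overloaded = ratio > cores             # far more demand than the CPUs can serve
```

If the ratio is well above the core count, suspect CPU overloading; if it is well below, the "slowness" is more likely a few sessions waiting, not system-wide load.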
Important Summary Sections in AWR Report
1) Load Profile => load on the instance / instance activities
2) Instance Efficiency (Target 100%) => overall health of the database
3) Top 5/10 Events => wait events with high performance impact
4) Time Model Statistics => to find the root cause of query issues

Based upon the relative information in the above sections, go to the drill-down sections.

Load Profile:
-------------
Parses
Hard Parses
Physical Reads
Physical Writes
Single Instance Report
Where:
- DB CPU(s): The DB CPU(s) figure shows the amount of CPU being used by the database. You can use this alongside the actual number of cores to get an idea of how much of the total available machine CPU is actually being used by this instance.
- DB Time(s): Here the "Per Second" figure gives you another version of the total DB time used, in this case expressed per second as opposed to over the full elapsed period.
- Redo size: The amount of redo generated during this report.
- Logical Reads: Calculated as Consistent Gets + DB Block Gets = Logical Reads.
- Block changes: The number of blocks modified during the sample interval.
- Physical Reads: The number of requests for a block that caused a physical I/O.
- Physical Writes: The number of physical writes issued.
- User Calls: The number of calls made by user processes.
- Parses: The total of all parses, both hard and soft.
- Hard Parses: Those parses requiring a completely new parse of the SQL statement. These consume both latches and shared pool area.
- Soft Parses: Not listed, but derived by subtracting hard parses from parses. A soft parse reuses a previous hard parse and hence consumes far fewer resources.
- Sorts, Logons, Executes, and Transactions are all self-explanatory.
RAC Database Compare Report
Instance Efficiency Target (100%):
----------------------------------
Buffer Hit %
Library Hit %
Soft Parse %
Execute to Parse %
Parse CPU to Parse Elapsed %
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.91 Redo NoWait %: 100.00
Buffer Hit %: 98.14 In-memory Sort %: 99.98
Library Hit %: 99.91 Soft Parse %: 94.48
Execute to Parse %: 99.45 Latch Hit %: 99.97
Parse CPU to Parse Elapsd %: 71.23 % Non-Parse CPU: 99.00
The most important statistic presented here, from the point of view of our example, is '% Non-Parse CPU', because it indicates that almost all the CPU time we see in the Top Waits section is attributable to execution and not to parsing, which means that tuning SQL may help to improve this. If we were tuning, then a 94.48% soft parse rate would show a small proportion of hard parsing, which is desirable. The high Execute to Parse % indicates good usage of cursors. Generally, we want the statistics here close to 100%, but remember that a few percent may not be relevant depending on the application. For example, in a data warehouse environment, hard parsing may be higher due to the usage of materialized views and/or histograms. So again, comparing against a baseline report from when performance was good is important.
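Note that Soft Parse % is derived rather than measured directly, as described earlier in the Load Profile notes. A sketch of the arithmetic, with hypothetical parse counts chosen to reproduce the 94.48% seen in the sample report:

```python
# Soft parses are not reported directly: soft = total parses - hard parses.
# The 10000/552 counts below are hypothetical illustrations.
def soft_parse_pct(total_parses, hard_parses):
    """Percentage of parses that reused an existing cursor."""
    return (total_parses - hard_parses) / total_parses * 100

pct = soft_parse_pct(10000, 552)   # ~94.48, matching the sample report
```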
Top 5/10 Events:
----------------
Buffer Busy Waits
Free Buffer Waits
Latch: Cache Buffers Chains
DB File Scattered Read
DB File Sequential Read
DB File Parallel Read

(in case of shared pool or library cache size issues)
Latch: shared pool
Latch: row cache objects
Time Model Statistics:
----------------------
Parse Time Elapsed
Hard Parse Elapsed Time

NOTE: shared_pool_reserved_size > 50% of shared_pool_size signals a memory leak error => ORA-04031
Free Buffer Waits:

Problem 1: the buffer cache is too small.
Possible solutions:
1) use AMM (memory_target) or ASMM (sga_target)
2) increase the buffer cache size as per:
   a) ADDM recommendations
   b) AWR Buffer Cache Advisory results
   c) V$DB_CACHE_ADVICE

Problem 2: DBWR is slow.
Possible solutions:
1) increase db_writer_processes => increases I/O bandwidth (check with the system administrator for a suitable value)
2) increase I/O bandwidth by striping the datafiles across multiple disks (storage level)
Buffer Busy Waits:

Find hot blocks / hot segments from the segment statistics ("Segments by Buffer Busy Waits").
Solutions:
1) manually keep hot objects in the KEEP buffer pool
2) check the Segment Advisor results
3) check the ADDM recommendations
4) check the tablespace management (whether it is dictionary managed or locally managed)
Latch: cache buffers chains:
----------------------------
Problem 1: the cache buffers chains latches are taken when searching for, adding, or deleting a buffer on a buffer cache chain.
Reason: multiple users are trying to read codes / descriptions from lookup tables.
Solutions:
a) identify the hot block
b) modify the application to use PL/SQL to read the lookup table once and store the codes / descriptions in local variables, which can then be accessed many times

Problem 2: simultaneous update/select operations on the same block.
Solution: modify the application to commit frequently and reasonably => CR clones need not be created.
Read Waits:
-----------
DB File Scattered Read => waits for multiple blocks to be read from disk during a full table scan.
Possible solution: try to cache frequently used small tables.

DB File Sequential Read => waits for single-block reads from disk during an index full scan.
Possible solution: check the indexes on the table to ensure that the right index is being used; check the column order of the composite index against the WHERE clause.

Storage disks:
- hard disks
- flash disks (available by default in an Exadata server)
Tuning PGA:
-----------
What are the other memory structures in the PGA?
- Hash area (hash join operations for a SELECT)
- Sort area (sort processing in case of ORDER BY, GROUP BY, DISTINCT, etc.)
- controlled by the workarea_size_policy parameter:
  - AUTO (default) => HASH_AREA and SORT_AREA are set automatically, by setting PGA_AGGREGATE_TARGET
  - MANUAL

What is a workarea in the PGA? A memory area for processing.

How to set the optimal size for the PGA? Based on information about optimal-pass, one-pass, and multi-pass operations.
Examples:
1) Assume the workarea size is 50 MB and a SELECT requires 25 MB for processing:
   25 MB < 50 MB => optimal pass (very good)
2) Assume the workarea size is 50 MB and a SELECT requires 75 MB:
   50 MB + 25 MB = 75 MB => one pass (OK)
3) Assume the workarea size is 50 MB and a SELECT requires 125 MB:
   50 MB + 50 MB + 25 MB => more than one pass => multi-pass (very bad)
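The three cases above can be summarized as a small classifier. This is a sketch of the rule of thumb implied by the examples, not an Oracle interface (real workarea execution counts are exposed in views such as V$SQL_WORKAREA_HISTOGRAM):

```python
def workarea_pass(workarea_mb, required_mb):
    """Classify a PGA workarea operation per the examples above."""
    if required_mb <= workarea_mb:
        return "optimal"      # fits entirely in memory: very good
    elif required_mb <= 2 * workarea_mb:
        return "one-pass"     # one extra pass over spilled data: OK
    else:
        return "multi-pass"   # several passes over spilled data: very bad
```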
sga_target <= sga_max_size
memory_target <= memory_max_target
sga_target sizes the SGA only (e.g., sga_target = 500 MB), with the PGA sized separately via pga_aggregate_target
memory_target => SGA + PGA
DB Time is computed by adding the CPU time and wait time of all sessions (excluding waits on idle events).

An AWR report shows the total DB Time for the instance (in the "Time Model System Stats" section) during the period covered by the AWR snapshots. If the time model statistic DB CPU consumes most of the database time for the instance, it shows the database was actively processing most of the time. DB Time represents activity on your database: if DB Time exceeds the elapsed time, it denotes a busy database, and if it is a lot higher than the elapsed time, it may mean that some sessions are waiting for resources.
SQL script for an AWR report on a RAC database:
SQL> @$ORACLE_HOME/rdbms/admin/awrgrpt.sql

SQL script for an AWR report on a single instance:
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql

SQL script for an ASH report on a RAC database:
SQL> @$ORACLE_HOME/rdbms/admin/ashrpti.sql

SQL script for an ASH report on a single instance:
SQL> @$ORACLE_HOME/rdbms/admin/ashrpt.sql

Comparing Database Performance Between Two Periods
The AWR Compare Periods report compares performance between two different time periods, with each time period involving a different pair of snapshots.
SQL> @$ORACLE_HOME/rdbms/admin/awrddrpt.sql
Query Execution Phases (Parse, Bind, Execute and Fetch)

Parse phase - two checks:
- syntax check
  Ex: select * fromm emp;  => "from" keyword NOT found
      select * emp;
- semantic check
  Ex: select * from emp; (assume there is NO table with the name "emp") => "table or view does not exist"
      update scott.emp set sal=sal+1000; (connected to the "hr" schema) => "insufficient privileges"
      select name from emp; (assume "name" is NOT a column of "emp") => "invalid identifier 'name'"

- two types of parses:
  - soft parse
  - hard parse
-- Hard Parse
If a new SQL statement is issued which does not exist in the shared pool then it has to be parsed fully. E.g., Oracle has to allocate memory for the statement from the shared pool, check the statement syntactically and semantically, etc. This is referred to as a hard parse and is very expensive in terms of both the CPU used and the number of latch gets performed.

-- Soft Parse
If a session issues a SQL statement which is already in the shared pool AND it can use an existing version of that statement, then this is known as a 'soft parse'. As far as the application is concerned, it has asked to parse the statement.

If two statements are textually identical but cannot be shared then these are called 'versions' of the same statement. If Oracle matches to a statement with many versions, it has to check each version in turn to see if it is truly identical to the statement currently being parsed. Hence high version counts are best avoided.
Shared pool:
- library cache
  - shared SQL area - holds SQL cursor(s)
- data dictionary cache
- result cache

What is a SQL CURSOR?
- a workarea / memory area for the workload of a particular SQL statement
- holds the information required to run that SQL statement: statistics, selectivity, cardinality, explain plan, access path, optimizer join method, etc.
- when you issue a particular SQL statement for the first time, a cursor is created in the shared SQL area
Examples:
1st time: SELECT * FROM emp; => stored in library cache (hard parse)
2nd time: SELECT * FROM emp; => soft parse
3rd time: select * from emp; => hard parse (case differs)
4th time: SELECT * FROM  emp; => hard parse (spacing differs)
5th time: SELECT * FROM emp WHERE deptno=10; => hard parse
CURSOR_SHARING=EXACT (default) => character-by-character, space-by-space checking

select * from emp where deptno=10; => hard parse
select * from emp where deptno=20; => hard parse
select * from emp where deptno=30; => hard parse
select * from emp where deptno=40; => hard parse
How to avoid hard parses?
a) follow coding standards
b) use bind variables in place of literals
   ex: select * from emp where deptno = :dno;
   (VAR command in SQL*Plus)

CURSOR_SHARING values:
- EXACT (default) - good for OLTP (>90% DML operations)
- SIMILAR (deprecated from 12c)
- FORCE - good for OLAP (>90% SELECT operations)
select * from emp where deptno=10; => hard parse
select * from emp where deptno=20; => soft parse
select * from emp where deptno=30; => soft parse
select * from emp where deptno=40; => soft parse

(literals are replaced with system-generated bind variables)
select * from emp where deptno = :SYS_B_0;
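The effect CURSOR_SHARING=FORCE has on statement text can be sketched as a literal-replacement step. This is a simplification, not Oracle's actual parser (which handles quoting, hints, etc. far more carefully); it only shows why deptno=10 and deptno=20 collapse into one shareable cursor:

```python
import itertools
import re

def normalize(sql):
    """Replace string and numeric literals with :SYS_B_n style binds
    (a rough imitation of CURSOR_SHARING=FORCE, for illustration only)."""
    counter = itertools.count()
    return re.sub(r"'[^']*'|\b\d+\b",
                  lambda m: f":SYS_B_{next(counter)}", sql)

a = normalize("select * from emp where deptno=10")
b = normalize("select * from emp where deptno=20")
# a == b: both statements normalize to the same text, so the second
# execution can reuse the first statement's cursor (a soft parse).
```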
Bind phase - optional - used when the query contains bind variables.

Every SQL query is identified by a SQL_ID, and every execution plan of a query is identified by a plan hash value. Can the same SQL_ID have different plan hash values? Ans: YES.

select * from emp where deptno = :dno;
:dno=10 => sqlid1, plan1
:dno=20 => sqlid1, plan2
:dno=30 => sqlid1, plan3

The above is possible from 11g due to Adaptive Cursor Sharing.
Statistics are an important input for the optimizer to run a query; from 11g, selectivity is also important to the optimizer.

Assume emp has 100 rows: dept 10 has 15 rows, dept 20 has 65 rows, dept 30 has 20 rows.

selectivity (dept 10) = 15/100 = 0.15
selectivity (dept 20) = 65/100 = 0.65
selectivity (dept 30) = 20/100 = 0.20

select * from emp where deptno = :dno;
:dno=10 => selectivity 0.15, plan1
:dno=20 => selectivity 0.65, plan2
:dno=30 => selectivity 0.20, plan1
Bind Peeking
With bind peeking (the older, pre-Adaptive Cursor Sharing behaviour), the optimizer peeks at the first bind value, builds a plan for it, and reuses that plan for every subsequent value:

select * from emp where deptno = :dno;
:dno=10 => selectivity 0.15, plan1
:dno=20 => selectivity 0.65, plan1
:dno=30 => selectivity 0.20, plan1
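The selectivity arithmetic above can be written out as a small sketch (the row counts are the assumed example values, not from any real table):

```python
# Assumed example data: emp has 100 rows split across three departments.
rows_per_dept = {10: 15, 20: 65, 30: 20}
total_rows = 100

def selectivity(deptno):
    """Fraction of rows a predicate deptno = :dno would return."""
    return rows_per_dept[deptno] / total_rows

# Low selectivity (0.15) may favour an index plan; high selectivity
# (0.65) may favour a full scan - hence different plans per bind value.
sel_10 = selectivity(10)   # 0.15
sel_20 = selectivity(20)   # 0.65
sel_30 = selectivity(30)   # 0.20
```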
Case Study
1) Check server load
2) Generate and check the ADDM report, implement findings, re-test
3) Check gathered statistics on the database or on particular/relevant tables
4) AWR report covering the problem period
5) AWR report covering a good period of similar load and duration, for comparison
6) AWR Compare Periods report comparing the good and bad periods
7) Collect an ASH report for the same period
8) Collect OSWatcher data
9) Collect the alert log and traces covering the duration of the problem
10) Check the alert log for the period when the issue occurred
11) Find any trace files referenced in the problem period
Database Performance Issue Check

1) Check server load:
topas - top consuming processes and load on the server
nmon (press m) - memory and paging utilization
vmstat 5 5
lsps -s - paging space usage
2) Map OS processes to database sessions:

select s.sql_id, s.sql_hash_value, s.sid, s.serial#, s.username, s.event,
       to_char(s.logon_time,'dd-mon:hh24:mi')
from v$session s, v$process p
where p.addr = s.paddr and p.spid in (1105, 4452);
3) Take the explain plan and check the statistics of tables and indexes:

select * from table(dbms_xplan.display_cursor('&sql_id', null, 'ADVANCED ALLSTATS LAST'));

select a.owner, a.table_name, a.num_rows, b.bytes/1048576 MB, a.last_analyzed
from dba_tables a, dba_segments b
where a.owner = b.owner
and a.table_name = b.segment_name
and a.table_name = '&table_name'
order by b.bytes desc;
4) Check for any long-running SQL on the database:

select sid, serial#, context, sofar, totalwork,
       round(sofar/totalwork*100, 2) "%_complete"
from v$session_longops
where sofar <> totalwork;
5) Check session-level and object-level locks on the database:

Object-level locking
--------------------
select a.sid, a.serial#, c.object_name
from v$session a, v$locked_object b, dba_objects c
where b.object_id = c.object_id
and a.sid = b.session_id;

Blocking sessions
-----------------
SELECT DECODE(request, 0, 'Holder: ', 'Waiter: ') || sid sess,
       id1, id2, lmode, request, type
FROM v$lock
WHERE (id1, id2, type) IN (SELECT id1, id2, type FROM v$lock WHERE request > 0)
ORDER BY id1, request;
6) Check module-wise usage and problematic events on the database:

select count(1), status, module from v$session
group by status, module order by module desc;

select count(1), status, module, sql_id from v$session
group by status, module, sql_id order by module desc;

select count(1), event from v$session_wait group by event;
7) Generate AWR, ASH and ADDM reports for further analysis:

@?/rdbms/admin/awrrpt.sql
@?/rdbms/admin/ashrpt.sql
@?/rdbms/admin/addmrpt.sql
@?/rdbms/admin/awrddrpt.sql  -- compare how the database performed during two different periods
JOB slowness check

Check job details:

server:dummy10205:/home/oracle $ ps -ef | grep uged7551
tprd001  40698072 28967422   0 00:16:54      - 0:00 /bin/ksh -x /prod/geprod/jobs_exec/uged7551.sh
oracle   26739026  4325714   0 02:33:01  pts/2 0:00 grep uged7551

server:dummy10205:/home/oracle $ ps -ef | grep uged7554
oracle    3342770  4325714   1 02:35:01  pts/2 0:00 grep uged7554
tprd001  21496252  6095180   0 00:16:54      - 0:00 /bin/ksh -x /prod/geprod/jobs_exec/uged7554.sh

server:dummy10205:/home/oracle $ proctree 21496252
 6095180   p_ctmag
   21496252   /bin/ksh -x /prod/geprod/jobs_exec/uged7554.sh
     58982596   /bin/ksh /prod/geprod/scripts/rungensql.sh ge5790e02 4 10
       14877100   sqlplus @/prod/geprod/sql/ge5790e02.sql 4 10
         60162290   oracleintclprd (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
server:dummy10205:/home/oracle $
or
Top users, programs, modules, machines, sql_ids:

SELECT * FROM (
  SELECT count(*) AS count, username, program, module, machine, sql_id
  FROM sys.dba_hist_active_sess_history, dba_users
  WHERE dba_hist_active_sess_history.user_id = dba_users.user_id
  AND session_type = 'FOREGROUND'
  AND snap_id BETWEEN 96067 AND 96073
  GROUP BY username, program, module, machine, sql_id
  ORDER BY count(*) DESC)
WHERE rownum <= 20;
or
SELECT * FROM (
  SELECT count(*) AS count, username, program, module, machine, sql_id
  FROM sys.dba_hist_active_sess_history, dba_users
  WHERE dba_hist_active_sess_history.user_id = dba_users.user_id
  AND session_type = 'FOREGROUND'
  AND sample_time > sysdate - 60/1440
  AND sample_time < sysdate
  --AND event = 'library cache: mutex X'
  GROUP BY username, program, module, machine, sql_id
  ORDER BY count(*) DESC)
WHERE rownum <= 20;
-- Top temp segment consumers above a given threshold
SELECT * FROM (
  SELECT count(*) AS count, username, program, module, machine, sql_id,
         sum(temp_space_allocated)
  FROM sys.dba_hist_active_sess_history, dba_users
  WHERE dba_hist_active_sess_history.user_id = dba_users.user_id
  AND sample_time > sysdate - 60/1440
  AND sample_time < sysdate
  AND temp_space_allocated > 100*1024*1024
  GROUP BY username, program, module, machine, sql_id
  ORDER BY count(*) DESC)
WHERE rownum <= 20
/
SET LINESIZE 500
SET PAGESIZE 1000
SET VERIFY OFF

SELECT b.username, b.sid, b.serial#, b.status, b.sql_id,
       to_char(b.logon_time,'dd-mon:hh24:mi'), b.machine, b.module
FROM v$sqltext a, v$session b
WHERE a.address = b.sql_address
AND a.hash_value = b.sql_hash_value
AND b.sql_id = 'dt7cud8uwzstp';
or

select sid, serial#, username, program, machine, module
from v$session
where sql_id = 'dt7cud8uwzstp';

select count(1), event from v$session group by event;

select s.sql_id, s.sid, s.serial#, s.username, s.event,
       to_char(s.logon_time,'dd-mon:hh24:mi')
from v$session s, v$process p
where p.addr = s.paddr
and p.spid in (4260298, 41418786, 5570940, 3146020, 1507632, 4587808, 4915642, 3080460, 2425340);
Learn Performance Tuning with
Real-Time in 30 Day by Industry Expert
This document provides guidance for
identify Top SQL,Advise for Oracle Memory and Deal with slow Performance of
Database
Single Instance Report
RAC Database AWR Compare Report
If DBTime is greater during bad period, it is likely the database is causing the problem; you
have verified the problem is in the
database
- ReleaseDepending on the problem, the database version may be important. If the version is old or is not the latest patchset release then the most up to date fixes may not be applied which has the potential to open the database up to issues.
- RACIf the database is running in a Real Application Cluster configuration then you may need to look at information from the other instances to get a full picture of the database performance
- PlatformThere may be platform specific issues that have a bearing on the system
- CPUs/CoresIn a multi-processor environment, the "wall clock" time is not necessarily a good indicator of how much work the database can do since multiple operations can be pursued simultaneously. You can use cores for an indication of how much CPU work can likely be done at once.
- Snap TimeThe Snap time shows the times for the starting and ending snapshots for the report period. Does this cover the time of problem that is being encountered?
- Elapsed timeThe elapsed time indicates the duration of the report between the 2 selected snapshots.
- DB timeThe DB Time is the time spent in the database for the period of the report. If this is significantly higher than the Elapsed time then this is a good indicator of a heavily loaded system. Remember that on a multi-processor system, you might expect the DB Time to be able to exceed the elapsed time. Additionally, the db time includes the time waiting for the CPU to become available, so this number can be higher than the Elapsed time X Cores.In the example above, the numbers say that the database worked for 2193 minutes in 15 minutes of elapsed time. Whether that is an indication of a problem depends on the capacity and concurrency capabilities of the system. Looking at the numbers, 2193:15 is a ratio of 146:1, so, in this case, if they had significantly less than 146 cpus it is likely that there is some overloading issues. Remember that the user perception is also a significant factor in whether there is a "performance issue" - if the system delivers what the users want then there might not be a problem!
- SessionsYou can use the sessions information along with the DB time to give an average amount of DB time per session. Are there a large number or a small number of connections?
Important Summary Sections in AWR Report
Single Instance Report
Where:
- DB CPU(s)The DB CPU(s) figure shows the amount of the CPU being used by the database. You can use this alongside the actual number of cores to give you an idea of how much of the total available machine CPU is actually being used by this instance.
- DB Time(s) Here the "Per Second" information gives you
another version of the total DB time used, just in this case expressed as
every second as opposed to the full elapsed period.
. Redo size: This is
the amount of redo generated during this report.
. Logical Reads: This is calculated as
Consistent Gets + DB Block Gets = Logical Reads
. Block changes: The
number of blocks modified during the sample interval
. Physical Reads:
The number of requests for a block that caused a physical I/O.
. Physical Writes:
The number of physical writes issued.
. User Calls: The
number of queries generated
. Parses: Total of
all parses: both hard and soft
. Hard Parses: Those parses requiring a
completely new parse of the SQL statement. These consume both latches and
shared pool area.
. Soft Parses: Not
listed but derived by subtracting the hard parses from parses. A soft parse
reuses a previous hard parse and hence consumes far fewer resources.
. Sorts, Logons,
Executes and Transactions are all self explanatory
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.91 Redo NoWait %: 100.00
Buffer Hit %: 98.14 In-memory Sort %: 99.98
Library Hit %: 99.91 Soft Parse %: 94.48
Execute to Parse %: 99.45 Latch Hit %: 99.97
Parse CPU to Parse Elapsd %: 71.23 % Non-Parse CPU: 99.00
Case Study
1) Check server load
2)Generate and Check
ADDM report, implement findings, re-test
3)Check Gather Stats
on database or particular/relevant tables
4)AWR report
covering problem period
5)AWR report
covering good period of similar load and duration for comparison
6)AWR Compare
Periods report comparing the good and bad periods
7)Collect an ASH
report for the same period
8)Collect OSWatcher
data
9)Collect Alert log
and traces covering the duration of the problem
10)Check the Alert
log for the period when the issue occurred
11)Find any trace
files referenced in the problem period [
Database Performance
issue check
1) Check server load
topas to know top consuming process and
load on server
nmon and press m to know memory and paging
utilization
vmstat 5 5
lsps -s to know paging usages
2) Map OS processes with database process as below
select s.sql_id,s.sql_hash_value,,s.sql_id,s.sid,s.serial#,s.username,s.event,to_char(s.logon_time,'dd-mon:hh24:mi') from v$session s, v$process p where
p.addr=s.paddr and p.spid in(1105,4452)
3) Take explain plan
and check statistics of tables and indexes
select * from
table(dbms_xplan.display_cursor('&sql_id',null,'AdVanced ALLSTATS LAST'))
select
a.owner, a.table_name, a.num_rows, b.bytes/1048576 MB, a.last_analyzed from
dba_tables a, dba_segments b
where a.owner=b.owner and
a.table_name=b.segment_name and a.table_name='&table_name' order by b.bytes
desc;
4) Check any long
running SQL on database
select
sid, serial#, context, sofar, totalwork,round(sofar/totalwork*100,2)
"%_complete" from v$session_longops where sofar <> totalwork;
5) Check session-level and object-level locks on the database
Object-level locking
------------------------
select a.sid, a.serial#, c.object_name
from v$session a, v$locked_object b, dba_objects c
where b.object_id = c.object_id and a.sid = b.session_id;
Blocking sessions
------------------
SELECT DECODE(request,0,'Holder: ','Waiter: ') || sid sess,
       id1, id2, lmode, request, type
FROM V$LOCK
WHERE (id1, id2, type) IN (SELECT id1, id2, type FROM V$LOCK WHERE request > 0)
ORDER BY id1, request;
6) Check module-wise usage and problematic events on the database:
select count(1), status, module from v$session group by status, module order by module desc;
select count(1), status, module, sql_id from v$session group by status, module, sql_id order by module desc;
select count(1), event from v$session_wait group by event;
7) Generate AWR, ASH and ADDM reports for further analysis
@?/rdbms/admin/awrrpt.sql
@?/rdbms/admin/ashrpt.sql
@?/rdbms/admin/addmrpt.sql
@?/rdbms/admin/awrddrpt.sql  -- compare how the database performed during two different periods
JOB slowness check
Check job details:
server:dummy10205:/home/oracle
$ps -ef|grep uged7551
tprd001  40698072 28967422  0 00:16:54      -  0:00 /bin/ksh -x /prod/geprod/jobs_exec/uged7551.sh
oracle   26739026  4325714  0 02:33:01  pts/2  0:00 grep uged7551
server:dummy10205:/home/oracle
$ps -ef|grep uged7554
oracle    3342770  4325714  1 02:35:01  pts/2  0:00 grep uged7554
tprd001  21496252  6095180  0 00:16:54      -  0:00 /bin/ksh -x /prod/geprod/jobs_exec/uged7554.sh
server:dummy10205:/home/oracle
$proctree 21496252
6095180    p_ctmag
   21496252    /bin/ksh -x /prod/geprod/jobs_exec/uged7554.sh
      58982596    /bin/ksh /prod/geprod/scripts/rungensql.sh ge5790e02 4 10
         14877100    sqlplus @/prod/geprod/sql/ge5790e02.sql 4 10
            60162290    oracleintclprd (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
server:dummy10205:/home/oracle
$
or
Top users, programs, modules, machines and sql_ids:
SELECT * FROM (
  SELECT count(*) AS count, username, program, module, machine, sql_id
  FROM SYS.DBA_HIST_ACTIVE_SESS_HISTORY, DBA_USERS
  WHERE DBA_HIST_ACTIVE_SESS_HISTORY.USER_ID = DBA_USERS.USER_ID
    AND session_type = 'FOREGROUND'
    AND snap_id BETWEEN 96067 AND 96073
  GROUP BY username, program, module, machine, sql_id
  ORDER BY count(*) DESC)
WHERE rownum <= 20;
or
SELECT * FROM (
  SELECT count(*) AS count, username, program, module, machine, sql_id
  FROM SYS.DBA_HIST_ACTIVE_SESS_HISTORY, DBA_USERS
  WHERE DBA_HIST_ACTIVE_SESS_HISTORY.USER_ID = DBA_USERS.USER_ID
    AND session_type = 'FOREGROUND'
    AND sample_time > sysdate - 60/1440
    AND sample_time < sysdate
    --AND event = 'library cache: mutex X'
  GROUP BY username, program, module, machine, sql_id
  ORDER BY count(*) DESC)
WHERE rownum <= 20;
-- Top temp space consumers above a given threshold
SELECT * FROM (
  SELECT count(*) AS count, username, program, module, machine, sql_id,
         sum(temp_space_allocated) temp_space
  FROM SYS.DBA_HIST_ACTIVE_SESS_HISTORY, DBA_USERS
  WHERE DBA_HIST_ACTIVE_SESS_HISTORY.USER_ID = DBA_USERS.USER_ID
    AND sample_time > sysdate - 60/1440
    AND sample_time < sysdate
    AND temp_space_allocated > 100*1024*1024
  GROUP BY username, program, module, machine, sql_id
  ORDER BY count(*) DESC)
WHERE rownum <= 20
/
SET LINESIZE 500
SET PAGESIZE 1000
SET VERIFY OFF
SELECT b.username, b.sid, b.serial#, b.status, b.sql_id,
       to_char(b.logon_time,'dd-mon:hh24:mi'), b.machine, b.module
FROM v$sqltext a, v$session b
WHERE a.address = b.sql_address
  AND a.hash_value = b.sql_hash_value
  AND b.sql_id = 'dt7cud8uwzstp';
or
select sid, serial#, username, program, machine, module
from v$session where sql_id = 'dt7cud8uwzstp';
select count(1), event from v$session group by event;
select s.sql_id, s.sid, s.serial#, s.username, s.event,
       to_char(s.logon_time,'dd-mon:hh24:mi')
from v$session s, v$process p
where p.addr = s.paddr
  and p.spid in (4260298, 41418786, 5570940, 3146020, 1507632, 4587808, 4915642, 3080460, 2425340);
Some important Oracle tools
1) RDA
2) TFA
3) SQLT
4) OS Watcher
5) Various OS commands for identifying bottlenecks
1) Remote Diagnostic Agent (RDA)
RDA is a utility (a set of shell scripts and a Perl script) that can be downloaded from Oracle Support to collect diagnostic information from an Oracle database.
[oracle@lab rda]$ sh rda.sh
------------------------------------------------------------------------------
RDA Data Collection Started
11-Jun-2016 00:57:32
------------------------------------------------------------------------------
Processing RDA.BEGIN module ...
Enter the password for
"SYSTEM" at "cdb1":
Re-enter it to confirm:
Processing OS.PERF module ...
Processing RDA.CONFIG module ...
Processing SAMPLE.SAMPLE module ...
Processing OS.OS module ...
Processing OS.PROF module ...
Processing OS.NET module ...
Processing OS.INST module ...
Processing DB.DBA module ...
Processing OFM.WREQ module ...
This can take time. Be patient
...
Processing OFM.IREQ module ...
Processing RDA.LOAD module ...
Processing RDA.END module ...
------------------------------------------------------------------------------
RDA Data Collection Ended 11-Jun-2016 00:58:26
[oracle@ibmlab rda]$
2) TFA
TFA (Trace File Analyzer) Collector is used for enhanced diagnostic gathering across all cluster nodes (Doc ID 1513912.1).
Examples:
<gi_home>/bin/tfactl diagcollect -from "<MON/DD/YYYY HH:00:00>" -to "<MON/DD/YYYY HH:00:00>"
<gi_home>/bin/tfactl diagcollect -all -since 2d
<gi_home>/bin/tfactl diagcollect -for "<MON/DD/YYYY HH:00:00>"
tfactl diagcollect -from "APR/21/2017 14:00:00" -to "APR/21/2017 19:00:00"
* If you cannot use TFA, collect/upload the following from the problematic node manually. It is strongly recommended to use TFA to prune and collect files from all nodes (reference: Document 1513912.1, TFA Collector - Tool for Enhanced Diagnostic Gathering).
TFA Collector is installed in the GI home and ships with GI 11.2.0.4 and higher. For GI 11.2.0.3 or lower, install TFA Collector by referring to Document 1513912.1 for instructions on downloading and installing it.
$GI_HOME/tfa/bin/tfactl diagcollect -from "MMM/dd/yyyy hh:mm:ss" -to "MMM/dd/yyyy hh:mm:ss"
Format example: "Jul/1/2014 21:00:00"
Specify the "from time" to be 4 hours before and the "to time" to be 4 hours after the time of the error.
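The 4-hours-either-side window can be computed rather than typed by hand. A sketch assuming GNU date is available (the error time used below is simply the format example's own "Jul/1/2014 21:00:00"):

```shell
# Build -from/-to as 4 hours before/after the error time (assumes GNU date)
error_time="2014-07-01 21:00:00"
from=$(date -d "$error_time 4 hours ago" '+%b/%-d/%Y %H:%M:%S')
to=$(date -d "$error_time 4 hours" '+%b/%-d/%Y %H:%M:%S')
echo "tfactl diagcollect -from \"$from\" -to \"$to\""
```

On AIX or Solaris, where date lacks the -d option, the window still has to be worked out manually.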
3) SQLT
SQLT, also known as SQLTXPLAIN, is a tool provided by the Oracle Server Technologies Center of Expertise to diagnose why a particular SQL statement is performing poorly. SQLT is very focused and works on one SQL statement at a time: it takes one SQL statement as input and outputs a set of diagnostic files that can be used to diagnose the poorly performing statement.
Starting with Oracle 10.2 there is a script, $ORACLE_HOME/rdbms/admin/sqltrpt.sql, which runs the SQL Tuning Advisor from the command line and gives recommendations for problematic SQL statements.
[oracle@lab ~]$ sqlplus / as sysdba
SQL> @?/rdbms/admin/sqltrpt.sql
15 Most expensive SQL in the cursor cache
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL_ID        ELAPSED    SQL_TEXT_FRAGMENT
------------- ---------- -------------------------------------------------------
drnmr4vyabw0b     161.99 CREATE TABLE IDT_ML6121G TABLESPACE TSP_MEDIUM PARALLEL
1z1b4yqc1z9gb     151.69 DECLARE cnt NUMBER; bid NUMBER; eid
8bbt6aufcb7uc      91.73 BEGIN PACK_ML6121G02.PC_ADM_ML6121G02(9,10); END;
15 Most expensive SQL in the workload repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL_ID        ELAPSED    SQL_TEXT_FRAGMENT
------------- ---------- -------------------------------------------------------
gay2fcy4gd2ka ########## BEGIN pack_ml8704e.getTriadBatchData_ext(:1, :2, :3, :4
b6usrg82hwsa3 ########## call dbms_stats.gather_database_stats_job_proc ( )
9pask37fn7ktu ########## SELECT CREDIT_LIMIT_DECISION_SID, CRED_LIMIT_DECISION_T
bcdw1bfudc6u0  86,840.00 SELECT FUNDING_TYPE_CODE FROM CARD_TRANSACTION CT, FUND
as1c06b0zckuv  82,945.66 SELECT * FROM ( SELECT CLD.CREDIT_MONETARY_LIMIT, TCLS.
Specify the Sql id
~~~~~~~~~~~~~~~~~~
Enter value for sqlid: 6cg8puc5mz1va
Sql Id specified: 6cg8puc5mz1va
Tune the sql
~~~~~~~~~~~~
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name : TASK_952340
Tuning Task Owner : SYS
Scope : COMPREHENSIVE
Time Limit(seconds) : 1800
Completion Status : COMPLETED
Started at : 05/28/2014 10:02:51
Completed at : 05/28/2014 10:03:01
Number of SQL Profile Findings : 1
-------------------------------------------------------------------------------
Schema Name: PILOT_DBA
SQL ID     : 6cg8puc5mz1va
SQL Text : SELECT
COUNT (*) COUNT_ANNOTATIONS FROM ANNOTATION_TYPE ANN_TY,
ANNOTATION ANN, TRIAD_CRED_STRAT_ANNO_ASSIGN TRD_CS_ANN_ASS
WHERE
ANN.CONTRACT_NUMBER = :B4 AND ANN.ANNOTATION_TYPE_CODE =
ANN_TY.ANNOTATION_TYPE_CODE AND TRD_CS_ANN_ASS.CREDIT_STRATEGY_RE
FNO (+)=
:B3 AND TRD_CS_ANN_ASS.ANNOTATION_TYPE_CODE (+)=
ANN.ANNOTATION_TYPE_CODE AND ( (ANN.DATE_VALID_UNTIL >= :B1 ) OR
(
TRD_CS_ANN_ASS.CREDIT_STRATEGY_REFNO IS NULL AND (
ANN.DATE_VALID_UNTIL IS NULL OR ANN.DATE_VALID_UNTIL =
ANN.DATE_VALID_FROM ) AND ANN.DATE_VALID_FROM + :B2 >= :B1 ) OR
(
TRD_CS_ANN_ASS.CREDIT_STRATEGY_REFNO IS NOT NULL AND (
ANN.DATE_VALID_UNTIL IS NULL OR ANN.DATE_VALID_UNTIL =
ANN.DATE_VALID_FROM ) AND ANN.DATE_VALID_FROM +
PACK_DATE_UTILS.FN_CONVERT_PERIOD (
TRD_CS_ANN_ASS.PERIOD_ANNOTATION_EFFECTIVE,
TRD_CS_ANN_ASS.PERIOD_UNIT_ANNOTATION_EFFECT,
'DY') >=:B1 ) )
-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------
1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
A potentially better execution plan was found for this statement.
Recommendation (estimated benefit: 19.73%)
------------------------------------------
- Consider accepting the recommended SQL profile.
  execute dbms_sqltune.accept_sql_profile(task_name => 'TASK_952340', replace => TRUE);
-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------
1- Original With Adjusted Cost
------------------------------
Plan hash value: 1889861245
----------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                               |     1 |    41 |    10   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE                   |                               |     1 |    41 |            |          |
|   2 |   NESTED LOOPS                    |                               |     9 |   369 |    10   (0)| 00:00:01 |
|*  3 |    FILTER                         |                               |       |       |            |          |
|   4 |     NESTED LOOPS OUTER            |                               |     9 |   333 |    10   (0)| 00:00:01 |
|   5 |      TABLE ACCESS BY INDEX ROWID  | ANNOTATION                    |    17 |   408 |     7   (0)| 00:00:01 |
|*  6 |       INDEX RANGE SCAN            | PF_ANN_01                     |     3 |       |     4   (0)| 00:00:01 |
|   7 |      TABLE ACCESS BY INDEX ROWID  | TRIAD_CRED_STRAT_ANNO_ASSIGN  |     1 |    13 |     1   (0)| 00:00:01 |
|*  8 |       INDEX UNIQUE SCAN           | PK_DIC_TRD_CS_ANN_ASS         |     1 |       |     0   (0)| 00:00:01 |
|*  9 |    INDEX UNIQUE SCAN              | PK_ANN_TY                     |     1 |     4 |     0   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 -
filter("ANN"."DATE_VALID_UNTIL">=:B1 OR
"TRD_CS_ANN_ASS"."CREDIT_STRATEGY_REFNO" IS NULL AND
("ANN"."DATE_VALID_UNTIL" IS NULL OR
"ANN"."DATE_VALID_UNTIL"="ANN"."DATE_VALID_FROM")
AND
INTERNAL_FUNCTION("ANN"."DATE_VALID_FROM")+:B2>=:B1
OR ("ANN"."DATE_VALID_UNTIL" IS NULL OR
"ANN"."DATE_VALID_UNTIL"="ANN"."DATE_VALID_FROM")
AND
INTERNAL_FUNCTION("ANN"."DATE_VALID_FROM")+"PACK_DATE_UTILS"."FN_CONVERT_PERIOD"("TRD_CS_ANN_ASS"."PERIO
D_ANNOTATION_EFFECTIVE","TRD_CS_ANN_ASS"."PERIOD_UNIT_ANNOTATION_EFFECT",'DY')>=:B1
AND
"TRD_CS_ANN_ASS"."CREDIT_STRATEGY_REFNO" IS NOT
NULL)
6 -
access("ANN"."CONTRACT_NUMBER"=:B4)
8 -
access("TRD_CS_ANN_ASS"."CREDIT_STRATEGY_REFNO"(+)=:B3 AND
"TRD_CS_ANN_ASS"."ANNOTATION_TYPE_CODE"(+)="ANN"."ANNOTATION_TYPE_CODE")
9 - access("ANN"."ANNOTATION_TYPE_CODE"="ANN_TY"."ANNOTATION_TYPE_CODE")
2- Using SQL Profile
--------------------
Plan hash value: 1435743108
----------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                               |     1 |    41 |     8   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE                   |                               |     1 |    41 |            |          |
|*  2 |   FILTER                          |                               |       |       |            |          |
|   3 |    NESTED LOOPS OUTER             |                               |     9 |   369 |     8   (0)| 00:00:01 |
|   4 |     NESTED LOOPS                  |                               |     3 |    84 |     7   (0)| 00:00:01 |
|   5 |      TABLE ACCESS BY INDEX ROWID  | ANNOTATION                    |    17 |   408 |     7   (0)| 00:00:01 |
|*  6 |       INDEX RANGE SCAN            | PF_ANN_01                     |     3 |       |     4   (0)| 00:00:01 |
|*  7 |      INDEX UNIQUE SCAN            | PK_ANN_TY                     |     1 |     4 |     0   (0)| 00:00:01 |
|   8 |     TABLE ACCESS BY INDEX ROWID   | TRIAD_CRED_STRAT_ANNO_ASSIGN  |     1 |    13 |     1   (0)| 00:00:01 |
|*  9 |      INDEX UNIQUE SCAN            | PK_DIC_TRD_CS_ANN_ASS         |     1 |       |     0   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 -
filter("ANN"."DATE_VALID_UNTIL">=:B1 OR
"TRD_CS_ANN_ASS"."CREDIT_STRATEGY_REFNO" IS NULL AND
("ANN"."DATE_VALID_UNTIL" IS NULL OR
"ANN"."DATE_VALID_UNTIL"="ANN"."DATE_VALID_FROM")
AND
INTERNAL_FUNCTION("ANN"."DATE_VALID_FROM")+:B2>=:B1
OR ("ANN"."DATE_VALID_UNTIL" IS NULL OR
"ANN"."DATE_VALID_UNTIL"="ANN"."DATE_VALID_FROM")
AND
INTERNAL_FUNCTION("ANN"."DATE_VALID_FROM")+"PACK_DATE_UTILS"."FN_CONVERT_PERIOD"("TRD_CS_ANN_ASS"."PERIO
D_ANNOTATION_EFFECTIVE","TRD_CS_ANN_ASS"."PERIOD_UNIT_ANNOTATION_EFFECT",'DY')>=:B1
AND
"TRD_CS_ANN_ASS"."CREDIT_STRATEGY_REFNO" IS NOT
NULL)
6 -
access("ANN"."CONTRACT_NUMBER"=:B4)
7 -
access("ANN"."ANNOTATION_TYPE_CODE"="ANN_TY"."ANNOTATION_TYPE_CODE")
9 -
access("TRD_CS_ANN_ASS"."CREDIT_STRATEGY_REFNO"(+)=:B3 AND
"TRD_CS_ANN_ASS"."ANNOTATION_TYPE_CODE"(+)="ANN"."ANNOTATION_TYPE_CODE")
-------------------------------------------------------------------------------
This report recommends accepting a SQL profile to improve the performance of the above query.
4) OS Watcher installation steps
OS Watcher is a series of shell scripts that collect
specific kinds of data, using operating system diagnostic utilities. Control is
passed to individually spawned operating system data collector processes, which
in turn collect specific data, time stamp the data output, and append the data
to pre-generated and named files. Each data collector will have its own file,
created and named by the File Manager process. OSW invokes the distinct
operating system utilities listed below as data collectors. OSW does not put
any significant performance-affecting load on the system; it has the same
impact as running regular OS commands like netstat and ps. These utilities,
or their equivalents, will be supported as available for each supported target
platform:
* ps
* top
* mpstat
* iostat
* netstat
* traceroute
* vmstat
The size of the archived files saved during the running of
the OSW will be based on the user parameters set at the starting of OSW and the
OS information. For example, if each file will contain an hour of data and the
collection interval is 10 sec the amount of data will be bigger compared to
collecting with an interval of 60 sec.
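The interval's effect on archive size can be estimated directly: with one sample per interval and one hour of data per file, a 10-second interval yields six times as many samples per file as a 60-second interval. A quick sketch of that arithmetic:

```shell
# Samples per hour-long archive file at different collection intervals
file_seconds=3600
for interval in 10 60; do
  echo "interval ${interval}s -> $((file_seconds / interval)) samples per file"
done
```

Multiply samples per file by the average size of one utility's output to estimate disk usage for a planned retention period.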
It is highly recommended that OSW be installed and run continuously on ALL cluster nodes, at all times (Document 301137.1).
Be sure to use separate directories per node for storing
OSW output. When using OS Watcher in a RAC environment, each node must write
its output files to a separate archive directory. Combining the output files
under one archive (on shared storage) is not supported and causes the OSW tool
to crash. Shared storage is fine, but each node needs a separate archive
directory.
Document 301137.1 - OS Watcher User Guide
OSW for Windows: OS Watcher for Windows is no longer supported. It has been replaced by the Cluster Health Monitor. Please see Document ID 736752.1 for more information on the Cluster Health Monitor and how to download it.
[oracle@ibmlab jb]$
pwd
/home/oracle/jb
[oracle@ibmlab jb]$
ls
oswbb733.tar
[oracle@ibmlab jb]$
tar -xvf oswbb733.tar
oswbb/
oswbb/docs/
oswbb/docs/The_Analyzer/
oswbb/docs/The_Analyzer/OSWatcherAnalyzerOverview.pdf
oswbb/docs/The_Analyzer/oswbbaUserGuide.pdf
nput.txt
oswbb/oswrds.sh
[oracle@ibmlab jb]$
Start OS Watcher using the scripts as the oracle user, to capture data every 15 seconds:
nohup ./startOSWbb.sh 15 300 gzip &
[oracle@ibmlab oswbb]$ ps -ef |grep
-i osw
oracle   21464     1  0 19:39 pts/1 00:00:00 /bin/sh ./OSWatcher.sh 15 300 gzip
oracle   21522 21464  0 19:39 pts/1 00:00:00 /bin/sh ./OSWatcherFM.sh 300 /home/oracle/jb/oswbb/archive
oracle   21943 21266  0 19:41 pts/1 00:00:00 grep -i osw
[oracle@ibmlab oswbb]$
To stop OSWatcher:
[oracle@ibmlab oswbb]$ ./stopOSWbb.sh
[oracle@ibmlab oswbb]$
To uninstall OSWatcher:
simply delete the oswbb parent directory.
5) Various OS commands for identifying bottlenecks
df -h / df -g
A file system may run out of space, and the database may not even allow logins because it has no space to generate OS-level files. The commands below can be used to investigate and resolve the issue.
If df shows a file system (for example /opt) above 80% usage, treat it as a warning: check for and remove files that are no longer required.
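The "above 80%" check can be scripted by parsing df -P style output with awk. The sketch below runs over a canned sample (the sample rows and the 80% threshold are illustrative assumptions; pipe real `df -P` output instead):

```shell
# Flag file systems at or above a usage threshold from df -P style output.
threshold=80
df_sample='Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 1000000 850000 150000 85% /opt
/dev/sda2 1000000 300000 700000 30% /home'
printf '%s\n' "$df_sample" | awk -v t="$threshold" \
  'NR > 1 { use = $5; sub(/%/, "", use); if (use + 0 >= t) print $6 " is at " use "% - cleanup needed" }'
```

With the sample above, only /opt is flagged. The -P (POSIX) flag keeps each file system on one line so the field positions stay stable.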
vmstat (virtual memory statistics) helps determine where the system is spending its resources.
/home/oracle $vmstat 5 5
System configuration: lcpu=32 mem=90112MB ent=2.00
kthr    memory              page                     faults               cpu
----- ----------- ------------------------ ------------ -----------------------
 r  b      avm    fre  re  pi  po  fr  sr  cy    in    sy   cs us sy id wa   pc   ec
 5  0 20454530 211691   0   0   0   0   0   0   104 26418 1889  8  6 87  0 0.42 20.9
 4  0 20454530 211691   0   0   0   0   0   0  1856 11360 3806 15 12 73  0 0.86 43.1
 4  0 20454531 211687   0   0   0   0   0   0   418 21436 2441 11  7 82  0 0.58 29.0
 4  0 20454531 211664   0   0   0   0   0   0   108 13954 1947  9  4 87  0 0.43 21.3
 2  0 20456049 210136   0   0   0   0   0   0    81 18128 1872  8  5 88  0 0.40 19.8
The r column displays the number of processes waiting for access to a processor. The b column displays the number of processes in a sleep state. These values are usually zero.
If b (processes sleeping) and r are consistently greater than 0, you may be using more CPU than is available.
If po (memory paged out to disk) and pi (memory paged in from disk) are consistently greater than 0, you may have a memory bottleneck. Paging and swapping occur when there is not enough physical memory to accommodate the memory demand.
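The r/b and pi/po rules can be applied mechanically to a sample line. A sketch using the first data row from the vmstat output above (field positions follow the AIX header: r, b, avm, fre, re, pi, po, ...):

```shell
# First data row from the vmstat sample above
vm_line='5 0 20454530 211691 0 0 0 0 0 0 104 26418 1889 8 6 87 0 0.42 20.9'
set -- $vm_line          # split the row into positional parameters
r=$1; b=$2; pi=$6; po=$7
if [ "$r" -gt 0 ] && [ "$b" -gt 0 ]; then
  echo "possible CPU pressure: r=$r b=$b"
fi
if [ "$pi" -gt 0 ] || [ "$po" -gt 0 ]; then
  echo "possible memory bottleneck: pi=$pi po=$po"
else
  echo "no paging observed"
fi
```

For this row, b, pi and po are all zero, so neither warning fires: the run queue is busy but not blocked, and no paging is occurring.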
iostat
Reports I/O statistics.
sar (System Activity Reporter): a legacy system monitoring tool used to report CPU activity, system load average, memory/paging, LUN load and network activity.
bash-2.05$ sar -p 10 10
SunOS drsmgtp1 5.9 Generic_122300-31 sun4u 06/15/2016
11:31:07   atch/s  pgin/s ppgin/s  pflt/s  vflt/s slock/s
11:31:17     8.00    2.40    6.00    6.40   18.30    0.00
11:31:27     0.70    2.80    2.80    0.00    0.40    0.00
11:31:37     0.40    1.50    1.50    0.10    0.00    0.00
11:31:47     4.30    4.40    4.40    5.90   16.10    0.00
11:31:57     0.70    2.80    2.80    0.00    0.00    0.00
11:32:07     0.40    1.60    1.60    0.70    0.00    0.00
11:32:17     3.50    1.20    1.20    5.90   16.10    0.00
11:32:27     0.40    1.90    5.70    0.10    0.00    0.00
11:32:37     1.20    0.00    0.00    0.00    0.00    0.00
11:32:47     4.40    0.00    0.00    5.90   16.20    0.00
Average      2.40    1.86    2.60    2.50    6.71    0.00
bash-2.05$
top: shows load information along with memory and process utilization details.
ps: quickly identifies which processes are running on Unix and Linux servers.
[oracle@ibmlab cdump]$ ps -ef |grep
pmon
oracle 23212 22307 0 21:04 pts/3 00:00:00 grep pmon
oracle 27788 1 0
Jun10 ? 00:00:20 ora_pmon_cdb1
[oracle@ibmlab cdump]$
free -m: shows memory information on the Linux operating system.
[oracle@ibmlab cdump]$ free -m
             total       used       free     shared    buffers     cached
Mem:          3962       3897         65          0        139       3170
-/+ buffers/cache:        587       3375
Swap:        10239         18      10221
[oracle@ibmlab cdump]$
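In this output, memory held in buffers and cache is reclaimable, so the memory effectively available to applications is roughly free + buffers + cached (the "-/+ buffers/cache" row; it differs by 1 MB here because free rounds each column to MB). Checking that with the sample values:

```shell
# Values taken from the free -m sample above (in MB)
free_mb=65; buffers_mb=139; cached_mb=3170
echo "effectively free: $((free_mb + buffers_mb + cached_mb)) MB"
```

So a low "free" figure alone does not indicate memory pressure on Linux; most of that memory can be reclaimed from cache on demand.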
nmon
Displays memory, CPU, disk usage etc.
- Paging info
- Memory and paging in one output
OS Watcher is a series of shell scripts that collect
specific kinds of data, using operating system diagnostic utilities. Control is
passed to individually spawned operating system data collector processes, which
in turn collect specific data, time stamp the data output, and append the data
to pre-generated and named files. Each data collector will have its own file,
created and named by the File Manager process. OSW invokes the distinct
operating system utilities listed below as data collectors. OSW will not put
any significant performance affecting load on the system. It will have the same
impact as running the regular OS command like netstat, ps etc. These utilities
will be supported, or their equivalents, as available for each supported target
platform:
* ps
* top
* mpstat
* iostat
* netstat
* traceroute
* vmstat
The size of the archived files saved during the running of
the OSW will be based on the user parameters set at the starting of OSW and the
OS information. For example, if each file will contain an hour of data and the
collection interval is 10 sec the amount of data will be bigger compared to
collecting with an interval of 60 sec.
It is highly recommended that OSW be installed and run
continuously on ALL cluster nodes, at all times Document 301137.1
Be sure to use separate directories per node for storing
OSW output. When using OS Watcher in a RAC environment, each node must write
its output files to a separate archive directory. Combining the output files
under one archive (on shared storage) is not supported and causes the OSW tool
to crash. Shared storage is fine, but each node needs a separate archive
directory.
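The per-node archive rule above can be sketched as a one-time setup step. This is an illustrative sketch, not OSW's own installer: OSW_BASE is an assumed path, and using `uname -n` for the node name is an assumption.

```shell
# Sketch: give each RAC node its own OSW archive directory, as Doc
# 301137.1 requires. Shared storage is fine as long as the archive
# directory itself is node-specific.
OSW_BASE=${OSW_BASE:-/tmp/oswbb_demo}   # assumed base path, e.g. a shared mount
node=$(uname -n)                        # this node's name
mkdir -p "$OSW_BASE/archive_$node"
echo "archive dir: $OSW_BASE/archive_$node"
```

OSW would then be started on each node with its output directed to that node's directory, so no two nodes ever write into the same archive.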
Document 301137.1 - OS Watcher User Guide
OSW for WINDOWS: OS Watcher for Windows is no longer supported. It has been replaced by the Cluster Health Monitor. Please see Document ID 736752.1 for more information and for how to download the Cluster Health Monitor.
[oracle@ibmlab jb]$ pwd
/home/oracle/jb
[oracle@ibmlab jb]$ ls
oswbb733.tar
[oracle@ibmlab jb]$ tar -xvf oswbb733.tar
oswbb/
oswbb/docs/
oswbb/docs/The_Analyzer/
oswbb/docs/The_Analyzer/OSWatcherAnalyzerOverview.pdf
oswbb/docs/The_Analyzer/oswbbaUserGuide.pdf
nput.txt
oswbb/oswrds.sh
[oracle@ibmlab jb]$
Start OS Watcher as the oracle user. The arguments are the snapshot interval in seconds, the number of hours of archive data to keep, and an optional compression command:
nohup ./startOSWbb.sh 15 300 gzip &
[oracle@ibmlab oswbb]$ ps -ef | grep -i osw
oracle   21464     1  0 19:39 pts/1    00:00:00 /bin/sh ./OSWatcher.sh 15 300 gzip
oracle   21522 21464  0 19:39 pts/1    00:00:00 /bin/sh ./OSWatcherFM.sh 300 /home/oracle/jb/oswbb/archive
oracle   21943 21266  0 19:41 pts/1    00:00:00 grep -i osw
[oracle@ibmlab oswbb]$
To stop OSWatcher:
[oracle@ibmlab oswbb]$ ./stopOSWbb.sh
[oracle@ibmlab oswbb]$
To uninstall OSWatcher, simply delete the oswbb parent directory.
6) Various OS commands to identify bottlenecks:
df -h / df -g
A file system may run out of space, and the database may then not even allow logins because it has no room to generate OS-level files. We can use the commands below to investigate and resolve the issue.
In the df output, any file system above 80% used (for example /opt) is a warning: check for and remove files that are no longer required.
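The 80% warning check is easy to automate. A small sketch, with sample df output embedded so it runs anywhere; in practice you would pipe real `df -h` (or `df -g` on AIX) into the same awk filter. The filesystem names and sizes in the sample are hypothetical:

```shell
# Flag file systems above the 80% warning threshold.
df_output='Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   25G   25G  50% /
/dev/sda2        20G   17G    3G  85% /opt'

warn=$(echo "$df_output" | awk 'NR > 1 && $5+0 > 80 {
    print $6, "is", $5, "used - clean up old files"
}')
echo "$warn"
```

Only /opt at 85% trips the filter; awk's `$5+0` coerces the "85%" string to a number before comparing.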
VMSTAT (virtual memory statistics): used to determine where the system is spending its resources.
/home/oracle $vmstat 5 5
System configuration: lcpu=32 mem=90112MB ent=2.00
kthr      memory                   page                       faults               cpu
----- ----------------- ------------------------- -------------------- -----------------------
 r  b      avm      fre  re  pi  po  fr  sr  cy    in     sy    cs  us sy id wa   pc    ec
 5  0 20454530  211691   0   0   0   0   0   0   104  26418  1889   8  6 87  0  0.42  20.9
 4  0 20454530  211691   0   0   0   0   0   0  1856  11360  3806  15 12 73  0  0.86  43.1
 4  0 20454531  211687   0   0   0   0   0   0   418  21436  2441  11  7 82  0  0.58  29.0
 4  0 20454531  211664   0   0   0   0   0   0   108  13954  1947   9  4 87  0  0.43  21.3
 2  0 20456049  210136   0   0   0   0   0   0    81  18128  1872   8  5 88  0  0.40  19.8
The r column displays the number of processes waiting for access to a processor; the b column displays the number of processes in a sleep state. These values are usually zero. If r and b are consistently greater than 0, you may be using more CPU than is available. If po (memory swapped out to disk) and pi (memory swapped in from disk) are consistently greater than 0, you may have a memory bottleneck: paging and swapping occur when there isn't enough physical memory to accommodate the memory demand.
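Those r/b and pi/po checks can be applied mechanically to vmstat rows. A sketch, with two sample rows from the output above embedded; the column positions ($1=r, $2=b, $6=pi, $7=po) follow the AIX layout shown earlier:

```shell
# Flag CPU pressure (r and b both > 0) and paging activity (pi or po > 0)
# in vmstat-style rows.
vmstat_rows='5 0 20454530 211691 0 0 0 0 0 0 104 26418 1889 8 6 87 0 0.42 20.9
4 0 20454530 211691 0 0 0 0 0 0 1856 11360 3806 15 12 73 0 0.86 43.1'

verdict=$(echo "$vmstat_rows" | awk '{
    if ($1 > 0 && $2 > 0) { print "row " NR ": CPU pressure (r=" $1 " b=" $2 ")"; hit = 1 }
    if ($6 > 0 || $7 > 0) { print "row " NR ": paging (pi=" $6 " po=" $7 ")"; hit = 1 }
}
END { if (!hit) print "no CPU or paging pressure in sample" }')
echo "$verdict"
```

In the sample, b and pi/po are all zero, so neither condition fires.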
IOSTAT: iostat reports I/O statistics for devices, useful for identifying disk bottlenecks.
SAR (System Activity Reporter): a legacy system monitoring tool used to report CPU activity, system load averages, memory/paging, LUN load, and network activity.
bash-2.05$ sar -p 10 10
SunOS drsmgtp1 5.9 Generic_122300-31 sun4u 06/15/2016
11:31:07  atch/s  pgin/s  ppgin/s  pflt/s  vflt/s  slock/s
11:31:17    8.00    2.40     6.00    6.40   18.30     0.00
11:31:27    0.70    2.80     2.80    0.00    0.40     0.00
11:31:37    0.40    1.50     1.50    0.10    0.00     0.00
11:31:47    4.30    4.40     4.40    5.90   16.10     0.00
11:31:57    0.70    2.80     2.80    0.00    0.00     0.00
11:32:07    0.40    1.60     1.60    0.70    0.00     0.00
11:32:17    3.50    1.20     1.20    5.90   16.10     0.00
11:32:27    0.40    1.90     5.70    0.10    0.00     0.00
11:32:37    1.20    0.00     0.00    0.00    0.00     0.00
11:32:47    4.40    0.00     0.00    5.90   16.20     0.00
Average     2.40    1.86     2.60    2.50    6.71     0.00
bash-2.05$
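As a sanity check on reading the report, sar's Average line can be recomputed from the interval rows. A sketch for the atch/s column, with the ten sample values from the output above embedded:

```shell
# Recompute the sar Average for the atch/s column from its ten
# interval samples.
atch='8.00 0.70 0.40 4.30 0.70 0.40 3.50 0.40 1.20 4.40'

avg=$(echo "$atch" | awk '{
    for (i = 1; i <= NF; i++) sum += $i
    printf "%.2f", sum / NF
}')
echo "avg atch/s = $avg (sar reports 2.40)"
```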
SQL Plan Management (SPM)
- Introduced in 11g, enhanced in 12c
- Preserves performance by guaranteeing execution plan stability
- Only accepted/verified plans are used
- New plans are stored in the plan history and saved for future evaluation
- The whole idea is to avoid an unverified new plan: a proactive and conservative approach
- Can enforce a specific plan
- Purges plan history according to the retention policy
Resource Manager in a multitenant database:
- A CDB resource plan determines the amount of resources allocated to each PDB.
- A PDB resource plan determines how the resources allocated to a specific PDB are allocated to consumer groups within that PDB.
How I Create a SQL Baseline to Fix a Query with a Known Execution Plan
This section covers creating a SQL baseline and forcing a query to use a better execution plan (plan hash value). We are sharing a scenario where a query that was running fine until yesterday is now suddenly running long, so it is very important to check the plans recorded for that SQL ID.
Manual Plan Loading
Manual plan loading can be used in conjunction with, or as an alternative to, automatic plan capture. The load operations are performed using the DBMS_SPM package,
which allows SQL plan baselines to be loaded from SQL tuning sets or from specific SQL statements in the cursor cache. Manually loaded statements are flagged as accepted by default.
If a SQL plan baseline is present for a SQL statement, the plan is added to the baseline, otherwise a new baseline is created.
The following code uses the LOAD_PLANS_FROM_SQLSET function to load all statements in an existing SQL tuning set into SQL baselines.
A filter can be applied to limit the SQL statements loaded if necessary.
DECLARE
l_plans_loaded PLS_INTEGER;
BEGIN
l_plans_loaded := DBMS_SPM.load_plans_from_sqlset(
sqlset_name => 'my_sqlset');
END;
/
The LOAD_PLANS_FROM_CURSOR_CACHE functions allow SQL statements to be loaded from the cursor cache. There are four overloads, allowing statements to be identified by a number of criteria, including: SQL_ID, SQL_TEXT, PARSING_SCHEMA_NAME, MODULE and ACTION. The following example identifies the SQL statement using the SQL_ID.
DECLARE
l_plans_loaded PLS_INTEGER;
BEGIN
l_plans_loaded := DBMS_SPM.load_plans_from_cursor_cache(
sql_id => '1fkh93md0802n');
END;
/
select sql_id, plan_hash_value from v$sql where sql_id='&sql_id';
select count(*) from dba_sql_plan_baselines;
Load plan from cursor cache
DECLARE
  l_plans_loaded PLS_INTEGER;
BEGIN
  l_plans_loaded := DBMS_SPM.load_plans_from_cursor_cache(
    sql_id => '1fkh93md0802n', plan_hash_value => 4453343434);
END;
/
select count(*) from dba_sql_plan_baselines;
SELECT sql_handle, plan_name, enabled, accepted FROM dba_sql_plan_baselines WHERE sql_text LIKE '%spm_test_tab%' ;
SELECT sql_handle, plan_name, enabled, accepted FROM dba_sql_plan_baselines WHERE sql_handle = 'SYS_SQL_7b76323ad90440b9';
select * from table(dbms_xplan.display_sql_plan_baseline(sql_handle=>'sqlccdd555555s'));
select sql_id,plan_hash_value,exact_matching_signature,sql_plan_baseline from v$sql where sql_id='&sql_id';
select sql_handle,plan_name,accepted, enabled,optimizer_cost from dba_sql_plan_baselines where SQL_HANDLE='SQL_540fb108407c78ed';
The return value of the LOAD_PLANS_FROM_SQLSET and LOAD_PLANS_FROM_CURSOR_CACHE functions indicates the number of plans loaded by the function call.
Evolving SQL Plan Baselines
Evolving a SQL plan baseline is the process by which the optimizer determines whether non-accepted plans in the baseline should be accepted. As mentioned previously, manually loaded plans are automatically marked as accepted, so they do not need to be evolved. Plans captured automatically are evolved using the EVOLVE_SQL_PLAN_BASELINE function, which returns a CLOB reporting its results.
SET LONG 10000
SELECT DBMS_SPM.evolve_sql_plan_baseline(sql_handle => 'SYS_SQL_7b76323ad90440b9')
FROM dual;
Note: my DB version is 12.2.0.1.0.
Query to check for multiple hash plans for a SQL ID:
select SQL_ID,
       PLAN_HASH_VALUE,
       sum(EXECUTIONS_DELTA) EXECUTIONS,
       sum(ROWS_PROCESSED_DELTA) CROWS,
       trunc(sum(CPU_TIME_DELTA)/1000000/60) CPU_MINS,
       trunc(sum(ELAPSED_TIME_DELTA)/1000000/60) ELA_MINS
from   DBA_HIST_SQLSTAT
where  SQL_ID in ('&sql_id')   -- replace with your SQL ID
group by SQL_ID, PLAN_HASH_VALUE
order by SQL_ID, CPU_MINS;
SQL_ID        PLAN_HASH_VALUE EXECUTIONS      CROWS   CPU_MINS   ELA_MINS
------------- --------------- ---------- ---------- ---------- ----------
99v5172d7sg2r      2819387955          2        167          1          1
99v5172d7sg2r      3788285656          2        184          1          1
99v5172d7sg2r      2728488078          3        218          1          1
99v5172d7sg2r      1528540721          4        332          1          1
99v5172d7sg2r      1565483063          7        699          7          8
99v5172d7sg2r      1370745820         17       1236          8          9
99v5172d7sg2r       221645537          0          0         36         51
99v5172d7sg2r      1573610442          3          0         74         75

8 rows selected.
We find the best execution plan (PLAN_HASH_VALUE) and force the query to use it. Below are the steps I followed to create the SQL baseline and fix the bad query.
STEP 1: GENERATE ALL PREVIOUS HISTORY RUN DETAILS OF SQL_ID FROM AWR
break off sdate
set lines 2000
set linesize 2000
col SDATE format a10
col STIME format a10
select to_char(begin_interval_time,'YYYY/MM/DD') SDATE,
       to_char(begin_interval_time,'HH24:MI') STIME,
       s.snap_id, sql_id, plan_hash_value PLAN,
       ROUND(elapsed_time_delta/1000000,2) ET_SECS,
       nvl(executions_delta,0) execs,
       ROUND((elapsed_time_delta/decode(executions_delta,null,1,0,1,executions_delta))/1000000,2) ET_PER_EXEC,
       ROUND((buffer_gets_delta/decode(executions_delta,null,1,0,1,executions_delta)),2) avg_lio,
       ROUND((CPU_TIME_DELTA/decode(executions_delta,null,1,0,1,executions_delta))/1000,2) avg_cpu_ms,
       ROUND((IOWAIT_DELTA/decode(executions_delta,null,1,0,1,executions_delta))/1000,2) avg_iow_ms,
       ROUND((DISK_READS_DELTA/decode(executions_delta,null,1,0,1,executions_delta)),2) avg_pio,
       ROWS_PROCESSED_DELTA num_rows
from   DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where  s.sql_id = '&sql_id'
and    ss.snap_id = S.snap_id
and    ss.instance_number = S.instance_number
order by sdate, stime;
col begin_interval_time for a35
select ss.snap_id, ss.instance_number node, begin_interval_time,
       s.sql_id, s.plan_hash_value,
       nvl(executions_delta,0) execs,
       (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
       (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio
from   DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where  s.sql_id = nvl('&sql_id','99v5172d7sg2r')
and    ss.snap_id = S.snap_id
and    ss.instance_number = S.instance_number
and    executions_delta > 0
--and  plan_hash_value = 3099046451
order by 1, 2, 3
/
Enter value for sql_id: 99v5172d7sg2r

   SNAP_ID       NODE BEGIN_INTERVAL_TIME        SQL_ID        PLAN_HASH_VALUE      EXECS  AVG_ETIME    AVG_LIO
---------- ---------- -------------------------- ------------- --------------- ---------- ---------- ----------
    234043          1 31-OCT-22 06.15.10.441 AM  99v5172d7sg2r      1565483063          1  44.977487    1275694
    234724          1 07-NOV-22 08.30.15.419 AM  99v5172d7sg2r      1370745820          1  15.671934     485334
    235398          1 14-NOV-22 09.00.14.042 AM  99v5172d7sg2r      1370745820          1  63.020552     747894
    236069          1 21-NOV-22 08.45.39.541 AM  99v5172d7sg2r      2728488078          1  29.948022     918476
    236740          1 28-NOV-22 08.30.50.784 AM  99v5172d7sg2r      3788285656          1   37.30876    1127835
    236935          1 30-NOV-22 09.15.45.852 AM  99v5172d7sg2r      3788285656          1  41.008219    1206658
    237416          1 05-DEC-22 09.30.56.546 AM  99v5172d7sg2r      1370745820          1  14.640457     439093
    238086          1 12-DEC-22 09.00.56.011 AM  99v5172d7sg2r      1370745820          1  23.707833     719818
    238778          1 19-DEC-22 02.00.28.302 PM  99v5172d7sg2r      1370745820          1  42.164343    1024287
    239436          1 26-DEC-22 10.30.20.445 AM  99v5172d7sg2r      1573610442          1 840.569558   15504539
    239438          1 26-DEC-22 11.00.35.118 AM  99v5172d7sg2r      1573610442          1 978.608431   17804033
    239439          1 26-DEC-22 11.15.42.999 AM  99v5172d7sg2r      1573610442          1 1790.28067   30838064
/*In this scenario sql_id=>99v5172d7sg2r and plan_hash_value for good plan that we want to force is 1370745820.*/
Follow the steps below to create a SQL baseline for this sql_id.
SELECT sql_handle, plan_name, enabled, accepted, fixed
FROM   dba_sql_plan_baselines
WHERE  signature IN (SELECT exact_matching_signature FROM v$sql WHERE sql_id='&SQL_ID')
ORDER BY accepted, enabled;

Enter value for sql_id: 99v5172d7sg2r
old 2: WHERE signature IN (SELECT exact_matching_signature FROM v$sql WHERE sql_id='&SQL_ID')
new 2: WHERE signature IN (SELECT exact_matching_signature FROM v$sql WHERE sql_id='99v5172d7sg2r')

no rows selected
STEP 2: DROP SQL TUNING SET (STS) IF IT EXISTS
BEGIN
  DBMS_SQLTUNE.DROP_SQLSET(
    sqlset_name => 'SQL_FOR_99v5172d7sg2r');
END;
/
STEP 3: CREATE SQL TUNING SET
BEGIN
  DBMS_SQLTUNE.create_sqlset (
    sqlset_name => 'SQL_FOR_99v5172d7sg2r',
    description => 'SQL tuning set for 99v5172d7sg2r');
END;
/
/* Populate the STS from AWR by specifying the snapshot range covering the
desired plan, which we found using the query above. In this scenario the
relevant snap_ids are 237416 and 239439; change the sql_id and
plan_hash_value accordingly. */
DECLARE
  l_cursor DBMS_SQLTUNE.sqlset_cursor;
BEGIN
  OPEN l_cursor FOR
    SELECT VALUE(p)
    FROM   TABLE (DBMS_SQLTUNE.select_workload_repository (
             237416,  -- begin_snap
             239439,  -- end_snap
             q'<sql_id in ('99v5172d7sg2r') and plan_hash_value in (1370745820)>',  -- basic_filter
             NULL,    -- object_filter
             NULL,    -- ranking_measure1
             NULL,    -- ranking_measure2
             NULL,    -- ranking_measure3
             NULL,    -- result_percentage
             100)     -- result_limit
           ) p;
  DBMS_SQLTUNE.load_sqlset (
    sqlset_name     => 'SQL_FOR_99v5172d7sg2r',
    populate_cursor => l_cursor);
END;
/
STEP 4: CHECK SQL SET DETAILS
column text format a20
select sqlset_name, sqlset_owner, sqlset_id, sql_id,
       substr(sql_text,1,20) text, elapsed_time, buffer_gets,
       parsing_schema_name, plan_hash_value, bind_data
from   dba_sqlset_statements
where  sqlset_name = 'SQL_FOR_99v5172d7sg2r';
STEP 5: LOAD DESIRED PLAN FROM STS AS SQL PLAN BASELINE
DECLARE
  l_plans_loaded PLS_INTEGER;
BEGIN
  l_plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
    sqlset_name => 'SQL_FOR_99v5172d7sg2r');
END;
/
STEP 6: CHECK SQL PLAN BASELINE DETAILS
SELECT sql_handle, plan_name, enabled, accepted, fixed
FROM   dba_sql_plan_baselines
WHERE  signature IN (SELECT exact_matching_signature FROM v$sql WHERE sql_id='&SQL_ID')
ORDER BY accepted, enabled;
STEP 7: ENABLE FIXED=YES
var pbsts varchar2(30);
exec :pbsts := dbms_spm.alter_sql_plan_baseline('SQL_64e76e773c55f1c4','SQL_PLAN_99v5172d7sg2r0a6275e','FIXED','YES');
STEP 8: PURGE OLD EXECUTION PLAN FROM THE SHARED POOL
Find the two values (address and hash_value) required to purge a specific SQL statement from the shared pool:
select address||','||hash_value from gv$sqlarea where sql_id = '99v5172d7sg2r';

ADDRESS||','||HASH_VALUE
---------------------------------------------------------
C00000016FF92618,2591833175

Now use the command below to purge the SQL from the shared pool:
exec sys.dbms_shared_pool.purge('C00000016FF92618,2591833175','C',1);
Re-run the query or program to test.
To monitor the progress of long-running operations while the query runs:
select sid, serial#, context, sofar, totalwork,
       round(sofar/totalwork*100,2) "%_complete"
from   v$session_longops
where  sofar <> totalwork;
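The "%_complete" expression above is simply sofar/totalwork*100 rounded to two decimals. A worked instance with hypothetical sofar and totalwork values:

```shell
# Worked instance of the v$session_longops progress formula, using
# assumed sample values for sofar and totalwork.
pct=$(awk 'BEGIN {
    sofar = 4500; totalwork = 12000     # hypothetical values
    printf "%.2f", sofar * 100 / totalwork
}')
echo "%_complete = $pct"
```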