Friday, 12 January 2018

Oracle SGA Instance Tuning And Redo log Tuning Commands



The SGA (System Global Area) of an Oracle database is the area of RAM used by the Oracle instance and allocated during the startup phase.

More specifically, it is allocated during the nomount startup phase.

When the startup command is run, Oracle reads the configuration from the spfile. If the environment is consistent with the spfile, that is, if it meets the parameters indicated there, the instance allocates the memory, completing the nomount startup phase.
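For illustration (a sketch only, assuming a configured spfile is in place), the allocation can be observed by starting the instance only to nomount and querying v$sga:

```sql
-- Start the instance only to the nomount phase: the spfile is read
-- and the SGA is allocated, but no control file or datafiles are opened.
STARTUP NOMOUNT;

-- The allocated SGA can already be inspected at this point.
SELECT name, value/1024/1024 AS mb FROM v$sga;
```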


The main memory areas are:

SHARED POOL - Area dedicated to parsed SQL and PL/SQL (the library cache) and the data dictionary cache

BUFFER CACHE - Area dedicated to caching data blocks in memory, managed by access frequency

REDO BUFFER - Area dedicated to buffering redo entries before they are written to the REDO LOG files

JAVA POOL - Area dedicated to JAVA data processing

etc…

These memory areas can be managed automatically by Oracle.

If you want to disable the automatic management of the memory areas inside the SGA, you must specify the size of the various memory areas in the spfile. At that point the management is no longer dynamic, since the sizes are fixed.
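For example, a manual (non-automatic) configuration might look like the sketch below; the sizes are purely illustrative, not recommendations:

```sql
-- Sketch of manual SGA sizing: sga_target=0 disables automatic
-- redistribution, so each component keeps its fixed size.
ALTER SYSTEM SET sga_target = 0 SCOPE = SPFILE;
ALTER SYSTEM SET shared_pool_size = 800M SCOPE = SPFILE;
ALTER SYSTEM SET db_cache_size = 2G SCOPE = SPFILE;
ALTER SYSTEM SET large_pool_size = 64M SCOPE = SPFILE;
ALTER SYSTEM SET java_pool_size = 64M SCOPE = SPFILE;
-- The instance must be restarted for SCOPE=SPFILE changes to apply.
```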

The automatic management mode of the internal areas of the SGA is called ASMM (Automatic Shared Memory Management). It was introduced in Oracle 10g and is enabled through the parameters sga_target and sga_max_size (with pga_aggregate_target for the PGA), while resetting the individual component parameters to zero.

AMM vs ASMM

Furthermore, from Oracle 11g onward, an additional automatic memory management mode is available, extended to cover both the SGA and the PGA.

This mode is called AMM (Automatic Memory Management) and is activated by setting the memory_target and memory_max_target parameters and removing the sga_target, sga_max_size, and pga_aggregate_target parameters from the spfile.
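A minimal sketch of switching to AMM (the 6G figures are placeholders, not sizing advice):

```sql
-- Enable AMM: Oracle then manages both SGA and PGA within memory_target.
ALTER SYSTEM SET memory_max_target = 6G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target = 6G SCOPE = SPFILE;
-- Reset the SGA/PGA targets so AMM has full control
-- (non-zero values would act as minimums).
ALTER SYSTEM SET sga_target = 0 SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 0 SCOPE = SPFILE;
-- memory_max_target is not dynamic, so a restart is required.
SHUTDOWN IMMEDIATE;
STARTUP;
```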

To check the current SGA and its component sizes:

show parameter memory
show parameter sga
show parameter shared_pool
show parameter db_cache_size



SET PAGESIZE 900
col 'Total SGA (Fixed+Variable)' format 99999999999999999999999
col 'Total PGA Allocated (Mb)' format 99999999999999999999999
col component format a40
col current_size format 99999999999999999999999
SPOOL DBMEMINFO_OEMPRD.log
/* Database Identification */
select NAME, PLATFORM_ID, DATABASE_ROLE from v$database;
select * from V$version where banner like 'Oracle Database%';
select INSTANCE_NAME, to_char(STARTUP_TIME,'DD/MM/YYYY HH24:MI:SS') "STARTUP_TIME" from v$instance;
SET LINESIZE 200
/* Memory resize and sga resize*/
COLUMN parameter FORMAT A25

SELECT start_time,
       end_time,
       component,
       oper_type,
       oper_mode,
       parameter,
       ROUND(initial_size/1024/1024) AS initial_size_mb,
       ROUND(target_size/1024/1024) AS target_size_mb,
       ROUND(final_size/1024/1024) AS final_size_mb,
       status
FROM   v$memory_resize_ops
ORDER BY start_time;

select (sga+pga)/1024/1024 as "sga_pga"
from
(select sum(value) sga from v$sga),
(select sum(pga_used_mem) pga from v$process);

or

select (sga+pga)/1024/1024 as "sga_pga"
from
(select sum(value) sga from v$sga),
(select sum(PGA_ALLOC_MEM) pga from v$process);

show parameter session

show parameter process

 set linesize 300
 select * from v$resource_limit where resource_name in ('processes','sessions');


set linesize 90
set pagesize 60
column component format a25
column Final format 99,999,999,999
column Started format A25
SELECT COMPONENT ,OPER_TYPE,FINAL_SIZE/1024/1024 Final,to_char(start_time,'dd-mon hh24:mi:ss') Started FROM V$SGA_RESIZE_OPS;

select * from V$SGA_RESIZE_OPS;

   
set linesize 200
column component format a25
column Final format 99,999,999,999
column START_TIME   format A25
column END_TIME format A25
column PARAMETER format a25
column OPER_TYPE format a10
 
select * from DBA_HIST_MEMORY_RESIZE_OPS;

SELECT MAX(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='shared pool';
SELECT MAX(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='DEFAULT buffer cache';
SELECT MAX(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='large pool';

SELECT MIN(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='shared pool';
SELECT MIN(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='DEFAULT buffer cache';
SELECT MIN(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='large pool';
SELECT MIN(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='streams pool';
SELECT MIN(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component like 'strea%';



SET HEADING ON
SET PAGESIZE 20
SELECT * FROM (SELECT name, bytes FROM v$sgastat WHERE pool = 'shared pool' AND (bytes > 999999 OR name = 'free memory') ORDER BY bytes DESC) WHERE rownum < 16;

/*buffer cache advice*/

select size_for_estimate,
size_factor,
ESTD_PHYSICAL_READ_FACTOR,
ESTD_PHYSICAL_READ_TIME
from v$db_cache_advice;


column size_for_estimate       format 999,999,999,999
column buffers_for_estimate    format 999,999,999
column estd_physical_read_factor format 999.90
column estd_physical_reads       format 999,999,999

SELECT size_for_estimate, buffers_for_estimate
     , estd_physical_read_factor, estd_physical_reads/1024/1024 estd_physical_reads
  FROM V$DB_CACHE_ADVICE
 WHERE name          = 'DEFAULT'
   AND block_size    = (SELECT value FROM V$PARAMETER
                         WHERE name = 'db_block_size')
   AND advice_status = 'ON';


/* Memory advice */


select * from v$memory_target_advice order by memory_size;

/* free sga */

SELECT bytes/1024/1024 FROM v$sgainfo WHERE name = 'Free SGA Memory Available';
 /* AMM MEMORY settings */

show parameter MEMORY%TARGET
show parameter SGA%TARGET
show parameter PGA%TARGET

/* Current MEMORY settings */
select component, current_size/(1024*1024) from V$MEMORY_DYNAMIC_COMPONENTS;
/* SGA */
select sum(value)/(1024*1024)"Total SGA (Fixed+Variable)" from v$sga;

select * from v$sga_target_advice order by sga_size;
/* PGA */
select sum(PGA_ALLOC_MEM)/1024/1024 "Total PGA Allocated (Mb)" from v$process p, v$session s where p.addr = s.paddr;

select PGA_TARGET_FOR_ESTIMATE/1024/1024,BYTES_PROCESSED/1024/1024,ESTD_EXTRA_BYTES_RW/1024/1024 from  V$PGA_TARGET_ADVICE;
SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
       ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
       ESTD_OVERALLOC_COUNT
FROM   v$pga_target_advice;

COLUMN name FORMAT A30
COLUMN value FORMAT A10

SELECT name, value FROM   v$parameter
WHERE  name IN ('pga_aggregate_target', 'sga_target')
UNION
SELECT 'maximum PGA allocated' AS name, TO_CHAR(value) AS value
FROM   v$pgastat
WHERE  name = 'maximum PGA allocated';

-- Calculate MEMORY_TARGET
select memory_target/1024/1024  from (
SELECT sga.value + GREATEST(pga.value, max_pga.value) AS memory_target
FROM (SELECT TO_NUMBER(value) AS value FROM v$parameter WHERE name = 'sga_target') sga,
     (SELECT TO_NUMBER(value) AS value FROM v$parameter WHERE name = 'pga_aggregate_target') pga,
     (SELECT value FROM v$pgastat WHERE name = 'maximum PGA allocated') max_pga);

select to_char(sysdate,'Mondd hh24:mi:ss') TIME from dual;

set linesize 200
select name,value from v$sysstat where name in ('redo synch poll writes','redo synch polls');

column  EVENT  format a30


--to check the stats gathering job and its maintenance window

set linesize 120
column WINDOW_NAME format a20
column ENABLED format a7
column REPEAT_INTERVAL format a60
column DURATION format a15

select job_name,job_type,program_name,schedule_name,job_class,STATE from dba_scheduler_jobs where job_name = 'GATHER_STATS_JOB';


select m.table_name, m.timestamp, sum(m.inserts), sum(m.updates), sum(m.deletes), t.num_rows, t.last_analyzed  from sys.dba_tab_modifications m, dba_tables t
where m.table_owner = t.owner   and m.table_name = t.table_name  and  m.table_owner not in ('SYS','OUTLN','SYSTEM','TSMSYS','DBSNMP','WMSYS','EXFSYS','XDB','OLAPSYS','MDSYS','CAGAUDIT','SYSMAN','DMSYS','CTXSYS','ORDSYS') and rownum<=100
group by m.table_name, m.timestamp, t.num_rows, t.last_analyzed   order by 1,2
/


SPOOL OFF



Check free space in the index tablespace; it should have enough free space.

select object_name, object_type, owner, last_ddl_time from dba_objects where owner='DMS_USER' AND object_name IN('PRCL_ITEM_PRCL_FK_I','CUST_REQ_CUSTORD_FK_I','CUSTORD_PK','PRCL_ITEM_PK','PRCL_ITEM_PCDR_DATE_I','PRCL_PK','CUST_REQIT_CUST_REQ_FK_I','PRCL_ITEM_PP_FK_I',
'PRCL_ITEM_WMS_STAT_I','WMSORD_PK','P_LOCN_DC_LOCN_FK_I','PLS_PROD_I','WSF_PROD_I','CUST_REQIT_PP_FK_I') order by 3;

select table_name, index_name , degree
from dba_indexes where owner='DMS_USER' AND index_name IN('PRCL_ITEM_PRCL_FK_I','CUST_REQ_CUSTORD_FK_I','CUSTORD_PK','PRCL_ITEM_PK','PRCL_ITEM_PCDR_DATE_I','PRCL_PK','CUST_REQIT_CUST_REQ_FK_I','PRCL_ITEM_PP_FK_I',
'PRCL_ITEM_WMS_STAT_I','WMSORD_PK','P_LOCN_DC_LOCN_FK_I','PLS_PROD_I','WSF_PROD_I','CUST_REQIT_PP_FK_I') order by 3;
 
 


SELECT t.table_name, i.index_name, i.blevel, i.status, t.last_analyzed FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name AND t.table_name = 'QT_CRDR_DELETION';


select a.owner, a.table_name, a.num_rows, b.bytes/1048576 MB, a.last_analyzed from dba_tables a, dba_segments b 
where  a.owner=b.owner and a.table_name=b.segment_name and a.table_name='&table_name' order by b.bytes desc;

 
spool off;



Redo log Tuning


The steps for tuning redo log performance are straightforward: 

1 - Determine the optimal sizing of the log_buffer. 

2 - Size online redo logs to control the frequency of log switches and minimize system waits. 

3 - Optimize the redo log disk to prevent bottlenecks.
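As a sketch of step 2 (group numbers, file paths, and the 1G size are hypothetical), online redo logs are resized by adding larger groups and dropping the old ones once they become inactive:

```sql
-- Add new, larger redo log groups (paths and size are examples only).
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/ORCL/redo04.log') SIZE 1G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/ORCL/redo05.log') SIZE 1G;

-- Switch until the old groups are no longer CURRENT, then checkpoint.
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;

-- Drop an old group only when its status in v$log is INACTIVE.
ALTER DATABASE DROP LOGFILE GROUP 1;
```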

select EVENT,TOTAL_WAITS,TOTAL_TIMEOUTS,TIME_WAITED,AVERAGE_WAIT from   v$system_event where  event in ('log file sync','log file parallel write');
/
select value from v$parameter where name = 'log_buffer';
/
column  member  format a50 wrap         heading 'Member'
column  group#  format 99990            heading 'Group#'
column  status  format a10 wrap         heading 'Status'
SELECT a.group#, a.member, b.bytes/1024/1024 FROM v$logfile a, v$log b WHERE a.group# = b.group#;
/
select group#,member from v$logfile;
/

1. How much redo is generated per day

set linesize 200 pagesize 1000
column day format a3
column total format 9999
column h00 format 999
column h01 format 999
column h02 format 999
column h03 format 999
column h04 format 999
column h05 format 999
column h06 format 999
column h07 format 999
column h08 format 999
column h09 format 999
column h10 format 999
column h11 format 999
column h12 format 999
column h13 format 999
column h14 format 999
column h15 format 999
column h16 format 999
column h17 format 999
column h18 format 999
column h19 format 999
column h20 format 999
column h21 format 999
column h22 format 999
column h23 format 999
column h24 format 999
break on report
compute max of "total" on report
compute max of "h00" on report
compute max of "h01" on report
compute max of "h02" on report
compute max of "h03" on report
compute max of "h04" on report
compute max of "h05" on report
compute max of "h06" on report
compute max of "h07" on report
compute max of "h08" on report
compute max of "h09" on report
compute max of "h10" on report
compute max of "h11" on report
compute max of "h12" on report
compute max of "h13" on report
compute max of "h14" on report
compute max of "h15" on report
compute max of "h16" on report
compute max of "h17" on report
compute max of "h18" on report
compute max of "h19" on report
compute max of "h20" on report
compute max of "h21" on report
compute max of "h22" on report
compute max of "h23" on report
compute sum of NUM on report
compute sum of GB on report
compute sum of MB on report
compute sum of KB on report


REM Script to Report the Redo Log Switch History


alter session set nls_date_format='DD MON YYYY';

select thread#, trunc(completion_time) as "date", to_char(completion_time,'Dy') as "Day", count(1) as "total",
sum(decode(to_char(completion_time,'HH24'),'00',1,0)) as "h00",
sum(decode(to_char(completion_time,'HH24'),'01',1,0)) as "h01",
sum(decode(to_char(completion_time,'HH24'),'02',1,0)) as "h02",
sum(decode(to_char(completion_time,'HH24'),'03',1,0)) as "h03",
sum(decode(to_char(completion_time,'HH24'),'04',1,0)) as "h04",
sum(decode(to_char(completion_time,'HH24'),'05',1,0)) as "h05",
sum(decode(to_char(completion_time,'HH24'),'06',1,0)) as "h06",
sum(decode(to_char(completion_time,'HH24'),'07',1,0)) as "h07",
sum(decode(to_char(completion_time,'HH24'),'08',1,0)) as "h08",
sum(decode(to_char(completion_time,'HH24'),'09',1,0)) as "h09",
sum(decode(to_char(completion_time,'HH24'),'10',1,0)) as "h10",
sum(decode(to_char(completion_time,'HH24'),'11',1,0)) as "h11",
sum(decode(to_char(completion_time,'HH24'),'12',1,0)) as "h12",
sum(decode(to_char(completion_time,'HH24'),'13',1,0)) as "h13",
sum(decode(to_char(completion_time,'HH24'),'14',1,0)) as "h14",
sum(decode(to_char(completion_time,'HH24'),'15',1,0)) as "h15",
sum(decode(to_char(completion_time,'HH24'),'16',1,0)) as "h16",
sum(decode(to_char(completion_time,'HH24'),'17',1,0)) as "h17",
sum(decode(to_char(completion_time,'HH24'),'18',1,0)) as "h18",
sum(decode(to_char(completion_time,'HH24'),'19',1,0)) as "h19",
sum(decode(to_char(completion_time,'HH24'),'20',1,0)) as "h20",
sum(decode(to_char(completion_time,'HH24'),'21',1,0)) as "h21",
sum(decode(to_char(completion_time,'HH24'),'22',1,0)) as "h22",
sum(decode(to_char(completion_time,'HH24'),'23',1,0)) as "h23"
from v$archived_log
where first_time > trunc(sysdate-10)
and dest_id = (select dest_id from V$ARCHIVE_DEST_STATUS where status='VALID' and type='LOCAL')
group by thread#, trunc(completion_time), to_char(completion_time, 'Dy') order by 2,1;


REM Script to calculate the archive log size generated per day for each instance.

select THREAD#, trunc(completion_time) as "DATE"
, count(1) num
, trunc(sum(blocks*block_size)/1024/1024/1024) as GB
, trunc(sum(blocks*block_size)/1024/1024) as MB
, sum(blocks*block_size)/1024 as KB
from v$archived_log
where first_time > trunc(sysdate-10)
and dest_id = (select dest_id from V$ARCHIVE_DEST_STATUS where status='VALID' and type='LOCAL')
group by thread#, trunc(completion_time)
order by 2,1;

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';
select sysdate from dual;

set long 99999999
set pagesize 0
set linesize 120

select max(bytes_redo) as max_redo_hour_bytes
from (select to_char(first_time,'YYYYMMDD HH24') as day_hour, sum(blocks*block_size) as bytes_redo
from v$archived_log
group by to_char(first_time,'YYYYMMDD HH24'));


Per-day archive generation


select to_char(first_time,'dd/mm/yyyy') "FIRST_DATE", count(*), sum(round(blocks*block_size/1024/1024)) "SIZE IN MB"
from v$archived_log WHERE TRUNC(first_time) > sysdate-30
group by to_char(first_time,'dd/mm/yyyy')
order by 1 asc;

FIRST_DATE   COUNT(*) SIZE IN MB
---------- ---------- ----------
01/08/2022        440      74832
02/08/2022        372      56872
03/08/2022        346      48242


col day for a8;
select
to_char(first_time,'YY-MM-DD') day,
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
COUNT(*)
from v$log_history
group by to_char(first_time,'YY-MM-DD')
order by day;


column  member  format a50 wrap         heading 'Member'
column  group#  format 99990            heading 'Group#'
column  status  format a10 wrap         heading 'Status'
SELECT a.group#, a.member, b.bytes/1024/1024 FROM v$logfile a, v$log b WHERE a.group# = b.group# order by 1;

select group#,member from v$logfile order by 1;

select * from v$log;

select group#,members,status,bytes/1024/1024 as mb from v$log;



/*  archive generation */

select to_char(COMPLETION_TIME,'DD/MON/YYYY') Day,
     sum(blocks*block_size)/1048576/1024 "Size(GB)",
     count(sequence#) "Total Archives"
     from (select distinct sequence#,
                  thread#,
                  COMPLETION_TIME,
                  blocks,
                  block_size
           from v$archived_log
           where completion_time>=sysdate-14)
     group by to_char(COMPLETION_TIME,'DD/MON/YYYY')
     order by 1;

show parameter archive


set echo off
set feedback off
column timecol new_value timestamp
column spool_extension new_value suffix
select to_char(sysdate,'Mondd_hh24mi') timecol,
'.out' spool_extension from sys.dual;
column output new_value dbname
select value || '_' output
from v$parameter where name = 'db_name';
spool lfsdiag_&&dbname&&timestamp&&suffix
set trim on
set trims on
set lines 140
set pages 100
set verify off


PROMPT LFSDIAG DATA FOR &&dbname&&timestamp
PROMPT Note: All timings are in milliseconds (1000 milliseconds = 1 second)

PROMPT
PROMPT IMPORTANT PARAMETERS RELATING TO LOG FILE SYNC WAITS:
column name format a40 wra
column value format a40 wra
select inst_id, name, value from gv$parameter
where ((value is not null and name like '%log_archive%') or
name like '%commit%' or name like '%event=%' or name like '%lgwr%')
and name not in (select name from gv$parameter where (name like '%log_archive_dest_state%'
and value = 'enable') or name = 'log_archive_format')
order by 1,2,3;

PROMPT
PROMPT ASH THRESHOLD...
PROMPT
PROMPT This will be the threshold in milliseconds for average log file sync
PROMPT times. This will be used for the next queries to look for the worst
PROMPT 'log file sync' minutes. Any minutes that have an average log file
PROMPT sync time greater than the threshold will be analyzed further.
column threshold_in_ms new_value threshold format 999999999.999
select min(threshold_in_ms) threshold_in_ms
from (select inst_id, to_char(sample_time,'Mondd_hh24mi') minute,
avg(time_waited)/1000 threshold_in_ms
from gv$active_session_history
where event = 'log file sync'
group by inst_id,to_char(sample_time,'Mondd_hh24mi')
order by 3 desc)
where rownum <= 10;

PROMPT
PROMPT ASH WORST MINUTES FOR LOG FILE SYNC WAITS:
PROMPT
PROMPT APPROACH: These are the minutes where the avg log file sync time
PROMPT was the highest (in milliseconds).
column minute format a12 tru
column event format a30 tru
column program format a40 tru
column total_wait_time format 999999999999.999
column avg_time_waited format 999999999999.999
select to_char(sample_time,'Mondd_hh24mi') minute, inst_id, event,
sum(time_waited)/1000 TOTAL_WAIT_TIME , count(*) WAITS,
avg(time_waited)/1000 AVG_TIME_WAITED
from gv$active_session_history
where event = 'log file sync'
group by to_char(sample_time,'Mondd_hh24mi'), inst_id, event
having avg(time_waited)/1000 > &&threshold
order by 1,2;

PROMPT
PROMPT ASH LFS BACKGROUND PROCESS WAITS DURING WORST MINUTES:
PROMPT
PROMPT APPROACH: What is LGWR doing when 'log file sync' waits
PROMPT are happening? LMS info may be relevent for broadcast
PROMPT on commit and LNS data may be relevant for dataguard.
PROMPT If more details are needed see the ASH DETAILS FOR WORST
PROMPT MINUTES section at the bottom of the report.
column inst format 999
column minute format a12 tru
column event format a30 tru
column program format a40 wra
select to_char(sample_time,'Mondd_hh24mi') minute, inst_id inst, program, event,
sum(time_waited)/1000 TOTAL_WAIT_TIME , count(*) WAITS,
avg(time_waited)/1000 AVG_TIME_WAITED
from gv$active_session_history
where to_char(sample_time,'Mondd_hh24mi') in (select to_char(sample_time,'Mondd_hh24mi')
from gv$active_session_history
where event = 'log file sync'
group by to_char(sample_time,'Mondd_hh24mi'), inst_id
having avg(time_waited)/1000 > &&threshold and sum(time_waited)/1000 > 1)
and (program like '%LGWR%' or program like '%LMS%' or
program like '%LNS%' or event = 'log file sync')
group by to_char(sample_time,'Mondd_hh24mi'), inst_id, program, event
order by 1,2,3,5 desc, 4;

PROMPT
PROMPT HISTOGRAM DATA FOR LFS AND OTHER RELATED WAITS:
PROMPT
PROMPT APPROACH: Look at the wait distribution for log file sync waits
PROMPT by looking at "wait_time_milli". Look at the high wait times then
PROMPT see if you can correlate those with other related wait events.
column event format a40 wra
select inst_id, event, wait_time_milli, wait_count
from gv$event_histogram
where event in ('log file sync','gcs log flush sync',
'log file parallel write','wait for scn ack',
'log file switch completion','gc cr grant 2-way',
'gc buffer busy','gc current block 2-way') or
event like '%LGWR%' or event like '%LNS%'
order by 2 desc,1,3;

PROMPT
PROMPT ORDERED BY WAIT_TIME_MILLI
select inst_id, event, wait_time_milli, wait_count
from gv$event_histogram
where event in ('log file sync','gcs log flush sync',
'log file parallel write','wait for scn ack',
'log file switch completion','gc cr grant 2-way',
'gc buffer busy','gc current block 2-way')
or event like '%LGWR%' or event like '%LNS%'
order by 3,1,2 desc;

PROMPT
PROMPT REDO WRITE STATS
PROMPT
PROMPT "redo write time" in centiseconds (100 per second)
PROMPT 11.1: "redo write broadcast ack time" in centiseconds (100 per second)
PROMPT 11.2: "redo write broadcast ack time" in microseconds (1000 per millisecond)
column value format 99999999999999999999
column milliseconds format 99999999999999.999
select v.version, ss.inst_id, ss.name, ss.value,
decode(substr(version,1,4),
'11.1',decode (name,'redo write time',value*10,
'redo write broadcast ack time',value*10),
'11.2',decode (name,'redo write time',value*10,
'redo write broadcast ack time',value/1000),
decode (name,'redo write time',value*10)) milliseconds
from gv$sysstat ss, v$instance v
where name like 'redo write%' and value > 0
order by 1,2,3;

PROMPT
PROMPT AWR WORST AVG LOG FILE SYNC SNAPS:
PROMPT
PROMPT APPROACH: These are the AWR snaps where the average 'log file sync'
PROMPT times were the highest.
column begin format a12 tru
column end format a12 tru
column name format a13 tru
select dhs.snap_id, dhs.instance_number inst, to_char(dhs.begin_interval_time,'Mondd_hh24mi') BEGIN,
to_char(dhs.end_interval_time,'Mondd_hh24mi') END,
en.name, se.time_waited_micro/1000 total_wait_time, se.total_waits,
se.time_waited_micro/1000 / se.total_waits avg_time_waited
from dba_hist_snapshot dhs, wrh$_system_event se, v$event_name en
where (dhs.snap_id = se.snap_id and dhs.instance_number = se.instance_number)
and se.event_id = en.event_id and en.name = 'log file sync' and
dhs.snap_id in (select snap_id from (
select se.snap_id, se.time_waited_micro/1000 / se.total_waits avg_time_waited
from wrh$_system_event se, v$event_name en
where se.event_id = en.event_id and en.name = 'log file sync'
order by avg_time_waited desc)
where rownum < 4)
order by 1,2;

PROMPT
PROMPT AWR REDO WRITE STATS
PROMPT
PROMPT "redo write time" in centiseconds (100 per second)
PROMPT 11.1: "redo write broadcast ack time" in centiseconds (100 per second)
PROMPT 11.2: "redo write broadcast ack time" in microseconds (1000 per millisecond)
column stat_name format a30 tru
select v.version, ss.snap_id, ss.instance_number inst, sn.stat_name, ss.value,
decode(substr(version,1,4),
'11.1',decode (stat_name,'redo write time',value*10,
'redo write broadcast ack time',value*10),
'11.2',decode (stat_name,'redo write time',value*10,
'redo write broadcast ack time',value/1000),
decode (stat_name,'redo write time',value*10)) milliseconds
from wrh$_sysstat ss, wrh$_stat_name sn, v$instance v
where ss.stat_id = sn.stat_id
and sn.stat_name like 'redo write%' and ss.value > 0
and ss.snap_id in (select snap_id from (
select se.snap_id, se.time_waited_micro/1000 / se.total_waits avg_time_waited
from wrh$_system_event se, v$event_name en
where se.event_id = en.event_id and en.name = 'log file sync'
order by avg_time_waited desc)
where rownum < 4)
order by 1,2,3;

PROMPT
PROMPT AWR LFS AND OTHER RELATED WAITS FOR WORST LFS AWRs:
PROMPT
PROMPT APPROACH: These are the AWR snaps where the average 'log file sync'
PROMPT times were the highest. Look at related waits at those times.
column name format a40 tru
select se.snap_id, se.instance_number inst, en.name,
se.total_waits, se.time_waited_micro/1000 total_wait_time,
se.time_waited_micro/1000 / se.total_waits avg_time_waited
from wrh$_system_event se, v$event_name en
where se.event_id = en.event_id and (en.name in
('log file sync','gcs log flush sync',
'log file parallel write','wait for scn ack',
'log file switch completion','gc cr grant 2-way',
'gc buffer busy','gc current block 2-way')
or en.name like '%LGWR%' or en.name like '%LNS%')
and se.snap_id in (select snap_id from (
select se.snap_id, se.time_waited_micro/1000 / se.total_waits avg_time_waited
from wrh$_system_event se, v$event_name en
where se.event_id = en.event_id and en.name = 'log file sync'
order by avg_time_waited desc)
where rownum < 4)
order by 1, 6 desc;

PROMPT
PROMPT AWR HISTOGRAM DATA FOR LFS AND OTHER RELATED WAITS FOR WORST LFS AWRs:
PROMPT Note: This query won't work on 10.2 - ORA-942
PROMPT
PROMPT APPROACH: Look at the wait distribution for log file sync waits
PROMPT by looking at "wait_time_milli". Look at the high wait times then
PROMPT see if you can correlate those with other related wait events.
select eh.snap_id, eh.instance_number inst, en.name, eh.wait_time_milli, eh.wait_count
from wrh$_event_histogram eh, v$event_name en
where eh.event_id = en.event_id and
(en.name in ('log file sync','gcs log flush sync',
'log file parallel write','wait for scn ack',
'log file switch completion','gc cr grant 2-way',
'gc buffer busy','gc current block 2-way')
or en.name like '%LGWR%' or en.name like '%LNS%')
and snap_id in (select snap_id from (
select se.snap_id, se.time_waited_micro/1000 / se.total_waits avg_time_waited
from wrh$_system_event se, v$event_name en
where se.event_id = en.event_id and en.name = 'log file sync'
order by avg_time_waited desc)
where rownum < 4)
order by 1,3 desc,2,4;

PROMPT
PROMPT ORDERED BY WAIT_TIME_MILLI
PROMPT Note: This query won't work on 10.2 - ORA-942
select eh.snap_id, eh.instance_number inst, en.name, eh.wait_time_milli, eh.wait_count
from wrh$_event_histogram eh, v$event_name en
where eh.event_id = en.event_id and
(en.name in ('log file sync','gcs log flush sync',
'log file parallel write','wait for scn ack',
'log file switch completion','gc cr grant 2-way',
'gc buffer busy','gc current block 2-way')
or en.name like '%LGWR%' or en.name like '%LNS%')
and snap_id in (select snap_id from (
select se.snap_id, se.time_waited_micro/1000 / se.total_waits avg_time_waited
from wrh$_system_event se, v$event_name en
where se.event_id = en.event_id and en.name = 'log file sync'
order by avg_time_waited desc)
where rownum < 4)
order by 1,4,2,3 desc;

PROMPT
PROMPT ASH DETAILS FOR WORST MINUTES:
PROMPT
PROMPT APPROACH: If you cannot determine the problem from the data
PROMPT above, you may need to look at the details of what each session
PROMPT is doing during each 'bad' snap. Most likely you will want to
PROMPT note the times of the high log file sync waits, look at what
PROMPT LGWR is doing at those times, and go from there...
column program format a45 wra
column sample_time format a25 tru
column event format a30 tru
column time_waited format 999999.999
column p1 format a40 tru
column p2 format a40 tru
column p3 format a40 tru
select sample_time, inst_id inst, session_id, program, event, time_waited/1000 TIME_WAITED,
p1text||': '||p1 p1,p2text||': '||p2 p2,p3text||': '||p3 p3
from gv$active_session_history
where to_char(sample_time,'Mondd_hh24mi') in (select
to_char(sample_time,'Mondd_hh24mi')
from gv$active_session_history
where event = 'log file sync'
group by to_char(sample_time,'Mondd_hh24mi'), inst_id
having avg(time_waited)/1000 > &&threshold)
and time_waited > 0.5
order by 1,2,3,4,5;

select to_char(sysdate,'Mondd hh24:mi:ss') TIME from dual;

set linesize 200
select name,value from v$sysstat where name in ('redo synch poll writes','redo synch polls');

column  EVENT  format a30

select EVENT,TOTAL_WAITS,TOTAL_TIMEOUTS,TIME_WAITED,AVERAGE_WAIT from   v$system_event where  event in ('log file sync','log file parallel write');
/
select value from v$parameter where name = 'log_buffer';
/
column  member  format a50 wrap         heading 'Member'
column  group#  format 99990            heading 'Group#'
column  status  format a10 wrap         heading 'Status'
SELECT a.group#, a.member, b.bytes/1024/1024 FROM v$logfile a, v$log b WHERE a.group# = b.group#;
/
select group#,member from v$logfile;
/

spool off

PROMPT
PROMPT OUTPUT FILE IS: lfsdiag_&&dbname&&timestamp&&suffix
PROMPT


Proactive performance measures on database EASYPRD running on Server1

As part of peak preparation, to avoid any future business impact and to proactively safeguard the database and the business during peak load while improving database performance, we recommend the values below:

Current: SGA=4608M, shared_pool=0, db_cache_size=0

Recommended: sga_target=5760M, shared_pool_size=800M, db_cache_size=2832M
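If these values are approved, applying them could look like the sketch below (a non-authoritative example; under ASMM the component parameters act as minimum sizes):

```sql
-- Sketch only: apply the recommended targets from the note above.
-- If sga_max_size is currently below 5760M, it must be raised in the
-- spfile first and the instance restarted.
ALTER SYSTEM SET sga_target = 5760M SCOPE = BOTH;
-- With sga_target set, these become floor (minimum) sizes, not fixed sizes.
ALTER SYSTEM SET shared_pool_size = 800M SCOPE = BOTH;
ALTER SYSTEM SET db_cache_size = 2832M SCOPE = BOTH;
```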



Can you please do the analysis from your end, confirm, and let us know whether any other modification is required to improve database performance during the peak period.


I know the PGA size is very dynamic, but for the proper functioning of the database it is still necessary to set an optimal value for peak load. Therefore we have done an analysis from AWR and predicted pga_aggregate_target=1200. We would like to know from you whether this predicted value is good to go.

We are not working on any performance issue. Peak load will start from October, and during that period the load on the database will be higher. To accommodate that load and keep the database running smoothly during peak time, we need to forecast optimized values for better database performance.


 
The internal Oracle DBA team has done the analysis and recommended the optimized values below. Can you please confirm whether these values are good to go?


We have done the analysis internally and come up with the optimized values below to accommodate the load during the peak, which will start from October; the customer wants your view as well before implementation on the production database. Can you confirm from your end too? We are expecting 25% additional load on the database during the peak period.


We have also uploaded the output of V$SGA_RESIZE_OPS (or DBA_HIST_MEMORY_RESIZE_OPS) and V$SGA_TARGET_ADVICE.









