Monday, 14 November 2022

How to do Table Partitioning in Oracle with Example

Table Partitioning

Partitioning allows tables, indexes, and index-organized tables to be subdivided into smaller pieces, 
enabling these database objects to be managed and accessed at a finer level of granularity.

Partition pruning - when a query's WHERE clause contains predicates on the partition key (the partitioning columns), Oracle scans only the partitions that can contain matching rows. Selecting from specific partitions in this way is called partition pruning.
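As a sketch of pruning in action (the table, column, and partition names below are illustrative, not from a real schema):

```sql
-- Hypothetical table, range-partitioned by month on sale_date.
CREATE TABLE sales
  ( sale_id   NUMBER,
    sale_date DATE,
    amount    NUMBER(10,2) )
PARTITION BY RANGE (sale_date)
( PARTITION sales_2022_10 VALUES LESS THAN (TO_DATE('01-11-2022','DD-MM-YYYY')),
  PARTITION sales_2022_11 VALUES LESS THAN (TO_DATE('01-12-2022','DD-MM-YYYY')) );

-- Because the WHERE clause restricts the partition key, the optimizer
-- can prune down to the single partition SALES_2022_11 instead of
-- scanning the whole table.
SELECT SUM(amount)
FROM   sales
WHERE  sale_date >= TO_DATE('01-11-2022','DD-MM-YYYY')
AND    sale_date <  TO_DATE('01-12-2022','DD-MM-YYYY');
```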

Advantages of Partitioning 

  •  Partitioning enables data management operations such as data loads, index creation and rebuilding, and backup/recovery at the partition level, rather than on the entire table. This results in significantly reduced times for these operations.
  •  Partitioning improves query performance. 
  •  Partitioning can significantly reduce the impact of scheduled downtime for maintenance operations.
  •  Partition independence for partition maintenance operations lets you perform concurrent maintenance operations on different partitions of the same table or index. 
  •  Partitioning increases the availability of mission-critical databases if critical tables and indexes are divided into partitions to reduce the maintenance windows, recovery times, and impact of failures.
  •  Partitioning can be implemented without requiring any modifications to your applications. 
For example, you could convert a non-partitioned table to a partitioned table without needing 
to modify any of the SELECT statements or DML statements which access that table; you do not
need to rewrite your SQL.

Partitioning Methods

Oracle provides the following partitioning methods:

• Range Partitioning 

Divide the table based on a range of values.

• List Partitioning 

Divide the table based on a list of values.
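For instance, a list-partitioned table groups rows by discrete key values (the table, columns, and values below are illustrative):

```sql
-- Hypothetical example: partition customers by an explicit list of regions.
CREATE TABLE customers_by_region
  ( cust_id NUMBER,
    region  VARCHAR2(10) )
PARTITION BY LIST (region)
( PARTITION p_north VALUES ('NORTH'),
  PARTITION p_south VALUES ('SOUTH'),
  PARTITION p_other VALUES (DEFAULT) );  -- catch-all for any other region
```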

• Hash Partitioning

Divide the table based on a hash key generated from the values of a column.
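For instance (the table and column names are illustrative), hash partitioning spreads rows evenly when there is no natural range or list key:

```sql
-- Hypothetical example: distribute rows across 4 partitions using a
-- hash of cust_id; Oracle decides which partition each row lands in.
CREATE TABLE orders_hashed
  ( order_id NUMBER,
    cust_id  NUMBER )
PARTITION BY HASH (cust_id)
PARTITIONS 4;
```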

• Composite Partitioning 

Divides each partition into subpartitions by combining two of the above partitioning
methods.

Range-Hash composite Partition

CREATE TABLE invoices
(invoice_no    NUMBER NOT NULL,
 invoice_date  DATE   NOT NULL,
 comments      VARCHAR2(500))
PARTITION BY RANGE (invoice_date)
SUBPARTITION BY HASH (invoice_no)
SUBPARTITIONS 8
(PARTITION invoices_q1 VALUES LESS THAN (TO_DATE('01/04/2001', 'DD/MM/YYYY')),
 PARTITION invoices_q2 VALUES LESS THAN (TO_DATE('01/07/2001', 'DD/MM/YYYY')),
 PARTITION invoices_q3 VALUES LESS THAN (TO_DATE('01/10/2001', 'DD/MM/YYYY')),
 PARTITION invoices_q4 VALUES LESS THAN (TO_DATE('01/01/2002', 'DD/MM/YYYY')));

• Interval Partitioning 

Interval partitioning automatically creates time-based partitions as new data is added.

Interval Table Partition in Oracle 

Once your table has been partitioned, you may come across a problem in the maintenance of the data.

Let us say that the partitioning is based on the value of the PUBLISHED_DATE of a CD or phonograph record. When you insert a row with a date, the row is inserted into the first partition. Later, you discover that the date is incorrect. You attempt to update the date, but you get an error message:

ORA-14402: updating partition key column would cause a partition change

Oracle has stopped the update because it would cause the row to belong to a different partition. Oracle has implemented a feature, called row movement, which allows you to make this kind of update. When you complete the update, the row moves to the appropriate partition automatically.

Enable Row Movement

To change a partitioned table to allow row movement, use this command: 

ALTER TABLE tablename ENABLE ROW MOVEMENT; 

To change it back to not allowing row movement, use this command:

ALTER TABLE tablename DISABLE ROW MOVEMENT; 
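Putting this together with the invoices table created earlier, a minimal sketch (assuming a row with invoice_no 100 exists in the q1 partition; the values are illustrative):

```sql
-- Without row movement, this update raises ORA-14402 because the row
-- would have to move from partition invoices_q1 to invoices_q2.
UPDATE invoices
SET    invoice_date = TO_DATE('15/05/2001','DD/MM/YYYY')
WHERE  invoice_no   = 100;

-- After enabling row movement, the same update succeeds and Oracle
-- relocates the row to the correct partition automatically.
ALTER TABLE invoices ENABLE ROW MOVEMENT;

UPDATE invoices
SET    invoice_date = TO_DATE('15/05/2001','DD/MM/YYYY')
WHERE  invoice_no   = 100;
```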


Row movement: the automatic shifting of a row from one partition to another when the value of the partitioning column changes so that the row belongs in a different partition.

Interval partitioning is an enhancement to range partitioning introduced in Oracle 11g; it automatically creates time-based partitions as new data is added.

Range partitioning allows an object to be partitioned by a specified range on the partitioning key.  For example, if a table was used to store sales data, it might be range partitioned by a DATE column, with each month in a different partition. 

Therefore, every month a new partition would need to be defined in order to store rows for that month.  If a row was inserted for a new month before a partition was defined for that month, the following error would result:

ORA-14400: inserted partition key does not map to any partition 

If this situation occurs, data loading will fail until the new partitions are created.  This can cause serious problems in larger data warehouses where complex reporting has many steps and dependencies in a batch process.  Mission critical reports might be delayed or incorrect due to this problem. 


An Interval Partitioning Example

Interval partitioning simplifies manageability by automatically creating new partitions as needed by the data.  Interval partitioning is enabled in the table's definition by defining one or more range partitions and including a specified interval.  For example, consider the following table:

create table pos_data (
   start_date        DATE,
   store_id          NUMBER,
   inventory_id      NUMBER(6),
   qty_sold          NUMBER(3)
)
PARTITION BY RANGE (start_date)
INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
(
   PARTITION pos_data_p0 VALUES LESS THAN (TO_DATE('1-7-2007', 'DD-MM-YYYY')),
   PARTITION pos_data_p1 VALUES LESS THAN (TO_DATE('1-8-2007', 'DD-MM-YYYY'))
);

Here, two partitions have been defined and an interval of one month has been specified.  If data is loaded into this table with a later date than the greatest defined partition, Oracle will automatically create a new partition for the new month.  In the table above, the greatest defined boundary is August 1, 2007.

Inserting a row that has a date later than August 1, 2007 would raise an error with normal range partitioning.  However, with interval partitioning, Oracle determines the high value of the defined range partitions, called the transition point, and creates new partitions for data that is beyond that high value.

insert into pos_data (start_date, store_id, inventory_id, qty_sold)
values ( '15-AUG-07', 1, 1, 1);

SELECT TABLE_NAME, PARTITION_NAME, PARTITION_POSITION, HIGH_VALUE
FROM   DBA_TAB_PARTITIONS
WHERE  TABLE_NAME = 'POS_DATA'
ORDER BY PARTITION_NAME;


PARTITION_NAME    HIGH_VALUE 
POS_DATA_P0       TO_DATE(' 2007-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
POS_DATA_P1       TO_DATE(' 2007-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P81           TO_DATE(' 2007-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')

Notice that a system generated partition named SYS_P81 has been created upon inserting a row with a partition key greater than the transition point.  Oracle will manage the creation of new partitions for any value beyond the high value.  Therefore, the values do not need to be inserted in sequence.
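For instance (continuing the pos_data example; the specific dates and values are illustrative), rows can arrive out of order and Oracle still creates whichever monthly partition each row needs:

```sql
-- Inserting October data before September data: Oracle creates a
-- partition for each month as it is first needed, in any order.
INSERT INTO pos_data (start_date, store_id, inventory_id, qty_sold)
VALUES (TO_DATE('15-10-2007','DD-MM-YYYY'), 1, 1, 1);

INSERT INTO pos_data (start_date, store_id, inventory_id, qty_sold)
VALUES (TO_DATE('20-09-2007','DD-MM-YYYY'), 1, 1, 1);
```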

Since the partitions are named automatically, Oracle has added a new syntax in order to reference specific partitions effectively.  The normal way to reference a specific partition is to use the partition (partition_name) in the query:

select * from pos_data partition (SYS_P81);

However, it would be cumbersome to look up the system generated partition name each time.  Therefore, the new syntax to specify a partition is by using the partition for (DATE) clause in the query:

select * from pos_data partition for (to_date('15-AUG-2007','dd-mon-yyyy')); 

Another useful feature of partitioning is the ability to distribute partitions across different tablespaces.  With interval partitioning, this can be accomplished by naming all of the tablespaces in the table definition's "store in" clause.  The system created partitions are then assigned to different tablespaces in a round robin manner.  For example, if the choice was to distribute the table across three tablespaces - tablespaceA, tablespaceB, and tablespaceC - use the following clause in the table definition.


INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))

STORE IN (tablespaceA, tablespaceB, tablespaceC)

Restrictions on Interval Partitioning

There are a few restrictions on interval partitioning that must be taken into consideration before deciding if it is appropriate for the business requirement:

  • Cannot be used for index organized tables
  • Must use only one partitioning key column and it must be a DATE or NUMBER
  • Cannot create domain indexes on interval partitioned tables

This feature should be used as an enhancement to range partitioning when uniform distribution of range intervals for new partitions is acceptable.  If the requirement demands the use of uneven intervals when adding new partitions, then interval partitioning would not be the best solution.
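As an illustrative (untested) sketch of the second restriction, a statement like the following is expected to be rejected by Oracle, because the single interval partitioning key must be a DATE or NUMBER, not a VARCHAR2:

```sql
-- Expected to fail: VARCHAR2 is not a valid interval partitioning key.
CREATE TABLE bad_interval
  ( id   NUMBER,
    code VARCHAR2(10) )
PARTITION BY RANGE (code)
INTERVAL (1)
( PARTITION p0 VALUES LESS THAN ('M') );
```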

Interval Partitioning Commands

There are a few new commands to manage interval partitioning.  First, convert a range partitioned table to use interval partitioning by using alter table <table_name> set interval(expr).

Consider this range partitioned table:

create table pos_data_range (
   start_date        DATE,
   store_id          NUMBER,
   inventory_id      NUMBER(6),
   qty_sold          NUMBER(3)
)
PARTITION BY RANGE (start_date)
(
   PARTITION pos_data_p0 VALUES LESS THAN (TO_DATE('1-7-2007', 'DD-MM-YYYY')),
   PARTITION pos_data_p1 VALUES LESS THAN (TO_DATE('1-8-2007', 'DD-MM-YYYY'))
);


If a row with a date of August 15, 2007 is inserted into the table, it will cause an error.

SQL> insert into pos_data_range (start_date, store_id, inventory_id, qty_sold)
  2  values ( '15-AUG-07', 1, 1, 1);
insert into pos_data_range (start_date, store_id, inventory_id, qty_sold)
            *
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition


This range partitioned table can easily be converted to use interval partitioning by using the following command:

alter table pos_data_range set INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'));

Interval partitioning is now enabled, and the row with 15-AUG-07 can be inserted without error since Oracle will automatically create the new partition. To convert the table back to only range partitioning, use the following command:

alter table pos_data_range set INTERVAL();

The table is converted back to a range partitioned table and the boundaries for the interval partitions are set to the boundaries for the range partitions.

Using the same syntax, the interval can also be changed for existing interval partitioned tables.  If changing the original table to be partitioned every three months instead of monthly, use:

alter table pos_data set INTERVAL(NUMTOYMINTERVAL(3, 'MONTH'));

After inserting a row with the date of 15-NOV-07, a new partition is automatically generated with a high value of 01-DEC-07.


insert into pos_data (start_date, store_id, inventory_id, qty_sold)
values ('15-NOV-07', 1, 1, 1);

SELECT TABLE_NAME, PARTITION_NAME, PARTITION_POSITION, HIGH_VALUE
FROM   DBA_TAB_PARTITIONS
WHERE  TABLE_NAME = 'POS_DATA'
ORDER BY PARTITION_NAME;


PARTITION_NAME    HIGH_VALUE 
POS_DATA_P0       TO_DATE(' 2007-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
POS_DATA_P1       TO_DATE(' 2007-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P81           TO_DATE(' 2007-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P84           TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')


The tablespace storage of the interval partitioned table can also be changed using a similar syntax.  For example, to use a round robin tablespace assignment for the table across tablespace1 through tablespace3, issue the following command:

alter table pos_data set STORE IN(tablespace1, tablespace2, tablespace3);


Oracle interval partitioning offers a very useful extension to range partitioning.  This greatly improves the manageability of range partitioned tables.  In addition to providing system generated new partitions, Oracle has provided a new syntax to simplify the reference of specific partitions. 

Real-Time Example

CREATE TABLE EASYOWNER.EASYTABLE_PART
(ADDRESSID NUMBER(38),
 BANK_ID VARCHAR2(8 CHAR),
 ISADDRESSVERIFIED VARCHAR2(1 CHAR),
 NAME_ALT1 VARCHAR2(80 CHAR),
 CORP_REP_ID NUMBER(38),
 CITY VARCHAR2(200 CHAR),
 START_DATE DATE,
 END_DATE DATE,
 RESIDENTIALSTATUS VARCHAR2(25 CHAR),
 USERFIELD5 VARCHAR2(200 CHAR),
 USERDATEFIELD1 DATE,
 NAME VARCHAR2(100 CHAR),
 SALUTATION_CODE VARCHAR2(5 CHAR),
 PHONENO1 VARCHAR2(50 CHAR),
 PHONENO1LOCALCODE VARCHAR2(20 CHAR),
 CITY_CODE VARCHAR2(5 CHAR),
 STATE_CODE VARCHAR2(5 CHAR),
 COUNTRY_CODE VARCHAR2(5 CHAR),
 CORE_CUST_ID VARCHAR2(9 CHAR),
 NONCUSTOMERID NUMBER(38),
 IS_ADDRESS_PROOF_RCVD VARCHAR2(1 CHAR),
 SWIFT_NAME VARCHAR2(35 CHAR),
 TMDATE DATE DEFAULT sysdate,
 BUSINESS_DATE DATE,
 CREATED_DATE DATE,
 LST_UPD_JOB_ID NUMBER(19))
NOCOMPRESS 
TABLESPACE EASYOWNER_DATA
RESULT_CACHE (MODE DEFAULT)
PCTUSED    0
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MAXSIZE          UNLIMITED
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
            FLASH_CACHE      DEFAULT
            CELL_FLASH_CACHE DEFAULT
           )
PARTITION BY RANGE (BUSINESS_DATE)
INTERVAL( NUMTODSINTERVAL(1,'DAY'))
(
   PARTITION part1 VALUES LESS THAN (TO_DATE(' 2022-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS 
    TABLESPACE EASYOWNER_DATA
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    STORAGE    (
                MAXSIZE          UNLIMITED
                BUFFER_POOL      DEFAULT
                FLASH_CACHE      DEFAULT
                CELL_FLASH_CACHE DEFAULT
               )
);


select max(partition_position) from dba_tab_partitions where table_owner='EASYOWNER' and table_name='EASYTABLE_PART';

select partition_name from dba_tab_partitions where table_owner='EASYOWNER' and table_name='EASYTABLE_PART';


SELECT
   TABLE_NAME,
   PARTITION_NAME,
   PARTITION_POSITION,
   HIGH_VALUE
FROM
   DBA_TAB_PARTITIONS
WHERE
   TABLE_NAME='EASYTABLE_PART'
ORDER BY
   PARTITION_NAME;



Range Interval-Hash Composite Partition with a Real-Time Example


1)


Creating a composite interval-hash partitioned table


CREATE TABLE sales_interval_hash
  ( prod_id       NUMBER(6)
  , cust_id       NUMBER
  , time_id       DATE
  , channel_id    CHAR(1)
  , promo_id      NUMBER(6)
  , quantity_sold NUMBER(3)
  , amount_sold   NUMBER(10,2)
  )
 PARTITION BY RANGE (time_id) INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
 SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 4
 (PARTITION before_2016 VALUES LESS THAN (TO_DATE('01-JAN-2016','dd-MON-yyyy'))
 );

SELECT SUBSTR(TABLE_NAME,1,32), SUBSTR(PARTITION_NAME,1,32), SUBSTR(SUBPARTITION_NAME,1,32)
FROM USER_TAB_SUBPARTITIONS WHERE TABLE_NAME ='SALES_INTERVAL_HASH';



2)

CREATE TABLE "testuser"."TEST_TABLE_BKP_PART"
("TEST_SRL_NUM" VARCHAR2(12 CHAR),
 "B2K_ID" VARCHAR2(12 CHAR),
 "B2K_TYPE" VARCHAR2(5 CHAR),
 "SOL_ID" VARCHAR2(8 CHAR),
 "TEST_DATE" DATE,
 "TEST_AMT" NUMBER(20,4),
 "SYS_MAN_FLG" CHAR(1),
 "LAST_TESTSMITTED_BY" VARCHAR2(15 CHAR)
)
TABLESPACE TEST_TBLSPC
PARTITION BY RANGE (TEST_DATE) INTERVAL(NUMTOYMINTERVAL(1,'MONTH'))
SUBPARTITION BY HASH ("TEST_SRL_NUM") SUBPARTITIONS 128 STORE IN (TEST_TBLSPC)
(
PARTITION part1 VALUES LESS THAN (TO_DATE(' 2021-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part2 VALUES LESS THAN (TO_DATE(' 2021-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part3 VALUES LESS THAN (TO_DATE(' 2022-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part4 VALUES LESS THAN (TO_DATE(' 2022-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part5 VALUES LESS THAN (TO_DATE(' 2022-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part6 VALUES LESS THAN (TO_DATE(' 2022-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part7 VALUES LESS THAN (TO_DATE(' 2022-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part8 VALUES LESS THAN (TO_DATE(' 2022-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part9 VALUES LESS THAN (TO_DATE(' 2022-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part10 VALUES LESS THAN (TO_DATE(' 2022-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part11 VALUES LESS THAN (TO_DATE(' 2022-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part12 VALUES LESS THAN (TO_DATE(' 2022-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part13 VALUES LESS THAN (TO_DATE(' 2022-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part14 VALUES LESS THAN (TO_DATE(' 2022-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part15 VALUES LESS THAN (TO_DATE(' 2023-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part16 VALUES LESS THAN (TO_DATE(' 2023-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part17 VALUES LESS THAN (TO_DATE(' 2023-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part18 VALUES LESS THAN (TO_DATE(' 2023-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part19 VALUES LESS THAN (TO_DATE(' 2023-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part20 VALUES LESS THAN (TO_DATE(' 2023-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part21 VALUES LESS THAN (TO_DATE(' 2023-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part22 VALUES LESS THAN (TO_DATE(' 2023-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part23 VALUES LESS THAN (TO_DATE(' 2023-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part24 VALUES LESS THAN (TO_DATE(' 2023-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC,
PARTITION part25 VALUES LESS THAN (TO_DATE(' 2023-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE TEST_TBLSPC
)
NOCACHE
NOPARALLEL
MONITORING
ENABLE ROW MOVEMENT;


select partition_name,partition_position,high_value,interval,TABLESPACE_NAME from dba_tab_partitions where TABLE_NAME='TEST_TABLE_BKP_PART';

set pages 9999
col SUBSTR(TABLE_NAME,1,32) for a30
col SUBSTR(PARTITION_NAME,1,32) for a30
col SUBSTR(SUBPARTITION_NAME,1,32) for a30
SELECT SUBSTR(TABLE_NAME,1,32), SUBSTR(PARTITION_NAME,1,32), SUBSTR(SUBPARTITION_NAME,1,32),TABLESPACE_NAME FROM DBA_TAB_SUBPARTITIONS WHERE TABLE_NAME ='TEST_TABLE_BKP_PART';


SQL> select partition_name,partition_position,high_value,interval,TABLESPACE_NAME from dba_tab_partitions where TABLE_NAME='TEST_TABLE_BKP_PART';

PARTITION_NAME                                     PARTITION_POSITION HIGH_VALUE                                                                       INT TABLESPACE_NAME
-------------------------------------------------- ------------------ -------------------------------------------------------------------------------- --- ------------------------------
PART1                                                               1 TO_DATE(' 2021-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART2                                                               2 TO_DATE(' 2021-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART3                                                               3 TO_DATE(' 2022-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART4                                                               4 TO_DATE(' 2022-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART5                                                               5 TO_DATE(' 2022-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART6                                                               6 TO_DATE(' 2022-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART7                                                               7 TO_DATE(' 2022-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART8                                                               8 TO_DATE(' 2022-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART9                                                               9 TO_DATE(' 2022-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART10                                                             10 TO_DATE(' 2022-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART11                                                             11 TO_DATE(' 2022-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART12                                                             12 TO_DATE(' 2022-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART13                                                             13 TO_DATE(' 2022-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART14                                                             14 TO_DATE(' 2022-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART15                                                             15 TO_DATE(' 2023-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART16                                                             16 TO_DATE(' 2023-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART17                                                             17 TO_DATE(' 2023-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART18                                                             18 TO_DATE(' 2023-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART19                                                             19 TO_DATE(' 2023-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART20                                                             20 TO_DATE(' 2023-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART21                                                             21 TO_DATE(' 2023-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART22                                                             22 TO_DATE(' 2023-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART23                                                             23 TO_DATE(' 2023-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART24                                                             24 TO_DATE(' 2023-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC
PART25                                                             25 TO_DATE(' 2023-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA NO  TEST_TBLSPC

25 rows selected.

SQL> set pages 9999
SELECT SUBSTR(TABLE_NAME,1,32), SUBSTR(PARTITION_NAME,1,32), SUBSTR(SUBPARTITION_NAME,1,32),TABLESPACE_NAME FROM DBA_TAB_SUBPARTITIONS WHERE TABLE_NAME ='TEST_TABLE_BKP_PART';

SUBSTR(TABLE_NAME,1,32)        SUBSTR(PARTITION_NAME,1,32)    SUBSTR(SUBPARTITION_NAME,1,32)           TABLESPACE_NAME
------------------------------ ------------------------------ ---------------------------------------- ------------------------------
TEST_TABLE_BKP_PART            PART1                          SYS_SUBP81663                            TEST_TBLSPC
TEST_TABLE_BKP_PART            PART1                          SYS_SUBP81664                            TEST_TBLSPC
TEST_TABLE_BKP_PART            PART1                          SYS_SUBP81665                            TEST_TBLSPC
TEST_TABLE_BKP_PART            PART1                          SYS_SUBP81666                            TEST_TBLSPC

SQL>

SQL>  create index testuser.MSG_pk_ind on testuser.TEST_TABLE_BKP_PART ( TEST_DATE ) tablespace TEST_TBLSPC local;

Index created.

select * from TEST_TABLE_BKP_PART partition (part1);

Table Purging Activity


02:50:26 SQL> select host_name from v$instance;

HOST_NAME
----------------------------------------------------------------
TEST_HOST

02:50:48 SQL> select name from v$database;

NAME
---------
EASY_DB

02:51:12 SQL> select segment_name,sum(bytes)/1024/1024/1024 from dba_segments where segment_name='EASY_TABLE' group by segment_name;

SEGMENT_NA SUM(BYTES)/1024/1024/1024
---------- -------------------------
EASY_TABLE                70.7070313

02:51:24 SQL> col PARTITION_NAME for a30
select PARTITION_NAME,PARTITION_POSITION,HIGH_VALUE from dba_tab_partitions where TABLE_NAME='EASY_TABLE';
02:51:40 SQL> 
PARTITION_NAME                 PARTITION_POSITION HIGH_VALUE
------------------------------ ------------------ --------------------------------------------------------------------------------
P1                                              1 20200501
SYS_P10146                                      2 20211301
SYS_P10786                                      3 20220201
SYS_P11406                                      4 20220301
SYS_P11966                                      5 20220401
SYS_P12586                                      6 20220501
SYS_P13186                                      7 20220601
SYS_P13806                                      8 20220701
SYS_P14391                                      9 20220801
SYS_P15011                                     10 20220901
SYS_P15631                                     11 20221001

PARTITION_NAME                 PARTITION_POSITION HIGH_VALUE
------------------------------ ------------------ --------------------------------------------------------------------------------
SYS_P16231                                     12 20221101
SYS_P16836                                     13 20221201
SYS_P17436                                     14 20221301

14 rows selected.

02:51:40 SQL> select index_name,status,degree from dba_indexes where table_name='EASY_TABLE';

INDEX_NAME                                                                                                                       STATUS   DEGREE
-------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------
EASY_TABLE_NDX2                                                                                                                  VALID    1
INDX_EASY_TABLE4                                                                                                                 VALID    1
IND_TRAN_UDATE_TIME                                                                                                              VALID    1
IK_EASY_TABLE_ORGDEV                                                                                                             VALID    1
EASY_TABLE_NDX                                                                                                                   VALID    1
NDX_EASY_TABLE3                                                                                                                  VALID    1

6 rows selected.

02:51:58 SQL> alter table OWNER.EASY_TABLE drop partition SYS_P10146;

Table altered.

02:59:17 SQL> alter table OWNER.EASY_TABLE drop partition SYS_P10786;

Table altered.

02:59:29 SQL> alter table OWNER.EASY_TABLE drop partition SYS_P11406;

Table altered.

02:59:41 SQL> alter table OWNER.EASY_TABLE drop partition SYS_P11966;

Table altered.

02:59:51 SQL> alter table OWNER.EASY_TABLE drop partition SYS_P12586;

Table altered.

03:00:02 SQL> alter table OWNER.EASY_TABLE drop partition SYS_P13186;

Table altered.

03:00:12 SQL> col PARTITION_NAME for a30
select PARTITION_NAME,PARTITION_POSITION,HIGH_VALUE from dba_tab_partitions where TABLE_NAME='EASY_TABLE';
03:00:29 SQL> 
PARTITION_NAME                 PARTITION_POSITION HIGH_VALUE
------------------------------ ------------------ --------------------------------------------------------------------------------
P1                                              1 20200501
SYS_P13806                                      2 20220701
SYS_P14391                                      3 20220801
SYS_P15011                                      4 20220901
SYS_P15631                                      5 20221001
SYS_P16231                                      6 20221101
SYS_P16836                                      7 20221201
SYS_P17436                                      8 20221301

8 rows selected.

03:00:29 SQL> select index_name,status,degree from dba_indexes where table_name='EASY_TABLE';

INDEX_NAME                                                                                                                       STATUS   DEGREE
-------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------
EASY_TABLE_NDX2                                                                                                                  UNUSABLE 1
INDX_EASY_TABLE4                                                                                                                 UNUSABLE 1
IND_TRAN_UDATE_TIME                                                                                                              UNUSABLE 1
IK_EASY_TABLE_ORGDEV                                                                                                             UNUSABLE 1
EASY_TABLE_NDX                                                                                                                   UNUSABLE 1
NDX_EASY_TABLE3                                                                                                                  UNUSABLE 1

6 rows selected.

03:00:44 SQL> alter index OWNER.EASY_TABLE_NDX  rebuild parallel 12;

Index altered.

03:02:17 SQL> alter index OWNER.NDX_EASY_TABLE3 rebuild parallel 12;

Index altered.

03:04:32 SQL> alter index OWNER.IK_EASY_TABLE_ORGDEV rebuild parallel 12;

Index altered.

03:06:03 SQL> alter index OWNER.EASY_TABLE_NDX2 rebuild parallel 12;

Index altered.

03:07:53 SQL> alter index OWNER.INDX_EASY_TABLE4 rebuild parallel 12;

Index altered.

03:09:21 SQL> alter index OWNER.IND_TRAN_UDATE_TIME rebuild parallel 12;

Index altered.

03:10:48 SQL> select index_name,status,degree from dba_indexes where table_name='EASY_TABLE';

INDEX_NAME                                                                                                                       STATUS   DEGREE
-------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------
EASY_TABLE_NDX2                                                                                                                  VALID    12
INDX_EASY_TABLE4                                                                                                                 VALID    12
IND_TRAN_UDATE_TIME                                                                                                              VALID    12
IK_EASY_TABLE_ORGDEV                                                                                                             VALID    12
EASY_TABLE_NDX                                                                                                                   VALID    12
NDX_EASY_TABLE3                                                                                                                  VALID    12

6 rows selected.

03:10:58 SQL> alter index OWNER.EASY_TABLE_NDX  noparallel;

Index altered.

03:11:10 SQL> alter index OWNER.NDX_EASY_TABLE3 noparallel;

Index altered.

03:11:19 SQL> alter index OWNER.IK_EASY_TABLE_ORGDEV noparallel;

Index altered.

03:11:27 SQL> alter index OWNER.EASY_TABLE_NDX2 noparallel;

Index altered.

03:11:35 SQL> alter index OWNER.INDX_EASY_TABLE4 noparallel;

Index altered.

03:11:44 SQL> alter index OWNER.IND_TRAN_UDATE_TIME noparallel;

Index altered.

03:12:10 SQL> select index_name,status,degree from dba_indexes where table_name='EASY_TABLE';

INDEX_NAME                                                                                                                       STATUS   DEGREE
-------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------
EASY_TABLE_NDX2                                                                                                                  VALID    1
INDX_EASY_TABLE4                                                                                                                 VALID    1
IND_TRAN_UDATE_TIME                                                                                                              VALID    1
IK_EASY_TABLE_ORGDEV                                                                                                             VALID    1
EASY_TABLE_NDX                                                                                                                   VALID    1
NDX_EASY_TABLE3                                                                                                                  VALID    1

6 rows selected.

03:12:24 SQL> set lines 300 pages 1000
col owner for a30
col object_type for a30
select owner, object_name, status from dba_objects where object_name='EASY_TABLE';

OWNER                          OBJECT_NAME                    STATUS
------------------------------ ------------------------------ -------
OWNER                         EASY_TABLE                     VALID
OWNER                         EASY_TABLE                     VALID
OWNER                         EASY_TABLE                     VALID
OWNER                         EASY_TABLE                     VALID
OWNER                         EASY_TABLE                     VALID
OWNER                         EASY_TABLE                     VALID
OWNER                         EASY_TABLE                     VALID
OWNER                         EASY_TABLE                     VALID
OWNER                         EASY_TABLE                     VALID
BZ1125                         EASY_TABLE                     VALID

10 rows selected.

03:12:40 SQL> 


======================

alter table sales move partition  sales_2009 compress;

alter table sales move partition sales_2009 tablespace crappy_old_disk;

alter tablespace crappy_old_disk read only;

alter index sales_IX partition sales_ix_2009 unusable;

From 12c onward

alter table sales modify partition sales_2012 indexing off;


Exchange partition

---------------

alter table sales exchange partition sales_2013 with table New_table;
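As a fuller sketch (table and partition names here are hypothetical), the exchange swaps the segment of a partition with a standalone table of identical structure; INCLUDING INDEXES and WITHOUT VALIDATION are optional clauses:

```sql
-- Hypothetical example: swap the data of partition SALES_2013
-- with a standalone table of the same column structure.
CREATE TABLE new_table AS
  SELECT * FROM sales WHERE 1 = 0;   -- empty copy with the same columns

ALTER TABLE sales
  EXCHANGE PARTITION sales_2013 WITH TABLE new_table
  INCLUDING INDEXES                  -- also swap matching local index segments
  WITHOUT VALIDATION;                -- skip checking rows against partition bounds
```

The exchange is a data-dictionary-only operation: no rows are physically moved, which makes it a near-instant way to load data into, or archive data out of, a single partition.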

Monday, 7 November 2022

Logs are not shipped to the physical standby database in oracle



May encounter different types of errors depending on the situations:


  • Logs are not shipped at log switch or FAL archiving fails
  • Value of applied column on the primary is not consistent as that on the standby
  • RMAN does not delete logs and results in error RMAN-08137
  • In some rare cases ORA-12154 may show up in the primary alert log
  • Other possible errors ORA-3113, ORA-3135
  • WARN: ARCH: Terminating pid <PID> hung on an I/O operation krsv_proc_kill: Killing 1 processes (Process by index)


CHANGES

The problem usually occurs after OS or network errors, or after restarting the primary or standby instance or rebooting the primary or standby node, which abruptly breaks log shipping between the primary and standby.


CAUSE

ARCx processes on the primary can get stuck on the network indefinitely, or the processes responsible for updating the APPLIED column can hang and be unable to recover on their own.

These same processes may be used for local and remote archiving, the heartbeat, and FAL log fetching on the primary.

So when they are all stuck and the maximum number specified by log_archive_max_processes is reached, they can cause the ambiguous errors shown above.


In the worst case all ARCx processes on the primary are stuck and cannot perform local archiving, so all online redo log files fill up and the primary database hangs.

This may be due to a standby database crash, network errors, or some abrupt outage on the standby or primary.

The other common cause is a firewall.


SOLUTION

ARCx processes on the primary need to be restarted.

Assuming that log transport from the primary is configured by log_archive_dest_2.

Please perform the following:


1) If the Data Guard Broker is running, disable Data Guard Broker on both primary and standby:

SQL> alter system set dg_broker_start=FALSE;


2) On the Primary Database:

- Set log transport state to DEFER status:

SQL> alter system set log_archive_dest_state_2='defer';

SQL> alter system switch logfile;

- Reset log_archive_dest_2

SQL> show parameter log_archive_dest_2   -------------------> Note this value

SQL> alter system set log_archive_dest_2 = '';

- Switch logfiles on the Primary

SQL> alter system switch logfile;


3) On the Standby Database:


- Cancel Managed Recovery

SQL> alter database recover managed standby database cancel;

- Shutdown the Standby Database

SQL> shutdown immediate


4) On the Primary: kill the ARCx Processes. The Database will automatically and immediately respawn them without harm.


ps -ef | grep -i arc

kill -9 <ospid of ARC process> <another ospid of ARC process> ...


5) On standby server, startup Standby Database and resume Managed Recovery


SQL> startup mount;

SQL> alter database recover managed standby database [using current logfile] disconnect;


6) Re-enable Log Transport Services on the Primary:

SQL> alter system set log_archive_dest_2='Service=..............';     -- Set log_archive_dest_2 (which was reset in step 2) back to its original value

SQL> alter system set log_archive_dest_state_2='enable' ;


At this point all the ARCx processes should be up and running on the Primary.
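One way to verify this (a sketch; exact columns vary slightly by version) is to check the archiver processes and the remote destination status on the primary:

```sql
-- Confirm the ARCx processes are running and shipping redo
SELECT process, status, thread#, sequence#
  FROM v$managed_standby
 WHERE process LIKE 'ARC%';

-- Confirm the remote destination (dest_id 2 in this example) shows no error
SELECT dest_id, status, error
  FROM v$archive_dest_status
 WHERE dest_id = 2;
```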


7) Re-enable the Data Guard Broker for both, Primary and Standby if applicable:

SQL> alter system set dg_broker_start=true;

8) Please work with your Network Administrator to make sure the following Firewall Features are disabled.


  • SQLNet fixup protocol
  • Deep Packet Inspection (DPI)
  • SQLNet packet inspection
  • SQL Fixup
  • SQL ALG (Juniper firewall)

NOTE: To enable SQLNET tracing, refer to:


Step by Step Method to Enable SQLNET (Server Tracing) Dynamically on Dataguard (Doc ID 2010476.1)


The wording and features can vary by vendor but all the above have some impact on some packets (not all packets are affected).

Some firewalls can have an adverse effect on certain SQL packets transported across them (again, some not all).

In some rare cases where the above procedure doesn't work, you have to restart the primary instance to get log shipping to resume.


Troubleshooting Data Guard Log Transport Services in oracle


Log Transport Services Overview:

Data Guard is using an Oracle NET-Connection from the Primary to the Standby Database. The Initialization Parameter ‘log_archive_dest_n’ is used for the Transport Services to a particular Standby Database. This Parameter consists of several Attributes to configure the Log Transport Services to the corresponding Standby Database.

If there is a log_archive_dest_n defined on the Primary Database which is pointing to a Standby Database (ie. the ‘SERVICE’-Attribute is used), the Primary Database tries to connect to the Standby Database using the Descriptor or TNS-Alias given in that Attribute. The LGWR-Process creates a Worker Process (LNS-, NSS-, NSA- or TTXX-Process) which is responsible for the Connection to the Standby Database and Transporting the Redo. The corresponding Server Process is a RFS-Process on the Standby Database. In Addition there is an ARCH-Process connected to the Standby Database responsible for the Heartbeat and Gap Resolution Mechanism.

Troubleshooting Steps:

 First of all verify if you can connect to the Standby Database at all. An easy Way to find out is simply open a SQL*PLUS-Session from the Primary Database ORACLE_HOME and try to connect to the Standby Database using the same Descriptor or TNS-Alias given in the ‘SERVICE’-Attribute of log_archive_dest_n:


SQL> connect sys/<Password>@<Standby Service> as sysdba

SQL> select database_role, db_unique_name from v$database;


If it connects and returns the correct Database Role and db_unique_name the Connection basically looks fine. If it returns any Error, likely the Heartbeat is hitting the same Error (you should see in the ALERT.LOG). In that Case you can follow up on

Note 1432367.1: Troubleshooting - Heartbeat failed to connect to standby 

Each log_archive_dest_n has a corresponding Initialization Parameter to toggle the Status for this Destination. This Parameter is called ‘log_archive_dest_state_n’. Verify it is set to ‘enable’ for the corresponding log_archive_dest_n.

Ensure there is sufficient Bandwidth available on the Network. It must be able to cope with the Redo Generation Rate at any Time – even during Peak Times.

Run the ORAchk-Tool against your Configuration to verify if your current Setup matches with the recommended Setup. You can get further Information about ORAchk here:

Note 1268927.2: ORAchk - Health Checks for the Oracle Stack 

Verify if there is any active Firewall or Router in the Network Connection between the Primary and Standby Database. Those can have Features enabled which are able to modify TCP-Packets which can impact the SQLNET-Communication and raise Errors like ORA-3135 or Redo Corruptions in the Log Transport Services

Review the following Attributes of the corresponding log_archive_dest_n Parameter and verify they are set correctly: SERVICE, DB_UNIQUE_NAME and VALID_FOR.
Compare the Settings for the DG_CONFIG in the Initialization Parameter ‘log_archive_config’ with the db_unique_name for your remote Archive Destinations. It must contain the db_unique_names from the Primary and all its Standby Databases. It must be set to the same Value on all the participating Standby Databases, too.

In order to have most Benefit, Standby RedoLogs should be configured and used by Log Transport Services. Review

Note 219344.1: Usage, Benefits and Limitations of Standby Redo Logs (SRL)

If the Data Guard Broker is used, note that all Initialization Parameters and Attributes are maintained by the Data Guard Broker. If you want to modify anything you have to modify the corresponding Data Guard Broker Property. In Case of any Problems you can also review the Data Guard Broker Logfile (DRC<SID>.LOG) for each Instance. It can be found in the ‘trace’-Directory of each Instance (see the Initialization Parameters 'background_dump_dest' or 'diagnostic_dest' for the exact Paths).

To gather further Tracing Information you can enable Archive Tracing on the Primary and Standby Database. Archive Tracing can be enabled via the Initialization Parameter ‘log_archive_trace’. There are different Levels available. Enabling different Levels at the same Time can be done by setting ‘log_archive_trace’ to the Sum of the desired Levels. Setting back to ‘0’ disables the Tracing. Relevant and common Levels for Log Transport Services are:

For the Primary Database:


1:            Tracks archiving of log files

2:            Tracks archive status by archive log file destination

64:         Tracks ARCn process state activity

128:       Tracks FAL server process activity

512:       Tracks LGWR redo shipping network activity

2048:     Tracks RFS/ARCn ping heartbeat


For the Standby Database:


1:            Tracks archiving of log files

2:            Tracks archive status by archive log file destination

64:          Tracks ARCn process state activity

256:       Tracks RFS Logical Client

1024:     Tracks RFS physical client

2048:     Tracks RFS/ARCn ping heartbeat


There are some more Levels concerning local Archiving and Log Apply Services which can be set as well if required; see the Oracle Database Reference Documentation for 'log_archive_trace' for a full List of available Levels. The Tracing Output can be found in the current Process-Tracefiles (eg. LGWR, NSS, RFS…). You can review those Traces to get detailed Information.
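For example, to track both LGWR Redo Shipping (512) and the RFS/ARCn Ping Heartbeat (2048) on the Primary, set the Parameter to their Sum:

```sql
-- 512 + 2048 = 2560: LGWR redo shipping + heartbeat tracing
ALTER SYSTEM SET log_archive_trace=2560;

-- Setting the parameter back to 0 disables the tracing
ALTER SYSTEM SET log_archive_trace=0;
```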


 


What is an Archive Gap And its possible Solution



PURPOSE

 This Document shows the various Possibilities to detect and resolve a Redo Gap on a Standby Database.

 

What is an Archive Gap ??

 

An Archive Gap is a Range of missing Redo on the Standby Site that prevents Log Apply Services to proceed. This typically happens when the Standby Site is unable to receive Redo from the Primary Database or the Redo Information is not available on the Standby Database. Possible and common Causes for Archive Gaps are:

Network Disconnects or stop of Log Transport Services

Outages of the Standby Database

Misconfigurations of Log Transport Services

I/O-Issues on the Standby Site

Manual Deletion of ArchiveLogs before they are applied to the Standby

Insufficient Bandwidth in the Network between the Primary and Standby Site

Once there is an Archive Gap on the Standby Database the Log Apply Services will get stuck until the Gap is resolved, ie. the missing Redo in Form of ArchiveLogs is fetched and made available on the Standby Site. Log Apply Services then can pick it up and proceed.

 

Methods of Gap Resolution


There are 4 Possibilities to resolve an Archive Gap on a Standby Database. Those are discussed below.

Automatic Gap Resolution

Automatic Gap Resolution is performed automatically by the Log Transport Services. Basically the currently transferred Redo is compared with the last received. If there is a Mismatch, it is detected by the receiving RFS-Process on the Standby Database, which will automatically request the missing Log Sequence from the Primary Database again via the ARCH-RFS Heartbeat Ping. This Type of Gap Resolution is using the Service defined in log_archive_dest_n on the Primary Database serving this Standby Database. In Addition the ARCH-RFS Heartbeat Ping performs a Poll of the current Sequence to detect an Archive Gap. If there is one detected it will get resolved the same Way. Once a Gap is resolved the Transport Process (ARCH or LGWR) is notified about the Resolution of the Gap. For Automatic Gap Resolution there is no special Setting or monitoring required.

 

FAL (Fetch Archive Log) Gap Resolution

Once an ArchiveLog is received or archived from a Standby RedoLog on the Standby Database, it is registered in the Standby Controlfile (you can query the Registration by v$archived_log on a Physical Standby Database and dba_logstdby_log on a Logical Standby Database). If such a File is missing or corrupted for any Reason (eg. it got deleted by Fault), FAL is called to perform a Gap Resolution. This is the Case because such missing Logfiles are typically detected by the Log Apply Services on the Standby Database. Those are working independent from the Log Transport Services and do not have a direct Link to the Primary Database. To use FAL, there must be one or two (prior Oracle 11.2.0) Initialization Parameters setup on the Standby Database:

FAL_SERVER: Specify an Oracle Net Service Name (TNS-Alias or Connect Descriptor) that points to the Database from where the missing ArchiveLog(s) should be requested. This can either be the Primary Database, but also another Standby-, ArchiveLog Repository- or Far Sync Standby (> Oracle 12.1.0) Database inside the Data Guard Configuration. It is possible to specify multiple Service Names (Comma separated). FAL will then sequentially attempt those Databases to resolve the Gap.

FAL_CLIENT (< Oracle 11.2.0): Specify an Oracle Net Service Name (TNS-Alias or Connect Descriptor) that points from the FAL_SERVER Database(s) back to the Standby Database (ie. that’s the Destination where the FAL_SERVER Database should send the Redo to). Ensure this TNS-Alias exists in the TNSNAMES.ORA of your FAL_SERVER Database(s). This Parameter is not required any more since Oracle 11.2.0. However you have to ensure there exists a corresponding log_archive_dest_n on your FAL_SERVER Database(s) which is pointing to the Standby Database requesting the Gap Resolution.

Once the Log Apply Services detect an Archive Gap it sends a FAL Request to the FAL_SERVER handing over the FAL_CLIENT (or db_unique_name for Version > 11.1.0). An ARCH-Process on the FAL_SERVER tries to pick up the request Sequence(s) from that Database and sends it back to the FAL_CLIENT (or uses the Destination valid for this db_unique_name). If the first FAL_SERVER is not able to resolve the Gap, the next FAL_SERVER in the List will be attempted. If it cannot be resolved by all FAL_SERVERs the FAL-Request fails and a corresponding Message will be put in the ALERT.LOG of the Standby Database.

In order to successfully complete a Gap Request the requested ArchiveLog Sequence(s) must be available on the FAL_SERVER Database (on Disk and the corresponding Entry in the Controlfile).

FAL is available since Oracle 9.2.0 for Physical Standby Database and Oracle 10.1.0 for Logical Standby Databases.
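A minimal sketch of a FAL Setup on the Standby Database (the TNS-Aliases here are hypothetical):

```sql
-- On the standby: where to request missing archivelogs from
-- (multiple comma-separated services are attempted in order)
ALTER SYSTEM SET fal_server='PRIM_TNS,STBY2_TNS';

-- Pre-11.2 only: how the FAL server should reach this standby
ALTER SYSTEM SET fal_client='STBY_TNS';
```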

 

Manual Gap Resolution

If an Archive Gap cannot be resolved automatically by any of the previously mentioned Methods, you can still try to manually resolve an Archive Gap.

You can query v$archive_gap on a Physical Standby Database or dba_logstdby_log on the Logical Standby Database to determine a current Archive Gap, eg.

 

On Physical standby


SQL> select * from v$archive_gap;

On Logical standby

SQL> select thread#, sequence# from dba_logstdby_log l where next_change# not in

     (select first_change# from dba_logstdby_log where l.thread#=thread#)

     order by thread#, sequence#;

 

Now copy the returned Sequences manually to the desired Location on the Standby Database. If the missing Logfiles are not yet registered on the Standby Database, you have to register them before the Log Apply Services are able to read those Logfiles. You can register ArchiveLogs using

Physical Standby:

SQL> alter database register logfile '<File-Specification>';

 Logical Standby:

SQL> alter database register logical logfile '<File-Specification>';

 

Once they are registered Log Apply Services will pick up the ArchiveLogs and proceed.

 

Roll forward using Incremental Backup (Physical Standby only)

If a Gap cannot be resolved with any of the previous Methods, the Gap is quite large and may take a long Time to resolve or the Gap cannot be resolved because the missing ArchiveLogs do not exist on any Database any more, you can still roll forward a Physical Standby Database using an incremental Backup from SCN. This Feature is available since Oracle 10.2.0. The Idea is to record the latest SCN applied to the Standby Database, then create an incremental Backup from that SCN on Primary Database using RMAN together with a Backup of the current Controlfile (as Standby Controlfile).

Then we first replace the old Standby Controlfile with the Standby Controlfile from the incremental Backup and apply the incremental Backup on the Standby Database. This is a real fast and easy Way to bring a Standby Database back close to the current Status of the Primary Database. Since the Steps to take can be different in the various Releases, please look into Chapter

“Using RMAN Incremental Backups to Roll Forward a Physical Standby Database”
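The core Steps (a sketch only; the exact Procedure differs by Release, so follow the Chapter referenced above) look like this:

```sql
-- 1) On the standby: record the last applied SCN
SELECT current_scn FROM v$database;

-- 2) On the primary (RMAN): take an incremental backup from that SCN
--    (the SCN below is hypothetical) plus a standby controlfile
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE;
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY;

-- 3) On the standby (RMAN): replace the standby controlfile,
--    catalog the transferred backup pieces, then recover
RMAN> RESTORE STANDBY CONTROLFILE FROM '<backup piece>';
RMAN> CATALOG START WITH '<backup location>';
RMAN> RECOVER DATABASE NOREDO;
```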


Sunday, 6 November 2022

ORA-00392: log 3 of thread 1 is being cleared, operation not allowed

 

Problem Description

---------------------------------------------------

We restored a NON-PROD Database from an RMAN full backup (taken from Production), but while opening the database it throws the below error:

 

SQL> ALTER DATABASE OPEN RESETLOGS;

ALTER DATABASE OPEN RESETLOGS

*

ERROR at line 1:

ORA-00392: log 3 of thread 1 is being cleared, operation not allowed

ORA-00312: online log 3 thread 1: '/db1/oraredo/EASYDB/EASYDB_redo_03a.log'

ORA-00312: online log 3 thread 1: '/db1/oraredo/EASYDB/EASYDB_redo_03b.log'

 

The database is in mount stage, and we are not able to drop the online redo logfile as it is the current logfile.

 

SQL> SELECT group#,sequence#,status,next_change# FROM V$LOG;

 

GROUP# SEQUENCE# STATUS NEXT_CHANGE#

---------- ---------- ---------------- ------------

1 0 UNUSED 6.0919E+12

2 0 UNUSED 6.0919E+12

10 0 UNUSED 6.0919E+12

9 0 UNUSED 6.0919E+12

3 0 CLEARING_CURRENT 1.8447E+19

 

SQL> ALTER DATABASE DROP LOGFILE GROUP 3;

ALTER DATABASE DROP LOGFILE GROUP 3

*

ERROR at line 1:

ORA-01623: log 3 is current log for instance EASYDB (thread 1) - cannot drop

ORA-00312: online log 3 thread 1: '/db1/oraredo/EASYDB/EASYDB_redo_03aa.log'

ORA-00312: online log 3 thread 1: '/db1/oraredo/EASYDB/EASYDB_redo_03bb.log'

 

To fix this issue, we renamed the group 3 logfiles to a location where space is available and created dummy files:

 

a) rename the problematic online redo logfiles

alter database rename file '/db1/oraredo/EASYDB/EASYDB_redo_03aa.log' to '/db1/orafra/EASYDB/EASYDB_redo_03aa.log';

alter database rename file '/db1/oraredo/EASYDB/EASYDB_redo_03bb.log' to '/db1/orafra/EASYDB/EASYDB_redo_03bb.log';

 

b) create dummy files using the OS touch command

{orauat@EASYSERVER:db1/orafra/EASYDB}bdf .

{orauat@EASYSERVER:db1/orafra/EASYDB}touch EASYDB_redo_03bb.log

{orauat@EASYSERVER:db1/orafra/EASYDB} touch EASYDB_redo_03aa.log

{orauat@EASYSERVER:/db1/oraredo/EASYDB}sqlplus

 

SQL*Plus: Release 12.2.0.1.0 Production on Sun Nov 6 23:55:11 2022

 

SQL> show parameter control

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

control_file_record_keep_time        integer     21

control_files                        string      /db1/oraredo/EASYDB/control/cont

                                                 rol01.ctl, /db1/orafra/EASYDB/co

                                                 ntrol/control02.ctl

control_management_pack_access       string      DIAGNOSTIC+TUNING

SQL> !ls -lr /db1/oraredo/EASYDB/control/control01.ctl

-rw-r-----   1 orauat     oinstall   88719360 Nov  7 00:05 /db1/oraredo/EASYDB/control/control01.ctl

 

SQL> !ls -ltr /db1/orafra/EASYDB/control/control02.ctl

-rw-r-----   1 orauat     oinstall   88719360 Nov  7 00:05 /db1/orafra/EASYDB/control/control02.ctl


c) clear problematic online redo log file 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

 

SQL>  alter database clear unarchived logfile group 3;

 

Database altered.

 d) check the status; it has changed now

SQL> select GROUP#,THREAD#,SEQUENCE#,MEMBERS,ARCHIVED,STATUS,FIRST_CHANGE# from v$log order by first_change# ;

 

    GROUP#    THREAD#  SEQUENCE#    MEMBERS ARC STATUS           FIRST_CHANGE#

---------- ---------- ---------- ---------- --- ---------------- -------------

         9          1          0          2 YES UNUSED              6.0919E+12

        10          1          0          2 YES UNUSED              6.0919E+12

         1          1          0          2 YES UNUSED              6.0919E+12

         2          1          0          2 YES UNUSED              6.0919E+12

         3          1          0          2 NO  CURRENT             6.0919E+12

 

e) open the database with the RESETLOGS option, then stop and start it gracefully

SQL> alter database open resetlogs;

Database altered.

SQL>

SQL> select name,open_mode from v$database;

 

NAME      OPEN_MODE

--------- --------------------

EASYDB      READ WRITE

 

SQL>

 


 

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> startup

ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance

ORACLE instance started.

Total System Global Area 8589934592 bytes

Fixed Size                 19289376 bytes

Variable Size            5502929632 bytes

Database Buffers         3053453312 bytes

Redo Buffers               14262272 bytes

Database mounted.

Database opened.

SQL>

