Tuesday 8 January 2019

Apply PSU Oct 2018 Patch (Patch 28349311) on 12c RAC Database Step by Step





============================= Apply PSU Oct 2018 patch on 12c RAC Database Step by step =============

Apply PSU Oct 2018 patch on TestDB residing on node1/node2


Patch 28349311: GRID INFRASTRUCTURE PATCH SET UPDATE 12.1.0.2.181016

Note: This patch is used for both Grid Home and Database patching. It must be applied to the Grid Home before it is applied to the RDBMS home.

1) Download PSU Oct 2018 (28349311) and unzip the latest OPatch on all cluster nodes as the Grid user:

chown oracrs:dba p28349311_121020_AIX64-5L.zip

2) Stop the OEM agent and OSWatcher on both nodes if they are running, and check free space in the RDBMS home (at least 22 GB) and /tmp.
3) Stop the database instance on node1.
4) Take a backup of the Oracle RDBMS home on node1. First delete unnecessary logs (old .trc and .aud files) from the RDBMS home, otherwise the backup will take longer:
find . -name '*.trc' -mtime +10 -exec ls -lrt {} \;

find . -name '*.trc' -mtime +10 -exec rm -rf {} \;

5) Start the database instance on node1.
Since opatchauto stops the instance automatically during patching, the database instance should be up and running.
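The two find commands in step 4 can be wrapped in a small helper with an explicit delete switch, so the listing pass and the destructive pass cannot be confused. A minimal sketch (the function name and dry-run convention are my own, not part of the original procedure):

```shell
#!/bin/sh
# clean_old_traces: hypothetical helper around the find commands above.
# Lists matching files by default; deletes only when --delete is passed.
clean_old_traces() {
    dir=$1; days=$2; mode=${3:---list}
    if [ "$mode" = "--delete" ]; then
        # destructive pass: remove .trc files older than <days> days
        find "$dir" -name '*.trc' -mtime +"$days" -exec rm -f {} \;
    else
        # dry run: only list what would be removed
        find "$dir" -name '*.trc' -mtime +"$days" -exec ls -l {} \;
    fi
}

# usage:
# clean_old_traces /u06/oracle/TestDB/diag/rdbms/TestDB/TestDB1 10           # list only
# clean_old_traces /u06/oracle/TestDB/diag/rdbms/TestDB/TestDB1 10 --delete  # remove
```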
6) Validate the inventory using opatch lsinventory.
7) Verify the database patch registry:
select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version, description from dba_registry_sqlpatch order by action_time;
8) Check for invalid objects and components in the database:
set head on feed on pagesize 200 linesize 200
select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';
col action_time format a13;
col action format a10;
col namespace format a10;
col comments format a25;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.instance_name,r.* from v$instance i, registry$history r;
9) Gather dictionary statistics:

EXECUTE dbms_stats.gather_dictionary_stats;

10) Upgrade OPatch to version 12.2.0.1.16 or later; it is recommended to use the latest OPatch version.

To update OPatch:
A. Download the OPatch utility to a temporary directory.
B. For each Oracle RAC database home being patched, run the following commands as the home owner to extract the OPatch utility:
$ unzip <OPATCH-ZIP> -d <ORACLE_HOME>
$ <ORACLE_HOME>/OPatch/opatch version

Alternatively, rename the old OPatch folder in the RDBMS home and unzip the downloaded zip as the instance owner, or fix the ownership after unzipping:

chown fuo44ppt:dba p6880880_121010_AIX64-5L.zip
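Once the new OPatch is extracted, the reported version can be compared against the required minimum in a script rather than by eye. A minimal sketch (the `version_ge` helper is my own; GNU `sort -V` is assumed for dotted-version ordering):

```shell
#!/bin/sh
# version_ge INSTALLED REQUIRED: succeeds if INSTALLED >= REQUIRED.
# Relies on sort -V to order dotted version strings correctly.
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# usage against the real binary (paths as in this runbook):
# v=$($ORACLE_HOME/OPatch/opatch version | awk '/OPatch Version/ {print $3}')
# version_ge "$v" 12.2.0.1.16 || echo "OPatch too old: $v"
version_ge 12.2.0.1.16 12.2.0.1.12 && echo "12.2.0.1.16 meets the 12.2.0.1.12 minimum"
```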


11) Validate the Oracle Home inventory on both nodes:
<ORACLE_HOME>/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>

12) Verify the patches on both the GI and RDBMS homes:
$ORACLE_HOME/OPatch/opatch lsinventory -oh $RDBMS_HOME
$ORACLE_HOME/OPatch/opatch lsinventory -oh $RDBMS_HOME | grep <patch no>
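The grep in step 12 can be turned into a yes/no check suitable for scripting. A minimal sketch (`patch_applied` is a hypothetical helper; it word-matches the bare patch id in whatever `opatch lsinventory` output is piped to it):

```shell
#!/bin/sh
# patch_applied PATCH_ID: reads `opatch lsinventory` output on stdin and
# succeeds only if the patch id appears as a whole word.
patch_applied() {
    grep -qw "$1"
}

# usage (paths as in this runbook):
# $ORACLE_HOME/OPatch/opatch lsinventory -oh $RDBMS_HOME | patch_applied 28259833 \
#     && echo "patch present" || echo "patch missing"
```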

13) Check for conflicts in the 12.1 home as the root user:
# $ORACLE_HOME/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/28349311 -analyze -oh <ORACLE_RAC_HOME>


14) Go to the OPatch directory and analyze the patch using the opatchauto utility as the root user:

[node1:root:/u06/oracle/testdb/product/12.1.0.2/OPatch:]

export PATH=$PATH:/u06/oracle/testdb/product/12.1.0.2/OPatch
cd /u06/oracle/testdb/product/12.1.0.2/OPatch


 ./opatchauto apply  /oswatcher/28349311 -analyze -oh /u06/oracle/testdb/product/12.1.0.2

15) Apply the patch on the RDBMS home as the root user:


[node1:root:/u06/oracle/testdb/product/12.1.0.2/OPatch:]

export PATH=$PATH:/u06/oracle/testdb/product/12.1.0.2/OPatch
cd /u06/oracle/testdb/product/12.1.0.2/OPatch


 ./opatchauto apply  /oswatcher/28349311 -oh /u06/oracle/testdb/product/12.1.0.2

16) Start the database instance on node1 if it does not start automatically.
17) Stop the database instance on node2.
18) Take a backup of the Oracle RAC home and inventory on node2.
19) Start the database instance on node2.
Since opatchauto stops the instance automatically during patching, the database instance should be up and running.
20) Repeat the analyze and apply steps on node2.
21) Start the database instances on node2 if they do not start automatically.
22) Verify the patch registry: select * from dba_registry_sqlpatch;
23) Validate objects and components, and verify that the patch is installed properly.
24) Start the application and release all related jobs that were on hold.
25) The application team will check that the application works as before.
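The analyze-then-apply sequence above can also be driven by a small wrapper that refuses to apply when the analyze pass fails. A minimal sketch (the paths and the DRY_RUN convention are my own; it defaults to printing the commands instead of executing them):

```shell
#!/bin/sh
# Hypothetical wrapper for the opatchauto analyze + apply sequence (run as root).
# With DRY_RUN=1 (the default here) commands are printed, not executed.
DRY_RUN=${DRY_RUN:-1}
OPATCHAUTO=${OPATCHAUTO:-/u06/oracle/testdb/product/12.1.0.2/OPatch/opatchauto}
PATCH_DIR=${PATCH_DIR:-/oswatcher/28349311}
DB_HOME=${DB_HOME:-/u06/oracle/testdb/product/12.1.0.2}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "WOULD RUN: $*"
    else
        # stop immediately if any step fails, so apply never runs after a bad analyze
        "$@" || { echo "command failed: $*" >&2; exit 1; }
    fi
}

run "$OPATCHAUTO" apply "$PATCH_DIR" -analyze -oh "$DB_HOME"
run "$OPATCHAUTO" apply "$PATCH_DIR" -oh "$DB_HOME"
```

To execute for real, set DRY_RUN=0 after reviewing the printed commands.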


Note:
The step "Loading Modified SQL Files into the Database" has been removed from this section as the execution of opatchauto automatically performs the load of the modified SQL files into the Database.


Details:


Precheck


node1[/u06/oracle/TestDB]$ cd
node1[/u06/oracle/TestDB]$ adrci

ADRCI: Release 12.1.0.2.0 - Production on Mon Dec 31 07:47:29 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u06/oracle/TestDB"
adrci> show alert

Choose the home from which to view the alert log:

1: diag/asm/user_TestDB/host_2865234568_80
2: diag/asmtool/user_TestDB/host_2865234568_80
3: diag/clients/user_TestDB/host_2865234568_80
4: diag/clients/user_TestDB/host_2865234568_82
5: diag/rdbms/TestDB/TestDB1
Q: to quit

Please select option:
node1[/u06/oracle/TestDB]$ cd /u06/oracle/TestDB
node1[/u06/oracle/TestDB]$ cd diag/rdbms/TestDB/TestDB1
node1[/u06/oracle/TestDB/diag/rdbms/TestDB/TestDB1]$ ls -ltr
total 624
drwxr-x---    2 TestDB dba             256 Dec 30 2011  metadata_dgif
drwxr-x---    2 TestDB dba             256 Dec 30 2011  incpkg
drwxr-x---    2 TestDB dba             256 Dec 30 2011  hm
drwxr-x---    2 TestDB dba             256 Dec 30 2011  cdump
drwxr-x---    2 TestDB dba             256 May 26 2014  ir
drwxr-x---    6 TestDB dba             256 Jun  1 2016  log
drwxr-x---    2 TestDB dba             256 Jun  1 2016  metadata_pv
drwxr-x---    2 TestDB dba            4096 Jun  1 2016  metadata
drwxr-x---    4 TestDB dba             256 Oct 15 01:28 incident
drwxr-x---    2 TestDB dba             256 Oct 15 01:28 sweep
drwxr-x---    2 TestDB dba             256 Oct 15 01:28 stage
drwxr-x---    2 TestDB dba            4096 Oct 15 01:28 lck
drwxr-x---    2 TestDB dba            4096 Oct 15 12:28 alert
drwxr-x---    2 TestDB dba          290816 Dec 31 07:45 trace
node1[/u06/oracle/TestDB/diag/rdbms/TestDB/TestDB1]$

find . -name '*.trc' -mtime +10 -exec ls -lrt {} \;

find . -name '*.trc' -mtime +10 -exec rm -rf {} \;

Audit files are under the ORACLE_BASE location:

node1[/u06/oracle/TestDB]$
node1[/u06/oracle/TestDB]$ pwd
/u06/oracle/TestDB
node1[/u06/oracle/TestDB]$ echo $ORACLE_BASE
/u06/oracle/TestDB
node1[/u06/oracle/TestDB]$


find ./ -name "*.aud" -mtime +10 -exec ls -l {} \;

find ./ -name "*.aud" -mtime +10 -exec rm {} \;



SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
TestDB  READ WRITE

SQL> select count(1),inst_id from gv$session group by inst_id;

  COUNT(1)    INST_ID
---------- ----------
       735          1
       718          2




SQL> alter system archive log current;

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL>


SQL>
SQL> set linesize 200
SQL>
SQL> spool precheck.log
SQL>
SQL> select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version,  description from dba_registry_sqlpatch  order by action_time;

ACTION_TIME_2          PATCH_ID  PATCH_UID ACTION          VERSION              DESCRIPTION
-------------------- ---------- ---------- --------------- -------------------- ----------------------------------------------------------------------------------------------------
01-JUN-2016            22291127   19983562 APPLY           12.1.0.2             Database Patch Set Update : 12.1.0.2.160419 (22291127)
28-DEC-2016            24006101   20648640 APPLY           12.1.0.2             Database Patch Set Update : 12.1.0.2.161018 (24006101)
28-JAN-2018            26713565   21606134 APPLY           12.1.0.2             DATABASE PATCH SET UPDATE 12.1.0.2.171017

SQL> set head on feed on pagesize 200 linesize 200
SQL> select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';

no rows selected

SQL>
SQL>
SQL> col action_time format a13;
col action format a10;
col namespace format a10;
col comments format a25;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.instance_name,r.* from v$instance i, registry$history r;

INSTANCE_NAME    ACTION_TIME   ACTION     NAMESPACE  VERSION               ID COMMENTS                  BUNDLE_SERIES
---------------- ------------- ---------- ---------- ------------- ---------- ------------------------- ------------------------------
TestDB1        21-OCT-11 04. APPLY      SERVER     11.2.0.3               0 Patchset 11.2.0.2.0       PSU
                 48.04.498723
                 AM

TestDB1        30-DEC-11 09. APPLY      SERVER     11.2.0.3               0 Patchset 11.2.0.2.0       PSU
                 38.34.326861
                 AM

TestDB1        04-FEB-13 05. APPLY      SERVER     11.2.0.3               5 CPUJan2013                CPU
                 41.51.480687
                 PM

TestDB1        26-MAY-14 10. APPLY      SERVER     11.2.0.3               9 PSU 11.2.0.3.9            PSU
                 57.13.686893
                 PM

TestDB1        01-JUN-16 01. VIEW INVAL                             8289601 view invalidation
                 14.51.453387  IDATE
                 PM

TestDB1                      BOOTSTRAP  DATAPATCH  12.1.0.2                 RDBMS_12.1.0.2.0DBPSU_AIX
                                                                              .PPC64_161210


6 rows selected.

SQL>  EXECUTE dbms_stats.gather_dictionary_stats;

PL/SQL procedure successfully completed.




To update Opatch, use the following instructions:
Download the OPatch utility to a temporary directory.
For each Oracle RAC database home and the GI home that are being patched, run the following commands as the home owner to extract the OPatch utility.
$ unzip <OPATCH-ZIP> -d <ORACLE_HOME>
$ <ORACLE_HOME>/OPatch/opatch version


/u06/oracle/TestDB/product/12.1.0.2

cd /oswatcher

cp    p6880880_121010_AIX64-5L.zip /u06/oracle/TestDB/product/12.1.0.2/

chown TestDB:dba p6880880_121010_AIX64-5L.zip




unzip <OPATCH-ZIP> -d <ORACLE_HOME>
or

drwxr-xr-x    2 TestDB dba           12288 Jan 28 2018  bin
drwxr-xr-x    4 TestDB dba             256 Aug 16 13:54 log
drwxr-x---   15 TestDB dba            4096 Oct 23 11:24 OPatch_old
drwxr-xr-x    2 TestDB dba            4096 Dec 31 04:42 dbs
drwxr-xr-x    2 TestDB dba             256 Dec 31 04:43 QOpatch
-rw-------    1 TestDB dba       148094631 Dec 31 07:16 p6880880_121010_AIX64-5L.zip
node1[/u06/oracle/TestDB/product/12.1.0.2]$ unzip p6880880_121010_AIX64-5L.zip


node1[/u06/oracle/TestDB/product/12.1.0.2]$
node1[/u06/oracle/TestDB/product/12.1.0.2]$ cd OPatch
node1[/u06/oracle/TestDB/product/12.1.0.2/OPatch]$
node1[/u06/oracle/TestDB/product/12.1.0.2/OPatch]$ opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
node1[/u06/oracle/TestDB/product/12.1.0.2/Opatch]$


<ORACLE_HOME>/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME


node1[/u06/oracle/TestDB/product/12.1.0.2]$ cd OPatch
node1[/u06/oracle/TestDB/product/12.1.0.2/OPatch]$
node1[/u06/oracle/TestDB/product/12.1.0.2/OPatch]$ opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
node1[/u06/oracle/TestDB/product/12.1.0.2/OPatch]$ opatch lsinventory -detail -oh $ORACLE_HOME
Oracle Interim Patch Installer version 12.2.0.1.16
Copyright (c) 2018, Oracle Corporation.  All rights reserved.


Oracle Home       : /u06/oracle/TestDB/product/12.1.0.2
Central Inventory : /u02/oracle/oraInventory
   from           : /u06/oracle/TestDB/product/12.1.0.2/oraInst.loc
OPatch version    : 12.2.0.1.16
OUI version       : 12.1.0.2.0
Log file location : /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatch/opatch2018-12-31_07-21-36AM_1.log

Lsinventory Output file location : /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatch/lsinv/lsinventory2018-12-31_07-21-36AM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: node1
ARU platform id: 212
ARU platform description:: IBM_AIX

Installed Top-level Products (1):

Oracle Database 12c                                                  12.1.0.2.0
There are 1 products installed in this Oracle Home.


Installed Products (133):

Assistant Common Files                                               12.1.0.2.0
Buildtools Common Files                                              12.1.0.2.0
Cluster Verification Utility Common Files                            12.1.0.2.0
Database Configuration and Upgrade Assistants                        12.1.0.2.0
Database Migration Assistant for Unicode                             12.1.0.2.0
Database SQL Scripts                                                 12.1.0.2.0
Database Workspace Manager                                           12.1.0.2.0
DB TOOLS Listener                                                    12.1.0.2.0
Deinstallation Tool                                                  12.1.0.2.0
Enterprise Edition Options                                           12.1.0.2.0
Expat libraries                                                       2.0.1.0.2









from root user


ls -ltr /u06/oracle/TestDB/product/12.1.0.2

ls -ltr /oswatcher/28349311

node1[/u06/oracle/TestDB/product/12.1.0.2/OPatch]$
node1[/u06/oracle/TestDB/product/12.1.0.2/OPatch]$ echo $ORACLE_HOME
/u06/oracle/TestDB/product/12.1.0.2
node1[/u06/oracle/TestDB/product/12.1.0.2/Opatch]$

[node1:root:/u06/oracle/TestDB/product/12.1.0.2:]
[node1:root:/u06/oracle/TestDB/product/12.1.0.2:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[node1:root:/u06/oracle/TestDB/product/12.1.0.2:] cd OPatch
[node1:root:/u06/oracle/TestDB/product/12.1.0.2/Opatch:]

[node1:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:] df -g /u06/oracle/TestDB/product/12.1.0.2/OPatch
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/u06_lv       94.00     61.74   35%   130298     1% /u06
[node1:root:/u06/oracle/TestDB/product/12.1.0.2/Opatch:]

[node1:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:] df -g /tmp
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd3           1.25      1.21    4%      560     1% /tmp
[node1:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:]
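The manual df checks above can be scripted so the 22 GB free-space requirement from the prechecks is enforced automatically. A minimal sketch (`free_gb` is a hypothetical helper built on portable `df -Pk` output):

```shell
#!/bin/sh
# free_gb MOUNT: prints the free space of the filesystem holding MOUNT,
# in whole gigabytes (df -Pk reports 1K blocks; field 4 is "available").
free_gb() {
    df -Pk "$1" | awk 'NR==2 { printf "%d\n", $4 / (1024 * 1024) }'
}

# usage: warn if the RDBMS home filesystem has less than 22 GB free
# [ "$(free_gb /u06)" -ge 22 ] || echo "WARNING: less than 22 GB free on /u06"
```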

export PATH=$PATH:/u06/oracle/TestDB/product/12.1.0.2/OPatch


 ./opatchauto apply  /oswatcher/28349311 -analyze -oh /u06/oracle/TestDB/product/12.1.0.2



[node1:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:] ./opatchauto apply  /oswatcher/28349311 -analyze -oh /u06/oracle/TestDB/product/12.1.0.2

OPatchauto session is initiated at Mon Dec 31 07:29:37 2018

System initialization log file is /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchautodb/systemconfig2018-12-31_07-30-37AM.log.

Session log file is /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchauto/opatchauto2018-12-31_07-32-41AM.log
The id for this session is UDM5

Executing OPatch prereq operations to verify patch applicability on home /u06/oracle/TestDB/product/12.1.0.2
Patch applicability verified successfully on home /u06/oracle/TestDB/product/12.1.0.2


Verifying SQL patch applicability on home /u06/oracle/TestDB/product/12.1.0.2

Following step failed during analysis:
 /bin/sh -c 'cd /u06/oracle/TestDB/product/12.1.0.2; ORACLE_HOME=/u06/oracle/TestDB/product/12.1.0.2 ORACLE_SID=TestDB1 /u06/oracle/TestDB/product/12.1.0.2/OPatch/datapatch -prereq -verbose'

SQL patch applicability verified successfully on home /u06/oracle/TestDB/product/12.1.0.2

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:node1
RAC Home:/u06/oracle/TestDB/product/12.1.0.2
Version:12.1.0.2.0


==Following patches were SKIPPED:

Patch: /oswatcher/28349311/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /oswatcher/28349311/28259950
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /oswatcher/28349311/28259833
Log: /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-12-31_07-33-12AM_1.log

Patch: /oswatcher/28349311/28259914
Log: /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-12-31_07-33-12AM_1.log



OPatchauto session completed at Mon Dec 31 07:35:04 2018
Time taken to complete the session 5 minutes, 30 seconds
[node1:root:/u06/oracle/TestDB/product/12.1.0.2/Opatch:]


============= Apply patch on the database home ===============


 ./opatchauto apply  /oswatcher/28349311  -oh /u06/oracle/TestDB/product/12.1.0.2




[node1:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:]  ./opatchauto apply  /oswatcher/28349311  -oh /u06/oracle/TestDB/product/12.1.0.2

OPatchauto session is initiated at Mon Dec 31 10:01:34 2018

System initialization log file is /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchautodb/systemconfig2018-12-31_10-02-13AM.log.

Session log file is /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchauto/opatchauto2018-12-31_10-03-14AM.log
The id for this session is GV9N

Executing OPatch prereq operations to verify patch applicability on home /u06/oracle/TestDB/product/12.1.0.2
Patch applicability verified successfully on home /u06/oracle/TestDB/product/12.1.0.2


Verifying SQL patch applicability on home /u06/oracle/TestDB/product/12.1.0.2
SQL patch applicability verified successfully on home /u06/oracle/TestDB/product/12.1.0.2


Preparing to bring down database service on home /u06/oracle/TestDB/product/12.1.0.2
Successfully prepared home /u06/oracle/TestDB/product/12.1.0.2 to bring down database service


Bringing down database service on home /u06/oracle/TestDB/product/12.1.0.2
Following database(s) and/or service(s) are stopped and will be restarted later during the session: TestDB
Database service successfully brought down on home /u06/oracle/TestDB/product/12.1.0.2


Performing prepatch operation on home /u06/oracle/TestDB/product/12.1.0.2
Prepatch operation completed successfully on home /u06/oracle/TestDB/product/12.1.0.2


Start applying binary patch on home /u06/oracle/TestDB/product/12.1.0.2
Successfully executed command: /usr/sbin/slibclean

Binary patch applied successfully on home /u06/oracle/TestDB/product/12.1.0.2


Performing postpatch operation on home /u06/oracle/TestDB/product/12.1.0.2
Postpatch operation completed successfully on home /u06/oracle/TestDB/product/12.1.0.2


Starting database service on home /u06/oracle/TestDB/product/12.1.0.2
Database service successfully started on home /u06/oracle/TestDB/product/12.1.0.2


Preparing home /u06/oracle/TestDB/product/12.1.0.2 after database service restarted
No step execution required.........


Trying to apply SQL patch on home /u06/oracle/TestDB/product/12.1.0.2
SQL patch applied successfully on home /u06/oracle/TestDB/product/12.1.0.2

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:node1
RAC Home:/u06/oracle/TestDB/product/12.1.0.2
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /oswatcher/28349311/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /oswatcher/28349311/28259950
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /oswatcher/28349311/28259833
Log: /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-12-31_10-06-06AM_1.log

Patch: /oswatcher/28349311/28259914
Log: /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-12-31_10-06-06AM_1.log



OPatchauto session completed at Mon Dec 31 10:12:12 2018
Time taken to complete the session 10 minutes, 39 seconds
[node1:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:]
[node1:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:]
[node1:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:]
[node1:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:]




=========Database TestDB====node2


drwxr-xr-x    2 TestDB dba           16384 Jan 28 2018  lib
drwxr-xr-x    7 TestDB dba            4096 Jan 28 2018  install
drwxr-xr-x    2 TestDB dba           12288 Jan 28 2018  bin
drwxr-xr-x    2 TestDB dba             256 Dec 31 05:41 dbs
drwxr-xr-x    2 TestDB dba             256 Dec 31 07:34 QOpatch
node2[/u06/oracle/TestDB/product/12.1.0.2]$ mv OPatch OPatch_31_dec
node2[/u06/oracle/TestDB/product/12.1.0.2]$ exit
logout
[node2:root:/home/root:] cd /oswatcher
[node2:root:/oswatcher:] ls -ltr
total 6001304
drwxr-xr-x    2 root     system          256 Jun 24 2015  lost+found
drwxr-xr-x    3 oracrs   dba             256 Jul  6 2015  osw_install
drwxrwxr-x    4 fub44ppt dba            4096 Jan 11 2016  orachk
-rw-------    1 oracrs   dba               0 Jul 10 2016  nohup.out
drwxr-xr-x    7 oracrs   dba             256 Oct 12 13:43 28349311
-rw-rw-r--    1 oracrs   dba          256911 Oct 24 11:34 PatchSearch.xml
-rw-------    1 eibm994  staff     167499907 Dec 24 15:07 p6880880_112000_AIX64-5L.zip
-rw-------    1 oracrs   dba       148094631 Dec 24 15:09 p6880880_121010_AIX64-5L.zip
-rwx------    1 oracrs   dba      2753239498 Dec 24 15:19 p28349311_121020_AIX64-5L.zip
-rw-r--r--    1 oracrs   dba         3553280 Dec 27 11:16 grid_home_inventory.tar
[node2:root:/oswatcher:]


[node2:root:/oswatcher:] cp p6880880_121010_AIX64-5L.zip /u06/oracle/TestDB/product/12.1.0.2/
[node2:root:/oswatcher:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[node2:root:/oswatcher:]


chown TestDB:dba p6880880_121010_AIX64-5L.zip


Then unzip it:


node2[/u06/oracle/TestDB/product/12.1.0.2/OPatch]$
node2[/u06/oracle/TestDB/product/12.1.0.2/OPatch]$ opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
node2[/u06/oracle/TestDB/product/12.1.0.2/Opatch]$


Perform the same trace and audit file cleanup on node2 as was done on node1 (remove .trc and .aud files older than 10 days) before taking the backup.


Back up the database home and the inventory


srvctl stop instance -d TestDB -i TestDB2 -o immediate

cd /u02/oracle/oraInventory
tar -cvf /oswatcher/Database_TestDB_inventory.tar .

cd /u06/oracle/TestDB/product/12.1.0.2
tar -cvf /oswatcher/Database_TestDB_home.tar .

du -sg /oswatcher/Database_TestDB_home.tar

srvctl start instance -d TestDB -i TestDB2
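The tar backups above can be verified immediately after creation, so a truncated archive is caught before patching starts. A minimal sketch (`backup_dir` is a hypothetical helper, not part of the original runbook):

```shell
#!/bin/sh
# backup_dir SRC ARCHIVE: tar the contents of SRC into ARCHIVE, then
# re-read the archive to confirm it is complete and readable.
backup_dir() {
    src=$1; out=$2
    ( cd "$src" && tar -cf "$out" . ) || return 1
    tar -tf "$out" >/dev/null    # fails on a truncated or corrupt archive
}

# usage:
# backup_dir /u02/oracle/oraInventory /oswatcher/Database_TestDB_inventory.tar \
#     || echo "inventory backup FAILED"
```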


from root user

export PATH=$PATH:/u06/oracle/TestDB/product/12.1.0.2/OPatch

./opatchauto apply  /oswatcher/28349311 -analyze -oh /u06/oracle/TestDB/product/12.1.0.2


 ./opatchauto apply  /oswatcher/28349311  -oh /u06/oracle/TestDB/product/12.1.0.2



[node2:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:]
[node2:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:] ps -ef|grep pmon
  oracrs 12189922        1   0 05:31:26      -  0:00 asm_pmon_+ASM2
    root 16122084 30998976   0 10:49:23  pts/0  0:00 grep pmon
TestDB 20447448        1   0 10:44:41      -  0:00 ora_pmon_TestDB2
fub44ppt 28115072        1   0 05:42:09      -  0:03 ora_pmon_fub44ppt2
   fsnft 29819116        1   0 05:42:52      -  0:01 ora_pmon_FSNFT2
fug44ppt 42598422        1   0 05:43:51      -  0:01 ora_pmon_fug44ppt2
[node2:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:] ./opatchauto apply  /oswatcher/28349311 -analyze -oh /u06/oracle/TestDB/product/12.1.0.2

OPatchauto session is initiated at Mon Dec 31 10:49:33 2018

System initialization log file is /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchautodb/systemconfig2018-12-31_10-50-27AM.log.

Session log file is /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchauto/opatchauto2018-12-31_10-51-32AM.log
The id for this session is 7KG3

Executing OPatch prereq operations to verify patch applicability on home /u06/oracle/TestDB/product/12.1.0.2
Patch applicability verified successfully on home /u06/oracle/TestDB/product/12.1.0.2


Verifying SQL patch applicability on home /u06/oracle/TestDB/product/12.1.0.2
SQL patch applicability verified successfully on home /u06/oracle/TestDB/product/12.1.0.2

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:node2
RAC Home:/u06/oracle/TestDB/product/12.1.0.2
Version:12.1.0.2.0


==Following patches were SKIPPED:

Patch: /oswatcher/28349311/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /oswatcher/28349311/28259950
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /oswatcher/28349311/28259833
Log: /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-12-31_10-52-01AM_1.log

Patch: /oswatcher/28349311/28259914
Log: /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-12-31_10-52-01AM_1.log



OPatchauto session completed at Mon Dec 31 10:53:49 2018
Time taken to complete the session 4 minutes, 18 seconds
[node2:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:]  ./opatchauto apply  /oswatcher/28349311  -oh /u06/oracle/TestDB/product/12.1.0.2

OPatchauto session is initiated at Mon Dec 31 10:54:31 2018

System initialization log file is /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchautodb/systemconfig2018-12-31_10-55-09AM.log.

Session log file is /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchauto/opatchauto2018-12-31_10-56-06AM.log
The id for this session is TRJ4

Executing OPatch prereq operations to verify patch applicability on home /u06/oracle/TestDB/product/12.1.0.2
Patch applicability verified successfully on home /u06/oracle/TestDB/product/12.1.0.2


Verifying SQL patch applicability on home /u06/oracle/TestDB/product/12.1.0.2
SQL patch applicability verified successfully on home /u06/oracle/TestDB/product/12.1.0.2


Preparing to bring down database service on home /u06/oracle/TestDB/product/12.1.0.2
Successfully prepared home /u06/oracle/TestDB/product/12.1.0.2 to bring down database service


Bringing down database service on home /u06/oracle/TestDB/product/12.1.0.2
Following database(s) and/or service(s) are stopped and will be restarted later during the session: TestDB
Database service successfully brought down on home /u06/oracle/TestDB/product/12.1.0.2


Performing prepatch operation on home /u06/oracle/TestDB/product/12.1.0.2
Prepatch operation completed successfully on home /u06/oracle/TestDB/product/12.1.0.2


Start applying binary patch on home /u06/oracle/TestDB/product/12.1.0.2
Successfully executed command: /usr/sbin/slibclean

Binary patch applied successfully on home /u06/oracle/TestDB/product/12.1.0.2


Performing postpatch operation on home /u06/oracle/TestDB/product/12.1.0.2
Postpatch operation completed successfully on home /u06/oracle/TestDB/product/12.1.0.2


Starting database service on home /u06/oracle/TestDB/product/12.1.0.2
Database service successfully started on home /u06/oracle/TestDB/product/12.1.0.2


Preparing home /u06/oracle/TestDB/product/12.1.0.2 after database service restarted
No step execution required.........


Trying to apply SQL patch on home /u06/oracle/TestDB/product/12.1.0.2
SQL patch applied successfully on home /u06/oracle/TestDB/product/12.1.0.2

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:node2
RAC Home:/u06/oracle/TestDB/product/12.1.0.2
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /oswatcher/28349311/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /oswatcher/28349311/28259950
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /oswatcher/28349311/28259833
Log: /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-12-31_10-58-58AM_1.log

Patch: /oswatcher/28349311/28259914
Log: /u06/oracle/TestDB/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-12-31_10-58-58AM_1.log



OPatchauto session completed at Mon Dec 31 11:05:50 2018
Time taken to complete the session 11 minutes, 20 seconds
[node2:root:/u06/oracle/TestDB/product/12.1.0.2/OPatch:]






SQL> set linesize 200
SQL>
SQL> select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version,  description from dba_registry_sqlpatch  order by action_time;

ACTION_TIME_2          PATCH_ID  PATCH_UID ACTION          VERSION              DESCRIPTION
-------------------- ---------- ---------- --------------- -------------------- ----------------------------------------------------------------------------------------------------
01-JUN-2016            22291127   19983562 APPLY           12.1.0.2             Database Patch Set Update : 12.1.0.2.160419 (22291127)
28-DEC-2016            24006101   20648640 APPLY           12.1.0.2             Database Patch Set Update : 12.1.0.2.161018 (24006101)
28-JAN-2018            26713565   21606134 APPLY           12.1.0.2             DATABASE PATCH SET UPDATE 12.1.0.2.171017
31-DEC-2018            28259833   22490763 APPLY           12.1.0.2             DATABASE PATCH SET UPDATE 12.1.0.2.181016

SQL>



SQL> set head on feed on pagesize 200 linesize 200
select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';

no rows selected

SQL> col action_time format a13;
col action format a10;
col namespace format a10;
col comments format a25;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.instance_name,r.* from v$instance i, registry$history r;

INSTANCE_NAME    ACTION_TIME   ACTION     NAMESPACE  VERSION               ID COMMENTS                  BUNDLE_SERIES
---------------- ------------- ---------- ---------- ------------- ---------- ------------------------- ------------------------------
TestDB2        21-OCT-11 04. APPLY      SERVER     11.2.0.3               0 Patchset 11.2.0.2.0       PSU
                 48.04.498723
                 AM

TestDB2        30-DEC-11 09. APPLY      SERVER     11.2.0.3               0 Patchset 11.2.0.2.0       PSU
                 38.34.326861
                 AM

TestDB2        04-FEB-13 05. APPLY      SERVER     11.2.0.3               5 CPUJan2013                CPU
                 41.51.480687
                 PM

TestDB2        26-MAY-14 10. APPLY      SERVER     11.2.0.3               9 PSU 11.2.0.3.9            PSU
                 57.13.686893
                 PM

TestDB2        01-JUN-16 01. VIEW INVAL                             8289601 view invalidation
                 14.51.453387  IDATE
                 PM

TestDB2                      BOOTSTRAP  DATAPATCH  12.1.0.2                 RDBMS_12.1.0.2.0DBPSU_AIX


                                                                              .PPC64_161210


6 rows selected.

SQL>  EXECUTE dbms_stats.gather_dictionary_stats;


SQL> select * from dba_registry_sqlpatch;

  PATCH_ID  PATCH_UID VERSION       FLAGS      ACTION     STATUS          INSTALL_ID ACTION_TIME   DESCRIPTION
---------- ---------- ------------- ---------- ---------- --------------- ---------- ------------- ----------------------------------------------------------------------------------------------------
BUNDLE_SERIES                   BUNDLE_ID BUNDLE_DATA
------------------------------ ---------- --------------------------------------------------------------------------------
PATCH_DESCRIPTOR
--------------------------------------------------------------------------------
PATCH_DIRECTORY
----------------------------------------------------------------------------------------------------------------------------------------------------------------
LOGFILE
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  22291127   19983562 12.1.0.2      NB         APPLY      SUCCESS                    01-JUN-16 03. Database Patch Set Update : 12.1.0.2.160419 (22291127)
                                                                                     26.32.189231
                                                                                     PM
PSU                                160419 <bundledata version="12.1.0.2.1" series="Patch Set Update">
                                            <bundle id="1" des


/u06/oracle/TestDB/cfgtoollogs/sqlpatch/22291127/19983562/22291127_apply_TestDB_2016Jun01_15_25_36.log

  24006101   20648640 12.1.0.2      NB         APPLY      SUCCESS                    28-DEC-16 03. Database Patch Set Update : 12.1.0.2.161018 (24006101)
                                                                                     14.12.963805
                                                                                     PM
PSU                                161018 <bundledata version="12.1.0.2.1" series="Patch Set Update">
                                            <bundle id="1" des


/u06/oracle/TestDB/cfgtoollogs/sqlpatch/24006101/20648640/24006101_apply_TestDB_2016Dec28_15_13_51.log

  26713565   21606134 12.1.0.2      NB         APPLY      SUCCESS                  1 28-JAN-18 02. DATABASE PATCH SET UPDATE 12.1.0.2.171017
                                                                                     01.27.971955
                                                                                     PM
PSU                                171017 <bundledata version="12.1.0.2.1" series="Patch Set Update">
                                            <bundle id="1" des
<?xml version="1.0" encoding="utf-8"?>
<sqlPatch ID="26713565" uniquePatchID="21
504B03041400000008008D60434B06D872AA8C020000321000000C00000032363731333536352E786D6CBD9759739B3010C7DFF329347A6E6D8E806106C850DB3D669ADAAD9D4CFBE41120BBAA252173
/u06/oracle/TestDB/cfgtoollogs/sqlpatch/26713565/21606134/26713565_apply_TestDB_2018Jan28_14_00_44.log

  28259833   22490763 12.1.0.2      NB         APPLY      SUCCESS                  2 31-DEC-18 11. DATABASE PATCH SET UPDATE 12.1.0.2.181016
                                                                                     05.49.384504
                                                                                     AM
PSU                                181016 <?xml version="1.0" encoding="UTF-8"?>
                                          <bundledata version="12.1.0.2.1" series="
<?xml version="1.0" encoding="utf-8"?>
<sqlPatch ID="28259833" uniquePatchID="22
504B0304140000000800016D4C4D2F53D926F7020000641600000C00000032383235393833332E786D6CBD985D73E2201486EFFB2B18AE7735898D4D66341DB7BA1F33BBD55D6DA77BE524012D150892
/u06/oracle/TestDB/cfgtoollogs/sqlpatch/28259833/22490763/28259833_apply_TestDB_2018Dec31_11_05_01.log


4 rows selected.

SQL>

RMAN Full Image Copy Backup and Restore

--Backup script which we have used

server_1:TESTDB:/RMAN/TESTDB/image_copy $cat IMAG_BACKUP_TESTDB.sh
rman cmdfile=/rman/oracle/backups/TESTDB/image_copy/IMAG_BACKUP_TESTDB.rcv log=/rman/oracle/backups/TESTDB/image_copy/IMAG_BACKUP_TESTDB.log
server_1:TESTDB:/RMAN/TESTDB/image_copy $cat IMAG_BACKUP_TESTDB.rcv

connect target /
RUN
{
 ALLOCATE CHANNEL c1 TYPE DISK FORMAT '/rman/oracle/backups/TESTDB/image_copy/backups_image_copy_TSMPRD_%U';
 ALLOCATE CHANNEL c2 TYPE DISK FORMAT '/rman/oracle/backups/TESTDB/image_copy/backups_image_copy_TSMPRD_%U';
 ALLOCATE CHANNEL c3 TYPE DISK FORMAT '/rman/oracle/backups/TESTDB/image_copy/backups_image_copy_TSMPRD_%U';
 ALLOCATE CHANNEL c4 TYPE DISK FORMAT '/rman/oracle/backups/TESTDB/image_copy/backups_image_copy_TSMPRD_%U';
 BACKUP AS COPY DATABASE;
 RELEASE CHANNEL c1;
 RELEASE CHANNEL c2;
 RELEASE CHANNEL c3;
 RELEASE CHANNEL c4;
}

Backup location

server_1:TESTDB:/RMAN/TESTDB/image_copy $


--------Restoration steps---------

export ORACLE_SID=TESTDB
export ORACLE_HOME=/oracle/product/10.2.0.4
export PATH=$ORACLE_HOME/bin:$PATH
sqlplus

Start the TESTDB database on server_1 if it is not already started, take a backup of the parameter file/spfile, and then drop the database.

1) Drop the database

Start the database in restrict mode, confirm the name, then drop it:
a) startup restrict
b) show parameter db_name
c) drop database
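The drop sequence above can be captured in a SQL file first, so it can be reviewed before running. This is a sketch; /tmp/drop_db.sql is an illustrative file name, not from the original runbook:

```shell
# Sketch: write the drop-database SQL to a file for review before running.
# /tmp/drop_db.sql is an assumed file name, not from the runbook.
cat > /tmp/drop_db.sql <<'EOF'
startup restrict
show parameter db_name
drop database;
exit
EOF
# Review the file, then run it as sysdba (DESTRUCTIVE -- restore target only):
#   sqlplus / as sysdba @/tmp/drop_db.sql
cat /tmp/drop_db.sql
```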

2) Check the controlfile backup as below and transfer it to server_1

server_prod:tatprd:/export/home/oracle $
server_prod:tatprd:/export/home/oracle $. oraenv
ORACLE_SID = [tatprd] ? TESTDB
server_prod:TESTDB:/export/home/oracle $rman

Recovery Manager: Release 10.2.0.4.0 - Production on Wed Dec 19 15:44:08 2018

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

RMAN> connect target

connected to target database: TESTDB (DBID=296424721)

RMAN> show all
2> ;

using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/rman/oracle/backups/TESTDB/controlfile/%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/distprod1/product/10.2.0.4/dbs/snapcf_TESTDB.f'; # default

RMAN>

cd /rman/oracle/backups/TESTDB/controlfile
ls -ltr

copy control file on server_1 server




3) Startup nomount using the backup spfile

4) Restore the controlfile

connect target /

run
{
allocate channel c1 device type disk;
allocate channel c2 device type disk;
RESTORE CONTROLFILE FROM AUTOBACKUP;
release channel c1;
release channel c2;
}

or

restore the controlfile from an explicit backup piece location:

connect target /

run
{
allocate channel c1 device type disk;
allocate channel c2 device type disk;
RESTORE CONTROLFILE FROM '/RMAN/TESTDB/image_copy/controlfile/c-296424721-20181216-00';
release channel c1;
release channel c2;
}



5) Catalog the backup:

RMAN> catalog start with '/RMAN/TESTDB/image_copy';

list datafilecopy all;   -- lists the cataloged image copies
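The catalog and list steps can also be combined into one RMAN command file. The sketch below only generates the file (/tmp/catalog_copies.rcv is an illustrative name); it would then be run with rman cmdfile=... against the mounted database:

```shell
# Sketch: build an RMAN command file that catalogs the image copies
# and then lists them. /tmp/catalog_copies.rcv is an assumed file name.
cat > /tmp/catalog_copies.rcv <<'EOF'
connect target /
catalog start with '/RMAN/TESTDB/image_copy' noprompt;
list datafilecopy all;
EOF
# Run it while the database is mounted with the restored controlfile:
#   rman cmdfile=/tmp/catalog_copies.rcv log=/tmp/catalog_copies.log
cat /tmp/catalog_copies.rcv
```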


6) Prepare the restore script and run it with nohup


server_1:TESTDB:/RMAN/TESTDB/image_copy/controlfile $cat image_restore.sh
rman cmdfile=/RMAN/TESTDB/image_copy/image_restore.rcv log=/RMAN/TESTDB/image_copy/image_restore.log
server_1:TESTDB:/RMAN/TESTDB/image_copy/controlfile $cat image_restore.rcv
connect target /
RUN
{
 ALLOCATE CHANNEL c1 TYPE DISK ;
 ALLOCATE CHANNEL c2 TYPE DISK ;
 ALLOCATE CHANNEL c3 TYPE DISK ;
 ALLOCATE CHANNEL c4 TYPE DISK ;
 restore database;
 RELEASE CHANNEL c1;
 RELEASE CHANNEL c2;
 RELEASE CHANNEL c3;
 RELEASE CHANNEL c4;
}

server_1:TESTDB:/RMAN/TESTDB/image_copy/controlfile $


nohup ./image_restore.sh &


Monitor the alert log:


/ora/tms01prd/trace/oradata/tms01prd/bdump $
tail -200f alert.log

The database will come up in mount stage:

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- ----------
TESTDB    MOUNTED

SQL>

Check that all datafiles and tempfiles are online and read-write:

SQL> set lines 140 pages 50
col name format A70
select file#, name, status, enabled from v$datafile;


select * from v$tempfile;

Run dbverify while the database is in mount stage. Generate one dbv command per datafile:

set head off
set linesize 200
select 'dbv file=' || name  from v$datafile;
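The spooled output of that query is essentially one dbv command per datafile. A sketch of turning a plain list of datafile paths into a runnable script follows; the sample paths and the blocksize=8192 argument are assumptions (match blocksize to your db_block_size):

```shell
# Sketch: turn a list of datafile paths (one per line) into dbv commands.
# The sample paths are illustrative; blocksize=8192 is an assumption --
# match it to your db_block_size.
printf '%s\n' '/oradata/TESTDB/system01.dbf' '/oradata/TESTDB/users01.dbf' > /tmp/datafiles.txt
awk '{print "dbv file=" $0 " blocksize=8192"}' /tmp/datafiles.txt > /tmp/dbverify_testdb.sh
chmod 755 /tmp/dbverify_testdb.sh
cat /tmp/dbverify_testdb.sh
```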



server_1:TESTDB:/RMAN/TESTDB/image_copy $chmod 775 dbverify_tsmprd.sh
server_1:TESTDB:/RMAN/TESTDB/image_copy $

nohup dbverify_tsmprd.sh&

Validate the database as below:

 run {
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
backup validate check logical database;
 RELEASE CHANNEL c1;
 RELEASE CHANNEL c2;
 RELEASE CHANNEL c3;
 RELEASE CHANNEL c4;
}


SQL> set linesize 200
SQL> select * from v$database_block_corruption;

no rows selected

SQL> select name from v$database
  2  ;

NAME
---------
TESTDB


SQL> !date
Thu Dec 20 15:29:49 GMT 2018

SQL>


=================TESTDB copy command===================

-rw-r-----   1 eibm994  dba      4269809664 Dec 16 08:28 backups_image_copy_TSMPRD_data_D-TESTDB_I-296424721_TS-TSPACED03_FNO-336_9stksq7g












Enter user-name: / as sysdba

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select sum(bytes)/1024/1024/1024 "Total DB size in GB"  from dba_data_files;

Total DB size in GB
-------------------
         1911.25967

SQL> select sum(bytes/1024/1024/1024) tbspc_gb from dba_temp_files;

  TBSPC_GB
----------
53.7269287

SQL> select count(1) from dba_data_files;

  COUNT(1)
----------
       468

SQL> select sum(bytes)/1024/1024/1024 "Total DB size in GB" from dba_segments;

Total DB size in GB
-------------------
         1250.34795

SQL>

12c Oracle Clusterware OCT 2018 PSU Patching /Patch 28349311: GRID INFRASTRUCTURE PATCH STEP BY STEP

Types of Oracle Patches

Oracle regularly makes patches available to upgrade features, enhance security, or fix problems with supported software. The major types of patches are:

 

·         Interim patches - contain a single bug fix or a collection of bug fixes provided as required

·         Interim patches for security bug fixes - contain customer-specific security bug fixes

·         Diagnostic patches - intended to help diagnose or verify a fix or a collection of bug fixes

·         Bundle Patch Updates (BPUs) - a cumulative collection of fixes for a specific product or component

·         Patch Set Updates (PSUs) - a cumulative collection of high impact, low risk, and proven fixes for a specific product or component and Security Patch Updates

·         Security Patch Updates (SPU) - a cumulative collection of security bug fixes. SPUs were formerly known as Critical Patch Updates (CPU).

·         System Patch - contains several sub-patches in a format that can be used by OPatchAuto.

·         Merge Label Request (MLR) - a merge of two or more fixes. MLR creation requires a label for the new set of merged code and a Patch Set Exception.

 

 

 

 

 

Overview of the Patch Process

You can download the patch from Oracle Support; the master note below lists all available updates:

https://support.oracle.com/knowledge/Oracle%20Cloud/2285558_1.html

 

 

The patch can be applied using OPatch or OPatchAuto.

 

OPatch consists of patching utilities that help ensure your Oracle software stays current and secure. The utilities are:

·         OPatch: A Java-based utility that enables the application and rollback of patches to Oracle software.

·         OPatchauto: A patch orchestration tool that generates patching instructions specific to your target configuration and then uses OPatch to perform the patching operations without user intervention. Specifically, OPatchAuto can:

1.  Perform pre-patch checks.

2.  Apply the patch

3.  Start and stop the running servers.

4.  Perform post-patch checks.

5.  Roll back patches when patch deinstallation is required.

 

·         OPatchAuto automates patch application to the Grid Infrastructure (GI) cluster, by applying patches to both the GI and the managed Oracle Real Application Cluster (RAC) homes.

·         Patch orchestration is the automated execution of the patching steps, such as the execution of pre-patch checks, stopping services, applying the binary patches, and starting the services. Patch orchestration for Oracle Database 12c applies the patch to the GI/RAC configuration on that machine, including all of its databases. The OPatchAuto patch orchestration utility is available with version 12.1 of the OPatch utility.

 

Real Application Clusters can be patched in different ways:

·         Rolling Patching

·         Non-Rolling Patching

Advantages of Rolling Patching

 

o    Rolling patching refers to patching different databases, or different instances of the same database (in a Real Application Clusters environment), one at a time, without stopping the database.

o    The advantage of a RAC rolling patch is that it enables at least some instances of the RAC installation to be available during the scheduled outage required for patch upgrades. Only the RAC instance that is currently being patched needs to be brought down. The other instances can continue to remain available. This means that the impact on the application downtime required for such scheduled outages is further minimized. Oracle’s opatch utility enables the user to apply the patch successively to the different instances of the RAC installation.

o    Before the opatchauto utility, we had to manually bring down the instance and cluster-related services (prepatch.sh) before applying a rolling patch.

o    The GI System patch includes updates for both the Clusterware home and Database home that can be applied in a rolling fashion.

o    Whether a patch is Oracle RAC Rolling Installable is stated in its README.html.

 

 

 

 

 

 

Clusterware patch on Grid Home 12.2.0.1 Step by Step

 

1)Two Node RAC Database with Standby Database

 

Since each RAC database in the environment also has a standby database, the patching sequence should be as below:

 

1) Disable log shipping

2) Apply the PSU patch on the standby clusterware/Grid home and the standby database

3) Apply the PSU patch on the primary database

 

2) Two node RAC Patching

 

a)   Grid home patch

b)   Oracle Home patch

 

We need to apply the PSU patch on both the Grid home and the database home.

12c Oracle Clusterware OCT 2019 PSU Patching /Patch 30116802: GI OCT 2019 RELEASE UPDATE 12.2.0.1.191015: GRID INFRASTRUCTURE PATCH STEP BY STEP

Patch details
==============
Patch 30116802: GI OCT 2019 RELEASE UPDATE 12.2.0.1.191015

This patch applies to both the Grid home and the RAC database home


Prerequisite
============

1) Download Patch 30116802: GI OCT 2019 RELEASE UPDATE 12.2.0.1.191015 from Oracle Support
2) Check free space in the Clusterware home file system /u02 (approx. 15-22 GB free) and in the /tmp file system
   Check for ACFS file systems on the node; if any are mounted, unmount them before stopping the clusterware or starting the patching
3) Check clusterware and database component status
 ps -ef|grep pmon
 ps -ef|grep tns
 crsctl check crs
crsctl status resource -t
 crsctl query crs activeversion
 crsctl query crs releaseversion
 crsctl query crs softwareversion
./crsctl check crs
 ps -ef|grep d.bin

4) Check the OPatch version; it is recommended to have the latest OPatch. Download it from Oracle Support (patch 6880880) and unzip it in the Grid home after renaming the old OPatch folder.
   We must use OPatch utility version 12.2.0.1.17 or later to apply this patch on all platforms
5) Run opatch lsinventory from the Grid home to check the inventory for consistency and corruption
6) Unzip the downloaded patch as the Grid user
7) Run the analyze command using opatchauto as the root user after setting the proper path; take care to give the correct Grid home path
8) There is a bug in this patch, so follow the instructions in the note below:
 opatchauto/opatch apply failing with CLSRSC-46: Error: '<GRID_HOME>/suptools/tfa/release/tfa_home/jlib/jewt4.jar' does not exist (Doc ID 2409411.1)
9) Apply the patch on the Grid home using opatchauto
10) Verify patch details on the Grid home
11) OPatch maintains logs for apply, rollback, and lsinventory operations.

OPatch Log files are located in

ORACLE_HOME/cfgtoollogs/opatch.

Each log file is tagged with the time stamp of the operation.

Each time you run OPatch, a new log file is created.

OPatch maintains an index of processed commands and log files in the opatch_history.txt file.
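Since each run creates a new time-stamped log, the most recent one can be located by sorting on modification time. A sketch, assuming ORACLE_HOME is set to the patched home:

```shell
# Sketch: locate the newest OPatch log and show recent command history.
# Assumes ORACLE_HOME points at the patched home.
latest_log=$(ls -t "$ORACLE_HOME"/cfgtoollogs/opatch/opatch*.log 2>/dev/null | head -1)
echo "Newest OPatch log: $latest_log"
tail -5 "$ORACLE_HOME"/cfgtoollogs/opatch/opatch_history.txt 2>/dev/null
```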

Eg


Patch location
-----------
Node1[/oswatcher]$ cd 2020_patch
Node1[/oswatcher/2020_patch]$ ls -ltr
total 3613688
-rw-------    1 users  staff     170920687 Feb 10 16:03 p6880880_122010_AIX64-5L.zip
-rw-------    1 users  staff    1679281294 Feb 10 16:09 p30116802_122010_AIX64-5L.zip
Node1[/oswatcher/2020_patch]$


1) Change the patch ownership if it is not owned by the Grid owner, unzip the patch, and set 775 permissions (as a precaution)

chown -R oracrs:oinstall 2020_patch

chown oracrs:oinstall p30116802_122010_AIX64-5L.zip
chmod -R 775 30116802

2) Check the OPatch version as the Grid owner

Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ export ORACLE_HOME=/u02/oracle/12.2.0.1/grid/
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ export ORACLE_SID=+ASM1

    $ unzip <OPATCH-ZIP> -d <ORACLE_HOME>
    $ <ORACLE_HOME>/OPatch/opatch version

The version output of the previous command should be 12.2.0.1.17 or later.
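A quick way to verify the reported version meets the 12.2.0.1.17 minimum is a sort -V comparison. This is a sketch: version_ge is a hypothetical helper, not an OPatch feature, and sort -V assumes GNU sort is available:

```shell
# Sketch: check that an OPatch version string meets a minimum.
# version_ge is a hypothetical helper; sort -V (GNU sort) does the compare.
version_ge() {
    # True if $1 >= $2 in version order
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -1)" = "$2" ]
}
# Compare the version printed by "opatch version" to the README minimum.
if version_ge "12.2.0.1.19" "12.2.0.1.17"; then
    echo "OPatch version OK"           # prints "OPatch version OK"
else
    echo "OPatch too old -- install p6880880 first"
fi
```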

3) Validation of Oracle Inventory

Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.14
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid//oraInst.loc
OPatch version    : 12.2.0.1.14
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-12_15-07-11PM_1.log

Lsinventory Output file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/lsinv/lsinventory2020-02-12_15-07-11PM.txt
or

Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch lsinventory -detail -oh /u02/oracle/12.2.0.1/grid
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-22-28AM_1.log

Lsinventory Output file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/lsinv/lsinventory2020-02-15_04-22-28AM.txt
--------------------------------------------------------------------------------
Local Machine Information::



4)  Run OPatch Conflict Check

export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch


Determine whether any currently installed one-off patches conflict with this patch 30116802 as follows:

    For Grid Infrastructure Home , From Grid owner

cd /u02/oracle/12.2.0.1/grid/OPatch

    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/30138470
    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/30122814
    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/30122828
    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/26839277
    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/30093408
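Since the same prereq check runs once per sub-patch, a loop can generate the five commands. A sketch; PATCH_TOP stands in for the unzip location (/oswatcher/2020_patch later in this post), and the loop only prints the commands rather than running them:

```shell
# Sketch: emit one conflict-check command per sub-patch of 30116802.
# PATCH_TOP is a placeholder for <UNZIPPED_PATCH_LOCATION>.
PATCH_TOP=/oswatcher/2020_patch
for sub in 30138470 30122814 30122828 26839277 30093408; do
    echo "opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $PATCH_TOP/30116802/$sub"
done
```

Piping the output through sh (after review) would run all five checks as the Grid owner.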

eg

Testserver[/u02/oracle/12.2.0.1/grid/OPatch]$
Testserver[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/30138470
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-26-24AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
Test[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/30122814
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-26-36AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
Test[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/30122828
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-26-47AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
Test[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/26839277
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-26-57AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
Test[/u02/oracle/12.2.0.1/grid/OPatch]$  opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/30093408
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-27-06AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
Test[/u02/oracle/12.2.0.1/grid/OPatch]$




5) Run OPatch SystemSpace Check from Grid owner


Check if enough free space is available on the ORACLE_HOME filesystem for the patches to be applied as given below:

    For Grid Infrastructure Home, as home user:

        Create file /tmp/patch_list_gihome.txt with the following content:

        % cat /tmp/patch_list_gihome.txt

        <UNZIPPED_PATCH_LOCATION>/30116802/30138470
        <UNZIPPED_PATCH_LOCATION>/30116802/30122814
        <UNZIPPED_PATCH_LOCATION>/30116802/30122828
        <UNZIPPED_PATCH_LOCATION>/30116802/26839277
        <UNZIPPED_PATCH_LOCATION>/30116802/30093408

        Run the opatch command to check if enough free space is available in the Grid Infrastructure Home:

        % $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
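Writing the list file with a here-document avoids typos in the five paths. A sketch; /oswatcher/2020_patch is the unzip location used earlier in this post:

```shell
# Sketch: create the patch list file used by the CheckSystemSpace prereq.
# /oswatcher/2020_patch matches the unzip location shown earlier.
cat > /tmp/patch_list_gihome.txt <<'EOF'
/oswatcher/2020_patch/30116802/30138470
/oswatcher/2020_patch/30116802/30122814
/oswatcher/2020_patch/30116802/30122828
/oswatcher/2020_patch/30116802/26839277
/oswatcher/2020_patch/30116802/30093408
EOF
wc -l /tmp/patch_list_gihome.txt
# Then, as the Grid owner:
#   $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
```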


eg

Test server[/u02/oracle/12.2.0.1/grid/OPatch]$ ls -ltr /tmp/patch_list_gihome.txt
-rw-r--r--    1 oracrs   oinstall        200 Feb 14 16:58 /tmp/patch_list_gihome.txt
Test server[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-30-05AM_1.log

Invoking prereq "checksystemspace"

Prereq "checkSystemSpace" passed.

OPatch succeeded.
Test server[/u02/oracle/12.2.0.1/grid/OPatch]$


6) Due to a bug, if the jdev-rt.jar file is not present, opatchauto will throw an error. As a pre-check, follow the instructions below and remove the entries for jdev-rt.jar and jewt4.jar from the files crsconfig_fileperms and crsconfig_fileperms.sbs.

Error details
----------
CRS-2673: Attempting to stop 'ora.cssd' on 'Test server'
CRS-2677: Stop of 'ora.cssd' on 'Test server' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'Test server'
CRS-2677: Stop of 'ora.gipcd' on 'Test server' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'Test server' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2020/02/15 06:18:30 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.
2020/02/15 06:18:51 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.
2020/02/15 06:18:52 CLSRSC-46: Error: '/u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar' does not exist
2020/02/15 06:18:53 CLSRSC-152: Could not set ownership on '/u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar'

After fixing the cause of the failure, run opatchauto resume.

OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Sat Feb 15 06:18:58 2020
Time taken to complete the session 5 minutes, 40 seconds

 opatchauto failed with error code 42
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] ps -ef|grep pmon
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]
Shell will time out in 60 seconds.

Reason for the issue
----------

[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]
Shell will time out in 60 seconds.
ksh[2]: Timed out waiting for input.
Test server[/home/eibm994]$ sudo su -
Password:
[YOU HAVE NEW MAIL]
[Test server:root:/home/root:] ls -ltr /u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar
ls: 0653-341 The file /u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar does not exist.
[Test server:root:/home/root:] ls -ltr /u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar
ls: 0653-341 The file /u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar does not exist.
[Test server:root:/home/root:]

solution
-----------

[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] cd /u02/oracle/12.2.0.1/grid/crs/utl/Test server
ksh: /u02/oracle/12.2.0.1/grid/crs/utl/Test server:  not found.
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] cd /u02/oracle/12.2.0.1/grid/crs/utl/Test server
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:] ls -ltr crsconfig_fileperms
-rw-r-----    1 oracrs   oinstall      25199 May 16 2019  crsconfig_fileperms
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:] cp -p crsconfig_fileperms crsconfig_fileperms.backup
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:] vi crsconfig_fileperms
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:]
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:]
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:] cat crsconfig_fileperms|grep -i jewt4.jar
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:] cat crsconfig_fileperms|grep -i jdev-rt.jar
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:] cd /u02/oracle/12.2.0.1/grid/crs/sbs
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] ls -ltr crsconfig_fileperms.sbs
-rw-r-----    1 oracrs   oinstall      26177 Oct 17 2018  crsconfig_fileperms.sbs
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] cp -p crsconfig_fileperms.sbs  crsconfig_fileperms.sbs.backup
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] vi crsconfig_fileperms.sbs
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:]
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:]
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] cat crsconfig_fileperms.sbs|grep -i jewt4.jar
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] cat crsconfig_fileperms
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] cat crsconfig_fileperms.sbs|grep -i jdev-rt.jar
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:]


6) One-off patch conflict detection and resolution, run from the root user:

/u02/oracle/12.2.0.1/grid/OPatch/opatchauto apply /oswatcher/2020_patch/30116802 -analyze -oh /u02/oracle/12.2.0.1/grid


7) Stop all RAC database instances and the clusterware on one node, and take a backup of the Grid inventory from that node


srvctl stop instance -d test1db -i test1db1 -o immediate
srvctl stop instance -d test2db  -i test2db1 -o immediate

check status of crs
./crsctl check crs
ps -ef|grep d.bin

Before stopping the cluster, check whether any ACFS file system is mounted on the server. If one is available, unmount it first; otherwise the clusterware will not stop from the root user.

------------unmount acfs file system----------------------------


from root user


/dev/varbes_lv      2.00      1.29   36%     8196     3% /var/opt/BESClient
/dev/asm/alprod-7      2.00      1.66   17%   708512    17% /ppt/alprod
/dev/asm/gfprod-7      2.00      1.77   12%   477200    12% /ppt/gfprod

[Test:root:/u02/oracle/12.2.0.1/grid/bin:] ./crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/alprod-7
VOLUME_DEVICE=/dev/asm/alprod-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/alprod-7
VOLUME_DEVICE=/dev/asm/alprod-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfdata-7
VOLUME_DEVICE=/dev/asm/gfdata-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfdata-7
VOLUME_DEVICE=/dev/asm/gfdata-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfnew-7
VOLUME_DEVICE=/dev/asm/gfnew-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfnew-7
VOLUME_DEVICE=/dev/asm/gfnew-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfprod-7
VOLUME_DEVICE=/dev/asm/gfprod-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfprod-7
VOLUME_DEVICE=/dev/asm/gfprod-7
[Test:root:/u02/oracle/12.2.0.1/grid/bin:] /sbin/acfsutil registry
Mount Object:
  Device: /dev/asm/alprod-7
  Mount Point: /ppt/alprod
  Disk Group: OPS_DATA
  Volume: ALPROD
  Options: none
  Nodes: all
  Accelerator Volumes:
Mount Object:
  Device: /dev/asm/gfdata-7
  Mount Point: /ppt/gfdata
  Disk Group: OPS_DATA
  Volume: GFDATA
  Options: none
  Nodes: all
  Accelerator Volumes:
Mount Object:
  Device: /dev/asm/gfnew-7
  Mount Point: /ppt/gfnew
  Disk Group: OPS_DATA
  Volume: GFNEW
  Options: none
  Nodes: all
  Accelerator Volumes:
Mount Object:
  Device: /dev/asm/gfprod-7
  Mount Point: /ppt/gfprod
  Disk Group: OPS_DATA
  Volume: GFPROD
  Options: none
  Nodes: all
  Accelerator Volumes:
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/alprod-7
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfnew-7
umount: 0506-347 Cannot find anything to unmount.
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfdata-7
umount: 0506-347 Cannot find anything to unmount.
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfprod-7
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:]
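
The unmount sequence above can be scripted instead of typing one `umount` per volume. A minimal sketch: the sample text below stands in for the live `/sbin/acfsutil registry` output shown above, and the actual unmount loop is left commented since it must run as root on the server.

```shell
# Hedged sketch: derive the ACFS mount points from `acfsutil registry` output.
# On the live server you would run:
#   /sbin/acfsutil registry | awk '/Mount Point:/ {print $3}'
registry_output='Mount Object:
  Device: /dev/asm/alprod-7
  Mount Point: /ppt/alprod
Mount Object:
  Device: /dev/asm/gfprod-7
  Mount Point: /ppt/gfprod'

# Extract the third field of every "Mount Point:" line
mounts=$(printf '%s\n' "$registry_output" | awk '/Mount Point:/ {print $3}')
printf '%s\n' "$mounts"

# On the live system (as root) the loop would be:
# for mp in $mounts; do /usr/sbin/umount "$mp"; done
```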

[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/bos_hd4       2.00      1.54   24%    20747     6% /
/dev/bos_hd2      19.12      4.29   78%    82348     8% /usr
/dev/bos_hd9var      1.00      0.43   57%    18985    16% /var
/dev/hd3           5.00      4.93    2%      392     1% /tmp
/dev/hd1           3.00      1.11   64%    10839     4% /home
/proc                 -         -    -        -      - /proc
/dev/bos_hd10opt      1.25      0.24   82%    49382    45% /opt
/dev/hd11admin      0.25      0.25    1%        5     1% /admin
/dev/besclientlv      0.25      0.15   40%       21     1% /opt/BESClient
/dev/lv_doonce      0.25      0.07   71%      450     3% /opt/DoOnceAIX
/dev/itm_lv        2.62      1.13   57%    18522     7% /opt/IBM/ITM
/dev/scm_lv        2.00      1.90    5%      846     1% /opt/IBM/SCM
/dev/tiv_lv        1.00      0.97    4%      110     1% /opt/Tivoli
/dev/best1_lv      3.00      2.42   20%      169     1% /opt/best1
/dev/connectlv      2.00      1.95    3%      950     1% /opt/connect
/dev/ctm700lv      3.00      2.11   30%     5798     2% /opt/ctm700
/dev/patrol_lv      2.00      1.14   43%     4197     2% /opt/patrol
/dev/tivoli_lv      0.25      0.20   20%    13207    22% /opt/tivoli/cit
/dev/oswatchlv     98.00     45.94   54%     8789     1% /oswatcher
/dev/staginglv    105.00     14.80   86%    33423     1% /staging
/dev/sampptu01lv     35.00     10.00   72%    93706     4% /u01
/dev/sampptu02lv    145.00     43.13   71%   318269     3% /u02
/dev/sampptu03lv     10.00      8.57   15%    29304     2% /u03
/dev/fslv00        1.00      0.99    1%       79     1% /var/adm/perfmgr
/dev/livedump      0.25      0.25    1%        4     1% /var/adm/ras/livedump
/dev/lv_auditlog      2.00      0.66   67%     2129     2% /var/log/eprise
/dev/varbes_lv      2.00      1.29   36%     8196     3% /var/opt/BESClient

 

To back up the Grid home

We need to stop the database instance and the clusterware/Grid.

You can stop the database instance from the grid owner or the instance owner.

 

1.  Stop database instance

srvctl stop instance -d testdb -i testdb1 -o immediate
 

 

2.  Stop grid/crs from root user

 

cd /u02/oracle/12.2.0.1/grid/bin
./crsctl stop crs
 

 

3. Back up the Grid home and inventory from the root user

 

cd /u02/oracle/12.2.0.1/grid

tar -cvf /staging/grid_home_database.tar .
 

du -sg /staging/grid_home_database.tar

Backup for inventory

cd /u02/oracle/oraInventory

tar -cvf /staging/grid_home_inventory.tar .
du -hP  /staging/grid_home_inventory.tar
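
Before relying on these tar backups, confirm they are readable: a backup you cannot list is a backup you cannot restore. A minimal sketch, using temporary paths that stand in for the real Grid home and /staging backup file:

```shell
# Hedged sketch: verify a tar backup is listable and count its entries.
# mktemp paths stand in for /u02/oracle/12.2.0.1/grid and
# /staging/grid_home_database.tar used in the real procedure.
src=$(mktemp -d)                  # stand-in for the Grid home
backup="$src/grid_home.tar"       # stand-in for the /staging backup file
touch "$src/welcome.html" "$src/env.ora"
( cd "$src" && tar -cf "$backup" welcome.html env.ora )

# Listing the archive proves it is readable end to end
entries=$(tar -tf "$backup" | wc -l | tr -d ' ')
echo "backup contains $entries entries"
```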




8) Start the clusterware on node1; it is not required to start the database instance on node1

From root user

cd /u02/oracle/12.2.0.1/grid/bin

./crsctl start crs


 
Note: opatchauto will stop the clusterware automatically while patching.
The clusterware should be up and running; the database instance is not required to be up, since we are applying the patch to the Grid home only.
 

Note: if an ACFS file system exists on the server, we need to unmount it as per the steps below.

9) Unmount the ACFS file systems again and check the file systems

/dev/asm/alprod-7      2.00      1.66   17%   708512    17% /ppt/alprod
/dev/asm/gfnew-7      2.00      1.80   10%   411816    10% /ppt/gfnew
/dev/asm/gfprod-7      2.00      1.77   12%   477200    12% /ppt/gfprod
/dev/asm/gfdata-7     10.00      8.17   19%  3838312    19% /ppt/gfdata


[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/alprod-7
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfnew-7
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfdata-7
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfprod-7
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/bos_hd4       3.00      2.55   15%    20373     4% /
/dev/bos_hd2       6.50      1.89   71%    80046    15% /usr
/dev/bos_hd9var      1.00      0.46   55%    17473    14% /var
/dev/hd3           5.00      4.93    2%      400     1% /tmp
/dev/hd1           4.50      3.68   19%     1404     1% /home
/proc                 -         -    -        -      - /proc
/dev/bos_hd10opt      1.25      0.24   82%    49395    45% /opt
/dev/hd11admin      0.25      0.25    1%        5     1% /admin
/dev/besclientlv      0.25      0.15   40%       21     1% /opt/BESClient
/dev/lv_doonce      0.25      0.07   71%      450     3% /opt/DoOnceAIX
/dev/itm_lv        2.00      0.82   59%    18193     9% /opt/IBM/ITM
/dev/scm_lv        2.00      1.90    5%      843     1% /opt/IBM/SCM
/dev/tiv_lv        1.00      1.00    1%       12     1% /opt/Tivoli
/dev/best1_lv      3.00      1.99   34%      242     1% /opt/best1
/dev/connectlv      2.00      1.94    3%     1848     1% /opt/connect
/dev/ctm700lv      3.00      2.12   30%     5749     2% /opt/ctm700
/dev/patrol_lv      2.00      1.14   43%     4196     2% /opt/patrol
/dev/tivoli_lv      0.25      0.20   20%    13142    22% /opt/tivoli/cit
/dev/oswatchlv     98.00     63.92   35%     8792     1% /oswatcher
/dev/staginglv    105.00     27.09   75%    37180     1% /staging
/dev/sampptu01lv     50.00     25.77   49%    88289     2% /u01
/dev/sampptu02lv    130.00     28.37   79%   370449     5% /u02
/dev/sampptu03lv     20.00     11.49   43%    31886     2% /u03
/dev/fslv00        1.00      0.99    1%       79     1% /var/adm/perfmgr
/dev/livedump      0.25      0.25    1%        4     1% /var/adm/ras/livedump
/dev/lv_auditlog      2.00      1.13   44%     2167     1% /var/log/eprise
/dev/varbes_lv      2.00      1.29   36%     8394     3% /var/opt/BESClient
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:]


10) For the precheck, run the analyze command from the root user again

export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch


[Test server:root:/u02/oracle/12.2.0.1/grid/bin:]
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] cd /u02/oracle/12.2.0.1/grid/OPatch
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] ./opatchauto apply /oswatcher/2020_patch/30116802 -analyze -oh /u02/oracle/12.2.0.1/grid

OPatchauto session is initiated at Sat Feb 15 11:18:28 2020

System initialization log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-15_11-19-15AM.log.

Session log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2020-02-15_11-20-56AM.log
The id for this session is 4465

Executing OPatch prereq operations to verify patch applicability on home /u02/oracle/12.2.0.1/grid
Patch applicability verified successfully on home /u02/oracle/12.2.0.1/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:Test server
CRS Home:/u02/oracle/12.2.0.1/grid
Version:12.2.0.1.0


==Following patches were SKIPPED:

Patch: /oswatcher/2020_patch/30116802/26839277
Reason: This patch is already been applied, so not going to apply again.


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /oswatcher/2020_patch/30116802/30122814
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-21-43AM_1.log

Patch: /oswatcher/2020_patch/30116802/30122828
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-21-43AM_1.log

Patch: /oswatcher/2020_patch/30116802/30093408
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-21-43AM_1.log

Patch: /oswatcher/2020_patch/30116802/30138470
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-21-43AM_1.log



OPatchauto session completed at Sat Feb 15 11:21:56 2020
Time taken to complete the session 3 minutes, 29 seconds

 

To see the progress of the command:

tail -200f /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2020-02-15_11-20-56AM.log


 

Implementation
---------------

1) Set the proper PATH
2) Apply the PSU through opatchauto from the root user, e.g.

[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] ls -ltr /oswatcher/2020_patch/30116802
total 272
drwxr-x---    4 oracrs   oinstall        256 Sep 27 20:08 26839277
drwxr-x---    5 oracrs   oinstall        256 Sep 27 20:08 30122828
drwxr-x---    4 oracrs   oinstall        256 Sep 27 20:08 30093408
drwxr-x---    4 oracrs   oinstall        256 Sep 27 20:08 30138470
drwxr-x---    2 oracrs   oinstall       4096 Sep 27 20:12 automation
drwxr-x---    5 oracrs   oinstall        256 Sep 27 20:12 30122814
-rw-r-----    1 oracrs   oinstall          0 Sep 27 20:18 README.txt
-rw-r-----    1 oracrs   oinstall     124674 Sep 27 21:46 README.html
-rw-rw-r--    1 oracrs   oinstall       5828 Sep 27 22:16 bundle.xml
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] chmod -R 775 /oswatcher/2020_patch
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]  ls -ltr /oswatcher/2020_patch/30116802
total 272
drwxrwxr-x    4 oracrs   oinstall        256 Sep 27 20:08 26839277
drwxrwxr-x    5 oracrs   oinstall        256 Sep 27 20:08 30122828
drwxrwxr-x    4 oracrs   oinstall        256 Sep 27 20:08 30093408
drwxrwxr-x    4 oracrs   oinstall        256 Sep 27 20:08 30138470
drwxrwxr-x    2 oracrs   oinstall       4096 Sep 27 20:12 automation
drwxrwxr-x    5 oracrs   oinstall        256 Sep 27 20:12 30122814
-rwxrwxr-x    1 oracrs   oinstall          0 Sep 27 20:18 README.txt
-rwxrwxr-x    1 oracrs   oinstall     124674 Sep 27 21:46 README.html
-rwxrwxr-x    1 oracrs   oinstall       5828 Sep 27 22:16 bundle.xml
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] ls -ltr /u02/oracle/12.2.0.1/grid
total 6284552
-rw-r-----    1 oracrs   oinstall        500 Feb 06 2013  welcome.html
-rw-r-----    1 oracrs   oinstall         59 Oct 08 2014  oraInst.loc
-rw-r--r--    1 oracrs   oinstall        852 Sep 02 2015  env.ora
-rwxr-x---    1 oracrs   oinstall        628 Sep 05 2015  runcluvfy.sh
-rwxr-x---    1 oracrs   oinstall       5395 Jul 21 2016  gridSetup.sh
drwxr-xr-x    3 oracrs   oinstall        256 Apr 02 2017  slax
-rwxrwxr-x    1 oracrs   oinstall  170920687 Feb 13 14:40 p6880880_122010_AIX64-5L.zip
drwxr-xr-x    2 oracrs   oinstall       4096 Feb 15 11:11 dbs
drwx------    8 oracrs   oinstall       4096 Feb 15 11:17 .patch_storage
drwxrwxr-x    3 oracrs   oinstall        256 Feb 15 11:18 opatchautocfg
drwxrwxr-x    8 oracrs   oinstall       8192 Feb 15 11:19 cfgtoollogs
drwxr-x---    2 oracrs   oinstall        256 Feb 15 11:21 .opatchauto_storage



[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] ./opatchauto apply /oswatcher/2020_patch/30116802 -oh /u02/oracle/12.2.0.1/grid

OPatchauto session is initiated at Sat Feb 15 11:23:59 2020

System initialization log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-15_11-24-16AM.log.

Session log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2020-02-15_11-25-54AM.log
The id for this session is PN8U

Executing OPatch prereq operations to verify patch applicability on home /u02/oracle/12.2.0.1/grid
Patch applicability verified successfully on home /u02/oracle/12.2.0.1/grid


Bringing down CRS service on home /u02/oracle/12.2.0.1/grid
Prepatch operation log file location: /u02/oracle/PPT/crsdata/Test server/crsconfig/crspatch_Test server_2020-02-15_11-26-54AM.log
CRS service brought down successfully on home /u02/oracle/12.2.0.1/grid


Start applying binary patch on home /u02/oracle/12.2.0.1/grid
Successfully executed command: /usr/sbin/slibclean

Binary patch applied successfully on home /u02/oracle/12.2.0.1/grid


Starting CRS service on home /u02/oracle/12.2.0.1/grid

Postpatch operation log file location: /u02/oracle/PPT/crsdata/Test server/crsconfig/crspatch_Test server_2020-02-15_11-41-29AM.log
CRS service started successfully on home /u02/oracle/12.2.0.1/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:Test server
CRS Home:/u02/oracle/12.2.0.1/grid
Version:12.2.0.1.0
Summary:

==Following patches were SKIPPED:

Patch: /oswatcher/2020_patch/30116802/26839277
Reason: This patch is already been applied, so not going to apply again.


==Following patches were SUCCESSFULLY applied:

Patch: /oswatcher/2020_patch/30116802/30093408
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-28-58AM_1.log

Patch: /oswatcher/2020_patch/30116802/30122814
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-28-58AM_1.log

Patch: /oswatcher/2020_patch/30116802/30122828
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-28-58AM_1.log

Patch: /oswatcher/2020_patch/30116802/30138470
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-28-58AM_1.log



OPatchauto session completed at Sat Feb 15 11:54:12 2020
Time taken to complete the session 30 minutes, 14 seconds
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]


Post implementation
-------------------

While applying the patch, opatchauto stops the clusterware, applies the patch to the clusterware home, and then starts the clusterware automatically.
 

2) Validate the PSU using opatch lsinventory from the Grid user

crsctl query crs activeversion
crsctl query crs releaseversion

 

To verify patch on clusterware and oracle home

cd /u02/oracle/12.2.0.1/grid/OPatch

./opatch lspatches

 

[oracrs@testserver OPatch]$ ./opatch lspatches

31704034;ACFS PATCH SET UPDATE 12.1.0.2.201020 (31704034)

31704029;OCW PATCH SET UPDATE 12.1.0.2.201020 (31704029)

31550110;Database Patch Set Update : 12.1.0.2.201020 (31550110)

26983807;WLM Patch Set Update: 12.1.0.2.180116 (26983807)

 

OPatch succeeded.

[oracrs@testserver OPatch]$
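
Once the remaining nodes are patched, the `opatch lspatches` output should match on every node. A minimal comparison sketch; the here-strings below stand in for the real per-node command output, which you might capture with e.g. `ssh node2 "$GRID_HOME/OPatch/opatch lspatches"` (illustrative, not from the original procedure):

```shell
# Hedged sketch: compare `opatch lspatches` output captured from two nodes.
# Sample strings stand in for the live output shown above.
node1_patches='31704034;ACFS PATCH SET UPDATE 12.1.0.2.201020 (31704034)
31550110;Database Patch Set Update : 12.1.0.2.201020 (31550110)'
node2_patches='31704034;ACFS PATCH SET UPDATE 12.1.0.2.201020 (31704034)
31550110;Database Patch Set Update : 12.1.0.2.201020 (31550110)'

if [ "$node1_patches" = "$node2_patches" ]; then
  result="patch levels match"
else
  result="PATCH LEVEL MISMATCH - investigate before proceeding"
fi
echo "$result"
```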

 

 

check status of crs
./crsctl check crs
ps -ef|grep d.bin



su - oracrs

Start the databases in case they are not started, and check the ACFS mount points

srvctl start instance -d test1db -i test1db1
srvctl start instance -d test2db -i test2db1

Check load balancing/sessions on both nodes using the commands below, then start applying the patch on the other node



Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> set linesize 200
SQL> select name,open_mode,to_char(startup_time,'dd-mon-yyyy hh24:mi:ss') "DB Startup Time",host_name from v$database,v$instance;

NAME      OPEN_MODE            DB Startup Time               HOST_NAME
--------- -------------------- ----------------------------- ----------------------------------------------------------------
TESTDB  READ WRITE           15-feb-2020 09:05:08          Test server

SQL> select count(1),inst_id from gv$session group by inst_id;
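
The same session-distribution check can be scripted instead of run interactively. A hedged sketch that only builds and prints the query; piping it into sqlplus assumes sqlplus is on the PATH and "/ as sysdba" OS authentication is allowed, which is not verified here:

```shell
# Hedged sketch: assemble the gv$session distribution query for a scripted check.
# Single quotes keep gv$session literal (no shell expansion).
check_sql='set heading off feedback off
select inst_id, count(*) from gv$session group by inst_id order by inst_id;'

printf '%s\n' "$check_sql"
# On the live system: printf '%s\n' "$check_sql" | sqlplus -s "/ as sysdba"
```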