Oracle 12c RAC Database OCT2017 PSU
Summary
1) Download and unzip the latest OPatch to all cluster nodes.
2) Stop the OEM Agent and OSWatcher on both nodes if they are running.
3) Stop the database instance on node1.
4) Take a backup of the Oracle RAC home on node1 (first delete unnecessary logs, such as old trace and audit files, from the RAC home, otherwise the backup will take longer).
5) Start the database instance on node1.
The database instance should be up and running, since opatchauto will stop it automatically while patching.
Verify:
select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version, description from dba_registry_sqlpatch order by action_time;
Check invalid objects and components on the database:
set head on feed on pagesize 200 linesize 200
select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';
col action_time format a13;
col action format a10;
col namespace format a10;
col comments format a25;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.instance_name,r.* from v$instance i, registry$history r;
EXECUTE dbms_stats.gather_dictionary_stats;
6) Perform validation of the Oracle home inventory on both nodes:
<ORACLE_HOME>/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>
Verify the patches on both the GI and RDBMS homes:
$ORACLE_HOME/OPatch/opatch lsinventory -oh $RDBMS_HOME
$ORACLE_HOME/OPatch/opatch lsinventory -oh $RDBMS_HOME | grep <patch no>
7) Check for conflicts in the 12.1 GI home as the root user:
# <ORACLE_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/26635815 -analyze -oh <ORACLE_RAC_HOME>
8) Upgrade the OPatch version to 12.1.0.1.7 or later.
9) Apply patch 26635815 using the opatchauto utility as the root user:
# <ORACLE_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/26635815 -oh <ORACLE_RAC_HOME>
10) Start the database instances on node1 if not started automatically.
11) Stop the database instance on node2.
12) Take a backup of the Oracle RAC home on node2.
13) Start the database instance on node2.
The database instance should be up and running, since opatchauto will stop it automatically while patching.
14) Repeat steps 6-9 on node2.
15) Start the database instances on node2 if not started automatically.
16) Verify: select * from dba_registry_sqlpatch where PATCH_ID=26635815;
17) Validate objects and components, and verify the patch is installed properly.
18) Start the application and release all related jobs which were on hold.
19) The application team will check that the application is working as before.
Backout steps:
Roll back the applied patches using the steps below.
1) Stop the OEM Agent and OSWatcher on both nodes.
2) Stop the database instance on node1 and validate the Oracle inventory.
3) On the 1st node, if you are rolling back the patch, first analyze the rollback. As the root user:
# <GRID_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26635815 -analyze
4) On the 1st node, roll back the patch from the GI home using the opatchauto command. As the root user:
# <GI_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26635815 -oh <GI_HOME>
5) On node2, analyze the rollback. As the root user:
# <GRID_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26635815 -analyze
6) Start all database instances on node1 and stop all instances on node2.
7) On the 2nd node, roll back the patch from the GI home using the opatchauto command. As the root user:
# <GI_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26635815 -oh <GI_HOME>
8) Verify that the clusterware components are healthy and the patch was rolled back properly, and review the patch log.
9) Check the following log files in $ORACLE_HOME/sqlpatch/26713565/ for errors:
26713565_rollback_<database SID>_<CDB name>_<timestamp>.log
10) Start the database instances on node2.
11) Check for any corruption in the inventory.
12) To downgrade the database: datapatch -rollback all -force
13) Start the OEM Agent and OSWatcher on both nodes.
14) Inform the application team to start the application.
15) Application verification; check Business Objects.
16) Release all related jobs which were on hold.
17) Start DB monitoring on the database.
The OPatch utility from 11g onward has automated patch application for the Oracle Grid Infrastructure (GI) home and the Oracle RAC database homes when run with root privileges. It must be executed on each node in the cluster if the GI home or Oracle RAC database home is on non-shared storage. The utility should not be run in parallel on the cluster nodes.
The automated patch application process will automatically shut down and restart all database processes on the node. However, we don't want the automatic restart, because we are applying two PSUs (one for grid and one for database). Disable the instance auto-start for node collabn1 and manually shut down the instance for patch application.
Note: This is a rolling patch, meaning the patch is applied on one node at a time while the other node remains available to the application.
On a production system, all active connections would need to be migrated to the other instance before doing this (for example, with services).
collabn1$ srvctl disable instance -d rac -i RAC1
collabn1$ srvctl stop instance -d rac -i RAC1
Note: The Oracle home being patched should have sufficient free space (approx. 20 GB). There is no need to stop the database instances and clusterware services when the opatchauto utility is used, as opatchauto stops and starts them automatically. Also check the hidden directory .patch_storage. Clusterware patching requires around 4 to 6 hours to complete.
Detailed steps
Prerequisites
1) Check for sufficient space on the Grid home and /tmp, and check cluster health on both nodes.
A minimum of 20 GB should be available on the Grid home.
To check space on the Oracle home and /tmp:
df -g
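As a sketch, the 20 GB check above can be scripted. This assumes the AIX `df -g` output format shown later in this document (the "Free" column is field 3 of the second line); the 20 GB threshold comes from this runbook's note, and the function names are illustrative only.

```shell
# Sketch: verify minimum free space before patching, assuming AIX
# `df -g` output where field 3 of line 2 is free GB.
free_gb_from_df() {
  # Reads `df -g <mount>` output on stdin, prints whole GB free.
  awk 'NR==2 {print int($3)}'
}

require_free_gb() {
  # Usage: df -g "$mount" | require_free_gb 20
  required=$1
  free=$(free_gb_from_df)
  if [ "$free" -ge "$required" ]; then
    echo "OK: ${free} GB free (need ${required})"
  else
    echo "FAIL: only ${free} GB free (need ${required})"
    return 1
  fi
}
```

For example, `df -g /u02 | require_free_gb 20` on AIX; on Linux, `df` takes different flags and would need a different parser.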
2) Run the commands below:
a) ps -ef|grep pmon -- to see database status
b) ps -ef|grep tns -- to see listener status
c) crsctl status resource -t -- to see resource status
d) crs_stat -t -- to see resource status
e) crsctl check crs
From the Grid user:
srvctl status database -d <Database Name>
srvctl config database -d <Database Name>
ps -ef | grep d.bin
server1[/u02/oracle/12.1.0/grid]$ ps -ef | grep d.bin
oracrs 7340258 1 3 11:12:06 - 5:26 /u02/oracle/12.1.0/grid/bin/gipcd.bin
root 7471224 1 6 11:11:26 - 7:59 /u02/oracle/12.1.0/grid/bin/ohasd.bin reboot
oracrs 8323106 12714060 1 11:12:07 - 1:48 /u02/oracle/12.1.0/grid/bin/evmlogger.bin -o /u02/oracle/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /u02/oracle/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.log
oracrs 8454172 1 5 11:12:03 - 7:16 /u02/oracle/12.1.0/grid/bin/oraagent.bin
root 9568452 1 3 11:12:42 - 5:26 /u02/oracle/12.1.0/grid/bin/octssd.bin reboot
root 9961542 1 9 11:14:18 - 13:06 /u02/oracle/12.1.0/grid/bin/crsd.bin reboot
oracrs 10027170 1 2 11:12:04 - 1:51 /u02/oracle/12.1.0/grid/bin/mdnsd.bin
oracrs 11075602 1 1 11:12:05 - 1:58 /u02/oracle/12.1.0/grid/bin/gpnpd.bin
root 11206846 1 0 11:12:16 - 0:00 /bin/sh /u02/oracle/12.1.0/grid/bin/ocssd
root 11534474 1 12 12:36:00 - 15:15 /u02/oracle/12.1.0/grid/bin/ologgerd -M -d /u02/oracle/12.1.0/grid/crf/db/server1
oracrs 11600026 1 0 11:15:01 - 0:28 /u02/oracle/12.1.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracrs 11730952 11206846 2 11:12:16 - 7:02 /u02/oracle/12.1.0/grid/bin/ocssd.bin
root 11993100 1 3 11:11:48 - 4:30 /u02/oracle/12.1.0/grid/bin/orarootagent.bin
root 12058762 1 1 11:12:15 - 2:40 /u02/oracle/12.1.0/grid/bin/cssdagent
root 12189862 1 2 11:12:15 - 2:36 /u02/oracle/12.1.0/grid/bin/cssdmonitor
root 12255442 1 19 11:14:17 - 28:39 /u02/oracle/12.1.0/grid/bin/osysmond.bin
oracrs 12714060 1 4 11:12:04 - 7:52 /u02/oracle/12.1.0/grid/bin/evmd.bin
root 13435058 1 28 11:14:35 - 34:22 /u02/oracle/12.1.0/grid/bin/orarootagent.bin
oracrs 13500578 1 21 11:14:34 - 11:59 /u02/oracle/12.1.0/grid/bin/oraagent.bin
fsdatabase 13828286 1 5 11:15:09 - 7:31 /u02/oracle/12.1.0/grid/bin/oraagent.bin
oracrs 26083376 1 0 12:32:54 - 0:11 /u02/oracle/12.1.0/grid/bin/tnslsnr LISTENER_SCAN2 -inherit
fuo44ppt 27984070 1 5 11:36:17 - 7:39 /u02/oracle/12.1.0/grid/bin/oraagent.bin
fug44ppt 39583962 1 3 11:39:05 - 7:18 /u02/oracle/12.1.0/grid/bin/oraagent.bin
oracrs 43909340 1 2 12:32:58 - 2:14 /u02/oracle/12.1.0/grid/bin/scriptagent.bin
fub44ppt 51249192 1 5 11:40:43 - 7:32 /u02/oracle/12.1.0/grid/bin/oraagent.bin
oracrs 61014038 1 0 12:32:54 - 0:13 /u02/oracle/12.1.0/grid/bin/tnslsnr LISTENER_SCAN3 -inherit
oracrs 34144900 1 0 12:32:48 - 0:01 /u02/oracle/12.1.0/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
oracrs 36438724 38666928 2 06:09:03 pts/0 0:00 grep d.bin
crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
crsctl status resource -t
crs_stat -t
server1[/u02/oracle/12.1.0/grid]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
diagsnap local_resource OFFLINE OFFLINE
ora.DATA1.dg ora....up.type ONLINE ONLINE server1
ora....FRA1.dg ora....up.type ONLINE ONLINE server1
ora....ATA1.dg ora....up.type ONLINE ONLINE server1
ora....FRS1.dg ora....up.type ONLINE ONLINE server1
ora....ATA1.dg ora....up.type ONLINE ONLINE server1
ora....FRS1.dg ora....up.type ONLINE ONLINE server1
ora....ER.lsnr ora....er.type ONLINE ONLINE server1
ora....N1.lsnr ora....er.type ONLINE ONLINE server2
ora....N2.lsnr ora....er.type ONLINE ONLINE server1
ora....N3.lsnr ora....er.type ONLINE ONLINE server1
ora.MGMTLSNR ora....nr.type ONLINE ONLINE server1
ora.VOTING.dg ora....up.type ONLINE ONLINE server1
ora.asm ora.asm.type ONLINE ONLINE server1
ora.cvu ora.cvu.type ONLINE ONLINE server1
ora.fsdatabase.db ora....se.type ONLINE ONLINE server2
ora....4ppt.db ora....se.type ONLINE ONLINE server2
ora....4ppt.db ora....se.type ONLINE ONLINE server1
ora....4ppt.db ora....se.type ONLINE ONLINE server1
ora.mgmtdb ora....db.type ONLINE ONLINE server1
ora....network ora....rk.type ONLINE ONLINE server1
ora.oc4j ora.oc4j.type ONLINE ONLINE server1
ora.ons ora.ons.type ONLINE ONLINE server1
ora....SM1.asm application ONLINE ONLINE server1
ora....ON.lsnr application ONLINE ONLINE server1
ora....ton.ons application ONLINE ONLINE server1
ora....ton.vip ora....t1.type ONLINE ONLINE server1
ora.scan1.vip ora....ip.type ONLINE ONLINE server2
ora.scan2.vip ora....ip.type ONLINE ONLINE server1
ora.scan3.vip ora....ip.type ONLINE ONLINE server1
ora....SM2.asm application ONLINE ONLINE server2
ora....ON.lsnr application ONLINE ONLINE server2
ora.server2.ons application ONLINE ONLINE server2
ora.server2.vip ora....t1.type ONLINE ONLINE server2
server1[/u02/oracle/12.1.0/grid]$
3) Log in to each database running on the server and check the session count:
select name,open_mode from v$database;
select count(1),inst_id from gv$session group by inst_id;
alter system archive log current; --- switch log on database
4) Check invalid objects and stale stats.
spool pre_patch_invalid_objects_${ORACLE_SID}.log;
set head on feed on pagesize 200 linesize 200
select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';
select comp_id, VERSION , STATUS from dba_registry;
col TABLE_NAME for a30
col PARTITION_NAME for a20
col SUBPARTITION_NAME for a20
select OWNER,TABLE_NAME,PARTITION_NAME,SUBPARTITION_NAME,NUM_ROWS,LAST_ANALYZED from dba_TAB_STATISTICS where STALE_STATS='YES';
select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version, description from dba_registry_sqlpatch order by action_time;
spool off
If there are stale stats then run the following :
EXECUTE dbms_stats.gather_dictionary_stats;
col action_time format a13;
col action format a10;
col namespace format a10;
col comments format a25;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.instance_name,r.* from v$instance i, registry$history r;
5) Validate the Grid home inventory, and upgrade the OPatch version to 12.1.0.1.7 or later if required:
<GRID_HOME>/OPatch/opatch lsinventory -detail -oh <GRID_HOME>
<GRID_HOME>/OPatch/opatch lsinventory
$ opatch lsinventory -detail
6) Check the OPatch version. For each home being patched, run the following command as the home owner:
$ <ORACLE_HOME>/OPatch/opatch version
server1[/u02/oracle/12.1.0/grid]$ OPatch/opatch version
OPatch Version: 12.2.0.1.8
OPatch succeeded.
server1[/u02/oracle/12.1.0/grid]$
The version output of the previous command should be 12.1.0.1.7 or later.
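The "12.1.0.1.7 or later" requirement can be checked mechanically. A minimal sketch, comparing dot-separated version fields numerically (the function name is an assumption, not an OPatch feature):

```shell
# Sketch: return 0 if an OPatch version meets a minimum, comparing
# dot-separated fields left to right (e.g. 12.2.0.1.8 >= 12.1.0.1.7).
opatch_version_ok() {
  # Usage: opatch_version_ok <actual> <minimum>
  awk -v a="$1" -v m="$2" 'BEGIN {
    na = split(a, av, "."); nm = split(m, mv, ".");
    n = (na > nm) ? na : nm;
    for (i = 1; i <= n; i++) {
      if (av[i] + 0 > mv[i] + 0) exit 0;   # strictly newer: ok
      if (av[i] + 0 < mv[i] + 0) exit 1;   # strictly older: fail
    }
    exit 0                                  # equal: ok
  }'
}
```

For example: `v=$($ORACLE_HOME/OPatch/opatch version | awk '/OPatch Version/ {print $3}'); opatch_version_ok "$v" 12.1.0.1.7 || echo "OPatch too old"`.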
7) Unzip the downloaded OCT 2017 PSU as the Grid user.
$ cd <UNZIPPED_PATCH_LOCATION>
Check that the directory is empty.
$ ls
Unzip the patch as grid home owner.
$ unzip p26635815_121020_<platform>.zip
If required, change permissions, e.g.:
chmod 775 p26635815_121020_AIX64-5L.zip
As the root user:
chown oracrs:dba p26635815_121020_AIX64-5L.zip
As oracrs:
unzip p26635815_121020_AIX64-5L.zip
[node1:database:OPatch] ls -ltr /staging/12ccluster_patch/26635815
total 176
drwxr-xr-x 5 oracrs dba 256 Oct 3 12:01 26392164
drwxr-xr-x 5 oracrs dba 256 Oct 3 12:04 26392192
drwxrwxr-x 15 oracrs dba 4096 Oct 3 12:04 26713565
drwxr-xr-x 6 oracrs dba 256 Oct 3 12:04 21436941
drwxr-xr-x 2 oracrs dba 4096 Oct 3 12:04 automation
-rw-r--r-- 1 oracrs dba 0 Oct 3 12:05 README.txt
-rw-rw-r-- 1 oracrs dba 4987 Oct 3 14:36 bundle.xml
-rw-rw-r-- 1 oracrs dba 73591 Oct 16 16:13 README.html
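A quick sanity check of the unzipped staging area can catch a partial unzip before opatchauto is run. The sub-patch numbers below are the ones in the listing above for PSU 26635815; the function itself is an illustrative sketch, not an Oracle tool.

```shell
# Sketch: confirm the expected sub-patch directories and bundle.xml
# exist under the unzipped PSU staging directory.
check_staging() {
  stage=$1; rc=0
  for p in 26392164 26392192 26713565 21436941 bundle.xml; do
    if [ ! -e "$stage/$p" ]; then
      echo "MISSING: $stage/$p"
      rc=1
    fi
  done
  [ $rc -eq 0 ] && echo "Staging area $stage looks complete"
  return $rc
}
```

For example: `check_staging /staging/12ccluster_patch/26635815`.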
8) Check for conflicts in the 12.1 GI home (this is new in 12c).
a) This checks all available Grid homes and Oracle homes. Run as the root user:
# <GRID_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/26635815 -analyze
b) To verify the clusterware home only, run as the root user:
./opatchauto apply /staging/clusterware_12cpatch/26635815 -analyze -oh /u01/oracle/database/product/12.1.0.2
[server1:root:/staging/clusterware_12cpatch:] ls -ltr /staging/clusterware_12cpatch/26635815
total 176
drwxr-xr-x 5 oracrs dba 256 Oct 3 12:01 26392164
drwxr-xr-x 5 oracrs dba 256 Oct 3 12:04 26392192
drwxrwxr-x 15 oracrs dba 4096 Oct 3 12:04 26713565
drwxr-xr-x 6 oracrs dba 256 Oct 3 12:04 21436941
drwxr-xr-x 2 oracrs dba 4096 Oct 3 12:04 automation
-rw-r--r-- 1 oracrs dba 0 Oct 3 12:05 README.txt
-rw-rw-r-- 1 oracrs dba 4987 Oct 3 14:36 bundle.xml
-rw-rw-r-- 1 oracrs dba 73591 Oct 16 16:13 README.html
[server1:root:/staging/clusterware_12cpatch:] cd /u02/oracle/12.1.0/grid/OPatch
[server1:root:/u02/oracle/12.1.0/grid/OPatch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[server1:root:/u02/oracle/12.1.0/grid/OPatch:] df -g .
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/u02 139.00 27.37 81% 766737 11% /u02
[server1:root:/u02/oracle/12.1.0/grid/OPatch:]
9) Check for conflicts in the 12.1 Oracle RAC home.
Run as the root user; opatchauto must be run from one of the homes specified.
[node1:root:/home/root:] cd /u01/oracle/database/product/12.1.0.2
[node1:root:/u01/oracle/database/product/12.1.0.2:] cd OPatch
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] ./opatchauto apply /staging/clusterware_12cpatch/26635815 -analyze -oh /u01/oracle/database/product/12.1.0.2
OPatchauto session is initiated at Sun Jan 21 12:15:08 2018
System initialization log file is /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchautodb/systemconfig2018-01-21_12-15-20PM.log.
Session log file is /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/opatchauto2018-01-21_12-17-13PM.log
The id for this session is NWNS
Executing OPatch prereq operations to verify patch applicability on home /u01/oracle/database/product/12.1.0.2
Patch applicablity verified successfully on home /u01/oracle/database/product/12.1.0.2
Verifying patch inventory on home /u01/oracle/database/product/12.1.0.2
Patch inventory verified successfully on home /u01/oracle/database/product/12.1.0.2
Verifying SQL patch applicablity on home /u01/oracle/database/product/12.1.0.2
Following step failed during analysis:
/bin/sh -c 'ORACLE_HOME=/u01/oracle/database/product/12.1.0.2 ORACLE_SID=FSdatabase1 /u01/oracle/database/product/12.1.0.2/OPatch/datapatch -prereq'
SQL patch applicablity verified successfully on home /u01/oracle/database/product/12.1.0.2
OPatchAuto successful.
--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:node1
RAC Home:/u01/oracle/database/product/12.1.0.2
==Following patches were SKIPPED:
Patch: /staging/clusterware_12cpatch/26635815/21436941
Reason: This patch is not applicable to this specified target type - "rac_database"
Patch: /staging/clusterware_12cpatch/26635815/26392164
Reason: This patch is not applicable to this specified target type - "rac_database"
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /staging/clusterware_12cpatch/26635815/26392192
Log: /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-21_12-17-30PM_1.log
Patch: /staging/clusterware_12cpatch/26635815/26713565
Log: /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-21_12-17-30PM_1.log
OPatchauto session completed at Sun Jan 21 12:22:36 2018
Time taken to complete the session 7 minutes, 35 seconds
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:]
10) Take a backup of the Oracle RAC home:
cd $ORACLE_HOME
pwd
tar -cvf /backup/home_database.tar .
ls -ltr /backup/home_database.tar
du -sg /backup/home_database.tar
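The backup step above can be wrapped so the archive gets a timestamped name and is verified as readable before the patch proceeds. A minimal sketch; the /backup destination and naming are assumptions from this runbook, and this should run as the home owner with the instance down, as described earlier.

```shell
# Sketch: timestamped tar backup of an Oracle home, verified by
# listing the archive afterwards. Prints the archive path on success.
backup_home() {
  home=$1; dest=$2
  stamp=$(date +%Y%m%d_%H%M%S)
  archive="$dest/home_backup_${stamp}.tar"
  # Archive relative to the home so it restores cleanly in place.
  ( cd "$home" && tar -cf "$archive" . ) || return 1
  # Verify the archive is readable and non-empty before trusting it.
  tar -tf "$archive" >/dev/null || return 1
  echo "$archive"
}
```

For example: `backup_home "$ORACLE_HOME" /backup`. The destination path must be absolute, since the function changes directory into the home before archiving.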
11) Apply the patch to the home using the opatchauto command, as the root user:
export PATH=$PATH:<GI_HOME>/OPatch
e.g.
export PATH=$PATH:/u02/oracle/12.1.0/grid/OPatch
# <GRID_HOME>/OPatch/opatchauto apply <PATH_TO_PATCH_DIRECTORY> -oh <GRID_HOME>
Or, for the RAC database home:
# export PATH=$PATH:/u01/oracle/database/product/12.1.0.2/OPatch
# ./opatchauto apply /staging/clusterware_12cpatch/26635815 -oh /u01/oracle/database/product/12.1.0.2
e.g.
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] export PATH=$PATH:/u01/oracle/database/product/12.1.0.2/OPatch
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] ./opatchauto apply /staging/clusterware_12cpatch/26635815 -oh /u01/oracle/database/product/12.1.0.2
OPatchauto session is initiated at Sun Jan 21 12:25:45 2018
System initialization log file is /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchautodb/systemconfig2018-01-21_12-25-52PM.log.
Session log file is /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/opatchauto2018-01-21_12-26-39PM.log
The id for this session is XDTU
Executing OPatch prereq operations to verify patch applicability on home /u01/oracle/database/product/12.1.0.2
Patch applicablity verified successfully on home /u01/oracle/database/product/12.1.0.2
Verifying patch inventory on home /u01/oracle/database/product/12.1.0.2
Patch inventory verified successfully on home /u01/oracle/database/product/12.1.0.2
Verifying SQL patch applicablity on home /u01/oracle/database/product/12.1.0.2
SQL patch applicablity verified successfully on home /u01/oracle/database/product/12.1.0.2
Preparing to bring down database service on home /u01/oracle/database/product/12.1.0.2
Successfully prepared home /u01/oracle/database/product/12.1.0.2 to bring down database service
Bringing down database service on home /u01/oracle/database/product/12.1.0.2
Following database(s) and/or service(s) are stopped and will be restarted later during the session: fsdatabase
Database service successfully brought down on home /u01/oracle/database/product/12.1.0.2
Performing prepatch operation on home /u01/oracle/database/product/12.1.0.2
Perpatch operation completed successfully on home /u01/oracle/database/product/12.1.0.2
Start applying binary patch on home /u01/oracle/database/product/12.1.0.2
Successfully executed command: /usr/sbin/slibclean
Binary patch applied successfully on home /u01/oracle/database/product/12.1.0.2
Performing postpatch operation on home /u01/oracle/database/product/12.1.0.2
Postpatch operation completed successfully on home /u01/oracle/database/product/12.1.0.2
Starting database service on home /u01/oracle/database/product/12.1.0.2
Database service successfully started on home /u01/oracle/database/product/12.1.0.2
Preparing home /u01/oracle/database/product/12.1.0.2 after database service restarted
No step execution required.........
Prepared home /u01/oracle/database/product/12.1.0.2 successfully after database service restarted
Trying to apply SQL patch on home /u01/oracle/database/product/12.1.0.2
SQL patch applied successfully on home /u01/oracle/database/product/12.1.0.2
Verifying patches applied on home /u01/oracle/database/product/12.1.0.2
Patch verification completed with warning on home /u01/oracle/database/product/12.1.0.2
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:node1
RAC Home:/u01/oracle/database/product/12.1.0.2
Summary:
==Following patches were SKIPPED:
Patch: /staging/clusterware_12cpatch/26635815/21436941
Reason: This patch is not applicable to this specified target type - "rac_database"
Patch: /staging/clusterware_12cpatch/26635815/26392164
Reason: This patch is not applicable to this specified target type - "rac_database"
==Following patches were SUCCESSFULLY applied:
Patch: /staging/clusterware_12cpatch/26635815/26392192
Log: /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-21_12-33-00PM_1.log
Patch: /staging/clusterware_12cpatch/26635815/26713565
Log: /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-21_12-33-00PM_1.log
OPatchauto session completed at Sun Jan 21 12:40:58 2018
Time taken to complete the session 15 minutes, 15 seconds
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] ps -ef|grep pmon
12) Once the patch has completed on the first node, the clusterware and all Oracle database instances need to be started if they did not start automatically.
We will follow the same steps for the other node.
Verification steps after clusterware and database instance startup:
1) ps -ef|grep pmon -- to see database status
2) ps -ef|grep tns -- to see listener status
3) crsctl status resource -t -- to see resource status
4) crs_stat -t -- to see resource status
SQL> select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version, description from dba_registry_sqlpatch order by action_time;
ACTION_TIME_2 PATCH_ID PATCH_UID ACTION VERSION
-------------------- ---------- ---------- --------------- --------------------
DESCRIPTION
--------------------------------------------------------------------------------
25-MAY-2016 22291127 19983562 APPLY 12.1.0.2
Database Patch Set Update : 12.1.0.2.160419 (22291127)
28-DEC-2016 24006101 20648640 APPLY 12.1.0.2
Database Patch Set Update : 12.1.0.2.161018 (24006101)
21-JAN-2018 26713565 21606134 APPLY 12.1.0.2
DATABASE PATCH SET UPDATE 12.1.0.2.171017
select comp_name, version, status from dba_registry order by comp_name;
SQL> exit
set lines 200 pages 200
col action_time format a40;
col action format a30;
col namespace format a10;
col comments format a30;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.INST_ID,i.instance_name,r.* from gv$instance i, registry$history r;
5) Check the alert logs for ASM and all database instances.
6) Check whether sessions are balanced:
select name,open_mode from v$database;
select count(1),inst_id from gv$session group by inst_id;
7) Validate the inventory after patching:
<GRID_HOME>/OPatch/opatch lsinventory
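A simple way to make the inventory check scriptable is to grep the lsinventory output for the expected patch id. A sketch; the helper name is illustrative, and 26713565 is the DB PSU component from this bundle.

```shell
# Sketch: confirm a patch id appears in `opatch lsinventory` output
# piped in on stdin.
patch_in_inventory() {
  # Usage: $ORACLE_HOME/OPatch/opatch lsinventory | patch_in_inventory 26713565
  grep -q "$1" && echo "patch $1 found in inventory"
}
```

For example: `$GRID_HOME/OPatch/opatch lsinventory | patch_in_inventory 26713565 || echo "patch missing"`.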
EXECUTE dbms_stats.gather_dictionary_stats;
select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version, description from dba_registry_sqlpatch order by action_time;
check invalid objects , components on database
set head on feed on pagesize 200 linesize 200
select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';
col action_time format a13;
If the database has not been updated with the latest patch, you can run datapatch again from the last node after setting ORACLE_HOME and ORACLE_SID.
Shut down all instances except the one on the last node, and run:
$ORACLE_HOME/OPatch/datapatch
============= Startup and stop steps =============
Stop process:
srvctl stop instance -d <database name> -i <database instance> -o immediate
As the root user:
cd <grid_home>/bin
crsctl stop crs
Startup process:
As the root user:
cd <grid_home>/bin
crsctl start crs
As the Grid user:
srvctl start instance -d <database name> -i <database instance>
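The stop/start ordering above can be captured as a dry-run script that only prints the commands in sequence, so the runbook order can be reviewed (or pasted) without executing anything. The function names and arguments are illustrative placeholders.

```shell
# Sketch: print the stop and start sequences in runbook order.
# Nothing is executed; output is meant to be reviewed or pasted.
print_stop_sequence() {
  db=$1; inst=$2; grid_home=$3
  echo "srvctl stop instance -d $db -i $inst -o immediate"
  echo "$grid_home/bin/crsctl stop crs        # as root"
}
print_start_sequence() {
  db=$1; inst=$2; grid_home=$3
  echo "$grid_home/bin/crsctl start crs       # as root"
  echo "srvctl start instance -d $db -i $inst"
}
```

For example: `print_stop_sequence rac RAC1 /u02/oracle/12.1.0/grid` prints the stop commands for instance RAC1 of database rac.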
Advanced Option for OPatch
Here are some of the advanced options of OPatch utility.
UpdateRemoteNodes
It is used to propagate/remove files/directories to/from remote nodes using files under ORACLE_HOME/.patch_storage/<ID>/rac/*.
The directories listed in copy_dirs.txt will be copied to remote nodes.
The files listed in copy_files.txt will be copied to remote nodes.
The directories listed in remove_dirs.txt will be deleted from remote nodes.
The files listed in remove_files.txt will be deleted from remote nodes.
This is used where RAC setup is present.
Cleanup
It is used to clean up 'restore.sh, make.txt' files and 'rac, scratch, backup' directories in the ORACLE_HOME/.patch_storage directory. If the -ps option is used, then it cleans the above specified areas only for that patch, else for all patches under ORACLE_HOME/.patch_storage. You will still be able to roll back patches after this cleanup.
Example: opatch util cleanup -ps 6121183_Ju _21_2007_04_19_42
CopyListedFiles
It is used to copy all files listed in ORACLE_HOME/.patch_storage/<ID>/rac/copy_files.txt to remote nodes. If -fp option is used, then one can specify the path of the file containing the list of files to be copied. The files mentioned in this file will be copied to the remote nodes.
Example: opatch util copylistedfiles -fp a -remote_nodes ceintcb-a5
This is used where RAC setup is present.
CopyListedFilesTest
It is used to copy a single file to remote nodes. The usage remains the same as CopyListedFiles.
Example: opatch util copylistedfilestest -fp /home/oracle/a -remote_nodes ceintcb-a5
This is used where RAC setup is present.
CopyListedDirs
It is used to recursively copy all directories listed in ORACLE_HOME/.patch_storage/<ID>/rac/copy_dirs.txt to remote nodes. If -dp option is used, then one can specify the path of the file containing the list of directories to be copied. The directories mentioned in this file will be copied to the remote nodes.
This is used where RAC setup is present.
CopyListedDirsTest
It is used to copy a single file to remote nodes. The usage remains the same as CopyListedDirs.
This is used where RAC setup is present.
RemoveListedFiles
It is used to remove files listed in ORACLE_HOME/.patch_storage/<ID>/rac/remove_files.txt on remote nodes. If -fr option is used, then one can specify the path of the file containing the list of files to be removed. The files mentioned in this file will be removed from the remote nodes.
This is used where RAC setup is present.
RemoveListedFilesTest
It is used to remove a single file from remote nodes. The usage remains the same as RemoveListedFiles.
This is used where RAC setup is present.
RemoveListedDirs
It is used to recursively remove directories listed in ORACLE_HOME/.patch_storage/<ID>/rac/remove_dirs.txt from remote nodes. If -dr option is used, then one can specify the path of the file containing the list of directories to be removed. The directories mentioned in this file will be removed from the remote nodes.
This is used where RAC setup is present.
RemoveListedDirsTest
It is used to remove a single directory from remote nodes. The usage remains the same as RemoveListedDirs.
This is used where RAC setup is present.
RunLocalMake
It is used to invoke re-link on the local node. The make commands are stored in ORACLE_HOME/.patch_storage/<ID>/make.txt. You need to use the -ps option to specify the Patch ID with timestamp. A directory by this name will be present under ORACLE_HOME/.patch_storage. The make.txt file present under ORACLE_HOME/.patch_storage/<Patch ID with timestamp>/ will be used to perform the local make operation. This command cannot be run if you have already run Cleanup as it would have removed these make.txt files.
Example: opatch util runlocalmake -ps 6121250_ un_21_2007_04_16_11
RunRemoteMake
It is used to invoke re-link on remote nodes. The make commands are stored in
ORACLE_HOME/.patch_storage/<ID>/rac/makes_cmd.txt. The usage remains the same as RunLocalMake.
This is used where RAC setup is present.
RunAnyCommand
It is used to run any command on remote nodes. The command should be specified using the -cmd option.
Example: opatch util runanycommand -remote_nodes ceintcb-a5 -cmd ls
This is used where RAC setup is present.
LoadXML
It is used to check the validity of an XML file. The -xmlInput option can be used to specify the path of the xml file.
@ Support can use this utility to verify the integrity of XML files contained in the local and central inventory.
Example: opatch util loadxml -xmlInput $ORACLE_HOME/inventory/ContentsXML/comps.xml
Verify
It is used to run the patch verification process to ensure that the patch was applied to the ORACLE_HOME. It uses the defined ORACLE_HOME and the given patch location via -ph, to run the check.
@ Support can use this utility to re-run the OPatch verification stage, if required for patch diagnosis. The patch staging area has to be present or created.
Example: opatch util verify -ph ~/6646853/6121183
Troubleshooting
Sometimes the inventory gets corrupted because of some issue. In that case you need to repair it. Following are the two methods which I know can be used to repair the inventory.
1) Development has released a utility (checkinv) which can be used to repair the inventory if it becomes corrupted for some reason.
Please note that this version of checkinv works only for the 9.2 and 10.1 releases of Oracle Database. It does not work for the 10.2 release.
You can download the checkinv utility from Patch 4121946.
2) To clean up the components, determine the unwanted or dangling components, then specify those using the "-remcomp" option to clean up the component list.
The utility detects a bad inventory and asks the user's permission to repair it. If you choose to repair the inventory, the utility first backs up the files before any modification.
However, it is better to back up the inventory yourself before running the tool.
F
Summary
1) Download and unzip the latest OPatch to all cluster nodes
2) Stop the OEM agent and OSWatcher on both nodes if they are running
3) Stop the database instance on node1
4) Take a backup of the Oracle RAC home on node1 (first delete unnecessary files such as old trc and aud logs from the RAC home, otherwise the backup will take longer)
5) Start the database instance on node1
opatchauto stops the instance automatically while patching, so the database instance should be up and running.
Verify:
select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version, description from dba_registry_sqlpatch order by action_time;
Check invalid objects and components in the database:
set head on feed on pagesize 200 linesize 200
select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';
col action_time format a13;
col action format a10;
col namespace format a10;
col comments format a25;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.instance_name,r.* from v$instance i, registry$history r;
EXECUTE dbms_stats.gather_dictionary_stats;
6) Perform validation of the Oracle home inventory on both nodes, and verify the patches on both the GI and RDBMS homes:
<ORACLE_HOME>/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>
$ORACLE_HOME/OPatch/opatch lsinventory -oh $RDBMS_HOME
$ORACLE_HOME/OPatch/opatch lsinventory -oh $RDBMS_HOME | grep <patch no>
7) Check for conflicts in the 12.1 GI home as the root user:
# <ORACLE_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/26635815 -analyze -oh <ORACLE_RAC_HOME>
8) Upgrade the OPatch version to 12.1.0.1.7 or later if required
9) Apply patch 26635815 using the opatchauto utility as the root user:
# <ORACLE_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/26635815 -oh <ORACLE_RAC_HOME>
10) Start the database instances on node1 if not started automatically
11) Stop the database instance on node2
12) Take a backup of the Oracle RAC home on node2
13) Start the database instance on node2
(opatchauto stops the instance automatically while patching, so the database instance should be up and running)
14) Repeat steps 6-9 on node2
15) Start the database instances on node2 if not started automatically
16) Verify: select * from dba_registry_sqlpatch where PATCH_ID=26635815;
17) Validate objects and components and verify the patch is installed properly
18) Start the application and release all related jobs which were on hold
19) The application team will check that the application is working as before
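The per-node flow in the summary can be sketched as a dry-run shell wrapper. This is a hedged sketch: it only echoes the commands it would run, and the database name, instance name, and paths are illustrative assumptions, not values from this document.

```shell
# Dry-run sketch of one node's patch cycle; nothing is executed.
DB=rac
INST=RAC1
PATCH_DIR=/staging/clusterware_12cpatch/26635815
RAC_HOME=/u01/oracle/database/product/12.1.0.2

run() { echo "WOULD RUN: $*"; }   # replace the body with "$@" to execute for real

patch_one_node() {
    run srvctl stop instance -d "$DB" -i "$INST" -o immediate                       # step 3/11
    run tar -cvf /backup/home_database.tar -C "$RAC_HOME" .                         # step 4/12
    run srvctl start instance -d "$DB" -i "$INST"                                   # step 5/13
    run "$RAC_HOME/OPatch/opatchauto" apply "$PATCH_DIR" -analyze -oh "$RAC_HOME"   # step 7
    run "$RAC_HOME/OPatch/opatchauto" apply "$PATCH_DIR" -oh "$RAC_HOME"            # step 9
}
patch_one_node
```

Run once per node, in sequence, never in parallel across the cluster.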
Backout steps:
Roll back the applied patches using the steps below:
1) Stop the OEM agent and OSWatcher on both nodes
2) Stop the database instance on node1 and validate the Oracle inventory
3) On node1, analyze the rollback first. As the root user, run:
# <GRID_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26635815 -analyze
4) On node1, roll back the patch from the GI home using opatchauto. As the root user, run:
# <GI_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26635815 -oh <GI_HOME>
5) On node2, analyze the rollback:
# <GRID_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26635815 -analyze
6) Start all database instances on node1 and stop all instances on node2
7) On node2, roll back the patch from the GI home using opatchauto. As the root user, run:
# <GI_HOME>/OPatch/opatchauto rollback <UNZIPPED_PATCH_LOCATION>/26635815 -oh <GI_HOME>
8) Verify the clusterware components, confirm the patch was rolled back properly, and review the patch log
9) Check the following log files in $ORACLE_HOME/sqlpatch/26713565/ for errors:
26713565_rollback_<database SID>_<CDB name>_<timestamp>.log
10) Start the database instances on node2
11) Check for any corruption in the inventory
12) Roll back the SQL patch changes: datapatch -rollback all -force
13) Start the OEM agent and OSWatcher on both nodes
14) Inform the application team to start the application
15) Application verification and check of business objects
16) Free all related jobs which were on hold
17) Start DB monitoring on the database
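Backout step 9 above can be sketched as a small log scan. This is an assumption-laden demo: a temp directory stands in for $ORACLE_HOME/sqlpatch/26713565, and the sample log content is invented for illustration.

```shell
# Scan datapatch rollback logs for errors; a temp dir simulates the real
# $ORACLE_HOME/sqlpatch/26713565 directory in this demo.
LOGDIR=$(mktemp -d)
cat > "$LOGDIR/26713565_rollback_FSdatabase1_FSDB_2018Jan21.log" <<'EOF'
Patch 26713565 rollback: SUCCESS
EOF

check_rollback_logs() {
    # List any log containing ORA- or SP2- errors; succeed when clean.
    if grep -El 'ORA-|SP2-' "$1"/*rollback*.log 2>/dev/null; then
        echo "errors found"
        return 1
    fi
    echo "rollback logs clean"
}
check_rollback_logs "$LOGDIR"
```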
The OPatch utility from 11g onward has automated patch application for the Oracle Grid Infrastructure (GI) home and the Oracle RAC database homes when run with root privileges. It must be executed on each node in the cluster if the GI home or Oracle RAC database home is on non-shared storage. The utility should not be run in parallel on the cluster nodes.
The automated patch application process will automatically shut down and restart all database processes on the node. However, in this example we do not want the automatic restart, because we are applying two PSUs (one for grid and one for database). Disable the instance auto-start for node collabn1 and manually shut down the instance for patch application.
Note: this is a rolling patch, meaning it is applied on one node at a time while the other node remains available to the application.
On a production system, all active connections would need to be migrated to the other instance before doing this (for example, with services).
collabn1$ srvctl disable instance -d rac -i RAC1
collabn1$ srvctl stop instance -d rac -i RAC1
Note: the Oracle home being patched should have sufficient free space (approximately 20 GB). There is no need to stop the database instance and clusterware services manually when the opatchauto utility is used, as opatchauto stops and starts them automatically. Also check the hidden directory .patch_storage. Clusterware patching takes around 4 to 6 hours to complete.
Detailed steps
Prerequisites
1) Check for sufficient space on the grid home and /tmp, and check cluster health on both nodes.
On the grid home, a minimum of 20 GB should be available.
To check space on the Oracle home and /tmp:
df -g
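The space prerequisite can be scripted. A minimal sketch, assuming a POSIX `df -Pk` (1K blocks) is available where the AIX-style `df -g` shown above is not; the function names are my own.

```shell
# Verify a filesystem has at least <min_gb> GB free before patching.
free_gb() {
    # Print whole GB available on the filesystem containing $1
    df -Pk "$1" | awk 'NR==2 { printf "%d\n", $4 / 1048576 }'
}

check_space() {   # check_space <path> <min_gb>
    avail=$(free_gb "$1")
    if [ "$avail" -ge "$2" ]; then
        echo "OK: ${avail} GB free on $1"
    else
        echo "FAIL: only ${avail} GB free on $1 (need $2 GB)"
        return 1
    fi
}

check_space /tmp 0   # use 20 as the minimum for this PSU
```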
2) Run the commands below:
1) ps -ef|grep pmon - to see database status
2) ps -ef|grep tns - to see listener status
3) crsctl status resource -t - to see resource status
4) crs_stat -t - to see resource status
5) crsctl check crs
From the Grid user:
srvctl status database -d <Database Name>
srvctl config database -d <Database Name>
ps -ef | grep d.bin
server1[/u02/oracle/12.1.0/grid]$ ps -ef | grep d.bin
oracrs 7340258 1 3 11:12:06 - 5:26 /u02/oracle/12.1.0/grid/bin/gipcd.bin
root 7471224 1 6 11:11:26 - 7:59 /u02/oracle/12.1.0/grid/bin/ohasd.bin reboot
oracrs 8323106 12714060 1 11:12:07 - 1:48 /u02/oracle/12.1.0/grid/bin/evmlogger.bin -o /u02/oracle/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /u02/oracle/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.log
oracrs 8454172 1 5 11:12:03 - 7:16 /u02/oracle/12.1.0/grid/bin/oraagent.bin
root 9568452 1 3 11:12:42 - 5:26 /u02/oracle/12.1.0/grid/bin/octssd.bin reboot
root 9961542 1 9 11:14:18 - 13:06 /u02/oracle/12.1.0/grid/bin/crsd.bin reboot
oracrs 10027170 1 2 11:12:04 - 1:51 /u02/oracle/12.1.0/grid/bin/mdnsd.bin
oracrs 11075602 1 1 11:12:05 - 1:58 /u02/oracle/12.1.0/grid/bin/gpnpd.bin
root 11206846 1 0 11:12:16 - 0:00 /bin/sh /u02/oracle/12.1.0/grid/bin/ocssd
root 11534474 1 12 12:36:00 - 15:15 /u02/oracle/12.1.0/grid/bin/ologgerd -M -d /u02/oracle/12.1.0/grid/crf/db/server1
oracrs 11600026 1 0 11:15:01 - 0:28 /u02/oracle/12.1.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracrs 11730952 11206846 2 11:12:16 - 7:02 /u02/oracle/12.1.0/grid/bin/ocssd.bin
root 11993100 1 3 11:11:48 - 4:30 /u02/oracle/12.1.0/grid/bin/orarootagent.bin
root 12058762 1 1 11:12:15 - 2:40 /u02/oracle/12.1.0/grid/bin/cssdagent
root 12189862 1 2 11:12:15 - 2:36 /u02/oracle/12.1.0/grid/bin/cssdmonitor
root 12255442 1 19 11:14:17 - 28:39 /u02/oracle/12.1.0/grid/bin/osysmond.bin
oracrs 12714060 1 4 11:12:04 - 7:52 /u02/oracle/12.1.0/grid/bin/evmd.bin
root 13435058 1 28 11:14:35 - 34:22 /u02/oracle/12.1.0/grid/bin/orarootagent.bin
oracrs 13500578 1 21 11:14:34 - 11:59 /u02/oracle/12.1.0/grid/bin/oraagent.bin
fsdatabase 13828286 1 5 11:15:09 - 7:31 /u02/oracle/12.1.0/grid/bin/oraagent.bin
oracrs 26083376 1 0 12:32:54 - 0:11 /u02/oracle/12.1.0/grid/bin/tnslsnr LISTENER_SCAN2 -inherit
fuo44ppt 27984070 1 5 11:36:17 - 7:39 /u02/oracle/12.1.0/grid/bin/oraagent.bin
fug44ppt 39583962 1 3 11:39:05 - 7:18 /u02/oracle/12.1.0/grid/bin/oraagent.bin
oracrs 43909340 1 2 12:32:58 - 2:14 /u02/oracle/12.1.0/grid/bin/scriptagent.bin
fub44ppt 51249192 1 5 11:40:43 - 7:32 /u02/oracle/12.1.0/grid/bin/oraagent.bin
oracrs 61014038 1 0 12:32:54 - 0:13 /u02/oracle/12.1.0/grid/bin/tnslsnr LISTENER_SCAN3 -inherit
oracrs 34144900 1 0 12:32:48 - 0:01 /u02/oracle/12.1.0/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
oracrs 36438724 38666928 2 06:09:03 pts/0 0:00 grep d.bin
crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
crsctl status resource -t
crs_stat -t
server1[/u02/oracle/12.1.0/grid]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
diagsnap local_resource OFFLINE OFFLINE
ora.DATA1.dg ora....up.type ONLINE ONLINE server1
ora....FRA1.dg ora....up.type ONLINE ONLINE server1
ora....ATA1.dg ora....up.type ONLINE ONLINE server1
ora....FRS1.dg ora....up.type ONLINE ONLINE server1
ora....ATA1.dg ora....up.type ONLINE ONLINE server1
ora....FRS1.dg ora....up.type ONLINE ONLINE server1
ora....ER.lsnr ora....er.type ONLINE ONLINE server1
ora....N1.lsnr ora....er.type ONLINE ONLINE server2
ora....N2.lsnr ora....er.type ONLINE ONLINE server1
ora....N3.lsnr ora....er.type ONLINE ONLINE server1
ora.MGMTLSNR ora....nr.type ONLINE ONLINE server1
ora.VOTING.dg ora....up.type ONLINE ONLINE server1
ora.asm ora.asm.type ONLINE ONLINE server1
ora.cvu ora.cvu.type ONLINE ONLINE server1
ora.fsdatabase.db ora....se.type ONLINE ONLINE server2
ora....4ppt.db ora....se.type ONLINE ONLINE server2
ora....4ppt.db ora....se.type ONLINE ONLINE server1
ora....4ppt.db ora....se.type ONLINE ONLINE server1
ora.mgmtdb ora....db.type ONLINE ONLINE server1
ora....network ora....rk.type ONLINE ONLINE server1
ora.oc4j ora.oc4j.type ONLINE ONLINE server1
ora.ons ora.ons.type ONLINE ONLINE server1
ora....SM1.asm application ONLINE ONLINE server1
ora....ON.lsnr application ONLINE ONLINE server1
ora....ton.ons application ONLINE ONLINE server1
ora....ton.vip ora....t1.type ONLINE ONLINE server1
ora.scan1.vip ora....ip.type ONLINE ONLINE server2
ora.scan2.vip ora....ip.type ONLINE ONLINE server1
ora.scan3.vip ora....ip.type ONLINE ONLINE server1
ora....SM2.asm application ONLINE ONLINE server2
ora....ON.lsnr application ONLINE ONLINE server2
ora.server2.ons application ONLINE ONLINE server2
ora.server2.vip ora....t1.type ONLINE ONLINE server2
server1[/u02/oracle/12.1.0/grid]$
d) Log in to each database running on the server and check the session count:
select name,open_mode from v$database;
select count(1),inst_id from gv$session group by inst_id;
alter system archive log current;  -- switch log on the database
e) Check invalid objects and stale stats.
spool pre_patch_invalid_objects_${ORACLE_SID}.log;
set head on feed on pagesize 200 linesize 200
select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';
select comp_id, VERSION , STATUS from dba_registry;
col TABLE_NAME for a30
col PARTITION_NAME for a20
col SUBPARTITION_NAME for a20
select OWNER,TABLE_NAME,PARTITION_NAME,SUBPARTITION_NAME,NUM_ROWS,LAST_ANALYZED from dba_TAB_STATISTICS where STALE_STATS='YES';
select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version, description from dba_registry_sqlpatch order by action_time;
spool off
If there are stale stats, run the following:
EXECUTE dbms_stats.gather_dictionary_stats;
col action_time format a13;
col action format a10;
col namespace format a10;
col comments format a25;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.instance_name,r.* from v$instance i, registry$history r;
3) Validate the Grid home inventory and upgrade the OPatch version to 12.1.0.1.7 or later if required
<GRID_HOME>/OPatch/opatch lsinventory -detail -oh <GRID_HOME>
<GRID_HOME>/OPatch/opatch lsinventory
$ opatch lsinv -detail
4) Check the OPatch version. For each GI home being patched, run the following command as the home owner after extracting the OPatch utility:
$ <ORACLE_HOME>/OPatch/opatch version
server1[/u02/oracle/12.1.0/grid]$ OPatch/opatch version
OPatch Version: 12.2.0.1.8
OPatch succeeded.
server1[/u02/oracle/12.1.0/grid]$
The version output of the previous command should be 12.1.0.1.7 or later.
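The minimum-version check can be automated. A sketch using only portable tools (AIX `sort` lacks GNU's `-V`, so each dotted field is compared numerically); the function name is my own.

```shell
# Succeed when ACTUAL version >= MINIMUM version, comparing each dotted
# field numerically (so 12.10 sorts after 12.9).
version_ge() {   # version_ge ACTUAL MINIMUM
    lowest=$(printf '%s\n%s\n' "$1" "$2" \
        | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n -k5,5n | head -1)
    [ "$lowest" = "$2" ]
}

ver=12.2.0.1.8   # e.g. parsed from: $ORACLE_HOME/OPatch/opatch version
if version_ge "$ver" 12.1.0.1.7; then
    echo "OPatch $ver meets the 12.1.0.1.7 minimum"
else
    echo "OPatch $ver is too old; upgrade before patching"
fi
```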
5) Unzip the downloaded OCT 2017 PSU as the Grid user
$ cd <UNZIPPED_PATCH_LOCATION>
Check that the directory is empty.
$ ls
Unzip the patch as grid home owner.
$ unzip p26635815_121020_<platform>.zip
If required, change the permissions, e.g.:
chmod 775 p26635815_121020_AIX64-5L.zip
As the root user:
chown oracrs:dba p26635815_121020_AIX64-5L.zip
As oracrs:
unzip p26635815_121020_AIX64-5L.zip
[node1:database:OPatch] ls -ltr /staging/12ccluster_patch/26635815
total 176
drwxr-xr-x 5 oracrs dba 256 Oct 3 12:01 26392164
drwxr-xr-x 5 oracrs dba 256 Oct 3 12:04 26392192
drwxrwxr-x 15 oracrs dba 4096 Oct 3 12:04 26713565
drwxr-xr-x 6 oracrs dba 256 Oct 3 12:04 21436941
drwxr-xr-x 2 oracrs dba 4096 Oct 3 12:04 automation
-rw-r--r-- 1 oracrs dba 0 Oct 3 12:05 README.txt
-rw-rw-r-- 1 oracrs dba 4987 Oct 3 14:36 bundle.xml
-rw-rw-r-- 1 oracrs dba 73591 Oct 16 16:13 README.html
[heysham:fusppt:OPatch]
6) Check for conflicts in the 12.1 GI home (this is new in 12c)
a) The following checks all available grid and Oracle homes. Run it as the root user:
# <GRID_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/26635815 -analyze
b) To verify a single home only, run as the root user:
./opatchauto apply /staging/clusterware_12cpatch/26635815 -analyze -oh /u01/oracle/database/product/12.1.0.2
[server1:root:/staging/clusterware_12cpatch:] /staging/clusterware_12cpatch/26635815
ksh: /staging/clusterware_12cpatch/26635815: cannot execute
[server1:root:/staging/clusterware_12cpatch:] ls -ltr /staging/clusterware_12cpatch/26635815
total 176
drwxr-xr-x 5 oracrs dba 256 Oct 3 12:01 26392164
drwxr-xr-x 5 oracrs dba 256 Oct 3 12:04 26392192
drwxrwxr-x 15 oracrs dba 4096 Oct 3 12:04 26713565
drwxr-xr-x 6 oracrs dba 256 Oct 3 12:04 21436941
drwxr-xr-x 2 oracrs dba 4096 Oct 3 12:04 automation
-rw-r--r-- 1 oracrs dba 0 Oct 3 12:05 README.txt
-rw-rw-r-- 1 oracrs dba 4987 Oct 3 14:36 bundle.xml
-rw-rw-r-- 1 oracrs dba 73591 Oct 16 16:13 README.html
[server1:root:/staging/clusterware_12cpatch:] cd /u02/oracle/12.1.0/grid/OPatch
[server1:root:/u02/oracle/12.1.0/grid/OPatch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[server1:root:/u02/oracle/12.1.0/grid/OPatch:] df -g .
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/u02 139.00 27.37 81% 766737 11% /u02
[server1:root:/u02/oracle/12.1.0/grid/OPatch:]
7) Check for conflicts in the 12.1 Oracle RAC home
Run it as the root user; opatchauto must be run from one of the homes specified:
[node1:root:/home/root:] cd /u01/oracle/database/product/12.1.0.2
[node1:root:/u01/oracle/database/product/12.1.0.2:] cd OPatch
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] ./opatchauto apply /staging/clusterware_12cpatch/26635815 -analyze -oh /u01/oracle/database/product/12.1.0.2
OPatchauto session is initiated at Sun Jan 21 12:15:08 2018
System initialization log file is /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchautodb/systemconfig2018-01-21_12-15-20PM.log.
Session log file is /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/opatchauto2018-01-21_12-17-13PM.log
The id for this session is NWNS
Executing OPatch prereq operations to verify patch applicability on home /u01/oracle/database/product/12.1.0.2
Patch applicablity verified successfully on home /u01/oracle/database/product/12.1.0.2
Verifying patch inventory on home /u01/oracle/database/product/12.1.0.2
Patch inventory verified successfully on home /u01/oracle/database/product/12.1.0.2
Verifying SQL patch applicablity on home /u01/oracle/database/product/12.1.0.2
Following step failed during analysis:
/bin/sh -c 'ORACLE_HOME=/u01/oracle/database/product/12.1.0.2 ORACLE_SID=FSdatabase1 /u01/oracle/database/product/12.1.0.2/OPatch/datapatch -prereq'
SQL patch applicablity verified successfully on home /u01/oracle/database/product/12.1.0.2
OPatchAuto successful.
--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:node1
RAC Home:/u01/oracle/database/product/12.1.0.2
==Following patches were SKIPPED:
Patch: /staging/clusterware_12cpatch/26635815/21436941
Reason: This patch is not applicable to this specified target type - "rac_database"
Patch: /staging/clusterware_12cpatch/26635815/26392164
Reason: This patch is not applicable to this specified target type - "rac_database"
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /staging/clusterware_12cpatch/26635815/26392192
Log: /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-21_12-17-30PM_1.log
Patch: /staging/clusterware_12cpatch/26635815/26713565
Log: /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-21_12-17-30PM_1.log
OPatchauto session completed at Sun Jan 21 12:22:36 2018
Time taken to complete the session 7 minutes, 35 seconds
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:]
8) Take a backup of the Oracle RAC home
cd $ORACLE_HOME
pwd
tar -cvf /backup/home_database.tar .
ls -ltr /backup/home_database.tar
du -sg /backup/home_database.tar
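The backup step can incorporate the summary's advice to leave old trace/audit logs out so the tar stays small. A sketch assuming GNU tar's `-T` (file-list) option; a temp directory stands in for the real $ORACLE_HOME in this demo.

```shell
# Back up a home while excluding *.trc and *.aud files.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
mkdir -p "$SRC/bin" "$SRC/rdbms/audit" "$SRC/rdbms/log"
echo oracle > "$SRC/bin/oracle"
echo trace  > "$SRC/rdbms/log/old_ora_1234.trc"
echo audit  > "$SRC/rdbms/audit/ora_5678.aud"

# Build the file list with find, then tar only those files
( cd "$SRC" && find . -type f ! -name '*.trc' ! -name '*.aud' \
      | tar -cf "$DEST/home_database.tar" -T - )
tar -tf "$DEST/home_database.tar"
```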
9) Apply the patch using the opatchauto command as the root user:
export PATH=$PATH:<GI_HOME>/OPatch
e.g.
export PATH=$PATH:/u02/oracle/12.1.0.2/grid/OPatch
ls -ltr /u02/oracle/12.1.0/grid/OPatch
export PATH=$PATH:/u02/oracle/12.1.0/grid/OPatch
ls -ltr /u02/oracle/12.1.0/grid
ls -ltr /patch/clusterware_12cpatch/26635815
# <GRID_HOME>/OPatch/opatchauto apply <PATH_TO_PATCH_DIRECTORY> -oh <GRID_HOME>
echo $PATH
# export PATH=$PATH:/u01/oracle/database/product/12.1.0.2/OPatch
# opatchauto apply /staging/clusterware_12cpatch/26635815 -oh /u01/oracle/database/product/12.1.0.2
echo $PATH
e.g.
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] export PATH=$PATH:/u01/oracle/database/product/12.1.0.2/OPatch
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] ./opatchauto apply /staging/clusterware_12cpatch/26635815 -oh /u01/oracle/database/product/12.1.0.2
OPatchauto session is initiated at Sun Jan 21 12:25:45 2018
System initialization log file is /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchautodb/systemconfig2018-01-21_12-25-52PM.log.
Session log file is /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/opatchauto2018-01-21_12-26-39PM.log
The id for this session is XDTU
Executing OPatch prereq operations to verify patch applicability on home /u01/oracle/database/product/12.1.0.2
Patch applicablity verified successfully on home /u01/oracle/database/product/12.1.0.2
Verifying patch inventory on home /u01/oracle/database/product/12.1.0.2
Patch inventory verified successfully on home /u01/oracle/database/product/12.1.0.2
Verifying SQL patch applicablity on home /u01/oracle/database/product/12.1.0.2
SQL patch applicablity verified successfully on home /u01/oracle/database/product/12.1.0.2
Preparing to bring down database service on home /u01/oracle/database/product/12.1.0.2
Successfully prepared home /u01/oracle/database/product/12.1.0.2 to bring down database service
Bringing down database service on home /u01/oracle/database/product/12.1.0.2
Following database(s) and/or service(s) are stopped and will be restarted later during the session: fsdatabase
Database service successfully brought down on home /u01/oracle/database/product/12.1.0.2
Performing prepatch operation on home /u01/oracle/database/product/12.1.0.2
Perpatch operation completed successfully on home /u01/oracle/database/product/12.1.0.2
Start applying binary patch on home /u01/oracle/database/product/12.1.0.2
Successfully executed command: /usr/sbin/slibclean
Binary patch applied successfully on home /u01/oracle/database/product/12.1.0.2
Performing postpatch operation on home /u01/oracle/database/product/12.1.0.2
Postpatch operation completed successfully on home /u01/oracle/database/product/12.1.0.2
Starting database service on home /u01/oracle/database/product/12.1.0.2
Database service successfully started on home /u01/oracle/database/product/12.1.0.2
Preparing home /u01/oracle/database/product/12.1.0.2 after database service restarted
No step execution required.........
Prepared home /u01/oracle/database/product/12.1.0.2 successfully after database service restarted
Trying to apply SQL patch on home /u01/oracle/database/product/12.1.0.2
SQL patch applied successfully on home /u01/oracle/database/product/12.1.0.2
Verifying patches applied on home /u01/oracle/database/product/12.1.0.2
Patch verification completed with warning on home /u01/oracle/database/product/12.1.0.2
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:node1
RAC Home:/u01/oracle/database/product/12.1.0.2
Summary:
==Following patches were SKIPPED:
Patch: /staging/clusterware_12cpatch/26635815/21436941
Reason: This patch is not applicable to this specified target type - "rac_database"
Patch: /staging/clusterware_12cpatch/26635815/26392164
Reason: This patch is not applicable to this specified target type - "rac_database"
==Following patches were SUCCESSFULLY applied:
Patch: /staging/clusterware_12cpatch/26635815/26392192
Log: /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-21_12-33-00PM_1.log
Patch: /staging/clusterware_12cpatch/26635815/26713565
Log: /u01/oracle/database/product/12.1.0.2/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-21_12-33-00PM_1.log
OPatchauto session completed at Sun Jan 21 12:40:58 2018
Time taken to complete the session 15 minutes, 15 seconds
[node1:root:/u01/oracle/database/product/12.1.0.2/OPatch:] ps -ef|grep pmon
10) Once the patch has completed on the first node, the clusterware and all Oracle database instances need to be started if they do not start automatically.
We will follow the same steps for the other node.
Verification steps after clusterware and database instance startup:
1) ps -ef|grep pmon - to see database status
2) ps -ef|grep tns - to see listener status
3) crsctl status resource -t - to see resource status
4) crs_stat -t - to see resource status
SQL> select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version, description from dba_registry_sqlpatch order by action_time;
ACTION_TIME_2 PATCH_ID PATCH_UID ACTION VERSION
-------------------- ---------- ---------- --------------- --------------------
DESCRIPTION
--------------------------------------------------------------------------------
25-MAY-2016 22291127 19983562 APPLY 12.1.0.2
Database Patch Set Update : 12.1.0.2.160419 (22291127)
28-DEC-2016 24006101 20648640 APPLY 12.1.0.2
Database Patch Set Update : 12.1.0.2.161018 (24006101)
21-JAN-2018 26713565 21606134 APPLY 12.1.0.2
DATABASE PATCH SET UPDATE 12.1.0.2.171017
select comp_name, version, status from dba_registry order by comp_name;
SQL> exit
set lines 200 pages 200
col action_time format a40;
col action format a30;
col namespace format a10;
col comments format a30;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.INST_ID,i.instance_name,r.* from gv$instance i, registry$history r;
7) Check the alert log for ASM and all database instances.
8) Check whether sessions are balanced:
select name,open_mode from v$database;
select count(1),inst_id from gv$session group by inst_id;
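The balance check can be made mechanical: feed the per-instance counts from the gv$session query above to a small awk filter and flag skew beyond a threshold. The 1.5x threshold and the sample counts are illustrative assumptions.

```shell
# Flag session imbalance across instances; stdin lines: "<count> <inst_id>".
balance() {
    awk 'NR==1 { min=$1; max=$1 }
         { if ($1 < min) min=$1; if ($1 > max) max=$1 }
         END {
             if (min > 0 && max / min > 1.5)
                 print "UNBALANCED (" min " vs " max ")"
             else
                 print "balanced (" min " vs " max ")"
         }'
}

balance <<'EOF'
120 1
115 2
EOF
```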
9) Validate the inventory after the patch:
<GRID_HOME>/OPatch/opatch lsinventory
EXECUTE dbms_stats.gather_dictionary_stats;
select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version, description from dba_registry_sqlpatch order by action_time;
Check invalid objects and components in the database:
set head on feed on pagesize 200 linesize 200
select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';
col action_time format a13;
If the database somehow has not been updated with the latest patch, you can run datapatch again from the last node after setting ORACLE_HOME and ORACLE_SID.
Shut down all instances except the one on the last node, then run:
$ORACLE_HOME/OPatch/datapatch
=============startup and stop steps==================
srvctl stop instance -d <database name> -i <database instance> -o immediate
From Root User
cd <grid_home>/bin
crsctl stop crs
startup process
From Root User
cd <grid_home>/bin
crsctl start crs
From GRID HOME
srvctl start instance -d <database name> -i <database instance>
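After `crsctl start crs`, it helps to poll until the stack is fully online before starting instances. A sketch where `check_crs` is a stub replaying the expected output; in real use its body would be `$GRID_HOME/bin/crsctl check crs`.

```shell
# Stub standing in for: $GRID_HOME/bin/crsctl check crs
check_crs() {
    cat <<'EOF'
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
EOF
}

wait_for_crs() {
    tries=0
    while [ "$tries" -lt 30 ]; do
        # All four CRS-45xx services must report online
        if [ "$(check_crs | grep -c 'is online')" -eq 4 ]; then
            echo "CRS fully online"
            return 0
        fi
        tries=$((tries + 1))
        sleep 10
    done
    echo "CRS did not come online in time"
    return 1
}
wait_for_crs
```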
Advanced Options for OPatch
Here are some of the advanced options of the OPatch utility.
UpdateRemoteNodes
It is used to propagate/remove files/directories to/from remote nodes using files under ORACLE_HOME/.patch_storage/<ID>/rac/*.
The directories listed in copy_dirs.txt will be copied to remote nodes.
The files listed in copy_files.txt will be copied to remote nodes.
The directories listed in remove_dirs.txt will be deleted from remote nodes.
The files listed in remove_files.txt will be deleted from remote nodes.
This is used where RAC setup is present.
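To illustrate the idea of a file-driven propagation list, here is a dry-run sketch that turns each entry of a copy_files.txt-style list into an echoed copy command. The list contents, node name, and use of scp are illustrative assumptions; OPatch performs the actual propagation internally.

```shell
# Dry run: map a file list to per-file remote-copy commands (echoed only).
LIST=$(mktemp)
printf '%s\n' lib/libserver12.a bin/oracle > "$LIST"

propagate() {   # propagate <list_file> <remote_node>
    while IFS= read -r f; do
        echo "WOULD RUN: scp \$ORACLE_HOME/$f $2:\$ORACLE_HOME/$f"
    done < "$1"
}
propagate "$LIST" ceintcb-a5
```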
Cleanup
It is used to clean up ‘restore.sh, make.txt’ files and ‘rac, scratch, backup’ directories in the ORACLE_HOME/.patch_storage directory. If -ps option is used, then it cleans the above specified areas only for that patch, else for all patches under ORACLE_HOME/.patch_storage. You will be still able to rollback patches after this cleanup.
Example: opatch util cleanup -ps 6121183_Ju _21_2007_04_19_42
CopyListedFiles
It is used to copy all files listed in ORACLE_HOME/.patch_storage/<ID>/rac/copy_files.txt to remote nodes. If -fp option is used, then one can specify the path of the file containing the list of files to be copied. The files mentioned in this file will be copied to the remote nodes.
Example: opatch util copylistedfiles -fp a -remote_nodes ceintcb-a5
This is used where RAC setup is present.
CopyListedFilesTest
It is used to copy a single file to remote nodes. The usage remains the same as CopyListedFiles.
Example: opatch util copylistedfilestest -fp /home/oracle/a -remote_nodes ceintcb-a5
This is used where RAC setup is present.
CopyListedDirs
It is used to recursively copy all directories listed in ORACLE_HOME/.patch_storage/<ID>/rac/copy_dirs.txt to remote nodes. If -dp option is used, then one can specify the path of the file containing the list of directories to be copied. The directories mentioned in this file will be copied to the remote nodes.
This is used where RAC setup is present.
CopyListedDirsTest
It is used to copy a single file to remote nodes. The usage remains the same as CopyListedDirs.
This is used where RAC setup is present.
RemoveListedFiles
It is used to remove files listed in ORACLE_HOME/.patch_storage/<ID>/rac/remove_files.txt on remote nodes. If -fr option is used, then one can specify the path of the file containing the list of files to be removed. The files mentioned in this file will be removed from the remote nodes.
This is used where RAC setup is present.
RemoveListedFilesTest
It is used to remove a single file from remote nodes. The usage remains the same as RemoveListedFiles.
This is used where RAC setup is present.
RemoveListedDirs
It is used to recursively remove directories listed in ORACLE_HOME/.patch_storage/<ID>/rac/remove_dirs.txt from remote nodes. If -dr option is used, then one can specify the path of the file containing the list of directories to be removed. The directories mentioned in this file will be removed from the remote nodes.
This is used where RAC setup is present.
RemoveListedDirsTest
It is used to remove a single directory from remote nodes. The usage remains the same as RemoveListedDirs.
This is used where RAC setup is present.
RunLocalMake
It is used to invoke re-link on the local node. The make commands are stored in ORACLE_HOME/.patch_storage/<ID>/make.txt. You need to use the -ps option to specify the Patch ID with timestamp. A directory by this name will be present under ORACLE_HOME/.patch_storage. The make.txt file present under ORACLE_HOME/.patch_storage/<Patch ID with timestamp>/ will be used to perform the local make operation. This command cannot be run if you have already run Cleanup as it would have removed these make.txt files.
Example: opatch util runlocalmake -ps 6121250_ un_21_2007_04_16_11
RunRemoteMake
It is used to invoke re-link on remote nodes. The make commands are stored in
ORACLE_HOME/.patch_storage/<ID>/rac/makes_cmd.txt. The usage remains the same as RunLocalMake.
This is used where RAC setup is present.
RunAnyCommand
It is used to run any command on remote nodes. The command should be specified using the -cmd option.
Example: opatch util runanycommand -remote_nodes ceintcb-a5 -cmd ls
This is used where RAC setup is present.
LoadXML
It is used to check the validity of an XML file. The -xmlInput option can be used to specify the path of the xml file.
Support can use this utility to verify the integrity of XML files contained in the local and central inventory.
Example: opatch util loadxml -xmlInput $ORACLE_HOME/inventory/ContentsXML/comps.xml
Verify
It is used to run the patch verification process to ensure that the patch was applied to the ORACLE_HOME. It uses the defined ORACLE_HOME and the given patch location via -ph, to run the check.
Support can use this utility to re-run the OPatch verification stage, if required for patch diagnosis. The patch staging area has to be present or created.
Example: opatch util verify -ph ~/6646853/6121183
Troubleshooting
Sometimes the inventory gets corrupted. In that case you need to repair it. The following are the two methods which I know can be used to repair the inventory.
1) Development has released a utility (checkinv) which can be used to repair the inventory if it becomes corrupted for some reason.
Please note that this version of checkinv works only for the 9.2 and 10.1 releases of Oracle Database; it does not work for the 10.2 release.
You can download the checkinv utility from Patch 4121946.
To clean up the components, you should determine the unwanted or dangling components, then specify those using the “-remcomp” option to clean up the component list.
The utility detects a bad inventory and asks the user's permission to repair it. If you choose to repair the inventory, the utility first backs up the files before any modification.
However, it is better to back up the inventory yourself before running the tool.