Types of Patches
One off Patch (Interim Patches)
This is a single patch that fixes a specific problem.
Diagnostic Patches
These mainly help diagnose and verify a fix, or deliver a collection of bug fixes; for example, IBM used to provide diagnostic patches for CDC.
Merge Level Request Patch (MLR)
If a one-off patch conflicts with an already installed patch, Oracle Development can be asked to provide a merged set of these patches.
Critical Patch Updates (Security Patch Updates)
CPUs are released quarterly and address security issues in Oracle products.
PSU (Patch Set Updates)
Cumulative patch bundles that contain well-tested and proven bug fixes for critical issues.
Bundle Patches
Bundle Patches were used before Patch Set Updates; they bundled a number of patches per product area.
Patch Set
11.2.0.3 to 11.2.0.4
Major Release Updates
11g to 12c
Flow diagrams
All-Node Patch
. Shutdown all Oracle instances on all nodes
. Apply the patch to the RAC home on all nodes
. Bring all instances up
Minimum downtime
. Shutdown the Oracle instance on node 1
. Apply the patch to the RAC home on node 1
. Shutdown the Oracle instance on node 2
. Apply the patch to the RAC home on node 2
. Shutdown the Oracle instance on node 3
. At this point, instances on nodes 1 and 2 can be brought up
. Apply the patch to the RAC home on node 3
. Startup the Oracle instance on node 3
Rolling patch (no downtime)
. Shutdown the Oracle instance on node 1
. Apply the patch to the RAC home on node 1
. Start the Oracle instance on node 1
. Shutdown the Oracle instance on node 2
. Apply the patch to the RAC home on node 2
. Start the Oracle instance on node 2
. Shutdown the Oracle instance on node 3
. Apply the patch to the RAC home on node 3
. Start the Oracle instance on node 3
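A minimal command-level sketch of one pass of the rolling approach, assuming a two-node RAC database named rac with instances RAC1 and RAC2 and a one-off patch staged in <patch location> (these names are illustrative, not from a specific environment):
srvctl stop instance -d rac -i RAC1              # shut down the instance on node 1
$ORACLE_HOME/OPatch/opatch apply <patch location>   # apply the patch to the RAC home on node 1
srvctl start instance -d rac -i RAC1             # start the instance on node 1
# repeat the same three steps on node 2 with instance RAC2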
===copy to different host=========
scp -p p26635815_121020_AIX64-5L.zip usernamer@server2:/staging/clusterware_12cpatch/
verify patch applied
select * from dba_registry_sqlpatch ;
select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version, description from dba_registry_sqlpatch order by action_time;
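Binary-level verification can also be done against the home itself; a sketch, where <patch id> is whichever bundle or sub-patch number was applied:
<GRID_HOME>/OPatch/opatch lsinventory | grep <patch id>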
TO TAKE GRID_HOME AND INVENTORY BACKUP after shutting down the database instance and clusterware on one node
srvctl config database -d <database name> -a
srvctl status database -d <database name>
srvctl stop instance -d <database name> -i <instance name> -o immediate
srvctl status database -d <database name>
From the root user:
Go to the bin directory of the grid home.
./crsctl stop crs
Check the status of CRS:
./crsctl check crs
ps -ef|grep d.bin
echo $ORACLE_SID
echo $GRID_HOME
cd $GRID_HOME
pwd
du -sg .
tar -cvf /backup/grid_home_database.tar .
du -sg /backup/grid_home_database.tar
Backup for inventory
cd /u02/oracle/oraInventory
tar -cvf /backup/grid_home_inventory.tar .
After taking the backup of the grid home and inventory, start CRS again: while the patch is being applied, the clusterware should be up and running and the database instance should be down.
start crs
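For example, as the root user (the same commands as in the startup/stop steps later in this note):
cd <grid_home>/bin
./crsctl start crs
./crsctl check crs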
Oct PSU 2017 on 12c Clusterware
The Opatch utility from 11g onward has automated the patch application for the Oracle Grid Infrastructure (GI) home and the Oracle RAC database homes when run with root privileges. It must be executed on each node in the cluster if the GI home or Oracle RAC database home is in non-shared storage. The utility should not be run in parallel on the cluster nodes.
The automated patch application process will automatically shutdown and restart all database processes on the node. However, we don't want the automatic restart – because we are applying two PSUs (one for grid and one for database). Disable the instance auto-start for node collabn1 and manually shutdown the instance for patch application.
Information: This is a rolling patch, meaning the patch is applied on one node at a time while the other node remains available to the application.
On a production system, all active connections would need to be migrated to the other instance before doing this (for example, with services).
collabn1$ srvctl disable instance -d rac -i RAC1
collabn1$ srvctl stop instance -d rac -i RAC1
Note: The Grid Home being patched should have sufficient free space (approx. 50 GB). There is no need to stop the database instances and clusterware services when the opatchauto utility is used, as opatchauto stops and starts them automatically. Also check the hidden directory .patch_storage. Clusterware patching takes roughly 4 to 6 hours to complete.
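A quick sketch for checking free space and the size of the hidden .patch_storage directory on AIX (paths are examples only; substitute your own Grid Home location):
df -g /u02                                      # free space on the filesystem holding the Grid Home
du -sg /u02/oracle/12.1.0/grid/.patch_storage   # space already used by OPatch patch storage (may not exist before the first patch)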
Prerequisite
1) Check for sufficient free space on the Grid Home and /tmp, and check Oracle Clusterware health on both nodes.
A minimum of 20 GB should be available on the Grid Home filesystem.
crsctl query crs activeversion
crsctl query crs releaseversion
crsctl query crs softwareversion
df -g
cd /tmp
df -g .
server[/u02/oracle/12.1.0/grid]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
server[/u02/oracle/12.1.0/grid]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
server[/u02/oracle/12.1.0/grid]$ crsctl query crs softwareversion
Oracle Clusterware version on node [server1] is [12.1.0.2.0]
server[/u02/oracle/12.1.0/grid]$
2) Capture the IP addresses from both nodes, as we have sometimes observed duplicate IP issues on the nodes.
ifconfig -a
Check library file ownership and permissions from each grid home and Oracle Home on both nodes:
node1[/u02/oracle/12.1.0/grid]$ cd lib
node1[/u02/oracle/12.1.0/grid/lib]$ ls -ltr libclntsh.so
-rwx------ 1 oracrs dba 61823455 Dec 30 07:42 libclntsh.so
node1[/u02/oracle/12.1.0/grid/lib]$ ls -ltr libclntshcore.so
-rwx------ 1 oracrs dba 4189584 Dec 30 07:42 libclntshcore.so
node1[/u02/oracle/12.1.0/grid/lib]$ ls -ltr libocci.so
-rwx------ 1 oracrs dba 4996222 Dec 30 07:42 libocci.so
node1[/u02/oracle/12.1.0/grid/lib]$ ls -ltr libttsh12.so
-rwx------ 1 oracrs dba 83317612 Dec 30 07:42 libttsh12.so
node1[/u02/oracle/12.1.0/grid/lib]$
node1[/u02/oracle/12.1.0/grid/lib}$ ls -ltr /u06/oracle/database/product/12.1.0.2/lib/libsqlplus.so
Upon investigation, we found that orarootagent.bin was owned by oracrs; ideally it should be owned by root. While running the pre-check we did not notice any issue, so we are not sure whether the patch caused the permission issue.
[node1 :root:/u02/oracle/12.1.0.2/grid/bin:] ls -ltr orarootagent.bin
-rwxr----x 1 oracrs oinstall 482604228 Oct 06 2016 orarootagent.bin
5) Check ACFS mount points
Unmounting ACFS File Systems
ACFS file systems can be used by Oracle RAC for hosting software files for the database. They can also be used as general-purpose file systems for non-database files. ACFS file systems are managed and administered by the Oracle Grid Infrastructure, so they are impacted when the GI stack is shut down for patching GI homes. Shut down the processes (including non-Oracle processes) that use files on ACFS, then unmount the ACFS file systems.
If the ACFS file system is used by Oracle database software, then perform Steps 1 and 2.
1. Execute the following command to find the names of the CRS managed ACFS file system resource.
# crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
2. Execute the following command to stop the CRS managed ACFS file system resource with the resource name found from Step 1.
As root user execute:
# srvctl stop filesystem -d <volume device path> -n <node to stop file system on>
If the ACFS file system is not used for Oracle Database software and is registered in the ACFS registry, perform the following steps.
1. Execute the following command to find all ACFS file system mount points.
As root user execute:
# /sbin/acfsutil registry
2. Unmount ACFS file systems found in Step 1.
As root user execute:
# /bin/umount <mount-point>
Note:
On Linux, use: /bin/umount
On Solaris, use: /sbin/umount
On AIX, use: /usr/sbin/umount
3. Verify that the ACFS file systems are unmounted. Execute the following command to verify.
As the root user execute:
# /sbin/acfsutil info fs
The previous command should return the following message if there are no ACFS file systems mounted.
"acfsutil info fs: ACFS-03036: no mounted ACFS file systems"
Mounting ACFS File Systems
If the ACFS file system is used by Oracle database software, then perform Steps 1 and 2.
1. Execute the following command to find the names of the CRS managed ACFS file system resource.
As root user execute:
# crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
2. Execute the following command to start and mount the CRS managed ACFS file system resource with the resource name found from Step 1.
As root user execute:
# srvctl start filesystem -d <volume device path> -n <node to start file system on>
If the ACFS file system is not used for Oracle Database software and is registered in the ACFS registry, these file systems should get automatically mounted when the CRS stack comes up. Perform Steps 1 and 2 if it is not already mounted.
1. Execute the following command to find all ACFS file system mount points.
As the root user execute:
# /sbin/acfsutil registry
2. Mount ACFS file systems found in Step 1.
As the root user execute:
# /bin/mount <mount-point>
Note:
On Linux, use: /bin/mount
On Solaris, use: /sbin/mount
On AIX, use: /usr/sbin/mount
If any ACFS mount point is present, unmount it before shutting down the clusterware.
a) check database instance and oswatcher on both nodes
ps -ef|grep pmon
ps -ef|grep oswa
from Grid user
srvctl status database -d <Database Name>
b) check listener from both nodes
ps -ef|grep tns
c) check cluster status from grid owner user on any node
ps -ef | grep d.bin
server1[/u02/oracle/12.1.0/grid]$ ps -ef | grep d.bin
oracrs 7340258 1 3 11:12:06 - 5:26 /u02/oracle/12.1.0/grid/bin/gipcd.bin
root 7471224 1 6 11:11:26 - 7:59 /u02/oracle/12.1.0/grid/bin/ohasd.bin reboot
oracrs 8323106 12714060 1 11:12:07 - 1:48 /u02/oracle/12.1.0/grid/bin/evmlogger.bin -o /u02/oracle/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /u02/oracle/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.log
oracrs 8454172 1 5 11:12:03 - 7:16 /u02/oracle/12.1.0/grid/bin/oraagent.bin
root 9568452 1 3 11:12:42 - 5:26 /u02/oracle/12.1.0/grid/bin/octssd.bin reboot
root 9961542 1 9 11:14:18 - 13:06 /u02/oracle/12.1.0/grid/bin/crsd.bin reboot
oracrs 10027170 1 2 11:12:04 - 1:51 /u02/oracle/12.1.0/grid/bin/mdnsd.bin
oracrs 11075602 1 1 11:12:05 - 1:58 /u02/oracle/12.1.0/grid/bin/gpnpd.bin
root 11206846 1 0 11:12:16 - 0:00 /bin/sh /u02/oracle/12.1.0/grid/bin/ocssd
root 11534474 1 12 12:36:00 - 15:15 /u02/oracle/12.1.0/grid/bin/ologgerd -M -d /u02/oracle/12.1.0/grid/crf/db/server1
oracrs 11600026 1 0 11:15:01 - 0:28 /u02/oracle/12.1.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracrs 11730952 11206846 2 11:12:16 - 7:02 /u02/oracle/12.1.0/grid/bin/ocssd.bin
root 11993100 1 3 11:11:48 - 4:30 /u02/oracle/12.1.0/grid/bin/orarootagent.bin
root 12058762 1 1 11:12:15 - 2:40 /u02/oracle/12.1.0/grid/bin/cssdagent
root 12189862 1 2 11:12:15 - 2:36 /u02/oracle/12.1.0/grid/bin/cssdmonitor
root 12255442 1 19 11:14:17 - 28:39 /u02/oracle/12.1.0/grid/bin/osysmond.bin
oracrs 12714060 1 4 11:12:04 - 7:52 /u02/oracle/12.1.0/grid/bin/evmd.bin
root 13435058 1 28 11:14:35 - 34:22 /u02/oracle/12.1.0/grid/bin/orarootagent.bin
oracrs 13500578 1 21 11:14:34 - 11:59 /u02/oracle/12.1.0/grid/bin/oraagent.bin
fsnft 13828286 1 5 11:15:09 - 7:31 /u02/oracle/12.1.0/grid/bin/oraagent.bin
oracrs 26083376 1 0 12:32:54 - 0:11 /u02/oracle/12.1.0/grid/bin/tnslsnr LISTENER_SCAN2 -inherit
fuo44ppt 27984070 1 5 11:36:17 - 7:39 /u02/oracle/12.1.0/grid/bin/oraagent.bin
fug44ppt 39583962 1 3 11:39:05 - 7:18 /u02/oracle/12.1.0/grid/bin/oraagent.bin
oracrs 43909340 1 2 12:32:58 - 2:14 /u02/oracle/12.1.0/grid/bin/scriptagent.bin
fub44ppt 51249192 1 5 11:40:43 - 7:32 /u02/oracle/12.1.0/grid/bin/oraagent.bin
oracrs 61014038 1 0 12:32:54 - 0:13 /u02/oracle/12.1.0/grid/bin/tnslsnr LISTENER_SCAN3 -inherit
oracrs 34144900 1 0 12:32:48 - 0:01 /u02/oracle/12.1.0/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
oracrs 36438724 38666928 2 06:09:03 pts/0 0:00 grep d.bin
server1[/u02/oracle/12.1.
crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
crsctl status resource -t
crs_stat -t
server1[/u02/oracle/12.1.0/grid]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
diagsnap local_resource OFFLINE OFFLINE
ora.DATA1.dg ora....up.type ONLINE ONLINE server1
ora....FRA1.dg ora....up.type ONLINE ONLINE server1
ora....ATA1.dg ora....up.type ONLINE ONLINE server1
ora....FRS1.dg ora....up.type ONLINE ONLINE server1
ora....ATA1.dg ora....up.type ONLINE ONLINE server1
ora....FRS1.dg ora....up.type ONLINE ONLINE server1
ora....ER.lsnr ora....er.type ONLINE ONLINE server1
ora....N1.lsnr ora....er.type ONLINE ONLINE server2
ora....N2.lsnr ora....er.type ONLINE ONLINE server1
ora....N3.lsnr ora....er.type ONLINE ONLINE server1
ora.MGMTLSNR ora....nr.type ONLINE ONLINE server1
ora.VOTING.dg ora....up.type ONLINE ONLINE server1
ora.asm ora.asm.type ONLINE ONLINE server1
ora.cvu ora.cvu.type ONLINE ONLINE server1
ora.fsnft.db ora....se.type ONLINE ONLINE server2
ora....4ppt.db ora....se.type ONLINE ONLINE server2
ora....4ppt.db ora....se.type ONLINE ONLINE server1
ora....4ppt.db ora....se.type ONLINE ONLINE server1
ora.mgmtdb ora....db.type ONLINE ONLINE server1
ora....network ora....rk.type ONLINE ONLINE server1
ora.oc4j ora.oc4j.type ONLINE ONLINE server1
ora.ons ora.ons.type ONLINE ONLINE server1
ora....SM1.asm application ONLINE ONLINE server1
ora....ON.lsnr application ONLINE ONLINE server1
ora....ton.ons application ONLINE ONLINE server1
ora....ton.vip ora....t1.type ONLINE ONLINE server1
ora.scan1.vip ora....ip.type ONLINE ONLINE server2
ora.scan2.vip ora....ip.type ONLINE ONLINE server1
ora.scan3.vip ora....ip.type ONLINE ONLINE server1
ora....SM2.asm application ONLINE ONLINE server2
ora....ON.lsnr application ONLINE ONLINE server2
ora.server2.ons application ONLINE ONLINE server2
ora.server2.vip ora....t1.type ONLINE ONLINE server2
server1[/u02/oracle/12.1.0/grid]$
d) login on each database running on the server and check session count
select name,open_mode from v$database;
select count(1),inst_id from gv$session group by inst_id
alter system archive log current; --- switch log on database.
e) Take a backup of the grid home. Before taking the backup, check for older .trc and audit log files and delete any that are more than 3 months old (see the sketch below).
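A hedged sketch for locating candidates older than 90 days before removing them (the directories are examples only; always review the output before deleting anything):
find /u02/oracle/12.1.0/grid -name "*.trc" -mtime +90 -exec ls -l {} \;
find /u02/oracle/12.1.0/grid/rdbms/audit -name "*.aud" -mtime +90 -exec ls -l {} \;
# once the list is reviewed, re-run the same find commands with: -exec rm {} \;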
3) Validate the Grid Home inventory and upgrade the OPatch version to 12.1.0.1.7 or later if required.
<GRID_HOME>/OPatch/opatch lsinventory -detail -oh <GRID_HOME>
<GRID_HOME>/OPatch/opatch lsinventory
4) Check the OPatch version. For each GI home being patched, run the following command as the home owner:
$ <ORACLE_HOME>/OPatch/opatch version
server1[/u02/oracle/12.1.0/grid]$ OPatch/opatch version
OPatch Version: 12.2.0.1.8
OPatch succeeded.
server1[/u02/oracle/12.1.0/grid]$
The version output of the previous command should be 12.1.0.1.7 or later.
5) Unzip the downloaded Oct 2017 PSU as the Grid user.
$ cd <UNZIPPED_PATCH_LOCATION>
Check that the directory is empty.
$ ls
Unzip the patch as grid home owner.
$ unzip p26635815_121020_<platform>.zip
If required, change the permissions, e.g.:
chmod 775 p26635815_121020_AIX64-5L.zip
From root user
chown oracrs:dba p26635815_121020_AIX64-5L.zip
from oracrs
unzip p26635815_121020_AIX64-5L.zip
6) Check for conflicts in the 12.1 GI home (this is new in 12c). Also try to stop the clusterware on one node as a test, and always run ps -ef|grep d.bin to check whether any clusterware services are still running; ideally none should be.
a) This checks all available GRID_HOME and ORACLE_HOME installations. Run as the root user:
#GRID_HOME/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/26635815 -analyze
b) To verify the clusterware home only, run as the root user:
./opatchauto apply /staging/clusterware_12cpatch/26635815 -analyze -oh /u02/oracle/12.1.0/grid
uid=0(root) gid=0(system) groups=208(tivlogs)
[server1:root:/staging/clusterware_12cpatch:] /staging/clusterware_12cpatch/26635815
ksh: /staging/clusterware_12cpatch/26635815: cannot execute
[server1:root:/staging/clusterware_12cpatch:] ls -ltr /staging/clusterware_12cpatch/26635815
total 176
drwxr-xr-x 5 oracrs dba 256 Oct 3 12:01 26392164
drwxr-xr-x 5 oracrs dba 256 Oct 3 12:04 26392192
drwxrwxr-x 15 oracrs dba 4096 Oct 3 12:04 26713565
drwxr-xr-x 6 oracrs dba 256 Oct 3 12:04 21436941
drwxr-xr-x 2 oracrs dba 4096 Oct 3 12:04 automation
-rw-r--r-- 1 oracrs dba 0 Oct 3 12:05 README.txt
-rw-rw-r-- 1 oracrs dba 4987 Oct 3 14:36 bundle.xml
-rw-rw-r-- 1 oracrs dba 73591 Oct 16 16:13 README.html
[server1:root:/staging/clusterware_12cpatch:] cd /u02/oracle/12.1.0/grid/OPatch
[server1:root:/u02/oracle/12.1.0/grid/OPatch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[server1:root:/u02/oracle/12.1.0/grid/OPatch:] df -g .
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/u02 139.00 27.37 81% 766737 11% /u02
[server1:root:/u02/oracle/12.1.0/grid/OPatch:] ./opatchauto apply /staging/clusterware_12cpatch/26635815 -analyze -oh /u02/oracle/12.1.0/grid
OPatchauto session is initiated at Fri Dec 29 14:28:01 2017
System initialization log file is /u02/oracle/12.1.0/grid/cfgtoollogs/opatchautodb/systemconfig2017-12-29_02-28-17PM.log.
Session log file is /u02/oracle/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2017-12-29_02-29-58PM.log
The id for this session is 1GTW
Executing OPatch prereq operations to verify patch applicability on home /u02/oracle/12.1.0/grid
Patch applicablity verified successfully on home /u02/oracle/12.1.0/grid
Verifying patch inventory on home /u02/oracle/12.1.0/grid
Patch inventory verified successfully on home /u02/oracle/12.1.0/grid
OPatchAuto successful.
--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:server1
CRS Home:/u02/oracle/12.1.0/grid
==Following patches were SKIPPED:
Patch: /staging/clusterware_12cpatch/26635815/21436941
Reason: This patch is already been applied, so not going to apply again.
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /staging/clusterware_12cpatch/26635815/26392164
Log: /u02/oracle/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-12-29_14-30-24PM_1.log
Patch: /staging/clusterware_12cpatch/26635815/26392192
Log: /u02/oracle/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-12-29_14-30-24PM_1.log
Patch: /staging/clusterware_12cpatch/26635815/26713565
Log: /u02/oracle/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2017-12-29_14-30-24PM_1.log
OPatchauto session completed at Fri Dec 29 14:38:52 2017
Time taken to complete the session 10 minutes, 57 seconds
[server1:root:/u02/oracle/12.1.0/grid/OPatch:]
7) Apply the patch to the GI home using the opatchauto command. As the root user:
export PATH=$PATH:<GI_HOME>/OPatch
e.g.
export PATH=$PATH:/u02/oracle/12.1.0.2/grid/OPatch
ls -ltr /u02/oracle/12.1.0/grid/OPatch
export PATH=$PATH:/u02/oracle/12.1.0/grid/OPatch
ls -ltr /u02/oracle/12.1.0/grid
ls -ltr /patch/clusterware_12cpatch/26635815
# $GRID_HOME/OPatch/opatch auto <PATH_TO_PATCH_DIRECTORY> -oh <GRID_HOME>
echo $PATH
./opatchauto apply /staging/clusterware_12cpatch/26635815 -oh /u02/oracle/12.1.0/grid
8) Once the patch has completed on the first node, the clusterware and all Oracle database instances need to be started if they were not started automatically.
We will follow the same steps for the other node.
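Because instance auto-start was disabled and the instance stopped before patching (see the srvctl disable/stop commands above), re-enable and start it on the patched node; a sketch using the same example database rac and instance RAC1:
srvctl enable instance -d rac -i RAC1
srvctl start instance -d rac -i RAC1
srvctl status database -d rac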
Verification steps after clusterware and database instance startup
1) ps -ef|grep pmon – to see database instance status
2) ps -ef|grep tns – to see listener status
3) crsctl status resource -t – to see the resource status
4) crs_stat -t – to see the status
5) ps -ef | grep d.bin
6) In case of a server reboot, check the IPs on both nodes, as in some cases we have found duplicate IPs on the servers.
ifconfig -a
7) check alert log for ASM and all database instances.
8) check whether sessions are balanced
select name,open_mode from v$database;
select count(1),inst_id from gv$session group by inst_id;
9) validate inventory after patch
<GRID_HOME>/OPatch/opatch lsinventory
10) Check for sufficient free space on the Grid Home and check Oracle Clusterware on both nodes.
crsctl query crs activeversion
crsctl query crs releaseversion
crsctl query crs softwareversion
=============startup and stop steps==================
srvctl stop instance -d <database name> -i <database instance> -o immediate
From Root User
cd <grid_home>/bin
crsctl stop crs
startup process
From Root User
cd <grid_home>/bin
crsctl start crs
From GRID HOME
srvctl start instance -d <database name> -i <database instance>
Example: identifying and unmounting ACFS file systems (crsctl and acfsutil output follows):
node1[/home/oracrs]$
crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
CANONICAL_VOLUME_DEVICE=/dev/asm/test-7
VOLUME_DEVICE=/dev/asm/test-7
CANONICAL_VOLUME_DEVICE=/dev/asm/test-7
VOLUME_DEVICE=/dev/asm/test-7
CANONICAL_VOLUME_DEVICE=/dev/asm/testdata-7
VOLUME_DEVICE=/dev/asm/testdata-7
CANONICAL_VOLUME_DEVICE=/dev/asm/testdata-7
VOLUME_DEVICE=/dev/asm/testdata-7
CANONICAL_VOLUME_DEVICE=/dev/asm/testnew-7
VOLUME_DEVICE=/dev/asm/testnew-7
CANONICAL_VOLUME_DEVICE=/dev/asm/testnew-7
[node1:root:/u02/oracle/12.1.0.2/grid/bin:]
/sbin/acfsutil registry
Mount Object:
Device:
/dev/asm/test-7
Mount Point:
/ppt/test
Disk Group: TEST_DATA
Volume:TEST
Options: none
Nodes: all
Mount Object:
Device:
/dev/asm/testdata-7
Mount Point:
/ppt/testdata
Disk Group:TEST1_DATA
Volume: TEST1DATA
Options: none
Nodes: all
Mount Object:
Device:
/dev/asm/testnew-7
Mount Point:
/ppt/testnew
Disk Group: test_DATA
Volume: TESTNEW
Options: none
Nodes: all
Mount Object:
Device:
/dev/asm/testprod-7
Mount Point:
/ppt/testprod
Disk Group: TEST1_DATA
Volume: TESTPROD
Options: none
Nodes: all
[node1:root:/u02/oracle/12.1.0.2/grid/bin:]
/usr/sbin/umount /dev/asm/testdata-7
/usr/sbin/umount /dev/asm/testnew-7
/usr/sbin/umount /dev/asm/testprod-7
ACFS will be mounted automatically after the clusterware comes up. If it is not, we need to mount it manually, for example:
/usr/sbin/mount /dev/asm/alprod-7 <mount-point>
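To confirm that nothing is still mounted before the clusterware is stopped, the same check used in the unmount section above can be repeated:
/sbin/acfsutil info fs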
Advanced Options for OPatch
Here are some of the advanced options of the OPatch utility.
UpdateRemoteNodes
It is used to propagate/remove files/directories to/from remote nodes using files under ORACLE_HOME/.patch_storage/<ID>/rac/*.
The directories listed in copy_dirs.txt will be copied to remote nodes.
The files listed in copy_files.txt will be copied to remote nodes.
The directories listed in remove_dirs.txt will be deleted from remote nodes.
The files listed in remove_files.txt will be deleted from remote nodes.
This is used where RAC setup is present.
Cleanup
It is used to clean up ‘restore.sh, make.txt’ files and ‘rac, scratch, backup’ directories in the ORACLE_HOME/.patch_storage directory. If -ps option is used, then it cleans the above specified areas only for that patch, else for all patches under ORACLE_HOME/.patch_storage. You will be still able to rollback patches after this cleanup.
Example: opatch util cleanup -ps 6121183_Jun_21_2007_04_19_42
CopyListedFiles
It is used to copy all files listed in ORACLE_HOME/.patch_storage/<ID>/rac/copy_files.txt
to remote nodes. If -fp option is used, then one can specify the path
of the file containing the list of files to be copied. The files
mentioned in this file will be copied to the remote nodes.
Example: opatch util copylistedfiles -fp a -remote_nodes ceintcb-a5
This is used where RAC setup is present.
CopyListedFilesTest
It is used to copy a single file to remote nodes. The usage remains the same as CopyListedFiles.
Example: opatch util copylistedfilestest -fp /home/oracle/a -remote_nodes ceintcb-a5
This is used where RAC setup is present.
CopyListedDirs
It is used to recursively copy all directories listed in ORACLE_HOME/.patch_storage/<ID>/rac/copy_dirs.txt to remote nodes. If -dp option is used, then one can specify the path of the file containing the list of directories to be copied. The directories mentioned in this file will be copied to the remote nodes.
This is used where RAC setup is present.
CopyListedDirsTest
It is used to copy a single directory to remote nodes. The usage remains the same as CopyListedDirs.
This is used where RAC setup is present.
RemoveListedFiles
It is used to remove files listed in ORACLE_HOME/.patch_storage/<ID>/rac/remove_files.txt on remote nodes. If -fr option is used, then one can specify the path of the file containing the list of files to be removed. The files mentioned in this file will be removed from the remote nodes.
This is used where RAC setup is present.
RemoveListedFilesTest
It is used to remove a single file from remote nodes. The usage remains the same as RemoveListedFiles.
This is used where RAC setup is present.
RemoveListedDirs
It is used to recursively remove directories listed in ORACLE_HOME/.patch_storage/<ID>/rac/remove_dirs.txt from remote nodes. If -dr option is used, then one can specify the path of the file containing the list of directories to be removed. The directories mentioned in this file will be removed from the remote nodes.
This is used where RAC setup is present.
RemoveListedDirsTest
It is used to remove a single directory from remote nodes. The usage remains the same as RemoveListedDirs.
This is used where RAC setup is present.
RunLocalMake
It is used to invoke re-link on the local node. The make commands are stored in ORACLE_HOME/.patch_storage/<ID>/make.txt. You need to use the -ps option to specify the Patch ID with timestamp. A directory by this name will be present under ORACLE_HOME/.patch_storage. The make.txt file present under ORACLE_HOME/.patch_storage/<Patch ID with timestamp>/ will be used to perform the local make operation. This command cannot be run if you have already run Cleanup as it would have removed these make.txt files.
Example: opatch util runlocalmake -ps 6121250_Jun_21_2007_04_16_11
RunRemoteMake
It is used to invoke re-link on remote nodes. The make commands are stored in
ORACLE_HOME/.patch_storage/<ID>/rac/makes_cmd.txt. The usage remains the same as RunLocalMake.
This is used where RAC setup is present.
RunAnyCommand
It is used to run any command on remote nodes. The command should be specified using the -cmd option.
Example: opatch util runanycommand -remote_nodes ceintcb-a5 -cmd ls
This is used where RAC setup is present.
LoadXML
It is used to check the validity of an XML file. The -xmlInput option can be used to specify the path of the xml file.
@ Support can use this utility to verify the integrity of XML files contained in the local and central inventory.
Example: opatch util loadxml -xmlInput $ORACLE_HOME/inventory/ContentsXML/comps.xml
Verify
It is used to run the patch verification process to ensure that the patch was applied to the ORACLE_HOME. It uses the defined ORACLE_HOME and the given patch location via -ph to run the check.
@ Support can use this utility to re-run the OPatch verification stage, if required for patch diagnosis. The patch staging area has to be present or created.
Example: opatch util verify -ph ~/6646853/6121183
Troubleshooting
Sometimes the inventory gets corrupted because of some issue. In that case you need to repair the inventory. Following are the two methods which I know can be used to repair the inventory.
1) Development has released a utility (checkinv) which can be used to repair the inventory if it becomes corrupted for some reason.
Please note that this version of checkinv works only for the 9.2 and 10.1 releases of Oracle Database. It does not work for the 10.2 releases of Oracle Database.
You can download the checkinv utility from Patch 4121946.
To clean up the components, you should determine the unwanted or dangling components, then specify those using the “-remcomp” option to clean up the component list.
The utility detects and asks the user’s permission to repair a bad inventory. If you choose to repair the inventory, the utility will first back up the files before any modification.
However, it is better that you back up the inventory yourself before running the tool.
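For example, the inventory can be backed up the same way as earlier in this note (the paths are illustrative):
cd /u02/oracle/oraInventory
tar -cvf /backup/oraInventory_before_repair.tar .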
Error description
=============
buxton[/u02/oracle/12.1.0.2/grid/OPatch]# ./opatchauto apply /staging/clusterware_12cpatch/26635815 -oh /u02/oracle/12.1.0.2/grid
OPatchauto session is initiated at Wed Feb 21 06:40:59 2018
System initialization log file is /u02/oracle/12.1.0.2/grid/cfgtoollogs/opatchautodb/systemconfig2018-02-21_06-41-04AM.log.
Session log file is /u02/oracle/12.1.0.2/grid/cfgtoollogs/opatchauto/opatchauto2018-02-21_06-41-43AM.log
The id for this session is 3D7C
Executing OPatch prereq operations to verify patch applicability on home /u02/oracle/12.1.0.2/grid
Patch applicablity verified successfully on home /u02/oracle/12.1.0.2/grid
Verifying patch inventory on home /u02/oracle/12.1.0.2/grid
Patch inventory verified successfully on home /u02/oracle/12.1.0.2/grid
Bringing down CRS service on home /u02/oracle/12.1.0.2/grid
Prepatch operation log file location: /u02/oracle/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_buxton_2018-02-21_06-47-45AM.log
CRS service brought down successfully on home /u02/oracle/12.1.0.2/grid
Start applying binary patch on home /u02/oracle/12.1.0.2/grid
Successfully executed command: /usr/sbin/slibclean
Binary patch applied successfully on home /u02/oracle/12.1.0.2/grid
Starting CRS service on home /u02/oracle/12.1.0.2/grid
Postpatch operation log file location: /u02/oracle/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_buxton_2018-02-21_07-06-54AM.log
Failed to start CRS service on home /u02/oracle/12.1.0.2/grid
Execution of [GIStartupAction] patch action failed, check log for more details. Failures:
Patch Target : buxton->/u02/oracle/12.1.0.2/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u02/oracle/12.1.0.2/grid, host: buxton.
Command failed: /u02/oracle/12.1.0.2/grid/perl/bin/perl -I/u02/oracle/12.1.0.2/grid/perl/lib -I/u02/oracle/12.1.0.2/grid/OPatch/auto/dbtmp/bootstrap_buxton/patchwork/crs/install /u02/oracle/12.1.0.2/grid/OPatch/auto/dbtmp/bootstrap_buxton/patchwork/crs/install/rootcrs.pl -postpatch
Command failure output:
Using configuration parameter file: /u02/oracle/12.1.0.2/grid/OPatch/auto/dbtmp/bootstrap_buxton/patchwork/crs/install/crsconfig_params
2018/02/21 07:07:12 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/02/21 07:08:39 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/02/21 07:08:43 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
2018/02/21 07:09:31 CLSRSC-400: A system reboot is required to continue installing.
After fixing the cause of failure Run opatchauto resume with session id "3D7C"
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.
OPatchauto session completed at Wed Feb 21 07:09:36 2018
Time taken to complete the session 28 minutes, 38 seconds
opatchauto failed with error code 42
You have mail in /usr/spool/mail/root
buxton[/u02/oracle/12.1.0.2/grid/OPatch]#
buxton[/u02/oracle/12.1.0.2/grid/OPatch]#
buxton[/u02/oracle/12.1.0.2/grid/OPatch]#
buxton[/u02/oracle/12.1.0.2/grid/OPatch]# id
uid=0(root) gid=0(system) groups=208(tivlogs)
buxton[/u02/oracle/12.1.0.2/grid/OPatch]# /u02/oracle/12.1.0.2/grid/crs/install/rootcrs.sh -patch
Using configuration parameter file: /u02/oracle/12.1.0.2/grid/crs/install/crsconfig_params
2018/02/21 07:15:42 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
Workaround
step 1)
node1[/u02/oracle/12.1.0.2/grid/OPatch]# /u02/oracle/12.1.0.2/grid/crs/install/rootcrs.sh -patch
Using configuration parameter file: /u02/oracle/12.1.0.2/grid/crs/install/crsconfig_params
2018/02/21 07:15:42 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/02/21 07:15:42 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/02/21 07:15:46 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
2018/02/21 07:16:28 CLSRSC-400: A system reboot is required to continue installing.
Died at /u02/oracle/12.1.0.2/grid/crs/install/crspatch.pm line 371.
The command '/u02/oracle/12.1.0.2/grid/perl/bin/perl -I/u02/oracle/12.1.0.2/grid/perl/lib -I/u02/oracle/12.1.0.2/grid/crs/install /u02/oracle/12.1.0.2/grid/crs/install/rootcrs.pl -patch' execution failed
node1[/u02/oracle/12.1.0.2/grid/OPatch]#
step 2)
Reboot the node.
step 3)
Restart the clusterware manually if it is not started automatically.
If that does not work, run the following as root from the grid home's bin directory:
./clscfg -localpatch
node1[/u02/oracle/12.1.0.2/grid]# cd bin
node2[/u02/oracle/12.1.0.2/grid/bin]# ./clscfg -localpatch
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
node2[/u02/oracle/12.1.0.2/grid/bin]# id
uid=0(root) gid=0(system) groups=208(tivlogs)
node2[/u02/oracle/12.1.0.2/grid/bin]#
step 4)
./rootcrs.pl -lock
step 5)
Kill the following process:
ps -ef|grep ora.gpnpd
kill -9 <process id>
./crsctl start crs
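Once CRS is back up, a quick re-check using the same commands from the verification steps above, e.g.:
./crsctl check crs
ps -ef | grep d.bin
crsctl status resource -t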