12c Oracle Clusterware OCT 2019 PSU Patching / Patch 30116802: GI OCT 2019 RELEASE UPDATE 12.2.0.1.191015: Grid Infrastructure Patching Step by Step
Patch details
==============
Patch 30116802: GI OCT 2019 RELEASE UPDATE 12.2.0.1.191015
This patch can be applied to both the Grid home and the RAC database home.
Prerequisite
============
1) Download Patch 30116802: GI OCT 2019 RELEASE UPDATE 12.2.0.1.191015 from Oracle Support
Five sub-patches apply to the Grid home and two to the Oracle (RDBMS) home:
GI
----
BUG 30122814 - OCW OCT 2019
BUG 30122828 - ACFS OCT 2019
BUG 30093408 - TOMCAT
BUG 26839277 - DBWLM
BUG 30138470 - DATABASE OCT 2019
RDBMS
-------------
BUG 30122814 - OCW OCT 2019
BUG 30138470 - DATABASE OCT 2019
To verify an applied patch:
opatch lsinventory|grep -i 30122814
opatch lsinventory|grep -i 30122828
opatch lsinventory|grep -i 30093408
opatch lsinventory|grep -i 26839277
opatch lsinventory|grep -i 30138470
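To check all five GI constituents in one pass, a one-line sketch (run from the Grid home OPatch directory as the Grid owner):
./opatch lsinventory | grep -i -E '30122814|30122828|30093408|26839277|30138470'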
2) Check free space in the Grid home file system (approximately 22 GB free) and in the /tmp file system.
Check for ACFS file systems on the node; if any are available, unmount them before stopping or patching the clusterware.
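For example, on AIX these checks can be done as below (paths are from this environment; adjust as needed):
df -g /u02 /tmp                # free space in GB for the Grid home and /tmp file systems
/sbin/acfsutil registry        # list registered ACFS file systems
mount | grep -i asm            # show currently mounted ACFS volumes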
3) Check clusterware and database component status
ps -ef|grep pmon
ps -ef|grep tns
crsctl check crs
crsctl status resource -t
crsctl query crs activeversion
crsctl query crs releaseversion
crsctl query crs softwareversion
./crsctl check crs
ps -ef|grep d.bin
4) Check the OPatch version. The latest OPatch is recommended; download it from Oracle Support (patch 6880880) and unzip it in the Grid home after renaming the old OPatch folder.
We must use OPatch version 12.2.0.1.17 or later to apply this patch, on all platforms.
5) Run opatch lsinventory from the Grid home to check the inventory for consistency and corruption.
6) Unzip the downloaded patch as the Grid user.
7) Run the analyze command using opatchauto as the root user after setting the proper PATH; take care to supply the Grid home path.
8) There is a bug in this patch, so follow the instructions in the Doc ID below:
opatchauto/opatch apply failing with CLSRSC-46: Error: '<GRID_HOME>/suptools/tfa/release/tfa_home/jlib/jewt4.jar' does not exist (Doc ID 2409411.1)
9) Apply the patch with opatchauto on the Grid home.
10) Verify the patch details.
Example:
Patch location
-----------
Node1[/oswatcher]$ cd 2020_patch
Node1[/oswatcher/2020_patch]$ ls -ltr
total 3613688
-rw------- 1 users staff 170920687 Feb 10 16:03 p6880880_122010_AIX64-5L.zip
-rw------- 1 users staff 1679281294 Feb 10 16:09 p30116802_122010_AIX64-5L.zip
Node1[/oswatcher/2020_patch]$
1) Change the patch ownership if it is not owned by the Grid owner, unzip the patch, and set 775 permissions (as a precaution):
chown -R oracrs:oinstall 2020_patch
chown oracrs:oinstall p30116802_122010_AIX64-5L.zip
chmod -R 775 30116802
2) Check the OPatch version as the Grid owner:
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ export ORACLE_HOME=/u02/oracle/12.2.0.1/grid/
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ export ORACLE_SID=+ASM1
$ unzip <OPATCH-ZIP> -d <ORACLE_HOME>
$ <ORACLE_HOME>/OPatch/opatch version
If it is not as per the README, upgrade it as below.
Old version:
node1[/u02/oracle/12.2.0.1/grid/OPatch]$
node1[/u02/oracle/12.2.0.1/grid/OPatch]$
node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch version
OPatch Version: 12.2.0.1.14
To upgrade to the new version:
a) Rename the old OPatch directory as the root user, as below (the attempt as the Grid user fails on permissions):
node1[/u02/oracle/12.2.0.1/grid]$ mv OPatch OPatch_old
mv: 0653-401 Cannot rename OPatch to OPatch_old:
The file access permissions do not allow the specified action.
node1[/u02/oracle/12.2.0.1/grid]$ exit
[node1:root:/etc:] cd /u02/oracle/12.2.0.1/grid
[node1:root:/u02/oracle/12.2.0.1/grid:] mv OPatch OPatch_old
[node1:root:/u02/oracle/12.2.0.1/grid:]
b) Copy the newly downloaded OPatch zip into the Grid home:
[node1:root:/u02/oracle/12.2.0.1/grid:] ls -ltr /oswatcher/2020_patch/
total 3613696
drwxr-x--- 8 oracrs oinstall 4096 Sep 27 2019 30116802
-rwxrwxr-x 1 oracrs oinstall 170920687 Apr 14 12:23 p6880880_122010_AIX64-5L.zip
-rwxrwxr-x 1 oracrs oinstall 1679281294 Apr 14 12:25 p30116802_122010_AIX64-5L.zip
[node1:root:/u02/oracle/12.2.0.1/grid:] cp /oswatcher/2020_patch/p6880880_122010_AIX64-5L.zip /u02/oracle/12.2.0.1/grid/
c) Unzip it as the root user:
[node1:root:/u02/oracle/12.2.0.1/grid:] unzip p6880880_122010_AIX64-5L.zip
d) Change the ownership of OPatch:
chown -R oracrs:oinstall OPatch
e) Log in as the Grid user and verify the OPatch version:
sudo su - oracrs
node1[/u02/oracle/12.2.0.1/grid/OPatch]$
node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch version
OPatch Version: 12.2.0.1.19
OPatch succeeded.
node1[/u02/oracle/12.2.0.1/grid/OPatch]$
The version output of the previous command should be 12.2.0.1.17 or later.
3) Validation of Oracle Inventory
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.14
Copyright (c) 2020, Oracle Corporation. All rights reserved.
Oracle Home : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
from : /u02/oracle/12.2.0.1/grid//oraInst.loc
OPatch version : 12.2.0.1.14
OUI version : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-12_15-07-11PM_1.log
Lsinventory Output file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/lsinv/lsinventory2020-02-12_15-07-11PM.txt
or
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch lsinventory -detail -oh /u02/oracle/12.2.0.1/grid
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation. All rights reserved.
Oracle Home : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
from : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.19
OUI version : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-22-28AM_1.log
Lsinventory Output file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/lsinv/lsinventory2020-02-15_04-22-28AM.txt
--------------------------------------------------------------------------------
Local Machine Information::
4) Run OPatch Conflict Check
export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
Determine whether any currently installed one-off patches conflict with this patch 30116802 as follows:
For the Grid Infrastructure home, as the Grid owner:
cd /u02/oracle/12.2.0.1/grid/OPatch
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/30138470
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/30122814
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/30122828
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/26839277
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/30093408
Example:
heanor[/u02/oracle/12.2.0.1/grid/OPatch]$
heanor[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/30138470
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
from : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.19
OUI version : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-26-24AM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.
heanor[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/30122814
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
from : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.19
OUI version : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-26-36AM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.
heanor[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/30122828
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
from : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.19
OUI version : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-26-47AM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.
heanor[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/26839277
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
from : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.19
OUI version : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-26-57AM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.
heanor[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/30093408
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
from : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.19
OUI version : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-27-06AM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.
heanor[/u02/oracle/12.2.0.1/grid/OPatch]$
5) Run the OPatch SystemSpace check as the Grid owner.
Check if enough free space is available on the ORACLE_HOME filesystem for the patches to be applied as given below:
For Grid Infrastructure Home, as home user:
Create file /tmp/patch_list_gihome.txt with the following content:
% cat /tmp/patch_list_gihome.txt
<UNZIPPED_PATCH_LOCATION>/30116802/30138470
<UNZIPPED_PATCH_LOCATION>/30116802/30122814
<UNZIPPED_PATCH_LOCATION>/30116802/30122828
<UNZIPPED_PATCH_LOCATION>/30116802/26839277
<UNZIPPED_PATCH_LOCATION>/30116802/30093408
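For example, the file can be created in one step with a here-document, using the unzipped patch location from this environment:
cat > /tmp/patch_list_gihome.txt <<EOF
/oswatcher/2020_patch/30116802/30138470
/oswatcher/2020_patch/30116802/30122814
/oswatcher/2020_patch/30116802/30122828
/oswatcher/2020_patch/30116802/26839277
/oswatcher/2020_patch/30116802/30093408
EOF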
Run the opatch command to check if enough free space is available in the Grid Infrastructure Home:
% $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
Example:
heanor[/u02/oracle/12.2.0.1/grid/OPatch]$ ls -ltr /tmp/patch_list_gihome.txt
-rw-r--r-- 1 oracrs oinstall 200 Feb 14 16:58 /tmp/patch_list_gihome.txt
heanor[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
from : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.19
OUI version : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-30-05AM_1.log
Invoking prereq "checksystemspace"
Prereq "checkSystemSpace" passed.
OPatch succeeded.
heanor[/u02/oracle/12.2.0.1/grid/OPatch]$
6) Due to a bug (the jdev-rt.jar file is not present), patching will throw an error. As a precheck, follow the instructions and remove the entries for the files jdev-rt.jar and jewt4.jar from crsconfig_fileperms and crsconfig_fileperms.sbs.
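For reference, a minimal non-interactive sketch of that edit (run as root; <node_name> is the per-node directory under crs/utl, e.g. hykeham in the transcript below):
cd /u02/oracle/12.2.0.1/grid/crs/utl/<node_name>
cp -p crsconfig_fileperms crsconfig_fileperms.backup
# reading from the backup and redirecting truncates the existing file in place, preserving its ownership
grep -v -E 'jdev-rt.jar|jewt4.jar' crsconfig_fileperms.backup > crsconfig_fileperms
cd /u02/oracle/12.2.0.1/grid/crs/sbs
cp -p crsconfig_fileperms.sbs crsconfig_fileperms.sbs.backup
grep -v -E 'jdev-rt.jar|jewt4.jar' crsconfig_fileperms.sbs.backup > crsconfig_fileperms.sbs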
Error details
----------
CRS-2673: Attempting to stop 'ora.cssd' on 'heanor'
CRS-2677: Stop of 'ora.cssd' on 'heanor' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'heanor'
CRS-2677: Stop of 'ora.gipcd' on 'heanor' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'heanor' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2020/02/15 06:18:30 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.
2020/02/15 06:18:51 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.
2020/02/15 06:18:52 CLSRSC-46: Error: '/u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar' does not exist
2020/02/15 06:18:53 CLSRSC-152: Could not set ownership on '/u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar'
After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.
OPatchauto session completed at Sat Feb 15 06:18:58 2020
Time taken to complete the session 5 minutes, 40 seconds
opatchauto failed with error code 42
[heanor:root:/u02/oracle/12.2.0.1/grid/OPatch:] ps -ef|grep pmon
[heanor:root:/u02/oracle/12.2.0.1/grid/OPatch:]
Shell will time out in 60 seconds.
Reason for the issue
----------
[heanor:root:/u02/oracle/12.2.0.1/grid/OPatch:]
Shell will time out in 60 seconds.
ksh[2]: Timed out waiting for input.
heanor[/home/eibm994]$ sudo su -
Password:
[YOU HAVE NEW MAIL]
[heanor:root:/home/root:] ls -ltr /u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar
ls: 0653-341 The file /u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar does not exist.
[heanor:root:/home/root:] ls -ltr /u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar
ls: 0653-341 The file /u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar does not exist.
[heanor:root:/home/root:]
Solution
-----------
[hykeham:root:/u02/oracle/12.2.0.1/grid/bin:] cd /u02/oracle/12.2.0.1/grid/crs/utl/heanor
ksh: /u02/oracle/12.2.0.1/grid/crs/utl/heanor: not found.
[hykeham:root:/u02/oracle/12.2.0.1/grid/bin:] cd /u02/oracle/12.2.0.1/grid/crs/utl/hykeham
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/utl/hykeham:] ls -ltr crsconfig_fileperms
-rw-r----- 1 oracrs oinstall 25199 May 16 2019 crsconfig_fileperms
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/utl/hykeham:] cp -p crsconfig_fileperms crsconfig_fileperms.backup
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/utl/hykeham:] vi crsconfig_fileperms
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/utl/hykeham:]
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/utl/hykeham:]
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/utl/hykeham:] cat crsconfig_fileperms|grep -i jewt4.jar
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/utl/hykeham:] cat crsconfig_fileperms|grep -i jdev-rt.jar
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/utl/hykeham:] cd /u02/oracle/12.2.0.1/grid/crs/sbs
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] ls -ltr crsconfig_fileperms.sbs
-rw-r----- 1 oracrs oinstall 26177 Oct 17 2018 crsconfig_fileperms.sbs
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] cp -p crsconfig_fileperms.sbs crsconfig_fileperms.sbs.backup
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] vi crsconfig_fileperms.sbs
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/sbs:]
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/sbs:]
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] cat crsconfig_fileperms.sbs|grep -i jewt4.jar
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] cat crsconfig_fileperms
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] cat crsconfig_fileperms.sbs|grep -i jdev-rt.jar
[hykeham:root:/u02/oracle/12.2.0.1/grid/crs/sbs:]
7) One-off patch conflict detection and resolution, from the root user:
/u02/oracle/12.2.0.1/grid/OPatch/opatchauto apply /oswatcher/2020_patch/30116802 -analyze -oh /u02/oracle/12.2.0.1/grid
8) Stop all RAC database instances and the clusterware from one node, and take a backup of the Grid home and inventory from that node.
srvctl stop instance -d testdb -i testdb1 -o immediate
srvctl stop instance -d test1db -i test1db1 -o immediate
srvctl stop instance -d test2db -i test2db1 -o immediate
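To confirm the instances are down before proceeding, a quick status sketch (same example database names as above):
srvctl status database -d testdb
srvctl status database -d test1db
srvctl status database -d test2db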
Check the status of CRS:
./crsctl check crs
ps -ef|grep d.bin
Before stopping the cluster, check whether any ACFS file systems are mounted on the server and unmount them; otherwise the clusterware will not stop.
------------Unmount ACFS file systems----------------------------
From the root user:
/dev/varbes_lv 2.00 1.29 36% 8196 3% /var/opt/BESClient
/dev/asm/alprod-7 2.00 1.66 17% 708512 17% /ppt/alprod
/dev/asm/gfprod-7 2.00 1.77 12% 477200 12% /ppt/gfprod
[heanor:root:/u02/oracle/12.2.0.1/grid/bin:] ./crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/alprod-7
VOLUME_DEVICE=/dev/asm/alprod-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/alprod-7
VOLUME_DEVICE=/dev/asm/alprod-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfdata-7
VOLUME_DEVICE=/dev/asm/gfdata-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfdata-7
VOLUME_DEVICE=/dev/asm/gfdata-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfnew-7
VOLUME_DEVICE=/dev/asm/gfnew-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfnew-7
VOLUME_DEVICE=/dev/asm/gfnew-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfprod-7
VOLUME_DEVICE=/dev/asm/gfprod-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfprod-7
VOLUME_DEVICE=/dev/asm/gfprod-7
[heanor:root:/u02/oracle/12.2.0.1/grid/bin:] /sbin/acfsutil registry
Mount Object:
Device: /dev/asm/alprod-7
Mount Point: /ppt/alprod
Disk Group: OPS_DATA
Volume: ALPROD
Options: none
Nodes: all
Accelerator Volumes:
Mount Object:
Device: /dev/asm/gfdata-7
Mount Point: /ppt/gfdata
Disk Group: OPS_DATA
Volume: GFDATA
Options: none
Nodes: all
Accelerator Volumes:
Mount Object:
Device: /dev/asm/gfnew-7
Mount Point: /ppt/gfnew
Disk Group: OPS_DATA
Volume: GFNEW
Options: none
Nodes: all
Accelerator Volumes:
Mount Object:
Device: /dev/asm/gfprod-7
Mount Point: /ppt/gfprod
Disk Group: OPS_DATA
Volume: GFPROD
Options: none
Nodes: all
Accelerator Volumes:
[heanor:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/alprod-7
[heanor:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfnew-7
umount: 0506-347 Cannot find anything to unmount.
[heanor:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfdata-7
umount: 0506-347 Cannot find anything to unmount.
[heanor:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfprod-7
[heanor:root:/u02/oracle/12.2.0.1/grid/bin:]
[heanor:root:/u02/oracle/12.2.0.1/grid/bin:] df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/bos_hd4 2.00 1.54 24% 20747 6% /
/dev/bos_hd2 19.12 4.29 78% 82348 8% /usr
/dev/bos_hd9var 1.00 0.43 57% 18985 16% /var
/dev/hd3 5.00 4.93 2% 392 1% /tmp
/dev/hd1 3.00 1.11 64% 10839 4% /home
/proc - - - - - /proc
/dev/bos_hd10opt 1.25 0.24 82% 49382 45% /opt
/dev/hd11admin 0.25 0.25 1% 5 1% /admin
/dev/besclientlv 0.25 0.15 40% 21 1% /opt/BESClient
/dev/lv_doonce 0.25 0.07 71% 450 3% /opt/DoOnceAIX
/dev/itm_lv 2.62 1.13 57% 18522 7% /opt/IBM/ITM
/dev/scm_lv 2.00 1.90 5% 846 1% /opt/IBM/SCM
/dev/tiv_lv 1.00 0.97 4% 110 1% /opt/Tivoli
/dev/best1_lv 3.00 2.42 20% 169 1% /opt/best1
/dev/connectlv 2.00 1.95 3% 950 1% /opt/connect
/dev/ctm700lv 3.00 2.11 30% 5798 2% /opt/ctm700
/dev/patrol_lv 2.00 1.14 43% 4197 2% /opt/patrol
/dev/tivoli_lv 0.25 0.20 20% 13207 22% /opt/tivoli/cit
/dev/oswatchlv 98.00 45.94 54% 8789 1% /oswatcher
/dev/staginglv 105.00 14.80 86% 33423 1% /staging
/dev/sampptu01lv 35.00 10.00 72% 93706 4% /u01
/dev/sampptu02lv 145.00 43.13 71% 318269 3% /u02
/dev/sampptu03lv 10.00 8.57 15% 29304 2% /u03
/dev/fslv00 1.00 0.99 1% 79 1% /var/adm/perfmgr
/dev/livedump 0.25 0.25 1% 4 1% /var/adm/ras/livedump
/dev/lv_auditlog 2.00 0.66 67% 2129 2% /var/log/eprise
/dev/varbes_lv 2.00 1.29 36% 8196 3% /var/opt/BESClient
./crsctl stop crs
Back up the Grid home (run the tar from within the Grid home directory):
tar -cvf /oswatcher/grid_home_database.tar .
du -sg /oswatcher/grid_home_database.tar
Back up the inventory:
cd /u02/oracle/oraInventory
tar -cvf /staging/grid_home_inventory.tar .
du -hP /staging/grid_home_inventory.tar
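Optionally, verify that the backup archives are readable before proceeding:
tar -tvf /oswatcher/grid_home_database.tar | head
tar -tvf /staging/grid_home_inventory.tar | head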
9) Start the clusterware on node 1; it is not required to start the database instances on node 1.
From the root user:
./crsctl start crs
Note: opatchauto will stop the clusterware automatically while patching.
The clusterware should be up and running; the database instances are not required to be up, since we are applying to the Grid home only.
10) Unmount the ACFS file systems again and check the file systems:
/dev/asm/alprod-7 2.00 1.66 17% 708512 17% /ppt/alprod
/dev/asm/gfnew-7 2.00 1.80 10% 411816 10% /ppt/gfnew
/dev/asm/gfprod-7 2.00 1.77 12% 477200 12% /ppt/gfprod
/dev/asm/gfdata-7 10.00 8.17 19% 3838312 19% /ppt/gfdata
[hykeham:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/alprod-7
[hykeham:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfnew-7
[hykeham:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfdata-7
[hykeham:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfprod-7
[hykeham:root:/u02/oracle/12.2.0.1/grid/bin:] df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/bos_hd4 3.00 2.55 15% 20373 4% /
/dev/bos_hd2 6.50 1.89 71% 80046 15% /usr
/dev/bos_hd9var 1.00 0.46 55% 17473 14% /var
/dev/hd3 5.00 4.93 2% 400 1% /tmp
/dev/hd1 4.50 3.68 19% 1404 1% /home
/proc - - - - - /proc
/dev/bos_hd10opt 1.25 0.24 82% 49395 45% /opt
/dev/hd11admin 0.25 0.25 1% 5 1% /admin
/dev/besclientlv 0.25 0.15 40% 21 1% /opt/BESClient
/dev/lv_doonce 0.25 0.07 71% 450 3% /opt/DoOnceAIX
/dev/itm_lv 2.00 0.82 59% 18193 9% /opt/IBM/ITM
/dev/scm_lv 2.00 1.90 5% 843 1% /opt/IBM/SCM
/dev/tiv_lv 1.00 1.00 1% 12 1% /opt/Tivoli
/dev/best1_lv 3.00 1.99 34% 242 1% /opt/best1
/dev/connectlv 2.00 1.94 3% 1848 1% /opt/connect
/dev/ctm700lv 3.00 2.12 30% 5749 2% /opt/ctm700
/dev/patrol_lv 2.00 1.14 43% 4196 2% /opt/patrol
/dev/tivoli_lv 0.25 0.20 20% 13142 22% /opt/tivoli/cit
/dev/oswatchlv 98.00 63.92 35% 8792 1% /oswatcher
/dev/staginglv 105.00 27.09 75% 37180 1% /staging
/dev/sampptu01lv 50.00 25.77 49% 88289 2% /u01
/dev/sampptu02lv 130.00 28.37 79% 370449 5% /u02
/dev/sampptu03lv 20.00 11.49 43% 31886 2% /u03
/dev/fslv00 1.00 0.99 1% 79 1% /var/adm/perfmgr
/dev/livedump 0.25 0.25 1% 4 1% /var/adm/ras/livedump
/dev/lv_auditlog 2.00 1.13 44% 2167 1% /var/log/eprise
/dev/varbes_lv 2.00 1.29 36% 8394 3% /var/opt/BESClient
[hykeham:root:/u02/oracle/12.2.0.1/grid/bin:]
11) Run the analyze command again:
export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
/u02/oracle/12.2.0.1/grid/OPatch/opatchauto apply /oswatcher/2020_patch/30116802 -analyze -oh /u02/oracle/12.2.0.1/grid
Example:
[hykeham:root:/u02/oracle/12.2.0.1/grid/bin:]
[hykeham:root:/u02/oracle/12.2.0.1/grid/bin:] cd /u02/oracle/12.2.0.1/grid/OPatch
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:] export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:] ./opatchauto apply /oswatcher/2020_patch/30116802 -analyze -oh /u02/oracle/12.2.0.1/grid
OPatchauto session is initiated at Sat Feb 15 11:18:28 2020
System initialization log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-15_11-19-15AM.log.
Session log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2020-02-15_11-20-56AM.log
The id for this session is 4465
Executing OPatch prereq operations to verify patch applicability on home /u02/oracle/12.2.0.1/grid
Patch applicability verified successfully on home /u02/oracle/12.2.0.1/grid
OPatchAuto successful.
--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:hykeham
CRS Home:/u02/oracle/12.2.0.1/grid
Version:12.2.0.1.0
==Following patches were SKIPPED:
Patch: /oswatcher/2020_patch/30116802/26839277
Reason: This patch is already been applied, so not going to apply again.
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /oswatcher/2020_patch/30116802/30122814
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-21-43AM_1.log
Patch: /oswatcher/2020_patch/30116802/30122828
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-21-43AM_1.log
Patch: /oswatcher/2020_patch/30116802/30093408
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-21-43AM_1.log
Patch: /oswatcher/2020_patch/30116802/30138470
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-21-43AM_1.log
OPatchauto session completed at Sat Feb 15 11:21:56 2020
Time taken to complete the session 3 minutes, 29 seconds
Implementation
---------------
1) Set the proper PATH and apply the PSU through opatchauto as the root user.
Example:
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:] ls -ltr /oswatcher/2020_patch/30116802
total 272
drwxr-x--- 4 oracrs oinstall 256 Sep 27 20:08 26839277
drwxr-x--- 5 oracrs oinstall 256 Sep 27 20:08 30122828
drwxr-x--- 4 oracrs oinstall 256 Sep 27 20:08 30093408
drwxr-x--- 4 oracrs oinstall 256 Sep 27 20:08 30138470
drwxr-x--- 2 oracrs oinstall 4096 Sep 27 20:12 automation
drwxr-x--- 5 oracrs oinstall 256 Sep 27 20:12 30122814
-rw-r----- 1 oracrs oinstall 0 Sep 27 20:18 README.txt
-rw-r----- 1 oracrs oinstall 124674 Sep 27 21:46 README.html
-rw-rw-r-- 1 oracrs oinstall 5828 Sep 27 22:16 bundle.xml
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:] chmod 775
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:] chmod -R 775 /oswatcher/2020_patch
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:] ls -ltr /oswatcher/2020_patch/30116802
total 272
drwxrwxr-x 4 oracrs oinstall 256 Sep 27 20:08 26839277
drwxrwxr-x 5 oracrs oinstall 256 Sep 27 20:08 30122828
drwxrwxr-x 4 oracrs oinstall 256 Sep 27 20:08 30093408
drwxrwxr-x 4 oracrs oinstall 256 Sep 27 20:08 30138470
drwxrwxr-x 2 oracrs oinstall 4096 Sep 27 20:12 automation
drwxrwxr-x 5 oracrs oinstall 256 Sep 27 20:12 30122814
-rwxrwxr-x 1 oracrs oinstall 0 Sep 27 20:18 README.txt
-rwxrwxr-x 1 oracrs oinstall 124674 Sep 27 21:46 README.html
-rwxrwxr-x 1 oracrs oinstall 5828 Sep 27 22:16 bundle.xml
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:] ls -ltr /u02/oracle/12.2.0.1/grid
total 6284552
-rw-r----- 1 oracrs oinstall 500 Feb 06 2013 welcome.html
-rw-r----- 1 oracrs oinstall 59 Oct 08 2014 oraInst.loc
-rw-r--r-- 1 oracrs oinstall 852 Sep 02 2015 env.ora
-rwxr-x--- 1 oracrs oinstall 628 Sep 05 2015 runcluvfy.sh
-rwxr-x--- 1 oracrs oinstall 5395 Jul 21 2016 gridSetup.sh
drwxr-xr-x 3 oracrs oinstall 256 Apr 02 2017 slax
-rwxrwxr-x 1 oracrs oinstall 170920687 Feb 13 14:40 p6880880_122010_AIX64-5L.zip
drwxr-xr-x 2 oracrs oinstall 4096 Feb 15 11:11 dbs
drwx------ 8 oracrs oinstall 4096 Feb 15 11:17 .patch_storage
drwxrwxr-x 3 oracrs oinstall 256 Feb 15 11:18 opatchautocfg
drwxrwxr-x 8 oracrs oinstall 8192 Feb 15 11:19 cfgtoollogs
drwxr-x--- 2 oracrs oinstall 256 Feb 15 11:21 .opatchauto_storage
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:] ./opatchauto apply /oswatcher/2020_patch/30116802 -oh /u02/oracle/12.2.0.1/grid
OPatchauto session is initiated at Sat Feb 15 11:23:59 2020
System initialization log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-15_11-24-16AM.log.
Session log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2020-02-15_11-25-54AM.log
The id for this session is PN8U
Executing OPatch prereq operations to verify patch applicability on home /u02/oracle/12.2.0.1/grid
Patch applicability verified successfully on home /u02/oracle/12.2.0.1/grid
Bringing down CRS service on home /u02/oracle/12.2.0.1/grid
Prepatch operation log file location: /u02/oracle/PPT/crsdata/hykeham/crsconfig/crspatch_hykeham_2020-02-15_11-26-54AM.log
CRS service brought down successfully on home /u02/oracle/12.2.0.1/grid
Start applying binary patch on home /u02/oracle/12.2.0.1/grid
Successfully executed command: /usr/sbin/slibclean
Binary patch applied successfully on home /u02/oracle/12.2.0.1/grid
Starting CRS service on home /u02/oracle/12.2.0.1/grid
Postpatch operation log file location: /u02/oracle/PPT/crsdata/hykeham/crsconfig/crspatch_hykeham_2020-02-15_11-41-29AM.log
CRS service started successfully on home /u02/oracle/12.2.0.1/grid
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:hykeham
CRS Home:/u02/oracle/12.2.0.1/grid
Version:12.2.0.1.0
Summary:
==Following patches were SKIPPED:
Patch: /oswatcher/2020_patch/30116802/26839277
Reason: This patch is already been applied, so not going to apply again.
==Following patches were SUCCESSFULLY applied:
Patch: /oswatcher/2020_patch/30116802/30093408
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-28-58AM_1.log
Patch: /oswatcher/2020_patch/30116802/30122814
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-28-58AM_1.log
Patch: /oswatcher/2020_patch/30116802/30122828
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-28-58AM_1.log
Patch: /oswatcher/2020_patch/30116802/30138470
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-28-58AM_1.log
OPatchauto session completed at Sat Feb 15 11:54:12 2020
Time taken to complete the session 30 minutes, 14 seconds
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[hykeham:root:/u02/oracle/12.2.0.1/grid/OPatch:]
Post implementation
-------------------
During the patch apply, opatchauto stops the clusterware, applies the patch on the clusterware home, and then starts the clusterware automatically.
2) Validate the PSU using opatch lsinventory from the Grid user.
crsctl query crs activeversion
crsctl query crs releaseversion
Check the status of CRS:
./crsctl check crs
ps -ef|grep d.bin
su - oracrs
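It is also worth cross-checking that all five GI constituents are now recorded in the inventory (a sketch, run as the Grid user):
export ORACLE_HOME=/u02/oracle/12.2.0.1/grid
$ORACLE_HOME/OPatch/opatch lsinventory | grep -i -E '30122814|30122828|30093408|26839277|30138470'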
Start the databases in case they are not started, and check the ACFS mount points:
srvctl start instance -d test1db -i test1db1
srvctl start instance -d test2db -i test2db1
Then check the load balancing:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> set linesize 200
SQL> select name,open_mode,to_char(startup_time,'dd-mon-yyyy hh24:mi:ss') "DB Startup Time",host_name from v$database,v$instance;
NAME OPEN_MODE DB Startup Time HOST_NAME
--------- -------------------- ----------------------------- ----------------------------------------------------------------
SAM44PPT READ WRITE 15-feb-2020 09:05:08 heanor
SQL> select count(1),inst_id from gv$session group by inst_id;
COUNT(1) INST_ID
Issue during 12c patching
-------------------------------
Error while applying the patch on the clusterware
----------------------------------
OPatchauto session is initiated at Sat Apr 18 03:17:56 2020
System initialization log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2020-04-18_03-18-27AM.log.
Session log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2020-04-18_03-19-42AM.log
The id for this session is BXND
Executing OPatch prereq operations to verify patch applicability on home /u02/oracle/12.2.0.1/grid
Patch applicability verified successfully on home /u02/oracle/12.2.0.1/grid
Bringing down CRS service on home /u02/oracle/12.2.0.1/grid
Prepatch operation log file location: /u02/oracle/PRD/crsdata/fleetwood/crsconfig/crspatch_fleetwood_2020-04-18_03-21-07AM.log
CRS service brought down successfully on home /u02/oracle/12.2.0.1/grid
Start applying binary patch on home /u02/oracle/12.2.0.1/grid
Successfully executed command: /usr/sbin/slibclean
Failed while applying binary patches on home /u02/oracle/12.2.0.1/grid
Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : fleetwood->/u02/oracle/12.2.0.1/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u02/oracle/12.2.0.1/grid, host: fleetwood.
Command failed: /u02/oracle/12.2.0.1/grid/OPatch/opatchauto apply /oswatcher/2020_patch/30116802 -oh /u02/oracle/12.2.0.1/grid -target_type cluster -binary -invPtrLoc /u02/oracle/12.2.0.1/grid/oraInst.loc -jre /u02/oracle/12.2.0.1/grid/OPatch/jre -persistresult /u02/oracle/12.2.0.1/grid/opatchautocfg/db/sessioninfo/sessionresult_fleetwood_crs.ser -analyzedresult /u02/oracle/12.2.0.1/grid/opatchautocfg/db/sessioninfo/sessionresult_analy
It looks like a permission issue: on the first node the file is owned by root, but it should be owned by the Grid user.
[fleetwood:root:/home/root:] ls -lrt /u02/oracle/12.2.0.1/grid/suptools/orachk.zip
-rw-r----- 1 root system 140210504 Aug 08 2018 /u02/oracle/12.2.0.1/grid/suptools/orachk.zip
[fleetwood:root:/home/root:]
On the second node, the permissions look fine.
The remaining patches are still pending to be applied.
[fairfield:root:/home/root:] ls -lrt /u02/oracle/12.2.0.1/grid/suptools/orachk.zip
-rw-r----- 1 oracrs oinstall 140210504 Oct 17 2018 /u02/oracle/12.2.0.1/grid/suptools/orachk.zip
[fairfield:root:/home/root:]
As per the log, there is a bug.
As per Oracle Support's comments:
Hi,
++ As below bug patches are in place on node#1 now.
BUG 30122814 - OCW OCT 2019
BUG 30122828 - ACFS OCT 2019
BUG 30093408 - TOMCAT
++ You cannot continue with "opatchauto apply" for remaining patches on GI home. The only option is to apply the patch manually using "opatch apply" on node#1.
++ Can you provide the below details?
# ls -lrt /oswatcher/2020_patch/30116802
# <GI_HOME>/bin/kfod op=patches
# <GI_HOME>/bin/kfod op=patchlvl
++ Run the below command on node#1 and share again the below command output
# $GI_HOME/crs/install/rootcrs.sh -prepatch
# $GI_HOME/crs/install/rootcrs.sh -postpatch
# ls -lrt /u02/oracle/12.2.0.1/grid/suptools/orachk.zip
Output of the above commands:
--------------------
[fleetwood:root:/home/root:]
[fleetwood:root:/home/root:] ls -lrt /oswatcher/2020_patch/30116802
total 272
drwxrwxr-x 4 oracrs oinstall 256 Sep 27 2019 26839277
drwxrwxr-x 5 oracrs oinstall 256 Sep 27 2019 30122828
drwxrwxr-x 4 oracrs oinstall 256 Sep 27 2019 30093408
drwxrwxr-x 4 oracrs oinstall 256 Sep 27 2019 30138470
drwxrwxr-x 2 oracrs oinstall 4096 Sep 27 2019 automation
drwxrwxr-x 5 oracrs oinstall 256 Sep 27 2019 30122814
-rwxrwxr-x 1 oracrs oinstall 0 Sep 27 2019 README.txt
-rwxrwxr-x 1 oracrs oinstall 124674 Sep 27 2019 README.html
-rwxrwxr-x 1 oracrs oinstall 5828 Sep 27 2019 bundle.xml
[fleetwood:root:/home/root:] /u02/oracle/12.2.0.1/grid/bin/kfod op=patches
/u02/oracle/12.2.0.1/grid/bin/kfod[22]: %ORACLE_HOME%/bin/kfod.bin: not found.
[fleetwood:root:/home/root:] /u02/oracle/12.2.0.1/grid/bin/kfod op=patchlvl
/u02/oracle/12.2.0.1/grid/bin/kfod[22]: %ORACLE_HOME%/bin/kfod.bin: not found.
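The kfod wrapper script substitutes ORACLE_HOME, so it fails when the variable is not set, as for root above. A sketch of a working invocation, e.g. as the Grid user:
export ORACLE_HOME=/u02/oracle/12.2.0.1/grid
$ORACLE_HOME/bin/kfod op=patches     # list patches known to the cluster
$ORACLE_HOME/bin/kfod op=patchlvl    # show the cluster patch level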
[fleetwood:root:/home/root:] /u02/oracle/12.2.0.1/grid/crs/install/rootcrs.sh -prepatch
Using configuration parameter file: /u02/oracle/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u02/oracle/PRD/crsdata/fleetwood/crsconfig/crspatch_fleetwood_2020-04-18_04-58-22AM.log
[fleetwood:root:/home/root:] /u02/oracle/12.2.0.1/grid//crs/install/rootcrs.sh -postpatch
Using configuration parameter file: /u02/oracle/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u02/oracle/PRD/crsdata/fleetwood/crsconfig/crspatch_fleetwood_2020-04-18_04-58-46AM.log
2020/04/18 05:00:06 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2020/04/18 05:00:07 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2020/04/18 05:00:55 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'fleetwood'
CRS-2672: Attempting to start 'ora.evmd' on 'fleetwood'
CRS-2676: Start of 'ora.mdnsd' on 'fleetwood' succeeded
CRS-2676: Start of 'ora.evmd' on 'fleetwood' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'fleetwood'
CRS-2676: Start of 'ora.gpnpd' on 'fleetwood' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'fleetwood'
CRS-2676: Start of 'ora.gipcd' on 'fleetwood' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'fleetwood'
CRS-2676: Start of 'ora.cssdmonitor' on 'fleetwood' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'fleetwood'
CRS-2672: Attempting to start 'ora.diskmon' on 'fleetwood'
CRS-2676: Start of 'ora.diskmon' on 'fleetwood' succeeded
CRS-2676: Start of 'ora.cssd' on 'fleetwood' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'fleetwood'
CRS-2672: Attempting to start 'ora.ctssd' on 'fleetwood'
CRS-2676: Start of 'ora.ctssd' on 'fleetwood' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'fleetwood'
CRS-2676: Start of 'ora.crf' on 'fleetwood' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'fleetwood'
CRS-2676: Start of 'ora.crsd' on 'fleetwood' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'fleetwood' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'fleetwood'
CRS-2676: Start of 'ora.asm' on 'fleetwood' succeeded
CRS-6017: Processing resource auto-start for servers: fleetwood
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'fleetwood'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'fairfield'
CRS-2673: Attempting to stop 'ora.fleetwood.vip' on 'fairfield'
CRS-2672: Attempting to start 'ora.ons' on 'fleetwood'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'fairfield' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'fairfield'
CRS-2677: Stop of 'ora.fleetwood.vip' on 'fairfield' succeeded
CRS-2672: Attempting to start 'ora.fleetwood.vip' on 'fleetwood'
CRS-2677: Stop of 'ora.scan1.vip' on 'fairfield' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'fleetwood'
CRS-2676: Start of 'ora.fleetwood.vip' on 'fleetwood' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'fleetwood'
CRS-2676: Start of 'ora.scan1.vip' on 'fleetwood' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'fleetwood'
CRS-2676: Start of 'ora.ons' on 'fleetwood' succeeded
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'fleetwood' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'fleetwood' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'fleetwood'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'fleetwood' succeeded
CRS-2676: Start of 'ora.asm' on 'fleetwood' succeeded
CRS-2672: Attempting to start 'ora.CRS.dg' on 'fleetwood'
CRS-2672: Attempting to start 'ora.E2B01PRD_DATA1.dg' on 'fleetwood'
CRS-2672: Attempting to start 'ora.E2B01PRD_FRA1.dg' on 'fleetwood'
CRS-2672: Attempting to start 'ora.E2B01PRD_REDO1.dg' on 'fleetwood'
CRS-2672: Attempting to start 'ora.E2B01PRD_REDO2.dg' on 'fleetwood'
CRS-2672: Attempting to start 'ora.MGMT.dg' on 'fleetwood'
CRS-2676: Start of 'ora.E2B01PRD_FRA1.dg' on 'fleetwood' succeeded
CRS-2664: Resource 'ora.e2b01prd.db' is already running on 'fairfield'
CRS-2676: Start of 'ora.E2B01PRD_REDO2.dg' on 'fleetwood' succeeded
CRS-2664: Resource 'ora.e2b01prd.db' is already running on 'fairfield'
CRS-2676: Start of 'ora.MGMT.dg' on 'fleetwood' succeeded
CRS-2676: Start of 'ora.E2B01PRD_REDO1.dg' on 'fleetwood' succeeded
CRS-2664: Resource 'ora.e2b01prd.db' is already running on 'fairfield'
CRS-2676: Start of 'ora.E2B01PRD_DATA1.dg' on 'fleetwood' succeeded
CRS-2664: Resource 'ora.e2b01prd.db' is already running on 'fairfield'
CRS-2676: Start of 'ora.CRS.dg' on 'fleetwood' succeeded
CRS-6016: Resource auto-start has completed for server fleetwood
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [1221707709].
SQL Patching tool version 12.2.0.1.0 Production on Sat Apr 18 05:11:01 2020
Copyright (c) 2012, 2018, Oracle. All rights reserved.
Connecting to database...OK
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Adding patches to installation queue and performing prereq checks...done
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED GIMR_DSCREP_10
Nothing to roll back
Nothing to apply
SQL Patching tool complete on Sat Apr 18 05:12:20 2020
[fleetwood:root:/home/root:]
[fleetwood:root:/home/root:]
[fleetwood:root:/home/root:] ls -lrt /u02/oracle/12.2.0.1/grid/suptools/orachk.zip
-rw-r----- 1 root system 140210504 Aug 08 2018 /u02/oracle/12.2.0.1/grid/suptools/orachk.zip
[fleetwood:root:/home/root:]
/u02/oracle/12.2.0.1/grid/crs/install/rootcrs.sh -postpatch
./rootcrs.sh -postpatch
============
As per Oracle Support's recommendation, we need to apply the missing patches manually.
Steps for Applying the Patch
======================
Execute the following on each node of the cluster, in a non-shared CRS and DB home environment, to apply the patch.
1. Stop the CRS managed resources running from DB homes.
If this is a GI Home environment, as the database home owner execute:
$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location> -n <node name>
Note:
You need to make sure that the Oracle ACFS file systems are unmounted (see Section 2.8) and all other Oracle processes are shut down before you proceed.
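A concrete sketch of this command for node fleetwood (here ORACLE_HOME is the database home, and the status file path is illustrative; the same file is reused by srvctl start home in step 7):
$ORACLE_HOME/bin/srvctl stop home -o $ORACLE_HOME -s /tmp/svc_state_fleetwood.txt -n fleetwood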
2. Run the pre root script.
If this is a GI Home, as the root user execute:
# <GI_HOME>/crs/install/rootcrs.sh -prepatch
3. Patch GI home.
As the GI home owner execute:
$ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%DB WLM TRACKING BUG%
$ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%DB RU TRACKING BUG%
5. Run the post script.
As the root user execute:
# <GI_HOME>/rdbms/install/rootadd_rdbms.sh
If this is a GI Home, as the root user execute:
# <GI_HOME>/crs/install/rootcrs.sh -postpatch
6. If the message "A system reboot is recommended before using ACFS" is shown, then a reboot must be issued before continuing.
Failure to do so will result in running with an unpatched ACFS\ADVM\OKS driver.
7. Start the CRS managed resources that were earlier running from DB homes.
If this is a GI Home environment, as the database home owner execute:
$ <ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file location> -n <node name>
8. For each database instance running on the Oracle home being patched, run the datapatch utility as described in next table.
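A typical datapatch invocation sketch, as the database home owner for each database on the patched home (datapatch ships under the OPatch directory):
cd $ORACLE_HOME/OPatch
./datapatch -verbose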
Example (from the root user). The command used earlier:
./opatchauto apply /oswatcher/2020_patch/30116802 -oh /u02/oracle/12.2.0.1/grid
[Apr 18, 2020 3:24:48 AM] [INFO] Patch 30093408 successfully applied.
[Apr 18, 2020 3:25:56 AM] [INFO] Patch 30122814 successfully applied.
[Apr 18, 2020 3:29:30 AM] [INFO] Patch 30122828 successfully applied.
drwxrwxr-x 4 oracrs oinstall 256 Sep 27 2019 26839277
drwxrwxr-x 5 oracrs oinstall 256 Sep 27 2019 30122828 -done
drwxrwxr-x 4 oracrs oinstall 256 Sep 27 2019 30093408 -done
drwxrwxr-x 4 oracrs oinstall 256 Sep 27 2019 30138470
drwxrwxr-x 5 oracrs oinstall 256 Sep 27 2019 30122814 -done
"done" means the patch was already applied by opatchauto.
<UNZIPPED_PATCH_LOCATION>/30116802/30138470
<UNZIPPED_PATCH_LOCATION>/30116802/30122814-done
<UNZIPPED_PATCH_LOCATION>/30116802/30122828-done
<UNZIPPED_PATCH_LOCATION>/30116802/26839277
<UNZIPPED_PATCH_LOCATION>/30116802/30093408- done
We need to apply the missing patches as below:
/oswatcher/2020_patch/30116802/26839277
/oswatcher/2020_patch/30116802/30138470
For 11g:
From the root user:
cd /u02/oracle/12.2.0.1/grid/crs/install
./rootcrs.pl -unlock
from grid user
cd /u02/oracle/12.2.0.1/grid/OPatch
export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
opatch napply -oh /u02/oracle/12.2.0.1/grid -local /oswatcher/2020_patch/30116802/26839277
opatch napply -oh /u02/oracle/12.2.0.1/grid -local /oswatcher/2020_patch/30116802/30138470
From the root user:
/u02/oracle/12.2.0.1/grid/crs/install/rootcrs.pl -patch
Start CRS on node 1.
Start the database on node 1.
From 12c the syntax has changed.
From the root user:
cd /u02/oracle/12.2.0.1/grid/crs/install/
./rootcrs.sh -prepatch
from grid user
cd /u02/oracle/12.2.0.1/grid/OPatch
export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
opatch napply -oh /u02/oracle/12.2.0.1/grid -local /oswatcher/2020_patch/30116802/26839277
opatch apply -oh /u02/oracle/12.2.0.1/grid -local /oswatcher/2020_patch/30116802/30138470
From the root user:
cd /u02/oracle/12.2.0.1/grid/crs/install/
./rootcrs.sh -postpatch
/u02/oracle/12.2.0.1/grid/rdbms/install/rootadd_rdbms.sh
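Once all nodes are patched, it is worth confirming the cluster patch level and upgrade state; a quick verification sketch (the -f flag prints the patch level, as seen in the log above):
/u02/oracle/12.2.0.1/grid/bin/crsctl query crs activeversion -f
# as the Grid user with ORACLE_HOME set:
/u02/oracle/12.2.0.1/grid/bin/kfod op=patches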