General
Troubleshooting steps when opatchauto fails while applying a patch in a Clusterware/Grid environment
The April patch needs to be unzipped using the jar command:
jar xvf p30783658_121020_AIX64-5L.zip
jar xvf p30920127_122010_AIX64-5L.zip
chmod -R 775 30920127
Note: when opatchauto fails, always check the clusterware status using the commands below.
Please share the below information from both node 1 and node 2:
<GI_HOME>/bin/kfod op=patches
<GI_HOME>/bin/kfod op=patchlvl
And also please share below details
ps -ef|grep d.bin
ps -ef|grep pmon
crsctl check crs
crsctl check cluster -all
crsctl stat res -t
crsctl stat res -t -init
olsnodes -n -t
Please share the latest opatch lsinventory output from the patch home
&
opatch lspatches ==> output from both nodes
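To avoid gathering these one by one, the checks can be wrapped in a small shell script and run on each node. This is a minimal sketch, not part of the original procedure: the GI_HOME value and output file name are assumptions, so adjust them for your environment and run it as a user that can read the grid home.

#!/bin/sh
# collect_gi_status.sh - gather clusterware and patch status into one file (sketch)
GI_HOME=/u02/oracle/12.2.0.1/grid          # assumption: set to your grid home
OUT=/tmp/gi_status_$(hostname).txt          # assumption: any writable path works
{
  echo "== processes =="
  ps -ef | grep d.bin | grep -v grep
  ps -ef | grep pmon  | grep -v grep
  echo "== clusterware =="
  $GI_HOME/bin/crsctl check crs
  $GI_HOME/bin/crsctl check cluster -all
  $GI_HOME/bin/crsctl stat res -t
  $GI_HOME/bin/crsctl stat res -t -init
  $GI_HOME/bin/olsnodes -n -t
  echo "== patch level =="
  $GI_HOME/bin/kfod op=patches
  $GI_HOME/bin/kfod op=patchlvl
  $GI_HOME/OPatch/opatch lspatches -oh $GI_HOME
} > $OUT 2>&1
echo "Status written to $OUT"

Run it on both nodes and compare the two output files side by side.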
Scenario 1) We performed the pre-checks and amendments per the Oracle Support readme for patching: checked free space on the temp and home file systems, updated the OPatch version, checked for patch conflicts, and ran the opatchauto analyze command. opatchauto then failed with a permission issue on "suptools/orachk.zip". We rectified the permission on it and resumed the opatchauto operation, and it completed successfully.
Precheck
We ran the analyze command to do the pre-check, and it completed successfully.
[Node1:root:/home/root:] export
PATH=$PATH:/u01/oracle/Node2/product/12.2.0.1/OPatch
[Node1:root:/home/root:] cd /u01/oracle/Node2/product/12.2.0.1/OPatch
[Node1:root:/u01/oracle/Node2/product/12.2.0.1/OPatch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[Node1:root:/u01/oracle/Node2/product/12.2.0.1/OPatch:]
./opatchauto apply /oswatcher/2020_patch/30116802 -analyze -oh
/u01/oracle/Node2/product/12.2.0.1
OPatchauto session is initiated at Wed May 20 01:09:00 2020
System initialization log file is
/u01/oracle/Node2/product/12.2.0.1/cfgtoollogs/opatchautodb/systemconfig2020-05-20_01-09-43AM.log.
Session log file is
/u01/oracle/Node2/product/12.2.0.1/cfgtoollogs/opatchauto/opatchauto2020-05-20_01-13-11AM.log
The id for this session is IXRW
Executing OPatch prereq operations to verify patch applicability on
home /u01/oracle/Node2/product/12.2.0.1
Patch applicability verified successfully on home
/u01/oracle/Node2/product/12.2.0.1
Verifying SQL patch applicability on home
/u01/oracle/Node2/product/12.2.0.1
SQL patch applicability verified successfully on home
/u01/oracle/Node2/product/12.2.0.1
OPatchAuto successful.
--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:Node1
RAC Home:/u01/oracle/Node2/product/12.2.0.1
Version:12.2.0.1.0
==Following patches were SKIPPED:
Patch: /oswatcher/2020_patch/30116802/30122828
Reason: This patch is not applicable to this specified target type -
"rac_database"
Patch: /oswatcher/2020_patch/30116802/26839277
Reason: This patch is not applicable to this specified target type -
"rac_database"
Patch: /oswatcher/2020_patch/30116802/30093408
Reason: This patch is not applicable to this specified target type -
"rac_database"
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /oswatcher/2020_patch/30116802/30122814
Log:
/u01/oracle/Node2/product/12.2.0.1/cfgtoollogs/opatchauto/core/opatch/opatch2020-05-20_01-13-43AM_1.log
Patch: /oswatcher/2020_patch/30116802/30138470
Log:
/u01/oracle/Node2/product/12.2.0.1/cfgtoollogs/opatchauto/core/opatch/opatch2020-05-20_01-13-43AM_1.log
OPatchauto session completed at Wed May 20 01:16:28 2020
Time taken to complete the session 7 minutes, 29 seconds
[Node1:root:/u01/oracle/Node2/product/12.2.0.1/OPatch:]
Implementation
1) Started applying the patch, but it failed with a permission issue
+ASM1:/u02/oracle/12.2.0.1/grid:N
-MGMTDB:/u02/oracle/12.2.0.1/grid:N
Node2:/u01/oracle/Node2/product/12.2.0.1:N
[Node1:root:/u01/oracle/Node2/product/12.2.0.1/OPatch:]
./opatchauto apply /oswatcher/2020_patch/30116802 -oh
/u01/oracle/Node2/product/12.2.0.1
OPatchauto session is initiated at Wed May 20 01:17:46 2020
System initialization log file is /u01/oracle/Node2/product/12.2.0.1/cfgtoollogs/opatchautodb/systemconfig2020-05-20_01-18-15AM.log.
Session log file is /u01/oracle/Node2/product/12.2.0.1/cfgtoollogs/opatchauto/opatchauto2020-05-20_01-20-28AM.log
The id for this session is GJLW
Executing OPatch prereq operations to verify patch applicability on home /u01/oracle/Node2/product/12.2.0.1
Patch applicability verified successfully on home /u01/oracle/Node2/product/12.2.0.1
Verifying SQL patch applicability on home /u01/oracle/Node2/product/12.2.0.1
SQL patch applicability verified successfully on home /u01/oracle/Node2/product/12.2.0.1
Preparing to bring down database service on home /u01/oracle/Node2/product/12.2.0.1
Successfully prepared home /u01/oracle/Node2/product/12.2.0.1 to bring down database service
Bringing down database service on home /u01/oracle/Node2/product/12.2.0.1
Following database(s) and/or service(s) are stopped and will be restarted later during the session: Node2
Database service successfully brought down on home /u01/oracle/Node2/product/12.2.0.1
Performing prepatch operation on home /u01/oracle/Node2/product/12.2.0.1
Prepatch operation completed successfully on home /u01/oracle/Node2/product/12.2.0.1
Start applying binary patch on home /u01/oracle/Node2/product/12.2.0.1
Successfully executed command: /usr/sbin/slibclean
Failed while applying binary patches on home /u01/oracle/Node2/product/12.2.0.1
Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : Node1->/u01/oracle/Node2/product/12.2.0.1 Type[rac]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/oracle/Node2/product/12.2.0.1, host: Node1.
Command failed: /u01/oracle/Node2/product/12.2.0.1/OPatch/opatchauto apply /oswatcher/2020_patch/30116802 -oh /u01/oracle/Node2/product/12.2.0.1 -target_type rac_database -binary -invPtrLoc /u01/oracle/Node2/product/12.2.0.1/oraInst.loc -jre /u01/oracle/Node2/product/12.2.0.1/OPatch/jre -persistresult /u01/oracle/Node2/product/12.2.0.1/opatchautocfg/db/sessioninfo/sessionresult_Node1_rac.ser -analyzedresult /u01/oracle/Node2/product/12.2.0.1/opatchautocfg/db/sessioninfo/sessionresult_analyze_Node1_rac.ser
Command failure output:
==Following patches FAILED in apply:
Patch: /oswatcher/2020_patch/30116802/30138470
Log: /u01/oracle/Node2/product/12.2.0.1/cfgtoollogs/opatchauto/core/opatch/opatch2020-05-20_01-24-09AM_1.log
Reason: Failed during Analysis: CheckPatchApplicable Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:
Patch 30138470:
Copy Action: Destination File "/u01/oracle/Node2/product/12.2.0.1/suptools/orachk.zip" is not writeable.
'oracle.tfa, 12.2.0.1.0': Cannot copy file from 'orachk.zip' to '/u01/oracle/Node2/product/12.2.0.1/suptools/orachk.zip'
]
After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.
OPatchauto session completed at Wed May 20 01:25:34 2020
Time taken to complete the session 7 minutes, 49 seconds
opatchauto failed with error code 42
[Node1:root:/u01/oracle/Node2/product/12.2.0.1/OPatch:]
2) Checked permissions on both nodes, rectified the permission issue for orachk.zip, and changed it as per the recommendation
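When checking both nodes, a quick loop saves logging in twice. This is a minimal sketch, not from the original session; it assumes passwordless ssh between the nodes (otherwise just run the ls on each node) and uses the orachk.zip path from this scenario:

# hypothetical helper: compare orachk.zip ownership and permissions on both nodes
F=/u01/oracle/Node2/product/12.2.0.1/suptools/orachk.zip
for h in Node1 Node2; do
  echo "== $h =="
  ssh $h "ls -l $F"
done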
airfield[/home/eibm994]$
sudo su -
Password:
[YOU HAVE NEW MAIL]
[Node2:root:/home/root:]
ls -ltr /u01/oracle/Node2/product/12.2.0.1/suptools/orachk.zip
-rw-r--r-- 1 root
system 140210504 Aug 08 2018
/u01/oracle/Node2/product/12.2.0.1/suptools/orachk.zip
[Node2:root:/home/root:]
ls
[Node2:root:/home/root:]
ls -ltr /u01/oracle/Node2/product/12.2.0.1/suptools
total 273880
-rw-r--r-- 1 root
system 140210504 Aug 08 2018 orachk.zip
drwxr-xr-x 3
e2bprd oinstall 256 Jun 21 2019 tfa
drwxr-xr-x 2
e2bprd oinstall 256 Jun 21 2019 oratop
drwxr-xr-x 6
e2bprd oinstall 4096 May 17 08:29 orachk
[Node2:root:/home/root:]
Node2:root:/u01/oracle/Node2/product/12.2.0.1:]
cd suptools
[Node2:root:/u01/oracle/Node2/product/12.2.0.1/suptools:]
ls -ltr
total 273880
-rw-r--r-- 1 root
system 140210504 Aug 08 2018 orachk.zip
drwxr-xr-x 3
e2bprd oinstall 256 Jun 21 2019 tfa
drwxr-xr-x 2
e2bprd oinstall 256 Jun 21 2019 oratop
drwxr-xr-x 6
e2bprd oinstall 4096 May 17 08:29 orachk
[Node2:root:/u01/oracle/Node2/product/12.2.0.1/suptools:]
chmod 777 orachk.zip
[Node2:root:/u01/oracle/Node2/product/12.2.0.1/suptools:]
ls -ltr
total 273880
-rwxrwxrwx 1 root
system 140210504 Aug 08 2018 orachk.zip
drwxr-xr-x 3
e2bprd oinstall 256 Jun 21 2019 tfa
drwxr-xr-x 2
e2bprd oinstall 256 Jun 21 2019 oratop
drwxr-xr-x 6
e2bprd oinstall 4096 May 17 08:29 orachk
[Node2:root:/u01/oracle/Node2/product/12.2.0.1/suptools:]
3) After rectifying the permission, we ran the resume command as below, and patching completed successfully
[Node1:root:/u01/oracle/Node2/product/12.2.0.1/OPatch:]
[Node1:root:/u01/oracle/Node2/product/12.2.0.1/OPatch:]
[Node1:root:/u01/oracle/Node2/product/12.2.0.1/OPatch:]
[Node1:root:/u01/oracle/Node2/product/12.2.0.1/OPatch:] opatchauto
resume
OPatchauto session is initiated at Wed May 20 01:39:21 2020
Session log file is
/u01/oracle/Node2/product/12.2.0.1/cfgtoollogs/opatchauto/opatchauto2020-05-20_01-39-25AM.log
Resuming existing session with id GJLW
Start applying binary patch on home
/u01/oracle/Node2/product/12.2.0.1
Successfully executed command: /usr/sbin/slibclean
Binary patch applied successfully on home
/u01/oracle/Node2/product/12.2.0.1
Performing postpatch operation on home
/u01/oracle/Node2/product/12.2.0.1
Postpatch operation completed successfully on home
/u01/oracle/Node2/product/12.2.0.1
Starting database service on home /u01/oracle/Node2/product/12.2.0.1
Database service successfully started on home
/u01/oracle/Node2/product/12.2.0.1
Preparing home /u01/oracle/Node2/product/12.2.0.1 after database
service restarted
No step execution required.........
Trying to apply SQL patch on home /u01/oracle/Node2/product/12.2.0.1
SQL patch applied successfully on home
/u01/oracle/Node2/product/12.2.0.1
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as
follows:
Host:Node1
RAC Home:/u01/oracle/Node2/product/12.2.0.1
Version:12.2.0.1.0
Summary:
==Following patches were SKIPPED:
Patch: /oswatcher/2020_patch/30116802/30122828
Reason: This patch is not applicable to this specified target type -
"rac_database"
Patch: /oswatcher/2020_patch/30116802/26839277
Reason: This patch is not applicable to this specified target type -
"rac_database"
Patch: /oswatcher/2020_patch/30116802/30093408
Reason: This patch is not applicable to this specified target type -
"rac_database"
Patch: /oswatcher/2020_patch/30116802/30122814
Reason: This patch is already been applied, so not going to apply
again.
==Following patches were
SUCCESSFULLY applied:
Patch:
/oswatcher/2020_patch/30116802/30138470
Log:
/u01/oracle/Node2/product/12.2.0.1/cfgtoollogs/opatchauto/core/opatch/opatch2020-05-20_01-39-55AM_1.log
OPatchauto session completed at Wed
May 20 01:51:16 2020
Time taken to complete the session
11 minutes, 56 seconds
[Node1:root:/u01/oracle/Node2/product/12.2.0.1/OPatch:]
[Node1:root:/u01/oracle/Node2/product/12.2.0.1/OPatch:]
[Node1:root:/u01/oracle/Node2/product/12.2.0.1/OPatch:]
[YOU HAVE NEW MAIL]
[Node1:root:/home/root:] su - ora
[Node1:root:/home/root:] ps -ef|grep pmon
oracrs 12189898 1 0 Apr 21 - 0:52 mdb_pmon_-MGMTDB
oracrs 15597642 1 0 Apr 21 - 0:50 asm_pmon_+ASM1
root 23658642 51380270 0 01:53:56 pts/1 0:00 grep pmon
e2bprd 36438040 1 0 01:46:46 - 0:00 ora_pmon_Node21
[Node1:root:/home/root:] su - e2bprd
Node1[/home/e2bprd]$ . oraenv
ORACLE_SID = [Node21] ? Node2
The Oracle base remains unchanged with value /u01/oracle/Node2
Node1[/home/e2bprd]$ export ORACLE_SID=Node21
Node1[/home/e2bprd]$ sqlplus
SQL*Plus: Release 12.2.0.1.0 Production on Wed May 20 01:54:20 2020
Copyright (c) 1982, 2017, Oracle. All rights reserved.
Enter user-name: / as sysdba
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit
Production
SQL> select name,open_mode from v$database;
NAME OPEN_MODE
--------- --------------------
Node2 READ WRITE
SQL> select count(1),inst_id from gv$session group by inst_id
2 ;
COUNT(1) INST_ID
---------- ----------
95 1
100 2
SQL>
Environment and issue summary
This is a two-node RAC cluster (Node1 and Node2). While applying the patch using opatchauto on Node1 (the first node), we faced a permission issue.
We performed the following actions:
1) Changed the owner and permissions of the jre folder and applied the patch manually; it failed.
2) Copied the jdk folder from the working node (Node2) to Node1, relinked the grid binary, and applied the patch successfully.
3) Then started applying the patch manually on the second node (Node2); the patch apply failed with a permission issue.
4) Changed permissions on the JDK folder and tried to apply the patch manually; it failed again with a permission issue.
5) Started clusterware and the database for application use without applying the patch on the second node (Node2).
6) The next day, we shut down the database instance and clusterware as per Oracle Support instructions.
7) Copied the JDK folder from Node1 (now the working node) to the second node (Node2), relinked the grid binary, and started applying the patch manually; it failed again.
8) Restored the Grid home using tar from the backup taken at pre-check time,
and reapplied the patch manually on the grid home; we then applied the patch on the RAC database home using opatchauto, and it applied successfully.
Scenario 2) We performed the pre-checks and amendments per the Oracle Support readme for patching: checked free space on the temp and home file systems, updated the OPatch version, checked for patch conflicts, and ran the opatchauto analyze command. opatchauto failed with a permission issue on jdk/jre/bin/java. We rectified the permissions on the JRE folder and tried to apply the patch manually, but it still failed with the same issue. We then copied the complete JDK folder from the working node using the tar command, transferred the tar file to the trouble node, untarred it, relinked the Grid home, and applied the patch manually; it completed successfully. This issue occurred during the GI April 2020 patching.
April patch details
176870590 May 14 10:33 p6880880_200000_AIX64-5L.zip
2237218303 May 14 10:40 p30920127_122010_AIX64-5L.zip
Note: apply is used to apply one patch at a time; napply is used to apply multiple patches at one time.
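As an illustration, here is a sketch using the April 2020 RU layout from this article. The -id list is only an example and the exact napply invocation can vary by OPatch version, so treat this as a hedged sketch rather than the definitive syntax:

# apply: one sub-patch at a time
opatch apply -oh /u02/oracle/12.2.0.1/grid -local /staging/grid_path_apr2020/30920127/30886680

# napply: several sub-patches in one session, run from the bundle's top-level directory
cd /staging/grid_path_apr2020/30920127
opatch napply -id 30886680,30882603 -oh /u02/oracle/12.2.0.1/grid -local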
Error details while applying the patch using opatchauto
opatchauto resume
OPatchauto session is initiated at Sat May 16 12:04:05 2020
Session log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2020-05-16_12-04-10PM.log
Resuming existing session with id 9ECR
Start applying binary patch on home /u02/oracle/12.2.0.1/grid
Successfully executed command: /usr/sbin/slibclean
Failed while applying binary patches on home /u02/oracle/12.2.0.1/grid
Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : Node1->/u02/oracle/12.2.0.1/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u02/oracle/12.2.0.1/grid, host: Node1.
Command failed: /u02/oracle/12.2.0.1/grid/OPatch/opatchauto apply /staging/grid_path_apr2020/30920127 -oh /u02/oracle/12.2.0.1/grid -target_type cluster -binary -invPtrLoc /u02/oracle/12.2.0.1/grid/oraInst.loc -jre /u02/oracle/12.2.0.1/grid/OPatch/jre -persistresult /u02/oracle/12.2.0.1/grid/opatchautocfg/db/sessioninfo/sessionresult_Node1_crs_5.ser -analyzedresult /u02/oracle/12.2.0.1/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_Node1_crs_5.ser
Command failure output:
==Following patches FAILED in apply:
Patch: /staging/grid_path_apr2020/30920127/30886680
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-05-16_12-05-07PM_1.log
Reason: Failed during Analysis: CheckPatchApplicable Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:
Patch 30886680:
DeleteAction : Destination File "/u02/oracle/12.2.0.1/grid/jdk/jre/bin/java" is not writeable.
'oracle.jdk, 1.8.0.3.91': Cannot delete file '/u02/oracle/12.2.0.1/grid/jdk/jre/bin/java'
DeleteAction : Destination File "/u02/oracle/12.2.0.1/grid/jdk/jre/lib/ppc64/jspawnhelper" is not writeable.
'oracle.jdk, 1.8.0.3.91': Cannot delete file '/u02/oracle/12.2.0.1 ...
After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.
OPatchauto session completed at Sat May 16 12:05:27 2020
Time taken to complete the session 1 minute, 23 seconds
opatchauto failed with error code 42
After changing the permission we resumed patching, but it failed again:
[Node1:root:/u02/oracle/12.2.0.1/grid/OPatch:]
chmod -R 777 /u02/oracle/12.2.0.1/grid/jdk/jre
[Node1:root:/u02/oracle/12.2.0.1/grid/OPatch:]
opatchauto resume
OPatchauto session is initiated at Sat May 16 12:07:30 2020
Session log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2020-05-16_12-07-35PM.log
Resuming existing session with id 9ECR
Start applying binary patch on home /u02/oracle/12.2.0.1/grid
Successfully executed command: /usr/sbin/slibclean
Failed while applying binary patches on home /u02/oracle/12.2.0.1/grid
Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : Node1->/u02/oracle/12.2.0.1/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u02/oracle/12.2.0.1/grid, host: Node1.
Command failed: /u02/oracle/12.2.0.1/grid/OPatch/opatchauto apply /staging/grid_path_apr2020/30920127 -oh /u02/oracle/12.2.0.1/grid -target_type cluster -binary -invPtrLoc /u02/oracle/12.2.0.1/grid/oraInst.loc -jre /u02/oracle/12.2.0.1/grid/OPatch/jre -persistresult /u02/oracle/12.2.0.1/grid/opatchautocfg/db/sessioninfo/sessionresult_Node1_crs_5.ser -analyzedresult /u02/oracle/12.2.0.1/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_Node1_crs_5.ser
Command failure output:
==Following patches FAILED in apply:
Patch: /staging/grid_path_apr2020/30920127/30886680
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-05-16_12-08-27PM_1.log
Reason: Failed during Analysis: CheckNApplyReport Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:
Prerequisite check "CheckApplicable" failed.]
After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.
OPatchauto session completed at Sat May 16 12:08:48 2020
Time taken to complete the session 1 minute, 19 seconds
opatchauto failed with error code 42
[Node1:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[Node1:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[Node1:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[Node1:root:/u02/oracle/12.2.0.1/grid/OPatch:]
Now we checked the total applied patches, and it looked like only two patches were not applied. We changed permissions and tried to apply them manually, but manual patching failed again with the same permission issue.
opatch lsinventory|grep -i 30886680
opatch lsinventory|grep -i 30882603
opatch lsinventory|grep -i 30869447
opatch lsinventory|grep -i 26839277
opatch lsinventory|grep -i 30888810
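The same check can be scripted as one loop instead of five separate greps. A minimal sketch, not from the original session; it assumes opatch is in PATH and is run as the grid owner:

# report which of the expected sub-patches are missing from the inventory
opatch lsinventory > /tmp/lsinv.txt
for p in 30886680 30882603 30869447 26839277 30888810; do
  grep -i $p /tmp/lsinv.txt > /dev/null || echo "patch $p NOT applied"
done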
Node1[/u02/oracle/12.2.0.1/grid]$ cd OPatch
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch lsinventory|grep -i
30886680
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch lsinventory|grep -i
30882603
Patch 30882603 : applied on Sat May 16 11:53:51 BST 2020
Patch description: "OCW APR 2020 RELEASE UPDATE 12.2.0.1.200414
(30882603)"
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch lsinventory|grep -i
30869447
Patch 30869447 : applied on Sat May 16 11:49:27 BST 2020
Patch description: "ACFS APR 2020 RELEASE UPDATE 12.2.0.1.200414
(30869447)"
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch lsinventory|grep -i
26839277
Patch 26839277 : applied on Fri Jan 04 11:13:40 GMT 2019
Patch description: "DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913)
(26839277)"
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch lsinventory|grep -i
30888810
Node1[/u02/oracle/12.2.0.1/grid/Opatch]$
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$
opatch lspatches
30882603;OCW APR 2020 RELEASE UPDATE 12.2.0.1.200414 (30882603)
30869447;ACFS APR 2020 RELEASE UPDATE 12.2.0.1.200414 (30869447)
30138470;Database Oct 2019 Release Update : 12.2.0.1.191015 (30138470)
30093408;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:190208.0920) (30093408)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)
We also checked the following from both nodes before applying the patch manually:
ps -ef|grep pmon
ps -eaf | grep d.bin
crsctl check crs
crsctl check cluster -all
crsctl stat res -t
crsctl stat res -t -init
olsnodes -n -t
ps -eaf | grep [ grid home path ]
opatch lspatches ==> output from both nodes
crsctl query crs softwarepatch
crsctl query crs releasepatch
crsctl query crs activeversion -f
<GI_HOME>/bin/kfod op=patches
<GI_HOME>/bin/kfod op=patchlvl
Output
Node1[/home/oracrs]$ crsctl
check crs
CRS-4639: Could not contact
Oracle High Availability Services
Node1[/home/oracrs]$ ps -eaf |
grep d.bin
oracrs 17629576 67043474 1
17:12:17 pts/1 0:00 grep d.bin
Node1[/home/oracrs]$
ps -eaf | grep /u02/oracle/12.2.0.1/grid
root 14745660 25297074 0 Feb 22
- 9:14 /u02/oracle/12.2.0.1/grid/jdk/jre/bin/java -Xms128m -Xmx256m
oracle.rat.tfa.CommandLine Node1:checkTFAMain -port 5000 -tfaHome
/u02/oracle/12.2.0.1/grid/tfa/Node1/tfa_home
root 25297074 30277806 0 Feb 22
- 0:01 /u02/oracle/12.2.0.1/grid/perl/bin/perl
/u02/oracle/12.2.0.1/grid/tfa/Node1/tfa_home/bin/tfactl.pl -initstart
root 30277806 1 0 Feb 22 - 0:00
/bin/sh /u02/oracle/12.2.0.1/grid/tfa/Node1/tfa_home/bin/tfactl
-initstart
oracrs 57278650 67043474 1
17:13:27 pts/1 0:00 grep /u02/oracle/12.2.0.1/grid
Node1[/home/oracrs]$
We changed permissions and tried to apply the patch manually, but manual patching failed again with the same permission issue:
chown -R root:oinstall /u02/oracle/12.2.0.1/grid/jdk/jre/bin/java
chown -R root:oinstall /u02/oracle/12.2.0.1/grid/jdk/jre/lib/ppc64/jspawnhelper
chown -R root:oinstall /u02/oracle/12.2.0.1/grid/jdk/jre/lib
chown -R root:oinstall ppc64
chmod -R 777 ppc64
chown -R root:oinstall jdk
chmod -R 777 jdk
General method to apply a patch in a RAC environment
Oracle Support suggested running the below:
./cluutil -ckpt -oraclebase <GI_BASE> -writeckpt -name ROOTCRS_PREPATCH -state START
> rootcrs.sh -prepatch
> rootcrs.sh -unlock
> apply the patch manually
> rootcrs.sh -lock
> rootcrs.sh -postpatch
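Before resuming, the checkpoint state that this sequence relies on can also be queried. This is a hedged sketch based on the checkpoint name used above; verify the exact cluutil options for your GI version before relying on it:

# query the current state of the ROOTCRS_PREPATCH checkpoint (sketch)
<GI_HOME>/bin/cluutil -ckpt -oraclebase <GI_BASE> -chkckpt -name ROOTCRS_PREPATCH -status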
or
Oracle recommended applying the pending patches manually.
From the root user:
cd /u02/oracle/12.2.0.1/grid/crs/install/
./rootcrs.sh -prepatch
From the grid user:
$ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%OCW TRACKING BUG%
$ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%ACFS TRACKING BUG%
$ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%DB WLM TRACKING BUG%
$ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%DB RU TRACKING BUG%
$ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/%BUGNO%/%TOMCAT RU TRACKING BUG%
eg
cd /u02/oracle/12.2.0.1/grid/OPatch
export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
opatch napply -oh /u02/oracle/12.2.0.1/grid -local /staging/grid_path_apr2020/30920127/30886680
eanor[/home/oracrs]$ cd /staging/grid_path_apr2020/30920127/30886680
Node1[/staging/grid_path_apr2020/30920127/30886680]$
Node1[/staging/grid_path_apr2020/30920127/30886680]$
export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
Node1[/staging/grid_path_apr2020/30920127/30886680]$
pwd
/staging/grid_path_apr2020/30920127/30886680
Node1[/staging/grid_path_apr2020/30920127/30886680]$
id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node1[/staging/grid_path_apr2020/30920127/30886680]$
opatch apply
Oracle Interim Patch Installer
version 12.2.0.1.21
opatch apply -oh /u02/oracle/12.2.0.1/grid -local /patch_location/
opatch apply -oh /u02/oracle/12.2.0.1/grid -local /staging/grid_path_apr2020/30920127/30886680
opatch apply -oh /u02/oracle/12.2.0.1/grid -local /staging/grid_path_apr2020/30920127/30882603
opatch apply -oh /u02/oracle/12.2.0.1/grid -local /staging/grid_path_apr2020/30920127/30869447
opatch apply -oh /u02/oracle/12.2.0.1/grid -local /staging/grid_path_apr2020/30920127/26839277
opatch apply -oh /u02/oracle/12.2.0.1/grid -local /staging/grid_path_apr2020/30920127/30888810
From the root user:
cd /u02/oracle/12.2.0.1/grid/crs/install/
./rootcrs.sh -postpatch
The easiest way to apply a patch manually: go to the patch location, set the environment, and apply the patch as the grid user, as below.
Node1[/home/oracrs]$ cd
/staging/grid_path_apr2020/30920127/30886680
Node1[/staging/grid_path_apr2020/30920127/30886680]$
Node1[/staging/grid_path_apr2020/30920127/30886680]$
export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
Node1[/staging/grid_path_apr2020/30920127/30886680]$
pwd
/staging/grid_path_apr2020/30920127/30886680
Node1[/staging/grid_path_apr2020/30920127/30886680]$
id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node1[/staging/grid_path_apr2020/30920127/30886680]$
opatch apply
Oracle Interim Patch Installer
version 12.2.0.1.21
Copyright (c) 2020, Oracle
Corporation. All rights reserved.
Oracle Home :
/u02/oracle/12.2.0.1/grid
Central Inventory :
/u02/oracle/oraInventory
from :
/u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.21
OUI version : 12.2.0.1.4
Log file location :
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-17_08-06-03AM_1.log
Verifying environment and
performing prerequisite checks...
Prerequisite check
"CheckApplicable" failed.
The details are:
Patch 30886680:
DeleteAction : Destination File
"/u02/oracle/12.2.0.1/grid/jdk/jre/bin/java" is not
writeable.
'oracle.jdk, 1.8.0.3.91': Cannot
delete file '/u02/oracle/12.2.0.1/grid/jdk/jre/bin/java'
DeleteAction : Destination File
"/u02/oracle/12.2.0.1/grid/jdk/jre/lib/ppc64/jspawnhelper"
is not writeable.
'oracle.jdk, 1.8.0.3.91': Cannot
delete file
'/u02/oracle/12.2.0.1/grid/jdk/jre/lib/ppc64/jspawnhelper'
Copy Action: Destination File
"/u02/oracle/12.2.0.1/grid/jdk/jre/lib/ppc64/jspawnhelper"
is not writeable.
'oracle.jdk, 1.8.0.3.91': Cannot
copy file from 'jspawnhelper' to
'/u02/oracle/12.2.0.1/grid/jdk/jre/lib/ppc64/jspawnhelper'
Copy Action: Destination File
"/u02/oracle/12.2.0.1/grid/jdk/jre/bin/java" is not
writeable.
'oracle.jdk, 1.8.0.3.91': Cannot
copy file from 'java' to '/u02/oracle/12.2.0.1/grid/jdk/jre/bin/java'
UtilSession failed:
Prerequisite check
"CheckApplicable" failed.
Log file location:
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-17_08-06-03AM_1.log
OPatch failed with error code 73
Node1[/staging/grid_path_apr2020/30920127/30886680]$
When we applied the patch manually after running prepatch, it looked like the inventory was locked, so we checked permissions, renamed the lock, and applied again; it applied successfully.
Node1[/home/oracrs]$ cd /u02/oracle/12.2.0.1/grid/OPatch
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch apply -oh /u02/oracle/12.2.0.1/grid -local /staging/grid_path_apr2020/30920127/30886680
Oracle Interim Patch Installer version 12.2.0.1.21
Copyright (c) 2020, Oracle Corporation. All rights reserved.
Oracle Home : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
from : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.21
OUI version : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-16_19-02-15PM_1.log
Unable to lock Central Inventory. OPatch will attempt to re-lock.
Do you want to proceed? [y|n]
y
User Responded with: Y
OPatch will sleep for few seconds, before re-trying to get the lock...
It failed with the below error:
User Responded with: Y
OPatch will sleep for few
seconds, before re-trying to get the lock...
Unable to lock Central Inventory. OPatch will attempt to re-lock.
Do you want to proceed? [y|n]
y
User Responded with: Y
OPatch will sleep for few seconds, before re-trying to get the lock...
OPatchSession cannot load inventory for the given Oracle Home /u02/oracle/12.2.0.1/grid. Possible causes are:
No read or write permission to ORACLE_HOME/.patch_storage
Central Inventory is locked by another OUI instance
No read permission to Central Inventory
The lock file exists in ORACLE_HOME/.patch_storage
The Oracle Home does not exist in Central Inventory
UtilSession failed: Cannot lock inventory. Total retry time has exceeded 3600 seconds allowed.
Log file location: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-16_19-02-15PM_1.log
OPatch failed with error code 73
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$
Oracle Recommendation
Please check if there is a permission problem accessing the central inventory: not having WRITE permission on "<oraInventory>/locks" prevents creating writer.lock for locking the central inventory. You may use "ls -la <oraInventory>" to determine whether this condition is true for the user who runs the installer. If this is the case, then ensure the central inventory (i.e. oraInventory) has GROUP "WRITE" permission.
- Please check if there is a previously failed installation that left the write.lock in oraInventory/locks. If this is the case, then either rename the lock or remove the lock.
# chown oracrs:oinstall /u02/oracle/oraInventory/locks
The permissions should also be 'drwxr-x---'
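In numeric terms, 'drwxr-x---' corresponds to mode 750, so after the chown above the mode can be set explicitly (a minimal sketch, not a command from the original session):

# set the expected mode on the locks directory (750 = drwxr-x---)
chmod 750 /u02/oracle/oraInventory/locks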
We changed the ownership of the locks folder, renamed the lock file, and set permission 775.
[Node1:root:/home/root:] ls -la
/u02/oracle/oraInventory
total 48
drwxrwx--- 7 oracrs oinstall
256 May 16 18:15 .
drwxr-xr-x 9 root oinstall 256
Jun 11 2019 ..
drwxrwx--- 2 oracrs oinstall
256 Oct 08 2014 ContentsXML
drwxrwx--- 21 oracrs oinstall
4096 Jun 11 2019 backup
drwxr----- 2 oracle dba 256 May
16 18:14 locks
drwxrwx--- 2 oracrs oinstall
12288 May 16 12:08 logs
-rwxrwx--- 1 oracrs oinstall 59
Oct 08 2014 oraInst.loc
-rwxrwx--- 1 oracrs oinstall
1647 Oct 08 2014 orainstRoot.sh
drwxrwx--- 2 oracrs oinstall
256 Oct 08 2014 oui
[Node1:root:/home/root:]
[Node1:root:/home/root:] cd
/u02/oracle/oraInventory
[Node1:root:/u02/oracle/oraInventory:]
cd locks
[Node1:root:/u02/oracle/oraInventory/locks:]
ls -ltr
total 0
-rw-r----- 1 oracle dba 0 May
16 18:14 inventory.lock
[Node1:root:/u02/oracle/oraInventory/locks:]
Renamed the file inventory.lock:
----------------
/u02/oracle/oraInventory/locks:]
ls -ltr
total 0
-rw-r----- 1 oracle dba
0 May 16 18:14 inventory.lock_old
[Node1:root:/u02/oracle/oraInventory/locks:]
ls -ltr
We tried to apply the patch manually; the inventory locking was resolved, but it failed again with the permission issue, so we copied the jdk folder from the working node, untarred it on the trouble node, and applied again; this time it applied successfully.
Node1[/u02/oracle/12.2.0.1/grid/bin]$
id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node1[/u02/oracle/12.2.0.1/grid/bin]$
su
Node1[/u02/oracle/12.2.0.1/grid/bin]$
cd ..
Node1[/u02/oracle/12.2.0.1/grid]$
rdbms/install/
Node1[/u02/oracle/12.2.0.1/grid]$
Node1[/u02/oracle/12.2.0.1/grid]$
cd rdbms/install/
Node1[/u02/oracle/12.2.0.1/grid/rdbms/install]$
exit
[Node1:root:/u02/oracle/12.2.0.1/grid/crs/install:]
id
uid=0(root) gid=0(system)
groups=208(tivlogs)
[Node1:root:/u02/oracle/12.2.0.1/grid/crs/install:]
cd /u02/oracle/12.2.0.1/grid/rdbms/install
[Node1:root:/u02/oracle/12.2.0.1/grid/rdbms/install:]
./rootadd_rdbms.sh
[Node1:root:/u02/oracle/12.2.0.1/grid/rdbms/install:]
cd ../..
[Node1:root:/u02/oracle/12.2.0.1/grid:]
cd crs/install
[Node1:root:/u02/oracle/12.2.0.1/grid/crs/install:]
./rootcrs.sh -lock
Using configuration parameter
file: /u02/oracle/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can
be found at:
/u02/oracle/PPT/crsdata/Node1/crsconfig/crslock_Node1_2020-05-17_09-12-44AM.log
2020/05/17 09:13:00 CLSRSC-329:
Replacing Clusterware entries in file '/etc/inittab'
[Node1:root:/u02/oracle/12.2.0.1/grid/crs/install:]
cd ../..
[Node1:root:/u02/oracle/12.2.0.1/grid:]
cd bin
[Node1:root:/u02/oracle/12.2.0.1/grid/bin:]
id
uid=0(root) gid=0(system)
groups=208(tivlogs)
[Node1:root:/u02/oracle/12.2.0.1/grid/bin:]
crsctl start crs
ksh: crsctl: not found.
[Node1:root:/u02/oracle/12.2.0.1/grid/bin:]
./crsctl start crs
CRS-6706: Oracle Clusterware
Release patch level ('3773405221') does not match Software patch
level ('1833732982'). Oracle Clusterware cannot be started.
CRS-4000: Command Start failed,
or completed with errors.
[Node1:root:/u02/oracle/12.2.0.1/grid/bin:]
cd ..
[Node1:root:/u02/oracle/12.2.0.1/grid:]
cd crs/install
[Node1:root:/u02/oracle/12.2.0.1/grid/crs/install:]
./rootcrs.sh -unlock
Using configuration parameter
file: /u02/oracle/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can
be found at:
/u02/oracle/PPT/crsdata/Node1/crsconfig/crsunlock_Node1_2020-05-17_09-15-33AM.log
2020/05/17 09:15:37
CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA)
Collector.
2020/05/17 09:15:53
CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA)
Collector.
2020/05/17 09:15:59 CLSRSC-347:
Successfully unlock /u02/oracle/12.2.0.1/grid
[Node1:root:/u02/oracle/12.2.0.1/grid/crs/install:]
su - oracrs
Node1[/home/oracrs]$ c
Node1[/home/oracrs]$ . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base remains
unchanged with value /u02/oracle/PPT
Node1[/home/oracrs]$ cd $ORACLE_HOME
Node1[/u02/oracle/12.2.0.1/grid]$
cd export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
Node1[/u02/oracle/12.2.0.1/grid]$
Node1[/u02/oracle/12.2.0.1/grid]$
export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
Node1[/u02/oracle/12.2.0.1/grid]$
cd /staging/grid_path_apr2020/30920127/30886680
Node1[/staging/grid_path_apr2020/30920127/30886680]$
pwd
/staging/grid_path_apr2020/30920127/30886680
Node1[/staging/grid_path_apr2020/30920127/30886680]$
id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
We applied the two pending patches successfully as below.
Node1[/staging/grid_path_apr2020/30920127/30886680]$
opatch apply
Oracle Interim Patch Installer
version 12.2.0.1.21
Copyright (c) 2020, Oracle
Corporation. All rights reserved.
Oracle Home :
/u02/oracle/12.2.0.1/grid
Central Inventory :
/u02/oracle/oraInventory
from :
/u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.21
OUI version : 12.2.0.1.4
Log file location :
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-17_09-19-56AM_1.log
Verifying environment and
performing prerequisite checks...
OPatch continues with these
patches: 30886680
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle
instances running out of this ORACLE_HOME on the local system.
(Oracle Home =
'/u02/oracle/12.2.0.1/grid')
Is the local system ready for
patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch
'30886680' to OH '/u02/oracle/12.2.0.1/grid'
ApplySession: Optional
component(s) [ oracle.ctx, 12.2.0.1.0 ] , [ oracle.sdo, 12.2.0.1.0 ]
, [ oracle.swd.oui, 12.2.0.1.0 ] , [ oracle.rdbms.dv, 12.2.0.1.0 ] ,
[ oracle.oid.client, 12.2.0.1.0 ] , [ oracle.rdbms.drdaas, 12.2.0.1.0
] , [ oracle.oraolap, 12.2.0.1.0 ] , [ oracle.xdk, 12.2.0.1.0 ] , [
oracle.rdbms.lbac, 12.2.0.1.0 ] , [ oracle.ons.daemon, 12.2.0.1.0 ] ,
[ oracle.network.cman, 12.2.0.1.0 ] , [ oracle.rdbms.oci, 12.2.0.1.0
] , [ oracle.precomp.lang, 12.2.0.1.0 ] , [ oracle.precomp.common,
12.2.0.1.0 ] not present in the Oracle Home or a higher version is
found.
Patching component
oracle.rdbms.rsf, 12.2.0.1.0...
Patching component
oracle.rdbms, 12.2.0.1.0...
Patching component
oracle.rdbms.util, 12.2.0.1.0...
Patching component
oracle.network.rsf, 12.2.0.1.0...
Patching component
oracle.ctx.rsf, 12.2.0.1.0…
Patch 30886680 successfully
applied.
Sub-set patch [30138470] has
become inactive due to the application of a super-set patch
[30886680].
Please refer to Doc ID
2161861.1 for any possible further required actions.
OPatch Session completed with
warnings.
Log file location:
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-17_09-19-56AM_1.log
OPatch completed with warnings.
Node1[/staging/grid_path_apr2020/30920127/30886680]$
id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node1[/staging/grid_path_apr2020/30920127/30886680]$
cd $ORACLE_HOME/OPatch
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$
opatch lspatches
30886680;Database
Apr 2020 Release Update : 12.2.0.1.200414 (30886680)
30882603;OCW
APR 2020 RELEASE UPDATE 12.2.0.1.200414 (30882603)
30869447;ACFS
APR 2020 RELEASE UPDATE 12.2.0.1.200414 (30869447)
30093408;TOMCAT
RELEASE UPDATE 12.2.0.1.0(ID:190208.0920) (30093408)
26839277;DBWLM
RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)
OPatch
succeeded.
Now, after comparing and checking, we applied the second missing patch.
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$
cd /staging/grid_path_apr2020/30920127
Node1[/staging/grid_path_apr2020/30920127]$
ls -ltr
total 264
-rwxrwxr-x 1 oracrs
oinstall 121078 Apr 12 22:34 README.html
-rwxrwxr-x 1 oracrs
oinstall 5828 Apr 12 23:11 bundle.xml
drwxrwxr-x 2 oracrs
oinstall 4096 Apr 13 04:25 automation
drwxrwxr-x 5 oracrs
oinstall 256 Apr 13 04:25 30869447
drwxrwxr-x 4 oracrs
oinstall 256 Apr 13 04:25 26839277
drwxrwxr-x 5 oracrs
oinstall 256 Apr 13 04:25 30882603
drwxrwxr-x 5 oracrs
oinstall 256 Apr 13 04:32 30886680
-rwxrwxr-x 1 oracrs
oinstall 0 Apr 13 04:39 README.txt
drwxrwxr-x 4 oracrs
oinstall 256 Apr 13 04:39 30888810
Node1[/staging/grid_path_apr2020/30920127]$
cd 30888810
Node1[/staging/grid_path_apr2020/30920127/30888810]$
opatch apply
Oracle Interim Patch Installer
version 12.2.0.1.21
Copyright (c) 2020, Oracle
Corporation. All rights reserved.
Oracle Home :
/u02/oracle/12.2.0.1/grid
Central Inventory :
/u02/oracle/oraInventory
from :
/u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.21
OUI version : 12.2.0.1.4
Log file location :
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-17_09-31-48AM_1.log
Verifying environment and
performing prerequisite checks...
OPatch continues with these
patches: 30888810
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle
instances running out of this ORACLE_HOME on the local system.
(Oracle Home =
'/u02/oracle/12.2.0.1/grid')
Is the local system ready for
patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch
'30888810' to OH '/u02/oracle/12.2.0.1/grid'
Patching component
oracle.tomcat.crs, 12.2.0.1.0...
Patch 30888810 successfully
applied.
Sub-set patch [30093408] has
become inactive due to the application of a super-set patch
[30888810].
Please refer to Doc ID
2161861.1 for any possible further required actions.
Log file location:
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-17_09-31-48AM_1.log
OPatch succeeded.
Node1[/staging/grid_path_apr2020/30920127/30888810]$
After completing the April 2020 patch successfully, we applied it manually on the database home as below.
Node1[/home/sam44ppt]$ export
PATH=$PATH:/u01/oracle/sam44ppt/product/12.2.0.1/OPatch
Node1[/home/sam44ppt]$ cd
/staging/grid_path_apr2020/30920127
Node1[/staging/grid_path_apr2020/30920127]$
ls -ltr
total 264
-rwxrwxr-x 1 oracrs
oinstall 121078 Apr 12 22:34 README.html
-rwxrwxr-x 1 oracrs
oinstall 5828 Apr 12 23:11 bundle.xml
drwxrwxr-x 2 oracrs
oinstall 4096 Apr 13 04:25 automation
drwxrwxr-x 5 oracrs
oinstall 256 Apr 13 04:25 30869447
drwxrwxr-x 4 oracrs
oinstall 256 Apr 13 04:25 26839277
drwxrwxr-x 5 oracrs
oinstall 256 Apr 13 04:25 30882603
drwxrwxr-x 5 oracrs
oinstall 256 Apr 13 04:32 30886680
-rwxrwxr-x 1 oracrs
oinstall 0 Apr 13 04:39 README.txt
drwxrwxr-x 4 oracrs
oinstall 256 Apr 13 04:39 30888810
Node1[/staging/grid_path_apr2020/30920127]$
id
uid=257(sam44ppt)
gid=207(oinstall)
groups=1(staff),203(dba),206(gprd),220(controlm),212(asmdba)
Node1[/staging/grid_path_apr2020/30920127]$
cd 30886680
Node1[/staging/grid_path_apr2020/30920127/30886680]$
ls -ltr
total 160
drwxrwxr-x 21 oracrs
oinstall 4096 Apr 13 04:32 files
drwxrwxr-x 3 oracrs
oinstall 256 Apr 13 04:32 etc
drwxrwxr-x 3 oracrs
oinstall 256 Apr 13 04:32 custom
-rwxrwxr-x 1 oracrs
oinstall 21 Apr 13 04:32 README.txt
-rwxrwxr-x 1 oracrs
oinstall 73035 Apr 13 04:32 README.html
Node1[/staging/grid_path_apr2020/30920127/30886680]$
id
uid=257(sam44ppt)
gid=207(oinstall)
groups=1(staff),203(dba),206(gprd),220(controlm),212(asmdba)
Node1[/staging/grid_path_apr2020/30920127/30886680]$
opatch apply
Oracle Interim Patch Installer
version 12.2.0.1.21
Copyright (c) 2020, Oracle
Corporation. All rights reserved.
Oracle Home :
/u01/oracle/sam44ppt/product/12.2.0.1
Central Inventory :
/u02/oracle/oraInventory
from :
/u01/oracle/sam44ppt/product/12.2.0.1/oraInst.loc
OPatch version : 12.2.0.1.21
OUI version : 12.2.0.1.4
Log file location :
/u01/oracle/sam44ppt/product/12.2.0.1/cfgtoollogs/opatch/opatch2020-05-17_09-40-57AM_1.log
Verifying environment and
performing prerequisite checks...
OPatch continues with these
patches: 30886680
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle
instances running out of this ORACLE_HOME on the local system.
(Oracle Home =
'/u01/oracle/sam44ppt/product/12.2.0.1')
Is the local system ready for
patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch
'30886680' to OH '/u01/oracle/sam44ppt/product/12.2.0.1'
ApplySession: Optional
component(s) [ oracle.swd.oui, 12.2.0.1.0 ] , [ oracle.oid.client,
12.2.0.1.0 ] , [ oracle.has.crs, 12.2.0.1.0 ] , [
oracle.rdbms.drdaas, 12.2.0.1.0 ] , [ oracle.ons.daemon, 12.2.0.1.0 ]
, [ oracle.network.cman, 12.2.0.1.0 ] not present in the Oracle Home
or a higher version is found.
Patching component
oracle.rdbms.rsf, 12.2.0.1.0...
Method 3 -
Take a backup of the existing JDK folder from the trouble node as a precaution, take a backup from the working node, transfer the jdk tar file to the trouble node, and restore it. Then relink the grid home and verify the relink log; there should not be any errors in the log. Then apply the patch on the Grid home again manually.
On the trouble node:
cd /u02/oracle/12.2.0.1/grid/
tar -cvf jdk.tar jdk
On the working node:
Node2[/u02/oracle/12.2.0.1/grid/jdk]$ scp jre.tar eibm994@Node1:/staging/
scp jdk.tar eibm994@Node2:/oswatcher/
On the trouble node, untar the JDK folder and relink as per the below doc:
[Node1:root:/staging:]
[Node1:root:/staging:] cp
jdk.tar /u02/oracle/12.2.0.1/grid/
[Node1:root:/u02/oracle/12.2.0.1/grid:]
id
uid=0(root) gid=0(system)
groups=208(tivlogs)
[Node1:root:/u02/oracle/12.2.0.1/grid:]
mv jdk jdk_oldmay17
[Node1:root:/u02/oracle/12.2.0.1/grid:]
ls -ltr jdk_oldmay17
[Node1:root:/u02/oracle/12.2.0.1/grid:]
tar -xvf jdk.tar
[Node1:root:/u02/oracle/12.2.0.1/grid:]
id
uid=0(root) gid=0(system)
groups=208(tivlogs)
Untar the JDK folder in the grid home as the root user on the trouble node:
[Node1:root:/u02/oracle/12.2.0.1/grid:]
tar -xvf jdk.tar
x jdk
x jdk/.manifest, 80511 bytes,
158 media blocks.
x jdk/bin
x jdk/bin/ControlPanel, 6262
bytes, 13 media blocks.
x jdk/bin/appletviewer, 185551
bytes, 363 media blocks.
x jdk/bin/extcheck, 185567
bytes, 363 media blocks.
x jdk/bin/idlj, 185575 bytes,
363 media blocks.
x jdk/bin/jar, 185559 bytes,
363 media blocks.
x jdk/bin/jarsigner, 185575
bytes, 363 media blocks.
x jdk/bin/java, 185349 bytes,
363 media blocks.
x jdk/bin/javac, 185567 bytes,
363 media blocks.
Then relink the binaries after the JDK restoration on the trouble node.
B) How To Relink The Oracle Grid Infrastructure RAC/Cluster Installation.
In order to relink the Oracle Grid Infrastructure RAC/Cluster Installation, please follow the next steps after stopping all the services running:
1) As root OS user, please unlock the Grid Infrastructure Oracle Home (on every node) as follows:
# cd < Grid Infrastructure Oracle Home >/crs/install
# perl rootcrs.pl -unlock
/u02/oracle/12.2.0.1/grid/crs/install
./rootcrs.sh -unlock
Note: On 12.1/12.2/18c releases, rootcrs.sh can be used instead of rootcrs.pl. Command as "rootcrs.sh -unlock".
2) As the Oracle Grid Infrastructure owner (on every node) relink the Grid Infrastructure Oracle Home as follows:
$ export ORACLE_HOME=< Grid Infrastructure Oracle Home >
$ < Grid Infrastructure Oracle Home >/bin/relink all
export ORACLE_HOME=/u02/oracle/12.2.0.1/grid
cd /u02/oracle/12.2.0.1/grid/bin
./relink all
Please check the relink log for errors, as below:
Node2[/u02/oracle/12.2.0.1/grid/bin]$
Node2[/u02/oracle/12.2.0.1/grid/bin]$
cat /u02/oracle/12.2.0.1/grid/install/relink.log|grep -i ERROR
ld: 0711-224 WARNING: Duplicate symbol: .kgamen_error_name
ld: 0711-224 WARNING: Duplicate symbol: .qjsngThrowPathError
ld: 0711-224 WARNING: Duplicate symbol: .OCIPGetErrorMessage
Node2[/u02/oracle/12.2.0.1/grid/bin]$ cat /u02/oracle/12.2.0.1/grid/install/relink.log|grep -i fail
ar: 0707-117 The fopen system call failed on file /u02/oracle/12.2.0.1/grid/lib/shr.o.
ld: 0711-224 WARNING: Duplicate symbol: .skgfqio_failure_check
ld: 0711-224 WARNING: Duplicate symbol: .skgfqio_failure_check
ld: 0711-224 WARNING: Duplicate symbol: .skgfqio_failure_check
3) As root OS user again, please execute the next commands (on every node):
# cd < Grid Infrastructure Oracle Home >/rdbms/install/
# ./rootadd_rdbms.sh
cd /u02/oracle/12.2.0.1/grid/rdbms/install/
./rootadd_rdbms.sh
# cd < Grid Infrastructure Oracle Home >/crs/install
# perl rootcrs.pl -patch
./rootcrs.sh -patch
./rootcrs.sh -lock
On 12.1, use "rootcrs.sh -patch". On 12.2/18c, use "rootcrs.sh -lock" then "crsctl start crs".
Note: CRS services (CRS, CSS, ASM instances, diskgroups, listeners, DB instances, etc.) will automatically start.
4) Please review the relink.log file located at the following directory (to confirm or discard any error):
$ < Grid Infrastructure Oracle Home >/install/relink.log
eg
[Node2:root:/u02/oracle/12.2.0.1/grid/rdbms/install:]
id
uid=0(root) gid=0(system)
groups=208(tivlogs)
[Node2:root:/u02/oracle/12.2.0.1/grid/rdbms/install:]
./rootadd_rdbms.sh
[Node2:root:/u02/oracle/12.2.0.1/grid/rdbms/install:]
cd /u02/oracle/12.2.0.1/grid/crs/install
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
./rootcrs.sh -lock
Using configuration parameter
file: /u02/oracle/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can
be found at:
/u02/oracle/PPT/crsdata/Node2/crsconfig/crslock_Node2_2020-05-17_10-51-39AM.log
2020/05/17 10:51:53 CLSRSC-329:
Replacing Clusterware entries in file '/etc/inittab'
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
[Node2:root:/u02/oracle/12.2.0.1/grid/crs:]
cd ..
[Node2:root:/u02/oracle/12.2.0.1/grid:]
cd bin
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
ps -ef|grep pmon
root 10551546 20578766 1
10:52:45 pts/0 0:00 grep pmon
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
./crsctl start crs
CRS-4123: Oracle High
Availability Services has been started.
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
ps -ef|grep pmon
oracrs 5374016 1 0
10:54:46 - 0:00 apx_pmon_+APX2
oracrs 38141970 1 0
10:54:28 - 0:00 asm_pmon_+ASM2
root 43450554 20578766 1
10:55:14 pts/0 0:00 grep pmon
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
When we ran the unlock command, clusterware was stopped automatically.
We then applied the patch manually, and it applied successfully.
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
id
uid=0(root) gid=0(system)
groups=208(tivlogs)
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
./rootcrs.sh -unlock
Using configuration parameter
file: /u02/oracle/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can
be found at:
/u02/oracle/PPT/crsdata/Node2/crsconfig/crsunlock_Node2_2020-05-17_10-57-34AM.log
2020/05/17 10:58:12
CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA)
Collector.
2020/05/17
10:58:28 CLSRSC-4013: Successfully shut down Oracle Trace File
Analyzer (TFA) Collector.
2020/05/17
10:58:32 CLSRSC-347: Successfully unlock /u02/oracle/12.2.0.1/grid
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
cd /staging/grid_path_apr2020/30920127
Now applying patch manually
[Node2:root:/staging/grid_path_apr2020/30920127:]
ls -ltr
total 264
-rwxrwxr-x 1 oracrs
oinstall 121078 Apr 12 22:34 README.html
-rwxrwxr-x 1 oracrs
oinstall 5828 Apr 12 23:11 bundle.xml
drwxrwxr-x 2 oracrs
oinstall 4096 Apr 13 04:25 automation
drwxrwxr-x 5 oracrs
oinstall 256 Apr 13 04:25 30869447
drwxrwxr-x 4 oracrs
oinstall 256 Apr 13 04:25 26839277
drwxrwxr-x 5 oracrs
oinstall 256 Apr 13 04:25 30882603
drwxrwxr-x 5 oracrs
oinstall 256 Apr 13 04:32 30886680
-rwxrwxr-x 1 oracrs
oinstall 0 Apr 13 04:39 README.txt
drwxrwxr-x 4 oracrs
oinstall 256 Apr 13 04:39 30888810
[Node2:root:/staging/grid_path_apr2020/30920127:]
su - oracrs
Node2[/home/oracrs]$ cd /staging/grid_path_apr2020/30920127
Node2[/staging/grid_path_apr2020/30920127]$
. oraenv
ORACLE_SID = [+ASM2] ?
The Oracle base remains
unchanged with value /u02/oracle/PPT
Node2[/staging/grid_path_apr2020/30920127]$
export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
Node2[/staging/grid_path_apr2020/30920127]$
id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node2[/staging/grid_path_apr2020/30920127]$
cd 30886680
Node1[/staging/grid_path_apr2020/30920127/30886680]$
opatch apply
Oracle Interim Patch Installer
version 12.2.0.1.21
Copyright (c) 2020, Oracle
Corporation. All rights reserved.
Oracle Home :
/u02/oracle/12.2.0.1/grid
Central Inventory :
/u02/oracle/oraInventory
from :
/u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.21
OUI version : 12.2.0.1.4
Log file location :
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-17_09-19-56AM_1.log
Verifying environment and
performing prerequisite checks...
OPatch continues with these
patches: 30886680
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle
instances running out of this ORACLE_HOME on the local system.
(Oracle Home =
'/u02/oracle/12.2.0.1/grid')
Is the local system ready for
patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch
'30886680' to OH '/u02/oracle/12.2.0.1/grid'
ApplySession: Optional
component(s) [ oracle.ctx, 12.2.0.1.0 ] , [ oracle.sdo, 12.2.0.1.0 ]
, [ oracle.swd.oui, 12.2.0.1.0 ] , [ oracle.rdbms.dv, 12.2.0.1.0 ] ,
[ oracle.oid.client, 12.2.0.1.0 ] , [ oracle.rdbms.drdaas, 12.2.0.1.0
] , [ oracle.oraolap, 12.2.0.1.0 ] , [ oracle.xdk, 12.2.0.1.0 ] , [
oracle.rdbms.lbac, 12.2.0.1.0 ] , [ oracle.ons.daemon, 12.2.0.1.0 ] ,
[ oracle.network.cman, 12.2.0.1.0 ] , [ oracle.rdbms.oci, 12.2.0.1.0
] , [ oracle.precomp.lang, 12.2.0.1.0 ] , [ oracle.precomp.common,
12.2.0.1.0 ] not present in the Oracle Home or a higher version is
found.
Patching component
oracle.rdbms.rsf, 12.2.0.1.0...
Patching component
oracle.rdbms, 12.2.0.1.0...
Patching component
oracle.rdbms.util, 12.2.0.1.0...
Patching component
oracle.network.rsf, 12.2.0.1.0...
Patching component
oracle.ctx.rsf, 12.2.0.1.0…
Patch 30886680 successfully
applied.
Sub-set patch [30138470] has
become inactive due to the application of a super-set patch
[30886680].
Please refer to Doc ID
2161861.1 for any possible further required actions.
OPatch Session completed with
warnings.
Log file location:
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-17_09-19-56AM_1.log
OPatch completed with
warnings
After that, we can proceed with the pending Grid patching and subsequently the Oracle home patching, either manually or with opatchauto.
Scenario 3) We performed the pre-checks and amendments per the Oracle Support readme for patching: checked free space on the temp and home file systems, updated the OPatch version, checked for patch conflicts, and ran the opatchauto analyze command. opatchauto failed with a permission issue on jdk/jre/bin/java. We rectified the permission issue and resumed the opatchauto operation, but it failed. We again applied the patch manually, but that also failed, and after multiple JDK copies and relinks the Grid binaries became corrupted as well. So the last option was to restore the Grid binaries from the backup taken as part of the pre-requisites.
For verification
Log locations
prepatch log location:
<ORACLE_BASE>/crsdata/<HOSTNAME>/crsconfig/
relink log location:
< Grid Infrastructure Oracle Home >/install/relink.log
Check that the oracle binary was created in the below location; if it is not available, it means the grid binaries are corrupted:
cd /GRID_HOME/bin/
ls -lrt oracle
patch log location:
<GI_HOME>/cfgtoollogs/opatch
To check for binary corruption
Check that the grid home "oracle" binary is identical between the nodes (a quick comparison sketch follows below).
Provide the below from both the nodes:
cd /GRID_HOME/bin/
ls -lrt oracle
As informed earlier, provide the below output from the working node:
crsctl query crs activeversion -f
crsctl query crs softwareversion <node 1 hostname>
crsctl query crs softwareversion Node2
Provide the ASM alert log.
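One way to compare the binary across nodes is a checksum loop. A minimal sketch, not from the original session; it assumes passwordless ssh and the grid home path from this article:

# compare size and checksum of the oracle binary on both nodes
for h in Node1 Node2; do
  echo "== $h =="
  ssh $h "cksum /u02/oracle/12.2.0.1/grid/bin/oracle"
done

If the checksums differ, or the file is missing on one node (as in this scenario), the home on that node is suspect.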
eg
From working node
-----------------
Node1[/u02/oracle/12.2.0.1/grid]$ cd bin
Node1[/u02/oracle/12.2.0.1/grid/bin]$ ls -lrt oracle
-rwsr-s--x 1 oracrs oinstall 583834740 May 17 09:28 oracle
Node1[/u02/oracle/12.2.0.1/grid/bin]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [1833732982].
Node1[/u02/oracle/12.2.0.1/grid/bin]$ crsctl query crs softwareversion Node1
Oracle Clusterware version on node [Node1] is [12.2.0.1.0]
Node1[/u02/oracle/12.2.0.1/grid/bin]$ crsctl query crs softwareversion Node2
Oracle Clusterware version on node [Node2] is [12.2.0.1.0]
------------------Node2---------issue node------------------------
Node2[/u02/oracle/12.2.0.1/grid/bin]$ ls -lrt oracle
ls: 0653-341 The file oracle does not exist.
Node2[/u02/oracle/12.2.0.1/grid/bin]$ id
uid=350(oracrs) gid=207(oinstall) groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node2[/u02/oracle/12.2.0.1/grid/bin]$
Since a few of the GI home binaries are missing on node 2, we recommend you restore the GI home from the backup and perform the manual patching.
We copied the grid binaries from the backup, applied the patch manually on the Grid home, and used opatchauto for the Database home.
1) Take a backup of the existing grid home if it was not taken as part of the pre-requisites:
cd /u02/oracle/12.2.0.1/grid/
tar -cvf /staging/grid_home_database.tar .
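Before deleting anything, it is worth confirming the archive is readable (a minimal check, not part of the original steps; tar -tvf only lists the contents without extracting):

# list the first few entries of the backup to verify it is a valid archive
tar -tvf /staging/grid_home_database.tar | head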
2) Go to the Grid home and delete all files and folders:
cd /u02/oracle/12.2.0.1/grid/
rm -rf *
ls -ltr
3) Restore the grid home binaries from the backup.
The backup is located in the /staging folder. After deleting all content (files and folders) from the grid home, copy the tar file into it and extract it as below, from the root user:
cd /staging
cp /staging/grid_home_database.tar /u02/oracle/12.2.0.1/grid/
tar -xvf grid_home_database.tar
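For reference, the whole restore sequence as one hedged sketch (run as root on the failed node; the backup tar name and paths are the ones used in this environment):

cd /u02/oracle/12.2.0.1/grid/
# empty the home, including hidden directories such as .patch_storage
rm -rf * .opatchauto_storage .patch_storage
# bring the pre-patch backup in and unpack it in place
cp /staging/grid_home_database.tar .
tar -xvf grid_home_database.tar
# optionally remove the tar afterwards to reclaim space in the home
rm grid_home_database.tar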
Node2[/u02/oracle/12.2.0.1/grid]$
id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node2[/u02/oracle/12.2.0.1/grid]$
exit
[Node2:root:/home/root:] cd
/staging
[Node2:root:/staging:] cp grid_home_database.tar /u02/oracle/12.2.0.1/grid/
[Node2:root:/staging:] cd
/u02/oracle/12.2.0.1/grid/
[Node2:root:/u02/oracle/12.2.0.1/grid:]
ls -ltr
-rw-r----- 1 oracrs
oinstall 500 Feb 06 2013 welcome.html
-rw-r----- 1 oracrs
oinstall 59 Oct 08 2014 oraInst.loc
-rw-r--r-- 1 oracrs
oinstall 852 Sep 02 2015 env.ora
-rwxr-x--- 1 oracrs
oinstall 628 Sep 05 2015 runcluvfy.sh
-rwxr-x--- 1 oracrs
oinstall 5395 Jul 21 2016 gridSetup.sh
drwxr-xr-x 3 oracrs
oinstall 256 Apr 02 2017 slax
drwxr-xr-x 4 oracrs
oinstall 256 Apr 02 2017 scheduler
-rw-r----- 1 oracrs
oinstall 10 Apr 02 2017 root.sh.old.1
drwxr-xr-x 3 oracrs
oinstall 256 Apr 02 2017 relnotes
drwxr-xr-x 5 oracrs
oinstall 256 Apr 02 2017 qos
drwxr-xr-x 5 root
oinstall 256 Apr 02 2017 perl
drwxr-xr-x 4 oracrs
oinstall 256 Apr 02 2017 owm
drwxr-xr-x 3 oracrs
oinstall 256 Apr 02 2017 oss
drwxr-xr-x 3 oracrs
oinstall 256 Apr 02 2017 ordim
drwxr-xr-x 7 oracrs
oinstall 256 Apr 02 2017 ord
drwxr-xr-x 4 oracrs
oinstall 256 Apr 02 2017 oracore
drwxr-xr-x 5 oracrs
oinstall 256 Apr 02 2017 md
drwxr-xr-x 8 oracrs
oinstall 256 Apr 02 2017 javavm
drwxr-xr-x 3 oracrs
oinstall 256 Apr 02 2017 hs
drwxr-xr-x 5 oracrs
oinstall 256 Apr 02 2017 has
drwxr-xr-x 3 oracrs
oinstall 256 Apr 02 2017 diagnostics
drwxr-xr-x 3 oracrs
oinstall 256 Apr 02 2017 demo
drwxr-xr-x 2 root
oinstall 20480 May 20 15:41 lib
drwxr-xr-x 2 root
oinstall 20480 May 20 15:51 bin
drwx------ 12 oracrs
oinstall 4096 May 21 15:42 .patch_storage
Deleted all files from the grid home as below, from the root user:
[Node2:root:/u02/oracle/12.2.0.1/grid:]
rm -rf *
[Node2:root:/u02/oracle/12.2.0.1/grid:]
df -g .
Filesystem GB blocks
Free %Used Iused %Iused Mounted on
/dev/sampptu02lv 160.00
72.93 55% 247865 2% /u02
[Node2:root:/u02/oracle/12.2.0.1/grid:]
cp /staging/grid_home_database.tar /u02/oracle/12.2.0.1/grid/
[Node2:root:/u02/oracle/12.2.0.1/grid:]
[Node2:root:/u02/oracle/12.2.0.1/grid:]
ls -ltr
total 70763952
drwxr-x--- 3 oracrs
oinstall 256 Feb 15 11:29 .opatchauto_storage
drwx------ 12 oracrs
oinstall 4096 May 21 15:42 .patch_storage
-rw-r----- 1 root system
36230666240 May 22 12:48 grid_home_database.tar
[Node2:root:/u02/oracle/12.2.0.1/grid:]
rm .opatchauto_storage .patch_storage
rm: 0653-603 Cannot remove
directory .opatchauto_storage.
rm: 0653-603 Cannot remove
directory .patch_storage.
[Node2:root:/u02/oracle/12.2.0.1/grid:]
[Node2:root:/u02/oracle/12.2.0.1/grid:]
rm -rf .opatchauto_storage
[Node2:root:/u02/oracle/12.2.0.1/grid:]
rm -rf .patch_storage
[Node2:root:/u02/oracle/12.2.0.1/grid:]
ls -ltr
total 70763944
-rw-r----- 1 root system
36230666240 May 22 12:48 grid_home_database.tar
[Node2:root:/u02/oracle/12.2.0.1/grid:]
tar -xvf grid_home_database.tar
x .
x ./.opatchauto_storage
x
./.opatchauto_storage/system_patches
x
./.opatchauto_storage/system_patches/30116802
x
./.opatchauto_storage/system_patches/30116802/bundle.xml, 5828 bytes,
12 media blocks.
x
./.opatchauto_storage/system_patches/systemPatchInformationStore.xml,
628 bytes, 2 media blocks.
x ./.patch_storage
x
./.patch_storage/26839277_Sep_27_2017_07_42_01
x
./.patch_storage/26839277_Sep_27_2017_07_42_01/files
x
./.patch_storage/26839277_Sep_27_2017_07_42_01/files/qos
x
./.patch_storage/26839277_Sep_27_2017_07_42_01/files/qos/jlib
x
./.patch_storage/26839277_Sep_27_2017_07_42_01/files/qos/jlib/dbwlm.war,
2335249 bytes, 4562 media blocks.
x
./.patch_storage/26839277_Sep_27_2017_07_42_01/original_patch
x
./.patch_storage/26839277_Sep_27_2017_07_42_01/original_patch/README.txt,
6250 bytes, 13 media blocks.
x
./.patch_storage/26839277_Sep_27_2017_07_42_01/origi
Now the Grid home has been restored. Follow the steps below to apply the patches on the Grid home manually:
1) As the root OS user, unlock the Grid Infrastructure Oracle Home as follows:
# cd <Grid Infrastructure Oracle Home>/crs/install
# rootcrs.sh -unlock
Note: On 12.1/12.2/18c releases, rootcrs.sh can be used instead of rootcrs.pl, i.e. "rootcrs.sh -unlock".
2) As the grid owner:
opatch apply -oh /u02/oracle/12.2.0.1/grid -local /patch_location/
3) As the root user:
<GI_HOME>/rdbms/install/rootadd_rdbms.sh
4) As the root user:
# cd <Grid Infrastructure Oracle Home>/crs/install
# rootcrs.sh -lock
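The four steps gathered into one hedged sketch (GI home, grid owner, and patch location are the ones used in this environment; run from root):

GRID_HOME=/u02/oracle/12.2.0.1/grid
cd $GRID_HOME/crs/install && ./rootcrs.sh -unlock    # 1) unlock the GI home (root)
su - oracrs -c "$GRID_HOME/OPatch/opatch apply -oh $GRID_HOME -local /patch_location"    # 2) apply (grid owner)
$GRID_HOME/rdbms/install/rootadd_rdbms.sh            # 3) root
cd $GRID_HOME/crs/install && ./rootcrs.sh -lock      # 4) relock (root)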
eg
From the root user:
[Node2:root:/u02/oracle/12.2.0.1/grid/rdbms/install:] ./rootadd_rdbms.sh
[Node2:root:/u02/oracle/12.2.0.1/grid/rdbms/install:]
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:] ./rootcrs.sh -lock
Using configuration parameter file: /u02/oracle/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u02/oracle/PPT/crsdata/Node2/crsconfig/crslock_Node2_2020-05-22_03-29-10PM.log
2020/05/22 15:29:24 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
The patches applied between the nodes should be identical.
Sequence in which the April patches are to be applied (see the loop sketch after this list):
30886680
30882603
30869447
26839277
30888810
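A loop sketch of that sequence, run as the grid owner (the interactive runs we actually performed follow below; -silent merely suppresses the y/n prompts):

for p in 30886680 30882603 30869447 26839277 30888810
do
  # apply each patch locally to the grid home, in the order listed above
  $ORACLE_HOME/OPatch/opatch apply -silent -oh /u02/oracle/12.2.0.1/grid \
      -local /staging/grid_path_apr2020/30920127/$p
done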
Node2[/home/oracrs]$ cd
/staging/grid_path_apr2020/30920127/30886680
Node2[/staging/grid_path_apr2020/30920127/30886680]$
id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node2[/staging/grid_path_apr2020/30920127/30886680]$
Node2[/staging/grid_path_apr2020/30920127/30886680]$
cd $ORACLE_HOME/OPatch
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
opatch apply -oh /u02/oracle/12.2.0.1/grid -local
/staging/grid_path_apr2020/30920127/30886680
Oracle Interim Patch Installer
version 12.2.0.1.21
Copyright (c) 2020, Oracle
Corporation. All rights reserved.
Oracle Home :
/u02/oracle/12.2.0.1/grid
Central Inventory :
/u02/oracle/oraInventory
from :
/u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.21
OUI version : 12.2.0.1.4
Log file location :
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-22_13-35-18PM_1.log
Verifying environment and
performing prerequisite checks...
OPatch continues with these
patches: 30886680
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle
instances running out of this ORACLE_HOME on the local system.
(Oracle Home =
'/u02/oracle/12.2.0.1/grid')
Is the local system ready for
patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch
'30886680' to OH '/u02/oracle/12.2.0.1/grid'
ApplySession: Optional
component(s) [ oracle.ctx, 12.2.0.1.0 ] , [ oracle.sdo, 12.2.0.1.0 ]
, [ oracle.swd.oui, 12.2.0.1.0 ] , [ oracle.rdbms.dv, 12.2.0.1.0 ] ,
[ oracle.oid.client, 12.2.0.1.0 ] , [ oracle.rdbms.drdaas, 12.2.0.1.0
] , [ oracle.oraolap, 12.2.0.1.0 ] , [ oracle.xdk, 12.2.0.1.0 ] , [
oracle.rdbms.lbac, 12.2.0.1.0 ] , [ oracle.ons.daemon, 12.2.0.1.0 ] ,
[ oracle.network.cman, 12.2.0.1.0 ] , [ oracle.rdbms.oci, 12.2.0.1.0
] , [ oracle.precomp.lang, 12.2.0.1.0 ] , [ oracle.precomp.common,
12.2.0.1.0 ] not present in the Oracle Home or a higher version is
found.
Patching component
oracle.rdbms.rsf, 12.2.0.1.0...
Patching component
oracle.rdbms, 12.2.0.1.0...
Patching component
oracle.rdbms.util, 12.2.0.1.0...
ld: 0711-224 WARNING: Duplicate
symbol: skgxprqhi
ld: 0711-224 WARNING: Duplicate
symbol: skgxpvrpc
ld: 0711-224 WARNING: Duplicate
symbol: skgxpwait
ld: 0711-345 Use the -bloadmap
or -bnoquiet option to obtain more information.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libgeneric12.a[sl.o], imported symbol
slgsst
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/rdbms/lib/kfod.o, imported symbol vsnnum
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libgeneric12.a[kgg.o], imported symbol
kgghash3__larch_dydif
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xargc
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xargv
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xrcfg
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xrc
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libgeneric12.a[skgp.o], imported symbol
slts_runmode
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libgeneric12.a[skgfr.o], imported
symbol slts_runmode
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libgeneric12.a[kge.o], imported symbol
lxdobl
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
OPatch found the word "error"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-224 WARNING: Duplicate
symbol: p_xargc
ld: 0711-224 WARNING: Duplicate
symbol: p_xargv
ld: 0711-224 WARNING: Duplicate
symbol: lsfmai
ld: 0711-224 WARNING: Duplicate
symbol: .lfiwr
ld: 0711-224 WARNING: Duplicate
symbol: lfiwr
ld: 0711-224 WARNING: Duplicate
symbol: skgxppost
ld: 0711-224 WARNING: Duplicate
symbol: skgxpwait
ld: 0711-224 WARNING: Duplicate
symbol: skgxpmmap
ld: 0711-224 WARNING: Duplicate
symbol: skgxpunmap
ld: 0711-224 WARNING: Duplicate
symbol: skgxpgettabledef
ld: 0711-224 WARNING: Duplicate
symbol: skgxpnetmappush
ld: 0711-345 Use the -bloadmap
or -bnoquiet option to obtain more information.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libgeneric12.a[sl.o], imported symbol
slgsst
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libgeneric12.a[kgg.o], imported symbol
kgghash3__larch_dydif
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xargc
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xargv
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xrcfg
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xrc
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libgeneric12.a[skgp.o], imported symbol
slts_runmode
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libgeneric12.a[skgfr.o], imported
symbol slts_runmode
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libgeneric12.a[kge.o], imported symbol
lxdobl
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
OPatch found the word "fail"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-224 WARNING: Duplicate
symbol: p_xargc
ld: 0711-224 WARNING: Duplicate
symbol: p_xargv
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xrc
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib//libgeneric12.a[skgfr.o], imported
symbol slts_runmode
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-783 WARNING: TOC
overflow. TOC size: 87880 Maximum size: 65536
Extra instructions are
being generated for each reference to a TOC
symbol if the symbol is
in the TOC overflow area.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
+
PATH=/bin:/usr/bin:/usr/ccs/bin
+ export PATH
+
lib=/u02/oracle/12.2.0.1/grid/lib/libasmperl12.so
+
makefile=/u02/oracle/12.2.0.1/grid/rdbms/lib/ins_rdbms.mk
+ so_ext=so
+ target=dlopenlib
+ + basename
/u02/oracle/12.2.0.1/grid/lib/libasmperl12.so .so
libname=libasmperl12
+ + dirname
/u02/oracle/12.2.0.1/grid/lib/libasmperl12.so
sodir=/u02/oracle/12.2.0.1/grid/lib
+
ardir=/u02/oracle/12.2.0.1/grid/lib/
+ [ var = dlopenlib ]
+ suffix=LIBS
+ var=
+ [ ! -f
/u02/oracle/12.2.0.1/grid/lib/libasmperl12.a ]
+ [ != ]
+ make -f
/u02/oracle/12.2.0.1/grid/rdbms/lib/ins_rdbms.mk dlopenlib
_FULL_LIBNAME=/u02/oracle/12.2.0.1/grid/lib/libasmperl12.so
_LIBNAME=libasmperl12 _LIBDIR=/u02/oracle/12.2.0.1/grid/lib/
_LIBNAME_LIBS=$(libasmperl12LIBS)
_LIBNAME_EXTRALIBS=$(libasmperl12EXTRALIBS)
ld: 0711-224 WARNING: Duplicate
symbol: .kgfnGetConnDetailsInt2
ld: 0711-224 WARNING: Duplicate
symbol: kgfnGetConnDetailsInt2
ld: 0711-224 WARNING: Duplicate
symbol: procr_terminate
ld: 0711-345 Use the -bloadmap
or -bnoquiet option to obtain more information.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libasmperl12.a[kgfasmcmd.o], imported
symbol PL_thr_key
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libasmperl12.a[kgfasmcmd.o], imported
symbol vsnnum
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-783 WARNING: TOC
overflow. TOC size: 76456 Maximum size: 65536
Extra instructions are
being generated for each reference to a TOC
symbol if the symbol is
in the TOC overflow area.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-783 WARNING: TOC
overflow. TOC size: 84288 Maximum size: 65536
Extra instructions are
being generated for each reference to a TOC
symbol if the symbol is
in the TOC overflow area.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-224 WARNING: Duplicate
symbol: p_xargc
ld: 0711-224 WARNING: Duplicate
symbol: p_xargv
ld: 0711-224 WARNING: Duplicate
symbol: .SlfVfprintf
ld: 0711-224 WARNING: Duplicate
symbol: SlfVfprintf
ld: 0711-224 WARNING: Duplicate
symbol: lnxcopy
ld: 0711-224 WARNING: Duplicate
symbol: .strcat
ld: 0711-224 WARNING: Duplicate
symbol: strcat
ld: 0711-224 WARNING: Duplicate
symbol: .SlfAccess
ld: 0711-224 WARNING: Duplicate
symbol: SlfAccess
ld: 0711-224 WARNING: Duplicate
symbol: p_xrcfg
ld: 0711-224 WARNING: Duplicate
symbol: p_xrc
ld: 0711-345 Use the -bloadmap
or -bnoquiet option to obtain more information.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib//libclient12.a[khpf.o], imported symbol
vsnnum
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xargc
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xargv
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xrcfg
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xrc
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-783 WARNING: TOC
overflow. TOC size: 76016 Maximum size: 65536
Extra instructions are
being generated for each reference to a TOC
symbol if the symbol is
in the TOC overflow area.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-224 WARNING: Duplicate
symbol: p_xargc
ld: 0711-224 WARNING: Duplicate
symbol: naectc
ld: 0711-224 WARNING: Duplicate
symbol: .naedhsd_dummy
ld: 0711-224 WARNING: Duplicate
symbol: naedhsd_dummy
ld: 0711-345 Use the -bloadmap
or -bnoquiet option to obtain more information.
ld: 0711-319 WARNING: Exported
symbol not defined: sldext
ld: 0711-319 WARNING: Exported
symbol not defined: ztapis
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/lib/libagent12.a[homt.o], imported symbol
vsnnum
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xargc
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xrcfg
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xrc
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-224 WARNING: Duplicate
symbol: p_xargc
ld: 0711-224 WARNING: Duplicate
symbol: .lxlterm
ld: 0711-224 WARNING: Duplicate
symbol: p_xrcfg
ld: 0711-224 WARNING: Duplicate
symbol: p_xrc
ld: 0711-345 Use the -bloadmap
or -bnoquiet option to obtain more information.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xargc
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xargv
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xrcfg
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/lib/crt0_64.o, imported symbol p_xrc
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-783 WARNING: TOC
overflow. TOC size: 76616 Maximum size: 65536
Extra instructions are
being generated for each reference to a TOC
symbol if the symbol is
in the TOC overflow area.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-783 WARNING: TOC
overflow. TOC size: 76016 Maximum size: 65536
Extra instructions are
being generated for each reference to a TOC
symbol if the symbol is
in the TOC overflow area.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-783 WARNING: TOC
overflow. TOC size: 77448 Maximum size: 65536
Extra instructions are
being generated for each reference to a TOC
symbol if the symbol is
in the TOC overflow area.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-783 WARNING: TOC
overflow. TOC size: 85424 Maximum size: 65536
Extra instructions are
being generated for each reference to a TOC
symbol if the symbol is
in the TOC overflow area.
OPatch found the word "error"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
+
PATH=/bin:/usr/bin:/usr/ccs/bin
+ export PATH
+
lib=/u02/oracle/12.2.0.1/grid/sqlplus/lib/libsqlplus.so
+
makefile=/u02/oracle/12.2.0.1/grid/sqlplus/lib/ins_sqlplus.mk
+ so_ext=so
+ target=dlopenlib
+ + basename
/u02/oracle/12.2.0.1/grid/sqlplus/lib/libsqlplus.so .so
libname=libsqlplus
+ + dirname
/u02/oracle/12.2.0.1/grid/sqlplus/lib/libsqlplus.so
sodir=/u02/oracle/12.2.0.1/grid/sqlplus/lib
+
ardir=/u02/oracle/12.2.0.1/grid/lib/
+ [ var = dlopenlib ]
+ suffix=LIBS
+ var=
+ [ ! -f
/u02/oracle/12.2.0.1/grid/lib/libsqlplus.a ]
+ [ != ]
+ make -f
/u02/oracle/12.2.0.1/grid/sqlplus/lib/ins_sqlplus.mk dlopenlib
_FULL_LIBNAME=/u02/oracle/12.2.0.1/grid/sqlplus/lib/libsqlplus.so
_LIBNAME=libsqlplus _LIBDIR=/u02/oracle/12.2.0.1/grid/lib/
_LIBNAME_LIBS=$(libsqlplusLIBS)
_LIBNAME_EXTRALIBS=$(libsqlplusEXTRALIBS)
ld: 0711-224 WARNING: Duplicate
symbol: .lxmcpen
ld: 0711-224 WARNING: Duplicate
symbol: lxmcpen
ld: 0711-224 WARNING: Duplicate
symbol: .lxmspax
ld: 0711-224 WARNING: Duplicate
symbol: lxmspax
ld: 0711-224 WARNING: Duplicate
symbol: .lxmfwtx
ld: 0711-224 WARNING: Duplicate
symbol: lxmfwtx
ld: 0711-224 WARNING: Duplicate
symbol: .lxmfwdx
ld: 0711-224 WARNING: Duplicate
symbol: upicls
ld: 0711-224 WARNING: Duplicate
symbol: .upidfn
ld: 0711-224 WARNING: Duplicate
symbol: upidfn
ld: 0711-224 WARNING: Duplicate
symbol: .upiefn
ld: 0711-224 WARNING: Duplicate
symbol: upiefn
ld: 0711-224 WARNING: Duplicate
symbol: .nigsui
ld: 0711-224 WARNING: Duplicate
symbol: nigsui
ld: 0711-224 WARNING: Duplicate
symbol: .OCIDateTimeConvert
ld: 0711-224 WARNING: Duplicate
symbol: OCIDateTimeConvert
ld: 0711-224 WARNING: Duplicate
symbol: lsfmai
ld: 0711-345 Use the -bloadmap
or -bnoquiet option to obtain more information.
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/ldap/lib/s0gsldsearch.o, imported symbol
optind
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-773 WARNING: Object
/u02/oracle/12.2.0.1/grid/ldap/lib/s0gsldsearch.o, imported symbol
optarg
Symbol was expected to
be local. Extra instructions
are being generated to
reference the symbol.
ld: 0711-783 WARNING: TOC
overflow. TOC size: 75952 Maximum size: 65536
Extra instructions are
being generated for each reference to a TOC
symbol if the symbol is
in the TOC overflow area.
Patch 30886680 successfully
applied.
Sub-set patch [30138470] has
become inactive due to the application of a super-set patch
[30886680].
Please refer to Doc ID
2161861.1 for any possible further required actions.
OPatch Session completed with
warnings.
Log file location:
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-22_13-35-18PM_1.log
OPatch completed with warnings.
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
ps -ef|grep pmon
Node2[/home/oracrs]$ export
PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
Node2[/home/oracrs]$ cd
/u02/oracle/12.2.0.1/grid/OPatch
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
ls -ltr
total 312
drwxr-x--- 5 oracrs
oinstall 256 Mar 28 07:07 jre
-rw-r----- 1 oracrs
oinstall 27 Apr 22 11:51 version.txt
drwxr-x--- 2 oracrs
oinstall 256 Apr 22 11:51 scripts
-rw-r----- 1 oracrs
oinstall 3177 Apr 22 11:51 operr_readme.txt
-rwxr-x--- 1 oracrs
oinstall 4218 Apr 22 11:51 operr.bat
-rwxr-x--- 1 oracrs
oinstall 3159 Apr 22 11:51 operr
-rwxr-x--- 1 oracrs
oinstall 4290 Apr 22 11:51 opatch_env.sh
-rw-r----- 1 oracrs
oinstall 2551 Apr 22 11:51 opatch.pl
drwxr-x--- 2 oracrs
oinstall 256 Apr 22 11:56 oplan
-rwxr-x--- 1 oracrs
oinstall 393 Apr 22 11:56 opatchauto.cmd
-rwxr-x--- 1 oracrs
oinstall 1442 Apr 22 11:56 opatchauto
drwxr-x--- 9 oracrs
oinstall 4096 Apr 22 11:56 modules
drwxr-x--- 6 oracrs
oinstall 256 Apr 22 11:56 auto
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
/staging/grid_path_apr2020/30920127/30882603
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
ls -ltr /staging/grid_path_apr2020/30920127/30882603
total 112
drwxrwxr-x 19 oracrs
oinstall 4096 Apr 13 04:25 files
drwxrwxr-x 3 oracrs
oinstall 256 Apr 13 04:32 etc
drwxrwxr-x 3 oracrs
oinstall 256 Apr 13 04:32 custom
-rwxrwxr-x 1 oracrs
oinstall 50882 Apr 13 04:32 README.txt
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
ls -lr /staging/grid_path_apr2020/30920127/30869447
total 32
drwxrwxr-x 6 oracrs
oinstall 256 Apr 13 04:25 files
drwxrwxr-x 3 oracrs
oinstall 256 Apr 13 04:25 etc
drwxrwxr-x 3 oracrs
oinstall 256 Apr 13 04:25 custom
-rwxrwxr-x 1 oracrs
oinstall 16233 Apr 13 04:25 README.txt
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
ls -ltr /staging/grid_path_apr2020/30920127/26839277
total 16
drwxrwxr-x 4 oracrs
oinstall 256 Apr 13 04:25 files
drwxrwxr-x 3 oracrs
oinstall 256 Apr 13 04:25 etc
-rwxrwxr-x 1 oracrs
oinstall 6250 Apr 13 04:25 README.txt
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
ls -ltr /staging/grid_path_apr2020/30920127/30888810
total 16
drwxrwxr-x 4 oracrs
oinstall 256 Apr 13 04:39 files
drwxrwxr-x 3 oracrs
oinstall 256 Apr 13 04:39 etc
-rwxrwxr-x 1 oracrs
oinstall 7580 Apr 13 04:39 README.txt
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
opatch apply -oh /u02/oracle/12.2.0.1/grid -local
/staging/grid_path_apr2020/30920127/30882603
Oracle Interim Patch Installer
version 12.2.0.1.21
Copyright (c) 2020, Oracle
Corporation. All rights reserved.
Oracle Home :
/u02/oracle/12.2.0.1/grid
Central Inventory :
/u02/oracle/oraInventory
from :
/u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.21
OUI version : 12.2.0.1.4
Log file location :
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-22_14-58-10PM_1.log
Verifying environment and
performing prerequisite checks...
OPatch continues with these
patches: 30882603
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle
instances running out of this ORACLE_HOME on the local system.
(Oracle Home =
'/u02/oracle/12.2.0.1/grid')
Is the local system ready for
patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch
'30882603' to OH '/u02/oracle/12.2.0.1/grid'
Patching component
oracle.rdbms, 12.2.0.1.0...
Patching component
oracle.has.db, 12.2.0.1.0...
Patching component
oracle.has.rsf, 12.2.0.1.0...
OPatch found the word "warning"
in the stderr of the make command.
Please look at this stderr. You
can re-run this make command.
Stderr output:
ld: 0711-224 WARNING: Duplicate
symbol: .sleprom
ld: 0711-224 WARNING: Duplicate
symbol: .lstss
ld: 0711-224 WARNING: Duplicate
symbol: .sltrusleep
ld: 0711-345 Use the -bloadmap
or -bnoquiet option to obtain more information.
Patch 30882603 successfully
applied.
Sub-set patch [30122814] has
become inactive due to the application of a super-set patch
[30882603].
Please refer to Doc ID
2161861.1 for any possible further required actions.
OPatch Session completed with
warnings.
Log file location:
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-22_14-58-10PM_1.log
OPatch completed with warnings.
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
opatch apply -oh /u02/oracle/12.2.0.1/grid -local
/staging/grid_path_apr2020/30920127/30869447
Oracle Interim Patch Installer
version 12.2.0.1.21
Copyright (c) 2020, Oracle
Corporation. All rights reserved.
Oracle Home :
/u02/oracle/12.2.0.1/grid
Central Inventory :
/u02/oracle/oraInventory
from :
/u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.21
OUI version : 12.2.0.1.4
Log file location :
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-22_15-03-39PM_1.log
Verifying environment and
performing prerequisite checks...
OPatch continues with these
patches: 30869447
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle
instances running out of this ORACLE_HOME on the local system.
(Oracle Home =
'/u02/oracle/12.2.0.1/grid')
Is the local system ready for
patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch
'30869447' to OH '/u02/oracle/12.2.0.1/grid'
Patching component oracle.usm,
12.2.0.1.0...
Patch 30869447 successfully
applied.
Sub-set patch [30122828] has
become inactive due to the application of a super-set patch
[30869447].
Please refer to Doc ID
2161861.1 for any possible further required actions.
Log file location:
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-22_15-03-39PM_1.log
OPatch succeeded.
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
opatch apply -oh /u02/oracle/12.2.0.1/grid -local
/staging/grid_path_apr2020/30920127/26839277
Oracle Interim Patch Installer
version 12.2.0.1.21
Copyright (c) 2020, Oracle
Corporation. All rights reserved.
Oracle Home :
/u02/oracle/12.2.0.1/grid
Central Inventory :
/u02/oracle/oraInventory
from :
/u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.21
OUI version : 12.2.0.1.4
Log file location :
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-22_15-06-08PM_1.log
Verifying environment and
performing prerequisite checks...
The following patch(es) are
duplicate patches with patches installed in the Oracle Home.
[ 26839277]
You have already installed same
patch(es) with same UPI(s) or same version(s).
These patch(es) will be
skipped.
OPatch Session completed with
warnings.
Log file location:
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-22_15-06-08PM_1.log
OPatch completed with warnings.
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
opatch apply -oh /u02/oracle/12.2.0.1/grid -local
/staging/grid_path_apr2020/30920127/30888810
Oracle Interim Patch Installer
version 12.2.0.1.21
Copyright (c) 2020, Oracle
Corporation. All rights reserved.
Oracle Home :
/u02/oracle/12.2.0.1/grid
Central Inventory :
/u02/oracle/oraInventory
from :
/u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version : 12.2.0.1.21
OUI version : 12.2.0.1.4
Log file location :
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-22_15-07-52PM_1.log
Verifying environment and
performing prerequisite checks...
OPatch continues with these
patches: 30888810
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle
instances running out of this ORACLE_HOME on the local system.
(Oracle Home =
'/u02/oracle/12.2.0.1/grid')
Is the local system ready for
patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch
'30888810' to OH '/u02/oracle/12.2.0.1/grid'
Patching component
oracle.tomcat.crs, 12.2.0.1.0...
Patch 30888810 successfully
applied.
Sub-set patch [30093408] has
become inactive due to the application of a super-set patch
[30888810].
Please refer to Doc ID
2161861.1 for any possible further required actions.
Log file location:
/u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-05-22_15-07-52PM_1.log
OPatch succeeded.
Node2[/u02/oracle/12.2.0.1/grid/OPatch]$
Shell will time out in 60
seconds.
ksh: Timed out waiting for
input.
[Node2:root:/home/root:]
[Node2:root:/home/root:]
[Node2:root:/home/root:] cd cd
/u02/oracle/12.2.0.1/grid/rdbms/install
[Node2:root:/home/root:] cd
/u02/oracle/12.2.0.1/grid/rdbms/install
[Node2:root:/u02/oracle/12.2.0.1/grid/rdbms/install:]
[Node2:root:/u02/oracle/12.2.0.1/grid/rdbms/install:]
[Node2:root:/u02/oracle/12.2.0.1/grid/rdbms/install:]
./rootadd_rdbms.sh
[Node2:root:/u02/oracle/12.2.0.1/grid/rdbms/install:]
cd /u02/oracle/12.2.0.1/crs/install
ksh:
/u02/oracle/12.2.0.1/crs/install: not found.
[Node2:root:/u02/oracle/12.2.0.1/grid/rdbms/install:]
cd /u02/oracle/12.2.0.1/grid/crs/install
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
id
uid=0(root) gid=0(system)
groups=208(tivlogs)
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
./rootcrs.sh -lock
Using configuration parameter
file: /u02/oracle/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can
be found at:
/u02/oracle/PPT/crsdata/Node2/crsconfig/crslock_Node2_2020-05-22_03-29-10PM.log
2020/05/22 15:29:24 CLSRSC-329:
Replacing Clusterware entries in file '/etc/inittab'
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
ps -ef|grep pmon
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
ps -ef|grep d.bin
root 28967328 40108130 1
15:30:15 pts/0 0:00 grep d.bin
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
Shell will time out in 60
seconds.
ksh: Timed out waiting for
input.
Node2[/home/eibm994]$
Node2[/home/eibm994]$
Node2[/home/eibm994]$ ps
-ef|grep pmon
Node2[/home/eibm994]$ ps
-ef|grep d.bin
Node2[/home/eibm994]$
Node2[/home/eibm994]$ sudo su -
Password:
[YOU HAVE NEW MAIL]
[Node2:root:/home/root:] su -
oracrs
Node2[/home/oracrs]$
Node2[/home/oracrs]$ crsctl
query crs activeversion -f
CRS-6750: unable to get the
active version
CRS-6752: Active version query
failed.
Node2[/home/oracrs]$ crsctl
query crs softwarepatch Node2
CRS-6751: unable to get the
software patch version
CRS-6753: Software patch query
failed.
Node2[/home/oracrs]$ crsctl
query crs softwarepatch Node1
CRS-6751: unable to get the
software patch version
CRS-6753: Software patch query
failed.
Node2[/home/oracrs]$ id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Output
from working node
Node1[/home/oracrs]$ . oraenv
ORACLE_SID = [+ASM1] ?
ORACLE_HOME = [/home/oracle] ?
ORACLE_BASE environment
variable is not being set since this
information is not available
for the current user ID oracrs.
You can set ORACLE_BASE
manually if it is required.
Resetting ORACLE_BASE to its
previous value or ORACLE_HOME
The Oracle base remains
unchanged with value /u02/oracle/PPT
Node1[/home/oracrs]$ cd
$ORACLE_HOME
ksh: /home/oracle: Permission
denied.
Node1[/home/oracrs]$ ps
-ef|grep pmon
oracrs 12714028 1 0
May 17 - 0:07 apx_pmon_+APX1
oracrs 51839178 1 0
May 17 - 0:08 mdb_pmon_-MGMTDB
oracrs 6554010 1 0
May 17 - 0:08 asm_pmon_+ASM1
oracrs 8388926 23527656 1
15:56:50 pts/1 0:00 grep pmon
sam44ppt 31588850 1 0
15:32:03 - 0:00 ora_pmon_SAM44PPT1
Node1[/home/oracrs]$ id
uid=350(oracrs)
gid=207(oinstall)
groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node1[/home/oracrs]$ . oraenv
ORACLE_SID = [+ASM1] ? -MGMTDB
The Oracle base remains
unchanged with value /u02/oracle/PPT
Node1[/home/oracrs]$ export
ORACLE_SID=+ASM1
Node1[/home/oracrs]$ crsctl
query crs activeversion -f
Oracle Clusterware active
version on the cluster is [12.2.0.1.0]. The cluster upgrade state is
[NORMAL]. The cluster active patch level is [1833732982].
Node1[/home/oracrs]$ crsctl
query crs softwarepatch Node2
Oracle Clusterware patch level
on node Node2 is [1833732982].
Node1[/home/oracrs]$ crrsctl
query crs softwarepatch Node1
Node1[/home/oracrs]$ crsctl
query crs softwarepatch Node1
Oracle Clusterware patch level
on node Node1 is [1833732982].
Node1[/home/oracrs]$ exit
Connection to Node1 closed.
Node2[/home/oracrs]$
Node2[/home/oracrs]$
Patching on the grid home has now completed successfully. To verify that all patches are applied, run the following commands on both nodes:
<GI_HOME>/bin/kfod op=patches
<GI_HOME>/bin/kfod op=patchlvl
Hostname: Node1
"OCW APR 2020 RELEASE UPDATE 12.2.0.1.200414 (30882603)"
"ACFS APR 2020 RELEASE UPDATE 12.2.0.1.200414 (30869447)"
"Database Oct 2019 Release Update : 12.2.0.1.191015 (30138470)"
"TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:190208.0920) (30093408)"
"DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)"
Hostname: Node2
"OCW APR 2020 RELEASE UPDATE 12.2.0.1.200414 (30882603)"
"ACFS APR 2020 RELEASE UPDATE 12.2.0.1.200414 (30869447)"
"Database Apr 2020 Release Update : 12.2.0.1.200414 (30886680)"
"TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:RELEASE) (30888810)"
"DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)"
"OCW APR 2020 RELEASE UPDATE 12.2.0.1.200414 (30882603)"
"ACFS APR 2020 RELEASE UPDATE 12.2.0.1.200414 (30869447)"
"Database Oct 2019 Release Update : 12.2.0.1.191015 (30138470)"
"TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:190208.0920) (30093408)"
"DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)"
Hostname: Node2
"OCW APR 2020 RELEASE UPDATE 12.2.0.1.200414 (30882603)"
"ACFS APR 2020 RELEASE UPDATE 12.2.0.1.200414 (30869447)"
"Database Apr 2020 Release Update : 12.2.0.1.200414 (30886680)"
"TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:RELEASE) (30888810)"
"DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)"
--------------issue
node----------
Node2[/home/oracrs]$ cd $ORACLE_HOME/bin
Node2[/u02/oracle/12.2.0.1/grid/bin]$ kfod op=patches
---------------
List of Patches
===============
26839277
30869447
30882603
30886680
30888810
Node2[/u02/oracle/12.2.0.1/grid/bin]$ kfod op=patchlvl
-------------------
Current Patch level
===================
475801424
Node2[/u02/oracle/12.2.0.1/grid/bin]$
------------working node--------------
Node1[/u02/oracle/12.2.0.1/grid/bin]$
Node1[/u02/oracle/12.2.0.1/grid/bin]$ kfod op=patches
---------------
List of Patches
===============
26839277
30869447
30882603
30886680
30888810
Node1[/u02/oracle/12.2.0.1/grid/bin]$ kfod op=patchlvl
-------------------
Current Patch level
===================
475801424
Even after applying all the patches, we were not able to start clusterware; it failed with the error below. We therefore needed to run the command that completes the patching setup behind the scenes (clscfg -localpatch); this situation generally arises with manual patching.
Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
./crsctl start crs
CRS-6706: Oracle Clusterware Release patch level ('475801424') does not match Software patch level ('1833732982'). Oracle Clusterware cannot be started.
CRS-4000: Command Start failed, or completed with errors.
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
Here is what I know.
The current issue is that clusterware on Node2 will not start because its patch level does not match node Node1:
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:] ./crsctl start crs
CRS-6706: Oracle Clusterware Release patch level ('475801424') does not match Software patch level ('1833732982'). Oracle Clusterware cannot be started.
Node Node1 was successfully upgraded:
Node1[/home/oracrs]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [1833732982].
Please
use the following steps to complete the patching:
1. Run the following command
as the root user to complete the patching set up behind the scenes:
#GI_HOME/bin:> ./clscfg
-localpatch
2. Run the following command
as the root user to lock the GI home:
#GI_HOME/crs/install:>
./rootcrs.sh -lock
3. Run the following command
as the root user to start the GI:
#GI_HOME/bin:> ./crsctl
start crs
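The same three steps as one compact sketch (run as root; GI home as used in this environment):

GRID_HOME=/u02/oracle/12.2.0.1/grid
$GRID_HOME/bin/clscfg -localpatch          # complete the patching set up behind the scenes
$GRID_HOME/crs/install/rootcrs.sh -lock    # relock the GI home
$GRID_HOME/bin/crsctl start crs            # start clusterware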
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
pwd
/u02/oracle/12.2.0.1/grid/bin
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
id
uid=0(root) gid=0(system)
groups=208(tivlogs)
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
./clscfg -localpatch
Creating OCR keys for user
'root', privgrp 'system'..
Operation successful.
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
cd ..
[Node2:root:/u02/oracle/12.2.0.1/grid:]
cd crs/install
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
pwd
/u02/oracle/12.2.0.1/grid/crs/install
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
./rootcrs.sh -lock
Using configuration parameter
file: /u02/oracle/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can
be found at:
/u02/oracle/PPT/crsdata/Node2/crsconfig/crslock_Node2_2020-05-22_08-55-34PM.log
2020/05/22 20:55:47 CLSRSC-329:
Replacing Clusterware entries in file '/etc/inittab'
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
[Node2:root:/u02/oracle/12.2.0.1/grid/crs/install:]
cd ../..
[Node2:root:/u02/oracle/12.2.0.1/grid:]
cd bin
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
./crsctl start crs
CRS-4123: Oracle High
Availability Services has been started.
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
CRS-4123: Oracle High
Availability Services has been started.
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
ps -ef|grep pmon
sam44ppt 4784574 1 0
20:58:39 - 0:00 ora_pmon_SAM44PPT2
oracrs 14549256 1 0
20:57:34 - 0:00 asm_pmon_+ASM2
root 15532384 28377574 1
20:59:50 pts/0 0:00 grep pmon
oracrs 16646528 1 0
20:57:52 - 0:00 apx_pmon_+APX2
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
df -g
Filesystem GB blocks
Free %Used Iused %Iused Mounted on
/dev/bos_hd4 2.62
2.17 18% 20399 4% /
/dev/bos_hd2 6.50
1.90 71% 80067 15% /usr
/dev/bos_hd9var 1.00
0.45 55% 17838 15% /var
/dev/hd3 1.50
1.43 5% 777 1% /tmp
/dev/hd1 3.50
2.68 24% 1410 1% /home
/proc -
- - - - /proc
/dev/bos_hd10opt 1.75
0.58 67% 57649 30% /opt
/dev/hd11admin 0.25
0.25 1% 5 1% /admin
/dev/besclientlv 0.25
0.15 40% 21 1% /opt/BESClient
/dev/lv_doonce 0.25
0.07 71% 450 3% /opt/DoOnceAIX
/dev/itm_lv 2.00
1.01 50% 18178 8% /opt/IBM/ITM
/dev/scm_lv 1.62
1.53 6% 843 1% /opt/IBM/SCM
/dev/tiv_lv 1.00
1.00 1% 12 1% /opt/Tivoli
/dev/best1_lv 3.00
1.99 34% 242 1% /opt/best1
/dev/connectlv 2.00
1.94 3% 1946 1% /opt/connect
/dev/ctm700lv 3.00
2.11 30% 5757 2% /opt/ctm700
/dev/patrol_lv 2.00
1.14 43% 4197 2% /opt/patrol
/dev/tivoli_lv 0.25
0.20 20% 13142 22% /opt/tivoli/cit
/dev/oswatchlv 98.00
49.13 50% 8794 1% /oswatcher
/dev/staginglv 125.00
4.58 97% 42226 4% /staging
/dev/sampptu01lv 70.00
34.48 51% 98468 2% /u01
/dev/sampptu02lv 160.00
44.49 73% 529106 5% /u02
/dev/sampptu03lv 20.00
11.49 43% 31889 2% /u03
/dev/fslv00 1.00
0.99 1% 79 1% /var/adm/perfmgr
/dev/livedump 0.25
0.25 1% 4 1% /var/adm/ras/livedump
/dev/lv_auditlog 2.00
1.13 44% 2167 1% /var/log/eprise
/dev/varbes_lv 2.00
1.27 37% 7779 3% /var/opt/BESClient
/dev/ansiblelv 5.00
5.00 1% 4 1% /var/opt/ansible
/dev/asm/gfnew-7 2.00
1.80 10% 411816 10% /ppt/gfnew
/dev/asm/alprod-7 2.00
1.66 17% 708512 17% /ppt/alprod
/dev/asm/gfdata-7 10.00
8.19 19% 3792376 19% /ppt/gfdata
/dev/asm/gfprod-7 2.00
1.77 12% 477200 12% /ppt/gfprod
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
Trace file location:
Node2[/u02/oracle/PPT/diag/crs/Node2/crs/trace]$ ls -ltr
total
8372808
-rw-rw----
1 root oinstall 26215758 Jun 05 2019 osysmond_9.trc
-rw-rw----
1 root oinstall 2353913 Jun 05 2019 osysmond_9.trm
Clusterware and the database started successfully, and we then applied the patch on the database home using opatchauto.
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
ps -ef|grep pmon
sam44ppt 4784574 1 0
20:58:39 - 0:00 ora_pmon_SAM44PPT2
oracrs 14549256 1 0
20:57:34 - 0:00 asm_pmon_+ASM2
root 15532384 28377574 1
20:59:50 pts/0 0:00 grep pmon
oracrs 16646528 1 0
20:57:52 - 0:00 apx_pmon_+APX2
[Node2:root:/u02/oracle/12.2.0.1/grid/bin:]
[Node2:root:/u01/oracle/sam44ppt/product/12.2.0.1/OPatch:]
[Node2:root:/u01/oracle/sam44ppt/product/12.2.0.1/OPatch:]
[Node2:root:/u01/oracle/sam44ppt/product/12.2.0.1/OPatch:]
[Node2:root:/u01/oracle/sam44ppt/product/12.2.0.1/OPatch:]
[Node2:root:/u01/oracle/sam44ppt/product/12.2.0.1/OPatch:]
./opatchauto apply /staging/grid_path_apr2020/30920127 -analyze -oh /u01/oracle/sam44ppt/product/12.2.0.1
OPatchauto session is initiated
at Fri May 22 23:45:15 2020
System initialization log file
is
/u01/oracle/sam44ppt/product/12.2.0.1/cfgtoollogs/opatchautodb/systemconfig2020-05-22_11-45-43PM.log.
Session log file is
/u01/oracle/sam44ppt/product/12.2.0.1/cfgtoollogs/opatchauto/opatchauto2020-05-22_11-47-50PM.log
The id for this session is E25K
Executing OPatch prereq
operations to verify patch applicability on home
/u01/oracle/sam44ppt/product/12.2.0.1
Patch applicability verified
successfully on home /u01/oracle/sam44ppt/product/12.2.0.1
Verifying SQL patch
applicability on home /u01/oracle/sam44ppt/product/12.2.0.1
SQL patch applicability
verified successfully on home /u01/oracle/sam44ppt/product/12.2.0.1
OPatchAuto successful.
--------------------------------Summary--------------------------------
Analysis for applying patches
has completed successfully:
Host:Node2
RAC
Home:/u01/oracle/sam44ppt/product/12.2.0.1
Version:12.2.0.1.0
==Following patches were
SKIPPED:
Patch:
/staging/grid_path_apr2020/30920127/30869447
Reason: This patch is not
applicable to this specified target type - "rac_database"
Patch:
/staging/grid_path_apr2020/30920127/26839277
Reason: This patch is not
applicable to this specified target type - "rac_database"
Patch:
/staging/grid_path_apr2020/30920127/30888810
Reason: This patch is not
applicable to this specified target type - "rac_database"
==Following patches were
SUCCESSFULLY analyzed to be applied:
Patch:
/staging/grid_path_apr2020/30920127/30882603
Log:
/u01/oracle/sam44ppt/product/12.2.0.1/cfgtoollogs/opatchauto/core/opatch/opatch2020-05-22_23-48-29PM_1.log
Patch:
/staging/grid_path_apr2020/30920127/30886680
Log:
/u01/oracle/sam44ppt/product/12.2.0.1/cfgtoollogs/opatchauto/core/opatch/opatch2020-05-22_23-48-29PM_1.log
OPatchauto session completed at
Fri May 22 23:52:55 2020
Time taken to complete the
session 7 minutes, 41 seconds
[Node2:root:/u01/oracle/sam44ppt/product/12.2.0.1/OPatch:]
[Node2:root:/u01/oracle/sam44ppt/product/12.2.0.1/OPatch:]
./opatchauto apply /staging/grid_path_apr2020/30920127 -oh
/u01/oracle/sam44ppt/product/12.2.0.1
OPatchauto session is initiated
at Fri May 22 23:53:30 2020
System initialization log file
is
/u01/oracle/sam44ppt/product/12.2.0.1/cfgtoollogs/opatchautodb/systemconfig2020-05-22_11-53-47PM.log.
Session log file is
/u01/oracle/sam44ppt/product/12.2.0.1/cfgtoollogs/opatchauto/opatchauto2020-05-22_11-54-59PM.log
The id for this session is 7D66
Executing OPatch prereq
operations to verify patch applicability on home
/u01/oracle/sam44ppt/product/12.2.0.1
Patch applicability verified
successfully on home /u01/oracle/sam44ppt/product/12.2.0.1
Verifying SQL patch
applicability on home /u01/oracle/sam44ppt/product/12.2.0.1
SQL patch applicability
verified successfully on home /u01/oracle/sam44ppt/product/12.2.0.1
Preparing to bring down
database service on home /u01/oracle/sam44ppt/product/12.2.0.1
Successfully prepared home
/u01/oracle/sam44ppt/product/12.2.0.1 to bring down database service
Bringing down database service
on home /u01/oracle/sam44ppt/product/12.2.0.1
Following database(s) and/or
service(s) are stopped and will be restarted later during the
session: sam44ppt
Database service successfully
brought down on home /u01/oracle/sam44ppt/product/12.2.0.1
Performing prepatch operation
on home /u01/oracle/sam44ppt/product/12.2.0.1
Prepatch operation completed successfully on home /u01/oracle/sam44ppt/product/12.2.0.1
Start applying binary patch on
home /u01/oracle/sam44ppt/product/12.2.0.1
Successfully executed command:
/usr/sbin/slibclean
Binary patch applied
successfully on home /u01/oracle/sam44ppt/product/12.2.0.1
Performing postpatch operation
on home /u01/oracle/sam44ppt/product/12.2.0.1
Postpatch operation completed
successfully on home /u01/oracle/sam44ppt/product/12.2.0.1
Starting database service on
home /u01/oracle/sam44ppt/product/12.2.0.1
Database service successfully
started on home /u01/oracle/sam44ppt/product/12.2.0.1
Preparing home
/u01/oracle/sam44ppt/product/12.2.0.1 after database service
restarted
No step execution
required.........
Trying to apply SQL patch on
home /u01/oracle/sam44ppt/product/12.2.0.1
SQL patch applied successfully
on home /u01/oracle/sam44ppt/product/12.2.0.1
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed
successfully. Please find the summary as follows:
Host:Node2
RAC
Home:/u01/oracle/sam44ppt/product/12.2.0.1
Version:12.2.0.1.0
Summary:
==Following patches were
SKIPPED:
Patch:
/staging/grid_path_apr2020/30920127/30869447
Reason: This patch is not
applicable to this specified target type - "rac_database"
Patch:
/staging/grid_path_apr2020/30920127/26839277
Reason: This patch is not
applicable to this specified target type - "rac_database"
Patch:
/staging/grid_path_apr2020/30920127/30888810
Reason: This patch is not
applicable to this specified target type - "rac_database"
==Following patches were
SUCCESSFULLY applied:
Patch:
/staging/grid_path_apr2020/30920127/30882603
Log:
/u01/oracle/sam44ppt/product/12.2.0.1/cfgtoollogs/opatchauto/core/opatch/opatch2020-05-23_00-00-17AM_1.log
Patch:
/staging/grid_path_apr2020/30920127/30886680
Log:
/u01/oracle/sam44ppt/product/12.2.0.1/cfgtoollogs/opatchauto/core/opatch/opatch2020-05-23_00-00-17AM_1.log
OPatchauto session completed at
Sat May 23 00:22:02 2020
Time taken to complete the
session 28 minutes, 34 seconds
You have mail in
/usr/spool/mail/root
[Node2:root:/u01/oracle/sam44ppt/product/12.2.0.1/OPatch:]
[Node2:root:/u01/oracle/sam44ppt/product/12.2.0.1/OPatch:]
[Node2:root:/u01/oracle/sam44ppt/product/12.2.0.1/OPatch:]
To identify which file system is busy, and due to which users, and to kill the jobs holding it, use the following procedure.
How to Stop All Processes
Accessing a File System
Become superuser or assume
an equivalent role.
List all the processes that
are accessing the file system so that you know which processes you
are going to stop.
# fuser -c [ -u ]
/mount-point
-c Reports on files that
are mount points for file systems and any files within those mounted
file systems.
-u Displays the user login
name for each process ID.
/mount-point
Specifies the name of the
file system for which you want to stop processes.
Stop all processes that are
accessing the file system.
# fuser -c -k /mount-point
A SIGKILL is sent to each
process that is using the file system.
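A minimal sketch of that sequence (the mount point is a placeholder; -k sends SIGKILL, so always review the process list first):

MP=/ppt/gfprod
fuser -c -u $MP    # list the PIDs and login names holding the file system
# inspect each PID (ps -ef | grep <pid>, proctree <pid>) before killing
fuser -c -k $MP    # send SIGKILL to every remaining process using it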
[Node1:root:/home/root:]
[Node1:root:/home/root:] fuser
-c -u /dev/asm/gfdata-7
/dev/asm/gfdata-7:
14877076c(pgfuser) 15794664c(pgfuser) 17564050c(pgfuser)
17957316c(pgfuser)
[Node1:root:/home/root:] fuser
-c -u /dev/asm/gfprod-7
/dev/asm/gfprod-7:
14877076(pgfuser) 15794664(pgfuser) 17564050(pgfuser)
17957316(pgfuser) 19988968(pgfuser) 23855612(pgfuser)
[Node1:root:/home/root:]
[Node1:root:/home/root:] ps
-ef| grep pgfuser
root 6816046 2818332 1
10:08:02 pts/2 0:00 grep pgfuser
pgfuser 14877076 15794664 0
May 14 - 0:00 sqlplus
pgfuser 15794664 19988968 0
May 14 - 0:00 /bin/ksh /ppt/gfprod/pipe2/scripts/runsql.sh
gf10450u 9
pgfuser 17564050 17957316 0
May 14 - 0:00 sqlplus
pgfuser 17957316 23855612 0
May 14 - 0:00 /bin/ksh /ppt/gfprod/pipe2/scripts/runsql.sh
gf10450u 3
pgfuser 19988968 14287186 0
May 14 - 0:00 /bin/ksh -x
/ppt/gfprod/pipe2/jobs_exec/rgfd1458.sh
pgfuser 23855612 8520188 0
May 14 - 0:00 /bin/ksh -x
/ppt/gfprod/pipe2/jobs_exec/rgfd1452.sh
[Node1:root:/home/root:]
[Node1:root:/home/root:]
[Node1:root:/home/root:]
[Node1:root:/home/root:]
proctree 23855612
8520188
/opt/ctm700/ctm_agent/ctm/exe/p_ctmag
23855612 /bin/ksh -x
/ppt/gfprod/pipe2/jobs_exec/rgfd1452.sh
17957316 /bin/ksh
/ppt/gfprod/pipe2/scripts/runsql.sh gf10450u 3
17564050 sqlplus
[Node1:root:/home/root:] kill
-9 23855612 17957316
[Node1:root:/home/root:]
proctree 23855612
proctree : no such process :
23855612
[Node1:root:/home/root:] ps
-ef| grep pgfuser
root 9830848 2818332 1
10:11:54 pts/2 0:00 grep pgfuser
pgfuser 14877076 15794664 0
May 14 - 0:00 sqlplus
pgfuser 15794664 19988968 0
May 14 - 0:00 /bin/ksh /ppt/gfprod/pipe2/scripts/runsql.sh
gf10450u 9
pgfuser 17564050 1 0
May 14 - 0:00 sqlplus
pgfuser 19988968 14287186 0
May 14 - 0:00 /bin/ksh -x
/ppt/gfprod/pipe2/jobs_exec/rgfd1458.sh
[Node1:root:/home/root:]
[Node1:root:/home/root:] ps
-ef| grep pgfuser
root 9830848 2818332 1
10:11:54 pts/2 0:00 grep pgfuser
pgfuser 14877076 15794664 0
May 14 - 0:00 sqlplus
pgfuser 15794664 19988968 0
May 14 - 0:00 /bin/ksh /ppt/gfprod/pipe2/scripts/runsql.sh
gf10450u 9
pgfuser 17564050 1 0
May 14 - 0:00 sqlplus
pgfuser 19988968 14287186 0
May 14 - 0:00 /bin/ksh -x
/ppt/gfprod/pipe2/jobs_exec/rgfd1458.sh
[Node1:root:/home/root:]
[Node1:root:/home/root:] ps
-ef| grep rgfd1458
pgfuser 19988968 14287186 0
May 14 - 0:00 /bin/ksh -x
/ppt/gfprod/pipe2/jobs_exec/rgfd1458.sh
root 29950268 2818332 1
10:13:19 pts/2 0:00 grep rgfd1458
[Node1:root:/home/root:]
[Node1:root:/home/root:]
proctree 19988968
14287186
/opt/ctm700/ctm_agent/ctm/exe/p_ctmag
19988968 /bin/ksh -x
/ppt/gfprod/pipe2/jobs_exec/rgfd1458.sh
15794664 /bin/ksh
/ppt/gfprod/pipe2/scripts/runsql.sh gf10450u 9
14877076 sqlplus
[Node1:root:/home/root:] kill
-9 19988968 15794664 14877076
[Node1:root:/home/root:]
[Node1:root:/home/root:]
[Node1:root:/home/root:] ps
-ef| grep pgfuser
root 15794678 2818332 1
10:15:55 pts/2 0:00 grep pgfuser
pgfuser 17564050 1 0
May 14 - 0:00 sqlplus
[Node1:root:/home/root:]
[Node1:root:/home/root:]
[Node1:root:/home/root:] kill
-9 17564050
[Node1:root:/home/root:]
[Node1:root:/home/root:]
[Node1:root:/home/root:] ps
-ef| grep pgfuser
root 17564080 2818332 1
10:16:45 pts/2 0:00 grep pgfuser
[Node1:root:/home/root:]
fuser -c -u /dev/asm/gfdata-7
fuser -c -u /dev/asm/gfprod-7