Apply PSU Oct 2016 Patch (24436338) and PSU Oct 2017 Patch (26635745) on RAC Database and RDBMS 11.2.0.4
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 1: Unzip the patch as the Grid user. (Note: The same patch is used for both the Clusterware and Database homes.)
Patch 24436338: GRID INFRASTRUCTURE PATCH SET UPDATE 11.2.0.4.161018 (OCT2016)
unzip <patch file>
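For example, assuming the two patch zips were staged under /oswatcher and /staging/octpatch (the zip file names below are illustrative, following Oracle's p<patch>_<release>_<platform>.zip naming convention):
cd /oswatcher
unzip p24436338_112040_AIX64-5L.zip
cd /staging/octpatch
unzip p26635745_112040_AIX64-5L.zip
This creates the 24436338 and 26635745 directories referenced in the steps below.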
[/home/oracrs]$ cd /oswatcher/24436338
[/oswatcher/24436338]$ ls -ltr
drwxr-xr-x 5 oracrs oinstall 256 Feb 22 2016 22502505 ==> (ACFS PSU 11.2.0.4.160419 )
drwxr-xr-x 5 oracrs oinstall 256 Jul 19 2016 23054319 ==> (OCW PSU 11.2.0.4.160719 )
drwxrwxr-x 14 oracrs oinstall 4096 Aug 29 2016 24006111 ==> (DB PSU 11.2.0.4.161018 (INCLUDES CPUOCT2016))
-rw-rw-r-- 1 oracrs oinstall 73477 Sep 13 2016 PatchSearch.xml
-rw-r--r-- 1 oracrs oinstall 549 Sep 14 2016 bundle.xml
-rw-r--r-- 1 oracrs oinstall 0 Sep 14 2016 README.txt
-rw-r--r-- 1 oracrs oinstall 64061 Oct 17 2016 README.html
[/oswatcher/24436338]$
Note: Run the opatch command from the top-level 24436338 directory as the instance owner, not from the individual sub-patch directories.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 2: Check the OPatch utility version.
You must use OPatch utility version 11.2.0.3.6 or later to apply this patch.
Check the OPatch version for the DATABASE home:
Set the environment to the Database home and run the command below.
$ORACLE_HOME/OPatch/opatch version
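A minimal sketch of the check for both homes (the home paths below follow the examples used later in this note); the reported version must be 11.2.0.3.6 or higher:
export ORACLE_HOME=/u01/oracle/database/product/11.2.0.4
$ORACLE_HOME/OPatch/opatch version
export ORACLE_HOME=/u02/oracle/11.2.0.4/grid
$ORACLE_HOME/OPatch/opatch version
Expected output is similar to:
OPatch Version: 11.2.0.3.12
OPatch succeeded.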
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 3: Check cluster health and any ACFS file systems.
crsctl check cluster -all
crsctl status resource -t
olsnodes -a
srvctl status scan_listener
To check the ACFS file systems:
As the Grid user, run the following command to find the names of the CRS-managed ACFS file system resources:
# crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
If an ACFS file system is not used for Oracle Database software but is registered in the ACFS registry, run the following command as root to find all ACFS file system mount points:
# /sbin/acfsutil registry
Take the config backup from both nodes:
ifconfig -a
ps -ef|grep smon
ps -ef|grep tns
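As a hedged example, the output of these checks can be redirected to files on each node so the pre-patch state can be compared after patching (the /oswatcher/prechecks directory is illustrative):
mkdir -p /oswatcher/prechecks
crsctl status resource -t > /oswatcher/prechecks/crs_resources_`hostname`_prepatch.txt
ifconfig -a > /oswatcher/prechecks/ifconfig_`hostname`_prepatch.txt
ps -ef | grep smon > /oswatcher/prechecks/smon_`hostname`_prepatch.txt
ps -ef | grep tns > /oswatcher/prechecks/tns_`hostname`_prepatch.txt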
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 4: Database Prechecks
set lines 200 pages 200
select name,open_mode from gv$database;
select INSTANCE_NUMBER,INSTANCE_NAME,HOST_NAME,to_char(startup_time,'dd-mon-yyyy hh24:mi:ss') start_time, STATUS from gv$instance;
select owner, object_name, object_type,status from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';
col TABLE_NAME for a30
col PARTITION_NAME for a20
col SUBPARTITION_NAME for a20
select OWNER,TABLE_NAME,PARTITION_NAME,SUBPARTITION_NAME,NUM_ROWS,LAST_ANALYZED from dba_TAB_STATISTICS where STALE_STATS='YES';
If there are stale statistics, run the following:
EXECUTE dbms_stats.gather_dictionary_stats;
set lines 200 pages 200
col action_time format a40;
col action format a30;
col namespace format a10;
col comments format a30;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.INST_ID,i.instance_name,r.* from gv$instance i, registry$history r;
For 12c:
select * from dba_registry_sqlpatch ;
col value format a70
select * from gv$diag_info where name='Diag Trace';
set lines 200 pages 200
column comp_name format a50
col SCHEMA format a20
SELECT comp_id, comp_name, status, schema, version FROM dba_registry;
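Optionally, spool these precheck queries to a file from the same SQL*Plus session so the output can be compared with the post-patch checks in Step 15 (the spool path is illustrative):
spool /oswatcher/prechecks/db_prechecks_prepatch.lst
-- re-run the queries above
spool off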
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 5: Validate the Oracle Inventory.
Do this for both homes (Grid and Database):
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>
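It can also help to save the pre-patch inventory of each home to a file so it can be diffed against the post-patch inventory (output paths are illustrative):
From the Database home owner:
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME > /oswatcher/prechecks/lsinv_db_home_prepatch.txt
From the Grid owner:
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME > /oswatcher/prechecks/lsinv_grid_home_prepatch.txt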
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 6: Take a backup of the Oracle Home and the Oracle Inventory.
Note: First, shut down all services running from the Home being backed up.
A) Shut down the database instance:
srvctl stop instance -d <database name> -i <instance_name> -o immediate
B) Shut down the DB Console and agent, if running:
emctl status dbconsole
emctl stop dbconsole
emctl stop agent
C) Check the Clusterware status on the node and shut down the database.
As the oracrs user:
cd $ORACLE_HOME/bin
./crsctl check crs
select name,open_mode from v$database;
select count(1),inst_id from gv$session group by inst_id
alter system archive log current; -- switch the current log before shutdown
Now:
Stop the database instance as the instance owner:
srvctl stop instance -d <database name> -i <instance_name> -o immediate
Then take the INVENTORY backup as the instance owner:
cd /u02/oracle/oraInventory
du -sg .
pwd
tar -cvf /oswatcher/oracle_inventory_<database_name>.tar .
du -sg /oswatcher/oracle_inventory_<database_name>.tar
To take the ORACLE_HOME backup:
echo $ORACLE_SID
echo $ORACLE_HOME
cd $ORACLE_HOME
pwd
du -sg .
tar -cvf /oswatcher/CH13574/oracle_home_database.tar .
du -sg /oswatcher/CH13574/oracle_home_database.tar
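Before proceeding, it is worth confirming that the tar archives are readable end to end; a quick sketch using the backup paths above:
tar -tvf /oswatcher/oracle_inventory_<database_name>.tar | head
tar -tvf /oswatcher/CH13574/oracle_home_database.tar > /dev/null
echo $? ==> 0 means the archive could be read completely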
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 7: Stop OSWatcher (if installed).
Log in as the oracrs user (Grid owner).
ps -ef|grep -i osw
cd /oswatcher
find . -name stopOSWbb.sh* 2>/dev/null
cd /oswatcher/oswbb/
ls -ltr stopOSWbb.sh
./stopOSWbb.sh
ps -ef|grep -i osw
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 8: Stop the EM agent (not mandatory) and DB Console (mandatory).
You must stop the EM agent processes running from the database home prior to patching the Oracle RAC database or GI home, and prior to rolling back the patch from the Oracle RAC database or GI home.
Set the environment, then:
emctl status agent
emctl stop agent
Also stop the dbconsole
As the Oracle RAC database home owner execute:
$ <ORACLE_HOME>/bin/emctl stop dbconsole
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 9: Start the database instance.
As root, verify that the Clusterware is running:
crsctl check crs
To start the database instance, as the instance owner:
srvctl start instance -d <database name> -i <instance_name>
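A quick hedged check that the instance and database services are back up (substitute your own database and instance names):
srvctl status instance -d <database name> -i <instance_name>
srvctl status database -d <database name>
ps -ef|grep pmon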
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 10: Check whether an OCM response file has already been created; if not, create one.
The same OCM file can be used for both the Clusterware and Database homes.
To create the OCM configuration (response) file:
export ORACLE_HOME=<either the Grid home or the Oracle Database home>
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp [-no_banner] -output <specify_the_location>/ocm.rsp
Example, from the root user:
node1[/u01/oracle/database/product/11.2.0.4/OPatch/ocm/bin]$ ./emocmrsp -no_banner
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:
You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: yes
The OCM configuration response file (ocm.rsp) was successfully created.
node1[/u01/oracle/database/product/11.2.0.4/OPatch/ocm/bin]$
Note: This file must have permissions that allow it to be used by both the Oracle Grid owner and the Oracle Database owner.
Set the permissions from root:
chmod 775 ocm.rsp
How to Create an OCM Response file to Apply a Patch in Silent Mode - opatch silent (Doc ID 966023.1)
export ORACLE_HOME=<my_oracle_home_path>
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp [-no_banner] -output <specify_the_location>/file.rsp
Available options:
emocmrsp -help
-no_banner ==> Indicates that the banner for the response utility is not to be displayed.
-output file ==> Creates a response file with the specified name. By default, the response file is created in your current working directory with the name ocm.rsp.
-verify file ==> Verifies that the contents of the specified response file are valid.
emocmrsp -verify <full path of ocm.rsp file>
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 11: Validate the Oracle inventory.
As the instance owner, set the environment using oraenv and run:
/u01/oracle/sam44ppt/product/11.2.0.4/OPatch:] opatch lsinventory -detail -oh $ORACLE_HOME
Step 12: Check for patch conflicts.
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/24436338/
Example:
node1[/u01/oracle/database/product/11.2.0.4/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /staging/12ccluster_patch/11grac/26635745
Oracle Interim Patch Installer version 11.2.0.3.12
Copyright (c) 2018, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home : /u01/oracle/database/product/11.2.0.4
Central Inventory : /u02/oracle/oraInventory
from : /u01/oracle/database/product/11.2.0.4/oraInst.loc
OPatch version : 11.2.0.3.12
OUI version : 11.2.0.4.0
Log file location : /u01/oracle/database/product/11.2.0.4/cfgtoollogs/opatch/opatch2018-02-16_13-34-53PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.
node1[/u01/oracle/sam44ppt/product/11.2.0.4/OPatch]$
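Optionally, OPatch can also confirm there is enough free space under the Oracle home before the apply, using the standard CheckSystemSpace prerequisite check (same patch staging directory as above):
$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseDir /oswatcher/24436338/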
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 13: Apply the patch to the Oracle Database home.
Note: We do not need to stop the instance on the node manually; since we are using opatch auto, the instance should be left running.
The OPatch utility automates patch application for the Oracle Grid Infrastructure (GI) home and the Oracle RAC database homes when run with root privileges. It must be executed on each node in the cluster if the GI home or Oracle RAC database home is on non-shared storage. The utility must not be run in parallel on the cluster nodes.
To apply the patch:
There is no need to unmount ACFS if you are only applying the patch to the Oracle RAC database home.
Log in as the root user.
Set the environment to the Oracle home being patched:
echo $ORACLE_HOME
$ORACLE_HOME/OPatch/opatch auto <patch unziped path> -oh $ORACLE_HOME
/u01/oracle/<database>/product/11.2.0.4/OPatch/opatch auto /oswatcher/24436338 -oh $ORACLE_HOME -ocmrf $ORACLE_HOME/OPatch/ocm/bin/ocm.rsp
Example (from the root user):
export PATH=/u01/oracle/sam44ppt/product/11.2.0.4/OPatch:$PATH
echo $PATH
ls -ltr /u01/oracle/sam01prd/product/11.2.0.4/OPatch/ocm/bin/ocm.rsp
ls -ltr /u01/oracle/sam01prd/product/11.2.0.4
df -g /u01/oracle/sam01prd/product/11.2.0.4
ls -ltr /staging/octpatch/26635745
Note: The instance will be down during the patching activity and will be started automatically.
[node1:root:/u01/oracle/testppt/product/11.2.0.4/OPatch:] ls -ltr /u01/oracle/testppt/product/11.2.0.4/OPatch/ocm/bin/ocm.rsp
-rwxrwxr-x 1 root system 625 Feb 16 13:16 /u01/oracle/testppt/product/11.2.0.4/OPatch/ocm/bin/ocm.rsp
[node1:root:/u01/oracle/testppt/product/11.2.0.4/OPatch:] opatch auto /staging/12ccluster_patch/11grac/26635745 -oh /u01/oracle/testppt/product/11.2.0.4 -ocmrf /u01/oracle/testppt/product/11.2.0.4/OPatch/ocm/bin/ocm.rsp
Executing /u02/oracle/12.1.0.2/grid/perl/bin/perl /u01/oracle/testppt/product/11.2.0.4/OPatch/crs/patchDB.pl -patchdir /staging/12ccluster_patch/11grac -patchn 26635745 -oh /u01/oracle/testppt/product/11.2.0.4 -ocmrf /u01/oracle/testppt/product/11.2.0.4/OPatch/ocm/bin/ocm.rsp
This is the main log file: /u01/oracle/testppt/product/11.2.0.4/cfgtoollogs/opatchauto2018-02-17_12-14-10.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/oracle/testppt/product/11.2.0.4/cfgtoollogs/opatchauto2018-02-17_12-14-10.report.log
Stopping RAC /u01/oracle/testppt/product/11.2.0.4 ...
Stopped RAC /u01/oracle/testppt/product/11.2.0.4 successfully
patch /staging/12ccluster_patch/11grac/26635745/26392168 apply successful for home /u01/oracle/testppt/product/11.2.0.4
patch /staging/12ccluster_patch/11grac/26635745/26609929/custom/server/26609929 apply successful for home /u01/oracle/testppt/product/11.2.0.4
Starting RAC /u01/oracle/testppt/product/11.2.0.4 ...
Started RAC /u01/oracle/testppt/product/11.2.0.4 successfully
opatch auto succeeded.
[node1:root:/u01/oracle/testppt/product/11.2.0.4/OPatch:]
[node1:root:/u01/oracle/testppt/product/11.2.0.4/OPatch:] ps -ef|grep pmon
oracrs 49741880 1 0 Feb 11 - 0:18 asm_pmon_+ASM1
testppt 52166826 1 0 12:18:29 - 0:00 ora_pmon_testPPT1
oracrs 64946390 1 0 Feb 11 - 0:09 mdb_pmon_-MGMTDB
root 4653382 61669430 0 12:24:45 pts/0 0:00 grep pmon
[node1:root:/u01/oracle/testppt/product/11.2.0.4/OPatch:]
Example 2:
[node1:root:/u01/oracle/test_db/product/11.2.0.4/OPatch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[node1:root:/u01/oracle/test_db/product/11.2.0.4/OPatch:] echo $PATH
/u01/oracle/test_db/product/11.2.0.4/OPatch:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java7_64/jre/bin:/usr/java7_64/bin:/usr/local/bin:/usr/ETC/bin:/opt/DoOnceAIX/SecTools/aix64db2cops:/opt/DoOnceAIX/SecTools/aixdb2cops:/opt/DoOnceAIX/SecTools/aixtsmcops:/opt/DoOnceAIX/SecTools/logkeeper:/opt/DoOnceAIX/SecTools/loglooker
[node1:root:/u01/oracle/test_db/product/11.2.0.4/OPatch:] opatch auto /staging/octpatch/26635745 -oh /u01/oracle/test_db/product/11.2.0.4 -ocmrf /u01/oracle/test_db/product/11.2.0.4/OPatch/ocm/bin/ocm.rsp
Executing /u02/oracle/11.2.0.4/grid/perl/bin/perl ./crs/patch11203.pl -patchdir /staging/octpatch -patchn 26635745 -oh /u01/oracle/test_db/product/11.2.0.4 -ocmrf /u01/oracle/test_db/product/11.2.0.4/OPatch/ocm/bin/ocm.rsp -paramfile /u02/oracle/11.2.0.4/grid/crs/install/crsconfig_params
This is the main log file: /u01/oracle/test_db/product/11.2.0.4/cfgtoollogs/opatchauto2018-04-29_02-22-45.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/oracle/test_db/product/11.2.0.4/cfgtoollogs/opatchauto2018-04-29_02-22-45.report.log
2018-04-29 02:22:45: Starting Clusterware Patch Setup
Using configuration parameter file: /u02/oracle/11.2.0.4/grid/crs/install/crsconfig_params
Stopping RAC /u01/oracle/test_db/product/11.2.0.4 ...
Stopped RAC /u01/oracle/test_db/product/11.2.0.4 successfully
patch /staging/octpatch/26635745/26392168 apply successful for home /u01/oracle/test_db/product/11.2.0.4
patch /staging/octpatch/26635745/26609929/custom/server/26609929 apply successful for home /u01/oracle/test_db/product/11.2.0.4
Starting RAC /u01/oracle/test_db/product/11.2.0.4 ...
Started RAC /u01/oracle/test_db/product/11.2.0.4 successfully
opatch auto succeeded.
[node1:root:/u01/oracle/test_db/product/11.2.0.4/OPatch:]
[node1:root:/u01/oracle/test_db/product/11.2.0.4/OPatch:]
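After opatch auto completes on the node, a quick hedged verification that the patches are now registered in the database home inventory (run as the database home owner; the patch numbers below follow the Oct 2017 example above):
$ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME | egrep "26392168|26609929|26635745"
For the Oct 2016 PSU, grep instead for the sub-patch numbers listed in Step 1 (22502505, 23054319, 24006111).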
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 14: Load the modified SQL files into the database.
The following steps load the modified SQL files into the database. For an Oracle RAC environment, perform these steps on only one node.
Only one instance should be up and running, so it is better to shut down the instance(s) on the other node(s).
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
CONNECT / AS SYSDBA
STARTUP
@catbundle.sql psu apply
QUIT
###########
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> @utlrp.sql
catbundle_PSU_<database SID>_APPLY_<TIMESTAMP>.log
catbundle_PSU_<database SID>_GENERATE_<TIMESTAMP>.log
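The catbundle run writes the log files named above, typically under $ORACLE_BASE/cfgtoollogs/catbundle (or $ORACLE_HOME/cfgtoollogs/catbundle if ORACLE_BASE is not set); as a hedged check, scan them for errors and confirm the PSU row in the registry history:
ls -ltr $ORACLE_BASE/cfgtoollogs/catbundle
grep -i error $ORACLE_BASE/cfgtoollogs/catbundle/catbundle_PSU_*_APPLY_*.log
select action_time, action, version, id, bundle_series, comments from dba_registry_history order by action_time;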
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 15: Perform the post-patch checks:
set lines 200 pages 200
select name,open_mode from gv$database;
select INSTANCE_NUMBER,INSTANCE_NAME,HOST_NAME,to_char(startup_time,'dd-mon-yyyy hh24:mi:ss') start_time, STATUS from gv$instance;
select owner, object_name, object_type,status from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';
col TABLE_NAME for a30
col PARTITION_NAME for a20
col SUBPARTITION_NAME for a20
select OWNER,TABLE_NAME,PARTITION_NAME,SUBPARTITION_NAME,NUM_ROWS,LAST_ANALYZED from dba_TAB_STATISTICS where STALE_STATS='YES';
set lines 200 pages 200
col action_time format a40;
col action format a30;
col namespace format a10;
col comments format a30;
col version format a13;
set lin 200 head on feed on pagesize 100;
select i.INST_ID,i.instance_name,r.* from gv$instance i, registry$history r;
The catbundle.sql execution is reflected in the dba_registry_history view by a row associated with bundle series PSU:
select * from dba_registry_history;
col value format a70
select * from gv$diag_info where name='Diag Trace';
set lines 200 pages 200
column comp_name format a50
col SCHEMA format a20
SELECT comp_id, comp_name, status, schema, version FROM dba_registry;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Repeat Step 1 through Step 13 (the patch apply) on the other node. Step 14 (loading the modified SQL files) is performed only once, from a single node.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 16: Upgrade the Oracle Recovery Manager catalog.
If you are using an Oracle Recovery Manager catalog, the catalog schema needs to be upgraded. Enter the following commands to upgrade it:
rman target /
connect catalog <user_name>/<password>@<catalogdatabase_name>
UPGRADE CATALOG;
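Note that RMAN prompts you to enter the UPGRADE CATALOG command a second time to confirm. As a hedged post-check, the catalog schema version can then be queried from the catalog owner's RCVER view:
sqlplus <user_name>/<password>@<catalogdatabase_name>
select * from rcver;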
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 17: Start the EM agent.
Set the environment, then:
emctl start agent
emctl status agent
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Step 18: Start OSWatcher.
Log in as the oracrs user (Grid owner).
ps -ef|grep -i osw
cd /oswatcher
find . -name startOSWbb.sh* 2>/dev/null
cd /oswatcher/oswbb/
ls -ltr startOSWbb.sh
nohup ./startOSWbb.sh 15 360 gzip &
This collects data every 15 seconds and keeps the logs for 360 hours (15 days).
ps -ef|grep -i osw
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%