Tuesday, 8 January 2019

12c Oracle Clusterware OCT 2018 PSU Patching / Patch 28349311: Grid Infrastructure Patch Step by Step

Types of Oracle Patches

Oracle regularly makes patches available to upgrade features, enhance security, or fix problems with supported software. The major types of patches are:

 

·         Interim patches - contain a single bug fix or a collection of bug fixes provided as required

·         Interim patches for security bug fixes - contain customer-specific security bug fixes

·         Diagnostic patches - intended to help diagnose or verify a fix or a collection of bug fixes

·         Bundle Patch Updates (BPUs) - a cumulative collection of fixes for a specific product or component

·         Patch Set Updates (PSUs) - a cumulative collection of high-impact, low-risk, proven fixes for a specific product or component; each PSU also includes the fixes from the corresponding Security Patch Update

·         Security Patch Updates (SPU) - a cumulative collection of security bug fixes. SPUs were formerly known as Critical Patch Updates (CPU).

·         System Patch - contains several sub-patches in a format that can be used by OPatchAuto.

·         Merge Label Request (MLR) - a merge of two or more fixes. MLR creation requires a label for the new set of merged code and a Patch Set Exception

Overview of the Patch Process

You can download the patch from My Oracle Support; the master note below lists the available patches:

https://support.oracle.com/knowledge/Oracle%20Cloud/2285558_1.html


Patches can be applied using OPatch or OPatchAuto.

 

OPatch consists of patching utilities that help ensure your Oracle software stays current and secure. The utilities are:

·         OPatch: A Java-based utility that enables the application and rollback of patches to Oracle software.

·         OPatchAuto: A patch orchestration tool that generates patching instructions specific to your target configuration and then uses OPatch to perform the patching operations without user intervention. Specifically, OPatchAuto can:

1.  Perform pre-patch checks.

2.  Apply the patch.

3.  Start and stop the running servers.

4.  Perform post-patch checks.

5.  Roll back patches when patch deinstallation is required.

 

·         OPatchAuto automates patch application to the Grid Infrastructure (GI) cluster by applying patches to both the GI home and the managed Oracle Real Application Clusters (RAC) homes.

·         Patch orchestration is the automated execution of the patching steps, such as the execution of pre-patch checks, stopping services, applying the binary patches, and starting the services. Patch orchestration for Oracle Database 12c applies the patch to the GI/RAC configuration on that machine, including all of its databases. The OPatchAuto patch orchestration utility is available with version 12.1 of the OPatch utility.

 

Real Application Clusters can be patched in different ways:

·         Rolling Patching

·         Non-Rolling Patching

Advantages of Rolling Patching

 

o    Rolling patching refers to patching different databases, or different instances of the same database (in a Real Application Clusters environment), one at a time, without stopping the database.

o    The advantage of a RAC rolling patch is that it enables at least some instances of the RAC installation to be available during the scheduled outage required for patch upgrades. Only the RAC instance that is currently being patched needs to be brought down. The other instances can continue to remain available. This means that the impact on the application downtime required for such scheduled outages is further minimized. Oracle’s opatch utility enables the user to apply the patch successively to the different instances of the RAC installation.

o    Before the OPatchAuto utility existed, we had to bring down the instance and cluster-related services manually (prepatch.sh) before applying a rolling patch.

o    The GI System patch includes updates for both the Clusterware home and Database home that can be applied in a rolling fashion.

o    Whether a patch is Oracle RAC Rolling Installable is stated in its README.html.
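
As an illustration of the node-at-a-time flow described above, here is a minimal dry-run sketch; the Grid home, patch location, and node names are hypothetical placeholders, and the loop only prints the commands rather than executing them.

```shell
# Dry-run sketch of a RAC rolling patch sequence. GRID_HOME, PATCH_DIR,
# and the node names are hypothetical placeholders.
rolling_patch_plan() {
  grid_home=$1; patch_dir=$2; shift 2
  for node in "$@"; do
    # One node at a time: opatchauto stops the local stack, patches the
    # home, and restarts it while the other nodes keep serving users.
    echo "on $node: $grid_home/OPatch/opatchauto apply $patch_dir -oh $grid_home"
  done
}

rolling_patch_plan /u01/app/12.2.0.1/grid /stage/30116802 racnode1 racnode2
```

Each printed line corresponds to one maintenance window per node; only the node currently being patched is down.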


Clusterware patch on Grid Home 12.2.0.1 Step by Step

 

1) Two-Node RAC Database with Standby Database

 

 The patching sequence should be as below, since each RAC database in the environment also has a standby database:

 

1) Disable log shipping
2) Apply the PSU patch on the standby clusterware/Grid home and on the standby database
3) Apply the PSU patch on the primary database
4) Re-enable log shipping

 

2) Two node RAC Patching

 

a)   Grid home patch

b)   Oracle Home patch

 

The PSU patch must be applied to both the Grid home and the database home.

12c Oracle Clusterware OCT 2019 PSU Patching /Patch 30116802: GI OCT 2019 RELEASE UPDATE 12.2.0.1.191015: GRID INFRASTRUCTURE PATCH STEP BY STEP

Patch details
==============
Patch 30116802: GI OCT 2019 RELEASE UPDATE 12.2.0.1.191015

This patch is applied to both the Grid home and the RAC database homes.


Prerequisite
============

1) Download Patch 30116802: GI OCT 2019 RELEASE UPDATE 12.2.0.1.191015 from Oracle Support.
2) Check free space in the clusterware home file system /u02 (approximately 15-22 GB free) and in the /tmp file system.
   Note: If any ACFS file systems are mounted on the node, unmount them before stopping the clusterware or starting the patching.
3) Check the clusterware and database component status:
 ps -ef|grep pmon
 ps -ef|grep tns
 crsctl check crs
 crsctl status resource -t
 crsctl query crs activeversion
 crsctl query crs releaseversion
 crsctl query crs softwareversion
 ps -ef|grep d.bin

4) Check the OPatch version. It is recommended to use the latest OPatch: download it from Oracle Support (patch 6880880) and unzip it in the Grid home after renaming the old OPatch folder. OPatch version 12.2.0.1.17 or later is required to apply this patch on all platforms.
5) Run opatch lsinventory from the Grid home to check the inventory for consistency and corruption.
6) Unzip the downloaded patch as the Grid user.
7) Run the analyze command with opatchauto as the root user after setting the proper PATH; take care to give the correct Grid home path.
8) There is a bug in this patch, so follow the instructions in the note below:
 opatchauto/opatch apply failing with CLSRSC-46: Error: '<GRID_HOME>/suptools/tfa/release/tfa_home/jlib/jewt4.jar' does not exist (Doc ID 2409411.1)
9) Apply the patch with opatchauto on the Grid home.
10) Verify the patch details on the Grid home.
11) Review the OPatch logs if required.

OPatch maintains logs for apply, rollback, and lsinventory operations, located in ORACLE_HOME/cfgtoollogs/opatch. Each log file is tagged with the timestamp of the operation, and a new log file is created each time you run OPatch. OPatch also maintains an index of processed commands and log files in the opatch_history.txt file.
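
As a small convenience, assuming only the log directory layout described above, a helper like the following (the function name is made up) picks out the most recent OPatch log:

```shell
# Return the newest OPatch log under an Oracle home, relying on the
# ORACLE_HOME/cfgtoollogs/opatch layout described above.
latest_opatch_log() {
  oracle_home=$1
  # ls -t sorts newest first; opatch writes one timestamped log per run.
  ls -t "$oracle_home"/cfgtoollogs/opatch/opatch*.log 2>/dev/null | head -1
}

# Example: tail -50 "$(latest_opatch_log /u02/oracle/12.2.0.1/grid)"
```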

Example:


Patch location
-----------
Node1[/oswatcher]$ cd 2020_patch
Node1[/oswatcher/2020_patch]$ ls -ltr
total 3613688
-rw-------    1 users  staff     170920687 Feb 10 16:03 p6880880_122010_AIX64-5L.zip
-rw-------    1 users  staff    1679281294 Feb 10 16:09 p30116802_122010_AIX64-5L.zip
Node1[/oswatcher/2020_patch]$


1) Change the ownership of the patch files if they are not owned by the Grid owner, unzip the patch, and set 775 permissions (as a precaution):

chown -R oracrs:oinstall 2020_patch

chown oracrs:oinstall p30116802_122010_AIX64-5L.zip
chmod -R 775 30116802

2) Check the OPatch version as the Grid owner:

Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ export ORACLE_HOME=/u02/oracle/12.2.0.1/grid/
Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ export ORACLE_SID=+ASM1

    $ unzip <OPATCH-ZIP> -d <ORACLE_HOME>
    $ <ORACLE_HOME>/OPatch/opatch version

The version output of the previous command should be 12.2.0.1.17 or later.
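
The minimum-version requirement can be enforced with a version-aware comparison. This is a sketch: it assumes GNU sort (for -V), and the extraction of the `opatch version` output in the comment is an assumption about its format.

```shell
# Check that an OPatch version string meets a required minimum.
# Assumes GNU sort for -V (version-number ordering).
opatch_version_ok() {
  current=$1; minimum=$2
  # If the minimum sorts first (or the two are equal), the check passes.
  [ "$(printf '%s\n%s\n' "$minimum" "$current" | sort -V | head -1)" = "$minimum" ]
}

# Example (format of `opatch version` output assumed):
# opatch_version_ok "$("$ORACLE_HOME"/OPatch/opatch version | awk 'NR==1{print $3}')" 12.2.0.1.17
```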

3) Validation of Oracle Inventory

Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.14
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid//oraInst.loc
OPatch version    : 12.2.0.1.14
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-12_15-07-11PM_1.log

Lsinventory Output file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/lsinv/lsinventory2020-02-12_15-07-11PM.txt
Or, with the -detail option:

Node1[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch lsinventory -detail -oh /u02/oracle/12.2.0.1/grid
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-22-28AM_1.log

Lsinventory Output file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/lsinv/lsinventory2020-02-15_04-22-28AM.txt
--------------------------------------------------------------------------------
Local Machine Information::



4)  Run OPatch Conflict Check

export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch


Determine whether any currently installed one-off patches conflict with this patch 30116802 as follows:

    For the Grid Infrastructure home, as the Grid owner:

cd /u02/oracle/12.2.0.1/grid/OPatch

    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/30138470
    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/30122814
    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/30122828
    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/26839277
    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30116802/30093408
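
The five checks above can be driven from one loop instead of typing each command. The sub-patch numbers are the ones listed above; the function only prints the commands (drop the echo to actually execute them):

```shell
# Print the conflict-check command for each sub-patch of 30116802.
conflict_check_cmds() {
  base=$1   # your UNZIPPED_PATCH_LOCATION
  for sub in 30138470 30122814 30122828 26839277 30093408; do
    echo "opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $base/30116802/$sub"
  done
}

conflict_check_cmds /oswatcher/2020_patch
```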

Example:

Testserver[/u02/oracle/12.2.0.1/grid/OPatch]$
Testserver[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/30138470
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-26-24AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
Test[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/30122814
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-26-36AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
Test[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/30122828
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-26-47AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
Test[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/26839277
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-26-57AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
Test[/u02/oracle/12.2.0.1/grid/OPatch]$  opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oswatcher/2020_patch/30116802/30093408
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-27-06AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
Test[/u02/oracle/12.2.0.1/grid/OPatch]$




5) Run the OPatch system space check as the Grid owner


Check if enough free space is available on the ORACLE_HOME filesystem for the patches to be applied as given below:

    For Grid Infrastructure Home, as home user:

        Create file /tmp/patch_list_gihome.txt with the following content:

        % cat /tmp/patch_list_gihome.txt

        <UNZIPPED_PATCH_LOCATION>/30116802/30138470
        <UNZIPPED_PATCH_LOCATION>/30116802/30122814
        <UNZIPPED_PATCH_LOCATION>/30116802/30122828
        <UNZIPPED_PATCH_LOCATION>/30116802/26839277
        <UNZIPPED_PATCH_LOCATION>/30116802/30093408

        Run the opatch command to check if enough free space is available in the Grid Infrastructure Home:

        % $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
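
The patch list file can be generated with a here-document instead of editing it by hand; PATCH_BASE below is a placeholder for your unzipped patch location:

```shell
# Build /tmp/patch_list_gihome.txt with the five sub-patch directories.
PATCH_BASE=/oswatcher/2020_patch    # placeholder: your unzipped patch location
cat > /tmp/patch_list_gihome.txt <<EOF
$PATCH_BASE/30116802/30138470
$PATCH_BASE/30116802/30122814
$PATCH_BASE/30116802/30122828
$PATCH_BASE/30116802/26839277
$PATCH_BASE/30116802/30093408
EOF

# Then run:
# $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
```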


Example:

Test server[/u02/oracle/12.2.0.1/grid/OPatch]$ ls -ltr /tmp/patch_list_gihome.txt
-rw-r--r--    1 oracrs   oinstall        200 Feb 14 16:58 /tmp/patch_list_gihome.txt
Test server[/u02/oracle/12.2.0.1/grid/OPatch]$ opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
Oracle Interim Patch Installer version 12.2.0.1.19
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/12.2.0.1/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.19
OUI version       : 12.2.0.1.4
Log file location : /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatch/opatch2020-02-15_04-30-05AM_1.log

Invoking prereq "checksystemspace"

Prereq "checkSystemSpace" passed.

OPatch succeeded.
Test server[/u02/oracle/12.2.0.1/grid/OPatch]$


6) Due to a bug, if the jdev-rt.jar file is not present, the patch apply will throw an error. As a pre-check, follow the instructions in Doc ID 2409411.1 and remove the entries for jdev-rt.jar and jewt4.jar from the files crsconfig_fileperms and crsconfig_fileperms.sbs.
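
A scripted form of that pre-check might look like this. It is a sketch of the Doc ID 2409411.1 workaround (the helper name is made up): it keeps a .backup copy before deleting the two jar entries. Run it as root against both files named above.

```shell
# Remove the jdev-rt.jar and jewt4.jar entries from a crsconfig perms file,
# keeping a restorable backup first. Helper name is hypothetical.
strip_jar_entries() {
  f=$1
  cp -p "$f" "$f.backup"
  sed -e '/jdev-rt\.jar/d' -e '/jewt4\.jar/d' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
}

# strip_jar_entries /u02/oracle/12.2.0.1/grid/crs/utl/<hostname>/crsconfig_fileperms
# strip_jar_entries /u02/oracle/12.2.0.1/grid/crs/sbs/crsconfig_fileperms.sbs
```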

Error details
----------
CRS-2673: Attempting to stop 'ora.cssd' on 'Test server'
CRS-2677: Stop of 'ora.cssd' on 'Test server' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'Test server'
CRS-2677: Stop of 'ora.gipcd' on 'Test server' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'Test server' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2020/02/15 06:18:30 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.
2020/02/15 06:18:51 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.
2020/02/15 06:18:52 CLSRSC-46: Error: '/u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar' does not exist
2020/02/15 06:18:53 CLSRSC-152: Could not set ownership on '/u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar'

After fixing the cause of the failure, run opatchauto resume.
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Sat Feb 15 06:18:58 2020
Time taken to complete the session 5 minutes, 40 seconds

 opatchauto failed with error code 42
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] ps -ef|grep pmon
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]
Shell will time out in 60 seconds.

Reason for the issue
----------

[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]
Shell will time out in 60 seconds.
ksh[2]: Timed out waiting for input.
Test server[/home/eibm994]$ sudo su -
Password:
[YOU HAVE NEW MAIL]
[Test server:root:/home/root:] ls -ltr /u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar
ls: 0653-341 The file /u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar does not exist.
[Test server:root:/home/root:] ls -ltr /u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar
ls: 0653-341 The file /u02/oracle/12.2.0.1/grid/suptools/tfa/release/tfa_home/jlib/jdev-rt.jar does not exist.
[Test server:root:/home/root:]

Solution
-----------

[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] cd /u02/oracle/12.2.0.1/grid/crs/utl/Test server
ksh: /u02/oracle/12.2.0.1/grid/crs/utl/Test server:  not found.
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] cd /u02/oracle/12.2.0.1/grid/crs/utl/Test server
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:] ls -ltr crsconfig_fileperms
-rw-r-----    1 oracrs   oinstall      25199 May 16 2019  crsconfig_fileperms
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:] cp -p crsconfig_fileperms crsconfig_fileperms.backup
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:] vi crsconfig_fileperms
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:]
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:]
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:] cat crsconfig_fileperms|grep -i jewt4.jar
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:] cat crsconfig_fileperms|grep -i jdev-rt.jar
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/utl/Test server:] cd /u02/oracle/12.2.0.1/grid/crs/sbs
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] ls -ltr crsconfig_fileperms.sbs
-rw-r-----    1 oracrs   oinstall      26177 Oct 17 2018  crsconfig_fileperms.sbs
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] cp -p crsconfig_fileperms.sbs  crsconfig_fileperms.sbs.backup
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] vi crsconfig_fileperms.sbs
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:]
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:]
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] cat crsconfig_fileperms.sbs|grep -i jewt4.jar
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] cat crsconfig_fileperms
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:] cat crsconfig_fileperms.sbs|grep -i jdev-rt.jar
[Test server:root:/u02/oracle/12.2.0.1/grid/crs/sbs:]


7)  One-off patch conflict detection and resolution, as the root user:

/u02/oracle/12.2.0.1/grid/OPatch/opatchauto apply /oswatcher/2020_patch/30116802 -analyze -oh /u02/oracle/12.2.0.1/grid


8) Stop all RAC database instances and the clusterware on the node, and take a backup of the Grid home and inventory from that node


srvctl stop instance -d test1db -i test1db1 -o immediate
srvctl stop instance -d test2db  -i test2db1 -o immediate

Check the status of CRS:
./crsctl check crs
ps -ef|grep d.bin

Before stopping the cluster, check whether any ACFS file systems are mounted on the server. If so, unmount them as the root user; otherwise the clusterware will not stop.

------------unmount acfs file system----------------------------


from root user


/dev/varbes_lv      2.00      1.29   36%     8196     3% /var/opt/BESClient
/dev/asm/alprod-7      2.00      1.66   17%   708512    17% /ppt/alprod
/dev/asm/gfprod-7      2.00      1.77   12%   477200    12% /ppt/gfprod

[Test:root:/u02/oracle/12.2.0.1/grid/bin:] ./crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/alprod-7
VOLUME_DEVICE=/dev/asm/alprod-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/alprod-7
VOLUME_DEVICE=/dev/asm/alprod-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfdata-7
VOLUME_DEVICE=/dev/asm/gfdata-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfdata-7
VOLUME_DEVICE=/dev/asm/gfdata-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfnew-7
VOLUME_DEVICE=/dev/asm/gfnew-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfnew-7
VOLUME_DEVICE=/dev/asm/gfnew-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfprod-7
VOLUME_DEVICE=/dev/asm/gfprod-7
AUX_VOLUMES=
CANONICAL_VOLUME_DEVICE=/dev/asm/gfprod-7
VOLUME_DEVICE=/dev/asm/gfprod-7
[Test:root:/u02/oracle/12.2.0.1/grid/bin:] /sbin/acfsutil registry
Mount Object:
  Device: /dev/asm/alprod-7
  Mount Point: /ppt/alprod
  Disk Group: OPS_DATA
  Volume: ALPROD
  Options: none
  Nodes: all
  Accelerator Volumes:
Mount Object:
  Device: /dev/asm/gfdata-7
  Mount Point: /ppt/gfdata
  Disk Group: OPS_DATA
  Volume: GFDATA
  Options: none
  Nodes: all
  Accelerator Volumes:
Mount Object:
  Device: /dev/asm/gfnew-7
  Mount Point: /ppt/gfnew
  Disk Group: OPS_DATA
  Volume: GFNEW
  Options: none
  Nodes: all
  Accelerator Volumes:
Mount Object:
  Device: /dev/asm/gfprod-7
  Mount Point: /ppt/gfprod
  Disk Group: OPS_DATA
  Volume: GFPROD
  Options: none
  Nodes: all
  Accelerator Volumes:
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/alprod-7
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfnew-7
umount: 0506-347 Cannot find anything to unmount.
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfdata-7
umount: 0506-347 Cannot find anything to unmount.
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfprod-7
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:]

[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/bos_hd4       2.00      1.54   24%    20747     6% /
/dev/bos_hd2      19.12      4.29   78%    82348     8% /usr
/dev/bos_hd9var      1.00      0.43   57%    18985    16% /var
/dev/hd3           5.00      4.93    2%      392     1% /tmp
/dev/hd1           3.00      1.11   64%    10839     4% /home
/proc                 -         -    -        -      - /proc
/dev/bos_hd10opt      1.25      0.24   82%    49382    45% /opt
/dev/hd11admin      0.25      0.25    1%        5     1% /admin
/dev/besclientlv      0.25      0.15   40%       21     1% /opt/BESClient
/dev/lv_doonce      0.25      0.07   71%      450     3% /opt/DoOnceAIX
/dev/itm_lv        2.62      1.13   57%    18522     7% /opt/IBM/ITM
/dev/scm_lv        2.00      1.90    5%      846     1% /opt/IBM/SCM
/dev/tiv_lv        1.00      0.97    4%      110     1% /opt/Tivoli
/dev/best1_lv      3.00      2.42   20%      169     1% /opt/best1
/dev/connectlv      2.00      1.95    3%      950     1% /opt/connect
/dev/ctm700lv      3.00      2.11   30%     5798     2% /opt/ctm700
/dev/patrol_lv      2.00      1.14   43%     4197     2% /opt/patrol
/dev/tivoli_lv      0.25      0.20   20%    13207    22% /opt/tivoli/cit
/dev/oswatchlv     98.00     45.94   54%     8789     1% /oswatcher
/dev/staginglv    105.00     14.80   86%    33423     1% /staging
/dev/sampptu01lv     35.00     10.00   72%    93706     4% /u01
/dev/sampptu02lv    145.00     43.13   71%   318269     3% /u02
/dev/sampptu03lv     10.00      8.57   15%    29304     2% /u03
/dev/fslv00        1.00      0.99    1%       79     1% /var/adm/perfmgr
/dev/livedump      0.25      0.25    1%        4     1% /var/adm/ras/livedump
/dev/lv_auditlog      2.00      0.66   67%     2129     2% /var/log/eprise
/dev/varbes_lv      2.00      1.29   36%     8196     3% /var/opt/BESClient

To back up the Grid home, we first need to stop the database instances and the clusterware/Grid stack. You can stop the database instances as the Grid owner or the instance owner.

1.  Stop the database instance:

srvctl stop instance -d testdb -i testdb1 -o immediate

2.  Stop the Grid/CRS stack as the root user:

cd /u02/oracle/12.2.0.1/grid/bin
./crsctl stop crs

A)  Back up the Grid home and inventory as root:

cd /u02/oracle/12.2.0.1/grid
tar -cvf /staging/grid_home_database.tar .
du -sg /staging/grid_home_database.tar

Back up the inventory:

cd /u02/oracle/oraInventory
tar -cvf /staging/grid_home_inventory.tar .
du -hP /staging/grid_home_inventory.tar
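
Before relying on the backups, it is worth a quick sanity check that each archive lists cleanly and has a plausible entry count. This helper is a sketch (the name and threshold are made up):

```shell
# Verify a tar backup lists cleanly and has at least min_entries entries.
verify_tar_backup() {
  archive=$1; min_entries=$2
  # tar -tf produces no output on a corrupt/missing archive, so the
  # entry count falls below the threshold and the check fails.
  count=$(tar -tf "$archive" 2>/dev/null | wc -l)
  [ "$count" -ge "$min_entries" ]
}

# Example: verify_tar_backup /staging/grid_home_database.tar 1000
```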




9) Start the clusterware on node 1. It is not required to start the database instances on node 1.

As the root user:

cd /u02/oracle/12.2.0.1/grid/bin
./crsctl start crs

Note: opatchauto stops the clusterware automatically during patching, so the clusterware should be up and running beforehand. The database instances do not need to be running, since we are applying the patch to the Grid home only.

Note: If any ACFS file systems exist on the server, unmount them as per the steps below.

10) Unmount the ACFS file systems again and check the file systems

/dev/asm/alprod-7      2.00      1.66   17%   708512    17% /ppt/alprod
/dev/asm/gfnew-7      2.00      1.80   10%   411816    10% /ppt/gfnew
/dev/asm/gfprod-7      2.00      1.77   12%   477200    12% /ppt/gfprod
/dev/asm/gfdata-7     10.00      8.17   19%  3838312    19% /ppt/gfdata


[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/alprod-7
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfnew-7
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfdata-7
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] /usr/sbin/umount /dev/asm/gfprod-7
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/bos_hd4       3.00      2.55   15%    20373     4% /
/dev/bos_hd2       6.50      1.89   71%    80046    15% /usr
/dev/bos_hd9var      1.00      0.46   55%    17473    14% /var
/dev/hd3           5.00      4.93    2%      400     1% /tmp
/dev/hd1           4.50      3.68   19%     1404     1% /home
/proc                 -         -    -        -      - /proc
/dev/bos_hd10opt      1.25      0.24   82%    49395    45% /opt
/dev/hd11admin      0.25      0.25    1%        5     1% /admin
/dev/besclientlv      0.25      0.15   40%       21     1% /opt/BESClient
/dev/lv_doonce      0.25      0.07   71%      450     3% /opt/DoOnceAIX
/dev/itm_lv        2.00      0.82   59%    18193     9% /opt/IBM/ITM
/dev/scm_lv        2.00      1.90    5%      843     1% /opt/IBM/SCM
/dev/tiv_lv        1.00      1.00    1%       12     1% /opt/Tivoli
/dev/best1_lv      3.00      1.99   34%      242     1% /opt/best1
/dev/connectlv      2.00      1.94    3%     1848     1% /opt/connect
/dev/ctm700lv      3.00      2.12   30%     5749     2% /opt/ctm700
/dev/patrol_lv      2.00      1.14   43%     4196     2% /opt/patrol
/dev/tivoli_lv      0.25      0.20   20%    13142    22% /opt/tivoli/cit
/dev/oswatchlv     98.00     63.92   35%     8792     1% /oswatcher
/dev/staginglv    105.00     27.09   75%    37180     1% /staging
/dev/sampptu01lv     50.00     25.77   49%    88289     2% /u01
/dev/sampptu02lv    130.00     28.37   79%   370449     5% /u02
/dev/sampptu03lv     20.00     11.49   43%    31886     2% /u03
/dev/fslv00        1.00      0.99    1%       79     1% /var/adm/perfmgr
/dev/livedump      0.25      0.25    1%        4     1% /var/adm/ras/livedump
/dev/lv_auditlog      2.00      1.13   44%     2167     1% /var/log/eprise
/dev/varbes_lv      2.00      1.29   36%     8394     3% /var/opt/BESClient
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:]


11) To run the pre-checks, run the analyze command as the root user again

export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch


[Test server:root:/u02/oracle/12.2.0.1/grid/bin:]
[Test server:root:/u02/oracle/12.2.0.1/grid/bin:] cd /u02/oracle/12.2.0.1/grid/OPatch
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] export PATH=$PATH:/u02/oracle/12.2.0.1/grid/OPatch
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] ./opatchauto apply /oswatcher/2020_patch/30116802 -analyze -oh /u02/oracle/12.2.0.1/grid

OPatchauto session is initiated at Sat Feb 15 11:18:28 2020

System initialization log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-15_11-19-15AM.log.

Session log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2020-02-15_11-20-56AM.log
The id for this session is 4465

Executing OPatch prereq operations to verify patch applicability on home /u02/oracle/12.2.0.1/grid
Patch applicability verified successfully on home /u02/oracle/12.2.0.1/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:Test server
CRS Home:/u02/oracle/12.2.0.1/grid
Version:12.2.0.1.0


==Following patches were SKIPPED:

Patch: /oswatcher/2020_patch/30116802/26839277
Reason: This patch is already been applied, so not going to apply again.


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /oswatcher/2020_patch/30116802/30122814
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-21-43AM_1.log

Patch: /oswatcher/2020_patch/30116802/30122828
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-21-43AM_1.log

Patch: /oswatcher/2020_patch/30116802/30093408
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-21-43AM_1.log

Patch: /oswatcher/2020_patch/30116802/30138470
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-21-43AM_1.log



OPatchauto session completed at Sat Feb 15 11:21:56 2020
Time taken to complete the session 3 minutes, 29 seconds

 

To see the progress of the command:

tail -200f /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2020-02-15_11-20-56AM.log


 

Implementation
---------------

1) Set the proper PATH and apply the PSU with opatchauto as the root user.

Example:

[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] ls -ltr /oswatcher/2020_patch/30116802
total 272
drwxr-x---    4 oracrs   oinstall        256 Sep 27 20:08 26839277
drwxr-x---    5 oracrs   oinstall        256 Sep 27 20:08 30122828
drwxr-x---    4 oracrs   oinstall        256 Sep 27 20:08 30093408
drwxr-x---    4 oracrs   oinstall        256 Sep 27 20:08 30138470
drwxr-x---    2 oracrs   oinstall       4096 Sep 27 20:12 automation
drwxr-x---    5 oracrs   oinstall        256 Sep 27 20:12 30122814
-rw-r-----    1 oracrs   oinstall          0 Sep 27 20:18 README.txt
-rw-r-----    1 oracrs   oinstall     124674 Sep 27 21:46 README.html
-rw-rw-r--    1 oracrs   oinstall       5828 Sep 27 22:16 bundle.xml
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] chmod -R 775 /oswatcher/2020_patch
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]  ls -ltr /oswatcher/2020_patch/30116802
total 272
drwxrwxr-x    4 oracrs   oinstall        256 Sep 27 20:08 26839277
drwxrwxr-x    5 oracrs   oinstall        256 Sep 27 20:08 30122828
drwxrwxr-x    4 oracrs   oinstall        256 Sep 27 20:08 30093408
drwxrwxr-x    4 oracrs   oinstall        256 Sep 27 20:08 30138470
drwxrwxr-x    2 oracrs   oinstall       4096 Sep 27 20:12 automation
drwxrwxr-x    5 oracrs   oinstall        256 Sep 27 20:12 30122814
-rwxrwxr-x    1 oracrs   oinstall          0 Sep 27 20:18 README.txt
-rwxrwxr-x    1 oracrs   oinstall     124674 Sep 27 21:46 README.html
-rwxrwxr-x    1 oracrs   oinstall       5828 Sep 27 22:16 bundle.xml
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] ls -ltr /u02/oracle/12.2.0.1/grid
total 6284552
-rw-r-----    1 oracrs   oinstall        500 Feb 06 2013  welcome.html
-rw-r-----    1 oracrs   oinstall         59 Oct 08 2014  oraInst.loc
-rw-r--r--    1 oracrs   oinstall        852 Sep 02 2015  env.ora
-rwxr-x---    1 oracrs   oinstall        628 Sep 05 2015  runcluvfy.sh
-rwxr-x---    1 oracrs   oinstall       5395 Jul 21 2016  gridSetup.sh
drwxr-xr-x    3 oracrs   oinstall        256 Apr 02 2017  slax
-rwxrwxr-x    1 oracrs   oinstall  170920687 Feb 13 14:40 p6880880_122010_AIX64-5L.zip
drwxr-xr-x    2 oracrs   oinstall       4096 Feb 15 11:11 dbs
drwx------    8 oracrs   oinstall       4096 Feb 15 11:17 .patch_storage
drwxrwxr-x    3 oracrs   oinstall        256 Feb 15 11:18 opatchautocfg
drwxrwxr-x    8 oracrs   oinstall       8192 Feb 15 11:19 cfgtoollogs
drwxr-x---    2 oracrs   oinstall        256 Feb 15 11:21 .opatchauto_storage
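Optionally, before the actual apply, opatchauto can run a read-only prerequisite check with the -analyze flag; nothing is patched and the clusterware stack stays up. A sketch using the same paths as this session:

```shell
# Dry-run prerequisite check only; no patches are applied and CRS is not stopped
cd /u02/oracle/12.2.0.1/grid/OPatch
./opatchauto apply /oswatcher/2020_patch/30116802 -analyze -oh /u02/oracle/12.2.0.1/grid
```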



[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:] ./opatchauto apply /oswatcher/2020_patch/30116802 -oh /u02/oracle/12.2.0.1/grid

OPatchauto session is initiated at Sat Feb 15 11:23:59 2020

System initialization log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-15_11-24-16AM.log.

Session log file is /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2020-02-15_11-25-54AM.log
The id for this session is PN8U

Executing OPatch prereq operations to verify patch applicability on home /u02/oracle/12.2.0.1/grid
Patch applicability verified successfully on home /u02/oracle/12.2.0.1/grid


Bringing down CRS service on home /u02/oracle/12.2.0.1/grid
Prepatch operation log file location: /u02/oracle/PPT/crsdata/Test server/crsconfig/crspatch_Test server_2020-02-15_11-26-54AM.log
CRS service brought down successfully on home /u02/oracle/12.2.0.1/grid


Start applying binary patch on home /u02/oracle/12.2.0.1/grid
Successfully executed command: /usr/sbin/slibclean

Binary patch applied successfully on home /u02/oracle/12.2.0.1/grid


Starting CRS service on home /u02/oracle/12.2.0.1/grid

Postpatch operation log file location: /u02/oracle/PPT/crsdata/Test server/crsconfig/crspatch_Test server_2020-02-15_11-41-29AM.log
CRS service started successfully on home /u02/oracle/12.2.0.1/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:Test server
CRS Home:/u02/oracle/12.2.0.1/grid
Version:12.2.0.1.0
Summary:

==Following patches were SKIPPED:

Patch: /oswatcher/2020_patch/30116802/26839277
Reason: This patch is already been applied, so not going to apply again.


==Following patches were SUCCESSFULLY applied:

Patch: /oswatcher/2020_patch/30116802/30093408
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-28-58AM_1.log

Patch: /oswatcher/2020_patch/30116802/30122814
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-28-58AM_1.log

Patch: /oswatcher/2020_patch/30116802/30122828
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-28-58AM_1.log

Patch: /oswatcher/2020_patch/30116802/30138470
Log: /u02/oracle/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-15_11-28-58AM_1.log



OPatchauto session completed at Sat Feb 15 11:54:12 2020
Time taken to complete the session 30 minutes, 14 seconds
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]
[Test server:root:/u02/oracle/12.2.0.1/grid/OPatch:]


Post implementation
-------------------

During the apply, opatchauto stops the clusterware stack, applies the binary patch to the clusterware home, and then restarts the clusterware automatically.
 

2) Validate the PSU as the Grid user using opatch lsinventory/lspatches. First check the clusterware versions:

crsctl query crs activeversion
crsctl query crs releaseversion
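After a successful patch, both commands should report the patched Grid home's version. Illustrative output for this 12.2.0.1 environment (the message format comes from crsctl):

```shell
crsctl query crs activeversion
#   Oracle Clusterware active version on the cluster is [12.2.0.1.0]
crsctl query crs releaseversion
#   Oracle High Availability Services release version on the local node is [12.2.0.1.0]
```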

 

To verify the patches applied to the clusterware and Oracle home:

cd /u02/oracle/12.2.0.1/grid/OPatch

./opatch lspatches

 

[oracrs@testserver OPatch]$ ./opatch lspatches
31704034;ACFS PATCH SET UPDATE 12.1.0.2.201020 (31704034)
31704029;OCW PATCH SET UPDATE 12.1.0.2.201020 (31704029)
31550110;Database Patch Set Update : 12.1.0.2.201020 (31550110)
26983807;WLM Patch Set Update: 12.1.0.2.180116 (26983807)

OPatch succeeded.

[oracrs@testserver OPatch]$

 

 

Check the status of CRS:
./crsctl check crs
ps -ef|grep d.bin
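On a healthy node these checks should report all daemons online. Illustrative output (the CRS message codes are the standard ones printed by crsctl):

```shell
./crsctl check crs
# CRS-4638: Oracle High Availability Services is online
# CRS-4537: Cluster Ready Services is online
# CRS-4529: Cluster Synchronization Services is online
# CRS-4533: Event Manager is online
ps -ef | grep d.bin   # ocssd.bin, crsd.bin, evmd.bin etc. should be running
```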



su - oracrs

Start the databases if they are not already running, and check the ACFS mount points:

srvctl start instance -d test1db -i test1db1
srvctl start instance -d test2db -i test2db1
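After the restart, the instance and ACFS status can be confirmed with srvctl. A sketch, reusing the database names above (df -g is the AIX form of df, matching this environment):

```shell
# Confirm the instances are up
srvctl status database -d test1db
srvctl status database -d test2db
# Confirm registered ACFS file systems are mounted
srvctl status filesystem
df -g | grep -i asm   # verify ACFS mount points are present
```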

Check the load balancing/sessions on both nodes using the queries below, then start applying the patch on the other node.



Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> set linesize 200
SQL> select name,open_mode,to_char(startup_time,'dd-mon-yyyy hh24:mi:ss') "DB Startup Time",host_name from v$database,v$instance;

NAME      OPEN_MODE            DB Startup Time               HOST_NAME
--------- -------------------- ----------------------------- ----------------------------------------------------------------
TESTDB  READ WRITE           15-feb-2020 09:05:08          Test server

SQL> select count(1),inst_id from gv$session group by inst_id;
