Thursday, 12 March 2020

11g Oracle RAC Database OCT 2019 PSU Patching /30070097/STEP BY STEP




Patch 30070097 - Oracle Grid Infrastructure Patch Set Update 11.2.0.4.191015


1) Check the OPatch version


Node1[/u01/oracle/RACDBTST/product/11.2.0.4/OPatch]$ opatch version
OPatch Version: 11.2.0.3.23
OPatch succeeded.



The version output of the previous command should be 11.2.0.3.21 or later.
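The dotted-version comparison can be scripted as a quick sanity check; below is a minimal sketch (the version_ge helper and its parsing are my own, not part of OPatch):

```shell
#!/bin/sh
# Compare two dotted version strings numerically, field by field.
# Returns 0 (true) when $1 >= $2. Helper name is illustrative only.
version_ge() {
    i=1
    while : ; do
        a=$(echo "$1" | cut -d. -f"$i")
        b=$(echo "$2" | cut -d. -f"$i")
        [ -z "$a" ] && [ -z "$b" ] && return 0    # all fields compared equal
        [ "${a:-0}" -gt "${b:-0}" ] && return 0
        [ "${a:-0}" -lt "${b:-0}" ] && return 1
        i=$((i + 1))
    done
}

# Example: validate the reported OPatch version against the minimum.
if version_ge "11.2.0.3.23" "11.2.0.3.21"; then
    echo "OPatch version OK"
else
    echo "OPatch too old - update from patch 6880880" >&2
fi
```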



2) Create the OCM response file as the instance owner

 Create ocm.rsp and set its permissions with chmod 775 ocm.rsp (go to $ORACLE_HOME/OPatch/ocm/bin; if the bin directory is empty, emocmrsp does not exist there).
   In that case, create the OCM response file with /u01/oracle/RACDBTST/product/11.2.0.4/ccr/bin/emocmrsp
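The lookup logic described above can be sketched in shell; the find_emocmrsp helper below is illustrative (not an Oracle tool) and simply prefers the OPatch ocm/bin copy before falling back to ccr/bin:

```shell
#!/bin/sh
# Hedged sketch: locate emocmrsp under an Oracle home, preferring
# OPatch/ocm/bin and falling back to ccr/bin when the former is empty.
find_emocmrsp() {
    for dir in "$1/OPatch/ocm/bin" "$1/ccr/bin"; do
        if [ -x "$dir/emocmrsp" ]; then
            echo "$dir/emocmrsp"
            return 0
        fi
    done
    echo "emocmrsp not found under $1" >&2
    return 1
}

# Usage (as the instance owner), then relax permissions on the response file:
#   cd "$(dirname "$(find_emocmrsp "$ORACLE_HOME")")"
#   ./emocmrsp
#   chmod 775 ocm.rsp
```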


3) Check for patch conflicts

 Run one-off patch conflict detection and resolution as below:
 /u01/oracle/RACDBTST/product/11.2.0.4/OPatch]$ opatch prereq CheckConflictAmongPatchesWithDetail -phBaseDir /staging/2020_rac_patch/30070097



4) Apply the patch

Run from the root user:
----------------
Node1:root:/u01/oracle/RACDBTST/product/11.2.0.4/OPatch:] ./opatch auto /staging/2020_rac_patch_new/30070097 -oh /u01/oracle/RACDBTST/product/11.2.0.4 -ocmrf /u01/oracle/RACDBTST/product/11.2.0.4/ccr/bin/ocm.rsp
Executing /u02/oracle/11.2.0.4/grid/perl/bin/perl /u01/oracle/RACDBTST/product/11.2.0.4/OPatch/crs/patch11203.pl -patchdir /staging/2020_rac_patch_new -patchn 30070097 -oh /u01/oracle/RACDBTST/product/11.2.0.4 -ocmrf /u01/oracle/RACDBTST/product/11.2.0.4/ccr/bin/ocm.rsp -paramfile /u02/oracle/11.2.0.4/grid/crs/install/crsconfig_params
This is the main log file: /u01/oracle/RACDBTST/product/11.2.0.4/cfgtoollogs/opatchauto2020-03-11_10-43-43.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/oracle/RACDBTST/product/11.2.0.4/cfgtoollogs/opatchauto2020-03-11_10-43-43.report.log
2020-03-11 10:43:43: Starting Clusterware Patch Setup
Using configuration parameter file: /u02/oracle/11.2.0.4/grid/crs/install/crsconfig_params

2020-03-11 10:43:43: Starting Clusterware Patch Setup
Using configuration parameter file: /u02/oracle/11.2.0.4/grid/crs/install/crsconfig_params
Unable to determine if /u01/oracle/RACDBTST/product/11.2.0.4 is shared oracle home
Enter 'yes' if this is not a shared home or if the prerequiste actions are performed to patch this shared home (yes/no):



Enter 'yes' at the prompt.

5) Start the database instance if it does not start automatically

srvctl start instance -d RACDBTST -i RACDBTST1
Node1[/home/RACDBTST]$ srvctl start instance -d RACDBTST -i RACDBTST1
Node1[/home/RACDBTST]$ ps -ef |grep pmon
  oracrs 65470786        1   0   Mar 10      -  0:09 asm_pmon_+ASM1
RACDBTST 66322908        1   0 10:51:22      -  0:00 ora_pmon_RACDBTST1
Node1[/home/RACDBTST]$


Repeat the same steps on the other node.


6) Finally, load the modified SQL files into the database


Stop one instance and run @catbundle.sql after completing the pre-checks at the database level.



1.2.5.2 Loading Modified SQL Files into the Database

The following steps load modified SQL files into the database. For an Oracle RAC environment, perform these steps on only one node.

    For each database instance running on the Oracle home being patched, connect to the database using SQL*Plus. Connect as SYSDBA and run the catbundle.sql script as follows:

    cd $ORACLE_HOME/rdbms/admin
    sqlplus /nolog
    SQL> CONNECT / AS SYSDBA
    SQL> STARTUP
    SQL> @catbundle.sql psu apply
    SQL> QUIT

    The catbundle.sql execution is reflected in the dba_registry_history view by a row associated with bundle series PSU.

    For information about the catbundle.sql script, see My Oracle Support Document 605795.1 Introduction to Oracle Database catbundle.sql.

    If the OJVM PSU was applied for a previous GI PSU patch, you may see invalid Java classes after execution of the catbundle.sql script in the previous step. If this is the case, run utlrp.sql to re-validate these Java classes.

    cd $ORACLE_HOME/rdbms/admin
    sqlplus /nolog
    SQL> CONNECT / AS SYSDBA
    SQL> @utlrp.sql

    Check the following log files in $ORACLE_BASE/cfgtoollogs/catbundle for any errors:

    catbundle_PSU_<database SID>_APPLY_<TIMESTAMP>.log
    catbundle_PSU_<database SID>_GENERATE_<TIMESTAMP>.log

    where TIMESTAMP is of the form YYYYMMMDD_HH_MM_SS. If there are errors, see Known Issues.
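The log check described above can be scripted; below is a minimal sketch, where the scan_catbundle_logs helper and the error patterns (ORA-, SP2-) are my own and should be tuned to your environment:

```shell
#!/bin/sh
# Scan catbundle APPLY/GENERATE logs for common error markers.
# Returns 0 when no errors are found. A sketch, not an official check.
scan_catbundle_logs() {
    dir="$1"
    found=0
    for f in "$dir"/catbundle_PSU_*_APPLY_*.log "$dir"/catbundle_PSU_*_GENERATE_*.log; do
        [ -f "$f" ] || continue                   # skip unmatched glob patterns
        if grep -E 'ORA-[0-9]+|SP2-[0-9]+|[Ee]rror' "$f"; then
            echo "errors found in $f" >&2
            found=1
        fi
    done
    return $found
}

# Usage:
#   scan_catbundle_logs "$ORACLE_BASE/cfgtoollogs/catbundle" || echo "review Known Issues"
```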

    This patch now includes the OJVM Mitigation patch (Patch:19721304). If an OJVM PSU is installed or planned to be installed, no further actions are necessary. Otherwise, the workaround of using the OJVM Mitigation patch can be activated. As SYSDBA do the following from the admin directory:

    SQL > @dbmsjdev.sql
    SQL > exec dbms_java_dev.disable

    For more information on the OJVM mitigation patch, see Document 1929745.1 Oracle Recommended Patches -- "Oracle JavaVM Component Database PSU and Update" (OJVM PSU and OJVM Update) Patches.

1.2.5.3 Upgrade Oracle Recovery Manager Catalog

If you are using the Oracle Recovery Manager, the catalog needs to be upgraded. Enter the following command to upgrade it:

$ rman catalog username/password@alias
RMAN> UPGRADE CATALOG;





Pre check at database level

=================================


set lines 132
set pages 9999
SET FEEDBACK OFF
SET ECHO OFF
SET HEADING ON
SET VERIFY ON
SET TERMOUT OFF
SET TRIMSPOOL ON


spool  /oracle/OCT19_12_2_PSU/b2b47ppt_prechecks.log

set linesize 200 pages 200

select name ,open_mode from v$database
/

select * from v$version
/

archive log list

sho parameter control

sho parameter pfile

select owner, object_name, object_type,status from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID'
/

select owner, object_name, object_type,status from dba_objects where status like 'INVALID'
/

select count(1) from dba_objects where status like 'INVALID'
/

set lines 200 pages 200
col action_time format a40
col action format a30
col namespace format a10
col comments format a30
col version format a13
set lin 200 head on feed on pagesize 100


select i.instance_name,r.* from v$instance i, registry$history r
/

set lines 200 pages 200

column comp_name format a50

SELECT comp_id,comp_name, status, version FROM dba_registry
/


set lines 200 pages 200
col action_time format a40
col DESCRIPTION format a60

select PATCH_ID,SOURCE_VERSION,ACTION_TIME,STATUS,ACTION,DESCRIPTION from dba_registry_sqlpatch
/

select * from dba_registry_sqlpatch
/

spool off

exit



cd $ORACLE_HOME/rdbms/admin

@utlrp.sql

select to_char(action_time,'DD-MON-YYYY') as action_time_2, patch_id, patch_uid, action, version,  description from dba_registry_sqlpatch order by action_time;


set head on feed on pagesize 200 linesize 200
select owner, object_name, object_type from dba_objects where owner in ('SYS','SYSTEM') and status like 'INVALID';


11g Oracle Clusterware OCT 2019 PSU Patching /30070097/GRID INFRASTRUCTURE PATCH STEP BY STEP





Patch 30070097 - Oracle Grid Infrastructure Patch Set Update 11.2.0.4.191015

You must use the OPatch utility version 11.2.0.3.21 or later to apply this patch

This guide covers both the opatch auto and manual methods.

Per Oracle Support's instruction, use opatch auto first; if it fails, fall back to manual patching.


Patch details
==============
Patch 30070097 - Oracle Grid Infrastructure Patch Set Update 11.2.0.4.191015

This patch applies to both the Grid home and the RAC database home.


Prerequisite
============

1) Download Patch 30070097 - Oracle Grid Infrastructure Patch Set Update 11.2.0.4.191015 from Oracle Support
2) Check free space in the cluster home (approximately 22 GB free) and in the /tmp file system
   Check for ACFS file systems on the node; if present, unmount them before stopping or patching the clusterware
3) Check clusterware and database component status
 ps -ef|grep pmon
 ps -ef|grep tns
 crsctl check crs
crsctl status resource -t
 crsctl query crs activeversion
 crsctl query crs releaseversion
 crsctl query crs softwareversion
./crsctl check crs
 ps -ef|grep d.bin

4) Check the OPatch version; it is recommended to use the latest OPatch. Download it from Oracle Support (patch 6880880) and unzip it in the Grid home after renaming the old OPatch folder.
   You should use OPatch 11.2.0.3.21 or later to apply this patch on all platforms.
5) Create ocm.rsp and set its permissions with chmod 775 ocm.rsp (go to $ORACLE_HOME/OPatch/ocm/bin; if the bin directory is empty, emocmrsp does not exist there).
   In that case, create the OCM response file with /u01/app/12.1.0/grid/ccr/bin/emocmrsp

6) Run opatch lsinventory from the Grid home to check the inventory for consistency and corruption
7) Unzip the downloaded patch as the Grid user and change permissions: chmod -R 775 30070097
8) Run one-off patch conflict detection and resolution as below:
 /u02/oracle/11.2.0.4/OPatch]$ opatch prereq CheckConflictAmongPatchesWithDetail -phBaseDir /staging/2020_rac_patch/30070097

9) Stop the database and clusterware on node1 and take a backup of the grid home and inventory
10) Start the clusterware but do not start the database
11) Apply the patch on the grid home using opatch auto from the root user (check the proper path; the clusterware should be up and running)
eg
./opatch auto /staging/2020_rac_patch/30070097 -oh /u02/oracle/11.2.0.4 -ocmrf /u02/oracle/11.2.0.4/ccr/bin/ocm.rsp
12) In case it hangs during "Installing Trace File Analyzer" (for an hour or more), use the manual method of patching

eg, in our case it hung for two hours
   
rwxr-x--- 27 oracrs oinstall 4096 Feb 28 07:58 .patch_storage
[Node1:root:/u02/oracle/11.2.0.4/OPatch:] ls -ltr /u02/oracle/11.2.0.4/ccr/bin/ocm.rsp
-rwxrwxr-x 1 oracrs oinstall 623 Feb 28 07:45 /u02/oracle/11.2.0.4/ccr/bin/ocm.rsp
[Node1:root:/u02/oracle/11.2.0.4/OPatch:] ls -ltr /staging/2020_rac_patch/30070097
total 424
drwxrwxr-x 5 oracrs oinstall 256 Jun 30 2019 29509309
drwxrwxr-x 26 oracrs oinstall 4096 Sep 16 07:34 29913194
-rwxrwxr-x 1 oracrs oinstall 116427 Sep 20 08:41 PatchSearch.xml
drwxrwxr-x 5 oracrs oinstall 256 Sep 20 16:43 29938455
-rwxrwxr-x 1 oracrs oinstall 0 Sep 20 17:24 README.txt
-rwxrwxr-x 1 oracrs oinstall 549 Sep 20 17:30 bundle.xml
-rwxrwxr-x 1 oracrs oinstall 0 Sep 20 17:30 atp_lfp
-rwxrwxr-x 1 oracrs oinstall 87111 Oct 09 11:09 README.html
[Node1:root:/u02/oracle/11.2.0.4/OPatch:] ./opatch auto /staging/2020_rac_patch/30070097 -oh /u02/oracle/11.2.0.4 -ocmrf /u02/oracle/11.2.0.4/ccr/bin/ocm.rsp
Executing /u02/oracle/11.2.0.4/perl/bin/perl /u02/oracle/11.2.0.4/OPatch/crs/patch11203.pl -patchdir /staging/2020_rac_patch -patchn 30070097 -oh /u02/oracle/11.2.0.4 -ocmrf /u02/oracle/11.2.0.4/ccr/bin/ocm.rsp -paramfile /u02/oracle/11.2.0.4/crs/install/crsconfig_params

This is the main log file: /u02/oracle/11.2.0.4/cfgtoollogs/opatchauto2020-02-28_08-05-08.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u02/oracle/11.2.0.4/cfgtoollogs/opatchauto2020-02-28_08-05-08.report.log

2020-02-28 08:05:08: Starting Clusterware Patch Setup
Using configuration parameter file: /u02/oracle/11.2.0.4/crs/install/crsconfig_params

Stopping CRS...

Stopped CRS successfully

patch /staging/2020_rac_patch/30070097/29938455 apply successful for home /u02/oracle/11.2.0.4
patch /staging/2020_rac_patch/30070097/29913194 apply successful for home /u02/oracle/11.2.0.4
patch /staging/2020_rac_patch/30070097/29509309 apply successful for home /u02/oracle/11.2.0.4

Starting CRS...
Installing Trace File Analyzer


13) Verify the patch details
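One hedged way to verify: capture opatch lsinventory output and confirm the three sub-patch IDs of this bundle (29938455, 29913194, 29509309) are present. The verify_patches helper below is a sketch, not an Oracle utility:

```shell
#!/bin/sh
# Check that each expected sub-patch ID appears in lsinventory output
# passed on stdin. Returns non-zero if any ID is missing.
verify_patches() {
    missing=0
    inv=$(cat)
    for id in 29938455 29913194 29509309; do
        if echo "$inv" | grep -q "$id"; then
            echo "patch $id: applied"
        else
            echo "patch $id: NOT FOUND" >&2
            missing=1
        fi
    done
    return $missing
}

# Usage (as the grid owner):
#   $ORACLE_HOME/OPatch/opatch lsinventory | verify_patches
#   crsctl query crs activeversion    # cross-check the CRS active version
```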




Detailed steps



Patch location
-----------
Node1[/staging]$ cd 2020_rac_patch
Node1[/staging/2020_rac_patch]$ ls -ltr
total 3609712
drwxrwxr-x    5 oracrs   oinstall        256 Sep 20 17:30 30070097
-rw-rw-r--    1 oracrs   oinstall     159922 Oct 15 12:32 PatchSearch.xml
-rwxrwxr-x    1 oracrs   oinstall  172439196 Feb 27 16:56 p6880880_112000_AIX64-5L.zip
-rwxrwxr-x    1 oracrs   oinstall 1675564537 Feb 27 16:58 p30070097_112040_AIX64-5L.zip
Node1[/staging/2020_rac_patch]$



1) Change the patch owner if it is not owned by the grid owner, unzip the patch, and set 775 permissions (precaution)

chown -R oracrs:oinstall 2020_rac_patch

chown oracrs:oinstall p30116802_122010_AIX64-5L.zip


Unzip as the grid owner and give 775 permissions

nft1x028[/staging/2020_rac_patch_new]$ ls -ltr
total 3272592
-rwxrwxr-x    1 oracrs   oinstall 1675564537 Mar 10 10:28 p30070097_112040_AIX64-5L.zip
nft1x028[/staging/2020_rac_patch_new]$  unzip p30070097_112040_AIX64-5L.zip

chmod -R 775 30116802

2) Check the OPatch version as the grid owner

Node1[/u02/oracle/11.2.0.4/OPatch]$
Node1[/u02/oracle/11.2.0.4/OPatch]$ opatch version
OPatch Version: 11.2.0.3.20

OPatch succeeded.
Node1[/u02/oracle/11.2.0.4/OPatch]$

    $ unzip <OPATCH-ZIP> -d <ORACLE_HOME>
    $ <ORACLE_HOME>/OPatch/opatch version

Unzip as the root user and change the owner and permissions


rwxr-x---    4 root     oinstall        256 Jan 30 2017  crf
drwxr-xr-x    3 root     oinstall        256 Jan 30 2017  osysmond
drwxr-xr-x    3 root     oinstall        256 Jan 30 2017  ologgerd
drwxr-xr-x    2 oracrs   oinstall       4096 Jan 30 2017  jlib
drwxrwxr-x    5 oracrs   oinstall        256 Jan 30 2017  cdata
drwxr-xr-x    7 oracrs   oinstall       4096 Jan 30 2017  install
drwxr-xr-x    4 root     system          256 Jan 30 2017  tfa
drwxr-x---   15 oracrs   oinstall       4096 Oct 10 2017  OPatch_old
drwxr-xr-x    3 oracrs   oinstall        256 Feb 05 2018  patch
drwxrwx---   15 oracrs   oinstall       4096 Feb 05 2018  inventory
drwxr-xr-x    4 oracrs   oinstall        256 Feb 05 2018  suptools
drwxrwxr-x    6 oracrs   oinstall        256 Dec 30 2018  log
drwxrwxr-x    6 oracrs   oinstall       4096 Feb 01 2019  cfgtoollogs
drwxr-xr-x   22 oracrs   oinstall       4096 Feb 01 2019  psu
drwxr-xr-x    3 root     oinstall      12288 Feb 01 2019  lib
drwxr-xr-x    9 oracrs   oinstall        256 Feb 01 2019  ccr
drwxr-xr-x    2 root     oinstall      24576 Feb 01 2019  bin
drwxr-x---   27 oracrs   oinstall       4096 Jun 28 2019  .patch_storage
drwxr-x---   15 root     system         4096 Dec 18 15:48 OPatch
drwxr-xr-x    2 oracrs   oinstall        256 Feb 19 02:35 dbs
-rwxr-xr-x    1 oracrs   oinstall  172439196 Mar 10 10:28 p6880880_112000_AIX64-5L.zip
[nft1x028:root:/u02/oracle/11.2.0.4/grid:]

[nft1x028:root:/u02/oracle/11.2.0.4/grid:] mv OPatch OPatch_old
[nft1x028:root:/u02/oracle/11.2.0.4/grid:] unzip p6880880_112000_AIX64-5L.zip
Archive:  p6880880_112000_AIX64-5L.zip


chown -R oracrs:oinstall OPatch
chmod 775 OPatch



The version output of the previous command should be 11.2.0.3.21 or later.

3) Validation of Oracle Inventory

Node1[/u02/oracle/11.2.0.4/OPatch/ocm]$ cd ..
Node1[/u02/oracle/11.2.0.4/OPatch]$ opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.23
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : /u02/oracle/11.2.0.4
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/11.2.0.4/oraInst.loc
OPatch version    : 11.2.0.3.23
OUI version       : 11.2.0.4.0
Log file location : /u02/oracle/11.2.0.4/cfgtoollogs/opatch/opatch2020-02-28_06-10-00AM_1.log

Lsinventory Output file location : /u02/oracle/11.2.0.4/cfgtoollogs/opatch/lsinv/lsinventory2020-02-28_06-10-00AM.txt
--------------------------------------------------------------------------------
Local Machine Information::
Hostname: Node1.retail2u.trcg.co.uk
ARU platform id: 212
ARU platform description:: IBM_AIX

Installed Top-level Products (1):

Oracle Grid Infrastructure 11g                                       11.2.0.4.0
There are 1 products installed in this Oracle Home.

or

Node1[/u02/oracle/11.2.0.4/OPatch]$ opatch lsinventory -detail -oh /u02/oracle/11.2.0.4

4) Create an OCM response file

Use the location below to create an OCM response file, in case emocmrsp does not exist in the OPatch ocm bin directory:

/u01/app/12.1.0/grid/ccr/bin/emocmrsp

Node1[/u02/oracle/11.2.0.4]$ cd ccr
Node1[/u02/oracle/11.2.0.4/ccr]$ ls -ltr
total 248
-rw-r--r--    1 oracrs   oinstall     117908 Mar 07 2012  README.pdf
drwxr-xr-x    3 oracrs   oinstall        256 Jan 29 2019  inventory
drwx------    3 oracrs   oinstall        256 Jan 29 2019  config
drwxr-----    2 oracrs   oinstall        256 Jan 29 2019  state
drwxr-----    3 oracrs   oinstall        256 Jan 29 2019  hosts
drwxr-xr-x    2 oracrs   oinstall       4096 Jan 29 2019  lib
drwxr-xr-x    2 oracrs   oinstall        256 Jan 29 2019  doc
drwxr-xr-x    2 oracrs   oinstall       4096 Jan 29 2019  bin
Node1[/u02/oracle/11.2.0.4/ccr]$ cd bin
Node1[/u02/oracle/11.2.0.4/ccr/bin]$ ls -ltr
total 336
-rw-r--r--    1 oracrs   oinstall        903 Mar 07 2012  strip_path.awk
-rwxr-----    1 oracrs   oinstall       1630 Mar 07 2012  ocmJarUtil
-rwx------    1 oracrs   oinstall       5881 Mar 07 2012  emSnapshotEnv
-rw-r--r--    1 oracrs   oinstall       3539 Mar 07 2012  OCMJarUtil.class
-rwx------    1 oracrs   oinstall      19169 Jan 29 2019  common
-rwx------    1 oracrs   oinstall      58374 Jan 29 2019  setupCCR
-rw-------    1 oracrs   oinstall      47285 Jan 29 2019  deployPackages
-rwx------    1 oracrs   oinstall       9063 Jan 29 2019  emocmrsp
-rwx------    1 oracrs   oinstall       6116 Jan 29 2019  lockfile
Node1[/u02/oracle/11.2.0.4/ccr/bin]$ emocmrsp
Node1[/u02/oracle/11.2.0.4/ccr/bin]$ id
uid=350(oracrs) gid=207(oinstall) groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node1[/u02/oracle/11.2.0.4/ccr/bin]$ emocmrsp
OCM Installation Response Generator 10.3.7.0.0 - Production
Copyright (c) 2005, 2012, Oracle and/or its affiliates.  All rights reserved.

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  Y
The OCM configuration response file (ocm.rsp) was successfully created.
Node1[/u02/oracle/11.2.0.4/ccr/bin]$ ls -ltr
total 344
-rw-r--r--    1 oracrs   oinstall        903 Mar 07 2012  strip_path.awk
-rwxr-----    1 oracrs   oinstall       1630 Mar 07 2012  ocmJarUtil
-rwx------    1 oracrs   oinstall       5881 Mar 07 2012  emSnapshotEnv
-rw-r--r--    1 oracrs   oinstall       3539 Mar 07 2012  OCMJarUtil.class
-rwx------    1 oracrs   oinstall      19169 Jan 29 2019  common
-rwx------    1 oracrs   oinstall      58374 Jan 29 2019  setupCCR
-rw-------    1 oracrs   oinstall      47285 Jan 29 2019  deployPackages
-rwx------    1 oracrs   oinstall       9063 Jan 29 2019  emocmrsp
-rwx------    1 oracrs   oinstall       6116 Jan 29 2019  lockfile
-rw-r--r--    1 oracrs   oinstall        623 Feb 28 07:45 ocm.rsp
Node1[/u02/oracle/11.2.0.4/ccr/bin]$

and change permission to 775

Node1[/u02/oracle/11.2.0.4/ccr/bin]$ chmod 775 ocm.rsp
Node1[/u02/oracle/11.2.0.4/ccr/bin]$




5)  Run OPatch Conflict Check

cd /u02/oracle/11.2.0.4/OPatch

opatch prereq CheckConflictAmongPatchesWithDetail -phBaseDir /staging/2020_rac_patch/30070097

eg

Node1[/u02/oracle/11.2.0.4/ccr/bin]$ cd /u02/oracle/11.2.0.4/OPatch
Node1[/u02/oracle/11.2.0.4/OPatch]$ ps -ef|grep pmon
  oracrs 12583112        1   0 14:41:33      -  0:00 asm_pmon_+ASM2
  oracrs 13041776 19791982   0 14:45:18  pts/0  0:00 grep pmon
Node1[/u02/oracle/11.2.0.4/OPatch]$
Node1[/u02/oracle/11.2.0.4/OPatch]$ opatch prereq CheckConflictAmongPatchesWithDetail -phBaseDir /staging/2020_rac_patch/30070097
Oracle Interim Patch Installer version 11.2.0.3.23
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/11.2.0.4
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/11.2.0.4/oraInst.loc
OPatch version    : 11.2.0.3.23
OUI version       : 11.2.0.4.0
Log file location : /u02/oracle/11.2.0.4/cfgtoollogs/opatch/opatch2020-02-28_14-45-33PM_1.log

Invoking prereq "checkconflictamongpatcheswithdetail"

Prereq "checkConflictAmongPatchesWithDetail" passed.

OPatch succeeded.

6) Stop the database and clusterware on one node and take a backup of the grid home and inventory

From the grid user:
srvctl stop instance -d e2b11nft -i e2b11nft2 -o immediate

from root user

./crsctl stop crs

take backup from root user

Backup home

cd /u02/oracle/11.2.0.4

tar -cvf /staging/grid_home_database.tar .

du -sg  /staging/grid_home_database.tar

Backup for inventory

cd /u02/oracle/oraInventory

tar -cvf  /staging/grid_home_inventory.tar .
du -hP   /staging/grid_home_inventory.tar
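As a precaution, the tar backups can be sanity-checked before patching continues; the verify_tar helper below is a sketch that only confirms the archive is listable and non-empty, not a byte-for-byte comparison:

```shell
#!/bin/sh
# Sanity-check a tar backup: the archive must be listable and non-empty.
verify_tar() {
    count=$(tar -tf "$1" 2>/dev/null | wc -l | tr -d ' ')
    if [ "${count:-0}" -gt 0 ]; then
        echo "$1: $count entries"
        return 0
    fi
    echo "$1: unreadable or empty archive" >&2
    return 1
}

# Usage:
#   verify_tar /staging/grid_home_database.tar
#   verify_tar /staging/grid_home_inventory.tar
```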


7) Start the clusterware but do not start the database

from root user

./crsctl start crs

Double-check free space in the grid home and /tmp

Node1[/u02/oracle/11.2.0.4/OPatch]$ df -g .
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/u02          60.00     28.02   54%    82563     2% /u02
Node1[/u02/oracle/11.2.0.4/OPatch]$
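The free-space check can be automated; below is a minimal sketch that parses the Free column of AIX df -g output (the check_free_gb helper and the field position are assumptions; adjust for other platforms):

```shell
#!/bin/sh
# Warn when free space falls below a threshold in GB. Reads "df -g"
# style output on stdin and takes the Free column (field 3) of line 2.
check_free_gb() {
    min="$1"
    free=$(awk 'NR==2 {print int($3)}')
    free=${free:-0}
    if [ "$free" -lt "$min" ]; then
        echo "only ${free} GB free (need ${min} GB)" >&2
        return 1
    fi
    echo "${free} GB free - OK"
}

# Usage:
#   df -g /u02 | check_free_gb 22
```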

8) Apply the patch with opatch auto; it takes approximately 45 minutes. If it hangs during "Installing Trace File Analyzer", use the manual method.
Run from the root user:

cd /u02/oracle/11.2.0.4/OPatch

export PATH=$PATH:/u02/oracle/11.2.0.4/OPatch


./opatch auto /staging/2020_rac_patch/30070097 -oh /u02/oracle/11.2.0.4 -ocmrf /u02/oracle/11.2.0.4/ccr/bin/ocm.rsp

eg

[Node1:root:/u02/oracle/11.2.0.4/bin:]
[Node1:root:/u02/oracle/11.2.0.4/bin:]
[Node1:root:/u02/oracle/11.2.0.4/bin:]
[Node1:root:/u02/oracle/11.2.0.4/bin:] cd ..
[Node1:root:/u02/oracle/11.2.0.4/bin:] cd /u02/oracle/11.2.0.4/OPatch
[Node1:root:/u02/oracle/11.2.0.4/OPatch:]
[Node1:root:/u02/oracle/11.2.0.4/OPatch:] export PATH=$PATH:/u02/oracle/11.2.0.4/OPatch
[Node1:root:/u02/oracle/11.2.0.4/OPatch:] ls -ltr /u02/oracle/11.2.0.4/OPatch
total 288
drwxr-x---    5 oracrs   oinstall        256 Oct 26 2018  jre
-rw-r-----    1 oracrs   oinstall         27 Dec 18 15:42 version.txt
drwxr-x---    2 oracrs   oinstall        256 Dec 18 15:42 scripts
drwxr-x---    3 oracrs   oinstall        256 Dec 18 15:42 oplan
-rw-r-----    1 oracrs   oinstall       3177 Dec 18 15:42 operr_readme.txt
-rwxr-x---    1 oracrs   oinstall       4218 Dec 18 15:42 operr.bat
-rwxr-x---    1 oracrs   oinstall       3159 Dec 18 15:42 operr
-rwxr-x---    1 oracrs   oinstall       4290 Dec 18 15:42 opatch_env.sh
-rw-r-----    1 oracrs   oinstall       2551 Dec 18 15:42 opatch.pl
-rwxr-x---    1 oracrs   oinstall      16412 Dec 18 15:42 opatch.bat
-rwxr-x---    1 oracrs   oinstall      49269 Dec 18 15:42 opatch
drwxr-x---    5 oracrs   oinstall        256 Dec 18 15:42 ocm
-rwxr-x---    1 oracrs   oinstall      23550 Dec 18 15:42 emdpatch.pl
drwxr-x---    2 oracrs   oinstall        256 Dec 18 15:42 config
-rw-r-----    1 oracrs   oinstall       2915 Dec 18 15:42 README.txt
drwxr-x---    4 oracrs   oinstall        256 Dec 18 15:42 opatchprereqs
drwxr-x---    3 oracrs   oinstall       4096 Dec 18 15:42 crs
drwxr-x---    2 oracrs   oinstall       4096 Dec 18 15:42 jlib
drwxr-x---    3 oracrs   oinstall        256 Dec 18 15:48 plugins
drwxr-x---    3 oracrs   oinstall        256 Dec 18 15:48 oracle_common
drwxr-x---    9 oracrs   oinstall       4096 Dec 18 15:48 modules
drwxr-x---    2 oracrs   oinstall        256 Dec 18 15:48 docs
drwxr-x---    6 oracrs   oinstall        256 Dec 18 15:48 auto
[Node1:root:/u02/oracle/11.2.0.4/OPatch:] ls -ltr /u02/oracle/11.2.0.4/ccr/bin/ocm.rsp
-rwxrwxr-x    1 oracrs   oinstall        623 Feb 28 14:43 /u02/oracle/11.2.0.4/ccr/bin/ocm.rsp
[Node1:root:/u02/oracle/11.2.0.4/OPatch:] ls -ltr /u02/oracle/11.2.0.4
total 337040
drwxr-xr-x    5 oracrs   oinstall        256 Jan 26 2017  clone
drwxr-xr-x    4 root     system          256 Jan 26 2017  tfa
drwxrwxr-x    5 oracrs   oinstall        256 Jan 26 2017  cdata
drwxr-x---   15 oracrs   oinstall       4096 Oct 10 2017  OPatch_jan29
drwxr-xr-x    3 oracrs   oinstall        256 Feb 19 2018  patch
drwxrwx---   15 oracrs   oinstall       4096 Feb 19 2018  inventory
drwxr-xr-x    4 oracrs   oinstall        256 Feb 19 2018  suptools
drwxr-x---   15 oracrs   oinstall       4096 Jan 29 2019  OPatch_jan29.bkp
drwxrwxr-x    6 oracrs   oinstall       4096 Jan 29 2019  cfgtoollogs
drwxr-x---    3 oracrs   oinstall        256 Jan 29 2019  .opatchauto_storage
drwxr-xr-x   22 oracrs   oinstall       4096 Jan 29 2019  psu
drwxr-xr-x    3 root     oinstall      12288 Jan 29 2019  lib
drwxr-xr-x    9 oracrs   oinstall        256 Jan 29 2019  ccr
drwxr-xr-x    2 root     oinstall      24576 Jan 29 2019  bin
drwxrwxr-x    6 oracrs   oinstall        256 Apr 25 2019  log
drwxr-x---   15 oracrs   oinstall       4096 Dec 18 15:48 OPatch
drwxrwxr-x   15 oracrs   oinstall       4096 Feb 28 05:00 OPatch_od
-rwxr-xr-x    1 oracrs   oinstall  172439196 Feb 28 05:11 p6880880_112000_AIX64-5L.zip
drwxr-xr-x    2 oracrs   oinstall        256 Feb 28 14:41 dbs
drwxr-x---   27 oracrs   oinstall       4096 Feb 28 14:45 .patch_storage
[Node1:root:/u02/oracle/11.2.0.4/OPatch:] ls -ltr /staging/2020_rac_patch/30070097
total 424
drwxrwxr-x    5 oracrs   oinstall        256 Jun 30 2019  29509309
drwxrwxr-x   26 oracrs   oinstall       4096 Sep 16 07:34 29913194
-rwxrwxr-x    1 oracrs   oinstall     116427 Sep 20 08:41 PatchSearch.xml
drwxrwxr-x    5 oracrs   oinstall        256 Sep 20 16:43 29938455
-rwxrwxr-x    1 oracrs   oinstall          0 Sep 20 17:24 README.txt
-rwxrwxr-x    1 oracrs   oinstall        549 Sep 20 17:30 bundle.xml
-rwxrwxr-x    1 oracrs   oinstall          0 Sep 20 17:30 atp_lfp
-rwxrwxr-x    1 oracrs   oinstall      87111 Oct 09 11:09 README.html
[Node1:root:/u02/oracle/11.2.0.4/OPatch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[Node1:root:/u02/oracle/11.2.0.4/OPatch:] ./opatch auto /staging/2020_rac_patch/30070097 -oh /u02/oracle/11.2.0.4 -ocmrf /u02/oracle/11.2.0.4/ccr/bin/ocm.rsp
Executing /u02/oracle/11.2.0.4/perl/bin/perl /u02/oracle/11.2.0.4/OPatch/crs/patch11203.pl -patchdir /staging/2020_rac_patch -patchn 30070097 -oh /u02/oracle/11.2.0.4 -ocmrf /u02/oracle/11.2.0.4/ccr/bin/ocm.rsp -paramfile /u02/oracle/11.2.0.4/crs/install/crsconfig_params

This is the main log file: /u02/oracle/11.2.0.4/cfgtoollogs/opatchauto2020-02-28_14-47-56.log

This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u02/oracle/11.2.0.4/cfgtoollogs/opatchauto2020-02-28_14-47-56.report.log

2020-02-28 14:47:56: Starting Clusterware Patch Setup
Using configuration parameter file: /u02/oracle/11.2.0.4/crs/install/crsconfig_params

Stopping CRS...
Stopped CRS successfully


patch /staging/2020_rac_patch/30070097/29938455  apply successful for home  /u02/oracle/11.2.0.4
patch /staging/2020_rac_patch/30070097/29913194  apply successful for home  /u02/oracle/11.2.0.4
patch /staging/2020_rac_patch/30070097/29509309  apply successful for home  /u02/oracle/11.2.0.4

Starting CRS...
Installing Trace File Analyzer

CRS-4123: Oracle High Availability Services has been started.

opatch auto succeeded.
[Node1:root:/u02/oracle/11.2.0.4/OPatch:]
[Node1:root:/u02/oracle/11.2.0.4/OPatch:]
[Node1:root:/u02/oracle/11.2.0.4/OPatch:]
[Node1:root:/u02/oracle/11.2.0.4/OPatch:]
[Node1:root:/u02/oracle/11.2.0.4/OPatch:] ps -ef|grep pmon
  oracrs 16908296        1   0 15:09:25      -  0:00 asm_pmon_+ASM2
[Node1:root:/u02/oracle/11.2.0.4/OPatch:]


Node1[/home/oracrs]$ .
Node1[/home/oracrs]$ srvctl start instance -d e2b11nft -i e2b11nft2
Node1[/home/oracrs]$
Node1[/home/oracrs]$
Node1[/home/oracrs]$ ps -ef|grep pmon
e2b11nft 13566000        1   0 15:17:58      -  0:00 ora_pmon_e2b11nft2
  oracrs 16908296        1   0 15:09:25      -  0:00 asm_pmon_+ASM2
Node1[/home/oracrs]$



-----------Manual Method-----




    Apply the CRS patch as follows.

    As the GI home owner execute:

    $ <GI_HOME>/OPatch/opatch napply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/<GI_PSU_number>/<OCW Components_number>

    As the GI home owner execute:

    $ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/<GI_PSU_number>/<ACFS Components_number>

    As the GI home owner execute:

    $ <GI_HOME>/OPatch/opatch apply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/<GI_PSU_number>/<DB_PSU_number>




The database and clusterware are down on one node due to the hang issue during "Installing Trace File Analyzer".

Oracle note:
Oracle Grid Infrastructure 11.2.0.4.x Patch Set Update SUPPLEMENTAL README (Doc ID 1641136.1)
Manual Steps for Apply/Rollback Patch

eg

from root user

cd /u02/oracle/11.2.0.4/crs/install
./rootcrs.pl -unlock

from grid user

cd /u02/oracle/11.2.0.4/OPatch
export PATH=$PATH:/u02/oracle/11.2.0.4/OPatch
opatch napply -oh /u02/oracle/11.2.0.4 -local /staging/2020_rac_patch/30070097/29938455
opatch apply -oh /u02/oracle/11.2.0.4 -local /staging/2020_rac_patch/30070097/29913194
opatch apply -oh /u02/oracle/11.2.0.4 -local /staging/2020_rac_patch/30070097/29509309


Root user

/u02/oracle/11.2.0.4/crs/install/rootcrs.pl -patch
start crs on node1
start database on node1


Details
-------

We encountered the error below, but we can go ahead with the next steps.

From root user

[Node1:root:/home/root:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[Node1:root:/home/root:] cd /u02/oracle/11.2.0.4/crs/install
[Node1:root:/u02/oracle/11.2.0.4/crs/install:] ./rootcrs.pl -unlock
Using configuration parameter file: ./crsconfig_params
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Successfully unlock /u02/oracle/11.2.0.4
[Node1:root:/u02/oracle/11.2.0.4/crs/install:]

Set the environment and apply the patches one by one as the grid user
----

Node1[/home/oracrs]$ cd /u02/oracle/11.2.0.4/OPatch
Node1[/u02/oracle/11.2.0.4/OPatch]$ export PATH=$PATH:/u02/oracle/11.2.0.4/OPatch
Node1[/u02/oracle/11.2.0.4/OPatch]$ ls -ltr /u02/oracle/11.2.0.4/OPatch
total 288
drwxr-x---    5 oracrs   oinstall        256 Oct 26 2018  jre
-rw-r-----    1 oracrs   oinstall         27 Dec 18 15:42 version.txt
drwxr-x---    2 oracrs   oinstall        256 Dec 18 15:42 scripts
drwxr-x---    3 oracrs   oinstall        256 Dec 18 15:42 oplan
-rw-r-----    1 oracrs   oinstall       3177 Dec 18 15:42 operr_readme.txt
-rwxr-x---    1 oracrs   oinstall       4218 Dec 18 15:42 operr.bat
-rwxr-x---    1 oracrs   oinstall       3159 Dec 18 15:42 operr
-rwxr-x---    1 oracrs   oinstall       4290 Dec 18 15:42 opatch_env.sh
-rw-r-----    1 oracrs   oinstall       2551 Dec 18 15:42 opatch.pl
-rwxr-x---    1 oracrs   oinstall      16412 Dec 18 15:42 opatch.bat
-rwxr-x---    1 oracrs   oinstall      49269 Dec 18 15:42 opatch
drwxr-x---    5 oracrs   oinstall        256 Dec 18 15:42 ocm
-rwxr-x---    1 oracrs   oinstall      23550 Dec 18 15:42 emdpatch.pl
drwxr-x---    2 oracrs   oinstall        256 Dec 18 15:42 config
-rw-r-----    1 oracrs   oinstall       2915 Dec 18 15:42 README.txt
drwxr-x---    4 oracrs   oinstall        256 Dec 18 15:42 opatchprereqs
drwxr-x---    3 oracrs   oinstall       4096 Dec 18 15:42 crs
drwxr-x---    2 oracrs   oinstall       4096 Dec 18 15:42 jlib
drwxr-x---    3 oracrs   oinstall        256 Dec 18 15:48 plugins
drwxr-x---    3 oracrs   oinstall        256 Dec 18 15:48 oracle_common
drwxr-x---    9 oracrs   oinstall       4096 Dec 18 15:48 modules
drwxr-x---    2 oracrs   oinstall        256 Dec 18 15:48 docs
drwxr-x---    6 oracrs   oinstall        256 Dec 18 15:48 auto
Node1[/u02/oracle/11.2.0.4/OPatch]$ id
uid=350(oracrs) gid=207(oinstall) groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
Node1[/u02/oracle/11.2.0.4/OPatch]$ ls -ltr  /u02/oracle/11.2.0.4
total 337016
-rw-r-----    1 oracrs   oinstall         59 May 09 2013  oraInst.loc
drwxr-xr-x    3 oracrs   oinstall        256 Jan 26 2017  csmig
drwxr-xr-x    3 oracrs   oinstall        256 Jan 26 2017  demo
drwxrwxr-x    6 oracrs   oinstall        256 Dec 28 2018  log
drwxr-xr-x    9 oracrs   oinstall        256 Jan 29 2019  ccr
drwxr-x---   15 oracrs   oinstall       4096 Dec 18 15:48 OPatch
-rwxr-xr-x    1 oracrs   oinstall  172439196 Feb 28 05:25 p6880880_112000_AIX64-5L.zip
drwxrwxr-x    7 oracrs   oinstall       4096 Feb 28 08:05 cfgtoollogs
drwxr-xr-x    2 oracrs   oinstall        256 Feb 28 08:10 dbs
drwxr-xr-x   26 oracrs   oinstall       4096 Feb 28 08:18 psu
drwxr-xr-x    3 oracrs   oinstall      12288 Feb 28 08:18 lib
drwxr-xr-x    2 oracrs   oinstall      16384 Feb 28 08:19 bin
drwxr-xr-x    4 root     system          256 Feb 28 08:19 tfa
Node1[/u02/oracle/11.2.0.4/OPatch]$ ls -ltr /staging/2020_rac_patch/30070097/29938455
total 48
drwxrwxr-x    4 oracrs   oinstall        256 Sep 20 16:42 etc
drwxrwxr-x    4 oracrs   oinstall        256 Sep 20 16:42 custom
-rwxrwxr-x    1 oracrs   oinstall      18477 Sep 20 16:43 README.txt
drwxrwxr-x   15 oracrs   oinstall       4096 Sep 20 16:47 files
Node1[/u02/oracle/11.2.0.4/OPatch]$ ls -ltr /staging/2020_rac_patch/30070097/29913194
total 144
-rwxrwxr-x    1 oracrs   oinstall         21 Sep 16 07:30 README.txt
drwxrwxr-x    5 oracrs   oinstall        256 Sep 16 07:31 29497421
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:31 29141056
drwxrwxr-x    5 oracrs   oinstall        256 Sep 16 07:31 28729262
drwxrwxr-x    5 oracrs   oinstall        256 Sep 16 07:31 28204707
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:31 27734982
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:31 27338049
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 26925576
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 26609445
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 26392168
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 25869727
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 24732075
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 24006111
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 23054359
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 22502456
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 21948347
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 21352635
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 20760982
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 20299013
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:32 19769489
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:33 19121551
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:33 18522509
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:33 18031668
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:33 17478514
drwxrwxr-x    4 oracrs   oinstall        256 Sep 16 07:34 29913194
-rwxrwxr-x    1 oracrs   oinstall       3073 Sep 16 07:38 patchmd.xml
-rwxrwxr-x    1 oracrs   oinstall      64672 Oct 09 11:09 README.html
Node1[/u02/oracle/11.2.0.4/OPatch]$ ls -lt
Node1[/u02/oracle/11.2.0.4/OPatch]$ ls -ltr /staging/2020_rac_patch/30070097/29509309
total 16
drwxrwxr-x    4 oracrs   oinstall        256 Jun 30 2019  etc
drwxrwxr-x    3 oracrs   oinstall        256 Jun 30 2019  custom
-rwxrwxr-x    1 oracrs   oinstall       5971 Jun 30 2019  README.txt
drwxrwxr-x    6 oracrs   oinstall        256 Jun 30 2019  files
Node1[/u02/oracle/11.2.0.4/OPatch]$ opatch napply -oh /u02/oracle/11.2.0.4 -local /staging/2020_rac_patch/30070097/29938455
Oracle Interim Patch Installer version 11.2.0.3.23
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : /u02/oracle/11.2.0.4
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/11.2.0.4/oraInst.loc
OPatch version    : 11.2.0.3.23
OUI version       : 11.2.0.4.0
Log file location : /u02/oracle/11.2.0.4/cfgtoollogs/opatch/opatch2020-02-28_13-40-03PM_1.log

Verifying environment and performing prerequisite checks...

The following patch(es) are duplicate patches with patches installed in the Oracle Home.
 [ 29938455]
You have already installed same patch(es) with same UPI(s) or same version(s).
These patch(es) will be skipped.


OPatch Session completed with warnings.
Log file location: /u02/oracle/11.2.0.4/cfgtoollogs/opatch/opatch2020-02-28_13-40-03PM_1.log

It completed with warnings since the patch may already have been applied during opatch auto.


OPatch completed with warnings.
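When opatch reports duplicate patches like this, it is worth confirming that the sub-patch really is in the inventory with `opatch lsinventory`. A minimal sketch of that check (the `patch_installed` helper is hypothetical, not an OPatch feature; it assumes lsinventory lines of the form `Patch  NNNNNNNN : applied on ...`):

```shell
# Hypothetical helper (not part of OPatch): returns success if a patch ID
# appears in lsinventory-style text passed as the second argument.
patch_installed() {
  patch_id="$1"
  inventory_text="$2"
  printf '%s\n' "$inventory_text" | grep -q "Patch  *${patch_id}"
}

# Against a real home you would feed it live output, e.g.:
#   patch_installed 29938455 "$(opatch lsinventory)"
```
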
Node1[/u02/oracle/11.2.0.4/OPatch]$ opatch apply -oh /u02/oracle/11.2.0.4 -local /staging/2020_rac_patch/30070097/29913194
Oracle Interim Patch Installer version 11.2.0.3.23
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : /u02/oracle/11.2.0.4
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/11.2.0.4/oraInst.loc
OPatch version    : 11.2.0.3.23
OUI version       : 11.2.0.4.0
Log file location : /u02/oracle/11.2.0.4/cfgtoollogs/opatch/opatch2020-02-28_13-41-07PM_1.log

Verifying environment and performing prerequisite checks...
All of the sub-patch(es) of the composite patch are already installed in the Oracle Home. No need to apply this patch.
Log file location: /u02/oracle/11.2.0.4/cfgtoollogs/opatch/opatch2020-02-28_13-41-07PM_1.log

OPatch succeeded.

It completed successfully since all sub-patches were already applied during opatch auto.

Node1[/u02/oracle/11.2.0.4/OPatch]$ opatch apply -oh /u02/oracle/11.2.0.4 -local /staging/2020_rac_patch/30070097/29509309
Oracle Interim Patch Installer version 11.2.0.3.23
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : /u02/oracle/11.2.0.4
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/11.2.0.4/oraInst.loc
OPatch version    : 11.2.0.3.23
OUI version       : 11.2.0.4.0
Log file location : /u02/oracle/11.2.0.4/cfgtoollogs/opatch/opatch2020-02-28_13-41-56PM_1.log

Verifying environment and performing prerequisite checks...

The following patch(es) are duplicate patches with patches installed in the Oracle Home.
 [ 29509309]
You have already installed same patch(es) with same UPI(s) or same version(s).
These patch(es) will be skipped.


OPatch Session completed with warnings.
Log file location: /u02/oracle/11.2.0.4/cfgtoollogs/opatch/opatch2020-02-28_13-41-56PM_1.log

OPatch completed with warnings.

It completed with warnings since the patch may already have been applied during opatch auto.


Node1[/u02/oracle/11.2.0.4/OPatch]$

As root user, run:

Node1[/u02/oracle/11.2.0.4/OPatch]$
Node1[/u02/oracle/11.2.0.4/OPatch]$ exit
[Node1:root:/u02/oracle/11.2.0.4/crs/install:] /u02/oracle/11.2.0.4/crs/install/rootcrs.pl -patch
Using configuration parameter file: /u02/oracle/11.2.0.4/crs/install/crsconfig_params
Installing Trace File Analyzer

CRS-4123: Oracle High Availability Services has been started.


Clusterware is started automatically; we only need to start the database instances manually.
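Since opatch auto restarts the clusterware stack but not the database instances, each instance is started per node with srvctl. A dry-run sketch that only prints the commands for review (the database/instance names are the ones from this environment; substitute your own):

```shell
# Dry run: print the srvctl start command for each instance rather than
# executing it, so the commands can be reviewed first.
db=e2b11nft
for n in 1 2; do
  echo "srvctl start instance -d ${db} -i ${db}${n}"
done
```

Dropping the `echo` (or piping the output to `sh` as the oracrs user) would actually run the commands.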

[Node1:root:/u02/oracle/11.2.0.4/crs/install:]
[Node1:root:/u02/oracle/11.2.0.4/crs/install:] ps -ef|grep pmon
  oracrs 10944568        1   0 13:48:58      -  0:00 asm_pmon_+ASM1
[Node1:root:/u02/oracle/11.2.0.4/crs/install:]
[Node1:root:/u02/oracle/11.2.0.4/crs/install:]
[Node1:root:/u02/oracle/11.2.0.4/crs/install:] srvctl start instance -d e2b11nft -i e2b11nft1
ksh: srvctl:  not found.
[Node1:root:/u02/oracle/11.2.0.4/crs/install:] su p
[Node1:root:/u02/oracle/11.2.0.4/crs/install:] su - oracrs
Node1[/home/oracrs]$ srvctl start instance -d e2b11nft -i e2b11nft1
Node1[/home/oracrs]$
Node1[/home/oracrs]$
Node1[/home/oracrs]$ ps -ef|grep pmon
  oracrs 10944568        1   0 13:48:58      -  0:00 asm_pmon_+ASM1
e2b11nft 11534476        1   0 13:59:07      -  0:00 ora_pmon_e2b11nft1
  oracrs 30671008 33226884   0 13:59:59  pts/0  0:00 grep pmon
Node1[/home/oracrs]$








Post implementation
-------------------

1) During opatch auto, the clusterware stack is stopped, the patch is applied to the clusterware home, and clusterware is then restarted automatically.
2) Validate the PSU using "opatch lsinventory" as the Grid user

crsctl query crs activeversion
crsctl query crs releaseversion

check status of crs
./crsctl check crs
ps -ef|grep d.bin



su - oracrs

Start the database instances if not already started and check the ACFS mount points

srvctl start instance -d test1db -i test1db1
srvctl start instance -d test2db -i test2db1

e2b11nft2

and check load balancing
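Before the SQL-level load-balancing query, a quick OS-level check is that each node runs the expected pmon processes. A small sketch that counts database pmon entries in ps output (assumes the standard `ora_pmon_<SID>` naming convention; ASM pmon processes are deliberately excluded):

```shell
# Count database pmon processes in a ps listing; assumes the standard
# ora_pmon_<SID> process naming convention (ASM uses asm_pmon_<SID>).
count_db_pmon() {
  printf '%s\n' "$1" | grep -c 'ora_pmon_'
}

# Typical use on a node:
#   count_db_pmon "$(ps -ef)"
```
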



Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> set linesize 200
SQL> select name,open_mode,to_char(startup_time,'dd-mon-yyyy hh24:mi:ss') "DB Startup Time",host_name from v$database,v$instance;

NAME      OPEN_MODE            DB Startup Time               HOST_NAME
--------- -------------------- ----------------------------- ----------------------------------------------------------------
SAM44PPT  READ WRITE           15-feb-2020 09:05:08          heanor

SQL> select count(1),inst_id from gv$session group by inst_id;

  COUNT(1)    INST_ID





==========================march 10 2020============================================


srvctl stop instance -d sam11nft -i sam11nft2 -o immediate

srvctl start instance -d sam11nft -i sam11nft2

nft1x028[/staging/2020_rac_patch_new]$ ps -e
nft1x028[/staging/2020_rac_patch_new]$ ps -ef|grep pmon
sam11nft  5963942        1   0   Feb 19      -  6:23 ora_pmon_sam11nft2
  oracrs  6422930        1   0   Feb 19      -  4:21 asm_pmon_+ASM2
  oracrs  6816180 13107308   0 10:44:57  pts/0  0:00 grep pmon
nft1x028[/staging/2020_rac_patch_new]$


w-r--r--    1 oracrs   oinstall 35595284480 Mar 10 05:54 crshome.tar
[nft1x027:root:/staging:] cd 2020_rac_patch
[nft1x027:root:/staging/2020_rac_patch:] ls -ltr
total 3600072
drwxr-x---   15 oracrs   oinstall       4096 Oct 23 2018  OPatch
-rwxrwxr-x    1 oracrs   oinstall  167499907 Feb 05 2019  p6880880_112000_AIX64-5L.zip
drwxr-xr-x    5 oracrs   oinstall        256 Sep 20 17:30 30070097
-rw-rw-r--    1 oracrs   oinstall     159922 Oct 15 12:32 PatchSearch.xml
-rwx------    1 oracrs   oinstall 1675564537 Mar 09 16:07 p30070097_112040_AIX64-5L.zip
[nft1x027:root:/staging/2020_rac_patch:] cat
[nft1x027:root:/staging/2020_rac_patch:] ps -ef|grep pmon
  oracrs  9109748        1   0 07:09:59      -  0:00 asm_pmon_+ASM1
    root  9437292 34931150   0 08:38:38  pts/1  0:00 grep pmon
sam11nft 65470816        1   0 07:10:55      -  0:01 ora_pmon_sam11nft1
[nft1x027:root:/staging/2020_rac_patch:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[nft1x027:root:/staging/2020_rac_patch:] su - oracrs
nft1x027[/home/oracrs]$
nft1x027[/home/oracrs]$ srvctl stop instance -d sam11nft -i sam11nft1 -o immediate
nft1x027[/home/oracrs]$
nft1x027[/home/oracrs]$
nft1x027[/home/orac

rwxrwxr-x    7 oracrs   oinstall       4096 Mar 10 08:35 cfgtoollogs
[nft1x027:root:/u02/oracle/11.2.0.4/grid:] cd OPatch
[nft1x027:root:/u02/oracle/11.2.0.4/grid/OPatch:] cd ..
[nft1x027:root:/u02/oracle/11.2.0.4/grid:] cp
[nft1x027:root:/u02/oracle/11.2.0.4/grid:] mv OPatch OPatch_old
[nft1x027:root:/u02/oracle/11.2.0.4/grid:]


cp p6880880_112000_AIX64-5L.zip /u02/oracle/11.2.0.4/grid/


p6880880_112000_AIX64-5L.zip


chown -R oracrs:oinstall OPatch


rwxr-xr-x    2 sam11nft oinstall        256 Apr 18 2019  dpdump
drwxrwxrwx    4 oracrs   oinstall        256 Mar 09 16:20 2020_rac_patch
-rw-r--r--    1 oracrs   oinstall 35595284480 Mar 10 05:54 crshome.tar
[nft1x027:root:/staging:] cd 2020_rac_patch
[nft1x027:root:/staging/2020_rac_patch:] ls -ltr
total 3600072
drwxr-x---   15 oracrs   oinstall       4096 Oct 23 2018  OPatch
-rwxrwxr-x    1 oracrs   oinstall  167499907 Feb 05 2019  p6880880_112000_AIX64-5L.zip
drwxr-xr-x    5 oracrs   oinstall        256 Sep 20 17:30 30070097
-rw-rw-r--    1 oracrs   oinstall     159922 Oct 15 12:32 PatchSearch.xml
-rwx------    1 oracrs   oinstall 1675564537 Mar 09 16:07 p30070097_112040_AIX64-5L.zip
[nft1x027:root:/staging/2020_rac_patch:]





Node1[/home/oracrs]$ cd $ORACLE_HOME
Node1[/u02/oracle/11.2.0.4]$ cd OPatch
Node1[/u02/oracle/11.2.0.4/OPatch]$ ls
Node1[/u02/oracle/11.2.0.4/OPatch]$ opatch version
OPatch Version: 11.2.0.3.23

OPatch succeeded.
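The version printed above has to meet the documented minimum for this PSU (11.2.0.3.21 or later). Comparing dotted versions as plain strings is error-prone, so a numeric component-wise comparison helps; a sketch, assuming purely numeric dotted components:

```shell
# Returns success if dotted version $1 >= $2, comparing each numeric
# component with a version-aware sort.
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n -k5,5n | tail -1)" = "$1" ]
}

# Example against a live home:
#   version_ge "$(opatch version | awk '/OPatch Version/{print $3}')" 11.2.0.3.21
```
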
Node1[/u02/oracle/11.2.0.4/OPatch]$ cd /staging
Node1[/staging]$ ls -ltr
total 544
drwxr-xr-x    3 oracrs   oinstall        256 Jan 27 2015  11204_Grid
drwxr-xr-x    3 e2b11nft oinstall        256 Jan 28 2015  11204_DB
drwxr-xr-x    2 root     system          256 Nov 08 2016  lost+found
drwxr-xr-x    2 e2b11nft oinstall        256 Mar 12 2018  dpdump
-rw-r--r--    1 root     system       276480 Jan 29 2019  inventory.tar
drwxrwxrwx    3 oracrs   oinstall        256 Feb 28 05:07 2020_rac_patch
drwxrwxrwx    2 root     system          256 Feb 28 14:17 patches
Node1[/staging]$ cd 2020_rac_patch
Node1[/staging/2020_rac_patch]$ ls -ltr
total 3609712
drwxrwxr-x    5 oracrs   oinstall        256 Sep 20 17:30 30070097
-rw-rw-r--    1 oracrs   oinstall     159922 Oct 15 12:32 PatchSearch.xml
-rwxrwxr-x    1 oracrs   oinstall  172439196 Feb 27 16:56 p6880880_112000_AIX64-5L.zip
-rwxrwxr-x    1 oracrs   oinstall 1675564537 Feb 27 16:58 p30070097_112040_AIX64-5L.zip
Node1[/staging/2020_rac_patch]$

mkdir 2020_rac_patch_new

Node1[/staging/2020_rac_patch]$
Node1[/staging/2020_rac_patch]$ ls -ltr
total 3609712
drwxrwxr-x    5 oracrs   oinstall        256 Sep 20 17:30 30070097
-rw-rw-r--    1 oracrs   oinstall     159922 Oct 15 12:32 PatchSearch.xml
-rwxrwxr-x    1 oracrs   oinstall  172439196 Feb 27 16:56 p6880880_112000_AIX64-5L.zip
-rwxrwxr-x    1 oracrs   oinstall 1675564537 Feb 27 16:58 p30070097_112040_AIX64-5L.zip
Node1[/staging/2020_rac_patch]$


scp p6880880_112000_AIX64-5L.zip eibm994@nft1x027-adm:/staging/2020_rac_patch_new


scp p6880880_112000_AIX64-5L.zip eibm994@nft1x028-adm:/tmp/


scp p30070097_112040_AIX64-5L.zip eibm994@nft1x028-adm:/tmp/


wxr-xr-x    2 oracrs   oinstall        256 Mar 10 09:10 2020_rac_patch_new
nft1x027[/staging]$ chmod 775 2020_rac_patch_new
nft1x027[/staging]$

nft1x028[/home/oracrs]$ cd /u02/oracle/11.2.0.4/grid/Opatch
ksh: /u02/oracle/11.2.0.4/grid/Opatch:  not found.
nft1x028[/home/oracrs]$  cd /u02/oracle/11.2.0.4/grid/OPatch
nft1x028[/u02/oracle/11.2.0.4/grid/OPatch]$ opatch version
OPatch Version: 11.2.0.3.23

OPatch succeeded.
nft1x028[/u02/oracle/11.2.0.4/grid/OPatch]$ cd /u02/oracle/11.2.0.4/grid/ccr/bin
nft1x028[/u02/oracle/11.2.0.4/grid/ccr/bin]$ id
uid=350(oracrs) gid=207(oinstall) groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
nft1x028[/u02/oracle/11.2.0.4/grid/ccr/bin]$ ls -ltr
total 336
-rw-r--r--    1 oracrs   oinstall        903 Mar 07 2012  strip_path.awk
-rwxr-----    1 oracrs   oinstall       1630 Mar 07 2012  ocmJarUtil
-rwx------    1 oracrs   oinstall       5881 Mar 07 2012  emSnapshotEnv
-rw-r--r--    1 oracrs   oinstall       3539 Mar 07 2012  OCMJarUtil.class
-rwx------    1 oracrs   oinstall      19169 Feb 01 2019  common
-rwx------    1 oracrs   oinstall      58374 Feb 01 2019  setupCCR
-rw-------    1 oracrs   oinstall      47285 Feb 01 2019  deployPackages
-rwx------    1 oracrs   oinstall       9063 Feb 01 2019  emocmrsp
-rwx------    1 oracrs   oinstall       6116 Feb 01 2019  lockfile
nft1x028[/u02/oracle/11.2.0.4/grid/ccr/bin]$ emocmrsp
OCM Installation Response Generator 10.3.7.0.0 - Production
Copyright (c) 2005, 2012, Oracle and/or its affiliates.  All rights reserved.

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  Y
The OCM configuration response file (ocm.rsp) was successfully created.
nft1x028[/u02/oracle/11.2.0.4/grid/ccr/bin]$ chmod 775 ocm.rsp
nft1x028[/u02/oracle/11.2.0.4/grid/ccr/bin]$



The OCM configuration response file (ocm.rsp) was successfully created.
nft1x027[/u02/oracle/11.2.0.4/grid/ccr/bin]$ ls -ltr
total 344
-rw-r--r--    1 oracrs   oinstall        903 Mar 07 2012  strip_path.awk
-rwxr-----    1 oracrs   oinstall       1630 Mar 07 2012  ocmJarUtil
-rwx------    1 oracrs   oinstall       5881 Mar 07 2012  emSnapshotEnv
-rw-r--r--    1 oracrs   oinstall       3539 Mar 07 2012  OCMJarUtil.class
-rwx------    1 oracrs   oinstall      58374 Feb 01 2019  setupCCR
-rwx------    1 oracrs   oinstall      19169 Feb 01 2019  common
-rw-------    1 oracrs   oinstall      47285 Feb 01 2019  deployPackages
-rwx------    1 oracrs   oinstall       6116 Feb 01 2019  lockfile
-rwx------    1 oracrs   oinstall       9063 Feb 01 2019  emocmrsp
-rw-r--r--    1 oracrs   oinstall        623 Mar 10 09:35 ocm.rsp
nft1x027[/u02/oracle/11.2.0.4/grid/ccr/bin]$ chmod 775 ocm.rsp
nft1x027[/u02/oracle/11.2.0.4/grid/ccr/bin]$

opatch prereq CheckConflictAmongPatchesWithDetail -phBaseDir /staging/2020_rac_patch_new/30070097


rwxrwxr-x    1 oracrs   oinstall      87111 Oct 09 11:09 README.html
nft1x027[/u02/oracle/11.2.0.4/grid/OPatch]$ opatch prereq CheckConflictAmongPatchesWithDetail -phBaseDir /staging/2020_rac_patch_new/30070097
Oracle Interim Patch Installer version 11.2.0.3.23
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u02/oracle/11.2.0.4/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /u02/oracle/11.2.0.4/grid/oraInst.loc
OPatch version    : 11.2.0.3.23
OUI version       : 11.2.0.4.0
Log file location : /u02/oracle/11.2.0.4/grid/cfgtoollogs/opatch/opatch2020-03-10_09-46-24AM_1.log

Invoking prereq "checkconflictamongpatcheswithdetail"

Prereq "checkConflictAmongPatchesWithDetail" passed.

OPatch succeeded.
nft1x027[/u02/oracle/11.2.0.4/grid/OPatch]$


export PATH=$PATH:/u02/oracle/11.2.0.4/grid/OPatch



./opatch auto /staging/2020_rac_patch_new/30070097 -oh /u02/oracle/11.2.0.4/grid -ocmrf /u02/oracle/11.2.0.4/grid/ccr/bin/ocm.rsp

To add space on ASM in an HA Linux environment Step by Step

To add space on ASM in an HA Linux environment
----------------------------------------------
Environment Details

There is a two-node RAC database with a single-node standby in a Linux environment.

The request is to add 50 GB on the primary database servers and 50 GB on the standby database server.

We require a 50 GB LUN shared by Node1 and Node2 (the primary RAC database servers) and 50 GB on Node3 (the Data Guard node); we need 100 GB in total.


This is a non-disruptive activity.

High Level Plan
--------------------

The list of actions required to add disk space at the ASM level for RAC database TestRAC:

1)  50 GB new LUN allocation to the RAC nodes Node1 and Node2 - for RAC database TestRAC. (Storage Team)

2) 50 GB new LUN allocation to Node3 (For standby database) (Storage Team)
3) The allocated LUNs need to be attached to each server (OVM Team)
4) The disks need to be partitioned via "fdisk" on Node1 and Node3 (Linux Team)
5) After partitioning the disk, "partprobe /dev/xvdo" should be issued on Node2 (the second primary node) (Linux Team),

and likewise "oracleasm scandisks" should be issued on Node2 once "oracleasm createdisk" has been run on Node1.
This has much the same effect as "partprobe": it ensures the other nodes in the cluster see the changes made on the first node.

6)  As root, on the first node of the cluster only, use the oracleasm program to create the ASM disk, owned by oracrs:asmadmin. (DBA Team)
     The command is: oracleasm createdisk <disk name> <path>

7) Verify that the disk is available: # /usr/bin/oracleasm listdisks. The output should show all asm disks, including the new one (DBA team)
        "oracleasm listdisks"

8) On all other nodes of the cluster, as root, scan for the new oracle disk   (DBA team)
     oracleasm scandisks (second primary node)

9) Verify that the new disk is visible on both nodes (DBA team)
     "oracleasm listdisks | sed 's/^/oracleasm querydisk -p /' | sh "
      Should give same output on all cluster nodes.

10)  Add the disk to the diskgroup for database TestRAC (DBA team)
     The diskgroup in question is called "TestRAC_DATA" on both primary and standby; the new disk will be named "TestRAC_DATA3".
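The `listdisks | sed | sh` pipeline used for verification above is simply a command generator: each disk name is turned into an `oracleasm querydisk -p` command line, and piping the result to `sh` executes them. The transformation can be seen in isolation:

```shell
# The same transformation the verification pipeline performs, shown on
# plain text: each ASM disk name becomes a querydisk command line.
build_querydisk_cmds() {
  printf '%s\n' "$1" | sed 's/^/oracleasm querydisk -p /'
}

# Piping the result to sh, as in the verification step, executes each command:
#   oracleasm listdisks | sed 's/^/oracleasm querydisk -p /' | sh
```
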

Detail Plans
------------------

Precheck
-----------
[oracrs@Node3 dev]$ cd  /usr/bin
[oracrs@Node3 bin]$ oracleasm listdisks
ACFS_CTRLM
TestRAC_DATA1
TestRAC_DATA2
TestRAC_FRA1
TestRAC_VOTE01
[oracrs@Node3 bin]$


[root@Node3 init.d]# oracleasm listdisks | sed 's/^/oracleasm querydisk -p /' | sh
Disk "ACFS_CTRLM" is a valid ASM disk
/dev/xvdl1: LABEL="ACFS_CTRLM" TYPE="oracleasm"
Disk "TestRAC_DATA1" is a valid ASM disk
/dev/xvdh1: LABEL="TestRAC_DATA1" TYPE="oracleasm"
Disk "TestRAC_DATA2" is a valid ASM disk
/dev/xvdi1: LABEL="TestRAC_DATA2" TYPE="oracleasm"
Disk "TestRAC_FRA1" is a valid ASM disk
/dev/xvdj1: LABEL="TestRAC_FRA1" TYPE="oracleasm"
Disk "OCR_VOTE01" is a valid ASM disk
/dev/xvdk1: LABEL="OCR_VOTE01" TYPE="oracleasm"
[root@Node3 init.d]#




oracleasm querydisk -d TestRAC_DATA2

root@Node3 init.d]# oracleasm querydisk -d TestRAC_DATA2
Disk "TestRAC_DATA2" is a valid ASM disk on device [202,129]
[root@Node3 init.d]#

SQL> select name,group_number,mount_status,header_status,state,name,path from v$asm_disk;

NAME                           GROUP_NUMBER MOUNT_S HEADER_STATU STATE    NAME
------------------------------ ------------ ------- ------------ -------- ------------------------------
PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ACFS_CTRLM                                1 CACHED  MEMBER       NORMAL   ACFS_CTRLM
ORCL:ACFS_CTRLM


TestRAC_DATA1                                 4 CACHED  MEMBER       NORMAL   TestRAC_DATA1
ORCL:TestRAC_DATA1

TestRAC_DATA2                                 4 CACHED  MEMBER       NORMAL   TestRAC_DATA2
ORCL:TestRAC_DATA2


NAME                           GROUP_NUMBER MOUNT_S HEADER_STATU STATE    NAME
------------------------------ ------------ ------- ------------ -------- ------------------------------
PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
TestRAC_FRA1                                  5 CACHED  MEMBER       NORMAL   TestRAC_FRA1
ORCL:TestRAC_FRA1

OCR_VOTE01                                6 CACHED  MEMBER       NORMAL   OCR_VOTE01
ORCL:OCR_VOTE01





SQL> select GROUP_NUMBER,DISK_NUMBER,TOTAL_MB,FREE_MB,path from V$ASM_DISK;

GROUP_NUMBER DISK_NUMBER   TOTAL_MB    FREE_MB
------------ ----------- ---------- ----------
PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
           1           0      40959        967
ORCL:ACFS_CTRLM

           2           0     102399      96885
           4           0     153599      70158
ORCL:TestRAC_DATA1

           4           1     153599      70147
ORCL:TestRAC_DATA2


GROUP_NUMBER DISK_NUMBER   TOTAL_MB    FREE_MB
------------ ----------- ---------- ----------
PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
           5           0     102399          0
ORCL:TestRAC_FRA1

           6           0      10239       5620
ORCL:OCR_VOTE01



SQL>

Check the newly created block device/raw device on all nodes using the command 'lsblk' (or 'fdisk -l').
It will show all raw devices, including the new one.
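The new LUN should show up in lsblk as a bare disk with no partition under it yet (as xvdo does in the listing below). A rough helper for picking the disk rows out of captured lsblk text (assumes the default column order NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT):

```shell
# Rough helper: list device names whose TYPE column is "disk" from
# lsblk output text, assuming the default lsblk column layout.
list_disks() {
  printf '%s\n' "$1" | awk '$6 == "disk" { print $1 }'
}

# Typical use on a node:
#   list_disks "$(lsblk)"
```
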


Node1,Node2 (primary node)  and Node3 (dataguard node)


[oracrs@Node3 dev]$
[oracrs@Node3 dev]$ lsblk
NAME                                 MAJ:MIN   RM  SIZE RO TYPE MOUNTPOINT
xvdb                                 202:16     0   60G  0 disk
├─xvdb1                              202:17     0  500M  0 part /boot
└─xvdb2                              202:18     0 59.5G  0 part
  ├─vg_root-lv_swap (dm-0)           253:0      0    6G  0 lvm  [SWAP]
  ├─vg_root-lv_root (dm-1)           253:1      0    2G  0 lvm  /
  ├─vg_root-lv_home (dm-5)           253:5      0    2G  0 lvm  /home
  ├─vg_root-lv_opt (dm-6)            253:6      0    4G  0 lvm  /opt
  ├─vg_root-lv_optIbmItm (dm-7)      253:7      0    4G  0 lvm  /opt/IBM/ITM
  ├─vg_root-lv_optTivoli (dm-8)      253:8      0  500M  0 lvm  /opt/Tivoli
  ├─vg_root-lv_tmp (dm-9)            253:9      0    5G  0 lvm  /tmp
  ├─vg_root-lv_usr (dm-10)           253:10     0   12G  0 lvm  /usr
  ├─vg_root-lv_var (dm-11)           253:11     0    6G  0 lvm  /var
  ├─vg_root-lv_varAdmPerfmgr (dm-12) 253:12     0  500M  0 lvm  /var/adm/perfmgr
  ├─vg_root-lv_varTmp (dm-13)        253:13     0    2G  0 lvm  /var/tmp
  ├─vg_root-lv_oswatcher (dm-14)     253:14     0    7G  0 lvm  /var/log/oswatcher
  ├─vg_root-lv_controlm (dm-15)      253:15     0    3G  0 lvm  /opt/ctm700
  └─vg_root-lv_best1 (dm-16)         253:16     0  7.1G  0 lvm  /opt/bmc/Patrol3
xvdc                                 202:32     0   90G  0 disk
├─vg_app-lv_oracle (dm-3)            253:3      0   20G  0 lvm  /oracle
└─vg_app-lv_u01 (dm-4)               253:4      0   80G  0 lvm  /u01
xvdd                                 202:48     0   60G  0 disk
└─vg_grid-lv_u02 (dm-2)              253:2      0   60G  0 lvm  /u02
xvde                                 202:64     0  100G  0 disk
└─xvde1                              202:65     0  100G  0 part
xvdf                                 202:80     0  100G  0 disk
└─xvdf1                              202:81     0  100G  0 part
xvdg                                 202:96     0  100G  0 disk
└─xvdg1                              202:97     0  100G  0 part
xvdh                                 202:112    0  150G  0 disk
└─xvdh1                              202:113    0  150G  0 part
xvdi                                 202:128    0  150G  0 disk
└─xvdi1                              202:129    0  150G  0 part
xvdj                                 202:144    0  100G  0 disk
└─xvdj1                              202:145    0  100G  0 part
xvdk                                 202:160    0   10G  0 disk
└─xvdk1                              202:161    0   10G  0 part
xvdl                                 202:176    0   40G  0 disk
└─xvdl1                              202:177    0   40G  0 part
xvdm                                 202:192    0    3G  0 disk
└─vg_root-lv_optIbmItm (dm-7)        253:7      0    4G  0 lvm  /opt/IBM/ITM
xvdn                                 202:208    0   41G  0 disk
└─vg_app-lv_u01 (dm-4)               253:4      0   80G  0 lvm  /u01
sr0                                   11:0      1 1024M  0 rom
asm!acfsvol1-110                     252:56321  0   39G  0 disk
xvdo                                 202:224    0   50G  0 disk
[oracrs@Node3 dev]$


---new disk--


[oracrs@Node3 dev]$ ls -ltr xvdo
brw-rw---- 1 root disk 202, 224 Mar 11 14:33 xvdo


Update from Linux Team:

The disk has been added on Node1 - the partition /dev/xvdo1 has been created
The disk has been partitioned on Node3 - /dev/xvdo1

If the disk has been partitioned correctly, we would expect to see a disk partition called /dev/xvdo1


We would name the ASM disk "TestRAC_DATA3" as it's going to be added to the diskgroup "TestRAC_DATA"



DBA tasks start now: add space on ASM for RAC database TestRAC (standby node first). The standby database is a single node with ASM.

1) Create ASM disk

/etc/init.d/oracleasm createdisk <disk name> <path>

oracleasm createdisk TestRAC_DATA3 /dev/xvdo1

[root@Node3 bin]# ls -ltr /dev/xvdo1
brw-rw---- 1 root disk 202, 225 Mar 11 15:53 /dev/xvdo1
[root@Node3 bin]# oracleasm createdisk TestRAC_DATA3 /dev/xvdo1
Writing disk header: done
Instantiating disk: done
[root@Node3 bin]# oracleasm listdisks
ACFS_CTRLM
TestRAC_DATA1
TestRAC_DATA2
TestRAC_DATA3
TestRAC_FRA1
OCR_VOTE01
[root@Node3 bin]#

To verify the disk on the standby node:

SQL> select group_number, name from v$asm_diskgroup;
GROUP_NUMBER NAME
------------ ------------------------------
           1 ACFS_CTRLM
           5 TestRAC_FRA
           4 TestRAC_DATA
           6 OCR_VOTE
          
6 rows selected.
SQL> select name from v$asm_disk where group_number = '4';
NAME
------------------------------
TestRAC_DATA1
TestRAC_DATA2
TestRAC_DATA3
SQL>


Note that we need the 'ORCL:' prefix on the disk path, as v$asm_disk shows:

NAME                           GROUP_NUMBER MOUNT_S HEADER_STATU STATE    NAME
------------------------------ ------------ ------- ------------ -------- ------------------------------
PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
TestRAC_DATA1                                 4 CACHED  MEMBER       NORMAL   TestRAC_DATA1
ORCL:TestRAC_DATA1
TestRAC_DATA2                                 4 CACHED  MEMBER       NORMAL   TestRAC


To Add disk to disk group


alter diskgroup TestRAC_DATA  add disk 'TestRAC_DATA3';
or
alter diskgroup TestRAC_DATA  add disk 'ORCL:TestRAC_DATA3';

or

As an alternative, log on to Node3 as "oracrs" and run asmca.
Then right-click on the diskgroup TestRAC_DATA and select "Add Disks".
From the "Add Disks" screen, select the disk TestRAC_DATA3.
Click OK.
Once the disk has been added, you are returned to the diskgroups screen, with the amount of space available in the diskgroup updated accordingly.
Then just click "Exit".
This is the way every single disk has been added to the various diskgroups...until now.
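For the command-line route, the ALTER DISKGROUP statement above is typically followed by a check of v$asm_operation to watch the rebalance finish. A sketch that builds the SQL as text (the `build_add_disk_sql` helper is hypothetical; in practice the output would be fed to sqlplus as SYSASM on the ASM instance):

```shell
# Build the SQL for adding the disk and then checking rebalance progress
# via v$asm_operation. A sketch only; it prints SQL, it does not run it.
build_add_disk_sql() {
  printf "alter diskgroup %s add disk 'ORCL:%s';\nselect * from v\$asm_operation;\n" "$1" "$2"
}

# e.g.  build_add_disk_sql TestRAC_DATA TestRAC_DATA3 | sqlplus -s / as sysasm
```
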


The same process applies on the primary nodes (Node1 and Node2):

----------------------

It is the exact same process, except that you issue "oracleasm createdisk" on one VM and "oracleasm scandisks" on the other, which ensures the new storage can be seen by both VMs.
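That createdisk/scandisks split can be sketched as a dry run that only prints the per-node commands (node, disk, and device names are this environment's; adjust to yours before running anything for real):

```shell
# Dry run of the primary-cluster split: createdisk on the first node only,
# scandisks on every other node. Prints the commands instead of running them.
disk=TestRAC_DATA3
dev=/dev/xvdo1
first_node=Node1
other_nodes="Node2"
echo "${first_node}: oracleasm createdisk ${disk} ${dev}"
for node in $other_nodes; do
  echo "${node}: oracleasm scandisks"
done
```
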