Thursday, 22 March 2018

How to Upgrade Grid Infrastructure from 11.2.0.3 to 11.2.0.4, Step by Step (Real-Time)



Summary

This document describes how to upgrade Grid Infrastructure/Clusterware on a two-node RAC environment.
The upgrade is done with the out-of-place approach (a new home for the GI).
As part of the upgrade, we first increase the memory target for ASM so that it meets the prerequisite for the upgrade; in the second step, we upgrade Grid using the GUI with the upgrade option.
This is a real-time example.

Environment Details

Note: ORACLE_HOME is hard-coded in the oracrs user's .profile, so add a task to update it after the upgrade so you don't forget.



node1[/home/oracrs]$
node1[/home/oracrs]$ cat .profile
PATH=$PATH:$HOME/bin:/usr/local/bin:.

export PATH

if [ -s $MAIL ]           # This is at Shell startup.  In normal
then echo $MAILMSG        # operation, the Shell checks
fi                          # periodically.

umask 022
AIXTHREAD_SCOPE=S; export AIXTHREAD_SCOPE
unset  ORACLE_HOME ORACLE_SID ORACLE_BASE
export ORACLE_BASE=/u02/oracle/PRD
export ORACLE_HOME=/u02/oracle/11.2.0.3/grid
export ORACLE_SID=+ASM1
#export ORACLE_UNQNAME=

export PATH=${ORACLE_HOME}/bin:${PATH}

alias oh='cd $ORACLE_HOME'
# alias otrace= 'cd ${ORACLE_BASE}/diag/rdbms/${ORACLE_UNQNAME}/${ORACLE_SID}/trace'

set -o vi


node2[/home/oracrs]$
node2[/home/oracrs]$ cat .profile
PATH=$PATH:$HOME/bin:/usr/local/bin:.

export PATH

if [ -s $MAIL ]           # This is at Shell startup.  In normal
then echo $MAILMSG        # operation, the Shell checks
fi                          # periodically.

umask 022
AIXTHREAD_SCOPE=S; export AIXTHREAD_SCOPE
unset  ORACLE_HOME ORACLE_SID ORACLE_BASE
export ORACLE_BASE=/u02/oracle/PRD
export ORACLE_HOME=/u02/oracle/11.2.0.3/grid
export ORACLE_SID=+ASM2
export TNS_ADMIN=/u02/oracle/network/admin
#export ORACLE_UNQNAME=e

export PATH=${ORACLE_HOME}/bin:${PATH}

alias oh='cd $ORACLE_HOME'
#alias otrace='cd ${ORACLE_BASE}/diag/rdbms/${ORACLE_UNQNAME}/${ORACLE_SID}/trace'
alias otrace='cd /u02/oracle/PRD/diag/asm/+asm/+ASM2/trace'
node2[/home/oracrs]$

node2[/home/oracrs]$ echo $ORACLE_HOME
/u02/oracle/11.2.0.3/grid
node2[/home/oracrs]$ echo $ORACLE_BASE
/u02/oracle/PRD
node2[/home/oracrs]$

Environment details

It is a two-node RAC environment where the RAC database EASYPROD is running:

node1[/u02/oracle/11.2.0.3/grid]$ ps -ef|grep pmon
 fusprod  3473582        1   0   Jun 15      -  3:43 ora_pmon_EASYPROD1
  oracrs 12845076        1   0   Jun 15      -  1:26 asm_pmon_+ASM1
node1[/u02/oracle/11.2.0.3/grid]$ ps -ef|grep tns
  oracrs 11141142        1   0   Jun 15      -  2:46 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
  oracrs 11468890        1   0   Jun 15      - 18:53 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER -inherit
  oracrs 66453796 28836502   0 20:22:49  pts/1  0:00 grep tns
node1[/u02/oracle/11.2.0.3/grid]$ ps -ef|grep oswatcher
  oracrs 49021146 28836502   0 20:23:03  pts/1  0:00 grep oswatcher
node1[/u02/oracle/11.2.0.3/grid]$

node2[/u02/oracle/11.2.0.3/grid]$
node2 [/u02/oracle/11.2.0.3/grid]$ ps -ef|grep tns
  oracrs 14418170        1   1   Oct 18      - 568:33 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER -inherit
  oracrs 42795434 54919540   0 20:24:52  pts/0  0:00 grep tns
  oracrs 21692996        1   0   Jun 15      -  2:49 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN2 -inherit
  oracrs 47317996        1   0   Jun 15      -  2:48 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN3 -inherit
node2[/u02/oracle/11.2.0.3/grid]$ ps -ef|grep pmon
  oracrs 12714058        1   0   Oct 18      - 46:59 asm_pmon_+ASM2
  fusprod 15335424        1   0   Oct 18      - 109:26 ora_pmon_EASYPROD2
  oracrs  8913808 54919540   0 20:25:03  pts/0  0:00 grep pmon
node2[/u02/oracle/11.2.0.3/grid]$ ps -ef|grep oswatcher
  oracrs 25427980  4522948   0   Jan 18      - 23:52 /bin/sh ./OSWatcherFM.sh 360 /oswatcher/oswbb/archive
  oracrs  7012788 54919540   0 20:25:12  pts/0  0:00 grep oswatcher
node2[/u02/oracle/11.2.0.3/grid]$

Prerequisite checks

1) Download the Grid software, transfer it to both Grid servers, and unzip it as the Grid user.
2) Check the ASM status (failed disks, diskgroup state, and so on).
3) Remove the audit files and any large log files on each node (not deleting them will slow down the upgrade).
$ cd $GRID_HOME/rdbms/audit
$ rm *.aud
4) Take a full database backup using RMAN and also take a full database export.
5) Back up the Grid binaries using tar.
6) Back up the Oracle Inventory.
7) Back up the OCR on each node.
8) Back up the ASM metadata. (A sketch of the backup commands for items 5-8 follows the ssh checks below.)
9) Check ssh passwordless connectivity from each node:

grep node1 /etc/hosts
grep node2 /etc/hosts
grep fairfied /etc/hosts

e.g.:

$ ssh RACNODE1 date                    ====>> real hostname for node1
Fri Nov 8 10:01:03 PST 2013   
$ ssh racnode1 date                    ====>> lower case hostname for node1
Fri Nov 8 10:01:04 PST 2013
$ ssh RACNODE2 date
Fri Nov 8 10:01:05 PST 2013
$ ssh racnode2 date
Fri Nov 8 10:01:06 PST 2013

10.0.0.101 RACNODE1.US.ORACLE.COM RACNODE1
10.0.0.102 RACNODE2.US.ORACLE.COM RACNODE2

Check on node1:
date;ssh node2 date;
date;ssh fairfied-rac date;
date;ssh node1 date;
date;ssh node1-rac date;

Check on node2:
date;ssh node1 date;
date;ssh node1-rac date;
date;ssh buxton date;
date;ssh buxton-rac date;
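
Going back to items 5-8, a minimal sketch of the backup commands, assuming an illustrative /oswatcher/backup staging directory (adjust the paths to your environment):

# 5) Grid binaries and 6) Oracle Inventory (as root, on each node)
tar -cvf /oswatcher/backup/grid_home_11203.tar /u02/oracle/11.2.0.3/grid
tar -cvf /oswatcher/backup/oraInventory.tar /u02/oracle/oraInventory
# 7) OCR manual backup (as root, on one node)
/u02/oracle/11.2.0.3/grid/bin/ocrconfig -manualbackup
# 8) ASM metadata backup (as the grid user; backs up all mounted diskgroups)
asmcmd md_backup /oswatcher/backup/asm_md.bkp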

Location for downloaded 11.2.0.4 software

node1[/oswatcher/upgrade]$ ls -ltr
total 12001192
drwxr-xr-x    8 oracrs   dba            4096 Oct 09 2013  grid
-rwxrwxrwx    1 oracrs   dba      2127071138 Jan 05 12:18 p13390677_112040_AIX64-5L_3of7.zip --grid software
-rwxrwxrwx    1 oracrs   dba      1170882875 Jan 05 12:35 p13390677_112040_AIX64-5L_2of7.zip
-rwxrwxrwx    1 oracrs   dba      1801653734 Jan 05 12:51 p13390677_112040_AIX64-5L_1of7.zip
-rwxrwxrwx    1 oracrs   dba      1044979572 Jan 05 15:29 p9655006_112010_AIX64-5L.zip
node1[/oswatcher/upgrade]$

Node2[/oswatcher/upgrade]$ ls -ltr
total 4154456
drwxr-xr-x    8 oracrs   dba            4096 Oct 09 2013  grid
-rwxr-xr-x    1 oracrs   dba      2127071138 Jun 22 19:47 p13390677_112040_AIX64-5L_3of7.zip
Node2[/oswatcher/upgrade]$


10) Check the storage space for the new binaries to be installed. We must have around 20 GB of space available for the Grid home on each node.
Also issue df -g and make sure space is available in /tmp and / (root).


node1[/u02/oracle/11.2.0.3/grid]$ df -g .
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/u02_lv      124.00     56.37   55%   924168     7% /u02

node1[/u02/oracle]$ ls -ltr
total 8
drwxr-xr-x    3 root     dba             256 Jul 11 2011  11.2.0
drwxr-xr-x    7 oracrs   dba             256 Mar 14 2012  PROD
drwxr-xr-x    4 root     dba             256 Sep 15 2013  11.2.0.3
drwxrwx---    6 oracrs   dba            4096 Sep 15 2013  oraInventory
drwxrwxr-x    2 root     dba             256 Mar 16 14:01 11.2.0.4
node1[/u02/oracle]$


node2[/u02/oracle/11.2.0.3/grid]$
node2[/u02/oracle/11.2.0.3/grid]$ df -g .
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/u02_lv      124.00     45.33   64%  2072408    17% /u02

node2[/u02/oracle]$ ls -ltr
total 8
drwxr-xr-x    3 root     dba             256 Jul 11 2011  11.2.0
drwxr-xr-x    7 oracrs   dba             256 Mar 14 2012  PROD
drwxr-xr-x    4 root     dba             256 Sep 15 2013  11.2.0.3
drwxrwx---    6 oracrs   dba            4096 Sep 15 2013  oraInventory
drwx------    2 root     system          256 Mar 17 09:14 11.2.04
drwxrwxr-x    2 root     dba             256 Mar 17 09:15 11.2.0.4
node2[/u02/oracle]$

node2[/u02/oracle/11.2.0.3/grid]$

11) Check the VNC server, since we are upgrading using the GUI; make sure the GUI works on both servers and that root access is available.

node1[/u02/oracle]$ vncserver

New 'X' desktop is node1:1

Starting applications specified in /home/oracrs/.vnc/xstartup
Log file is /home/oracrs/.vnc/node1:1.log

node1[/u02/oracle]$ ps -ef|grep vnc
  oracrs 21299840 28836502   0 20:04:10  pts/1  0:00 grep vnc
  oracrs 39649952        1   0 20:03:49  pts/1  0:00 Xvnc :1 -desktop X -httpd /opt/freeware/vnc/classes -auth /home/oracrs/.Xauthority -geometry 1024x768 -depth 8 -rfbwait 120000 -rfbauth /home/oracrs/.vnc/passwd -rfbport 5901 -nolisten local -fp /usr/lib/X11/fonts/,/usr/lib/X11/fonts/misc/,/usr/lib/X11/fonts/75dpi/,/usr/lib/X11/fonts/100dpi/,/usr/lib/X11/fonts/ibm850/,/usr/lib/X11/fonts/Type1/
node1[/u02/oracle]$


node2[/u02/oracle]$ ps -ef|grep vnc
  oracrs 42795314 54919540   0 20:06:27  pts/0  0:00 grep vnc
node2[/u02/oracle]$ vncserver

New 'X' desktop is buxton:1

Starting applications specified in /home/oracrs/.vnc/xstartup
Log file is /home/oracrs/.vnc/buxton:1.log

node2[/u02/oracle]$ ps -ef|grep vnc
  oracrs 46071882 54919540   0 20:06:46  pts/0  0:00 grep vnc
  oracrs 13042452        1   0 20:06:34  pts/0  0:00 Xvnc :1 -desktop X -httpd /opt/freeware/vnc/classes -auth /home/oracrs/.Xauthority -geometry 1024x768 -depth 8 -rfbwait 120000 -rfbauth /home/oracrs/.vnc/passwd -rfbport 5901 -nolisten local -fp /usr/lib/X11/fonts/,/usr/lib/X11/fonts/misc/,/usr/lib/X11/fonts/75dpi/,/usr/lib/X11/fonts/100dpi/,/usr/lib/X11/fonts/ibm850/,/usr/lib/X11/fonts/Type1/
buxton[/u02/oracle]$

10.160.45.73:5901
password for node1 =vncforu

12) Ensure that you have captured the following information, as these details are required during installation:

a)   An Oracle base location for Oracle Clusterware.


Node1

node1[/u02/oracle]$
node1[/u02/oracle]$ echo $ORACLE_BASE
/u02/oracle/PROD
node1[/u02/oracle]$ echo $ORACLE_HOME
/u02/oracle/11.2.0.3/grid
node1[/u02/oracle]$ ls -ltr /u02/oracle/PROD
total 0
drwxr-x---    3 oracrs   dba             256 Jul 11 2011  cfgtoollogs
drwxrwxr-x    4 oracrs   dba             256 Jul 11 2011  diag
drwxr-xr-x    3 oracrs   dba             256 Mar 14 2012  node1
drwxr-xr-x    2 oracrs   dba             256 Mar 14 2012  Clusterware
drwxr-xr-x    2 oracrs   dba             256 Mar 14 2012  checkpoints
node1[/u02/oracle]$

Node2

node2[/u02/oracle]$ echo $ORACLE_BASE
/u02/oracle/PROD
node2[/u02/oracle]$ ls -ltr /u02/oracle/PROD
total 0
drwxrwxr-x    4 oracrs   dba             256 Jul 11 2011  diag
drwxr-xr-x    2 oracrs   dba             256 Jul 11 2011  checkpoints
drwxr-xr-x    3 oracrs   dba             256 Mar 14 2012  buxton
drwxr-xr-x    2 oracrs   dba             256 Mar 14 2012  Clusterware
drwxr-x---    5 oracrs   dba             256 Mar 14 2012  cfgtoollogs
node2[/u02/oracle]$

b)  An Oracle Grid Infrastructure home location that is different from your existing Oracle Clusterware location
c)  A SCAN address

Node1

node1[/u02/oracle]$ srvctl config scan
SCAN name: fusprod-scan, Network: 1/10.160.32.0/255.255.255.192/en0
SCAN VIP name: scan1, IP: /fusprod-scan/10.160.32.41
SCAN VIP name: scan2, IP: /fusprod-scan/10.160.32.42
SCAN VIP name: scan3, IP: /fusprod-scan/10.160.32.40
node1[/u02/oracle]$

Node2

node1[/u02/oracle/11.2.0.3/grid/OPatch]$ srvctl config scan
SCAN name: scan-blaze, Network: 1/10.160.32.128/255.255.255.192/en0
SCAN VIP name: scan1, IP: /scan-blaze/10.160.32.189
SCAN VIP name: scan2, IP: /scan-blaze/10.160.32.188
SCAN VIP name: scan3, IP: /scan-blaze/10.160.32.190
node1[/u02/oracle/11.2.0.3/grid/OPatch]$

node1[/u02/oracle/11.2.0.3/grid/OPatch]$ ifconfig -a
en0: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.160.32.178 netmask 0xffffffc0 broadcast 10.160.32.191
        inet 192.168.32.178 netmask 0xffffff00 broadcast 192.168.32.255
        inet 10.160.32.181 netmask 0xffffffc0 broadcast 10.160.32.191
        inet 10.160.32.189 netmask 0xffffffc0 broadcast 10.160.32.191
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en1: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.160.34.94 netmask 0xffffffc0 broadcast 10.160.34.127
        inet 192.168.34.94 netmask 0xffffff00 broadcast 192.168.34.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en2: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.160.36.88 netmask 0xffffffc0 broadcast 10.160.36.127
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en3: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.160.45.73 netmask 0xffffff00 broadcast 10.160.45.255
        inet 10.160.45.85 netmask 0xffffff00 broadcast 10.160.45.255
        inet 10.160.45.86 netmask 0xffffff00 broadcast 10.160.45.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en4: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 192.168.33.2 netmask 0xffffffc0 broadcast 192.168.33.63
        inet 169.254.85.13 netmask 0xffff0000 broadcast 169.254.255.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en5: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.160.32.197 netmask 0xffffffc0 broadcast 10.160.32.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
        inet6 ::1%1/0
         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
node1[/u02/oracle/11.2.0.3/grid/OPatch]$

And on node2:


node2[/u02/oracle/11.2.0.3/grid/OPatch]$ ifconfig -a
en0: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.160.32.177 netmask 0xffffffc0 broadcast 10.160.32.191
        inet 192.168.32.177 netmask 0xffffff00 broadcast 192.168.32.255
        inet 10.160.32.190 netmask 0xffffffc0 broadcast 10.160.32.191
        inet 10.160.32.188 netmask 0xffffffc0 broadcast 10.160.32.191
        inet 10.160.32.132 netmask 0xffffffc0 broadcast 10.160.32.191
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en1: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.160.34.93 netmask 0xffffffc0 broadcast 10.160.34.127
        inet 192.168.34.93 netmask 0xffffff00 broadcast 192.168.34.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en2: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.160.36.87 netmask 0xffffffc0 broadcast 10.160.36.127
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en3: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.160.45.72 netmask 0xffffff00 broadcast 10.160.45.255
        inet 10.160.45.82 netmask 0xffffff00 broadcast 10.160.45.255
        inet 10.160.45.83 netmask 0xffffff00 broadcast 10.160.45.255
        inet 10.160.45.84 netmask 0xffffff00 broadcast 10.160.45.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en4: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 192.168.33.1 netmask 0xffffffc0 broadcast 192.168.33.63
        inet 169.254.241.173 netmask 0xffff0000 broadcast 169.254.255.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en5: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.160.32.198 netmask 0xffffffc0 broadcast 10.160.32.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
        inet6 ::1%1/0
         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
node2[/u02/oracle/11.2.0.3/grid/OPatch]$


13) Check the OPatch version and lsinventory on both nodes.

node1[/u02/oracle/11.2.0.3/grid]$ OPatch/opatch version
OPatch Version: 11.2.0.3.5

OPatch succeeded.

or
node1[/u02/oracle/11.2.0.3/grid/OPatch]$ ./opatch version
Invoking OPatch 11.2.0.1.7

OPatch Version: 11.2.0.1.7

OPatch succeeded.
node1[/u02/oracle/11.2.0.3/grid/OPatch]$

node1[/u02/oracle/11.2.0.3/grid/OPatch]$ ./opatch lsinventory
Invoking OPatch 11.2.0.1.7

Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation.  All rights reserved.

Oracle Home       : /u02/oracle/11.2.0.3/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.7
OUI version       : 11.2.0.3.0
Log file location : /u02/oracle/11.2.0.3/grid/cfgtoollogs/opatch/opatch2016-08-30_09-35-54AM.log

Lsinventory Output file location : /u02/oracle/11.2.0.3/grid/cfgtoollogs/opatch/lsinv/lsinventory2016-08-30_09-35-54AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Grid Infrastructure                                           11.2.0.3.0
There are 1 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
Rac system comprising of multiple nodes
  Local node = node1
  Remote node = node2
--------------------------------------------------------------------------------
OPatch succeeded.
node1[/u02/oracle/11.2.0.3/grid/OPatch]$
node1[/u02/oracle/11.2.0.3/grid]$


node2[/u02/oracle]$ cd $ORACLE_HOME
node2[/u02/oracle/11.2.0.3/grid]$ OPatch/opatch version
OPatch Version: 11.2.0.3.5

OPatch succeeded.
buxton[/u02/oracle/11.2.0.3/grid]$

or
node2[/u02/oracle/11.2.0.3/grid/OPatch]$ opatch version
Invoking OPatch 11.2.0.1.7

OPatch Version: 11.2.0.1.7

OPatch succeeded.

node2[/u02/oracle/11.2.0.3/grid/OPatch]$ opatch lsinventory
Invoking OPatch 11.2.0.1.7

Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation.  All rights reserved.
Oracle Home       : /u02/oracle/11.2.0.3/grid
Central Inventory : /u02/oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.7
OUI version       : 11.2.0.3.0
Log file location : /u02/oracle/11.2.0.3/grid/cfgtoollogs/opatch/opatch2016-08-30_09-34-40AM.log

Lsinventory Output file location : /u02/oracle/11.2.0.3/grid/cfgtoollogs/opatch/lsinv/lsinventory2016-08-30_09-34-40AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Grid Infrastructure                                           11.2.0.3.0
There are 1 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.

Rac system comprising of multiple nodes
  Local node = node2
  Remote node = node1

--------------------------------------------------------------------------------

OPatch succeeded.
node2[/u02/oracle/11.2.0.3/grid/OPatch]$

node2[/u02/oracle/11.2.0.3/grid/OPatch]$

node2[/u02/oracle/11.2.0.3/grid/OPatch]$ df -g .
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/u02          60.00     37.38   38%   217572     3% /u02
node2[/u02/oracle/11.2.0.3/grid/OPatch]$

Clusterware version on both nodes
-----------------------------

node1[/u02/oracle/11.2.0.3/grid]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
or
node2[/u02/oracle/11.2.0.3/grid/OPatch]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
node2[/u02/oracle/11.2.0.3/grid/OPatch]$

node1[/u02/oracle/11.2.0.3/grid]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
or
node2[/u02/oracle/11.2.0.3/grid/OPatch]$  crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
node2[/u02/oracle/11.2.0.3/grid/OPatch]$

node1[/u02/oracle/11.2.0.3/grid]$ crsctl query crs softwareversion
Oracle Clusterware version on node [node1] is [11.2.0.3.0]
node1[/u02/oracle/11.2.0.3/grid]$
or
node2[/u02/oracle/11.2.0.3/grid/OPatch]$ crsctl query crs softwareversion
Oracle Clusterware version on node [node2] is [11.2.0.3.0]
node2[/u02/oracle/11.2.0.3/grid/OPatch]$


Clusterware version on node2
-----------------------------

node2[/u02/oracle/11.2.0.3/grid]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
node2[/u02/oracle/11.2.0.3/grid]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
node2[/u02/oracle/11.2.0.3/grid]$ crsctl query crs softwareversion
Oracle Clusterware version on node [buxton] is [11.2.0.3.0]
node2[/u02/oracle/11.2.0.3/grid]$


14) Run the CVU pre-upgrade check tool to ensure all required prerequisites are met.

From grid user
----------------
/oswatcher/upgrade/grid/runcluvfy.sh stage -pre crsinst -upgrade -n node1,buxton -rolling -src_crshome /u02/oracle/11.2.0.3/grid -dest_crshome /u02/oracle/11.2.0.4/grid -dest_version 11.2.0.4.0 -fixup -fixupdir /oswatcher/cluster_upgradlog/fixup -verbose | tee /oswatcher/upgrade/CH9283/runcluvfy_a_fix14jul.out
 or

/oswatcher/upgrade/grid/runcluvfy.sh stage -pre crsinst -upgrade -n node1,node2 -rolling -src_crshome /u02/oracle/11.2.0.3/grid -dest_crshome /u02/oracle/11.2.0.4/grid -dest_version 11.2.0.4.0 -fixup -fixupdir /oswatcher/upgrade/CH11364/fixup -verbose | tee /oswatcher/upgrade/CH11364/runcluvfy_a_fix05aug2016.out

cat /oswatcher/upgrade/CH11364/runcluvfy_a_fix05aug2016.out   |grep -i failed
cat /oswatcher/upgrade/CH11364/runcluvfy_a_fix05aug2016.out  |grep -i war


Failed results on node1 (see the complete log):

Pre-check for cluster services setup was unsuccessful on all the nodes.
node1[/u02/oracle/11.2.0.3/grid/OPatch]$ cat /oswatcher/upgrade/CH11364/runcluvfy_a_fix30aug.out   |grep -i failed
  node1     10GB (1.048576E7KB)       16GB (1.6777216E7KB)      failed
  node2     10GB (1.048576E7KB)       16GB (1.6777216E7KB)      failed
Result: Swap space check failed
  node1     missing                   Patch IZ97457             failed
  node2     missing                   Patch IZ97457             failed
Result: Operating system patch check failed for "Patch IZ97457"
  node1     unknown                   Patch IZ88711|IZ89514     failed
  node2     unknown                   Patch IZ88711|IZ89514     failed
Result: Operating system patch check failed for "Patch IZ88711|IZ89514"


node1[/u02/oracle/11.2.0.3/grid/OPatch]$ df -g .
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/u02          60.00     36.65   39%   389512     5% /u02
node1[/u02/oracle/11.2.0.3/grid/OPatch]$ nmon
node1[/u02/oracle/11.2.0.3/grid/OPatch]$

node1[/u02/oracle/11.2.0.3/grid/OPatch]$ lsps -s
Total Paging Space   Percent Used
      10240MB               1%
node1[/u02/oracle/11.2.0.3/grid/OPatch]$
node1[/home/oracrs]$
node1[/home/oracrs]$ env
_=/usr/bin/env
LANG=en_US
LOGIN=oracrs
PATH=/u02/oracle/11.2.0.3/grid/bin:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java6_64/jre/bin:/usr/java6_64/bin:/usr/local/bin
EXTENDED_HISTORY=ON
ORACLE_BASE=/u02/oracle/PRD
LC__FASTMSG=true
HISTFILE=/var/adm/hist/oracrs/.sh_history.eibm994:oracrs
LOGNAME=oracrs
MAIL=/usr/spool/mail/oracrs
ORACLE_SID=+ASM1
LOCPATH=/usr/lib/nls/loc
USER=oracrs
AUTHSTATE=files
SHELL=/usr/bin/ksh
ODMDIR=/etc/objrepos
HISTSIZE=1024
TIMEOUT=600
TMOUT=600
HOME=/home/oracrs
TERM=xterm
MAILMSG=[YOU HAVE NEW MAIL]
ORACLE_HOME=/u02/oracle/11.2.0.3/grid
PWD=/home/oracrs
TZ=GMT0BST,M3.5.0,M10.5.0
AIXTHREAD_SCOPE=S
A__z=! LOGNAME="*TMOUT
NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/%L/%N.cat
node1[/home/oracrs]$

15) Check memory_max_target for ASM:

SQL> show parameter memory

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
memory_max_target                    big integer 272M
memory_target                        big integer 272M
SQL>


Implementation



1) Stop the OEM agent and OSWatcher on node1 and node2 if they are installed.
 
a) Stop the agent on both nodes:
$ORACLE_HOME/bin/emctl stop agent
$ORACLE_HOME/bin/emctl stop dbconsole

b) Stop OSWatcher if it is installed on the server:

./stopOSWbb.sh
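
After the upgrade completes, OSWatcher can be restarted with the companion start script from the standard OSWatcher (oswbb) bundle; the directory below is assumed from the ps output earlier in this document:

cd /oswatcher/oswbb
./startOSWbb.sh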

2) Check CPU and memory utilization on both servers:
   topas, top, vmstat 5 5
3) Check the session count per node on all running databases:

select count(1),inst_id from gv$session group by inst_id;

select username, sid, serial#,process,failover_type,failover_method FROM gv$session where upper(failover_method) != 'BASIC' and upper(failover_type) !='SELECT' and upper(username) not in ('SYS','SYSTEM','SYSMAN','DBSNMP');


4) If necessary, adjust the existing ASM instance memory parameters and bounce the database and Grid on each node, one by one.
     If the value is smaller than 1536M, then issue the following commands:
 
node1[/home/oracrs]$ sqlplus

SQL*Plus: Release 11.2.0.3.0 Production on Wed Jun 22 23:14:32 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> show parameter db

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_cache_size                        big integer 0
db_ultra_safe                        string      OFF
db_unique_name                       string      +ASM

SQL> show parameter target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
memory_max_target                    big integer 304M
memory_target                        big integer 304M
pga_aggregate_target                 big integer 0
sga_target                           big integer 0
SQL> show parameter max

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      unlimited
memory_max_target                    big integer 304M
sga_max_size                         big integer 308M
SQL>

Increase memory_max_target and memory_target to meet the upgrade requirement:

SQL> alter system set memory_max_target=1536m scope=spfile;

System altered.

SQL> alter system set memory_target=1536m scope=spfile;

System altered.

SQL> show parameter memory_max_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
memory_max_target                    big integer 272M
SQL>
SQL>
SQL>
SQL>

Bounce the database and Grid on both nodes to make the parameters effective (the change was made in the spfile, so show parameter still reports the old value until restart):

srvctl stop instance -d EASYPROD -i EASYPROD1 -o immediate

 srvctl stop asm -n node1

or

stop complete clusterware
 cd /u02/oracle/11.2.0.3/grid/bin
 ./crsctl stop crs

 ps -ef|grep pmon

To start the clusterware, then the database instance, from node1:

srvctl start asm -n node1

or

start complete clusterware
 cd /u02/oracle/11.2.0.3/grid/bin
 ./crsctl start crs

./crsctl check crs

then database

srvctl start instance -d EASYPROD -i EASYPROD1
node1[/home/eibm994]$
node1[/home/eibm994]$ ps -ef|grep pmon
 eibm994 13697098 11731206   0 00:38:15  pts/1  0:00 grep pmon
  e2bprd 14614660        1   0 00:25:00      -  0:00 ora_pmon_EASYPROD1
  oracrs  9372102        1   0 00:22:50      -  0:00 asm_pmon_+ASM1
node1[/home/eibm994]$ ps -ef|grep tns
  oracrs  8126520        1   0 00:31:34      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN3 -inherit
 eibm994 10027022 11731206   0 00:38:20  pts/1  0:00 grep tns
  oracrs 13435028        1   0 00:31:35      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN2 -inherit
  oracrs 11337984        1   0 00:23:12      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER -inherit
node1[/home/eibm994]$


Now bounce the database and Grid on node2.

node2[/home/oracrs]$ id
uid=350(oracrs) gid=207(oinstall) groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)
node2[/home/oracrs]$

srvctl stop instance -d EASYPROD -i EASYPROD2 -o immediate

 srvctl stop asm -n node2

or

stop complete clusterware
 cd /u02/oracle/11.2.0.3/grid/bin
 ./crsctl stop crs


./crsctl check crs

 ps -ef|grep pmon


Start the clusterware, then the database:
srvctl start asm -n node2
or
start complete clusterware
 cd /u02/oracle/11.2.0.3/grid/bin
 ./crsctl start crs

./crsctl check crs

CRS-4533: Event Manager is online
[node2:root:/u02/oracle/11.2.0.3/grid/bin:] ps -ef|grep pmon
  oracrs 13959340        1   0 00:35:23      -  0:00 asm_pmon_+ASM2
    root 29425910 21823716   0 00:36:53  pts/1  0:00 grep pmon
[node2:root:/u02/oracle/11.2.0.3/grid/bin:] ps -ef|grep tns
  oracrs  5374024        1   0   Aug 01      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr listener_adm -inherit
  oracrs 11534416        1   0 00:35:49      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
  oracrs 15073348        1   0 00:35:48      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER -inherit
    root 29425676 21823716   0 00:36:59  pts/1  0:00 grep tns
[node2:root:/u02/oracle/11.2.0.3/grid/bin:]


To start the database instance from node2:

srvctl start instance -d EASYPROD  -i EASYPROD2


Log in to the ASM instance and verify memory_target on both nodes:

SQL> show parameter target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
memory_max_target                    big integer 1536M
memory_target                        big integer 1536M
pga_aggregate_target                 big integer 0
sga_target                           big integer 0
SQL> show parameter max

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      unlimited
memory_max_target                    big integer 1536M
sga_max_size                         big integer 1536M
SQL>

SQL>

Grid upgrade 

Grid Health check and basic information collection

node1[/home/oracrs]$
node1[/home/oracrs]$ ps -ef|grep pmon
  oracrs 13959340        1   0 00:35:23      -  0:00 asm_pmon_+ASM2
  e2bprd 23134368        1   0 00:38:07      -  0:00 ora_pmon_EASYPROD1
node1[/home/oracrs]$ ps -ef|grep tns
  oracrs  5374024        1   0   Aug 01      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr listener_adm -inherit
  oracrs  7602332 12648646   0 00:44:33  pts/1  0:00 grep tns
  oracrs 11534416        1   0 00:35:49      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
  oracrs 15073348        1   0 00:35:48      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER -inherit
node1[/home/oracrs]$

node1[/home/oracrs]$ crst
node1[/home/oracrs]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....ATA1.dg ora....up.type ONLINE    ONLINE    node2
ora....FRA1.dg ora....up.type ONLINE    ONLINE    node2
ora....EDO1.dg ora....up.type ONLINE    ONLINE    node2
ora....EDO2.dg ora....up.type ONLINE    ONLINE    node2
ora....ER.lsnr ora....er.type ONLINE    ONLINE    node2
ora....N1.lsnr ora....er.type ONLINE    ONLINE    node2
ora....N2.lsnr ora....er.type ONLINE    ONLINE    node1
ora....N3.lsnr ora....er.type ONLINE    ONLINE    node1
ora.VOTING.dg  ora....up.type ONLINE    ONLINE    node2
ora.asm        ora.asm.type   ONLINE    ONLINE    node2
ora.cvu        ora.cvu.type   ONLINE    ONLINE    node1
ora....1prd.db ora....se.type ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....LD.lsnr application    ONLINE    ONLINE    node2
ora....eld.gsd application    OFFLINE   OFFLINE
ora....eld.ons application    ONLINE    ONLINE    node2
ora....eld.vip ora....t1.type ONLINE    ONLINE    node2
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....OD.lsnr application    ONLINE    ONLINE    node1
ora....ood.gsd application    OFFLINE   OFFLINE
ora....ood.ons application    ONLINE    ONLINE    node1
ora....ood.vip ora....t1.type ONLINE    ONLINE    node1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    node2
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    node1
ora.ons        ora.ons.type   ONLINE    ONLINE    node2
ora....ry.acfs ora....fs.type ONLINE    ONLINE    node2
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node2
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    node1
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    node1
node2[/home/oracrs]$

node1[/home/oracrs]$ . oraenv
ORACLE_SID = [+ASM2] ?
The Oracle base remains unchanged with value /u02/oracle/PRD
node1[/home/oracrs]$ cd $ORACLE_HOME
node1[/u02/oracle/11.2.0.3/grid]$ cd ../..
node1[/u02/oracle]$ ls -ltr
total 0
drwxr-xr-x    3 root     oinstall        256 May 07 2013  11.2.0.3
drwxrwx---    5 oracrs   oinstall        256 May 10 2013  oraInventory
drwxr-xr-x    5 oracrs   oinstall        256 May 28 2013  network
drwxr-xr-x    6 oracrs   oinstall        256 Jun 25 2013  PRD
drwxrwxr-x    2 root     oinstall        256 Aug 30 11:55 11.2.0.4
node1[/u02/oracle]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   42941f8e34314f1ebf0738e18d29e41c (/dev/rhdisk3) [VOTING]
Located 1 voting disk(s).
node2[/u02/oracle]$  ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3152
         Available space (kbytes) :     258968
         ID                       : 1814067896
         Device/File Name         :    +VOTING
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

node1[/u02/oracle]$
node1[/u02/oracle]$ crsctl check cluster -all
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node1[/u02/oracle]$

node1[/u02/oracle]$ srvctl config database -d  EASYPROD
Database unique name: EASYPROD
Database name: EASYPROD
Oracle home: /u01/oracle/e2b01prd/product/11.2.0.3
Oracle user: e2bprd
Spfile: +EASYPROD_DATA1/e2b01prd/spfileEASYPROD.ora
Domain: world
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: e2b01prd
Database instances: e2b01prd2,e2b01prd1
Disk Groups: EASYPRD_DATA1,EASYPRD_FRA1,EASYPRD_REDO1,EASYPRD_REDO2
Mount point paths:
Services:
Type: RAC
Database is administrator managed
node1[/u02/oracle]$

Verify the qosadmin users:

There is a known bug related to qosadmin user AUTH failure, so check it before the upgrade:

GI Upgrade from 11.2.0.3.6+ to 11.2.0.4 or 12.1.0.1 Fails with User(qosadmin) is deactivated. AUTH FAILURE. (Doc ID 1577072.1)
Otherwise rootupgrade.sh will fail with the error "Failed to perform J2EE (OC4J) Container Resource upgrade at /haclu/64bit/11.2.0.4/grid/crs/install/crsconfig_lib.pm line 9323".


[root@node1 ~]# /data01/app/11.2.0/grid_11203/bin/qosctl qosadmin -listusers
AbstractLoginModule password: ---> Default password is oracle112   
oc4jadmin
JtaAdmin
qosadmin

node1[/u02/oracle]$ qosctl qosadmin -listusers
AbstractLoginModule password: 2
JtaAdmin
oc4jadmin
qosadmin

node1[/u02/oracle]$



Upgrade implementation
-------------------------------
Note:
./runInstaller -updateNodelist ORACLE_HOME=/u02/oracle/11.2.0.3/grid "CLUSTER_NODES=node1,node2" CRS=true

/oui/bin/runInstaller -updateNodelist ORACLE_HOME= "CLUSTER_NODES={node1,node2}" CRS=true


1) From the root user, create a new directory for the Oracle Grid software on both nodes (not required if it was already created during cluster verification):

mkdir -p /u02/oracle/11.2.0.4/grid
chown -R oracrs:dba /u02/oracle/11.2.0.4/grid

[node1:root:/u02/oracle:] ls -ltr
total 8
drwxr-xr-x    3 root     oinstall        256 May 07 2013  11.2.0.3
drwxr-xr-x    7 oracrs   oinstall        256 May 09 2013  PRD
drwxrwx---    6 oracrs   oinstall       4096 May 10 2013  oraInventory
drwxr-xr-x    5 oracrs   oinstall        256 Jul 25 2013  network
drwxr-xr-x    2 root     system          256 Aug 30 11:46 11.2.0.4.0
[node1:root:/u02/oracle:]

or

chown  root:oinstall /u02/oracle/11.2.0.4
ls -ltr
chmod  775 /u02/oracle/11.2.0.4/

2) On each node, copy listener.ora and sqlnet.ora to the new Grid home (11.2.0.4); see the sketch after step 4.
3) As the grid user, unzip the software into the same directory (don't create a different directory for each zip file); see the sketch after step 4.
4) Unset Oracle-related environment variables, especially ORA_CRS_HOME, before starting the OUI or before executing any root script, for both the root and grid users.

$ unset ORACLE_BASE
$ unset ORACLE_HOME
$ unset ORACLE_HOSTNAME
$ unset ORACLE_SID
$ unset ORACLE_UNQNAME
$ unset TNS_ADMIN
$ unset CRS_HOME
$ unset ORA_NLS10

export PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java14_64/jre/bin:/usr/java5/jre/bin:/usr/java5/bin:/opt/IBM/cmdb/nmap-4.76:/usr/local/bin
export IGNORE_PREADDNODE_CHECKS=Y
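
A minimal sketch for steps 2 and 3, assuming the staging directory and home paths used elsewhere in this document (the copy in step 2 is done once the new home's network/admin directory exists):

# Step 3: unzip all parts into the same staging directory (as the grid user)
cd /oswatcher/upgrade
unzip p13390677_112040_AIX64-5L_1of7.zip
unzip p13390677_112040_AIX64-5L_2of7.zip
unzip p13390677_112040_AIX64-5L_3of7.zip

# Step 2: copy network configuration from the old Grid home to the new one (on each node)
cp /u02/oracle/11.2.0.3/grid/network/admin/listener.ora /u02/oracle/11.2.0.4/grid/network/admin/
cp /u02/oracle/11.2.0.3/grid/network/admin/sqlnet.ora /u02/oracle/11.2.0.4/grid/network/admin/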

5) Install the 11.2.0.4 software using the GUI as the grid user, and select the option to upgrade an existing Oracle Clusterware and Oracle ASM installation.

$ cd /u02/oracle/11.2.0.4/grid
      $ ./runInstaller
Starting Oracle Universal Installer...
The OUI installation log is located at /u02/oracle/oraInventory/logs.
For OUI installations or execution of critical scripts it is recommended to use VNC, to avoid problems in case the connection with the server is lost.

Run rootpre.sh on both nodes as the root user:

[node1:root:/oswatcher/upgrade/grid:] ls -ltr
total 160
drwxr-xr-x    2 oracrs   oinstall        256 Oct 09 2013  sshsetup
-rwxr-xr-x    1 oracrs   oinstall       4878 Oct 09 2013  runcluvfy.sh
-rwxr-xr-x    1 oracrs   oinstall      10306 Oct 09 2013  runInstaller
drwxr-xr-x    2 oracrs   oinstall        256 Oct 09 2013  rpm
-rwxr-xr-x    1 oracrs   oinstall      14047 Oct 09 2013  rootpre.sh
drwxr-xr-x    3 oracrs   oinstall        256 Oct 09 2013  rootpre
drwxr-xr-x    2 oracrs   oinstall        256 Oct 09 2013  response
drwxr-xr-x    4 oracrs   oinstall       4096 Oct 09 2013  install
drwxr-xr-x   14 oracrs   oinstall       4096 Oct 09 2013  stage
-rw-r--r--    1 oracrs   oinstall        500 Oct 09 2013  welcome.html
-rw-r--r--    1 oracrs   oinstall      30016 Oct 09 2013  readme.html
[node1:root:/oswatcher/upgrade/grid:] ./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_16-09-05.01:13:14

Checking if group services should be configured....
Nothing to configure.
[node1:root:/oswatcher/upgrade/grid:]

[node2:root:/oswatcher/upgrade/grid:] ls -ltr
total 160
drwxr-xr-x    2 oracrs   oinstall        256 Oct 09 2013  sshsetup
-rwxr-xr-x    1 oracrs   oinstall       4878 Oct 09 2013  runcluvfy.sh
-rwxr-xr-x    1 oracrs   oinstall      10306 Oct 09 2013  runInstaller
drwxr-xr-x    2 oracrs   oinstall        256 Oct 09 2013  rpm
-rwxr-xr-x    1 oracrs   oinstall      14047 Oct 09 2013  rootpre.sh
drwxr-xr-x    3 oracrs   oinstall        256 Oct 09 2013  rootpre
drwxr-xr-x    2 oracrs   oinstall        256 Oct 09 2013  response
drwxr-xr-x    4 oracrs   oinstall       4096 Oct 09 2013  install
drwxr-xr-x   14 oracrs   oinstall       4096 Oct 09 2013  stage
-rw-r--r--    1 oracrs   oinstall        500 Oct 09 2013  welcome.html
-rw-r--r--    1 oracrs   oinstall      30016 Oct 09 2013  readme.html
[node2:root:/oswatcher/upgrade/grid:] ./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_16-09-05.01:14:32

Checking if group services should be configured....
Nothing to configure.
[node2:root:/oswatcher/upgrade/grid:]


Start the installer, and select the option to upgrade an existing Oracle Clusterware and Oracle ASM installation.

    On the node selection page, select all nodes.

    Select installation options as prompted.

    When prompted, run the rootupgrade.sh script on each node in the cluster that you want to upgrade.

    Run the script on the local node first. The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation.

 

Give the new path as below:

node1[/u02/oracle]$ ls -ltr
total 8
drwxr-xr-x    3 root     oinstall        256 May 07 2013  11.2.0.3
drwxr-xr-x    7 oracrs   oinstall        256 May 09 2013  PRD
drwxrwx---    6 oracrs   oinstall       4096 May 10 2013  oraInventory
drwxr-xr-x    5 oracrs   oinstall        256 Jul 25 2013  network
drwxr-xr-x    2 root     system          256 Aug 30 11:46 11.2.0.4.0
drwxrwxr-x    2 root     oinstall        256 Aug 30 11:52 11.2.0.4


node1[/u02/oracle/11.2.0.4]$ pwd
/u02/oracle/11.2.0.4
node1[/u02/oracle/11.2.0.4]$

When prompted for the software location, provide it as below:

software location = /u02/oracle/11.2.0.4/grid

and not /u02/oracle/11.2.0.4/grid/ (no trailing slash).


6) Run rootupgrade.sh as the root user when prompted, on each node.
----------------
Note:
 The rootupgrade.sh script must be run sequentially, node by node, not in parallel across nodes.
 This script takes 15-20 minutes to complete.

e.g.:

[node1]# /u02/oracle/11.2.0.4/rootupgrade.sh

[node1:root:/u02/oracle/11.2.0.4:]
[node1:root:/u02/oracle/11.2.0.4:] ls -ltr root*
-rwxr-xr-x    1 oracrs   oinstall        480 Sep 05 01:34 rootupgrade.sh
-rwxr-x---    1 oracrs   oinstall        467 Sep 05 01:34 root.sh
[node1:root:/u02/oracle/11.2.0.4:] id
uid=0(root) gid=0(system) groups=208(tivlogs)
[node1:root:/u02/oracle/11.2.0.4:] rootupgrade.sh
ksh: rootupgrade.sh:  not found.
[node1:root:/u02/oracle/11.2.0.4:] ./rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracrs
    ORACLE_HOME=  /u02/oracle/11.2.0.4

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: n
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u02/oracle/11.2.0.4/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer

Note that the oracrs .profile still points to the old 11.2.0.3 home; remember to update ORACLE_HOME there after the upgrade:

node1[/home/oracrs]$
node1[/home/oracrs]$ cat .profile
PATH=$PATH:$HOME/bin:/usr/local/bin:.

export PATH

if [ -s $MAIL ]           # This is at Shell startup.  In normal
then echo $MAILMSG        # operation, the Shell checks
fi                          # periodically.

umask 022
AIXTHREAD_SCOPE=S; export AIXTHREAD_SCOPE
unset  ORACLE_HOME ORACLE_SID ORACLE_BASE
export ORACLE_BASE=/u02/oracle/PRD
export ORACLE_HOME=/u02/oracle/11.2.0.3/grid
export ORACLE_SID=+ASM1
#export ORACLE_UNQNAME=

export PATH=${ORACLE_HOME}/bin:${PATH}

alias oh='cd $ORACLE_HOME'
# alias otrace= 'cd ${ORACLE_BASE}/diag/rdbms/${ORACLE_UNQNAME}/${ORACLE_SID}/trace'

set -o vi



7) Then run rootupgrade.sh on node2 (the second node).

8) Once the Grid upgrade script finishes, all the Clusterware and database services come up automatically. If not, start them manually; a sketch follows below.
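
A minimal manual-start sketch, assuming the new Grid home is /u02/oracle/11.2.0.4/grid (the software location chosen above):

# as root, from the new Grid home
/u02/oracle/11.2.0.4/grid/bin/crsctl start crs
/u02/oracle/11.2.0.4/grid/bin/crsctl check crs
# as the database owner, once the stack is up
srvctl start database -d EASYPROD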

9) Make the new Grid home entry in oratab; an example follows below.
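
For example, the +ASM1 entry in /etc/oratab on node1 would be pointed at the new home (path assumed from the software location chosen above; do the same for +ASM2 on node2):

+ASM1:/u02/oracle/11.2.0.4/grid:N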

node1[/home/eibm994]$ ps -ef|grep pmon
 eibm994 13697098 11731206   0 00:38:15  pts/1  0:00 grep pmon
  e2bprd 14614660        1   0 00:25:00      -  0:00 ora_pmon_EASYPROD1
  oracrs  9372102        1   0 00:22:50      -  0:00 asm_pmon_+ASM1
node1[/home/eibm994]$ ps -ef|grep tns
  oracrs  8126520        1   0 00:31:34      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN3 -inherit
 eibm994 10027022 11731206   0 00:38:20  pts/1  0:00 grep tns
  oracrs 13435028        1   0 00:31:35      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN2 -inherit
  oracrs 11337984        1   0 00:23:12      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER -inherit
node1[/home/eibm994]$

Verification (post implementation)

1) Check version

select * from v$version;

select group_number,instance_name,db_name,software_version from gv$asm_client;

2) Verify the cluster health from all nodes
$ crsctl check cluster -all
     #crsctl check crs
     #crsctl stat resource -t
     #ocrcheck -local
     $srvctl status database -d dbname


3)  To list all existing nodes in a cluster
$ olsnodes -n

4) ps -ef |grep css

5) Check the Oracle Clusterware active version on the cluster and the services release version on both nodes:
crsctl query crs activeversion
crsctl query crs releaseversion
crsctl query crs softwareversion

node1[/home/oracrs]$
node1[/home/oracrs]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
node1[/home/oracrs]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
node1[/home/oracrs]$ crsctl query crs softwareversion
Oracle Clusterware version on node [node1] is [11.2.0.4.0]
node1[/home/oracrs]$

(The active version still reports 11.2.0.3.0 above because it changes to 11.2.0.4.0 only after rootupgrade.sh completes on the last node; the software version already shows 11.2.0.4.0.)


The Oracle base remains unchanged with value /u02/oracle/PRD
node1[/home/oracrs]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3228
         Available space (kbytes) :     258892
         ID                       : 1814067896
         Device/File Name         :    +VOTING
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user


6) Cluster health check with CVU

Starting with 11.2.0.3, the cluster verification utility (cluvfy) is capable of carrying out post-Clusterware and post-Database installation health checks. With the new -healthcheck argument, cluster and database components' best practices, mandatory requirements, deviations, and proper functionality can be verified.
The following example collects detailed information about best-practice recommendations for Clusterware in an HTML file named cvucheckreport_<timestamp>.htm:
$ ./cluvfy comp healthcheck -collect cluster -bestpractice -html
When no further arguments are attached to the healthcheck parameter, the Clusterware and database checks are carried out. Use the following example to perform the health checks on the cluster and database; because no -html argument is specified, the output will be stored in a text file:
$./cluvfy comp healthcheck

node1[/home/oracrs]$ cluvfy comp healthcheck

Verifying OS mandatory requirements

Verifying Multicast check ...passed
Verifying Physical Memory ...passed
Verifying Available Physical Memory ...passed
Verifying Swap Size ...warning
Verifying Free Space: node1:/u02/oracle/11.2.0.4 ...passed
Verifying Free Space: node2:/u02/oracle/11.2.0.4 ...passed
Verifying Free Space: node1:/tmp/ ...passed
Verifying Free Space: node2:/tmp/ ...passed
Verifying User Existence: oracrs ...passed
Verifying Group Existence: oinstall ...passed
Verifying Group Membership: oinstall(Primary) ...passed
Verifying Run Level ...passed
Verifying Hard Limit: maximum open file descriptors ...passed
Verifying Soft Limit: maximum open file descriptors ...passed
Verifying Hard Limit: maximum user processes ...passed
Verifying Soft Limit: maximum user processes ...passed
Verifying Architecture ...passed
Verifying OS Kernel Version ...passed
Verifying OS Kernel Parameter: ncargs ...passed
Verifying OS Kernel Parameter: maxuproc ...passed
Verifying OS Kernel Parameter: tcp_ephemeral_low ...passed
Verifying OS Kernel Parameter: tcp_ephemeral_high ...passed
Verifying OS Kernel Parameter: udp_ephemeral_low ...passed
Verifying OS Kernel Parameter: udp_ephemeral_high ...passed
Verifying Package: bos.adt.base-... ...passed
Verifying Package: bos.adt.lib-... ...passed
Verifying Package: bos.adt.libm-... ...passed
Verifying Package: bos.perf.libperfstat-6.1.2.1 ...passed
Verifying Package: bos.perf.perfstat-... ...passed
Verifying Package: bos.perf.proctools-... ...passed
Verifying Package: xlC.aix61.rte-10.1.0.0 ...passed
Verifying Package: xlC.rte-10.1.0.0 ...passed
Verifying OS Patch:IZ97457 ...warning
Verifying OS Patch:IZ88711|IZ89514 ...warning
Verifying Users With Same UID ...passed
Verifying Current Group ID ...passed
Verifying Root user consistency ...passed
Verifying Network Time Protocol (NTP) ...passed
Verifying Same core file name pattern ...passed
Verifying User Not In Group: oracrs ...passed
Verifying User Mask ...passed
Verifying Task resolv.conf Integrity ...warning
Verifying Time zone consistency ...passed
Verifying UserID65535 ...passed
Verifying Kernel64Bit ...passed
Verifying Network parameter - ipqmaxlen ...met
Verifying Network parameter - rfc1323 ...met
Verifying Network parameter - sb_max ...met
Verifying Network parameter - tcp_sendspace ...met
Verifying Network parameter - tcp_recvspace ...met
Verifying Network parameter - udp_sendspace ...met
Verifying Network parameter - udp_recvspace ...met

Verifying OS Best Practice

Verifying availability of port 8888 ...met
Verifying Network parameter - ipqmaxlen ...met
Verifying Network parameter - rfc1323 ...met
Verifying Network parameter - sb_max ...met
Verifying Network parameter - tcp_sendspace ...met
Verifying Network parameter - tcp_recvspace ...met
Verifying Network parameter - udp_sendspace ...met
Verifying Network parameter - udp_recvspace ...met

Verifying Clusterware mandatory requirements

Verifying Node Connectivity ...passed



or

./orachk -u -o post

7) Change the path in OEM.
8) Log in to the running database and check the session count:

SQL> select inst_id,count(1) from gv$session group by inst_id;

   INST_ID   COUNT(1)
---------- ----------
         1         51
         2         55

SQL>



 =================


Side note (from a later patching exercise on a 12.1.0 Grid home): we found Doc ID 1989922.1 and corrected the permissions as suggested. We did:

node2[/u02/oracle/12.1.0/grid/lib]$ ls -l libclntsh.so
-rwx------ 1 oracrs dba 61812395 May 17 15:44 libclntsh.so
node2[/u02/oracle/12.1.0/grid/lib]$ ls -l libclntshcore.so
-rwx------ 1 oracrs dba 4187247 May 17 15:44 libclntshcore.so
node2[/u02/oracle/12.1.0/grid/lib]$ ls -l libocci.so
-rwx------ 1 oracrs dba 4996222 May 17 15:44 libocci.so
node2[/u02/oracle/12.1.0/grid/lib]$ ls -l libttsh12.so
-rwx------ 1 oracrs dba 83299908 May 17 15:44 libttsh12.so

chmod 755 libclntsh.so libclntshcore.so libocci.so libttsh12.so

This appears to have fixed the issue and we can now start the instances with srvctl.

I would still like to know if, from the log files, the PSU appears to have been correctly applied. I noticed some warnings regarding SQL scripts associated with the patches that could not be run due to the instance being down. Do I need to run these manually?



Downgrade steps

=================

Downgrade Grid Infrastructure from 11.2.0.4 to 11.2.0.3
The environment used for this downgrade had a successful upgrade from 11.2.0.3 to 11.2.0.4.

1) Check clusterware version on both nodes

crsctl query crs activeversion
crsctl query crs softwareversion

The old GI home (the 11.2.0.3 home) was not uninstalled or removed after the upgrade to 11.2.0.4.

Before proceeding with the downgrade, identify the "OCR node". OCR node is an Oracle term for the node which created the backup of the OCR when the GI was upgraded. This is usually the node on which the first rootupgrade.sh was run during the upgrade. Look in the cdata folder inside the GI home and locate the backup OCR of the form ocrold_version.

$ pwd
/opt/app/11.2.0.3/grid/cdata
$ ls -l

2)
From the root user:

In this two-node RAC, rootupgrade.sh was first run on node1, so node1 became the OCR node. The downgrade should be run last on this node.
On all nodes other than the OCR node, run rootcrs.pl with the downgrade option from the new GI home (the 11.2.0.4 home).

# $GI_HOME/crs/install/rootcrs.pl -downgrade -oldcrshome /u02/oracle/11.2.0.3/grid/  -version 11.2.0.3.0
       or

# $GI_HOME/crs/install/rootcrs.pl -downgrade -force -oldcrshome /u02/oracle/11.2.0.3/grid/  -version 11.2.0.3.0


If you want to stop a partial or failed 11g release 2 (11.2) installation and restore the previous release Oracle Clusterware, then use the -force flag with this command.

From the root user:

Once all the non-OCR nodes are downgraded, run the downgrade on the OCR node with the -lastnode option.

# $GI_HOME/crs/install/rootcrs.pl -downgrade -lastnode -oldcrshome /u02/oracle/11.2.0.3/grid/  -version 11.2.0.3.0
               or

# $GI_HOME/crs/install/rootcrs.pl -downgrade -force -lastnode -oldcrshome /u02/oracle/11.2.0.3/grid/  -version 11.2.0.3.0

This script downgrades the OCR. If you want to stop a partial or failed 11g Release 2 (11.2) installation and restore the previous release Oracle Clusterware, then use the -force flag with this command

3)

Before running the CRS start there are a few things to be done. As part of the upgrade to 11.2.0.4, Trace File Analyzer (TFA) is installed. Remove the init script for TFA:

#cd /etc/init
# rm oracle-tfa.conf
rm: remove regular file `oracle-tfa.conf'? y
# rm ../init.d/init.tfa
rm: remove regular file `../init.d/init.tfa'? y

Verify that the Grid home in /etc/oratab points to the old GI home on both nodes:

cat /etc/oratab
+ASM1:/u02/oracle/11.2.0.3/grid/ :N            # line added by Agent

Remove the gpnp profiles if rootupgrade.sh was run concurrently on multiple nodes. Also remove the Clusterware checkpoint files:

rm -rf  /u02/oracle/11.2.0.4/grid/gpnp/*
rm -rf $ORACLE_BASE/Clusterware/ckptGridHA_rhel6m1.xml

4)

Start the Clusterware stack on all nodes with crsctl start crs. Verify that the active version is the downgraded version:

$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
$ crsctl query crs softwareversion
Oracle Clusterware version on node [rhel6m2] is [11.2.0.3.0]

Run ocrcheck to verify the OCR integrity:

# ocrcheck

ocrcheck -local

5)
Log in as the Grid owner (oracrs).

The inventory will still have CRS=true on the 11.2.0.4 home. Run runInstaller to set CRS=true on the 11.2.0.3 home (the old home).

a) Run the following command, where /u02/oracle/11.2.0.4/grid/ is the location of the new (upgraded) Grid home (11.2):

Grid_home/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u02/oracle/11.2.0.4/grid/

b) Run the following command, where pre11.2_crs_home represents the home directory of the earlier Oracle Clusterware installation:

Grid_home/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u02/oracle/11.2.0.3/grid/

As the last step, detach the 11.2.0.4 GI home from the inventory and remove it:
 ./runInstaller -detachHome ORACLE_HOME=/u02/oracle/11.2.0.4/grid/ -silent

Note: ORACLE_HOME is hard-coded in the oracrs user's .profile, so add a task to update it so you don't forget; a sketch of the edit follows.
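
A minimal sketch of that edit, assuming the new Grid home path used in this document:

# in /home/oracrs/.profile on each node, change
export ORACLE_HOME=/u02/oracle/11.2.0.3/grid
# to
export ORACLE_HOME=/u02/oracle/11.2.0.4/grid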


