Friday, 19 August 2022

Oracle Clusterware/Grid Upgrade from 11.2.0.3 to 11.2.0.4

 

Clusterware Upgrade

 

 


 

 

Note: First we need to upgrade the Clusterware. Once the Clusterware is upgraded successfully, we can go ahead with upgrading the databases running on this cluster.

1) Stop the OEM agent and OSWatcher on Node1/Node2

2) Take a backup of the Clusterware home on Node1/Node2 (a sample backup command is sketched after this list)

3) Increase memory_max_target and memory_target from 272M to 1536M for the ASM instance in rolling mode, in case the ASM instance is running with too little memory

4) Unset the Oracle environment variables and install the 11.2.0.4 Clusterware binaries into a separate new Grid home (out-of-place upgrade) using the GUI

5) Run rootupgrade.sh on Node1

6) Start Clusterware and all database instances (if not started automatically)

7) Run rootupgrade.sh on the other node

8) Start Clusterware and all database instances on Node2 (if not started automatically)

9) Start the OEM agent and OSWatcher on Node1/Node2

10) Post verification (health check for Clusterware and databases)
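
For step 2, a minimal backup sketch (run as root on each node while the stack is down; the /backup target directory and archive names are placeholders, the source paths are the ones used throughout this post):

# as root, per node
cd /u02/oracle
tar -cvf /backup/grid_11.2.0.3_$(hostname).tar 11.2.0.3
# back up the central inventory as well
tar -cvf /backup/oraInventory_$(hostname).tar oraInventory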

 


 

Precheck

 

1) On each node, run the Cluster Verification Utility to ensure that the preinstallation steps are complete.

 

 cluvfy comp software -n all -verbose

 

2) Check passwordless SSH connectivity

 

grep Node1 /etc/hosts

grep NODE1 /etc/hosts

grep NODE2 /etc/hosts

grep fairfied /etc/hosts

or

date;ssh Node2 date;

date;ssh fairfied-rac date;

date;ssh Node1 date;

date;ssh Node1-rac date;

 

3) Check free space in the Clusterware home filesystem on both nodes

  df -g

 

4) Ensure that you have the following information, which is required during the Clusterware binary installation

 

·       An Oracle Grid Base and Oracle Grid Home location for Oracle Clusterware

 Node1[/u02/oracle]$ echo $ORACLE_BASE

 /u02/oracle/PROD

Node1[/u02/oracle]$ echo $ORACLE_HOME

/u02/oracle/11.2.0.3/grid

5) Download the 11.2.0.4 Grid Infrastructure software
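
For reference: 11.2.0.4 is delivered on My Oracle Support as patch set 13390677, and the Grid Infrastructure software is the third zip of the set; the exact AIX file name below is an assumption, so check the patch README:

p13390677_112040_AIX64-5L_3of7.zip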

 

6) Check Failover status

 

select username, sid, serial#,process,failover_type,failover_method FROM gv$session where upper(failover_method) != 'BASIC' and upper(failover_type) !='SELECT' and upper(username) not in ('SYS','SYSTEM','SYSMAN','DBSNMP');

  

 

7) Check free space in the Clusterware home on both nodes (sample output below)

 

Node1[/u02/oracle/11.2.0.3/grid]$ df -g .

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/u02_lv      124.00     56.37   55%   924168     7% /u02

 

Node1[/u02/oracle]$ ls -ltr

total 8

drwxr-xr-x    3 root     dba             256 Jul 11 2011  11.2.0

drwxr-xr-x    7 oracrs   dba             256 Mar 14 2012  PROD

drwxr-xr-x    4 root     dba             256 Sep 15 2013  11.2.0.3

drwxrwx---    6 oracrs   dba            4096 Sep 15 2013  oraInventory

drwxrwxr-x    2 root     dba             256 Mar 16 14:01 11.2.0.4

Node1[/u02/oracle]$

 

 

Node1[/u02/oracle/11.2.0.3/grid]$

 

Node2[/u02/oracle/11.2.0.3/grid]$ df -g .

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/u02_lv      124.00     45.33   64%  2072408    17% /u02

 

 

Node2[/u02/oracle]$ ls -ltr

total 8

drwxr-xr-x    3 root     dba             256 Jul 11 2011  11.2.0

drwxr-xr-x    7 oracrs   dba             256 Mar 14 2012  PROD

drwxr-xr-x    4 root     dba             256 Sep 15 2013  11.2.0.3

drwxrwx---    6 oracrs   dba            4096 Sep 15 2013  oraInventory

drwx------    2 root     system          256 Mar 17 09:14 11.2.04

drwxrwxr-x    2 root     dba             256 Mar 17 09:15 11.2.0.4

Node2[/u02/oracle]$

 

 

Node2[/u02/oracle/11.2.0.3/grid]$

 

8) Check the load on both servers and the per-node session count for all databases

 

 topas

 select count(1), inst_id from gv$session group by inst_id;

 

9) Check that the GUI (VNC) is working on both servers and that root access is available

 

Node1[/u02/oracle]$ vncserver

 

New 'X' desktop is Node1:1

 

Starting applications specified in /home/oracrs/.vnc/xstartup

Log file is /home/oracrs/.vnc/Node1:1.log

 

Node1[/u02/oracle]$ ps -ef|grep vnc

  oracrs 21299840 28836502   0 20:04:10  pts/1  0:00 grep vnc

  oracrs 39649952        1   0 20:03:49  pts/1  0:00 Xvnc :1 -desktop X -httpd /opt/freeware/vnc/classes -auth /home/oracrs/.Xauthority -geometry 1024x768 -depth 8 -rfbwait 120000 -rfbauth /home/oracrs/.vnc/passwd -rfbport 5901 -nolisten local -fp /usr/lib/X11/fonts/,/usr/lib/X11/fonts/misc/,/usr/lib/X11/fonts/75dpi/,/usr/lib/X11/fonts/100dpi/,/usr/lib/X11/fonts/ibm850/,/usr/lib/X11/fonts/Type1/

Node1[/u02/oracle]$

 

 

Node2[/u02/oracle]$ ps -ef|grep vnc

  oracrs 46071882 54919540   0 20:06:46  pts/0  0:00 grep vnc

  oracrs 13042452        1   0 20:06:34  pts/0  0:00 Xvnc :1 -desktop X -httpd /opt/freeware/vnc/classes -auth /home/oracrs/.Xauthority -geometry 1024x768 -depth 8 -rfbwait 120000 -rfbauth /home/oracrs/.vnc/passwd -rfbport 5901 -nolisten local -fp /usr/lib/X11/fonts/,/usr/lib/X11/fonts/misc/,/usr/lib/X11/fonts/75dpi/,/usr/lib/X11/fonts/100dpi/,/usr/lib/X11/fonts/ibm850/,/usr/lib/X11/fonts/Type1/

Node2[/u02/oracle]$

 

10.160.45.73:5901

password for Node1 = vncforu

 

10) Ensure that you have the information you will need during installation, including the following:

 

 

a)   An Oracle base location for Oracle Clusterware.

 

Node1[/u02/oracle]$

Node1[/u02/oracle]$ echo $ORACLE_BASE

/u02/oracle/PROD

Node1[/u02/oracle]$ echo $ORACLE_HOME

/u02/oracle/11.2.0.3/grid

Node1[/u02/oracle]$ ls -ltr /u02/oracle/PROD

total 0

drwxr-x---    3 oracrs   dba             256 Jul 11 2011  cfgtoollogs

drwxrwxr-x    4 oracrs   dba             256 Jul 11 2011  diag

drwxr-xr-x    3 oracrs   dba             256 Mar 14 2012  Node1

drwxr-xr-x    2 oracrs   dba             256 Mar 14 2012  Clusterware

drwxr-xr-x    2 oracrs   dba             256 Mar 14 2012  checkpoints

Node1[/u02/oracle]$

 

 

 Node2[/u02/oracle]$ echo $ORACLE_BASE

/u02/oracle/PROD

 

Node2[/u02/oracle]$ ls -ltr /u02/oracle/PROD

total 0

drwxrwxr-x    4 oracrs   dba             256 Jul 11 2011  diag

drwxr-xr-x    2 oracrs   dba             256 Jul 11 2011  checkpoints

drwxr-xr-x    3 oracrs   dba             256 Mar 14 2012  Node2

drwxr-xr-x    2 oracrs   dba             256 Mar 14 2012  Clusterware

drwxr-x---    5 oracrs   dba             256 Mar 14 2012  cfgtoollogs

Node2[/u02/oracle]$

 

Node2[/u02/oracle]$

 

 

b)  An Oracle Grid Infrastructure home location that is different from your existing Oracle Clusterware location

c)  A SCAN address

 

Node1[/u02/oracle]$ srvctl config scan

SCAN name: fusprod-scan, Network: 1/10.160.32.0/255.255.255.192/en0

SCAN VIP name: scan1, IP: /fusprod-scan/10.160.32.41

SCAN VIP name: scan2, IP: /fusprod-scan/10.160.32.42

SCAN VIP name: scan3, IP: /fusprod-scan/10.160.32.40

Node1[/u02/oracle]$

 

or

 

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$ srvctl config scan

SCAN name: scan-blaze, Network: 1/10.160.32.128/255.255.255.192/en0

SCAN VIP name: scan1, IP: /scan-blaze/10.160.32.189

SCAN VIP name: scan2, IP: /scan-blaze/10.160.32.188

SCAN VIP name: scan3, IP: /scan-blaze/10.160.32.190

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$

 

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$ ifconfig -a

en0: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 10.160.32.178 netmask 0xffffffc0 broadcast 10.160.32.191

        inet 192.168.32.178 netmask 0xffffff00 broadcast 192.168.32.255

        inet 10.160.32.181 netmask 0xffffffc0 broadcast 10.160.32.191

        inet 10.160.32.189 netmask 0xffffffc0 broadcast 10.160.32.191

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en1: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 10.160.34.94 netmask 0xffffffc0 broadcast 10.160.34.127

        inet 192.168.34.94 netmask 0xffffff00 broadcast 192.168.34.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en2: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 10.160.36.88 netmask 0xffffffc0 broadcast 10.160.36.127

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en3: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 10.160.45.73 netmask 0xffffff00 broadcast 10.160.45.255

        inet 10.160.45.85 netmask 0xffffff00 broadcast 10.160.45.255

        inet 10.160.45.86 netmask 0xffffff00 broadcast 10.160.45.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en4: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 192.168.33.2 netmask 0xffffffc0 broadcast 192.168.33.63

        inet 169.254.85.13 netmask 0xffff0000 broadcast 169.254.255.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en5: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 10.160.32.197 netmask 0xffffffc0 broadcast 10.160.32.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>

        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255

        inet6 ::1%1/0

         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$

 

and

 

 


Node2[/u02/oracle/11.2.0.3/grid/OPatch]$ ifconfig -a

en0: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 10.160.32.177 netmask 0xffffffc0 broadcast 10.160.32.191

        inet 192.168.32.177 netmask 0xffffff00 broadcast 192.168.32.255

        inet 10.160.32.190 netmask 0xffffffc0 broadcast 10.160.32.191

        inet 10.160.32.188 netmask 0xffffffc0 broadcast 10.160.32.191

        inet 10.160.32.132 netmask 0xffffffc0 broadcast 10.160.32.191

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en1: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 10.160.34.93 netmask 0xffffffc0 broadcast 10.160.34.127

        inet 192.168.34.93 netmask 0xffffff00 broadcast 192.168.34.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en2: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 10.160.36.87 netmask 0xffffffc0 broadcast 10.160.36.127

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en3: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 10.160.45.72 netmask 0xffffff00 broadcast 10.160.45.255

        inet 10.160.45.82 netmask 0xffffff00 broadcast 10.160.45.255

        inet 10.160.45.83 netmask 0xffffff00 broadcast 10.160.45.255

        inet 10.160.45.84 netmask 0xffffff00 broadcast 10.160.45.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en4: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 192.168.33.1 netmask 0xffffffc0 broadcast 192.168.33.63

        inet 169.254.241.173 netmask 0xffff0000 broadcast 169.254.255.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en5: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 10.160.32.198 netmask 0xffffffc0 broadcast 10.160.32.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>

        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255

        inet6 ::1%1/0

         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$

 

 

11) Check the OPatch version

 

Node1[/u02/oracle/11.2.0.3/grid]$ OPatch/opatch version

OPatch Version: 11.2.0.3.5

 

OPatch succeeded.

 

 

or

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$ ./opatch version

Invoking OPatch 11.2.0.1.7

 

OPatch Version: 11.2.0.1.7

 

OPatch succeeded.

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$

 

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$ ./opatch lsinventory

Invoking OPatch 11.2.0.1.7

 

Oracle Interim Patch Installer version 11.2.0.1.7

Copyright (c) 2011, Oracle Corporation.  All rights reserved.

 

 

Oracle Home       : /u02/oracle/11.2.0.3/grid

Central Inventory : /u02/oracle/oraInventory

   from           : /etc/oraInst.loc

OPatch version    : 11.2.0.1.7

OUI version       : 11.2.0.3.0

Log file location : /u02/oracle/11.2.0.3/grid/cfgtoollogs/opatch/opatch2016-08-30_09-35-54AM.log

 

Lsinventory Output file location : /u02/oracle/11.2.0.3/grid/cfgtoollogs/opatch/lsinv/lsinventory2016-08-30_09-35-54AM.txt

 

--------------------------------------------------------------------------------

Installed Top-level Products (1):

 

Oracle Grid Infrastructure                                           11.2.0.3.0

There are 1 products installed in this Oracle Home.

 

 

There are no Interim patches installed in this Oracle Home.

 

 

Rac system comprising of multiple nodes

  Local node = Node1

  Remote node = Node2

 

--------------------------------------------------------------------------------

 

OPatch succeeded.

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$

 

Node1[/u02/oracle/11.2.0.3/grid]$

 

 

Node2[/u02/oracle]$ cd $ORACLE_HOME

Node2[/u02/oracle/11.2.0.3/grid]$ OPatch/opatch version

OPatch Version: 11.2.0.3.5

 

OPatch succeeded.

Node2[/u02/oracle/11.2.0.3/grid]$

 

or

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$ opatch version

Invoking OPatch 11.2.0.1.7

 

OPatch Version: 11.2.0.1.7

 

OPatch succeeded.

 

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$ opatch lsinventory

Invoking OPatch 11.2.0.1.7

 

Oracle Interim Patch Installer version 11.2.0.1.7

Copyright (c) 2011, Oracle Corporation.  All rights reserved.

 

 

Oracle Home       : /u02/oracle/11.2.0.3/grid

Central Inventory : /u02/oracle/oraInventory

   from           : /etc/oraInst.loc

OPatch version    : 11.2.0.1.7

OUI version       : 11.2.0.3.0

Log file location : /u02/oracle/11.2.0.3/grid/cfgtoollogs/opatch/opatch2016-08-30_09-34-40AM.log

 

Lsinventory Output file location : /u02/oracle/11.2.0.3/grid/cfgtoollogs/opatch/lsinv/lsinventory2016-08-30_09-34-40AM.txt

 

--------------------------------------------------------------------------------

Installed Top-level Products (1):

 

Oracle Grid Infrastructure                                           11.2.0.3.0

There are 1 products installed in this Oracle Home.

 

 

There are no Interim patches installed in this Oracle Home.

 

 

Rac system comprising of multiple nodes

  Local node = Node2

  Remote node = Node1

 

--------------------------------------------------------------------------------

 

OPatch succeeded.

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$

 

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$

 

 

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$ df -g .

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/u02          60.00     37.38   38%   217572     3% /u02

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$

 

clusterware version on Node1

-----------------------------

 

Node1[/u02/oracle/11.2.0.3/grid]$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [11.2.0.3.0]

 

or

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [11.2.0.3.0]

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$

 

Node1[/u02/oracle/11.2.0.3/grid]$ crsctl query crs releaseversion

Oracle High Availability Services release version on the local node is [11.2.0.3.0]

or

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$  crsctl query crs releaseversion

Oracle High Availability Services release version on the local node is [11.2.0.3.0]

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$

 

Node1[/u02/oracle/11.2.0.3/grid]$ crsctl query crs softwareversion

Oracle Clusterware version on node [Node1] is [11.2.0.3.0]

Node1[/u02/oracle/11.2.0.3/grid]$

or

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$ crsctl query crs softwareversion

Oracle Clusterware version on node [Node2] is [11.2.0.3.0]

Node2[/u02/oracle/11.2.0.3/grid/OPatch]$

 

 

clusterware version on Node2

-----------------------------

 

Node2[/u02/oracle/11.2.0.3/grid]$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [11.2.0.3.0]

Node2[/u02/oracle/11.2.0.3/grid]$ crsctl query crs releaseversion

Oracle High Availability Services release version on the local node is [11.2.0.3.0]

Node2[/u02/oracle/11.2.0.3/grid]$ crsctl query crs softwareversion

Oracle Clusterware version on node [Node2] is [11.2.0.3.0]

Node2[/u02/oracle/11.2.0.3/grid]$

 

12) Run the CVU pre-upgrade check tool to ensure all required prerequisites are met, and check the memory_max_target size of the ASM instance

 

13) Create the new Grid home directory on both nodes as the root user.

id

 

[Node1:root:/u02/oracle:] mkdir 11.2.0.4

[Node1:root:/u02/oracle:]  chown  root:oinstall /u02/oracle/11.2.0.4

[Node1:root:/u02/oracle:] ls -ltr

total 8

drwxr-xr-x    3 root     oinstall        256 May 07 2013  11.2.0.3

drwxr-xr-x    7 oracrs   oinstall        256 May 09 2013  PRD

drwxrwx---    6 oracrs   oinstall       4096 May 10 2013  oraInventory

drwxr-xr-x    5 oracrs   oinstall        256 Jul 25 2013  network

drwxr-xr-x    2 root     system          256 Aug 30 11:46 11.2.0.4.0

drwxr-xr-x    2 root     oinstall        256 Aug 30 11:52 11.2.0.4

[Node1:root:/u02/oracle:]

 

[Node1:root:/u02/oracle:] chmod  775 /u02/oracle/11.2.0.4/

[Node1:root:/u02/oracle:]

 

From grid user

----------------

/oswatcher/upgrade/grid/runcluvfy.sh stage -pre crsinst -upgrade -n Node1,Node2 -rolling -src_crshome /u02/oracle/11.2.0.3/grid -dest_crshome /u02/oracle/11.2.0.4/grid -dest_version 11.2.0.4.0 -fixup -fixupdir /oswatcher/cluster_upgradlog/fixup -verbose | tee /oswatcher/upgrade/CH9283/runcluvfy_a_fix14jul.out

 or

 

/oswatcher/upgrade/grid/runcluvfy.sh stage -pre crsinst -upgrade -n Node1,Node2 -rolling -src_crshome /u02/oracle/11.2.0.3/grid -dest_crshome /u02/oracle/11.2.0.4/grid -dest_version 11.2.0.4.0 -fixup -fixupdir /oswatcher/upgrade/CH11364/fixup -verbose | tee /oswatcher/upgrade/CH11364/runcluvfy_a_fix05aug2016.out

 

cat /oswatcher/upgrade/CH11364/runcluvfy_a_fix05aug2016.out   |grep -i failed

cat /oswatcher/upgrade/CH11364/runcluvfy_a_fix05aug2016.out  |grep -i war

 

---failed result on Node1-------

 

Pre-check for cluster services setup was unsuccessful on all the nodes.

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$ cat /oswatcher/upgrade/CH11364/runcluvfy_a_fix30aug.out   |grep -i failed

  Node1     10GB (1.048576E7KB)       16GB (1.6777216E7KB)      failed

  Node2     10GB (1.048576E7KB)       16GB (1.6777216E7KB)      failed

Result: Swap space check failed

  Node1     missing                   Patch IZ97457             failed

  Node2     missing                   Patch IZ97457             failed

Result: Operating system patch check failed for "Patch IZ97457"

  Node1     unknown                   Patch IZ88711|IZ89514     failed

  Node2     unknown                   Patch IZ88711|IZ89514     failed

Result: Operating system patch check failed for "Patch IZ88711|IZ89514"

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$ df -g .

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/u02          60.00     36.65   39%   389512     5% /u02

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$ nmon

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$

 

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$ lsps -s

Total Paging Space   Percent Used

      10240MB               1%

Node1[/u02/oracle/11.2.0.3/grid/OPatch]$
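
The swap space check above failed (10GB configured against 16GB expected). If you choose to remediate rather than record it as an accepted deviation, AIX paging space can be grown online; a sketch, assuming the default paging space hd6 and free space in its volume group (size the LP count for your PP size):

# as root
lsps -a
chps -s 8 hd6    # -s <n> adds n logical partitions to the paging space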

 

 

Node1[/home/oracrs]$

Node1[/home/oracrs]$ env

_=/usr/bin/env

LANG=en_US

LOGIN=oracrs

PATH=/u02/oracle/11.2.0.3/grid/bin:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java6_64/jre/bin:/usr/java6_64/bin:/usr/local/bin

EXTENDED_HISTORY=ON

ORACLE_BASE=/u02/oracle/PRD

LC__FASTMSG=true

HISTFILE=/var/adm/hist/oracrs/.sh_history.eibm994:oracrs

LOGNAME=oracrs

MAIL=/usr/spool/mail/oracrs

ORACLE_SID=+ASM1

LOCPATH=/usr/lib/nls/loc

USER=oracrs

AUTHSTATE=files

SHELL=/usr/bin/ksh

ODMDIR=/etc/objrepos

HISTSIZE=1024

TIMEOUT=600

TMOUT=600

HOME=/home/oracrs

TERM=xterm

MAILMSG=[YOU HAVE NEW MAIL]

ORACLE_HOME=/u02/oracle/11.2.0.3/grid

PWD=/home/oracrs

TZ=GMT0BST,M3.5.0,M10.5.0

AIXTHREAD_SCOPE=S

A__z=! LOGNAME="*TMOUT

NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/%L/%N.cat

Node1[/home/oracrs]$

 

 

 

SQL> show parameter memory

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

memory_max_target                    big integer 272M

memory_target                        big integer 272M

SQL>

 

 

14) Check what is currently running (database instances, listeners, OSWatcher) on Node1 and Node2 before stopping the OEM agent and OSWatcher

 

 

Node1[/u02/oracle/11.2.0.3/grid]$ ps -ef|grep pmon

 fusprod  3473582        1   0   Jun 15      -  3:43 ora_pmon_FSPROD1

  oracrs 12845076        1   0   Jun 15      -  1:26 asm_pmon_+ASM1

  fuoprd 15597792        1   0   Jun 15      -  5:17 ora_pmon_fuoprd1

  fubprd 17760296        1   0   Jun 15      -  3:20 ora_pmon_fubprd1

  oracrs 38862906 28836502   0 20:22:40  pts/1  0:00 grep pmon

Node1[/u02/oracle/11.2.0.3/grid]$ ps -ef|grep tns

  oracrs 11141142        1   0   Jun 15      -  2:46 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN1 -inherit

  oracrs 11468890        1   0   Jun 15      - 18:53 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER -inherit

  oracrs 66453796 28836502   0 20:22:49  pts/1  0:00 grep tns

Node1[/u02/oracle/11.2.0.3/grid]$ ps -ef|grep oswatcher

  oracrs 49021146 28836502   0 20:23:03  pts/1  0:00 grep oswatcher

Node1[/u02/oracle/11.2.0.3/grid]$

 

 

Node2[/u02/oracle/11.2.0.3/grid]$

Node2[/u02/oracle/11.2.0.3/grid]$ ps -ef|grep tns

  oracrs 14418170        1   1   Oct 18      - 568:33 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER -inherit

  oracrs 42795434 54919540   0 20:24:52  pts/0  0:00 grep tns

  oracrs 21692996        1   0   Jun 15      -  2:49 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN2 -inherit

  oracrs 47317996        1   0   Jun 15      -  2:48 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN3 -inherit

Node2[/u02/oracle/11.2.0.3/grid]$ ps -ef|grep pmon

  oracrs 12714058        1   0   Oct 18      - 46:59 asm_pmon_+ASM2

  fuoprd 14745610        1   0   Oct 18      - 178:42 ora_pmon_fuoprd2

 fusprod 15335424        1   0   Oct 18      - 109:26 ora_pmon_FSPROD2

  fubprd 19202122        1   0   Oct 18      - 112:28 ora_pmon_fubprd2

  oracrs  8913808 54919540   0 20:25:03  pts/0  0:00 grep pmon

Node2[/u02/oracle/11.2.0.3/grid]$ ps -ef|grep oswatcher

  oracrs 25427980  4522948   0   Jan 18      - 23:52 /bin/sh ./OSWatcherFM.sh 360 /oswatcher/oswbb/archive

  oracrs  7012788 54919540   0 20:25:12  pts/0  0:00 grep oswatcher

Node2[/u02/oracle/11.2.0.3/grid]$

 

 

 

 

Implementation steps

--------------------

Stop the OEM agent and OSWatcher on Node1/Node2

 

1) Stop the agent on both nodes

  emctl stop agent

 $ORACLE_HOME/bin/emctl stop agent

$ORACLE_HOME/bin/emctl stop dbconsole

 

 

 

2) Stop OSWatcher

 

./stopOSWbb.sh
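
For later (plan step 9), restarting them is the mirror image; a sketch, assuming the OSWatcher Black Box install under /oswatcher/oswbb seen in the ps output earlier (the 60-second snapshot interval and 10-hour archive retention are assumptions):

$ORACLE_HOME/bin/emctl start agent
cd /oswatcher/oswbb
nohup ./startOSWbb.sh 60 10 &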

 

 

Node2[/home/oracrs]$ sqlplus

 

SQL*Plus: Release 11.2.0.3.0 Production on Wed Jun 22 23:14:32 2016

 

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

 

Enter user-name: / as sysdba

 

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Real Application Clusters and Automatic Storage Management options

 

SQL> show parameter db

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

db_cache_size                        big integer 0

db_ultra_safe                        string      OFF

db_unique_name                       string      +ASM

 

SQL> show parameter target

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

memory_max_target                    big integer 304M

memory_target                        big integer 304M

pga_aggregate_target                 big integer 0

sga_target                           big integer 0

SQL> show parameter max

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

max_dump_file_size                   string      unlimited

memory_max_target                    big integer 304M

sga_max_size                         big integer 308M

SQL>

 

 

SQL> alter system set memory_max_target=1536m scope=spfile;

 

System altered.

 

SQL> alter system set memory_target=1536m scope=spfile;

 

System altered.

 

SQL> show parameter memory_max_target

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

memory_max_target                    big integer 272M

Note: memory_max_target still shows the old 272M value because the change was made with scope=spfile; it takes effect only after the ASM instance restart performed below, one node at a time.

 

srvctl stop instance -d testdb -i testdb1 -o immediate

 

 srvctl stop asm -n Node1

 

or

 

stop the complete Clusterware stack

 

 cd /u02/oracle/11.2.0.3/grid/bin

 ./crsctl stop crs

 

 

 ps -ef|grep pmon

 

 

To start Clusterware and then the database instance on Node1

 

 

srvctl start asm -n Node1

 

or

 

start the complete Clusterware stack

 cd /u02/oracle/11.2.0.3/grid/bin

 ./crsctl start crs

 

./crsctl check crs

 

 

then start the database:

 

srvctl start instance -d testdb  -i testdb1

 

 

Node1[/home/eibm994]$

Node1[/home/eibm994]$ ps -ef|grep pmon

 eibm994 13697098 11731206   0 00:38:15  pts/1  0:00 grep pmon

  e2bprd 14614660        1   0 00:25:00      -  0:00 ora_pmon_e2b01prd1

  oracrs  9372102        1   0 00:22:50      -  0:00 asm_pmon_+ASM1

Node1[/home/eibm994]$ ps -ef|grep tns

  oracrs  8126520        1   0 00:31:34      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN3 -inherit

 eibm994 10027022 11731206   0 00:38:20  pts/1  0:00 grep tns

  oracrs 13435028        1   0 00:31:35      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN2 -inherit

  oracrs 11337984        1   0 00:23:12      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER -inherit

Node1[/home/eibm994]$

 

 

And the same for Node2:

 

Node2[/home/oracrs]$ id

uid=350(oracrs) gid=207(oinstall) groups=1(staff),203(dba),210(asmadmin),212(asmdba),213(asmoper)

Node2[/home/oracrs]$

 

 

srvctl stop instance -d testdb -i testdb2 -o immediate 

 

 srvctl stop asm -n Node2

 

or

 

stop the complete Clusterware stack

 cd /u02/oracle/11.2.0.3/grid/bin

 ./crsctl stop crs

 

 srvctl stop asm -n Node2

 

./crsctl check crs

 

 ps -ef|grep pmon

 

 

 

 

------start clusterware then database-----------

srvctl start asm -n Node2

or

start the complete Clusterware stack

 cd /u02/oracle/11.2.0.3/grid/bin

 ./crsctl start crs

 

./crsctl check crs

 

CRS-4533: Event Manager is online

[Node2:root:/u02/oracle/11.2.0.3/grid/bin:] ps -ef|grep pmon

  oracrs 13959340        1   0 00:35:23      -  0:00 asm_pmon_+ASM2

    root 29425910 21823716   0 00:36:53  pts/1  0:00 grep pmon

[Node2:root:/u02/oracle/11.2.0.3/grid/bin:] ps -ef|grep tns

  oracrs  5374024        1   0   Aug 01      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr listener_adm -inherit

  oracrs 11534416        1   0 00:35:49      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN1 -inherit

  oracrs 15073348        1   0 00:35:48      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER -inherit

    root 29425676 21823716   0 00:36:59  pts/1  0:00 grep tns

[Node2:root:/u02/oracle/11.2.0.3/grid/bin:]

 

 

To start the database instance on Node2

 

srvctl start instance -d testdb  -i testdb2

 

 

Log in to the ASM instance and check memory_target on both nodes

 

SQL> show parameter target

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

memory_max_target                    big integer 1536M

memory_target                        big integer 1536M

pga_aggregate_target                 big integer 0

sga_target                           big integer 0

SQL> show parameter max

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

max_dump_file_size                   string      unlimited

memory_max_target                    big integer 1536M

sga_max_size                         big integer 1536M

SQL>

 

SQL>

SQL>

Node2[/home/oracrs]$

Node2[/home/oracrs]$ ps -ef|grep pmon

  oracrs 13959340        1   0 00:35:23      -  0:00 asm_pmon_+ASM2

  e2bprd 23134368        1   0 00:38:07      -  0:00 ora_pmon_e2b01prd2

Node2[/home/oracrs]$ ps -ef|grep tns

  oracrs  5374024        1   0   Aug 01      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr listener_adm -inherit

  oracrs  7602332 12648646   0 00:44:33  pts/1  0:00 grep tns

  oracrs 11534416        1   0 00:35:49      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER_SCAN1 -inherit

  oracrs 15073348        1   0 00:35:48      -  0:00 /u02/oracle/11.2.0.3/grid/bin/tnslsnr LISTENER -inherit

Node2[/home/oracrs]$

 


Node2[/home/oracrs]$ crs_stat -t

Name           Type           Target    State     Host

------------------------------------------------------------

ora....ATA1.dg ora....up.type ONLINE    ONLINE    Node2

ora....FRA1.dg ora....up.type ONLINE    ONLINE    Node2

ora....EDO1.dg ora....up.type ONLINE    ONLINE    Node2

ora....EDO2.dg ora....up.type ONLINE    ONLINE    Node2

ora....ER.lsnr ora....er.type ONLINE    ONLINE    Node2

ora....N1.lsnr ora....er.type ONLINE    ONLINE    Node2

ora....N2.lsnr ora....er.type ONLINE    ONLINE    Node1

ora....N3.lsnr ora....er.type ONLINE    ONLINE    Node1

ora.VOTING.dg  ora....up.type ONLINE    ONLINE    Node2

ora.asm        ora.asm.type   ONLINE    ONLINE    Node2

ora.cvu        ora.cvu.type   ONLINE    ONLINE    Node1

ora....1prd.db ora....se.type ONLINE    ONLINE    Node1

ora....SM2.asm application    ONLINE    ONLINE    Node2

ora....LD.lsnr application    ONLINE    ONLINE    Node2

ora....eld.gsd application    OFFLINE   OFFLINE

ora....eld.ons application    ONLINE    ONLINE    Node2

ora....eld.vip ora....t1.type ONLINE    ONLINE    Node2

ora....SM1.asm application    ONLINE    ONLINE    Node1

ora....OD.lsnr application    ONLINE    ONLINE    Node1

ora....ood.gsd application    OFFLINE   OFFLINE

ora....ood.ons application    ONLINE    ONLINE    Node1

ora....ood.vip ora....t1.type ONLINE    ONLINE    Node1

ora.gsd        ora.gsd.type   OFFLINE   OFFLINE

ora....network ora....rk.type ONLINE    ONLINE    Node2

ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    Node1

ora.ons        ora.ons.type   ONLINE    ONLINE    Node2

ora....ry.acfs ora....fs.type ONLINE    ONLINE    Node2

ora.scan1.vip  ora....ip.type ONLINE    ONLINE    Node2

ora.scan2.vip  ora....ip.type ONLINE    ONLINE    Node1

ora.scan3.vip  ora....ip.type ONLINE    ONLINE    Node1

Node2[/home/oracrs]$

 

Node2[/home/oracrs]$ . oraenv

ORACLE_SID = [+ASM2] ?

The Oracle base remains unchanged with value /u02/oracle/PRD

Node2[/home/oracrs]$ cd $ORACLE_HOME

Node2[/u02/oracle/11.2.0.3/grid]$ cd ../..

Node2[/u02/oracle]$ ls -ltr

total 0

drwxr-xr-x    3 root     oinstall        256 May 07 2013  11.2.0.3

drwxrwx---    5 oracrs   oinstall        256 May 10 2013  oraInventory

drwxr-xr-x    5 oracrs   oinstall        256 May 28 2013  network

drwxr-xr-x    6 oracrs   oinstall        256 Jun 25 2013  PRD

drwxrwxr-x    2 root     oinstall        256 Aug 30 11:55 11.2.0.4

Node2[/u02/oracle]$ crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   42941f8e34314f1ebf0738e18d29e41c (/dev/rhdisk3) [VOTING]

Located 1 voting disk(s).

Node2[/u02/oracle]$  ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          3

         Total space (kbytes)     :     262120

         Used space (kbytes)      :       3152

         Available space (kbytes) :     258968

         ID                       : 1814067896

         Device/File Name         :    +VOTING

                                    Device/File integrity check succeeded

 

                                    Device/File not configured

 

                                    Device/File not configured

 

                                    Device/File not configured

 

                                    Device/File not configured

 

         Cluster registry integrity check succeeded

 

         Logical corruption check bypassed due to non-privileged user

 

Node2[/u02/oracle]$

 

 

Node2[/u02/oracle]$ crsctl check cluster -all

**************************************************************

Node2:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

Node1:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

Node2[/u02/oracle]$

 

Node2[/u02/oracle]$ srvctl config database -d  testdb

Database unique name: testdb

Database name: e2b01prd

Oracle home: /u01/oracle/testdb/product/11.2.0.3

Oracle user: testuser

Spfile: +E2B01PRD_DATA1/testdb/spfilee2b01prd.ora

Domain: world

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: e2b01prd

Database instances: testdb2,testdb1

Disk Groups: testdb_DATA1,testdb_FRA1,testdb_REDO1,testdb_REDO2

Mount point paths:

Services:

Type: RAC

Database is administrator managed

Node2[/u02/oracle]$

 

Verify the qosadmin user:

 

There is a known bug involving a qosadmin user AUTH failure, so check this before the upgrade:

 

GI Upgrade from 11.2.0.3.6+ to 11.2.0.4 or 12.1.0.1 Fails with User(qosadmin) is deactivated. AUTH FAILURE. (Doc ID 1577072.1)

otherwise rootupgrade.sh will fail with error "Failed to perform J2EE (OC4J) Container Resource upgrade at /haclu/64bit/11.2.0.4/grid/crs/install/crsconfig_lib.pm line 9323"

 

 

[root@node1 ~]# /data01/app/11.2.0/grid_11203/bin/qosctl qosadmin -listusers

AbstractLoginModule password: ---> Default password is oracle112     

oc4jadmin

JtaAdmin

qosadmin

 

Node2[/u02/oracle]$ qosctl qosadmin -listusers

AbstractLoginModule password: 2

JtaAdmin

oc4jadmin

qosadmin

 

Node2[/u02/oracle]$

 

 

 

Upgrade implementation

-------------------------------

 

./runInstaller -updateNodeList ORACLE_HOME=/u02/oracle/11.2.0.3/grid "CLUSTER_NODES=Node1,Node2" CRS=true

 

/oui/bin/runInstaller -updateNodeList ORACLE_HOME= "CLUSTER_NODES={Node1,Node2}" CRS=true

 

4) Unzip the software zips into the same directory (do not create a different directory for each zip file)

5) Unset the Oracle-related environment variables (especially ORA_CRS_HOME) for both the root and grid users before starting OUI or before executing any root script; a quick verification is sketched after the variable list below.

 

$ unset ORACLE_BASE

$ unset ORACLE_HOME

$ unset ORACLE_HOSTNAME

$ unset ORACLE_SID

$ unset ORACLE_UNQNAME

$ unset TNS_ADMIN

$ unset CRS_HOME

$ unset ORA_NLS10

 

# set a minimal PATH; do not re-add the old $ORACLE_HOME/bin after unsetting it
export PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java14_64/jre/bin:/usr/java5/jre/bin:/usr/java5/bin:/opt/IBM/cmdb/nmap-4.76:/usr/local/bin

export IGNORE_PREADDNODE_CHECKS=Y 
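
A quick sanity check that nothing Oracle-related is left in the environment before launching OUI (the grep pattern is just a suggestion):

env | egrep -i 'ORACLE|ORA_|TNS|CRS'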

 

6) As the root user, create a new directory for the Oracle Grid software on both nodes (not required if it was already created during cluster verification)

 

mkdir -p /u02/oracle/11.2.0.4/grid

chown -R oracrs:dba /u02/oracle/11.2.0.4/grid

 

[Node1:root:/u02/oracle:] ls -ltr

total 8

drwxr-xr-x    3 root     oinstall        256 May 07 2013  11.2.0.3

drwxr-xr-x    7 oracrs   oinstall        256 May 09 2013  PRD

drwxrwx---    6 oracrs   oinstall       4096 May 10 2013  oraInventory

drwxr-xr-x    5 oracrs   oinstall        256 Jul 25 2013  network

drwxr-xr-x    2 root     system          256 Aug 30 11:46 11.2.0.4.0

[Node1:root:/u02/oracle:]

 

or

 

 

chown  root:oinstall /u02/oracle/11.2.0.4

ls -ltr

chmod  775 /u02/oracle/11.2.0.4/

 

7) Install the 11.2.0.4 software via the GUI as the grid user. Launch runInstaller from the staged software directory (here /oswatcher/upgrade/grid, as shown in the transcripts below), giving /u02/oracle/11.2.0.4/grid as the software location:

$ cd /oswatcher/upgrade/grid

      $ ./runInstaller

Starting Oracle Universal Installer...

The OUI installation log is located at /u02/oracle/oraInventory/logs.

For OUI installations or execution of critical scripts, it is recommended to use VNC to avoid problems in case the connection to the server is lost.

 

Run rootpre.sh on both nodes as the root user

 

[Node1:root:/oswatcher/upgrade/grid:] ls -ltr

total 160

drwxr-xr-x    2 oracrs   oinstall        256 Oct 09 2013  sshsetup

-rwxr-xr-x    1 oracrs   oinstall       4878 Oct 09 2013  runcluvfy.sh

-rwxr-xr-x    1 oracrs   oinstall      10306 Oct 09 2013  runInstaller

drwxr-xr-x    2 oracrs   oinstall        256 Oct 09 2013  rpm

-rwxr-xr-x    1 oracrs   oinstall      14047 Oct 09 2013  rootpre.sh

drwxr-xr-x    3 oracrs   oinstall        256 Oct 09 2013  rootpre

drwxr-xr-x    2 oracrs   oinstall        256 Oct 09 2013  response

drwxr-xr-x    4 oracrs   oinstall       4096 Oct 09 2013  install

drwxr-xr-x   14 oracrs   oinstall       4096 Oct 09 2013  stage

-rw-r--r--    1 oracrs   oinstall        500 Oct 09 2013  welcome.html

-rw-r--r--    1 oracrs   oinstall      30016 Oct 09 2013  readme.html

[Node1:root:/oswatcher/upgrade/grid:] ./rootpre.sh

./rootpre.sh output will be logged in /tmp/rootpre.out_16-09-05.01:13:14

 

Checking if group services should be configured....

Nothing to configure.

[Node1:root:/oswatcher/upgrade/grid:]

 

[Node2:root:/oswatcher/upgrade/grid:] ls -ltr

total 160

drwxr-xr-x    2 oracrs   oinstall        256 Oct 09 2013  sshsetup

-rwxr-xr-x    1 oracrs   oinstall       4878 Oct 09 2013  runcluvfy.sh

-rwxr-xr-x    1 oracrs   oinstall      10306 Oct 09 2013  runInstaller

drwxr-xr-x    2 oracrs   oinstall        256 Oct 09 2013  rpm

-rwxr-xr-x    1 oracrs   oinstall      14047 Oct 09 2013  rootpre.sh

drwxr-xr-x    3 oracrs   oinstall        256 Oct 09 2013  rootpre

drwxr-xr-x    2 oracrs   oinstall        256 Oct 09 2013  response

drwxr-xr-x    4 oracrs   oinstall       4096 Oct 09 2013  install

drwxr-xr-x   14 oracrs   oinstall       4096 Oct 09 2013  stage

-rw-r--r--    1 oracrs   oinstall        500 Oct 09 2013  welcome.html

-rw-r--r--    1 oracrs   oinstall      30016 Oct 09 2013  readme.html

[Node2:root:/oswatcher/upgrade/grid:] ./rootpre.sh

./rootpre.sh output will be logged in /tmp/rootpre.out_16-09-05.01:14:32

 

Checking if group services should be configured....

Nothing to configure.

[Node2:root:/oswatcher/upgrade/grid:]

 

 

Start the installer, and select the option to upgrade an existing Oracle Clusterware and Oracle ASM installation.

 

    On the node selection page, select all nodes.

 

    Select installation options as prompted.

 

    When prompted, run the rootupgrade.sh script on each node in the cluster that you want to upgrade.

 

    Run the script on the local node first. The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation.
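
A sketch of the run order for this two-node cluster, using the new Grid home from this post:

# Node1 (local node) first, as root:
[Node1:root:/u02/oracle/11.2.0.4:] ./rootupgrade.sh
# only after it completes successfully, Node2 (last node):
[Node2:root:/u02/oracle/11.2.0.4:] ./rootupgrade.sh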

 

Give the new Grid home path as below:

Node1[/u02/oracle]$ ls -ltr

total 8

drwxr-xr-x    3 root     oinstall        256 May 07 2013  11.2.0.3

drwxr-xr-x    7 oracrs   oinstall        256 May 09 2013  PRD

drwxrwx---    6 oracrs   oinstall       4096 May 10 2013  oraInventory

drwxr-xr-x    5 oracrs   oinstall        256 Jul 25 2013  network

drwxr-xr-x    2 root     system          256 Aug 30 11:46 11.2.0.4.0

drwxrwxr-x    2 root     oinstall        256 Aug 30 11:52 11.2.0.4

 

 

Node1[/u02/oracle/11.2.0.4]$ pwd

/u02/oracle/11.2.0.4

Node1[/u02/oracle/11.2.0.4]$

 

 

software location = /u02/oracle/11.2.0.4/grid

not

/u02/oracle/11.2.0.4/grid/ (no trailing slash)

 

 

 

From the root user (rootupgrade.sh is run from the new Grid home)

----------------

[root@kens-racnode1 OPatch]# /u01/app/11.2.0.3/grid/rootupgrade.sh

 

 

[Node1:root:/u02/oracle/11.2.0.4:]

[Node1:root:/u02/oracle/11.2.0.4:] ls -ltr root*

-rwxr-xr-x    1 oracrs   oinstall        480 Sep 05 01:34 rootupgrade.sh

-rwxr-x---    1 oracrs   oinstall        467 Sep 05 01:34 root.sh

[Node1:root:/u02/oracle/11.2.0.4:] id

uid=0(root) gid=0(system) groups=208(tivlogs)

[Node1:root:/u02/oracle/11.2.0.4:] rootupgrade.sh

ksh: rootupgrade.sh:  not found.

[Node1:root:/u02/oracle/11.2.0.4:] ./rootupgrade.sh

Performing root user operation for Oracle 11g

 

The following environment variables are set as:

    ORACLE_OWNER= oracrs

    ORACLE_HOME=  /u02/oracle/11.2.0.4

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: n

The contents of "coraenv" have not changed. No need to overwrite.

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u02/oracle/11.2.0.4/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

 

Or

 

 

srvctl upgrade database -d DATABASENAME -o '/u01/app/oracle/12.1.0.2'

 

If you are upgrading a cluster database from releases 10.2, 11.1, or 11.2, then upgrade the database configuration in Oracle Clusterware using the following command:

 

$ srvctl upgrade database -d db-unique-name -o oraclehome
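
For example, with the testdb database shown earlier in this post (the new database home path here is hypothetical):

$ srvctl upgrade database -d testdb -o /u01/oracle/testdb/product/11.2.0.4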

 

 

Node1[/home/oracrs]$

Node1[/home/oracrs]$ cat .profile

PATH=$PATH:$HOME/bin:/usr/local/bin:.

 

export PATH

 

if [ -s $MAIL ]           # This is at Shell startup.  In normal

then echo $MAILMSG        # operation, the Shell checks

fi                          # periodically.

 

umask 022

AIXTHREAD_SCOPE=S; export AIXTHREAD_SCOPE

unset  ORACLE_HOME ORACLE_SID ORACLE_BASE

export ORACLE_BASE=/u02/oracle/PRD

export ORACLE_HOME=/u02/oracle/11.2.0.3/grid

export ORACLE_SID=+ASM1

#export ORACLE_UNQNAME=

 

export PATH=${ORACLE_HOME}/bin:${PATH}

 

alias oh='cd $ORACLE_HOME'

# alias otrace= 'cd ${ORACLE_BASE}/diag/rdbms/${ORACLE_UNQNAME}/${ORACLE_SID}/trace'

 

set -o vi

 

Node1[/home/oracrs]$ ps -ef|grep pmon

  oracrs  8060936        1   0 01:57:33      -  0:00 asm_pmon_+ASM1

  e2bprd  1245504        1   0 01:58:17      -  0:00 ora_pmon_e2b01prd1

  oracrs  9896352  8192044   0 02:00:43  pts/2  0:00 grep pmon

Node1[/home/oracrs]$

Node1[/home/oracrs]$

Node1[/home/oracrs]$

Node1[/home/oracrs]$ ps -ef|grep tns

  oracrs  6881398        1   0 01:58:13      -  0:00 /u02/oracle/11.2.0.4/bin/tnslsnr LISTENER -inherit

  oracrs 16777426        1   0 01:58:13      -  0:00 /u02/oracle/11.2.0.4/bin/tnslsnr LISTENER_SCAN1 -inherit

Node1[/home/oracrs]$

 

 

SQL> select inst_id,count(1) from gv$session group by inst_id;

 

   INST_ID   COUNT(1)

---------- ----------

         1         51

         2         55

 

SQL>

 

Node1[/home/oracrs]$

Node1[/home/oracrs]$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [11.2.0.3.0]

Node1[/home/oracrs]$ crsctl query crs releaseversion

Oracle High Availability Services release version on the local node is [11.2.0.3.0]

Node1[/home/oracrs]$ crsctl query crs softwareversion

Oracle Clusterware version on node [Node1] is [11.2.0.4.0]

Node1[/home/oracrs]$

Note: the active version remains [11.2.0.3.0] until rootupgrade.sh completes on the last node, even though the software version on Node1 already shows [11.2.0.4.0].

 

 

-rw-r-----    1 oracrs   oinstall       8327 Sep 05 01:46 AttachHome2016-09-05_01-14-49AM.log.Node2

-rw-r-----    1 oracrs   oinstall          0 Sep 05 02:27 oraInstall2016-09-05_02-27-17AM.err

-rw-r-----    1 oracrs   oinstall          0 Sep 05 02:27 oraInstall2016-09-05_02-27-17AM.err.Node2

-rw-r-----    1 oracrs   oinstall         33 Sep 05 02:28 oraInstall2016-09-05_02-27-17AM.out.Node2

-rw-r-----    1 oracrs   oinstall       8416 Sep 05 02:28 UpdateNodeList2016-09-05_02-27-17AM.log.Node2

-rw-r-----    1 oracrs   oinstall         33 Sep 05 02:28 oraInstall2016-09-05_02-27-17AM.out

-rw-r-----    1 oracrs   oinstall      12506 Sep 05 02:28 UpdateNodeList2016-09-05_02-27-17AM.log

-rw-r-----    1 oracrs   oinstall          0 Sep 05 02:28 oraInstall2016-09-05_02-28-19AM.err

-rw-r-----    1 oracrs   oinstall          0 Sep 05 02:28 oraInstall2016-09-05_02-28-19AM.err.Node2

-rw-r-----    1 oracrs   oinstall         33 Sep 05 02:29 oraInstall2016-09-05_02-28-19AM.out.Node2

-rw-r-----    1 oracrs   oinstall       8446 Sep 05 02:29 UpdateNodeList2016-09-05_02-28-19AM.log.Node2

-rw-r-----    1 oracrs   oinstall         33 Sep 05 02:29 oraInstall2016-09-05_02-28-19AM.out

-rw-r-----    1 oracrs   oinstall      12654 Sep 05 02:29 UpdateNodeList2016-09-05_02-28-19AM.log

-rw-r-----    1 oracrs   oinstall    1818777 Sep 05 02:32 installActions2016-09-05_01-14-49AM.log

Node1[/u02/oracle/oraInventory/logs]$ cat installActions2016-09-05_01-14-49AM.log

 

 

 

8) Start Clusterware and all database instances (if not started automatically)

 

  After the script completes successfully, you can run the script in parallel on all nodes except for one, which you select as the last node. When the script is run successfully on all the nodes except the last node, run the script on the last node.

 

9) Run rootupgrade.sh on Node2 (the last node)

 

10) Change the Grid home path in /etc/oratab (see the sketch below)
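
A sketch of the oratab edit (AIX sed has no -i flag, hence the copy; the backup file name is arbitrary, and the new home path is the one reported by the new listener processes above):

# as root, on each node
cp /etc/oratab /etc/oratab.pre11204
sed 's,/u02/oracle/11.2.0.3/grid,/u02/oracle/11.2.0.4,' /etc/oratab.pre11204 > /etc/oratab
grep ASM /etc/oratab    # verify the +ASMn entry points at the new home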

 

Post Implementation

-------------------

1) Check version

 

select * from v$version;

 

select group_number,instance_name,db_name,software_version from gv$asm_client;

 

2) Verify the cluster health from all nodes

      $ crsctl check cluster -all

3)  To list all existing nodes in a cluster

      $ olsnodes -n

 

4) ps -ef |grep css

 

5) Check the Oracle Clusterware active version on the cluster and the Oracle High Availability Services release version on both nodes.

      crsctl query crs activeversion

      crsctl query crs releaseversion

      crsctl query crs softwareversion

 

The Oracle base remains unchanged with value /u02/oracle/PRD

Node1[/home/oracrs]$ ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          3

         Total space (kbytes)     :     262120

         Used space (kbytes)      :       3228

         Available space (kbytes) :     258892

         ID                       : 1814067896

         Device/File Name         :    +VOTING

                                    Device/File integrity check succeeded

 

                                    Device/File not configured

 

                                    Device/File not configured

 

                                    Device/File not configured

 

                                    Device/File not configured

 

         Cluster registry integrity check succeeded

 

         Logical corruption check bypassed due to non-privileged user

 

 

 

6)

   crsctl stat res -t

7)

    Cluster Health Check with CVU

Starting with 11.2.0.3, the cluster verification utility (cluvfy) is capable of carrying out the post-Clusterware and Database installation health checks. With the new -healthcheck argument, cluster and database components best practices, mandatory requirements, deviation, and proper functionality can be verified.

The following example collects detailed information about best-practice recommendations for Clusterware in an HTML file named cvucheckreport_<timestamp>.htm:

$ ./cluvfy comp healthcheck -collect cluster -bestpractice -html

When no further arguments are attached to the healthcheck parameter, both the Clusterware and Database checks are carried out. In the following example no -html argument is specified, so the output is stored in a text file:

$./cluvfy comp healthcheck

 

 

Node2[/home/oracrs]$ cluvfy comp healthcheck

 

Verifying OS mandatory requirements

 

Verifying Multicast check ...passed

Verifying Physical Memory ...passed

Verifying Available Physical Memory ...passed

Verifying Swap Size ...warning

Verifying Free Space: Node1:/u02/oracle/11.2.0.4 ...passed

Verifying Free Space: Node2:/u02/oracle/11.2.0.4 ...passed

Verifying Free Space: Node1:/tmp/ ...passed

Verifying Free Space: Node2:/tmp/ ...passed

Verifying User Existence: oracrs ...passed

Verifying Group Existence: oinstall ...passed

Verifying Group Membership: oinstall(Primary) ...passed

Verifying Run Level ...passed

Verifying Hard Limit: maximum open file descriptors ...passed

Verifying Soft Limit: maximum open file descriptors ...passed

Verifying Hard Limit: maximum user processes ...passed

Verifying Soft Limit: maximum user processes ...passed

Verifying Architecture ...passed

Verifying OS Kernel Version ...passed

Verifying OS Kernel Parameter: ncargs ...passed

Verifying OS Kernel Parameter: maxuproc ...passed

Verifying OS Kernel Parameter: tcp_ephemeral_low ...passed

Verifying OS Kernel Parameter: tcp_ephemeral_high ...passed

Verifying OS Kernel Parameter: udp_ephemeral_low ...passed

Verifying OS Kernel Parameter: udp_ephemeral_high ...passed

Verifying Package: bos.adt.base-... ...passed

Verifying Package: bos.adt.lib-... ...passed

Verifying Package: bos.adt.libm-... ...passed

Verifying Package: bos.perf.libperfstat-6.1.2.1 ...passed

Verifying Package: bos.perf.perfstat-... ...passed

Verifying Package: bos.perf.proctools-... ...passed

Verifying Package: xlC.aix61.rte-10.1.0.0 ...passed

Verifying Package: xlC.rte-10.1.0.0 ...passed

Verifying OS Patch:IZ97457 ...warning

Verifying OS Patch:IZ88711|IZ89514 ...warning

Verifying Users With Same UID ...passed

Verifying Current Group ID ...passed

Verifying Root user consistency ...passed

Verifying Network Time Protocol (NTP) ...passed

Verifying Same core file name pattern ...passed

Verifying User Not In Group: oracrs ...passed

Verifying User Mask ...passed

Verifying Task resolv.conf Integrity ...warning

Verifying Time zone consistency ...passed

Verifying UserID65535 ...passed

Verifying Kernel64Bit ...passed

Verifying Network parameter - ipqmaxlen ...met

Verifying Network parameter - rfc1323 ...met

Verifying Network parameter - sb_max ...met

Verifying Network parameter - tcp_sendspace ...met

Verifying Network parameter - tcp_recvspace ...met

Verifying Network parameter - udp_sendspace ...met

Verifying Network parameter - udp_recvspace ...met

 

 

Verifying OS Best Practice

 

Verifying availability of port 8888 ...met

Verifying Network parameter - ipqmaxlen ...met

Verifying Network parameter - rfc1323 ...met

Verifying Network parameter - sb_max ...met

Verifying Network parameter - tcp_sendspace ...met

Verifying Network parameter - tcp_recvspace ...met

Verifying Network parameter - udp_sendspace ...met

Verifying Network parameter - udp_recvspace ...met

 

 

Verifying Clusterware mandatory requirements

 

Verifying Node Connectivity ...passed

 

 

 

or

 

./orachk -u -o post

 

8) Change the Grid home path in OEM.

 

=================

 

     

A related issue seen after applying a PSU (on a 12.1.0 Grid home, node preston): the instances would not start with srvctl. We found Doc ID 1989922.1 and corrected the permissions as suggested. We did:

 

preston[/u02/oracle/12.1.0/grid/lib]$ ls -l libclntsh.so

-rwx------ 1 oracrs dba 61812395 May 17 15:44 libclntsh.so

preston[/u02/oracle/12.1.0/grid/lib]$ ls -l libclntshcore.so

-rwx------ 1 oracrs dba 4187247 May 17 15:44 libclntshcore.so

preston[/u02/oracle/12.1.0/grid/lib]$ ls -l libocci.so

-rwx------ 1 oracrs dba 4996222 May 17 15:44 libocci.so

preston[/u02/oracle/12.1.0/grid/lib]$ ls -l libttsh12.so

-rwx------ 1 oracrs dba 83299908 May 17 15:44 libttsh12.so

 

chmod 755 libclntsh.so libclntshcore.so libocci.so libttsh12.so

 

This appears to have fixed the issue and we can now start the instances with srvctl.

 

I would still like to know if, from the log files, the PSU appears to have been correctly applied. I noticed some warnings regarding SQL scripts associated with the patches that could not be run due to the instance being down. Do I need to run these manually?

 

 

 

Downgrade steps

 

=================

 

Downgrade Grid Infrastructure from 11.2.0.4 to 11.2.0.3

The environment used for this downgrade had a successful upgrade to 11.2.0.4 from 11.2.0.3

 

1) Check clusterware version on both nodes

 

crsctl query crs activeversion

crsctl query crs softwareversion

 

The old GI home (the 11.2.0.3 home) was neither uninstalled nor removed after the upgrade to 11.2.0.4.

 

Before proceeding with the downgrade, identify the "OCR node". OCR node is an Oracle term for the node which created the backup of the OCR when the GI was upgraded; this is usually the node on which rootupgrade.sh was run first during the upgrade. Look in the cdata folder inside the GI home and locate the backup OCR of the form ocr<old_version>.

 

$ pwd

/opt/app/11.2.0.3/grid/cdata

$ ls -l

 

2)

From Root user

 

In this two-node RAC, rootupgrade.sh was first run on Node1, so Node1 became the OCR node. The downgrade must be run last on this node.

On all nodes other than the OCR node, run rootcrs.pl with the downgrade option from the new GI home (the 11.2.0.4 home).

 

# $GI_HOME/crs/install/rootcrs.pl -downgrade -oldcrshome /u02/oracle/11.2.0.3/grid/  -version 11.2.0.3.0

       or

 

# $GI_HOME/crs/install/rootcrs.pl -downgrade -force -oldcrshome /u02/oracle/11.2.0.3/grid/  -version 11.2.0.3.0

 

 

If you want to stop a partial or failed 11g release 2 (11.2) installation and restore the previous release Oracle Clusterware, then use the -force flag with this command.

 

From root user

 

Once all the non-OCR nodes are downgraded, run the downgrade on the OCR node with the -lastnode option.

 

# $GI_HOME/crs/install/rootcrs.pl -downgrade -force -lastnode -oldcrshome /u02/oracle/11.2.0.3/grid/  -version 11.2.0.3.0

 

This script downgrades the OCR. If you want to stop a partial or failed 11g Release 2 (11.2) installation and restore the previous release Oracle Clusterware, then use the -force flag with this command

 

3)

 

Before starting CRS there are a few things to be done. As part of the upgrade to 11.2.0.4, Trace File Analyzer (TFA) is installed; remove the init scripts for TFA:

 

# cd /etc/init

# rm oracle-tfa.conf

rm: remove regular file `oracle-tfa.conf'? y

# rm ../init.d/init.tfa

rm: remove regular file `../init.d/init.tfa'? y

 

Verify that the Grid home in /etc/oratab points to the old GI home on both nodes:

 

cat /etc/oratab

+ASM1:/u02/oracle/11.2.0.3/grid/ :N            # line added by Agent

 

Remove gpnp profiles if the rootupgrade.sh was run concurrently on multiple nodes. Also remove the clusterware checkpoint files.

 

rm -rf  /u02/oracle/11.2.0.4/grid/gpnp/*

rm -rf $ORACLE_BASE/Clusterware/ckptGridHA_rhel6m1.xml

 

4)

 

Start the Clusterware stack on all nodes with crsctl start crs, and verify that the active version is the downgraded version.

 

$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [11.2.0.3.0]

$ crsctl query crs softwareversion

Oracle Clusterware version on node [rhel6m2] is [11.2.0.3.0]

 

Run ocrcheck to verify the OCR integrity:

 

# ocrcheck


 

ocrcheck -local

 

5)

Log in as the Grid owner (oracrs).

 

The inventory will still have CRS=true on the 11.2.0.4 home. Run runInstaller to set CRS=true on the 11.2.0.3 home (the old home).

 

a) Run the following command, where /u02/oracle/11.2.0.4/grid/ is the location of the new (upgraded) Grid home (11.2):

 

Grid_home/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u02/oracle/11.2.0.4/grid/

 

b) Run the same command with CRS=true and ORACLE_HOME set to the earlier Oracle Clusterware home (pre11.2_crs_home below represents the home directory of the earlier Oracle Clusterware installation):

 

Grid_home/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u02/oracle/11.2.0.3/grid/

 

As the last step, detach the 11.2.0.4 GI home from the inventory and remove it:

 ./runInstaller -detachHome ORACLE_HOME=/u02/oracle/11.2.0.4/grid/ -silent

 

Note: ORACLE_HOME is hard-coded in the oracrs user's .profile, so add a task to update it so you don't forget.
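
A quick check after the downgrade (the .profile shown earlier in this post is the file in question):

# as oracrs, confirm the profile points at the active (downgraded) home
grep ORACLE_HOME /home/oracrs/.profile
# expected: export ORACLE_HOME=/u02/oracle/11.2.0.3/grid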

 

 

 

 
