Friday, 27 September 2019

Grid Infrastructure 11.2.0.4 to 12.1.0.2.0 Upgrade for 2 Node RAC

STEP 1: Back Up the Oracle Software Before the Upgrade and Check the Current Cluster Details

·         Back up your Grid and database binaries on all nodes (a minimal sketch follows the status commands below).

    Before starting the upgrade, check and spool the existing status of the Grid Infrastructure and RDBMS resources:

crsctl stat res -t | tee /tmp/crsctl_bef_patch.txt <= Is anything other than gsd OFFLINE ?
crsctl query crs activeversion | tee /tmp/crsversion_bef_patch.txt
crsctl query crs softwareversion
crsctl stat res -p | tee /tmp/crs_stat_p_bef_patch.txt
crsctl query css votedisk | tee /tmp/qry_css_bef_patch.txt
ocrcheck | tee /tmp/ocrchk_bef_patch.txt
crsctl check cluster -all
srvctl status database -d orcl
srvctl config database -d orcl
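
A minimal sketch of the binary backups, assuming a /backup staging area on each node (the database home path below is only an example; adjust to your environment):

tar -cvf /backup/grid_home_11204_`hostname`.tar /data01/app/11.2.0/grid_11204
tar -cvf /backup/db_home_11204_`hostname`.tar /data01/app/oracle/product/11.2.0/db_1
# optionally, as root, take a manual OCR backup from the existing Grid home
/data01/app/11.2.0/grid_11204/bin/ocrconfig -manualbackup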

Create Directory Structure for 12c Grid infrastructure

11g Details:

ORACLE_BASE=/data01/app/grid
ORACLE_HOME=/data01/app/11.2.0/grid_11204

New ORACLE_HOME for 12c Grid Infrastructure on all cluster nodes:

ORACLE_BASE=/data01/app/grid
ORACLE_HOME=/data01/app/12C/grid_121020
cd /data01/app/
mkdir -p /data01/app/12C/grid_121020
chown -R grid:oinstall 12C/
chmod -R 775 12C/
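
The same directory structure must exist on node2 as well; a quick sketch using ssh as root (assuming root ssh access between the nodes):

ssh node2 "mkdir -p /data01/app/12C/grid_121020 && chown -R grid:oinstall /data01/app/12C && chmod -R 775 /data01/app/12C"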

About Oracle Grid Infrastructure and Oracle ASM Upgrade and Downgrade
       You can upgrade in rolling or non-rolling mode; we will follow the rolling mode. A rolling upgrade involves upgrading individual nodes without stopping Oracle Grid Infrastructure on the other nodes in the cluster.
       All upgrades are out-of-place upgrades, meaning that the software binaries are placed in a different Grid home from the Grid home used for the prior release.
      Download the 12c Grid binaries

Restrictions and Guidelines for Oracle Grid Infrastructure Upgrades



When you upgrade from Oracle Grid Infrastructure 11g or Oracle Clusterware and Oracle ASM 10g releases to Oracle Grid Infrastructure 12c Release 1 (12.1), you upgrade to a standard cluster configuration. You can enable Oracle Flex Cluster configuration after the upgrade.
If the Oracle Cluster Registry (OCR) and voting file locations for your current installation are on raw or block devices, then you must migrate them to Oracle ASM disk groups or shared file systems before upgrading to Oracle Grid Infrastructure 12c. How to Upgrade to 12c Grid Infrastructure if OCR or Voting File is on Raw/Block Device (Doc ID 1572925.1)
If you want to upgrade Oracle Grid Infrastructure releases before Oracle Grid Infrastructure 11g Release 2 (11.2), where the OCR and voting files are on raw or block devices, and you want to migrate these files to Oracle ASM rather than to a shared file system, then you must upgrade to Oracle Grid Infrastructure 11g Release 2 (11.2) before you upgrade to Oracle Grid Infrastructure 12c.
To upgrade existing Oracle Clusterware installations to a standard configuration Oracle Grid Infrastructure 12c cluster, your release must be greater than or equal to Oracle Clusterware 10g Release 1 (10.1.0.5), Oracle Clusterware 10g Release 2 (10.2.0.3), Oracle Grid Infrastructure 11g Release 1 (11.1.0.6), or Oracle Grid Infrastructure 11g Release 2 (11.2).
To upgrade existing Oracle Grid Infrastructure installations from Oracle Grid Infrastructure 11g Release 2 (11.2.0.2) to a later release, you must apply patch 11.2.0.2.3 (11.2.0.2 PSU 3) or later.
Do not delete directories in the Grid home. For example, do not delete the directory Grid_home/OPatch. If you delete the directory, then the Grid Infrastructure installation owner cannot use OPatch to patch the Grid home, and OPatch displays the error message "'checkdir' error: cannot create Grid_home/OPatch".
To upgrade existing Oracle Grid Infrastructure installations to Oracle Grid Infrastructure 12c Release 1 (12.1), you must first verify whether you need to apply any mandatory patches for the upgrade to succeed. We will use CVU to check this below.
Oracle Clusterware and Oracle ASM upgrades are always out-of-place upgrades. You cannot perform an in-place upgrade of Oracle Clusterware and Oracle ASM to existing homes.
If the existing Oracle Clusterware home is a shared home, note that you can use a non-shared home for the Oracle Grid Infrastructure for a cluster home for Oracle Clusterware and Oracle ASM 12c Release 1 (12.1).
The same user that owned the earlier release Oracle Grid Infrastructure software must perform the Oracle Grid Infrastructure 12c Release 1 (12.1) upgrade. Before Oracle Database 11g, either all Oracle software installations were owned by the Oracle user, typically oracle, or Oracle Database software was owned by oracle, and Oracle Clusterware software was owned by a separate user, typically crs.
Oracle ASM and Oracle Clusterware both run in the Oracle Grid Infrastructure home.
During a major release upgrade to Oracle Grid Infrastructure 12c Release 1 (12.1), the software in the 12c Release 1 (12.1) Oracle Grid Infrastructure home is not fully functional until the upgrade is completed. Running srvctl, crsctl, and other commands from the new Grid home is not supported until the final rootupgrade.sh script is run and the upgrade is complete across all nodes.
To manage databases in existing earlier release database homes during the Oracle Grid Infrastructure upgrade, use the srvctl from the existing database homes.
You can perform upgrades on a shared Oracle Clusterware home.
During Oracle Clusterware installation, if there is a single instance Oracle ASM release on the local node, then it is converted to a clustered Oracle ASM 12c Release 1 (12.1) installation, and Oracle ASM runs in the Oracle Grid Infrastructure home on all nodes.
If a single instance (non-clustered) Oracle ASM installation is on a remote node, which is a node other than the local node (the node on which the Oracle Grid Infrastructure installation is being performed), then it will remain a single instance Oracle ASM installation. However, during installation, if you select to place the Oracle Cluster Registry (OCR) and voting files on Oracle ASM, then a clustered Oracle ASM installation is created on all nodes in the cluster, and the single instance Oracle ASM installation on the remote node will become nonfunctional.
After completing the force upgrade of a cluster to a release, all inaccessible nodes must be deleted from the cluster or joined to the cluster before starting the cluster upgrade to a later release.


For each node, use Cluster Verification Utility to ensure that you have completed preinstallation steps. It can generate Fixup scripts to help you to prepare servers. In addition, the installer will help you to ensure all required prerequisites are met.

runcluvfy.sh stage -pre crsinst -upgrade [-rolling] -src_crshome src_Gridhome -dest_crshome dest_Gridhome -dest_version dest_release [-fixup][-method {sudo|root} [-location dir_path] [-user user_name]] [-verbose]

./runcluvfy.sh stage -pre crsinst -upgrade -n node1,node2 -rolling -src_crshome /data01/app/11.2.0/grid_11204 -dest_crshome /data01/app/12C/grid_121020 -dest_version 12.1.0.2.0 -fixup -fixupdir /home/grid/logs -verbose | tee /home/grid/logs/runcluvfy.out


OR

Download latest cluvfy and perform the pre checks for grid upgrade :

http://www.oracle.com/technetwork/database/options/clustering/downloads/index.html

Downloaded : cvupack_Linux_x86_64.zip

unzip cvupack_Linux_x86_64.zip -d /home/grid/cvu

[grid@node1 cvu]$  /home/grid/cvu/bin/cluvfy -version
12.1.0.1.0 Build 112713x8664

 /home/grid/cvu/bin/cluvfy stage -pre crsinst -upgrade -n node1,node2 -rolling -src_crshome $ORACLE_HOME -dest_crshome /data01/app/12C/grid_121020/ -dest_version 12.1.0.2.0  -fixup -fixupdir /tmp -verbose | tee /home/grid/11_to_12c_upgrade/cluvfy_pre_upgrade.log

[grid@node1 11_to_12c_upgrade]$ cat cluvfy_pre_upgrade.log |more

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "node1"
  Destination Node                      Reachable?            
  ------------------------------------  ------------------------
  node1                                 yes                    
  node2                                 yes                    
Result: Node reachability check passed from node "node1"


Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Status                
  ------------------------------------  ------------------------
  node2                                 passed                
  node1                                 passed                
Result: User equivalence check passed for user "grid"

Checking CRS user consistency
Result: CRS user consistency check successful

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status                
  ------------------------------------  ------------------------
  node2                                 passed            
  node1                                 passed                

Verification of the hosts config file successful


Interface information for node "node2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.72   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:ED 1500
 eth0   192.168.56.82   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:ED 1500
 eth0   192.168.56.91   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:ED 1500
 eth1   192.168.10.2    192.168.10.0    0.0.0.0         UNKNOWN         08:00:27:53:6F:4D 1500
 eth1   169.254.231.223 169.254.0.0     0.0.0.0         UNKNOWN         08:00:27:53:6F:4D 1500


Interface information for node "node1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.71   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:E6 1500
 eth0   192.168.56.93   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:E6 1500
 eth0   192.168.56.92   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:E6 1500
 eth0   192.168.56.81   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:E6 1500
 eth1   192.168.10.1    192.168.10.0    0.0.0.0         UNKNOWN         08:00:27:53:6F:46 1500
 eth1   169.254.204.171 169.254.0.0     0.0.0.0         UNKNOWN         08:00:27:53:6F:46 1500


Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  node2[192.168.56.72]            node2[192.168.56.82]            yes            
  node2[192.168.56.72]            node2[192.168.56.91]            yes            
  node2[192.168.56.72]            node1[192.168.56.71]            yes            
  node2[192.168.56.72]            node1[192.168.56.93]            yes            
  node2[192.168.56.72]            node1[192.168.56.92]            yes            
  node2[192.168.56.72]            node1[192.168.56.81]            yes            
  node2[192.168.56.82]            node2[192.168.56.91]            yes            
  node2[192.168.56.82]            node1[192.168.56.71]            yes            
  node2[192.168.56.82]            node1[192.168.56.93]            yes            
  node2[192.168.56.82]            node1[192.168.56.92]            yes            
  node2[192.168.56.82]            node1[192.168.56.81]            yes            
  node2[192.168.56.91]            node1[192.168.56.71]            yes            
  node2[192.168.56.91]            node1[192.168.56.93]            yes            
  node2[192.168.56.91]            node1[192.168.56.92]            yes            
  node2[192.168.56.91]            node1[192.168.56.81]            yes            
  node1[192.168.56.71]            node1[192.168.56.93]            yes            
  node1[192.168.56.71]            node1[192.168.56.92]            yes            
  node1[192.168.56.71]            node1[192.168.56.81]            yes            
  node1[192.168.56.93]            node1[192.168.56.92]            yes            
  node1[192.168.56.93]            node1[192.168.56.81]            yes            
  node1[192.168.56.92]            node1[192.168.56.81]            yes            
Result: Node connectivity passed for interface "eth0"


Check: TCP connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  node1:192.168.56.71             node2:192.168.56.72             passed        
  node1:192.168.56.71             node2:192.168.56.82             passed        
  node1:192.168.56.71             node2:192.168.56.91             passed        
  node1:192.168.56.71             node1:192.168.56.92             passed        
  node1:192.168.56.71             node1:192.168.56.81             passed        
Result: TCP connectivity check passed for subnet "192.168.56.0"


Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  node2[192.168.10.2]             node1[192.168.10.1]             yes            
Result: Node connectivity passed for interface "eth1"


Check: TCP connectivity of subnet "192.168.10.0"
  Source                          Destination                     Connected?    
  ------------------------------  ------------------------------  ----------------
  node1:192.168.10.1              node2:192.168.10.2              passed        
Result: TCP connectivity check passed for subnet "192.168.10.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "192.168.10.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking OCR integrity...

OCR integrity check passed

Checking ASMLib configuration.
  Node Name                             Status                
  ------------------------------------  ------------------------
  node2                                 passed                
  node1                                 passed                
Result: Check for ASMLib configuration passed.

Check: Total memory
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         1.9598GB (2055048.0KB)    1.5GB (1572864.0KB)       passed  
  node1         1.9598GB (2055048.0KB)    1.5GB (1572864.0KB)       passed  
Result: Total memory check passed

Check: Available memory
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         733.3867MB (750988.0KB)   50MB (51200.0KB)          passed  
  node1         742.1133MB (759924.0KB)   50MB (51200.0KB)          passed  
Result: Available memory check passed

Check: Swap space
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         1.7578GB (1843196.0KB)    2.9398GB (3082572.0KB)    failed  
  node1         1.7578GB (1843196.0KB)    2.9398GB (3082572.0KB)    failed  
Result: Swap space check failed

Check: Free disk space for "node2:/data01/app/12C/grid_121020/,node2:/tmp"
  Path              Node Name     Mount point   Available     Required      Status    
  ----------------  ------------  ------------  ------------  ------------  ------------
  /data01/app/12C/grid_121020/  node2         /             8.8936GB      7.5GB         passed    
  /tmp              node2         /             8.8936GB      7.5GB         passed    
Result: Free disk space check passed for "node2:/data01/app/12C/grid_121020/,node2:/tmp"

Check: Free disk space for "node1:/data01/app/12C/grid_121020/,node1:/tmp"
  Path              Node Name     Mount point   Available     Required      Status    
  ----------------  ------------  ------------  ------------  ------------  ------------
  /data01/app/12C/grid_121020/  node1         /             6.709GB       7.5GB         failed    
  /tmp              node1         /             6.709GB       7.5GB         failed    
Result: Free disk space check failed for "node1:/data01/app/12C/grid_121020/,node1:/tmp"

Check: User existence for "grid"
  Node Name     Status                    Comment                
  ------------  ------------------------  ------------------------
  node2         passed                    exists(201)            
  node1         passed                    exists(201)            

Checking for multiple users with UID value 201
Result: Check for multiple users with UID value 201 passed
Result: User existence check passed for "grid"
Check: Group existence for "oinstall"
  Node Name     Status                    Comment                
  ------------  ------------------------  ------------------------
  node2         passed                    exists                
  node1         passed                    exists                
Result: Group existence check passed for "oinstall"

Check: Membership of user "grid" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Status    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             yes           yes           yes           yes           passed    
  node1             yes           yes           yes           yes           passed    
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

Check: Run level
  Node Name     run level                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         5                         3,5                       passed  
  node1         5                         3,5                       passed  
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status        
  ----------------  ------------  ------------  ------------  ----------------
  node2             hard          65536         65536         passed        
  node1             hard          65536         65536         passed        
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status        
  ----------------  ------------  ------------  ------------  ----------------
  node1             soft          65536         1024          passed        
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
  Node Name         Type          Available     Required      Status        
  ----------------  ------------  ------------  ------------  ----------------
  node2             hard          16384         16384         passed        
  node1             hard          16384         16384         passed        
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
  Node Name         Type          Available     Required      Status        
  ----------------  ------------  ------------  ------------  ----------------
  node2             soft          16384         2047          passed        
  node1             soft          16384         2047          passed        
Result: Soft limits check passed for "maximum user processes"

There are no oracle patches required for home "/data01/app/11.2.0/grid_11204".

There are no oracle patches required for home "/data01/app/12C/grid_121020/".

Check: System architecture
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         x86_64                    x86_64                    passed  
  node1         x86_64                    x86_64                    passed  
Result: System architecture check passed

Check: Kernel version
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         2.6.39-400.17.1.el6uek.x86_64  2.6.32                    passed  
  node1         2.6.39-400.17.1.el6uek.x86_64  2.6.32                    passed  
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             250           250           250           passed        
  node1             250           250           250           passed        
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             32000         32000         32000         passed        
  node1             32000         32000         32000         passed        
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             100           100           100           passed        
  node1             100           100           100           passed        
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             128           128           128           passed        
  node1             128           128           128           passed        
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             4294967295    4294967295    1052184576    passed        
  node1             4294967295    4294967295    1052184576    passed        
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             4096          4096          4096          passed        
  node1             4096          4096          4096          passed        
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             2097152       2097152       2097152       passed        
  node1             2097152       2097152       2097152       passed        
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             6815744       6815744       6815744       passed        
  node1             6815744       6815744       6815744       passed        
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed        
  node1             between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed        
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             262144        262144        262144        passed        
  node1             262144        262144        262144        passed        
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             4194304       4194304       4194304       passed        
  node1             4194304       4194304       4194304       passed        
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             262144        262144        262144        passed        
  node1             262144        262144        262144        passed        
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             1048576       1048576       1048576       passed        
  node1             1048576       1048576       1048576       passed        
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
  Node Name         Current       Configured    Required      Status        Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             1048576       1048576       1048576       passed        
  node1             1048576       1048576       1048576       passed        
Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "binutils"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed  
  node1         binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed  
Result: Package existence check passed for "binutils"

Check: Package existence for "compat-libcap1"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         compat-libcap1-1.10-1     compat-libcap1-1.10       passed  
  node1         compat-libcap1-1.10-1     compat-libcap1-1.10       passed  
Result: Package existence check passed for "compat-libcap1"

Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed  
  node1         compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed  
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "libgcc(x86_64)"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4      passed  
  node1         libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4      passed  
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4   passed  
  node1         libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4   passed  
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed  
  node1         libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed  
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         sysstat-9.0.4-20.el6      sysstat-9.0.4             passed  
  node1         libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed  
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         sysstat-9.0.4-20.el6      sysstat-9.0.4             passed  
  node1         sysstat-9.0.4-20.el6      sysstat-9.0.4             passed  
Result: Package existence check passed for "sysstat"

Check: Package existence for "gcc"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         gcc-4.4.7-3.el6           gcc-4.4.4                 passed  
  node1         gcc-4.4.7-3.el6           gcc-4.4.4                 passed  
Result: Package existence check passed for "gcc"

Check: Package existence for "gcc-c++"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         gcc-c++-4.4.7-3.el6       gcc-c++-4.4.4             passed  
  node1         gcc-c++-4.4.7-3.el6       gcc-c++-4.4.4             passed  
Result: Package existence check passed for "gcc-c++"

Check: Package existence for "ksh"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         ksh-20100621-19.el6       ksh-20100621              passed  
  node1         ksh-20100621-19.el6       ksh-20100621              passed  
Result: Package existence check passed for "ksh"
Check: Package existence for "make"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         make-3.81-20.el6          make-3.81                 passed  
  node1         make-3.81-20.el6          make-3.81                 passed  
Result: Package existence check passed for "make"

Check: Package existence for "glibc(x86_64)"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12        passed  
  node1         glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12        passed  
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "glibc-devel(x86_64)"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed  
  node1         glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed  
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "libaio(x86_64)"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed  
  node1         libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed  
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed  
  node1         libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed  
Result: Package existence check passed for "libaio-devel(x86_64)"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  Node Name                             Status                
  ------------------------------------  ------------------------
  node2                                 passed                
  node1                                 passed                

Check for consistency of root user's primary group passed

Check: Package existence for "cvuqdisk"
  Node Name     Available                 Required                  Status  
  ------------  ------------------------  ------------------------  ----------
  node2         cvuqdisk-1.0.9-1          cvuqdisk-1.0.9-1          passed  
  node1         cvuqdisk-1.0.9-1          cvuqdisk-1.0.9-1          passed  
Result: Package existence check passed for "cvuqdisk"

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes

Checking daemon liveness...

Check: Liveness for "ntpd"
  Node Name                             Running?              
  ------------------------------------  ------------------------
  node2                                 yes                    
  node1                                 yes                    
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
  Node Name                             Slewing Option Set?    
  ------------------------------------  ------------------------
  node2                                 no                    
  node1                                 no                    
Result:
NTP daemon slewing option check failed on some nodes
PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option "-x"
Result: Clock synchronization check using Network Time Protocol(NTP) failed

Checking Core file name pattern consistency...
Core file name pattern consistency check passed.

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment                
  ------------  ------------------------  ------------------------
  node2         passed                    does not exist        
  node1         passed                    does not exist        
Result: User "grid" is not part of "root" group. Check passed

Check default user file creation mask
  Node Name     Available                 Required                  Comment  
  ------------  ------------------------  ------------------------  ----------
  node2         0022                      0022                      passed  
  node1         0022                      0022                      passed  
Result: Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "node2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
  Node Name                             Status                
  ------------------------------------  ------------------------
  node2                                 failed                
  node1                                 failed                
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: node2,node1

File "/etc/resolv.conf" is not consistent across nodes

UDev attributes check for OCR locations started...
Result: UDev attributes check passed for OCR locations


UDev attributes check for Voting Disk locations started...
Result: UDev attributes check passed for Voting Disk locations

Check: Time zone consistency
Result: Time zone consistency check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Checking Oracle Cluster Voting Disk configuration...

ASM Running check passed. ASM is running on all specified nodes

Oracle Cluster Voting Disk configuration check passed

Clusterware version consistency passed

Starting check for Reverse path filter setting ...
Reverse path filter setting is correct for all private interconnect network interfaces on node "node2.localdomain".
Reverse path filter setting is correct for all private interconnect network interfaces on node "node1.localdomain".

Check for Reverse path filter setting passed

Pre-check for cluster services setup was unsuccessful on all the nodes.

In our case the failed checks above can be ignored because this is only a test/demo installation: the swap space shortfall and the missing NTP slewing option are acceptable here (on a production system fix them instead; a sketch for the NTP slewing option follows the grep output below). The free disk space failure on node1 is still worth addressing before the install, since the new Grid home needs about 7.5 GB.

[grid@node1 11_to_12c_upgrade]$ cat cluvfy_pre_upgrade.log |grep -i failed
  node2         1.7578GB (1843196.0KB)    2.9398GB (3082572.0KB)    failed  
  node1         1.7578GB (1843196.0KB)    2.9398GB (3082572.0KB)    failed  
Result: Swap space check failed
  /data01/app/12C/grid_121020/  node1         /             6.709GB       7.5GB         failed    
  /tmp              node1         /             6.709GB       7.5GB         failed    
Result: Free disk space check failed for "node1:/data01/app/12C/grid_121020/,node1:/tmp"
NTP daemon slewing option check failed on some nodes
Result: Clock synchronization check using Network Time Protocol(NTP) failed
  node2                                 failed                
  node1                                 failed                
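
If the NTP slewing failure had to be fixed rather than ignored, on OL/RHEL 6 (the el6uek kernel above) it is typically done by adding the -x option to the ntpd startup options on every node; a sketch, assuming the default config file location:

# /etc/sysconfig/ntpd  (edit as root on each node)
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# then restart the daemon
service ntpd restart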

[root@node1 data01]# cat /proc/sys/kernel/panic_on_oops
1
[root@node1 data01]# ssh node2 cat /proc/sys/kernel/panic_on_oops
root@node2's password:
1
[root@node1 data01]#


Unset Oracle Environment Variables
Known Issue :Environment Variable ORA_CRS_HOME MUST be UNSET in 11gR2/12c GI (Doc ID 1502996.1)

For the installation owner running the installation, if you have environment variables set for the existing installation, then unset the environment variables $ORACLE_HOME and $ORACLE_SID, as these environment variables are used during upgrade. For example:

unset ORACLE_HOME
unset ORACLE_BASE
unset ORACLE_SID
unset ORA_CRS_HOME

If you have set ORA_CRS_HOME as an environment variable, following instructions from Oracle Support, then unset it before starting an installation or upgrade. You should never use ORA_CRS_HOME as an environment variable except under explicit direction from Oracle Support.
Check to ensure that installation owner login shell profiles (for example, .profile or .cshrc) do not have ORA_CRS_HOME set.
If you have had an existing installation on your system, and you are using the same user account to install this installation, then unset the following environment variables: ORA_CRS_HOME; ORACLE_HOME; ORA_NLS10; TNS_ADMIN; and any other environment variable set for the Oracle installation user that is connected with Oracle software homes.
Also, ensure that the $ORACLE_HOME/bin path is removed from your PATH environment variable.
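
A quick sketch to confirm nothing from the old Grid home is still set for the installation owner before launching the installer (the profile file names below are assumptions; check whichever files your login shell actually sources):

env | grep -iE 'ORA_CRS_HOME|ORACLE_HOME|ORACLE_BASE|ORACLE_SID|ORA_NLS10|TNS_ADMIN'
grep -i ORA_CRS_HOME ~/.bash_profile ~/.profile 2>/dev/null
echo $PATH | tr ':' '\n' | grep grid_11204     # the old Grid home bin should not appear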


Check for some known issues :

NOTE:1917543.1 - FAILS TO START ORA.CSSD WHEN UPGRADING GRID 12C ON NODE 1
Environment Variable ORA_CRS_HOME MUST be UNSET in 11gR2/12c GI (Doc ID 1502996.1)
NOTE:1918426.1 - 12.1.0.2 root script fails to start ora.ctssd if nodes name length are not the same
NOTE:1922908.1 - 12.1.0.2 GI: oratab being wrongly modified after instance restarts
NOTE:19185876.8 - Bug 19185876 - ORA-600 [kjshash:!mhint] from ASM LMON process during rolling upgrade from 11.2 to 12c
NOTE:1917917.1 - 12c GI rootupgrade.sh Fails on First Node With ORA-01034 if Node Number Starts From 0
NOTE:1580360.1 - GI 12c/12.1.0.x rootupgrade.sh fails: PRCR-1065 : Failed to stop resource ora.gsd
CLSRSC-507: The root script cannot proceed on this node <node-n> because either the first-node operations have not completed on node <node-1> or there was an error in obtaining the status of the first-node operations. (Doc ID 1919825.1)
GI Upgrade from 11.2.0.3.6+ to 11.2.0.4 or 12.1.0.1 Fails with User(qosadmin) is deactivated. AUTH FAILURE. (Doc ID 1577072.1)


START THE RUNINSTALLER :

unset ORACLE_HOME
unset ORACLE_BASE
unset ORACLE_SID
unset ORA_CRS_HOME


[grid@node1 grid]$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 415 MB.   Actual 6544 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 1519 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-03-31_09-51-28AM. Please wait ...

Execute rootupgrade.sh on node1 and verify that the Grid software on node1 now reports the upgraded version and that all of its services are up and running.

Then execute rootupgrade.sh on node2 and confirm that the version is upgraded across the cluster (a sketch of this step follows):
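
A sketch of this step, run as root from the new 12c Grid home, one node at a time (wait for node1 to finish before starting node2), with the post-checks run as the grid user:

# as root on node1, then on node2 once node1 has completed
/data01/app/12C/grid_121020/rootupgrade.sh

# as grid, after each node completes
crsctl check cluster -all
crsctl stat res -t
crsctl query crs activeversion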

[grid@node2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[grid@node2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [node2] is [12.1.0.2.0]
[grid@node2 ~]$ crsctl query crs softwareversion node1
Oracle Clusterware version on node [node1] is [12.1.0.2.0]
[grid@node2 ~]$ crsctl query crs activeversion node1
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

Manual Single database upgrade



imp pain FROMUSER=WMC TOUSER=WMC FILE=td46562_dev_exp.dmp log=dev_td46562_imp.log  feedback=10000 ignore=Y
create or replace directory dpump as '/backup/CH130522_2_aug/TestDB1/export';
grant read,write on directory dpump to public;

select sum(bytes)/1024/1024/1024 "Total segments size in GB" from dba_segments;

select sum(bytes)/1024/1024/1024 "Total DB size in GB"  from dba_data_files;
select sum(bytes/1024/1024/1024) tbspc_gb from dba_temp_files;


expdp \'/ as sysdba \'  dumpfile=dpump_TestDB1_01aug.DMP logfile=dpump_TestDB1_01aug.log directory=dpump full=y parallel=4
expdp \'/ as sysdba \'  dumpfile=dpump_obi01dev_1aug.DMP logfile=dpump_obi01dev_1aug.log directory=DATA_PUMP_DIR full=y parallel=4

expdp system/Summer_2011  dumpfile=expdp_GCRSTMR5.DMP logfile=expdp_GCRSTMR5_full.log directory=dppump_dir full=y estimate=statistics


nohup expdp \'/ as sysdba \' directory=dpump schemas=ADAPTIVDATALOAD dumpfile=STDB0001_ADAPTIVDATALOAD_25OCT2012.dmp logfile=STDB0001_ADAPTIVDATALOAD_25OCT2012.log &

nohup expdp \'/ as sysdba \' directory=dpump schemas=ADAPTIVDATALOAD dumpfile=STDB0001_ADAPTIVDATALOAD_25OCT2012.dmp logfile=STDB0001_ADAPTIVDATALOAD_25OCT2012.log &

expdp system directory=EXP_DP2 dumpfile=INC000006658485_23feb_1.dmp,INC000006658485_23feb_2.dmp logfile =INC000006658485_23feb.log schemas =eeevs01 job_name =INC000006658485_23feb_12_new3


Directory created

set pagesize 1000
set linesize 120
column directory_name format a25
column directory_path format a70
select directory_name directory_name, directory_path directory_path from dba_directories order by directory_name
/

1)  Take a snapshot of the running database state as the database owner. Go to the change folder and spool the following pre-requisite information:
        spool invalid_obj_before_CPUJul2014_crd01dev.log;
        set linesize 200 
        select * from v$instance
        /
        select sysdate from dual;
        select r.comp_name,r.version,r.status from dba_registry r;
        col object_name for a30;
        select owner,object_name,status from dba_objects where status !='VALID';
        SELECT * FROM registry$history where bundle_series like 'CPU%';
        spool off

b) Inventory backup


Server:TestDB2:/home/oracle/Patch/CH129614 $mkdir oracle_inventry
Server:TestDB2:/home/oracle/Patch/CH129614 $cd oracle_inventry
Server:TestDB2:/home/oracle/Patch/CH129614/oracle_inventry $cp -pr /oracle/oraInventory/* .
cp: /oracle/oraInventory/logs/OPatch2014-04-03_02-16-16-PM.log: Permission denied
Server:TestDB2:/home/oracle/Patch/CH129614/oracle_inventry $ls -lrt



Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/OPatch $$ORACLE_HOME/OPatch/opatch lsinventory -details >d_inventory_CRD01DEV_30Jul2014.log
or

$ORACLE_HOME/OPatch/opatch lsinventory >inventory_CRD01DEV_30Jul2014.log

 alter database backup controlfile to trace as '/backup/CH130522_2_aug/FUS06DEV/ctl_CRD01DEV_02Aug2014.trc';
 Take a backup of the spfile:
create pfile='/backup/CH130522_2_aug/FUS06DEV/PFILE_FUS06DEV_02Aug2014.ora' from spfile;



2) Take a backup of the Oracle Home binaries and the database after stopping the database.
    Make a directory named after the change number and take the backup of the home directory as below:
  a) tar -cvf HOME_TestDB1_112010.tar /oracle/product/TestDB1/11.2.0.1   -- Home backup
    
     tar -cvf HOME_fus07dev_2aug.tar /oracle/product/fus07dev/11.2.0.4
      tar -cvf HOME_TestDB1_2aug.tar /oracle/product/TestDB1/11.2.0.1
  tar -cvf HOME_obi02dev_2aug.tar /oracle/product/obi02dev/11.2.0.4

  b) full database backup.
     select NAME  from v$controlfile;
     select FILE_NAME from dba_temp_files;
     select  name from v$datafile;
     select  MEMBER from   v$logfile;


2.1.3 Validation of Oracle Inventory
Before beginning patch application, check the consistency of inventory information for GI home and each database home to be patched. Run the following command as respective Oracle home owner to check
the consistency.
$ <ORACLE_HOME>/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>
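
For example, for the database home patched in the transcripts below (path illustrative):

/oracle/product/TestDB2/11.2.0.4/OPatch/opatch lsinventory -detail -oh /oracle/product/TestDB2/11.2.0.4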


3) unzip july patch

Server:TestDB1:/oracle/product/stage $cd july2014patch
Server:TestDB1:/oracle/product/stage/july2014patch $ls -ltr
total 90320
-r--r--r--    1 oracle   dba        46241073 Jul 25 13:22 p18522509_112040_AIX64-5L.zip
Server:TestDB1:/oracle/product/stage/july2014patch $ unzip p18522509_112040_AIX64-5L.zip



4)

Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/OPatch $$ORACLE_HOME/OPatch/opatch version
OPatch Version: 11.2.0.3.0

$ORACLE_HOME/OPatch/opatch lsinventory


OPatch succeeded.
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/OPatch $opatch lsinventory -details


Server:TestDB2:/home/oracle/Patch $mkdir CH129614
Server:TestDB2:/home/oracle/Patch $cd CH129614
Server:TestDB2:/home/oracle/Patch/CH129614 $history -1 8
388     cd OPatch
389     opatch lsinventory -detail
390     cd
391     mkdir Patch
392     stty erase
393     cd Patch
394     mkdir CH129614
395     cd CH129614
396     history -8
Server:TestDB2:/home/oracle/Patch/CH129614 $opatch lsinventory -detail

5) Go to the patch directory



    Ensure that the $PATH definition has the following executables: make, ar, ld, and nm.

    The location of these executables depends on your operating system. On many operating systems, they are located in /usr/ccs/bin, in which case you can set your PATH definition as follows:

    export PATH=$PATH:/usr/ccs/bin
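
    A quick way to verify those executables are resolvable once PATH is adjusted (their location varies by operating system):

    which make ar ld nm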

Determine whether any currently installed one-off patches conflict with the PSU patch as follows:

Server:TestDB1:/oracle/product/TestDB1 $ls -ltr
total 16
drwxr-xr-x   76 oracle   dba            4096 Jan 11 2013  11.2.0.1
drwxr-xr-x   78 oracle   dba            4096 Jul  5 09:15 11.2.0.4
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4 $export ORACLE_HOME=/oracle/product/TestDB2/11.2.0.4
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4 $export LD_LIBRARY_PATH=/oracle/product/TestDB2/11.2.0.4/lib
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4 $export LD_LIBRARY_PATH=$ORACLE_HOME/lib
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4 $export PATH=$PATH:$ORACLE_HOME/bin
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4 $which Opatch
no Opatch in /usr/bin /etc /usr/sbin /usr/ucb /usr/bin/X11 /sbin /usr/java14_64/jre/bin /usr/java5/jre/bin /usr/java5/bin /opt/IBM/cmdb/nmap-4.76 /usr/local/bin /usr/bin /etc /usr/sbin /usr/ucb /home/oracle/bin /usr/bin/X11 /sbin . /usr/vacpp/bin /bin /usr/local/bin /usr/eco/bin /oracle/product/TestDB2/11.2.0.1/bin /oracle/local/bin /oracle/product/TestDB2/11.2.0.4/bin
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4 $$ORACLE_HOME/OPatch/opatch version
OPatch Version: 11.2.0.3.6


unzip p18522509_112040_<platform>.zip
cd 18522509
opatch prereq CheckConflictAgainstOHWithDetail -ph ./


Server:TestDB2:/oracle/product/stage/july2014patch $cd 18522509
Server:TestDB2:/oracle/product/stage/july2014patch/18522509 $echo $ORACLE_HOME    --- check that this is the correct home (the latest home, on which the patch is being applied)
/oracle/product/TestDB2/11.2.0.4
Server:TestDB2:/oracle/product/stage/july2014patch/18522509 $opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /oracle/product/TestDB2/11.2.0.4
Central Inventory : /oracle/oraInventory
   from           : /oracle/product/TestDB2/11.2.0.4/oraInst.loc
OPatch version    : 11.2.0.3.6
OUI version       : 11.2.0.4.0
Log file location : /oracle/product/TestDB2/11.2.0.4/cfgtoollogs/opatch/opatch2014-07-26_07-31-20AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
Server:TestDB2:/oracle/product/stage/july2014patch/18522509 $opatch apply
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation.  All rights reserved.


Oracle Home       : /oracle/product/TestDB2/11.2.0.4
Central Inventory : /oracle/oraInventory
   from           : /oracle/product/TestDB2/11.2.0.4/oraInst.loc
OPatch version    : 11.2.0.3.6
OUI version       : 11.2.0.4.0
Log file location : /oracle/product/TestDB2/11.2.0.4/cfgtoollogs/opatch/opatch2014-07-26_07-32-24AM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   18031668  18522509

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  y



Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/oracle/product/TestDB2/11.2.0.4')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying sub-patch '18031668' to OH '/oracle/product/TestDB2/11.2.0.4'

Patching component oracle.rdbms, 11.2.0.4.0...

Patching component oracle.rdbms.rsf, 11.2.0.4.0...

Patching component oracle.ldap.rsf, 11.2.0.4.0...

Patching component oracle.rdbms.crs, 11.2.0.4.0...

Patching component oracle.precomp.common, 11.2.0.4.0...

Patching component oracle.ldap.rsf.ic, 11.2.0.4.0...

Patching component oracle.rdbms.deconfig, 11.2.0.4.0...

Patching component oracle.rdbms.dbscripts, 11.2.0.4.0...

Patching component oracle.rdbms.rman, 11.2.0.4.0...



============

    Log in to the system as the owner of the Oracle Database 11g Release 2 (11.2) Oracle home directory.
    Copy the Pre-Upgrade Information Tool (utlu112i.sql) and utltzuv2.sql from the Oracle Database 11g Release 2 (11.2) ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home, such as the temporary directory on your system.


$ORACLE_HOME/rdbms/admin/utlu112i.sql
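
A minimal sketch of the copy step, assuming $ORACLE_HOME points at the new release home and /tmp/preupgrade is used as the working directory (any directory outside the Oracle home will do):

mkdir -p /tmp/preupgrade
cp $ORACLE_HOME/rdbms/admin/utlu112i.sql /tmp/preupgrade/
cp $ORACLE_HOME/rdbms/admin/utltzuv2.sql /tmp/preupgrade/
cd /tmp/preupgrade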

2. Execute Pre Upgrade Script:

Change to the directory where utlu112i.sql was copied in the previous step. Start SQL*Plus and connect to the database instance as a user with SYSDBA privileges, then run and spool the utlu112i.sql file. Note that the database should be started using the source Oracle home.

$ sqlplus '/ as sysdba'
SQL> spool upgrade_info.log
SQL> @utlu112i.sql
SQL> spool off
SQL>

or

Execute utlu112i.sql script from  the new 11gR2 (11.2.0.2) Home:

SQL> spool preupgradeinfo.txt
SQL> @<New 11gR2 ORACLE_HOME>/rdbms/admin/utlu112i.sql
SQL> spool off
SQL> exit

Review preupgradeinfo.txt and fix the warnings from the output.
Review /tmp/upgrade.spl and fix the warnings from the output.
or

STEP  3
======
After installing the latest 11.2 RDBMS software, start the 11.2.0.1 instance with the previous ORACLE_HOME and spool/run the 11.2.0.2 $ORACLE_HOME/rdbms/admin/utlu112i.sql script against the running, previous instance.

Running the Pre-Upgrade Information Tool is mandatory whether you are upgrading with DBUA or manually. Otherwise you can expect to encounter the error:

12. Upgrade Database:

At the operating system prompt, change to the $ORACLE_HOME/rdbms/admin directory of the 11gR2 Oracle home (using the new ORACLE_HOME):

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> startup UPGRADE

startup upgrade  pfile='/oracle/product/TestDB1/11.2.0.1/dbs/initTestDB1.ora'

Set the system to spool results to a log file for later verification after the upgrade is completed and start the upgrade script.

SQL> set echo on
SQL> SPOOL upgrade.log
SQL> @catupgrd.sql
SQL> spool off
eg

SQL> SPOOL upgrade.log
SQL> @catupgrd.sql
SQL> Rem
SQL> Rem $Header: rdbms/admin/catupgrd.sql /st_rdbms_11.2.0/3 2011/05/18 15:07:25 cmlim Exp $
SQL> Rem
SQL> Rem catupgrd.sql
SQL> Rem
SQL> Rem Copyright (c) 1999, 2011, Oracle and/or its affiliates.
SQL> Rem All rights reserved.
SQL> Rem
SQL> Rem    NAME
SQL> Rem         catupgrd.sql - CATalog UPGraDe to the new release
SQL> Rem
SQL> Rem    DESCRIPTION
SQL> Rem        This script is to be used for upgrading a 9.2, 10.1 or 10.2
SQL> Rem        database to the new release.  This script provides a direct




These measures are an important final step to ensure the integrity and consistency of the newly upgraded Oracle Database software. Also, if you encountered a message listing obsolete initialization parameters when you started the database for upgrade, then remove the obsolete initialization parameters from the parameter file before restarting. If necessary, convert the SPFILE to a PFILE so you can edit the file to delete parameters.
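A minimal sketch of that conversion (the PFILE path is only an example; rebuild the SPFILE only while the instance is down so the file in use is not overwritten):

SQL> CREATE PFILE='/tmp/initTestDB2_edit.ora' FROM SPFILE;
SQL> -- edit the PFILE to remove the obsolete parameters, then optionally:
SQL> CREATE SPFILE FROM PFILE='/tmp/initTestDB2_edit.ora';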

eg:

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to an idle instance.

startup pfile='/oracle/product/TestDB1/11.2.0.1/dbs/initTestDB1.ora'

SQL> startup  pfile='/oracle/product/TestDB2/11.2.0.1/dbs/initTestDB2.ora';
ORACLE instance started.

Total System Global Area 3423965184 bytes
Fixed Size                  2251168 bytes
Variable Size            1845495392 bytes
Database Buffers         1426063360 bytes
Redo Buffers              150155264 bytes
Database mounted.
Database opened.
SQL> set echo on





3.3.2 Loading Modified SQL Files into the Database

The following steps load modified SQL files into the database. For an Oracle RAC environment, perform these steps on only one node.

    For each database instance running on the Oracle home being patched, connect to the database using SQL*Plus. Connect as SYSDBA and run the catbundle.sql script as follows:

    cd $ORACLE_HOME/rdbms/admin
    sqlplus /nolog
    SQL> CONNECT / AS SYSDBA
    SQL> STARTUP
    SQL> @catbundle.sql psu apply
    SQL> QUIT

    The catbundle.sql execution is reflected in the dba_registry_history view by a row associated with bundle series PSU.
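    A quick way to confirm this (a simple query sketch against the standard view):

    SQL> SELECT action_time, action, version, bundle_series, comments FROM dba_registry_history ORDER BY action_time;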

    For information about the catbundle.sql script, see My Oracle Support Document 605795.1 Introduction to Oracle Database catbundle.sql.

    Check the following log files in $ORACLE_HOME/cfgtoollogs/catbundle or $ORACLE_BASE/cfgtoollogs/catbundle for any errors:

    catbundle_PSU_<database SID>_APPLY_<TIMESTAMP>.log
    catbundle_PSU_<database SID>_GENERATE_<TIMESTAMP>.log

    where TIMESTAMP is of the form YYYYMMMDD_HH_MM_SS. If there are errors, investigate and resolve them before continuing.
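    A quick way to scan the apply log for errors (a sketch using the log location shown above; check $ORACLE_BASE/cfgtoollogs/catbundle as well if the logs are not under the home):

    $ grep -i error $ORACLE_HOME/cfgtoollogs/catbundle/catbundle_PSU_*_APPLY_*.log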


-- Start the database from the new home.
-- Delete the old links for the spfile and password file and recreate them under the new home.
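A minimal sketch of recreating those links (using the TestDB2 paths from the session below; adjust the SID and admin directory for your environment):

cd /oracle/product/TestDB2/11.2.0.4/dbs
ln -s /oracle/admin/TestDB2/pfile/initTestDB2.ora initTestDB2.ora
ln -s /oracle/admin/TestDB2/pfile/spfileTestDB2.ora spfileTestDB2.ora
ln -s /oracle/admin/TestDB2/pfile/orapwTestDB2 orapwTestDB2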
 

SQL> SPOOL off
SQL> SET echo off
Check the following log file for errors:
/oracle/cfgtoollogs/catbundle/catbundle_PSU_DPCPPT_APPLY_2015Jan05_17_51_04.log
SQL>
SQL>
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2015-01-05 17:52:12


Verification
-------------------
1)
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> @utlrp.sql

@$ORACLE_HOME/rdbms/admin/utlrp.sql

2) change the oratab entry after taking a backup of it
3) update the listener entry with the latest ORACLE_HOME and reload the listener
4) gather full database statistics (see the sketch after this list)
5) check remote connectivity, confirm which user the database was started as, and send a confirmation mail to the customer with a screenshot
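A minimal sketch for steps 4 and 5 (assuming TestDB2 is the SID/TNS alias used in this note and that the default DBMS_STATS settings are acceptable):

SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS;

$ tnsping TestDB2                # confirm the listener resolves the service
$ sqlplus system@TestDB2         # confirm a remote (TNS) connection works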


SQL> startup pfile='/oracle/product/TestDB2/11.2.0.1/dbs/initTestDB2.ora';

SQL> @utlu112s.sql
.
Oracle Database 11.2 Post-Upgrade Status Tool           07-26-2014 10:05:47
.
Component                               Current      Version     Elapsed Time
Name                                    Status       Number      HH:MM:SS
.
Oracle Server
.                                         VALID      11.2.0.4.0  00:19:39
JServer JAVA Virtual Machine
.                                         VALID      11.2.0.4.0  00:06:58
Oracle Workspace Manager
.                                         VALID      11.2.0.4.0  00:01:11
OLAP Analytic Workspace
.                                         VALID      11.2.0.4.0  00:01:36
OLAP Catalog
.                                         VALID      11.2.0.4.0  00:01:28
Oracle OLAP API
.                                         VALID      11.2.0.4.0  00:00:36
Oracle Enterprise Manager
.                                         VALID      11.2.0.4.0  00:12:42
Oracle XDK
.                                         VALID      11.2.0.4.0  00:01:10
Oracle Text
.                                         VALID      11.2.0.4.0  00:00:50
Oracle XML Database
.                                         VALID      11.2.0.4.0  00:03:47
Oracle Database Java Packages
.                                         VALID      11.2.0.4.0  00:00:20
Oracle Multimedia
.                                         VALID      11.2.0.4.0  00:03:29
Spatial
.                                         VALID      11.2.0.4.0  00:09:00
Oracle Expression Filter
.                                         VALID      11.2.0.4.0  00:00:12
Oracle Rules Manager
.                                         VALID      11.2.0.4.0  00:00:14
Oracle Application Express
.                                         VALID     3.2.1.00.10
Final Actions
.                                                                00:00:00
Total Upgrade Time: 01:03:22

PL/SQL procedure successfully completed.


SQL> SPOOL off
SQL> SET echo off
Check the following log file for errors:
/oracle/cfgtoollogs/catbundle/catbundle_PSU_TestDB2_APPLY_2014Jul26_10_07_14.log
SQL>



Server:TestDB2:/home/oracle $cd /oracle/product/TestDB2/11.2.0.1/dbs
Server:TestDB2:/oracle/product/TestDB2/11.2.0.1/dbs $ls -lrt
total 24
-rw-r--r--    1 oracle   dba            2851 May 15 2009  init.ora
lrwxrwxrwx    1 oracle   dba              45 Sep  4 2012  initTestDB2.ora -> /oracle/admin/TestDB2/pfile/initTestDB2.ora
-rw-r-----    1 oracle   dba              24 Sep  4 2012  lkTestDB2
lrwxrwxrwx    1 oracle   dba              47 Sep  5 2012  spfileTestDB2.ora -> /oracle/admin/TestDB2/pfile/spfileTestDB2.ora
lrwxrwxrwx    1 oracle   dba              42 Sep  5 2012  orapwTestDB2 -> /oracle/admin/TestDB2/pfile/orapwTestDB2
-rw-rw----    1 oracle   dba            1544 Jul 26 06:47 hc_TestDB2.dat
===============
Server:TestDB2:/oracle/product/TestDB2/11.2.0.1/dbs $echo $ORACLE_HOME
/oracle/product/TestDB2/11.2.0.4
Server:TestDB2:/oracle/product/TestDB2/11.2.0.1/dbs $

cd /oracle/product/TestDB2/11.2.0.1/dbs
rm initTestDB2.ora
cd /oracle/product/TestDB2/11.2.0.4/dbs
ln -s /oracle/admin/TestDB2/pfile/initTestDB2.ora initTestDB2.ora



Server:TestDB2:/oracle/product/TestDB2/11.2.0.1/dbs $cp -p initTestDB2.ora initTestDB2.ora_26jul_bkp
Server:TestDB2:/oracle/product/TestDB2/11.2.0.1/dbs $rm initTestDB2.ora
Server:TestDB2:/oracle/product/TestDB2/11.2.0.1/dbs $ls -lrt /oracle/admin/TestDB2/pfile/initTestDB2.ora
-rw-------    1 oracle   dba            1088 Jul  9 2013  /oracle/admin/TestDB2/pfile/initTestDB2.ora
Server:TestDB2:/oracle/product/TestDB2/11.2.0.1/dbs $

total 24
-rw-r--r--    1 oracle   dba            2851 May 15 2009  init.ora
-rw-r-----    1 oracle   dba              24 Jul 26 08:12 lkTestDB2
-rw-rw----    1 oracle   dba            1544 Jul 26 10:23 hc_TestDB2.dat
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4/dbs $ln -s /oracle/admin/TestDB2/pfile/initTestDB2.ora initTestDB2.ora
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4/dbs $ls -ltr
total 24
-rw-r--r--    1 oracle   dba            2851 May 15 2009  init.ora
-rw-r-----    1 oracle   dba              24 Jul 26 08:12 lkTestDB2
-rw-rw----    1 oracle   dba            1544 Jul 26 10:23 hc_TestDB2.dat
lrwxrwxrwx    1 oracle   dba              45 Jul 26 10:24 initTestDB2.ora -> /oracle/admin/TestDB2/pfile/initTestDB2.ora
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4/dbs $ls -n /oracle/admin/TestDB2/pfile/spfileTestDB2.ora spfileTestDB2.ora
spfileTestDB2.ora not found
-rw-r-----    1 300      203            3584 May 24 12:15 /oracle/admin/TestDB2/pfile/spfileTestDB2.ora
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4/dbs $ln -s  /oracle/admin/TestDB2/pfile/spfileTestDB2.ora spfileTestDB2.ora
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4/dbs $ln -s /oracle/admin/TestDB2/pfile/orapwTestDB2 orapwTestDB2
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4/dbs $ls -ltr
total 24
-rw-r--r--    1 oracle   dba            2851 May 15 2009  init.ora
-rw-r-----    1 oracle   dba              24 Jul 26 08:12 lkTestDB2
lrwxrwxrwx    1 oracle   dba              45 Jul 26 10:24 initTestDB2.ora -> /oracle/admin/TestDB2/pfile/initTestDB2.ora
-rw-rw----    1 oracle   dba            1544 Jul 26 10:25 hc_TestDB2.dat
lrwxrwxrwx    1 oracle   dba              47 Jul 26 10:25 spfileTestDB2.ora -> /oracle/admin/TestDB2/pfile/spfileTestDB2.ora
lrwxrwxrwx    1 oracle   dba              42 Jul 26 10:25 orapwTestDB2 -> /oracle/admin/TestDB2/pfile/orapwTestDB2
Server:TestDB2:/oracle/product/TestDB2/11.2.0.4/dbs $


total 0
drwxr-xr-x    2 oracle   dba             256 Feb 25 2011  trace
drwxr-xr-x    2 oracle   dba             256 Sep  6 2012  log
drwxr-xr-x    3 oracle   dba             256 Jul 26 10:38 admin
Server:TestDB1:/oracle/network $cd admin
Server:TestDB1:/oracle/network/admin $ls -ltr
total 488
-rwxr-x---    1 oracle   dba            2373 Sep  5 2012  listener.ora.11g
-rwxr-x---    1 oracle   dba            2367 Jan 11 2013  listener.ora.11g_01
-rwxr-x---    1 oracle   dba            2732 Jul  5 12:00 listener.ora_Jul_5_2014
-rwxr-x---    1 oracle   dba            2850 Jul 30 15:26 listener.ora
-rw-r--r--    1 oracle   dba          227711 Jul 31 14:08 tnsnames.ora
drwxr-xr-x    2 oracle   dba            4096 Jul 31 14:08 archive
Server:TestDB1:/oracle/network/admin $

total 24
-rw-r--r--    1 oracle   dba            2851 May 15 2009  init.ora
lrwxrwxrwx    1 oracle   dba              45 Sep  4 2012  initTestDB1.ora -> /oracle/admin/TestDB1/pfile/initTestDB1.ora
-rw-r-----    1 oracle   dba              24 Sep  4 2012  lkTestDB1
lrwxrwxrwx    1 oracle   dba              47 Sep  5 2012  spfileTestDB1.ora -> /oracle/admin/TestDB1/pfile/spfileTestDB1.ora
lrwxrwxrwx    1 oracle   dba              42 Sep  5 2012  orapwTestDB1 -> /oracle/admin/TestDB1/pfile/orapwTestDB1
-rw-rw----    1 oracle   dba            1544 Aug  2 10:16 hc_TestDB1.dat
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $cp -p initTestDB1.ora initTestDB1.ora_02aug
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $rm initTestDB1.ora
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $ls -ltr
total 32
-rw-r--r--    1 oracle   dba            2851 May 15 2009  init.ora
lrwxrwxrwx    1 oracle   dba              45 Sep  4 2012  initTestDB1.ora -> /oracle/admin/TestDB1/pfile/initTestDB1.ora
-rw-r-----    1 oracle   dba              24 Sep  4 2012  lkTestDB1
lrwxrwxrwx    1 oracle   dba              47 Sep  5 2012  spfileTestDB1.ora -> /oracle/admin/TestDB1/pfile/spfileTestDB1.ora
lrwxrwxrwx    1 oracle   dba              42 Sep  5 2012  orapwTestDB1 -> /oracle/admin/TestDB1/pfile/orapwTestDB1
-rw-------    1 oracle   dba            1088 Jul  9 2013  initTestDB1.ora_02aug
-rw-rw----    1 oracle   dba            1544 Aug  2 10:16 hc_TestDB1.dat
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $rm initTestDB1.ora
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $ls -ltr
total 32
-rw-r--r--    1 oracle   dba            2851 May 15 2009  init.ora
-rw-r-----    1 oracle   dba              24 Sep  4 2012  lkTestDB1
lrwxrwxrwx    1 oracle   dba              47 Sep  5 2012  spfileTestDB1.ora -> /oracle/admin/TestDB1/pfile/spfileTestDB1.ora
lrwxrwxrwx    1 oracle   dba              42 Sep  5 2012  orapwTestDB1 -> /oracle/admin/TestDB1/pfile/orapwTestDB1
-rw-------    1 oracle   dba            1088 Jul  9 2013  initTestDB1.ora_02aug
-rw-rw----    1 oracle   dba            1544 Aug  2 10:16 hc_TestDB1.dat
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $ls -ltr /oracle/admin/TestDB1/pfile/initTestDB1.ora
-rw-------    1 oracle   dba            1088 Jul  9 2013  /oracle/admin/TestDB1/pfile/initTestDB1.ora
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $cp -p  spfileTestDB1.ora  spfileTestDB1.ora_2aug
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $rm spfileTestDB1.ora
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $cp -p orapwTestDB1 orapwTestDB1_02aug
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $rm spfileTestDB1.ora
rm: spfileTestDB1.ora: No such file or directory
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $rm orapwTestDB1
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $ls -ltr
total 48
-rw-r--r--    1 oracle   dba            2851 May 15 2009  init.ora
-rw-r-----    1 oracle   dba              24 Sep  4 2012  lkTestDB1
-rw-------    1 oracle   dba            1088 Jul  9 2013  initTestDB1.ora_02aug
-rw-r-----    1 oracle   dba            1536 Jul 15 2013  orapwTestDB1_02aug
-rw-r-----    1 oracle   dba            3584 Aug  1 12:54 spfileTestDB1.ora_2aug
-rw-rw----    1 oracle   dba            1544 Aug  2 10:16 hc_TestDB1.dat
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1/dbs $cd ..
Server:TestDB1:/oracle/product/TestDB1/11.2.0.1 $cd ..
Server:TestDB1:/oracle/product/TestDB1 $ls -ltr
total 16
drwxr-xr-x   76 oracle   dba            4096 Jan 11 2013  11.2.0.1
drwxr-xr-x   79 oracle   dba            4096 Aug  2 10:29 11.2.0.4
Server:TestDB1:/oracle/product/TestDB1 $cd  11.2.0.4
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4 $cd dbs
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $ls -ltr
total 24
-rw-r--r--    1 oracle   dba            2851 May 15 2009  init.ora
-rw-r-----    1 oracle   dba              24 Aug  2 10:33 lkTestDB1
-rw-rw----    1 oracle   dba            1544 Aug  2 12:13 hc_TestDB1.dat
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $ls -n /oracle/admin/TestDB1/pfile/spfileTestDB1.ora spfileTestDB1.o
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $ls -n /oracle/admin/TestDB1/pfile/spfileTestDB1.ora spfileTestDB1.ora
spfileTestDB1.ora not found
-rw-r-----    1 300      203            3584 Aug  1 12:54 /oracle/admin/TestDB1/pfile/spfileTestDB1.ora
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $ls -n /oracle/admin/TestDB1/pfile/initTestDB1.ora initTestDB1.ora
initTestDB1.ora not found
-rw-------    1 300      203            1088 Jul  9 2013  /oracle/admin/TestDB1/pfile/initTestDB1.ora
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $ls -lr
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $ls -ltr /oracle/admin/TestDB1/pfile/initTestDB1.ora
-rw-------    1 oracle   dba            1088 Jul  9 2013  /oracle/admin/TestDB1/pfile/initTestDB1.ora
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $ln -s /oracle/admin/TestDB1/pfile/spfileTestDB1.ora spfileTestDB1.ora
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $ln -s /oracle/admin/TestDB1/pfile/initTestDB1.ora initTestDB1.ora
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $ln -s /oracle/admin/TestDB1/pfile/orapwTestDB1 orapwTestDB1
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $ls -ltr
total 24
-rw-r--r--    1 oracle   dba            2851 May 15 2009  init.ora
-rw-r-----    1 oracle   dba              24 Aug  2 10:33 lkTestDB1
lrwxrwxrwx    1 oracle   dba              47 Aug  2 12:20 spfileTestDB1.ora -> /oracle/admin/TestDB1/pfile/spfileTestDB1.ora
-rw-rw----    1 oracle   dba            1544 Aug  2 12:20 hc_TestDB1.dat
lrwxrwxrwx    1 oracle   dba              45 Aug  2 12:20 initTestDB1.ora -> /oracle/admin/TestDB1/pfile/initTestDB1.ora
lrwxrwxrwx    1 oracle   dba              42 Aug  2 12:20 orapwTestDB1 -> /oracle/admin/TestDB1/pfile/orapwTestDB1
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.1.0 Production on Sat Aug 2 12:20:47 2014

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL> startup
ORACLE instance started.

Total System Global Area 3423965184 bytes
Fixed Size                  2251168 bytes
Variable Size            1879049824 bytes
Database Buffers         1392508928 bytes
Redo Buffers              150155264 bytes
Database mounted.
Database opened.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $cat /etc/oratab
# ------------------------------------------------------------------------------
#
# This file is used by ORACLE utilities.  It is created by root.sh and updated
# by the Database Configuration Assistant when creating a database.
#
# A colon, ':', is used as the field terminator.
# A new line terminates the entry.  Lines beginning
# with a pound sign, '#', are comments.
#
# Entries are of the form:
#   $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home directory of
# the database respectively.  The third field indicates to the dbstart utility
# that the database should, "Y",
# or should not, "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#-------------------------------------------------------------------------------
#       Amendment history
#-------------------------------------------------------------------------------
# When          Who     What
# 25-Feb-11     WTM     Created
# 09-Feb-11     WTM     Added bxl55dev, cis55dev, csf55dev, ebs55dev, eml55dev
#                       and inl55dev
# 21-Mar-11     WTM     Added cml55dev
# 04-Apr-11     Cardy   Added acm02dev
# 21-Aug-2012   PS      Added emagent , asfdev
# 03-Sep-2012   MR      Added listener_11g, fus06dev,fus07dev,TestDB1 and TestDB2
# 18-Mar-13     WTM     Added crd01dev
# 13-Jul-13     CGJ     Upgraded acm02dev and listener_11g to 11.2.0.3.
# 05-Jul-14     YVRB    Upgraded crd01dev,fus06dev,fus07dev and listener_11g to 11.2.0.4 - CH125745
# 26-Jul-14     SA      Upgraded TestDB2 11.2.0.4 - CH129614
# ------------------------------------------------------------------------------
#
dummy:/oracle/product/asfdev/11.2.0.4:N
dummy10205:/oracle/product/10.2.0.5:N
acm02dev:/oracle/product/acm02dev/11.2.0.3:N
bxl55dev:/oracle/product/bxl55dev/10.2.0.5:N
cis55dev:/oracle/product/cis55dev/10.2.0.5:N
cml55dev:/oracle/product/cml55dev/10.2.0.5:N
csf55dev:/oracle/product/csf55dev/10.2.0.5:N
ebs55dev:/oracle/product/ebs55dev/10.2.0.5:N
eml55dev:/oracle/product/eml55dev/10.2.0.5:N
inl55dev:/oracle/product/inl55dev/10.2.0.5:N
fus06dev:/oracle/product/fus06dev/11.2.0.4:N
fus07dev:/oracle/product/fus07dev/11.2.0.4:N
TestDB1:/oracle/product/TestDB1/11.2.0.1:N
TestDB2:/oracle/product/TestDB2/11.2.0.4:N
crd01dev:/oracle/product/crd01dev/11.2.0.4:N
listener_11g:/oracle/product/listener/11.2.0.4:N
listener_10g:/oracle/product/listener/10.2.0.5:N

emagent:/oracle/product/emgc_10g/sdgdev002_oracle10g/agent12g/agent_inst:N
agent12g1:/oracle/product/emgc_10g/sdgdev002_oracle10g/agent12g/core/12.1.0.1.0:N
asfdev:/oracle/product/asfdev/11.2.0.4:N
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $ct
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $vi /etc/oratab
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $. oraenv
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $.oraenv
ksh: .oraenv:  not found
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $. oraenv
ORACLE_SID = [TestDB1] ?
The Oracle base remains unchanged with value /oracle
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $echo $ORACLE_HOME
/oracle/product/TestDB1/11.2.0.4
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $cd
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4/dbs $cd ..
Server:TestDB1:/oracle/product/TestDB1/11.2.0.4 $cd ..
Server:TestDB1:/oracle/product/TestDB1 $cd ..
Server:TestDB1:/oracle/product $cd ..
Server:TestDB1:/oracle $cd network
Server:TestDB1:/oracle/network $ls -ltr
total 0
drwxr-xr-x    2 oracle   dba             256 Feb 25 2011  trace
drwxr-xr-x    2 oracle   dba             256 Sep  6 2012  log
drwxr-xr-x    3 oracle   dba             256 Jul 26 10:38 admin
Server:TestDB1:/oracle/network $cd admin
Server:TestDB1:/oracle/network/admin $ls -ltr
total 488
-rwxr-x---    1 oracle   dba            2373 Sep  5 2012  listener.ora.11g
-rwxr-x---    1 oracle   dba            2367 Jan 11 2013  listener.ora.11g_01
-rwxr-x---    1 oracle   dba            2732 Jul  5 12:00 listener.ora_Jul_5_2014
-rwxr-x---    1 oracle   dba            2850 Jul 30 15:26 listener.ora
-rw-r--r--    1 oracle   dba          227711 Jul 31 14:08 tnsnames.ora
drwxr-xr-x    2 oracle   dba            4096 Jul 31 14:08 archive
Server:TestDB1:/oracle/network/admin $

  oracle 24576426        1   0   Jul 21      -  2:40 ora_pmon_inl55dev
Server:TestDB1:/oracle/network/admin $ps -ef|grep tns
  oracle 60817490 12255602   1 12:30:47  pts/4  0:00 grep tns
  oracle  8585570        1   0   Jul 05      -  9:31 /oracle/product/listener/11.2.0.4/bin/tnslsnr listener_11g -inherit
Server:TestDB1:/oracle/network/admin $lsnrctl

LSNRCTL for IBM/AIX RISC System/6000: Version 11.2.0.1.0 - Production on 02-AUG-2014 12:30:57

Copyright (c) 1991, 2009, Oracle.  All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> reload listener_11g
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=REGISTER)))
The command completed successfully
LSNRCTL>


  Instance "fus06dev", status READY, has 1 handler(s) for this service...
Service "fus06devXDB" has 1 instance(s).
  Instance "fus06dev", status READY, has 1 handler(s) for this service...
Service "fus07dev" has 2 instance(s).
  Instance "fus07dev", status UNKNOWN, has 1 handler(s) for this service...
  Instance "fus07dev", status READY, has 1 handler(s) for this service...
Service "fus07devXDB" has 1 instance(s).
  Instance "fus07dev", status READY, has 1 handler(s) for this service...
Service "TestDB1" has 2 instance(s).
  Instance "TestDB1", status UNKNOWN, has 1 handler(s) for this service...
  Instance "TestDB1", status READY, has 1 handler(s) for this service...
Service "TestDB1XDB" has 1 instance(s).
  Instance "TestDB1", status READY, has 1 handler(s) for this service...
Service "TestDB2" has 2 instance(s).
  Instance "TestDB2", status UNKNOWN, has 1 handler(s) for this service...
  Instance "TestDB2", status READY, has 1 handler(s) for this service...
Service "TestDB2XDB" has 1 instance(s).
  Instance "TestDB2", status READY, has 1 handler(s) for this service...
Service "inl55dev" has 1 instance(s).
  Instance "inl55dev", status UNKNOWN, has 1 handler(s) for this service...
Service "inl55dev.world" has 1 instance(s).
  Instance "inl55dev", status READY, has 1 handler(s) for this service...
Service "inl55dev_XPT.world" has 1 instance(s).
  Instance "inl55dev", status READY, has 1 handler(s) for this service...
The command completed successfully
LSNRCTL> xit

Take a list of INVALID objects in the database after the upgrade:

SQL> spool invalid_obj_after_DBupgradeto11202.log;
SQL> select * from v$instance;
SQL> select sysdate from dual;
SQL> select owner,object_name,status from dba_objects where status ='INVALID';
SQL> spool off;

Compare the invalid objects list with the one taken at section 1.1, step 6 before upgrade and verify that there are no new invalid objects after applying the patch.
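A quick sketch for the comparison (a simple per-owner count that can be diffed against the pre-upgrade spool; only standard dictionary views are used):

SQL> SELECT owner, COUNT(*) FROM dba_objects WHERE status = 'INVALID' GROUP BY owner ORDER BY owner;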

Check that the instance background processes are running:
  ps -ef | grep ora_ | grep <ORACLE_SID>
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;


-----------------------


Prechecks :
perform a flashcopy/backup of the database
check free space on the mount points and at tablespace level
check for invalid components
check for stale schema stats (using check_schema_stale_stats.sql; see the query sketch after this list)
run the pre-upgrade tool (utlu112i_11204_009.sql)
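A minimal sketch of the stale-stats check (a plain dictionary query, offered only as an alternative to the check_schema_stale_stats.sql script mentioned above):

SQL> SELECT owner, table_name, stale_stats, last_analyzed
       FROM dba_tab_statistics
      WHERE stale_stats = 'YES'
        AND owner NOT IN ('SYS','SYSTEM')
      ORDER BY owner, table_name;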

2) Change the oratab entry

3) un-tar the oracle home tar file and Clone new Oracle home

4) create soft links and start db in upgrade mode

5) Run the catupgrd.sql script to do the upgrade

6) perform post db activities


Verification Steps :

a) Check the inventory to list the installed patches (see the sketch below)
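A minimal sketch (assuming ORACLE_HOME already points to the upgraded home; opatch lspatches is available with recent OPatch versions):

$ $ORACLE_HOME/OPatch/opatch lsinventory
$ $ORACLE_HOME/OPatch/opatch lspatches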



Cloning an Oracle Home

You may wish to clone an Oracle Home, for example when all your databases share a single Oracle Home but you want to separate Development from Test.
This could be so you can soak test Patch Set Updates (PSUs) on Development before applying them to Test and then Production.
Or you might wish to have two Oracle Homes, so you can patch one and then switch all databases to the patched Oracle Home for minimal downtime.


Copying the Oracle Home

First you need to copy the Oracle Home at file level using cp as user root as shown below:

[root@v1ex2dbadm01 ~]# cp -Rp /u01/app/oracle/product/12.1.0.2/dbhome_1 /u01/app/oracle/product/12.1.0.2/dbhome_2

Then check that the Oracle Home and the cloned Oracle Home are the same size:

[root@v1ex2dbadm01 ~]# du -h /u01/app/oracle/product/12.1.0.2/dbhome_1 --max-depth=0
12G /u01/app/oracle/product/12.1.0.2/dbhome_1
[root@v1ex2dbadm01 ~]# du -h /u01/app/oracle/product/12.1.0.2/dbhome_2 --max-depth=0
12G /u01/app/oracle/product/12.1.0.2/dbhome_2

Then repeat on all the other nodes:

[root@v1ex2dbadm02 ~]# cp -Rp /u01/app/oracle/product/12.1.0.2/dbhome_1 /u01/app/oracle/product/12.1.0.2/dbhome_2
[root@v1ex2dbadm02 ~]# du -h /u01/app/oracle/product/12.1.0.2/dbhome_1 --max-depth=0
12G /u01/app/oracle/product/12.1.0.2/dbhome_1
[root@v1ex2dbadm02 ~]# du -h /u01/app/oracle/product/12.1.0.2/dbhome_2 --max-depth=0
12G /u01/app/oracle/product/12.1.0.2/dbhome_2


Cloning the Oracle Home

Now that we have a copy of the Oracle Home, we next need to clone it using the clone.pl Perl script as shown below:

/usr/bin/perl $ORACLE_HOME/clone/bin/clone.pl \
'-O"CLUSTER_NODES={v1ex2dbadm01,v1ex2dbadm02}"' \
'-O"LOCAL_NODE=v1ex2dbadm01"' ORACLE_BASE=/u01/app/oracle \
ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=OraDB12Home2 '-O-noConfig'

CLUSTER_NODES = all the nodes in the cluster
LOCAL_NODE = The node you are running the command on
ORACLE_BASE = The Oracle Base defined on the Server
ORACLE_HOME = The Cloned Oracle Home, already exported
ORACLE_HOME_NAME = The name you wish to give to the Cloned Oracle Home

[root@v1ex2dbadm01 ~]# su - oracle
[oracle@v1ex2dbadm01 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2
[oracle@v1ex2dbadm01 ~]$ /usr/bin/perl $ORACLE_HOME/clone/bin/clone.pl \
> '-O"CLUSTER_NODES={v1ex2dbadm01,v1ex2dbadm02}"' \
> '-O"LOCAL_NODE=v1ex2dbadm01"' ORACLE_BASE=/u01/app/oracle \
> ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=OraDB12Home2 '-O-noConfig'
./runInstaller -clone -waitForCompletion "CLUSTER_NODES={v1ex2dbadm01,v1ex2dbadm02}" "LOCAL_NODE=v1ex2dbadm01" "ORACLE_BASE=/u01/app/oracle" "ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2" "ORACLE_HOME_NAME=OraDB12Home2" -noConfig -silent -paramFile /u01/app/oracle/product/12.1.0.2/dbhome_2/clone/clone_oraparam.ini
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB. Actual 5392 MB Passed
Checking swap space: must be greater than 500 MB. Actual 24532 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-06-21_02-57-14PM. Please wait ...You can find the log of this install session at:
/u01/app/oraInventory/logs/cloneActions2017-06-21_02-57-14PM.log
 .................................................. 5% Done.
 .................................................. 10% Done.
 .................................................. 15% Done.
 .................................................. 20% Done.
 .................................................. 25% Done.
 .................................................. 30% Done.
 .................................................. 35% Done.
 .................................................. 40% Done.
 .................................................. 45% Done.
 .................................................. 50% Done.
 .................................................. 55% Done.
 .................................................. 60% Done.
 .................................................. 65% Done.
 .................................................. 70% Done.
 .................................................. 75% Done.
 .................................................. 80% Done.
 .................................................. 85% Done.
 ..........
Copy files in progress.

Copy files successful.

Link binaries in progress.

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.

Finish Setup successful.
The cloning of OraDB12Home2 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-06-21_02-57-14PM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
 .................................................. 95% Done.

As a root user, execute the following script(s):
1. /u01/app/oracle/product/12.1.0.2/dbhome_2/root.sh

Execute /u01/app/oracle/product/12.1.0.2/dbhome_2/root.sh on the following nodes:
[v1ex2dbadm01]

.................................................. 100% Done.

[oracle@v1ex2dbadm01 ~]$

Next, check that the relinking uses RDS, not UDP:

[oracle@v1ex2dbadm01 ~]$ /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/skgxpinfo
rds
[oracle@v1ex2dbadm01 ~]$

If UDP is set (the output is udp instead of rds as shown above), then relink using command below:

cd $ORACLE_HOME/rdbms/lib; ORACLE_HOME=$ORACLE_HOME make -f ins_rdbms.mk ipc_rds ioracle
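After relinking, re-run the same check to confirm the output is now rds:

$ $ORACLE_HOME/bin/skgxpinfo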

Then repeat on all the other nodes, remembering to change LOCAL_NODE:

[root@v1ex2dbadm02 ~]# su - oracle
[oracle@v1ex2dbadm02 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2
[oracle@v1ex2dbadm02 ~]$ /usr/bin/perl $ORACLE_HOME/clone/bin/clone.pl \
> '-O"CLUSTER_NODES={v1ex2dbadm01,v1ex2dbadm02}"' \
> '-O"LOCAL_NODE=v1ex2dbadm02"' ORACLE_BASE=/u01/app/oracle \
> ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=OraDB12Home2 '-O-noConfig'
./runInstaller -clone -waitForCompletion "CLUSTER_NODES={v1ex2dbadm01,v1ex2dbadm02}" "LOCAL_NODE=v1ex2dbadm02" "ORACLE_BASE=/u01/app/oracle" "ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2" "ORACLE_HOME_NAME=OraDB12Home2" -noConfig -silent -paramFile /u01/app/oracle/product/12.1.0.2/dbhome_2/clone/clone_oraparam.ini
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB. Actual 10725 MB Passed
Checking swap space: must be greater than 500 MB. Actual 24565 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-06-21_03-05-37PM. Please wait ...You can find the log of this install session at:
/u01/app/oraInventory/logs/cloneActions2017-06-21_03-05-37PM.log
 .................................................. 5% Done.
 .................................................. 10% Done.
 .................................................. 15% Done.
 .................................................. 20% Done.
 .................................................. 25% Done.
 .................................................. 30% Done.
 .................................................. 35% Done.
 .................................................. 40% Done.
 .................................................. 45% Done.
 .................................................. 50% Done.
 .................................................. 55% Done.
 .................................................. 60% Done.
 .................................................. 65% Done.
 .................................................. 70% Done.
 .................................................. 75% Done.
 .................................................. 80% Done.
 .................................................. 85% Done.
 ..........
Copy files in progress.

Copy files successful.

Link binaries in progress.

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.

Finish Setup successful.
The cloning of OraDB12Home2 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-06-21_03-05-37PM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
 .................................................. 95% Done.

As a root user, execute the following script(s):
1. /u01/app/oracle/product/12.1.0.2/dbhome_2/root.sh

Execute /u01/app/oracle/product/12.1.0.2/dbhome_2/root.sh on the following nodes:
[v1ex2dbadm02]

.................................................. 100% Done.

[oracle@v1ex2dbadm02 ~]$ /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/skgxpinfo
rds
[oracle@v1ex2dbadm02 ~]$

Next you need to run root.sh as shown below:

[oracle@v1ex2dbadm01 ~]$ exit
logout
[root@v1ex2dbadm01 ~]# /u01/app/oracle/product/12.1.0.2/dbhome_2/root.sh
Check /u01/app/oracle/product/12.1.0.2/dbhome_2/install/root_v1ex2dbadm01.v1.com_2017-06-21_15-04-16.log for the output of root script
[root@v1ex2dbadm01 ~]#

Then repeat on all the other nodes:

[oracle@v1ex2dbadm02 ~]$ exit
logout
[root@v1ex2dbadm02 ~]# /u01/app/oracle/product/12.1.0.2/dbhome_2/root.sh
Check /u01/app/oracle/product/12.1.0.2/dbhome_2/install/root_v1ex2dbadm02.v1.com_2017-06-21_15-06-49.log for the output of root script
[root@v1ex2dbadm02 ~]#

Verify that the cloned Oracle Home includes all the nodes in the cluster:

[root@v1ex2dbadm01 ~]# su - oracle
[oracle@v1ex2dbadm01 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2
[oracle@v1ex2dbadm01 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME | grep node
Rac system comprising of multiple nodes
Local node = v1ex2dbadm01
Remote node = v1ex2dbadm02
[oracle@v1ex2dbadm01 ~]$

Then repeat on all the other nodes:

[root@v1ex2dbadm02 ~]# su - oracle
[oracle@v1ex2dbadm02 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2
[oracle@v1ex2dbadm02 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME | grep node
Rac system comprising of multiple nodes
Local node = v1ex2dbadm02
Remote node = v1ex2dbadm01
[oracle@v1ex2dbadm02 ~]$


Switching Databases to Cloned Oracle Home

Change the Oracle Home using server control:

[oracle@v1ex2dbadm01 ~]$ srvctl config database -d V1DEV -a
Database unique name: V1DEV
Database name:
Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_1
Oracle user: oracle
Spfile: +DATAC1/V1DEV/PARAMETERFILE/spfileV1DEV.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATAC1,RECOC1
Mount point paths:
Services: v1jobservice
Type: RAC
Start concurrency:
Stop concurrency:
Database is enabled
Database is individually enabled on nodes:
Database is individually disabled on nodes:
OSDBA group: dba
OSOPER group: dba
Database instances: V1DEV1,V1DEV2
Configured nodes: v1ex2dbadm01,v1ex2dbadm02
Database is administrator managed
[oracle@v1ex2dbadm01 ~]$ srvctl modify database -d V1DEV -o /u01/app/oracle/product/12.1.0.2/dbhome_2
[oracle@v1ex2dbadm01 ~]$ srvctl config database -d V1DEV -a
Database unique name: V1DEV
Database name:
Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_2
Oracle user: oracle
Spfile: +DATAC1/V1DEV/PARAMETERFILE/spfileV1DEV.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATAC1,RECOC1
Mount point paths:
Services: v1jobservice
Type: RAC
Start concurrency:
Stop concurrency:
Database is enabled
Database is individually enabled on nodes:
Database is individually disabled on nodes:
OSDBA group: dba
OSOPER group: dba
Database instances: V1DEV1,V1DEV2
Configured nodes: v1ex2dbadm01,v1ex2dbadm02
Database is administrator managed
[oracle@v1ex2dbadm01 ~]$

Next change /etc/oratab to reflect the new Oracle Home:

[oracle@v1ex2dbadm01 ~]$ vi /etc/oratab
[oracle@v1ex2dbadm01 ~]$ more /etc/oratab | grep dbhome_2
V1DEV1:/u01/app/oracle/product/12.1.0.2/dbhome_2:N # line added by Agent

Then repeat on all the other nodes:

[oracle@v1ex2dbadm02 ~]$ vi /etc/oratab
[oracle@v1ex2dbadm02 ~]$ more /etc/oratab | grep dbhome_2
V1DEV2:/u01/app/oracle/product/12.1.0.2/dbhome_2:N # line added by Agent

Now we perform a rolling bounce of the database:

[oracle@v1ex2dbadm01 ~]$ srvctl status database -d V1DEV -v
Instance V1DEV1 is running on node v1ex2dbadm01. Instance status: Open,Running from Old Oracle Home.
Instance V1DEV2 is running on node v1ex2dbadm02 with online services v1jobservice. Instance status: Open,Running from Old Oracle Home.
[oracle@v1ex2dbadm01 ~]$ srvctl stop instance -d V1DEV -i V1DEV1 -f
[oracle@v1ex2dbadm01 ~]$ srvctl start instance -d V1DEV -i V1DEV1
[oracle@v1ex2dbadm01 ~]$ srvctl stop instance -d V1DEV -i V1DEV2 -f
[oracle@v1ex2dbadm01 ~]$ srvctl start instance -d V1DEV -i V1DEV2
[oracle@v1ex2dbadm01 ~]$ srvctl status database -d V1DEV -v
Instance V1DEV1 is running on node v1ex2dbadm01. Instance status: Open.
Instance V1DEV2 is running on node v1ex2dbadm02 with online services v1jobservice. Instance status: Open.
[oracle@v1ex2dbadm01 ~]$



Related My Oracle Support (MOS) notes:

Master Note For Cloning Oracle Database Server ORACLE_HOME’s Using the Oracle Universal Installer (OUI) (Doc ID 1154613.1)

Cloning An Existing Oracle Database 12c Release 1 (12.1.0.x) RDBMS Installation Using OUI (Doc ID 1493677.1)