Wednesday, 28 February 2018

Oracle Cloud Part 8

Figure 9.45. Initiate the installation process.

The following output will appear in the SSH window:
[oracle@bsfrac01 grid]$ pwd
/u01/app/oracle/software/Ora12c/Grid/grid
[oracle@bsfrac01 grid]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 5620 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2047 MB    Passed
Checking monitor: must be configured to display at least 256 colors.
Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from
/tmp/OraInstall2013-02-26_01-00-01AM. Please wait ...
The following grid infrastructure processes or daemons should appear on the RAC nodes after setup.
PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 7619 oracle    20   0 98432 1252  612 S  6.9  0.0  30:33.70 sshd
24066 oracle    20   0  612m  48m  15m S  4.0  1.7  11:10.74 gipcd.bin
24700 oracle    -2   0 1368m  16m  14m S  4.0  0.6  10:20.84 asm_vktm_+asm1
28408 oracle    -2   0 1368m  15m  13m S  4.0  0.5  10:06.05 apx_vktm_+apx1
24336 oracle    RT   0 1157m 182m  80m S  3.6  6.4   9:49.82 ocssd.bin
24805 root      RT   0  788m 101m  71m S  3.6  3.6   8:03.76 osysmond.bin
 7670 oracle    20   0 1580m 227m 6076 S  2.3  8.0  71:35.10 java
23757 root      20   0 1194m  59m  19m S  2.3  2.1   5:36.65 ohasd.bin
24963 root      20   0 1146m  43m  16m S  2.3  1.5   6:24.32 orarootagent.bi
24812 root      20   0 1247m  79m  26m S  1.6  2.8   4:42.22 crsd.bin
24011 oracle    20   0 1150m  34m  17m S  1.3  1.2   1:46.04 oraagent.bin
24454 root      20   0  791m  36m  14m S  1.3  1.3   3:02.20 octssd.bin
25086 oracle    20   0 1754m 151m  19m S  1.3  5.3   2:53.32 java
 3728 oracle    20   0 15180 1256  896 R  1.0  0.0   0:00.08 top
24024 oracle    20   0  667m  38m  15m S  1.0  1.4   3:04.61 evmd.bin
24311 root      RT   0  918m 106m  71m S  0.7  3.7   0:16.43 cssdmonitor
24720 oracle    -2   0 1382m  33m  20m S  0.7  1.2   1:41.75 asm_lms0_+asm1
24864 root      RT   0  849m 160m  71m S  0.7  5.6   1:48.41 ologgerd
   57 root      20   0     0    0    0 S  0.3  0.0   0:12.15 kworker/1:1
24043 oracle    20   0  659m  36m  14m S  0.3  1.3   0:10.50 gpnpd.bin
24655 oracle    20   0 1368m  19m  17m S  0.3  0.7   0:07.28 asm_pmon_+asm1
24710 oracle    20   0 1374m  25m  17m S  0.3  0.9   0:26.34 asm_diag_+asm1
24716 oracle    20   0 1385m  40m  25m S  0.3  1.4   1:11.91 asm_lmon_+asm1
24718 oracle    20   0 1383m  30m  17m S  0.3  1.1   0:42.79 asm_lmd0_+asm1
24951 oracle    20   0 1180m  73m  20m S  0.3  2.6   2:35.66 oraagent.bin
25065 oracle    20   0 1050m 9072 1268 S  0.3  0.3   0:02.38 ons
30490 oracle    20   0 1373m  28m  23m S  0.3  1.0   0:02.43 oracle_30490_+a
Hub Node(s):
2318 gdm       20   0  332m 9924 8924 S 13.9  0.3   0:53.31 gnome-settings-
10097 oracle    -2   0 1368m  15m  13m S  5.3  0.6   7:47.66 apx_vktm_+apx2
 8958 oracle    20   0  602m  47m  15m S  4.6  1.7   9:34.31 gipcd.bin
 9173 root      RT   0  789m 101m  71m S  4.0  3.6   6:32.76 osysmond.bin
 9002 oracle    RT   0 1159m 175m  80m S  3.6  6.1  10:26.57 ocssd.bin
 9506 oracle    -2   0 1368m  16m  14m S  3.6  0.6   8:41.92 asm_vktm_+asm2
 8809 root      20   0 1190m  68m  30m S  2.0  2.4   4:42.47 ohasd.bin
 8909 oracle    20   0 1150m  39m  20m S  1.3  1.4   2:53.17 oraagent.bin
 8922 oracle    20   0  666m  39m  16m S  1.3  1.4   2:41.23 evmd.bin
 9151 root      20   0  725m  36m  14m S  1.3  1.3   2:32.39 octssd.bin
 9281 root      20   0  748m  29m  16m S  1.3  1.0   3:35.72 orarootagent.bi
 9180 root      20   0 1240m  68m  30m S  1.0  2.4   3:15.15 crsd.bin
 9521 oracle    20   0 1379m  33m  23m S  0.7  1.2   0:56.36 asm_dia0_+asm2
 9528 oracle    -2   0 1382m  33m  20m S  0.7  1.2   1:29.06 asm_lms0_+asm2
14933 oracle    20   0 15080 1144  812 R  0.7  0.0   0:03.13 top
 2207 root      20   0  121m  10m 6116 S  0.3  0.4   0:04.20 Xorg
 9347 oracle    20   0 1164m  61m  22m S  0.3  2.1   2:01.56 oraagent.bin
 9516 oracle    20   0 1374m  25m  17m S  0.3  0.9   0:22.80 asm_diag_+asm2
 9523 oracle    20   0 1385m  40m  25m S  0.3  1.4   1:01.14 asm_lmon_+asm2
 9526 oracle    20   0 1383m  30m  17m S  0.3  1.1   0:38.67 asm_lmd0_+asm2
 9532 oracle    20   0 1370m  21m  17m S  0.3  0.8   0:13.08 asm_lmhb_+asm2
Leaf Node(s):
 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 4085 oracle    20   0  602m  44m  14m S  5.6  1.6   0:17.24 gipcd.bin
 4179 root      RT   0  724m 100m  71m S  4.6  3.5   0:08.74 osysmond.bin
 3940 root      20   0 1191m  60m  29m S  2.6  2.1   0:14.51 ohasd.bin
 4161 root      20   0  725m  28m  13m S  2.0  1.0   0:04.29 octssd.bin
 4188 root      20   0 1202m  55m  27m S  2.0  1.9   0:07.27 crsd.bin
 4051 oracle    20   0  667m  31m  15m S  1.7  1.1   0:05.16 evmd.bin
 4128 oracle    RT   0  987m 127m  79m S  1.3  4.5   0:05.97 ocssdrim.bin
 4470 oracle    20   0 15080 1100  812 R  0.7  0.0   0:00.08 top
 4114 root      RT   0  852m 104m  71m S  0.3  3.7   0:01.12 cssdagent
Note
As part of the Flex ASM configuration (a new 12c feature), ASM does not run on the leaf node(s).
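You can confirm the Flex ASM mode and see which hub nodes host ASM instances from the command line. A minimal sketch, run as the grid infrastructure owner with the grid environment set (output will vary by cluster):
$ srvctl status asm -detail      # lists the nodes on which ASM is running and whether it is enabled
$ asmcmd showclustermode         # reports whether the ASM cluster is running in Flex mode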
[oracle@bsfrac01 bin]$ ./crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    bsfrac01
ora.GRID1.dg   ora....up.type 0/5    0/     ONLINE    ONLINE    bsfrac01
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    bsfrac01
ora....AF.lsnr ora....er.type 0/5    0/     OFFLINE   OFFLINE
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    bsfrac02
ora....N2.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    bsfrac01
ora....N3.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    bsfrac04
ora.MGMTLSNR   ora....nr.type 0/0    0/0    OFFLINE   OFFLINE
ora.asm        ora.asm.type   0/5    0/0    ONLINE    ONLINE    bsfrac01
ora....01.lsnr application    0/5    0/0    ONLINE    ONLINE    bsfrac01
ora....c01.ons application    0/3    0/0    ONLINE    ONLINE    bsfrac01
ora....c01.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    bsfrac01
ora....02.lsnr application    0/5    0/0    ONLINE    ONLINE    bsfrac02
ora....c02.ons application    0/3    0/0    ONLINE    ONLINE    bsfrac02
ora....c02.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    bsfrac02
ora....03.lsnr application    0/5    0/0    ONLINE    ONLINE    bsfrac03
ora....c03.ons application    0/3    0/0    ONLINE    ONLINE    bsfrac03
ora....c03.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    bsfrac03
ora....04.lsnr application    0/5    0/0    ONLINE    ONLINE    bsfrac04
ora....c04.ons application    0/3    0/0    ONLINE    ONLINE    bsfrac04
ora....c04.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    bsfrac04
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    bsfrac01
ora.gns        ora.gns.type   0/5    0/0    ONLINE    ONLINE    bsfrac01
ora.gns.vip    ora....ip.type 0/0    0/0    ONLINE    ONLINE    bsfrac01
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    bsfrac01
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    bsfrac01
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    bsfrac01
ora.proxy_advm ora....vm.type 0/5    0/     ONLINE    ONLINE    bsfrac01
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    bsfrac02
ora.scan2.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    bsfrac01
ora.scan3.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    bsfrac04
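Note that crs_stat is deprecated in favor of crsctl in 12c; the equivalent resource overview can be obtained as follows (a sketch, run from the grid home bin directory):
[oracle@bsfrac01 bin]$ ./crsctl stat res -t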
Summary
As economies the world over continue to shrink, organizations and entities are looking for new and innovative ways to achieve the next dimension of corporate efficiency. Virtualization is one of the key elements of modern cloud computing, providing the gateway to a cost-effective, agile, and elastic IT environment.
Based on the mainstream open-source Xen hypervisor, OVM along with Oracle RAC and EM12c can be used to formulate, build, and set up end-to-end industry-grade, state-of-the-art virtualized database clouds. If you’re not already on it, now is the perfect time to hop on the virtualization and cloud computing bandwagon.
The methods outlined in this chapter and the next are a perfect avenue to do so. They provide a step-by-step guide for setting up and installing your own RAC 12c environment either at home or in a corporate setting. Continue on to the next chapter to finish the RAC 12c database cloud journey we started in this one.

Chapter 10. Virtualizing RAC 12c (DB Clouds) on Oracle VM VirtualBox—RAC Databases
We started our Oracle Real Application Cluster (RAC) database cloud computing journey in Chapter 9 with Oracle VM (OVM) for x86 and reached the point where we had an Oracle grid infrastructure set up in a virtualized environment, based on Oracle VM for x86. The grid infrastructure serves as a consolidated Clusterware and storage (Automatic Storage Management, or ASM) platform for hosting and serving clustered RAC databases in the database cloud.
We continue our trek in this chapter by repeating some of the same steps with OVM VirtualBox instead of OVM for x86. This alternative approach presents you with the advantage of choice; you can pick a virtual solution based on your requirements. OVM enables you to virtualize RAC and set up database clouds in a corporate environment, whereas OVM VirtualBox enables you to do the same but on your own laptop, thereby eliminating the need for expensive hardware to set up, configure, and deploy Oracle RAC. The OVM VirtualBox approach is a great way of learning RAC in a quick and easy fashion, all implemented with the do-it-yourself methodology shown in this chapter.
Following this line of action, we resume our journey with OVM VirtualBox up to the point of setting up Oracle grid infrastructure. Once that stage is achieved, the subsequent steps in this chapter, detailing setting up RAC databases in the Oracle database cloud, apply equally to both approaches, OVM for x86 and OVM VirtualBox.
We also delve into cloud computing in detail from the perspective of Oracle. Some of this material is covered in the earlier chapters, but the information is worth reiterating.
As you will have noticed if you worked through the previous chapter, a step-by-step approach is followed to give you a complete, end-to-end roadmap for setting up virtualized Oracle RAC database clouds. You are free to choose either of the two virtualization solutions: OVM for x86 or OVM VirtualBox.
Following is a summary of topics presented in this chapter:
• OVM VirtualBox: A Brief Introduction
• What Is Cloud Computing? Synopsis and Overview
• Oracle’s Strategy for Cloud Computing
• EM12c and OVM—The Management and Virtualization Components for Oracle Database Clouds
• RAC Private Cloud on OVM VirtualBox—Software and Hardware Infrastructure Requirements
• Setting Up Virtualized Oracle RAC Clusters on OVM VirtualBox—Alternative Approaches
• Setting Up, Installing, and Configuring 12c Virtualized RAC Clusters on OVM VirtualBox—Step-by-Step Setup and Configuration
• OEM 12c—Implementing Database as a Service (DBaaS)
It is advisable to follow the steps outlined in this chapter on your own laptop, particularly if you are interested in setting up a brand-new 12c cluster on your own machine.
OVM VirtualBox: A Brief Introduction
OVM VirtualBox is a free, open-source virtualization product from Oracle that enables guest virtual machine operating system (OS) virtualization on your own laptop or desktop machine. It can be used to install, configure, test, and learn Oracle RAC, alleviating the need for dedicated physical hardware and expensive physical shared storage. OVM VirtualBox can also be used to install, configure, and test various other Oracle products, and it supports many other virtualization use cases. The latest version available at the time of writing is OVM VirtualBox 4.x.
OVM VirtualBox is a type 2 hypervisor—it installs on an existing preinstalled OS. It can be installed on the Linux, Macintosh, Solaris, and Windows OS families.
OVM VirtualBox can be downloaded from the Oracle Technology Network (OTN) website and can easily be installed by following the intuitive Installation Setup Wizard (see Figure 10.1). The entire process of downloading and installing OVM VirtualBox takes about 5 to 10 minutes.
Figure 10.1. A new installation of Oracle VM VirtualBox in a Windows 8 environment
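Once the wizard finishes, a quick check from a command prompt confirms that the installation succeeded; a sketch, assuming the default installation path on Windows:
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" --version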

What Is Cloud Computing? Synopsis and Overview
Cloud computing means a lot of different things to a lot of different people. So what exactly is cloud computing?
In its most generally accepted form, cloud computing is web- or network-based computing wherein abstracted resources are located and shared on the network, whether on an intranet (private cloud) or the Internet (public cloud), presented in a service-based model. Cloud is sometimes used as an alternative term or metaphor for the Internet. By its generally agreed-upon definition, cloud computing is on-demand, metered, and self-serviceable.
Cloud computing is an evolution of existing IT paradigms, strategies, and models: in many respects, it is a rebranding, reorganization, and re-presentation of various components in the overall IT ecosystem. Cloud computing is in flux, not completely mature, and still evolving.
In other words, most cloud computing models do not introduce newer technologies but rather improve on the existing technologies by making them more efficient. Focusing on the subject at hand, RAC plays an integral role in setting up and configuring Oracle database clouds.
Is cloud computing a paradigm shift? The answer is yes and no, depending on your perception, understanding, and implementation of your flavor of cloud computing.
Is cloud computing seeing massive adoption? It is catching on, and the prospects are very promising. Cloud computing is also commonly understood as elastic computing, which is fundamentally attained by merit of virtualization. Elastic computing is the capability to provide increased computing resources when needed.
Here are a few salient characteristics of an IT cloud:
• Dynamic, elastic, agile, and scalable
• Multitenant, secure, and reliable
• Metered, service based
Four deployment models are currently in widespread use:
• Private cloud (or enterprise cloud): Characterized by clouds on private networks (may someday replace the traditional data-center term)
• Public cloud: Shared (typically virtualized) resources over the Internet
• Hybrid cloud: A combination of private and public cloud models
• Community cloud: Organizations forming a shared cloud for common needs, goals, and purposes
Cloud Computing: ______ as a Service
Cloud computing can be summarized with the phrase (and is widely understood as) “fill-in-the-blank as a service.” For example:
• Database as a service (DBaaS)
• Storage as a service
• Software as a service (SaaS)
• Middleware as a service (MWaaS)
• Platform as a service (PaaS)
• Infrastructure as a service (IaaS)
• IT as a service (the holy grail of cloud computing)
Oracle’s Strategy for Cloud Computing
Oracle’s cloud computing strategy is comprehensive yet simple: Oracle provides infrastructure, products, and support for both public and private clouds.
Public Clouds
Services are based primarily on subscription-based application as a service, IaaS, and PaaS paradigms. Some of Oracle’s current cloud offerings include products such as Fusion CRM/HCM, RAC/Database Cloud Service, Oracle Social Network, and Oracle Java Cloud Service.
Private Clouds
To briefly summarize the components involved, with OVM for x86 and Enterprise Manager Cloud Control 12c (EM12c), you can comprehensively formulate, implement, administer, maintain, meter, and support private clouds behind your corporate firewalls.
EM12c and OVM: Management and Virtualization Components for Oracle Database Clouds
EM12c delivers cloud functionality through its cloud management packs, notably the Cloud Management Pack for Oracle Database. We talk more about this component, along with an overview of DBaaS, at the end of the chapter.
The virtualization component OVM for x86 3.x has been integrated into the framework of EM12c and works hand-in-hand with EM12c to implement cloud IaaS. Because the OVM 3 agent acts as a proxy agent, you do not need to deploy an additional EM12c agent on your OVM for x86 machine.
With cloud application programming interfaces (APIs) and command-line interfaces (CLIs), self-service operations, out-of-the-box scaling capabilities, policy-based resource management, governance and chargeback/metering, cloud zones, and more, EM12c provides a wide variety of feature-rich functionality for setting up, managing, supporting, and administering Oracle database cloud infrastructures.
After the preceding overview of the various technologies, paradigms, and terms involved, it is now time to start setting up your own virtualized RAC 12c database cloud on OVM VirtualBox. The next section begins this journey by outlining the software and hardware requirements.
RAC Private Cloud on OVM VirtualBox—Software and Hardware Infrastructure Requirements
The software requirements are straightforward. The hardware requirements are a little bit more complicated but not overly so.
Software Requirements
As outlined in the following steps, setting up a virtualized RAC 12c cluster requires the following prerequisite software components:
• OVM VirtualBox 4.x: Download and install from the OTN website.
• Oracle Enterprise Linux x86_64 Release 6.x: Oracle Enterprise Linux (OEL) is Oracle’s version of the popular Red Hat Enterprise Linux platform.
Hardware Requirements
As outlined in the following steps, setting up a virtualized RAC 12c cluster requires a modern desktop or laptop, preinstalled with a Windows, Linux, Macintosh, or Solaris OS. This machine will serve as the host OVM VirtualBox machine. Table 10.1 presents the minimum requirements and the specs of the laptop that was used to follow and implement the steps detailed in this chapter.
Table 10.1. Hardware Requirements

Setting Up, Installing, and Configuring an Oracle RAC 12c Private Cloud on OVM VirtualBox
In this section we run through two alternative approaches to setting up virtualized Oracle RAC on OVM VirtualBox. Then we walk through setting up, installing, and configuring 12c virtualized RAC on OVM VirtualBox.
Note
The following sections, “Step 10.1, Approach 1” and “Step 10.1, Approach 2,” are alternatives to each other: you can choose either one.
Step 10.1, Approach 1—Download and Import an OEL 6.x VM VirtualBox Appliance to Create RAC-Node-01
Approach 1 is simple. It involves one main step composed of a couple of substeps. An OVM VirtualBox appliance is a golden image of software, ready to go. The concept is similar to OVM templates discussed in Chapter 9. Download the prebuilt, preconfigured appliance for OEL 6.x from the OTN website and import it.
1. Choose Oracle VM VirtualBox → File → Import Appliance.
2. Select the .ova file for OEL 6.x and press the Import button (see Figure 10.2).
Figure 10.2. A new virtual machine for RAC-Node-01 created in Oracle VM VirtualBox

As you can see, this option is simple and easy.
Note
If you’ve already used Step 10.1, Approach 1, you can skip Step 10.1, Approach 2.
Step 10.1, Approach 2—Create an OEL 6.x Virtual Machine for Node 01 from a .ISO Image
The second option is a bit more involved. You begin by creating an OEL 6.x virtual machine for node 01 from an .ISO image. The first step of Approach 2 is to enter information about your new VirtualBox VM:
1. Choose Oracle VM VirtualBox → New, and then enter the following information:
• Name and OS
• Name: bsfracvx1 (substitute the name of your RAC-Node-01)
• Type: Linux
• Version: Oracle (64 bit)
• Memory size:
• 4 GB is the ideal size. If your host machine has only 8 GB of RAM, 2.5 GB per virtual machine will suffice.
• Hard drive:
• Create a virtual hard drive now
• Hard drive file type:
• VirtualBox Disk Image (VDI)
• Storage on physical hard drive:
• Dynamically allocated
• File location and size:
• Specify the folder that will house the virtual hard drive file
• Virtual hard drive size: 35 GB
The finished product will look something like what is shown in Figure 10.3; a command-line alternative using VBoxManage is sketched after the figure.
Figure 10.3. A new virtual machine for RAC-Node-01 created in Oracle VM VirtualBox
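If you prefer the command line, approximately the same virtual machine can be created with VBoxManage. The following is a sketch only: the Oracle_64 OS type and the folder under C:\Users\tfm1\VirtualBox VMs are illustrative and should be adjusted to your environment.
REM Create and register the VM, then set memory and CPU count
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createvm --name bsfracvx1 --ostype Oracle_64 --register
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm bsfracvx1 --memory 4096 --cpus 2
REM Create a 35-GB dynamically allocated local virtual hard drive and attach it to a SATA controller
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --filename "C:\Users\tfm1\VirtualBox VMs\bsfracvx1\bsfracvx1_localvhd.vdi" --size 35840 --format VDI
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storagectl bsfracvx1 --name "SATA" --add sata
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach bsfracvx1 --storagectl "SATA" --port 0 --device 0 --type hdd --medium "C:\Users\tfm1\VirtualBox VMs\bsfracvx1\bsfracvx1_localvhd.vdi"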

2. Download and install OEL 6.x, as shown in Figure 10.4.
Figure 10.4. Oracle Enterprise Linux 6 installation bootup menu

3. As mentioned in Chapter 9, at this point, download the OEL 6.x ISO from the Oracle eDelivery website, attach it to the RAC-Node-01 virtual machine as a virtual CD/DVD drive, boot from it, and then set up and install OEL 6.x.
Step 10.2—OVM VirtualBox: Customize the New Virtual Machine for RAC 12c
Now that you have the new virtual machine created, it is time to configure it the way you want it:
1. Select VM for RAC-Node-01 → Settings.
2. Enter the following (see Figure 10.5):
• System:
• Uncheck the Floppy checkbox in the Boot Order list.
• Processor:
• Change the number of processors to 2.
• Acceleration:
• Check the Enable VT-x/AMD-V and Enable Nested Paging checkboxes.
• Choose Network → Adapter 1 (see Figure 10.5):
• Check the Enable Network Adapter checkbox.
• For Attached to, select Bridged Adapter, and then select the host's network interface card (NIC). In this case, it is the WiFi card on the Windows 8 laptop.
• Select Intel PRO/1000 MT Desktop... as the adapter type.
Note
This virtual network interface card (VNIC) will serve as the public network interface for the RAC 12c cluster.
Figure 10.5. Configure network settings for the RAC-Node-01 virtual machine, Adapter 1 (public network interface for RAC)

3. Choose Network → Adapter 2 (see Figure 10.6). Make the same selections as shown in the previous step with one exception: this VNIC will be Attached to the Internal Network.
Note
This VNIC will serve as the first NIC for the HAIP-enabled private cluster interconnect.
Figure 10.6. Configure network settings for the RAC-Node-01 virtual machine, Adapter 2: Attached to Internal Network—Interface 1 for RAC HAIP-enabled private cluster interconnect

4. Choose Network → Adapter 3. Make the same selections as shown in the previous step.
Note
This VNIC will serve as the second NIC for the HAIP-enabled private cluster interconnect.
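The adapter settings above can also be applied with VBoxManage instead of the GUI. A rough sketch follows; the bridged adapter name and the internal network names (racpriv1, racpriv2) are placeholders to replace with your own, and 82540EM corresponds to the Intel PRO/1000 MT Desktop adapter type:
REM Adapter 1: bridged to the host NIC (public network interface for RAC)
VBoxManage modifyvm bsfracvx1 --nic1 bridged --bridgeadapter1 "Name of your host WiFi/NIC" --nictype1 82540EM
REM Adapters 2 and 3: internal networks (HAIP-enabled private cluster interconnect)
VBoxManage modifyvm bsfracvx1 --nic2 intnet --intnet2 "racpriv1" --nictype2 82540EM
VBoxManage modifyvm bsfracvx1 --nic3 intnet --intnet3 "racpriv2" --nictype3 82540EM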
Step 10.3—OVM VirtualBox: Create, Configure, and Attach the Shared Virtual Disks for the RAC 12c Cluster
In this step, you create, configure, and attach the shared virtual disks for the cluster.
1. Create the shared disks for the RAC 12c cluster:
• GRID1 ASM disk group:
Qty: 5
ASM disk size: 15 GB
• DATA1 ASM disk group:
Qty: 3
ASM disk size: 20 GB
• RECO1 ASM disk group:
Qty: 1
ASM disk size: 20 GB
The commands to create the shared disks are shown here:
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 15360
--variant Fixed --format VDI --filename C:\TFMCloud\12c\SharedStorage\asm_grid01.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 15360
--variant Fixed --format VDI --filename C:\TFMCloud\12c\SharedStorage\asm_grid02.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 15360
--variant Fixed --format VDI --filename C:\TFMCloud\12c\SharedStorage\asm_grid03.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 15360
--variant Fixed --format VDI --filename C:\TFMCloud\12c\SharedStorage\asm_grid04.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 15360
--variant Fixed --format VDI --filename C:\TFMCloud\12c\SharedStorage\asm_grid05.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 20480
--variant Fixed --format VDI --filename C:\TFMCloud\12c\SharedStorage\asm_data01.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 20480
--variant Fixed --format VDI --filename C:\TFMCloud\12c\SharedStorage\asm_data02.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 20480
--variant Fixed --format VDI --filename C:\TFMCloud\12c\SharedStorage\asm_data03.vdi
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" createhd --size 20480
--variant Fixed --format VDI --filename C:\TFMCloud\12c\SharedStorage\asm_reco01.vdi
2. Make the file-based virtual hard disks shareable for the ASM disk groups. Substitute the shared virtual disk filenames created by the VBoxManage.exe createhd commands in the previous section into the following commands.
VBoxManage modifyhd c:\TFMCloud\12c\SharedStorage\asm_grid01.vdi --type shareable
VBoxManage modifyhd c:\TFMCloud\12c\SharedStorage\asm_grid02.vdi --type shareable
VBoxManage modifyhd c:\TFMCloud\12c\SharedStorage\asm_grid03.vdi --type shareable
VBoxManage modifyhd c:\TFMCloud\12c\SharedStorage\asm_grid04.vdi --type shareable
VBoxManage modifyhd c:\TFMCloud\12c\SharedStorage\asm_grid05.vdi --type shareable
VBoxManage modifyhd c:\TFMCloud\12c\SharedStorage\asm_data01.vdi --type shareable
VBoxManage modifyhd c:\TFMCloud\12c\SharedStorage\asm_data02.vdi --type shareable
VBoxManage modifyhd c:\TFMCloud\12c\SharedStorage\asm_data03.vdi --type shareable
VBoxManage modifyhd c:\TFMCloud\12c\SharedStorage\asm_reco01.vdi --type shareable
3. Attach the file-based virtual hard disks for the ASM disk groups to Node 1:
Note
Substitute the shared virtual disk filenames created by the VBoxManage createhd commands in the previous sections into the following commands.
VBoxManage storageattach bsfracvx1 --medium c:\TFMCloud\12c\SharedStorage\asm_grid01.vdi --type hdd --port 1 --device 0
--mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFMCloud\12c\SharedStorage\asm_grid02.vdi --type hdd --port 2 --device 0
--mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFMCloud\12c\SharedStorage\asm_grid03.vdi --type hdd --port 3 --device 0
--mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFMCloud\12c\SharedStorage\asm_grid04.vdi --type hdd --port 4 --device 0
--mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFMCloud\12c\SharedStorage\asm_grid05.vdi --type hdd --port 5 --device 0
--mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFMCloud\12c\SharedStorage\asm_data01.vdi --type hdd --port 6 --device 0
--mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFMCloud\12c\SharedStorage\asm_data02.vdi --type hdd --port 7 --device 0
--mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFMCloud\12c\SharedStorage\asm_data03.vdi --type hdd --port 8 --device 0
--mtype shareable --storagectl "SATA"
VBoxManage storageattach bsfracvx1 --medium c:\TFMCloud\12c\SharedStorage\asm_reco01.vdi --type hdd --port 9 --device 0
--mtype shareable --storagectl "SATA"
4. Verify the attachment and shareable status of the virtual shared ASM disks in OVM VirtualBox by choosing OVM VirtualBox → bsfracvx1 (Node 01) → Settings → Storage (see Figure 10.7).
Figure 10.7. Oracle VM VirtualBox: Shared virtual disk configuration for RAC 12c
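The attachments can also be verified from the command line; a minimal sketch using showvminfo on a Windows host:
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" showvminfo bsfracvx1 | findstr /i "vdi"
Each attached asm_*.vdi file should be listed against the expected SATA port.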

Step 10.4—Configure the New Virtual Machine for RAC 12c
Power up the guest VM and configure it by implementing the following steps, which are outlined in Chapter 9:
1. Set the network configuration of the private cluster interconnect VNIC.
2. Modify the /etc/hosts file to include the relevant entries for RAC 12c.
3. Check for space requirements.
4. Disable the Linux software firewall.
5. Configure and restart the Network Time Protocol Daemon (NTPD) client.
6. Partition, format, and mount /u01 on the 25-GB local virtual hard disk.
7. Disable the SELINUX option.
8. Install Very Secure File Transfer Protocol Daemon (VSFTPD) server (FTP server).
9. Install X Window System desktop.
10. Reboot RAC-Node-01 for all of the preceding setups and configurations to take effect.
11. Perform Oracle software preinstallation steps on the RAC-Node-01 VM.
12. Create the required and relevant OS groups.
13. Create the oracle and grid OS users as the Oracle DB HOME software owners and grid infrastructure HOME software owners, respectively, and set their initial passwords.
14. Create the Optimal Flexible Architecture (OFA) directory structure for RAC 12c.
15. Observe and verify the required and relevant permissions of the created OFA directory structure.
16. Set up and configure the NTPD daemon.
17. Turn off and unconfigure the Avahi daemon.
18. Install packages and options for Linux kernel.
19. Create primary partitions for all the GRID1, DATA1, and RECO1 ASM disk groups.
20. Verify the partition structures for the underlying disks within the GRID1, DATA1, and RECO1 ASM disk groups.
21. Configure ASMLIB on RAC-Node-01.
22. Download and stage the Oracle software binaries.
At this point, the VM for RAC-Node-01 is ready to be cloned (illustrated in the next section).
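Several of the OS-level items in the preceding list (the firewall, SELinux, and NTP slewing steps) can be scripted as the root user; the following is a minimal sketch for OEL 6.x, shown only as an illustration of those steps:
# Disable the Linux software firewall (step 4)
service iptables stop
chkconfig iptables off
# Disable SELinux (step 7); takes full effect after the reboot in step 10
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Configure ntpd with the -x (slewing) option recommended for RAC, then restart it (steps 5 and 16)
sed -i 's|^OPTIONS=.*|OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"|' /etc/sysconfig/ntpd
service ntpd restart
chkconfig ntpd on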
Step 10.5—Clone Virtual Hard Drive for RAC-Node-02 from Node 01
Before cloning the virtual hard drive, make a backup copy of the source virtual hard drive (for example, bsfracvx1_localvhd.vmdk on bsfracvx1) to establish a save point. This approach enables you to revert to a point-in-time saved copy of Node 01 so that if there are issues further down the road, you will not have to start over from scratch. However, this approach does translate into a larger space requirement on your desktop or laptop host machine.
Then follow these steps:
1. Shut down the VM for RAC-Node-01 and create another directory for RAC-Node-02, such as C:\Users\tfm1\VirtualBox VMs\bsfracvx2.
2. Run the VBoxManage.exe clonehd command to clone the local virtual hard drive of Node 01:
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" clonehd "C:\Users\tfm1\VirtualBox VMs\bsfracvx1\bsfracvx1_localvhd.vdi" "C:\Users\tfm1\VirtualBox VMs\bsfracvx2\bsfracvx2_localvhd.vdi"
Temporarily move the cloned virtual hard drive for Node 02 to another folder to avoid any errors during creation of the VM for RAC-Node-02, as outlined in the following section.
Step 10.6—Create and Configure the VM for RAC-Node-02 Using the Cloned Local Virtual Hard Drive from Node 01
Follow these steps to create and configure the VM for RAC-Node-02.
1. Go to Oracle VM VirtualBox → Machine → New.
2. Follow the same instructions as outlined earlier (Step 10.1, Approach 2) to create and configure the VM for RAC-Node-02, with one exception: instead of creating a new HD, select and attach an existing virtual HD and specify the name of the cloned virtual HD file created in the previous section (see Figure 10.8).
Figure 10.8. Oracle VM VirtualBox: Select the existing cloned virtual hard drive option for RAC-Node-02.

3. Configure and customize RAC-Node-02 by following the same steps as outlined in Steps 10.2 and 10.3:
• System
• Processor
• Acceleration
• Network settings (adapters 1, 2, and 3)
• Attach the shareable ASM virtual disks to RAC-Node-02 using the VBoxManage storageattach command.
• Verify the attachment and shareable status of the virtual shared ASM disks within OVM VirtualBox.
4. Configure the network settings within the OS for RAC-Node-02.
5. Power up the VM for RAC-Node-02, and in the OS, edit the network settings as shown in the following:
• Change the hostname in /etc/sysconfig/network.
• Linux → Top Menu → System → Administration → Network:
• Remove the System eth* VNICs: Select Device → Deactivate → Delete
• Change the connection names to match Node 01, from Auto eth* to System eth*.
• Modify the IP address information for the public and private network interfaces for RAC-Node-02.
• Modify the /etc/udev/rules.d/70-persistent-net.rules file to reflect the correct eth* entries:
$ vi /etc/udev/rules.d/70-persistent-net.rules
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.

# PCI device 0x8086:0x100e (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}==
"08:00:27:c3:58:84", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# PCI device 0x8086:0x100e (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}==
"08:00:27:06:ad:43", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

# PCI device 0x8086:0x100e (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}==
"08:00:27:86:01:91", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"
• Modify the .bash_profile file to reflect the hostname and second instance on Node 02 (the following also serves as an example of a .bash_profile file for a RAC 12c cluster):
[oracle@bsfracvx2 ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin:/usr/kerberos/sbin:/usr/local/sbin:/sbin:/root/bin
export PATH

ORACLE_TERM=xterm
export ORACLE_TERM

ORACLE_SID=racvxdb2
export ORACLE_SID

ORACLE_HOSTNAME=bsfracvx2.bsflocal.com
export ORACLE_HOSTNAME

ORACLE_UNQNAME=RACVXDB
export ORACLE_UNQNAME

ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE

ORACLE_HOME=$ORACLE_BASE/product/12.1.0/dbhome_1
export ORACLE_HOME

TMP=/tmp
export TMP
TMPDIR=$TMP
export TMPDIR

PATH=/usr/sbin:$PATH
export PATH
PATH=$ORACLE_HOME/bin:$PATH
export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "grid" -o  $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
Reboot Node 02. Power up Node 01 and verify network connectivity between the two nodes by pinging the public and private network interfaces in both directions.
Step 10.7—Enable X11 Forwarding on the Host Machine
Next, you need to enable X11 forwarding.
1. Install an X Window System display server, such as Xming, on the OS (in this case, Windows 8).
2. Enable X11 forwarding in your terminal emulator program, such as PuTTY (see Figure 10.9).
Figure 10.9. Terminal emulator (PuTTY): Enable X11 forwarding for redirection of graphical output from RAC servers to the host machine.
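After connecting to a RAC node through PuTTY with X11 forwarding enabled (and the X display server running on the host), a quick test confirms that graphical output reaches the host desktop; xclock is used here purely as an example and may need to be installed first:
[oracle@bsfracvx1 ~]$ echo $DISPLAY      # should show a forwarded display, for example localhost:10.0
[oracle@bsfracvx1 ~]$ xclock &           # a small clock window should appear on the host desktop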

Step 10.8—Install and Set Up 12c Grid Infrastructure
Start the guest VMs constituting the RAC and install and set up Oracle 12c grid infrastructure by following the steps outlined in Chapter 9:
1. Enter the My Oracle Support (MOS) credentials for support on software updates and patches, or choose to skip them.
2. Select the Install and Configure Oracle Grid Infrastructure for a Cluster option.
3. Select the Configure a Flex Cluster option.
4. Select the appropriate product language(s).
5. Enter the required information for Single Client Access Name (SCAN) and Grid Naming Service (GNS).
6. Enter the relevant information for the RAC 12c nodes including HUB and LEAF nodes.
7. Enter the required information for establishing and testing SSH connectivity and user equivalence between all the RAC nodes.
8. Specify the network interfaces for the public network, the private cluster interconnect, and ASM.
9. Select the Configure Grid Infrastructure Management Repository option.
10. Specify the ASM disks for the GRID1 ASM disk group with a HIGH redundancy level.
11. Enter the passwords for the Oracle SYS and ASMSNMP DB users.
12. Select the Do not use Intelligent Platform Management Interface (IPMI) option.
13. Specify the OS groups for ASM.
14. Enter the Oracle BASE and HOME locations.
15. Enter the Oracle inventory location.
16. Enter the root OS password or sudo access credentials to automatically run the root.sh configuration scripts.
17. Generate and run any runfixup.sh scripts to fix any prerequisite issues.
18. Press Install to initiate the installation process for grid infrastructure.
At this point, Oracle 12c grid infrastructure has been set up: we are at the same stage of the RAC 12c installation process as we were at the end of Chapter 9. From this point onward, the steps outlined in the following sections apply equally to both virtualization (or physical hardware) approaches: OVM VirtualBox and OVM for x86.
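Before moving on, it is worth running a quick check of the new cluster stack from the grid home used earlier in this chapter; a sketch (output varies by environment):
[grid@bsfracvx1 ~]$ /u01/app/12.1.0/grid_1/bin/crsctl check cluster -all    # CRS, CSS, and EVM should be online on all nodes
[grid@bsfracvx1 ~]$ /u01/app/12.1.0/grid_1/bin/olsnodes -n -s -t            # lists the cluster nodes with their number and status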
Step 10.9—Install the OEM 12c Agents on the RAC Nodes
The next two steps outline what is involved in installing and setting up the EM12c agent software on the RAC 12c node virtual machines; EM12c uses these agents to monitor the host machines and all the targets within them. This section assumes that you already have an EM12c environment in place. If you are doing it all on your home laptop, it is advisable to install EM12c on the host laptop itself. Step 10.10 details how to configure the Windows 8 firewall (if that is the underlying OS) so that EM12c can communicate with the virtual machines for the RAC nodes.
Follow these steps to deploy/install the EM12c agents.
1. Choose OEM 12c → Top-Right Menu → Setup → Add Target → Add Targets Manually.
2. Add targets manually by selecting the Add Host Targets option. Press the Add Hosts button.
3. Set the host and platform by entering the hostnames and platform (Linux x86_x64) for the RAC nodes. Enter a session name to identify the job associated with the addition of the targets in EM12c.
4. Provide the additional installation details by entering the following information:
• The installation base directory, for example, /u01/app/oracle/agent12c
• The instance directory, for example, /u01/app/oracle/agent12c/agent_inst
• Named credentials for the oracle OS user
• The privileged delegation setting
• The port number
5. Now press the Deploy Agent button. Monitor the progress, as shown in Figure 10.10.
Figure 10.10. Oracle Enterprise Manager 12c agent deployment in progress on RAC 12c nodes

6. After the deployment process completes, run the root.sh script as the privileged root OS user.
7. Follow the process to promote all the non-host targets.
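Once the agents are deployed, each agent can be verified directly on the RAC node; a sketch, assuming the instance directory specified above:
[oracle@bsfracvx1 ~]$ /u01/app/oracle/agent12c/agent_inst/bin/emctl status agent    # the agent should report as up and uploading
[oracle@bsfracvx1 ~]$ /u01/app/oracle/agent12c/agent_inst/bin/emctl upload agent    # force an immediate upload to the OMS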
Step 10.10—Configure the Firewall on the Host Machine (Windows Only)
By default, Windows 8 does not respond to ping (ICMP echo) requests; therefore, an inbound rule has to be enabled so that the Windows host machine and the virtual machines for the RAC nodes can communicate. The following steps detail this process. For a non-Windows OS on the host machine, implement a similar process (if applicable) to enable communication between the EM12c host and the RAC node VMs.
1. Choose Start → Control Panel → Windows Firewall → Advanced Settings → Inbound Rules.
2. Enable the File and Printer Sharing (Echo Request—ICMPv4-In) rule (see Figure 10.11).
Figure 10.11. Configuring Windows firewall on the host machine to enable communication with the RAC node VMs
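The same rule can also be enabled from an elevated command prompt using netsh; a sketch, where the rule name must match exactly what appears in your Inbound Rules list:
netsh advfirewall firewall set rule name="File and Printer Sharing (Echo Request - ICMPv4-In)" new enable=yes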

Step 10.11—Create and Set Up the Required ASM Disk Groups for RAC Databases
This section details the steps for creating the ASM disk groups needed for the RAC 12c database(s).
1. Create the DATA1 ASM disk group using the ASM Configuration Assistant (ASMCA):
[grid@bsfracvx1 bin]$ pwd
/u01/app/12.1.0/grid_1/bin
[grid@bsfracvx1 bin]$ export ORACLE_HOME=/u01/app/12.1.0/grid_1
[grid@bsfracvx1 bin]$ ./asmca
2. Press the Create button.
3. Choose the appropriate level of ASM disk group redundancy—External in this case (see Figure 10.12).
Figure 10.12. Settings for the DATA1 ASM disk group

4. Select the ASM disk group member disks.
5. Enter the values for the ASM disk group compatibility parameters.
6. Set the allocation unit size to 4 MB.
7. Repeat the preceding steps for the RECO1 ASM disk group.
The finished product is shown in Figure 10.13.
Figure 10.13. The required ASM disk groups in ASMCA for RAC 12c
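Alternatively, the disk groups can be created in SQL*Plus connected to the ASM instance as SYSASM (with the grid environment set); a sketch, where the ORCL:DATA* names are placeholders for the ASMLIB disk labels created in Step 10.4:
[grid@bsfracvx1 ~]$ sqlplus / as sysasm
SQL> create diskgroup DATA1 external redundancy
     disk 'ORCL:DATA01', 'ORCL:DATA02', 'ORCL:DATA03'
     attribute 'au_size'='4M', 'compatible.asm'='12.1', 'compatible.rdbms'='12.1';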

Step 10.12—Install the RAC 12c Database Software Using the OUI
This section contains the steps to install the RAC database software into non-shared database homes. Implementing non-shared Oracle Homes is best practice, as it allows rolling patches to be applied to the RAC without the need to bring the entire cluster down for maintenance purposes.
1. Run the Oracle Universal Installer (OUI) from the RAC 12c Database staging area directory:
[oracle@bsfracvx1 database]$ pwd
/home/oracle/software/Ora12c/Database/database
[oracle@bsfracvx1 database]$ ./runInstaller
2. Enter the following information and make the following selections in the Wizard Entry screens of the OUI, as shown in the following screenshots. In certain cases, you’ll need to modify the entries according to the specific needs of your organization:
a. Enter the MOS credentials for support on software updates and patches, or choose to skip them (see Figure 10.14).
Figure 10.14. Choose to receive or not receive security updates.

b. Select the Install database software only option, as shown in Figure 10.15.
Figure 10.15. Choose Install database software only

c. Select Oracle Real Application Clusters database installation (see Figure 10.16).
Figure 10.16. Choose Oracle Real Application Clusters database installation

d. Select the RAC nodes on which the installation is to be performed (see Figure 10.17).
Figure 10.17. The RAC nodes

e. Click SSH Connectivity to ensure that it is established.
f. Select the appropriate product language(s).
g. Select the Database edition, in this case, Enterprise Edition (see Figure 10.18).
Figure 10.18. Choose Enterprise Edition.

h. Specify the Oracle base and home locations (see Figure 10.19).
Figure 10.19. Set Oracle base and home locations.

i. Specify the Oracle OS groups for the various job roles: OSDBA, OSOPER, OSBACKUPDBA, OSDGDBA, OSKMDBA (see Figure 10.20).
Figure 10.20. Set Oracle OS groups for the various job roles.

j. The warnings and errors are shown as part of the Verification Results in the next screen (see Figure 10.21).
Figure 10.21. Warnings observed as part of the Verification Results

k. Click Install to initiate the installation process for DB HOME software on the RAC nodes, as shown in Figure 10.22.
Figure 10.22. Click Install to start the installation.

3. As shown in Figure 10.23, after the installation completes, run the root.sh script on all the RAC nodes, as the privileged OS root user.
[root@bsfracvx1 dbhome_1]# ./root.sh
Performing root user operation for Oracle 12c
Figure 10.23. Postinstall root scripts

The following environment variables are set as
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.1.0/dbhome_1
4. Enter the full pathname of the local bin directory:
[/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as the generic part of the root script runs.
Next, product-specific root actions will be performed.
Step 10.13—Create/Configure the RAC 12c Cluster Database Employing DBCA
At this point, we are ready to create and install a clustered RAC 12c database using the DBCA utility. DBCA is considered the best-practice database creation and setup tool because it creates a RAC (or non-RAC) database with a whole host of industry-standard best practices built in, along with other database management options. This section contains the steps to do so.
1. As the oracle OS user, run the DBCA utility from the RAC 12c DB HOME:
[oracle@bsfrac01 bin]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1/bin
[oracle@bsfrac01 bin]$ ./dbca
2. Enter the information in the following steps, and make the selections in the Wizard Entry screens of DBCA, as shown in the figures. In certain cases, you’ll need to modify the entries according to the specific needs of your organization.
3. Select the Create Database option (see Figure 10.24).
Figure 10.24. Choose Create Database.

4. Select the Advanced Mode option (see Figure 10.25).
Figure 10.25. Choose Advanced Mode.

5. Select the Oracle Real Application Clusters (RAC) database and Policy-Managed options (see Figure 10.26).
Figure 10.26. Choose the Oracle Real Application Clusters (RAC) database and Policy-Managed settings.

6. Select the General Purpose or Transaction Processing option (see Figure 10.26). Press the Show Details button on this screen.
7. Enter the global database name (see Figure 10.27).
Figure 10.27. Enter the database settings

8. Enable the Create As Container Database checkbox (see Figure 10.27).
9. Specify the appropriate options for the container database(s): number of pluggable databases, PDB name prefix (see Figure 10.27).
10. Enter the server pool information for the policy-managed RAC DB: server pool name, cardinality, and existing or new server pool (see Figure 10.28).
Figure 10.28. Server pool information

11. Specify the management options by registering the database with EM12c.
12. Enter and verify the passwords for the database users: SYS, SYSTEM, PDBADMIN, DBSNMP (see Figure 10.29).
Figure 10.29. Enter and confirm the passwords for the user accounts.

13. Enter the ASM disk groups for the data file locations (see Figure 10.30).
Figure 10.30. Set the ASM disk groups and enable archiving options.

14. Enable the Archiving option for the online Redo log files and enter the parameters for archiving (see Figure 10.30).
15. Press the Multiplex Redo Logs and Control Files button (see Figure 10.30), and enter the locations for the multiplexed files (see Figure 10.31).
Figure 10.31. Enter the locations for multiplexed files.

16. Enter the parameters for the Fast Recovery Area (FRA) (refer to Figure 10.30).
17. Enter the parameters for sample schemas, custom scripts, database vault, and label security.
18. Enter the parameters for database memory management (see Figure 10.32), sizing (see Figure 10.33), connection mode (see Figure 10.34), and character sets (see Figure 10.35).
Figure 10.32. Parameters for database memory management

Figure 10.33. Parameters for sizing

Figure 10.34. Parameters for the connection mode

Figure 10.35. Parameters for character sets

19. Press the All Initialization Parameters button, and then the Show Advanced Parameter button. Modify the RAC DB initialization parameters as needed (see Figure 10.36).
Figure 10.36. Initialization parameters

20. Press the Customize Storage Locations button. Modify the parameters for control files (see Figure 10.37), data files, and redo log groups and files as needed.
Figure 10.37. Edit the control files

21. Enable the Create Database and Generate Database Creation Scripts checkbox options, and enter the location of the generated scripts’ destination directory (see Figure 10.38).
Figure 10.38. Enter the database creation options.

22. You may see a warning message about memory/swap sizes: check the Ignore All checkbox, and then press the Next button (see Figure 10.39).
Figure 10.39. Choose Ignore All

23. Press the Finish button to initiate the installation process for the new RAC 12c database on the RAC nodes. When the process completes, you will see a dialog box like the one shown in Figure 10.40.
Figure 10.40. The database creation is complete.

Step 10.14—Perform Sanity Checks on the New RAC 12c Database
Run the following commands in SQL*Plus to perform basic sanity checks on the new RAC 12c database.
SQL> select instance_name,status from gv$instance;

INSTANCE_NAME    STATUS
---------------- ------------
bsfrvxdb_1       OPEN
bsfrvxdb_2       OPEN

[grid@bsfracvx1 bin]$ ./lsnrctl status

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 20-MAR-2013 03:55:50

Copyright © 1991, 2013, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start date                19-APR-2013 18:23:25
Uptime                    0 days 9 hr. 32 min. 27 sec
Trace level               OFF
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener parameter file   /u01/app/12.1.0/grid_1/network/admin/listener.ora
Listener log file         /u01/app/grid/diag/tnslsnr/bsfracvx1/listener/alert/log.xml

Listening endpoints summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.116)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.160)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=bsfracvx1)(PORT=5500))
    (Security=(my_wallet_directory=/u01/app/oracle/product/12.1.0/dbhome_1/admin/bsfrvxdb/xdb_wallet))
    (Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+APX" has 1 instance(s).
  Instance "+APX1", status READY, has 1 handler(s) for this service...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
Service "bsfrvxdb" has 1 instance(s).
  Instance "bsfrvxdb_1", status READY, has 1 handler(s) for this service...
Service "bsfrvxdbXDB" has 1 instance(s).
  Instance "bsfrvxdb_1", status READY, has 1 handler(s) for this service...
Service "bsfrvxpdb" has 1 instance(s).
  Instance "bsfrvxdb_1", status READY, has 1 handler(s) for this service...
The command completed successfully

[root@bsfracvx1 bin]# ./crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    bsfracvx1
ora....SM.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    bsfracvx1
ora.DATA1.dg   ora....up.type 0/5    0/     ONLINE    ONLINE    bsfracvx1
ora.GRID1.dg   ora....up.type 0/5    0/     ONLINE    ONLINE    bsfracvx1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    bsfracvx1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    bsfracvx2
ora....N2.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    bsfracvx1
ora....N3.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    bsfracvx1
ora.MGMTLSNR   ora....nr.type 0/0    0/0    ONLINE    ONLINE    bsfracvx1
ora.RECO1.dg   ora....up.type 0/5    0/     ONLINE    ONLINE    bsfracvx1
ora.asm        ora.asm.type   0/5    0/0    ONLINE    ONLINE    bsfracvx1
ora....X1.lsnr application    0/5    0/0    ONLINE    ONLINE    bsfracvx1
ora....vx1.ons application    0/3    0/0    ONLINE    ONLINE    bsfracvx1
ora....vx1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    bsfracvx1
ora....X2.lsnr application    0/5    0/0    ONLINE    ONLINE    bsfracvx2
ora....vx2.ons application    0/3    0/0    ONLINE    ONLINE    bsfracvx2
ora....vx2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    bsfracvx2
ora.bsfrvdb.db ora....se.type 0/2    0/1    ONLINE    ONLINE    bsfracvx2
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    bsfracvx1
ora.gns        ora.gns.type   0/5    0/0    ONLINE    ONLINE    bsfracvx1
ora.gns.vip    ora....ip.type 0/0    0/0    ONLINE    ONLINE    bsfracvx1
ora.mgmtdb     ora....db.type 0/2    0/1    ONLINE    ONLINE    bsfracvx1
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    bsfracvx1
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    bsfracvx1
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    bsfracvx1
ora.proxy_advm ora....vm.type 0/5    0/     ONLINE    ONLINE    bsfracvx1
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    bsfracvx2
ora.scan2.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    bsfracvx1
ora.scan3.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    bsfracvx1

[root@bsfracvx1 bin]# ps -ef | grep pmon
grid      4398     1  0 21:11 ?        00:00:02 asm_pmon_+ASM1
grid      4886     1  0 21:13 ?        00:00:01 mdb_pmon_-MGMTDB
oracle    5732     1  0 21:16 ?        00:00:02 ora_pmon_bsfrvdb_2
grid      6138     1  0 21:18 ?        00:00:01 apx_pmon_+APX1
root     11718 11035  0 21:39 pts/0    00:00:00 grep pmon
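A couple of additional checks with srvctl round out the sanity tests; a sketch, using the database name created through DBCA (substitute the name you chose):
[oracle@bsfracvx1 ~]$ srvctl status database -d bsfrvxdb    # shows the instances and the nodes on which they are running
[oracle@bsfracvx1 ~]$ srvctl config database -d bsfrvxdb    # shows the database configuration, including the server pool for a policy-managed database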
Congratulations! The setup of your own virtualized RAC 12c cluster is now complete, and it is fully functional and up and running. The steps for creating clusters on physical resources are very similar, if not identical, to the steps outlined in this and the previous chapter.
EM12c: Implementing DBaaS
Now that we have learned how to create RAC 12c clusters from scratch, let us focus on the cloud management piece of the picture.
As discussed in Chapter 9, the following components are needed for setting up virtualized Oracle database clouds:
• OVM for x86
• EM12c
• Oracle Cloud Management Pack for Oracle Database
EM12c is the nerve center of cloud computing for Oracle products in general, including the Oracle database server family. The Cloud Management Pack for Oracle Database provides the features, options, and framework to set up, configure, monitor, meter, account for, and charge back Oracle database clouds, including self-service capability for Oracle databases (see Figure 10.41).
Figure 10.41. Database cloud self-service portal

Following are some salient features and benefits of setting up a database cloud using EM12c:
• Elasticity on demand, enabled by rapid and agile provisioning of database resources
• End-to-end management of the database cloud life-cycle process
• Self-service access for cloud consumers
• Definition of the service catalog and publishing of templates
• Pooling of cloud resources
• Performance monitoring of cloud databases
• Easy power-up, power-down, and retirement of cloud databases
• Role-based security implementation
• Accounting through chargeback and metering of cloud resources
Note
The process of setting up and configuring DBaaS in EM12c is beyond the scope of this book: you are encouraged to read the Oracle documentation on how to set up DBaaS in EM12c in order to complete the Oracle database cloud computing picture.
Virtualization and Cloud Computing: From the Perspective of Oracle DBAs
Following are some thought-provoking points and questions (as well as some answers) that we (the DBAs) should ask ourselves:
• Who are the end consumers and owners of the IT hardware?
Yes, the business implicitly owns everything, but from the administration, maintenance, support, and ownership standpoints, the answer is that we, the Oracle DBAs, are the end consumers of the machines.
• What if we put OVM directly into the control of Oracle DBAs?
No more waiting for system administrators, but of course, that would mean adding to your skill set.
• What if we could have the power of agile elasticity to set up and remove machines in our own hands?
We could rapidly prototype new environments without having to wait for and depend on the OS and sysadmin folks.
• Is super-rapid provisioning of new infrastructures really possible with virtualization? That sounds like someone is blowing a lot of hot air—right?
WRONG.
• Virtualization in production? No way is that going to happen on my watch: it would be an overhead and a performance nightmare—right?
WRONG.
• Multi-tenant virtualization: That would present security risks; guest VMs would not be isolated and secure enough—right?
WRONG.
• In order to implement my own private cloud using OVM, I would have to learn so much—right?
WRONG. (OVM along with EM12c is ultra-easy to learn and implement; you can set up your entire virtualized infrastructure within a few hours, and the real fun and productivity start after that.)
• Carrying the burden of legacy infrastructures, my professional back hurts on a daily basis. When will I get my hands on new machines that I have been promised by the OS and sysadmin folks for a while now?
Virtualization and cloud computing together are the consolidated answer to all of the above—unprecedented productivity, throughput, and resource efficiency, which quite simply are just not possible in the physical, non-cloud universe. Oracle RAC 12c is the database cloud and enables you to complete the overall corporate cloud picture.
Summary
As emphasized in this chapter and in Chapter 9, the significance of cloud computing can no longer be overlooked or ignored in the modern-day IT workplace. Whether it is your own private cloud behind the corporate firewall, a subscription-based model in a public cloud, or a hybrid of both paradigms, cloud computing is an inevitability that is happening now in the IT universe. This and the previous chapter presented a detailed, step-by-step way to build your own virtualized RAC 12c clouds, managed and integrated under the umbrella of EM12c. An overview of the various paradigms, technologies, and products involved in setting up virtualized RAC database clouds was also presented. You can choose one of two virtualization options: OVM for x86 or OVM VirtualBox. The latter option allows you to set up, configure, and learn RAC 12c comfortably and conveniently on your own home laptop or desktop machine.

Oracle Cloud Part 7

Snap Clones
In simple terms, Snap Clone is a storage-agnostic self-service approach to rapidly creating space-efficient clones of large databases (and by and large, we’re talking terabytes or more). Now that’s probably more buzzwords in one sentence than anyone’s brain can deal with without exploding, so let’s look at some of those terms more closely:
• Storage agnostic: Snap Clone supports all storage vendors, both network attached storage (NAS) and storage area network (SAN).
• Self-service: In the XaaS world—where X can be any of infrastructure (I), middleware (MW), platform (P), or database (DB)—one of the key features is empowering the end user to do the work rather than waiting on some techie to find time in her or his otherwise busy schedule. So it’s the end user who makes the ad hoc clones here, not the storage admin.
• Rapid: People simply don’t have the time anymore to wait weeks for provisioning to happen (for that matter, they probably never did, but that’s another discussion), so you have to support the functionality to clone databases in minutes rather than the days or weeks it used to take.
• Space efficient: When you’re working with terabyte or larger databases, you may not have the storage to create full-sized clones, so you have to significantly reduce the storage footprint to start with.
The Challenges Snap Clone Addresses
Snap Clone can be used to address a number of major challenges:
• Lack of automation: Manual tasks such as provisioning and cloning new databases (e.g., for test or development systems) are one area that many DBAs complain is too time consuming. These tasks can take days to weeks, often because of the need to coordinate the involvement of different groups, as shown in Figure 8.27.
Figure 8.27. The standard cloning workflow

When an end user, whether a developer or a QA engineer, needs a database, he or she typically has to go through an approval process like that shown in the figure, which then translates into a series of tasks for the DBA, the sysadmin, and the storage admin. The sysadmin has to provide the compute capacity, and the storage admin has to provide the space on a filer. Finally, the DBA would install the bits, create the database (optionally on Real Application Clusters), and deliver the database to the user. Clearly, this is a cumbersome and time-consuming process that needs to be improved.
• Database unfriendly solutions: Obviously, when there is a need looking for a solution, different people take different approaches to resolving that need. There are a variety of point solutions and storage solutions out there, but the vast bulk of them are not database aware. They tend to clone storage volumes rather than databases and have no visibility into the database stack, which of course makes it hard to triage performance issues as a DBA. They also lack the ability to track configuration, compliance, and data security issues, and they have limited or no lifecycle capabilities.
• Storage issues and archaic processes: One of the main issues DBAs face is storage. Data volumes are ever increasing, particularly in these Big Data days, and the growth can often outpace your storage capacity. You can throw more disks at the problem, but it never seems to be enough, and you can end up with degraded performance if you take the route of sharing clones between users. There can also be different processes and different priorities between the storage team and the DBA team, and you may still have fixed refresh cycles, making it difficult to clone on an ad hoc basis.
In the latest release of Snap Clone, you can provide thin clones via either software solutions (Solaris ZFS and CloneDB) or hardware solutions (Sun ZFS Storage Appliance, NetApp, and EMC). Let’s touch on each of these in a bit more detail. The examples presented here use Enterprise Manager 12.1.0.4.3 with the latest plugins (specifically for Snap Clone, the important plugin is the Enterprise Manager for Storage Management plugin release 12.1.0.5.0). If you are on a slightly different version, you may see a slightly different look and feel to the user interface and available functionality.
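If you want to confirm which plugin versions are actually deployed on your Oracle Management Server before starting, the EM command-line interface can list them. The following is a small sketch rather than the only way to do this; it assumes emcli is installed and configured on the OMS host and that you log in as SYSMAN or another suitably privileged administrator.

# Log in to emcli and synchronize it with the OMS for this session
emcli login -username=sysman
emcli sync

# List the plugins deployed on the management server along with their versions;
# grep narrows the output to the plugins relevant to Snap Clone
emcli list_plugins_on_server | grep -Ei 'cloud|storage|database'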
Software Solutions
Let’s start by looking at the software-based solutions.
Snap Clone using Solaris ZFS
Snap Clone using ZFS uses a single stock Solaris 11.1+ image, which can be either physical or virtual (note that it doesn’t use the Sun ZS3 appliance). It supports both NAS and SAN. If you are using SAN, then mount the logical unit numbers (LUNs) as raw disk and format with the ZFS filesystem. It’s important to note here that this does not require any snapshot or cloning licenses from the storage vendor, as these features are available for free.
Additional features provided with this solution include compression, de-duplication, I/O caching, and so on. If you also need high availability in this configuration, that can be handled externally either via Solaris Clusters or by using the high-availability features of the underlying hypervisor.
Figure 8.28 shows what the configuration looks like diagrammatically.
Figure 8.28. Snap Clone using ZFS

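Under the covers, the space efficiency in this configuration comes from ZFS snapshots and clones. The following is a minimal, hand-driven illustration of the mechanism that Snap Clone automates for you; the pool and filesystem names (dbpool/testmaster, dbpool/dev01) are invented for the sketch.

# Take a point-in-time, space-efficient snapshot of the test master filesystem
zfs snapshot dbpool/testmaster@refresh1

# Create a writable clone from the snapshot; initially it consumes almost no space
zfs clone dbpool/testmaster@refresh1 dbpool/dev01

# Only changed blocks are charged to the clone, as the USED column shows
zfs list -o name,used,refer dbpool/testmaster dbpool/dev01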
CloneDB Using dNFS
With CloneDB using Direct NFS (dNFS), you get the capability to create thin copies of a database from RMAN image copies. This approach uses the NFS v3 client that has been embedded in the database technology since 11.2.0.3. Currently, it is supported for single-instance databases, but only on filesystems (i.e., ASM is not yet supported); a minimal sketch of the underlying mechanics follows the list below.
The advantages of this approach include the following:
• It’s easy to set up.
• No special storage software is needed.
• It works on all platforms.
• It’s both time efficient (instantaneous cloning) and space efficient (you can create multiple clones based on one backup).
• It uses dNFS to improve the performance, high availability, and scalability of NFS storage.
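To make the mechanics a little more concrete, here is a heavily simplified sketch of what CloneDB does under the covers; the directory and file names are invented for the example, and in practice the Snap Clone service template drives these steps for you.

# 1. On the test master, create RMAN image copies that will back all of the clones
rman target / <<'EOF'
BACKUP AS COPY DATABASE FORMAT '/backup/testmaster/%U';
EOF

# 2. On the clone instance (started with clonedb=TRUE and mounted from a backup
#    controlfile), map each datafile onto its image copy over dNFS; DBMS_DNFS
#    provides the copy-on-write plumbing so the clone stores only changed blocks
sqlplus / as sysdba <<'EOF'
EXEC DBMS_DNFS.CLONEDB_RENAMEFILE('/backup/testmaster/users01_copy.dbf', '/u02/copy-on-write/users01.dbf');
ALTER DATABASE OPEN RESETLOGS;
EOF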
Hardware Solution
As mentioned previously, the hardware solution for Snap Clone is available on a range of hardware, including Sun ZFS Storage Appliance, NetApp, and EMC. The EMC configuration is the most interesting one, as Snap Clone on EMC includes support for databases on ASM, so let’s look at that example in more detail.
Using Snap Clone on ASM and EMC storage provides the ability to create “live” thin clones of databases that are on ASM. A live clone is not snapshot based but rather a live clone of the database that can be within the same cluster or in a different one. Both single-instance and RAC are supported—supported versions are 10.2.0.5 and higher of the database and 11.2 and higher of the grid infrastructure code. This functionality works on both EMC VMAX (with TimeFinder VP Snap) and VNX storage appliances.
Figure 8.29 shows what this configuration looks like diagrammatically.
Figure 8.29. Snap Clone using ASM on EMC storage

Snap Clone Setup
Now that we’ve looked at the different setups architecturally, let’s walk through the setup for Snap Clone. In this example, we use CloneDB because it requires the most straightforward setup. Refer to the Cloud Administration Guide (found under Private Cloud Setup and Administration in the EM12c documentation) for more details on the other setups.
The steps to set up Snap Clone are as follows:
1. Update the relevant plugins (or install them if you don’t already have them). You will need the following plugins (the versions mentioned here are for the 12.1.0.4.3 environment used in this example):
• Enterprise Manager for Cloud version 12.1.0.9.0
• Enterprise Manager for Oracle Cloud Framework version 12.1.0.2.0
• Enterprise Manager for Storage Management version 12.1.0.5.0
• Enterprise Manager for Oracle Database version 12.1.0.7.0
2. Set up the relevant roles, users, PaaS infrastructure zones, and database pools, as described in Chapter 5.
3. Create a database profile and service template so that self-service users can create thin clones as they require.
Let’s look at the details of step 3.
Creating a Database Profile
Creating a database profile to use Snap Clone is a little bit different from creating the profiles we looked at earlier.
1. Start from the same place (Setup → Cloud → Database → Data Sources → Data Profiles, and then click Create). In this case, we are going to create the profile using an RMAN Database Image, as shown in Figure 8.30. Note the message that also appears on this screen (at the bottom) that says Snap Clone is not enabled. This rather spurious message occurs because we don’t have any relevant hardware installed that we can use to create Snap Clone. However, it can be safely ignored because we are using CloneDB in this example.
Figure 8.30. Creating a database profile for Snap Clone, step 1

2. In this example, the database from which the profile is created is running in ARCHIVELOG mode. That means that on the next screen, shown in Figure 8.31, the backup can be done in online mode. A directory for the backup to take place in must also be specified.
Figure 8.31. Creating a database profile for Snap Clone, step 2

3. Steps 3 and 4 of the wizard are the same as in previous database profile creations in this chapter, so for brevity, they are omitted here. Now we can look at using that profile in the service template creation. Again, we start that from Setup → Cloud → Database → Service Templates, and then click the Create button. The main difference here is that once we select the database profile we just created, there will now be a Create CLONEDB option, as you can see in Figure 8.32.
Figure 8.32. Creating a service template for Snap Clone, step 1

4. The important information we need to provide to use Snap Clone with CloneDB is on the next screen. We need to tell the wizard a location that is shared using NFS. In this example, the location is set up as an NFS share at /u02/copy-on-write, as shown in Figure 8.33.
Figure 8.33. Creating a service template for Snap Clone, step 2

5. Again, the remaining steps of the wizard are the same as in the previous service template examples, so they are omitted here. At the end, however, you should see a message that the service template has been created successfully, as shown in Figure 8.34. Once that is complete, the self-service user can select this template from the Self Service Catalog, just as shown in previous examples in earlier chapters.
Figure 8.34. Creating a service template for Snap Clone, step 3

Summary
In this chapter, we looked at the two main types of clones built in a cloud environment using Enterprise Manager 12c—full clones and snap (or thin) clones. A full clone involves taking a complete or full copy of an existing database, which can be done in one of two ways: by taking an RMAN backup or issuing an RMAN DUPLICATE command. A snap clone takes advantage of copy-on-write technology, available through both software and hardware solutions, to rapidly create space-efficient clones of large databases. These clones contain only changed blocks; unchanged blocks are read from the test master database the clone is built from. As a result, many snap clones can be taken from a single test master without using anywhere near the amount of storage the test master takes up.


From here you can see a number of regions—General, Request Status, Incidents, Top Resource Providers, Top Requested Services, and Top Charge Trends (data is displayed in the Charge Trends region only if you have enabled chargeback), so you get a good overview of the health of your cloud infrastructure. If you want to drill down to look at more details of what service instances you have available, you can click on the number to the right of Service Instances. This link will show you exactly what service instances are defined in your cloud environment, their status, the service type, the resource provider, the service template, and even who owns it (see Figure 7.2).
Figure 7.2. Service instances

From here you can also click on the name of the service instance (for example, crm00001) to drill into the home page for that particular target, or you can simply select the row containing that target to do the following tasks:
• Start or stop the instance. Which one of these options is available depends on the status the instance is currently in. For example, if the service instance is already started, the Start option will remain grayed out.
• Examine the history of the service instance by clicking the History icon. This feature shows all the operations that have taken place against this particular service instance since it was created.
• Delete the service instance if it is no longer required. Obviously, caution should be taken before deciding to perform this operation.
You may find that the two most important links are the ones that show failed requests (the red slice of the pie chart in Figure 7.1) and the heat map (which you get to by clicking View Map on the top right of the screen shown in Figure 7.1). The failed request statuses are ones you’ll want to investigate further to see what went wrong. Sometimes, this task is as simple as an attempt to start a database that’s already started, but at other times, you’ll find more complex issues that need further investigation and follow-up. You should also check here for requests that failed due to insufficient quota or resources and investigate whether more quota or resources need to be allocated.
The heat map is a way to graphically view which resource providers (such as platform as a service [PaaS] infrastructure zones and database pools) are using the most resources. Even in the relatively simple environment we’re using as a demonstration, the heat map (see Figure 7.3) shows that the host allocation for the Salt Lake City PaaS infrastructure zone, at 66 percent utilization, is starting to reach a level that may require adding more PaaS infrastructure zones. In more complex environments, you may see resource providers that show up as red. If you select that resource provider, you’ll see what the problem is in the Metric Details region shown below the heat map, and you can drill into the Value link to discover more details.
Figure 7.3. Heat map from a simple cloud environment

There is also a menu of items that you can select from using the Oracle Cloud menu on the top left of the Cloud Home page. Several items in this menu are useful from an administration and management perspective.
Members → Topology
This menu item shows you a graphical representation of how the different items in your cloud environment are related. It is particularly useful when the environment becomes more complicated, as it allows you to see at a glance how items are related to each other. By clicking on one of the items, you can see specific details listed under the Metric History region on the bottom right (see Figure 7.4 for an example).
Figure 7.4. Configuration Topology

Resource Providers, Service Templates, Service Instances
Each of these menu items takes you to a list of the different entities. From here, it is easy to select and edit each particular entity. You can also get to each of these pages by clicking the links next to each one in the General region of the Cloud Home page. From each page, you are also able to create new items or delete existing ones. Figure 7.5 shows an example from the Resource Providers page.
Figure 7.5. The Resource Providers page

Requests
If you click on Show All under Requests, you can see a complete list of the requests that have been made in your cloud environment, as shown in Figure 7.6. From here you can do the following:
• Select an individual request and look at the details for it.
• Reschedule the request if it hasn’t already been executed.
• Delete the request.
Figure 7.6. The Requests page

If you click on Requests → Dashboard, you’ll be taken to a more graphical representation of the request trends over the past seven days, as shown in Figure 7.7. Note that you can click on the Settings spanner on the top left to change this time period to one more suitable to you (Figure 7.7, for example, shows the last 31 days’ worth of requests). This page is particularly useful for showing request trends to management.
Figure 7.7. The Request Dashboard

The Cloud Adviser
The information we’ve covered so far in this chapter is really the bread and butter, the day-to-day life of a cloud administrator. However, one of the main advantages of moving to a cloud environment is that it frees up a lot of the DBA’s time. No longer do you need to focus on the mundane tasks of creating databases, looking after backups, and so on. The time that you used to spend working on these tasks is now available for you to take on a much more important role—the cloud adviser.
What is the cloud adviser’s role? Well, it’s a role that many of us have been performing for years—the senior DBA role—expanded into the cloud environment. Since most of our business users do not have the technical background to be able to make informed decisions on technical issues, they still come to the DBA to provide that advice. Those technical issues fall into a variety of broad topics, which the rest of this chapter examines.
Security
Security is of paramount importance in the cloud world, because if you get the security model wrong in the cloud environment, you may not have an environment to administer and manage for very long! We’ve all no doubt heard of the security breaches that seem to be increasingly common in the cloud environment. Security breaches might involve stolen usernames and passwords from cloud providers and, more seriously, stolen data. Oracle database technology is used the world over to secure large volumes of confidential data—financial data, employee data, customer information, and so much more. Securing that data, and ensuring it remains secure, is an essential part of the DBA’s role. Following are some of the tools that can be used to assist with this task:
• Transparent Data Encryption (TDE): TDE is a way to stop would-be attackers from bypassing the database and reading sensitive information directly from the files that make up the Oracle database. This protection is extended to any backups that have been made of the database as well. TDE allows encryption of individual columns right up to entire tablespaces, so you can choose how much of your data to secure.
• Data Masking and Subsetting: As you move to the cloud environment, one approach that is often taken for security reasons is to migrate development and test environments to the cloud first, as these environments are often seen as lower risk. However, at the same time, you want to ensure that your developers and quality assurance (QA) engineers have environments that are as close to production as possible so they have a realistic environment to test performance on. Those two requirements may at first glance seem antithetical, but Enterprise Manager contains Data Masking and Subsetting tools that can be used to address the issue. Oracle Data Masking and Subsetting allows you to sanitize copies of production data for development and QA environments so that your production data is not compromised. It also allows you to use subsets of the data in cases where you simply do not have enough storage to take a full copy of your production data into these environments.
Of course, you need to secure your data against more than just external attacks. Most studies of security breaches indicate that these breaches are more often the work of insiders than of external hackers (regardless of the fact that the most reported-on breaches are external). So from a security perspective, you must have tools and technologies that can be used to both track and secure against internal attacks as well. Some of the Oracle products that help to address this space include Transparent Data Encryption, Oracle Audit Vault, and Oracle Database Vault.
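To give a flavor of what TDE looks like in practice, here is a minimal sketch of column-level and tablespace-level encryption. It assumes the encryption wallet (keystore) has already been created and opened, and the object and file names are invented for the example.

sqlplus / as sysdba <<'EOF'
-- Column-level TDE: encrypt only the sensitive column
CREATE TABLE hr_emp (
  emp_id NUMBER,
  ssn    VARCHAR2(11) ENCRYPT USING 'AES256'
);

-- Tablespace-level TDE: everything stored in this tablespace is encrypted on disk
CREATE TABLESPACE secure_data
  DATAFILE '/u02/oradata/secure_data01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
EOF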
Server and Database Sizing
In theory, scaling a database in the cloud world is as simple as requesting more processing capability. In reality, we all know that service providers, be they an internal IT department or an external hosting provider, will have limitations as far as the hardware they have available to them. One of the value-add roles that a DBA can perform is to determine the amount of resources required now to meet current needs as well as to scale to meet future requirements. Enterprise Manager 12c has a number of tools that can be used to assist the DBA in this arena, including the following:
• Consolidation Planner: Consolidation Planner is a tool that can be used to plan a cloud architecture for server consolidation or for any other consolidation model. It allows you to identify source and destination targets and applicable technical and functional constraints, such as where the application can reside. You can also use Consolidation Planner to determine whether source environments can be consolidated onto hardware that you have not yet purchased, providing greater confidence that machines you are looking at purchasing will be suitable for the workloads you plan to put on them.
• Chargeback: As we covered in Chapter 6, “Metering and Chargeback in Enterprise Manager 12c,” chargeback is an accounting tool that provides a way to aggregate the enormous amount of metrics data that Enterprise Manager collects, a way to report to consumers of those metrics how much they have used of those particular resources, and, if you have set it up to do so, a way for the IT department to charge those consumers for the resources they have used. However, its usefulness is not restricted to these functions. Once the chargeback application is configured and running, you accumulate a history of how many resources have been used by specific applications or databases, which in turn allows you to project some form of capacity planning figures for future growth. Obviously, you still need to talk to the application owners to get an idea if any new functionality may impact those figures, but at the very least, you have a starting point that can be used to predict future resource requirements.
Performance Tuning
As mentioned at the beginning of this chapter, contacting the DBA is all too often the first port of call for users complaining about performance, even before they can identify the database as the source of the performance issue. Of course, if we’ve ensured server and database sizing has been done correctly, we shouldn’t see too much in the way of performance issues. Nevertheless, we all know that server and database sizing is far too often a reactive rather than proactive exercise, so we still need to undertake the same sorts of performance problem identification and resolution exercises that we are familiar with from the non-cloud environment. This topic is far too large to cover here—indeed, whole books have been written about Oracle performance tuning—so we won't go into more detail now. Suffice it to say, you will still need to perform this role, but with the added complexity of not necessarily knowing beforehand what other environments may be directly impacting yours. Consequently, you should become familiar with the pages covered at the beginning of this chapter, particularly the Cloud Home. It's there that you can get an overview of what cloud instances have been created and can drill down to see all the different service entities that have been created and removed.
Summary
Oracle Enterprise Manager Cloud Control 12c should be your tool of choice when monitoring, administering, and managing the Oracle database cloud environment. As you’ve seen over the past few chapters, it provides the capability to set up both schema as a service and database as a service as well as to configure the chargeback application. Additionally, as we’ve seen in this chapter, Enterprise Manager 12c allows you to perform the ongoing management and administration of the cloud environment you have set up. One final step remains as an often undertaken task in the cloud environment: cloning of databases from one environment to another. That’s the subject of our next chapter.

Chapter 9. Virtualizing RAC 12c (DB Clouds) on Oracle VM—Grid Infrastructure
Cloud computing promises to usher in a new era for the corporate IT universe. Every day, we hear that the cloud within the typical IT organization is inevitable, if not already present in some form; it is reasonable to conclude that cloud computing is only a matter of when, not if.
The cloud computing winds of change have been blowing for quite a few years now, recently picking up momentum at Oracle Corporation—almost every key Oracle product focuses on cloud computing as a paradigm. This focus is evident in the “c” (for “cloud”) that is appended to the current release number of Oracle products. Real Application Clusters (RAC) is no exception to Oracle’s turn toward cloud computing.
Virtualization is the foundation on which cloud computing is widely implemented today. What is virtualization, and what part does it play in the cloud universe? This chapter and the next provide insights, recommendations, and a step-by-step guide on setting up virtualized RACs, with an emphasis on virtualization, cloud computing, Oracle Virtual Machine (OVM) for x86, and Oracle Enterprise Manager Cloud Control 12c (EM12c). These virtualized RACs can then be utilized within the framework of database as a service (DBaaS) for rapid and easy deployment as database cloud services.
The overlap between the material in this and other chapters is intended to reiterate important concepts as well as to present the topics in the proper context.
Following is a summary of topics presented in this chapter:
• Building Oracle database clouds: The necessary ingredients
• What is virtualization?
• What are virtual machine (VM) monitors (hypervisors)?
• Types of hypervisors
• Types of virtualization
• Oracle VM for x86—360 degrees
• Xen—Synopsis and overview
• Oracle VM—Overview and architecture
• Oracle VM templates—Synopsis and overview
• Oracle VM 3.x—A brief introduction
• Setting up virtualized Oracle RAC clusters using Oracle VM: Alternative approaches
• Set up, install, and configure 12c virtualized RAC clusters: Step-by-step setup and configuration
This chapter guides you, step by step, through installing, setting up, and configuring a virtualized RAC 12c using OVM for x86. The next chapter takes a similar approach, with one major difference—the underlying virtualization technology (hypervisor) is Oracle VirtualBox instead of OVM for x86. This information gives you the choice of using either virtualization technology or both technologies to set up virtualized Oracle RAC 12c database clouds. An overview of cloud computing and the role and relevance of virtualization from the perspective of cloud computing are also covered in both chapters. All respective versions of the hypervisors used are the latest and greatest at the time of the publication of this book.
Database Clouds Based on RAC—The Necessary Ingredients
Cloud computing can be described as “fill-in-the-blank as a service”: for example, infrastructure as a service (IaaS), platform as a service (PaaS), and database as a service. A more detailed overview of cloud computing, including its various flavors, paradigms, and prevalent trends, is presented in the next chapter.
How do we plan for, set up, build, and configure Oracle database clouds? The short answer is OVM for x86, EM12c, and RAC. Together they make up the true database cloud solution from Oracle, especially if you are planning your own private database clouds behind your corporate firewalls. OVM for x86 is used interchangeably with OVM in this chapter and the next.
An overview of virtualization is presented in this chapter with a follow-up section on cloud computing in the next chapter.
Virtualization—360 Degrees
In the IT universe, virtualization is the counterpart of the physical entity: it abstracts physical hardware into flexible, software-defined resources. Here are some salient features and key points about virtualization:
• Virtualization is the foundation stone in the cloud computing era.
• Virtualization is an inevitability waiting to happen in the IT universe, one that you just can’t avoid: the sooner you embrace it, the better off you are.
• Virtualization can be summarized as an abstraction layer.
• Virtualization has proved to be a game-changer, resulting in unprecedented server utilization.
• Virtualization enables agile availability of resources to the end user, thereby shaving considerable time from the IT provisioning life cycle.
• Virtualization in the modern day can be characterized as the gateway and roadmap to secure and elastic corporate IT scalability.
• Virtualization implies a fantastic alternative to physical reality—the possibilities are endless.
• The alternative to virtualization consists of physical hosts carrying large amounts of idle spare capacity, with many resources left underutilized.
• Although Oracle database administrators (DBAs) were slow to adopt virtualization for their databases, the trend has finally gained momentum and reached critical mass.
What Are VM Monitors (Hypervisors)?
A VM monitor, also known as a hypervisor, enables OS kernels to run and coexist as guests, thereby enabling virtualization at the OS level. Hypervisors are responsible for allocation and coordination of CPU, memory, I/O, peripheral resources, and so on, to the guest VMs.
Types of Hypervisors
There are two types of hypervisor:
• Type 1: This type is known as a native, or more commonly, bare-metal hypervisor. It installs directly on the hardware and does not require an OS on which to be installed. Examples are VMware ESX/vSphere, Microsoft Hyper-V, Xen, and OVM. Bare-metal hypervisors are enterprise-grade hypervisors that enable cloud computing as it is widely known and understood today.
• Type 2: This type is known as a hosted hypervisor and is installed on an already existing OS on the system: examples are OVM VirtualBox, VMware Server, and VMware Workstation. Hosted hypervisors are mostly utilized for personal use, for example, learning new technologies and colocating various OS families on your laptop.
Here are some key points and salient features about hypervisors:
• A hypervisor is at the lowest level of the stack from a technology standpoint.
• A hypervisor enables agility and rapid deployment of resources within the IT space.
• Hypervisors result in increased efficiency by merit of elastic resource consolidation.
Following are some benefits and advantages of implementing hypervisors:
• Increased resource utilization
• Fault tolerance and high availability
• Isolation and multitenant support
• Support for a wide range of popular OS families
Types of Virtualization
There are three types of virtualization prevalent in the industry today (the first two categories are explained in the following sections, as they are relevant to this chapter):
• Paravirtualization
• Hardware-assisted/full virtualization
• Partial virtualization
What Is Paravirtualization?
In paravirtualization, guest VMs use a special hypercall application binary interface (ABI) in a modified OS for performance and simplicity. The modified OS communicates with the hypervisor, and tasks are relocated from the virtual domain to the host domain.
OVM implements this type of virtualization. The Oracle Linux and Red Hat Enterprise Linux families are supported as paravirtualized guests with OVM.
Paravirtualization is generally somewhat faster than hardware virtualization; this is not to imply that either type of virtualization is slow or not fast enough.
What Is Hardware-Assisted/Full Virtualization?
Hardware-assisted virtualization is also known as full or native virtualization and requires CPU support.
This type of virtualization enables unmodified guest OS kernels to run within a simulated hardware infrastructure but generally is relatively slower than paravirtualization.
The Microsoft Windows and Oracle Solaris families are supported as hardware-virtualized (full) guests on OVM, with paravirtualized drivers available for improved performance.
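If you ever need to confirm which of these modes a given guest is actually running in, a quick check from inside the guest is sketched below; it assumes the virt-what package (shipped with Oracle/Red Hat Enterprise Linux) is installed.

# The kernel typically exposes the hypervisor type to the guest
cat /sys/hypervisor/type      # usually prints "xen" on an OVM guest

# virt-what reports facts such as "xen-domU" for a paravirtualized guest
# or "xen-hvm" for a hardware-assisted (fully virtualized) guest
virt-what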
OVM for x86—360 Degrees
OVM for x86 is a Type 1 hypervisor based on Xen, the de facto standard open-source hypervisor. Xen is a mainstream technology, widely used by dominant cloud computing providers such as Amazon and Rackspace, as well as by Oracle’s own public cloud. OVM provides both server virtualization and management components. OVM 3.x is based on Xen 4.x and has been significantly enhanced to be an industrial-grade product capable of configuring, administering, managing, and supporting thousands of servers hosting both Oracle and non-Oracle applications. Some of the advances in this relatively new version include dynamic resource scheduling (DRS), high availability–enabled server pools (clusters), and dynamic power management. OVM is augmented with the Virtual Assembly Builder and Template Builder components, which combine to form a complete virtualization picture within the OVM family.
Following are some of the key points about OVM’s capabilities and some of its advantages. However, as with any technology, OVM has its fair share of nuances, most of which can be taken care of by proper configuration and by following implementation best practices.
• Server load-balancing
• Centralized network and storage management
• Physical to virtual (P2V) and virtual to virtual (V2V) conversion
• Web services API
• Support for Windows, Linux, and Solaris as guest OS families
• Agility and fast deployment with OVM templates and Oracle Virtual Assembly Builder
• Web-based GUI management
• OVM zones—multiple server and storage pools
• High availability and live migration with OVM server pools
• Running mixed heterogeneous workloads within a single consolidated machine
• Very fast—delivers near-native performance
• Simple and easy installation—low learning curve
Another nice point is that OVM is free—you pay only for affordable, low-cost support.
Note
OVM is the only virtualization offering for the x86 architecture that is certified with all major Oracle products.
Xen—Synopsis and Overview
Xen originated at Cambridge University and is the leading open-source, industry-standard hypervisor. Ian Pratt founded XenSource, the company behind Xen, which was later acquired by Citrix in 2007. Xen 4.x is the latest version as well as the underlying version for OVM 3.x.
The Xen hypervisor is the virtualization base of Amazon EC2, the market leader in the cloud computing IaaS service model. Oracle is part of the Xen Advisory Board and contributes to its development. Other members of the Xen Advisory Board include Citrix, Hewlett Packard, IBM, Intel, Novell, Oracle, and Red Hat.
OVM—Overview and Architecture
OVM is made up of two components:
• OVM Server, the Xen-based open source hypervisor component
• OVM Manager, the Java-based thin-client GUI management component
OVM Server
OVM server is the actual hypervisor component based on Xen. It installs on bare-metal x86 hardware and does not require a preinstalled OS.
OVM boots a small 64-bit domain called DOM0, which is used for assigning, distributing, and coordinating CPU, I/O, and other resources. Guest VMs are created and configured as DOMUs.
OVM Manager
Based on WebLogic server, OVM Manager is a Java-based management server component with a Web-based UI. It utilizes Oracle Database as a management repository and comes prepackaged with a free XE version of Oracle Database, which can be converted to all other flavors of the Oracle Database server family.
With OVM 3.2.x, MySQL is also supported as a repository database option. OVM agent processes are used on each OVM server for communication and management purposes. OVM uses server pools (or clusters) to group virtualization resources: each server pool encompasses one or more OVM servers.
What Are OVM Templates?
OVM templates, or Golden Images, are factory-packaged, preinstalled, and preconfigured VM images containing software products, complete with built-in best practices and ready to go. They provide reusability and full-stack implementation. All major Oracle products—for example, Oracle Database server, Fusion Middleware, Enterprise Linux, and RAC—are available as OVM templates.
OVM templates are the vehicle for significantly reducing installation and configuration costs across the IT landscape.
Methods of Creating OVM Templates
The following methods can be employed to create OVM templates:
• P2V conversion
• Create VM templates from existing VM images
• Create VM templates from just enough operating system (JeOS)
OVM Builder
OVM Assembly Builder provides a structured process for consolidating appliances into cohesive, reusable assemblies: you rapidly create and configure full-stack topologies and provision them onto virtualized appliances.
It is used to create dedicated VMs called software appliances and to deploy an entire application as a single, automatically configured unit. This tool can significantly speed up the building of private clouds by creating VM assemblies and deploying OVM templates.
OVM 3.x—A Brief Introduction
OVM 3.x, the latest release, takes scalability to a whole new level. With tons of new features, OVM 3.3 is based on Xen 4.x.
Being highly scalable, this latest version of OVM includes many enhancements:
• A feature-rich Web-based UI, improved backup and recovery capability
• Simplified VM deployment, administration, and management with 64-bit DOM0
• Application-driven virtualization, up to 128 virtual CPUs, and 1 TB memory per guest VM
• Jobs-based VM operations
• Dynamic resource management
• Dynamic power management
• Comprehensive network and storage management
• Multiple-template cloning in a single step
• Over 100 factory-packaged best-practices built into OVM templates
• A centralized configuration and management solution in the form of OVM Manager
In other words, 3.x is truly an enterprise-grade, groundbreaking release. OVM 3.x is fully managed by a browser-based UI provided by OVM Manager.
If you haven’t already embarked on this journey, now is a great time to upgrade and migrate your OVM infrastructures from 2.x to 3.x.
OVM 3.x: High Availability–Enabled OVM Ecosystem
OVM provides broad-based high availability across the virtualization ecosystem in the form of high availability–enabled server pools (or clusters) on shared storage.
Salient features include:
• Live migration of guest VMs
• Automatic failover/restart of guest VMs in case of server failure
• Oracle Cluster File System 2 (OCFS2)—high availability on a cluster file system
• Server pool load balancing—using a best-fit algorithm, places guest VMs on the most appropriately loaded VM server
• Clustered OVM Manager
Virtualized RAC Using OVM Templates—Approach 1
This approach is the easiest and fastest way to set up your own virtualized RAC database clusters as part of virtualized Oracle RAC database clouds. Simply download the OVM for x86 templates for RAC, install them, and in less than an hour, you have your own virtualized RAC up and running. This methodology is truly revolutionary and illustrates the beauty and power of agile provisioning of complex infrastructures and applications in cloud environments using virtualized templates.
While this approach is not covered in complete detail, the main utility used to set up, configure, and deploy a virtualized RAC from OVM templates, DeployCluster, is presented in the following section.
Use DeployCluster to Configure and Deploy the Virtualized RAC
This section walks you through using the DeployCluster tool to rapidly configure and deploy a virtualized RAC database cluster. Listing 9.1 shows the example run.
Listing 9.1. DeployCluster Tool
[root@bsfmgr01 deploycluster]# ./deploycluster.py -u admin -p password -M bsfracovm1,bsfracovm2 -N bsfrac64.ini
Oracle RAC OneCommand (v1.1.2) for Oracle VM - deploy cluster -
    (c) 2011-2012 Oracle Corporation
 (com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1200:v1.1.2) - v2.6.6 -
    bsfmgr01.bsflocal.com (x86_64)
Invoked as root at Sat Sep 22 20:10:04 2012  (size: 37600, mtime: Sun Aug  5 12:37:58 2012)
Using: ./deploycluster.py -u admin -p **** -M bsfracovm1,bsfracovm2 -N bsfrac64.ini

INFO: Attempting to connect to Oracle VM Manager...

INFO: Oracle VM Client  (3.1.1.399) protocol (1.8) CONNECTED (tcp) to
      Oracle VM Manager (3.1.1.305) protocol (1.8) IP (192.168.1.51) UUID
(0004fb0000010000da73c3bcce15ca2e)

INFO: Inspecting /home/oracle/ovm3/deploycluster/bsfrac64.ini for number of nodes defined....
INFO: Detected 2 nodes in: /home/oracle/ovm3/deploycluster/bsfrac64.ini
INFO: Located a total of (2) VMs;
      2 VMs with a simple name of: ['bsfracovm1', 'bsfracovm2']
INFO: Verifying all (2) VMs are in Running state
INFO: VM with a simple name of "bsfracovm1" is in Running state...
INFO: VM with a simple name of "bsfracovm2" is in Running state...
INFO: Detected that all (2) VMs specified on command have (5) common shared disks
between them (ASM_MIN_DISKS=5)

INFO: The (2) VMs passed basic sanity checks and in Running state, sending cluster details
as follows:
      netconfig.ini (Network setup): /home/oracle/ovm3/deploycluster/bsfrac64.ini
      buildcluster: yes
INFO: Starting to send cluster details to all (2) VM(s).....
INFO: Sending to VM with a simple name of "bsfracovm1"....
INFO: Sending to VM with a simple name of "bsfracovm2"......
INFO: Cluster details sent to (2) VMs...
      Check log (default location /u01/racovm/buildcluster.log) on build VM (bsfracovm1)...
INFO: deploycluster.py completed successfully at 20:10:19 in 15.7 seconds (00m:15s)
Logfile at: /home/oracle/ovm3/deploycluster/deploycluster2.log
Figure 9.1 and Figure 9.2 each show parts of a sample run of the DeployCluster tool. On your monitor display, INFO: (Figure 9.1) and [ OK ] (Figure 9.2) should be green: all green means all good to go!
Figure 9.1. An example run of the DeployCluster tool for setting up RAC using Oracle VM for x86 templates

Figure 9.2. DeployCluster tool: Configuring and setting up RAC using Oracle VM for x86 templates

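DeployCluster itself returns in a matter of seconds, but the actual cluster build continues inside the first VM. To follow it live, tail the log whose default location the tool prints; the hostname and path below are the ones from Listing 9.1.

# Follow the Grid Infrastructure/RAC build progress on the build VM
ssh root@bsfracovm1 tail -f /u01/racovm/buildcluster.log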
Note
At the time of writing, OVM for x86 templates for RAC were only available for up to version 11gR2 and not for 12c. This is the recommended approach for setting up Oracle RAC as database clouds; however, because of the absence of OVM templates for 12c, we have included the longer alternative approach outlined in the next section. The other rationale for including this approach is that it enables you to learn the specific actions required to set up and configure RAC 12c from scratch.
Set Up and Configure a Virtualized RAC Database Cloud—Approach 2
This section takes you through an alternative, step-by-step approach to setting up your own virtualized RAC 12c in OVM for x86.
Note
This chapter assumes that you already have an OVM 3.x server pool in an up-and-running state. If this is not the case, please refer to the OVM documentation to set up OVM 3.x. The following sections assume that you are familiar with basic RAC concepts (presented in earlier chapters). Also, this chapter and the next chapter are structured in a way that enables you to set up RAC database clouds in the comfort of your own home for learning purposes. Please note that the steps are identical to corporate RAC setups; however, the infrastructure is pared down to enable you to make use of hardware available at home.
The following hardware and software were used for setting up the OVM server pool for this example:
• OVM Manager and EM12c:
• (Qty: 1) 64-bit Intel x86 machine with 8 GB RAM
• OVM servers for server pool:
• (Qty: 3) 64-bit Intel x86 machines with 16 GB RAM each
• Shared storage:
• (Qty: 1) 64-bit Intel x86 machine with 8 GB RAM:
• Openfiler with 1 TB disk space available on iSCSI
Roadmap to a Virtualized RAC 12c Cluster: High-Level Steps
Following are the high-level steps to build your own virtualized RAC–based database cloud:
1. Set up and configure the required hardware and software for an OVM server pool on shared storage.
2. Prepare and plan—Do your homework.
3. Install and set up grid infrastructure.
4. Install and set up non-shared database home(s).
5. Create a RAC database.
6. Configure and set up the RAC database as a monitored target in EM12c.
All of the preceding steps are detailed, elaborated on, and/or executed in the following sections of this chapter and the next one, with several alternative options presented for some of the involved steps.
While all of the following steps apply equally to corporate environments, they are written in such a way that you can set up a virtualized database cloud environment in your home, thereby learning how to install, set up, configure, and monitor RAC with minimal hardware.
OVM: Prerequisites, Preparation, and Planning
Ensure that the following virtual infrastructure is available and ready for deployment in a brand-new RAC setup.
Step 9.1—Set Up a Server Pool in OVM for x86
This chapter assumes that you already have a basic server pool in OVM for x86 complete with OVM Manager 3.x and EM12c release 2 set up, configured, and ready to go for deploying an Oracle RAC 12c cluster (the hardware/software configuration used in this chapter and the next one is outlined in the preceding section). In case you need help with this process, be assured that it is simple, with a low and intuitive learning curve, if you follow the appropriate OVM for x86 installation and setup documentation.
The following sections detail the steps involved in configuring OVM for RAC 12c.
Step 9.2—Configure Network Time Protocol
Press the Push to All Servers button for Network Time Protocol (NTP).
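After pushing the NTP settings, confirm on each OVM server (and later on each RAC node VM) that ntpd is actually synchronizing. Oracle Grid Infrastructure also expects ntpd to run with the -x (slewing) option; the following checks are a sketch and assume the stock OEL 6.x ntp packages.

# ntpd should be started with the -x (slew) option for Grid Infrastructure
grep OPTIONS /etc/sysconfig/ntpd

# An asterisk in the first column marks the peer the server is synchronized to
ntpq -p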
Step 9.3—Create the Required Network for RAC 12c
Continue with the following steps:
1. Choose OVM Manager → Networking → Networks → Create (+) Button.
2. Select the Create a Hybrid Network with Bonds/Ports and VLANS option.
3. Enter the name and description of the OVM network. Select the Virtual Machine option.
4. Select the relevant ports.
5. Select the relevant VLAN segments.
6. Select the appropriate IP addressing scheme. Enter the IP addresses, net masks, and bonding options if applicable.
As shown in Figure 9.3, the new OVM network has been successfully created and is ready to be deployed and used.
Figure 9.3. Oracle VM 3.x: Dedicated network for Oracle RAC 12c

Step 9.4—Create the Required Shared Virtual Disks for the GRID1 ASM Disk Group
To create the disks, follow these steps:
1. Choose OVM Manager → Repositories → Select OVS Repository → Virtual Disks → Create (+) Button:
2. Create a virtual disk with the following options:
• Size: 15 GB
• Allocation type: Sparse allocation
• Shareable: Yes
3. Repeat the preceding process for all five GRID1 Automatic Storage Management (ASM) disks.
Note
For production environments, it is highly recommended to have physical block devices presented as virtual disks for the various ASM disk groups.
As shown in Figure 9.4, all of the virtualized shareable ASM disks for the GRID1 disk groups have been created and are now ready for use.
Figure 9.4. Oracle VM 3.x: Virtualized disks for GRID1 ASM disk group for RAC 12c

Note
Step 9.5 has two alternative approaches, both of which are explained in quite a bit of detail next. Each approach has further substeps, which are illustrated as well.
Step 9.5, Approach 1
Step 9.5, Approach 1, is illustrated in the following substeps.
Step 9.5, Approach 1a—Create the VM for RAC Server Node 1 from Scratch Using an ISO Boot Image to Install OEL 6.x
To create the Oracle Enterprise Linux (OEL) 6.x VM for RAC using an ISO boot image, follow these steps (as shown in Figure 9.5):
1. Choose OVM Manager → Servers and VMs → Select Server Pool → Create Virtual Machine Button.
2. Select the Create a New VM option.
3. As shown in Figure 9.5, select and enter the following options for the RAC-Node-01 VM:
• Server pool
• Server pool repository
• VM description
• High Availability: Unchecked
• Operating system: Oracle Linux 6
• Domain type: Xen PVM
• Start policy: Start on best server
• Memory (MB): 2,048 (the minimum required is 4,096; however, if you are short on physical memory and are building the RAC for learning purposes, 2,048 can suffice)
• Max memory (MB): 8,192
• Processors: 2
• Max processors: 4
• Priority: 50
• Processor cap %: 100
Figure 9.5. Oracle VM 3.x: Creating a VM for RAC-Node-01

4. Select Network and then press the Add VNIC button twice to create two virtual network interface cards (VNICs) for the RAC-Node-01 VM (see Figure 9.6).
Figure 9.6. Oracle VM 3.x: Specifying virtual network interface cards for the public and private cluster interconnect networks required for the RAC

5. Choose Next to move on to Setup Networks and Arrange Disks.
6. Select and enter the following options for the VM disks (see Figure 9.7):
• CD/DVD
• Virtual disk—Press the Create (+) button.
Figure 9.7. Oracle VM 3.x: Specifying virtualized disk/optical media for RAC-Node-01

7. Select the imported ISO for Linux 6.x.
8. Select and enter the following options:
• Repository
• Virtual disk name
• Description
• Shareable: Unchecked
• Size: 25GB
• Allocation type: sparse allocation
9. Select the ISO for OEL 6.x (see Figure 9.8).
Figure 9.8. Oracle VM 3.x: Specify the ISO for Oracle Enterprise Linux 6.x 64-bit x86

10. As shown in Figure 9.9, repeat the preceding process to add/select the following disks:
• DATA1 ASM disk group:
• Qty: 6
• Individual disk size: 50GB
• RECO1 ASM disk group:
• Qty: 1
• Individual disk size: 50GB
Select the Disk boot option. Press the Finish button to create the RAC-Node-01 VM.
Figure 9.9. Oracle VM 3.x: Guest VM for RAC-Node-01

Step 9.5, Approach 1b—Import the OEL 6.x x86-64 ISO into the OVM Repository
To import the OEL 6.x x86-64 ISO image into the OVM repository, follow these steps:
1. Go to OVM Manager → Repositories → Select OVM Repository → ISOs → Import ISO Button.
2. Select and enter the following (see Figure 9.10):
• Server
• ISO download location: ftp://oracle:password@192.168.2.20/software/OEL63_x86_64/V33411-01.iso (Replace the IP address, username, and password with your own.)
Figure 9.10. Oracle VM 3.x: Specify the FTP location for the OEL 6.x 64-bit x86 VM template

Note
Ensure that the Very Secure File Transfer Protocol Daemon (VSFTPD) server (FTP service) is set up correctly and that the ISO is available at the desired location and has the correct permissions.
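A quick way to sanity-check the FTP setup before kicking off the import is sketched below; the host, user, and directory are the ones used in this example, so substitute your own.

# On the FTP server: make sure vsftpd is running and starts on boot (OEL 6.x)
service vsftpd status
chkconfig vsftpd on

# From the OVM server: confirm the ISO is visible over FTP before importing it
curl -u oracle 'ftp://192.168.2.20/software/OEL63_x86_64/' --list-only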
Note that the status of the import process shows as In Progress, with a message showing Download Virtual CDROM....
Monitor the progress of the ISO import process in an SSH session to one of the Oracle VM servers (OVS) to which the OVS repository is connected.
[root@bsfovs03 ISOs]# pwd
/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/ISOs
[root@bsfovs03 ISOs]# ls -l
total 276480
-rw-r--r-- 1 root root 282066944 Feb 24 12:57 0004fb0000150000ba1fd09b4e2bd98c.iso
Keep checking periodically after brief intervals to monitor the progress of the ISO import process.
[root@bsfovs03 ISOs]# pwd
/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/ISOs

[root@bsfovs03 ISOs]# ls -l
total 2890752
-rw-r--r-- 1 root root 2959081472 Feb 24 13:04 0004fb0000150000ba1fd09b4e2bd98c.iso
At this point, the OEL 6.x ISO has been successfully imported (see Figure 9.11). Start up the VM, boot it from the OEL 6.x ISO, and go through the steps of setting up and installing Oracle Enterprise Linux 6.x.
Figure 9.11. Oracle VM 3.x: Imported ISO for Oracle Enterprise Linux 6.x 64-bit x86 is now ready for use as a bootable virtual CD/DVD drive

Step 9.5, Approach 2
Step 9.5, Approach 2, is illustrated in the following sub-steps.
Step 9.5, Approach 2a—Create the VM for RAC Server Node 1 Using an OVM OEL 6.x Template
To create the VM for RAC Server Node 1 using an OVM OEL 6.x template, follow these steps:
1. Download the OEL 6.x OVM 3.x template from https://edelivery.oracle.com/linux.
2. Unzip the ISO and make it available via FTP.
3. Go to OVM Manager → Repositories → Select OVM Repository → Assemblies → Import VM Assembly Button (see Figure 9.12). Enter the following:
• Server
• VM assembly download location
Note that the status of the VM assembly import process shows as In Progress with a message showing “Downloading Assembly...” and then another one showing “Unpacking Template....”
Figure 9.12. Oracle VM 3.x: Specify the FTP location of the Oracle VM assembly for Oracle Enterprise Linux 6.x 64-bit x86

Monitor the progress of the VM assembly import process within an SSH session to one of the OVS servers to which the OVS repository is connected.
[root@bsfovs03 11941bfbbc]# pwd
/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/Assemblies/11941bfbbc

[root@bsfovs03 11941bfbbc]# ls -l
total 617472
drwxr-xr-x 2 root root      3896 Feb 24 15:20 imports
-rw-r--r-- 1 root root 631541760 Oct 10 16:17 package.ova
drwxr-xr-x 2 root root      3896 Feb 24 15:21 unpacked
Keep checking periodically after brief intervals.
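Rather than rerunning ls by hand, you can let watch repeat the check for you until package.ova stops growing; the path below is the assembly directory shown above.

watch -n 30 'ls -lh /OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/Assemblies/11941bfbbc'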
As shown in Figure 9.13, the OVM assembly for OEL 6.3 x86-64 has been imported successfully and is now ready for use.
Figure 9.13. Oracle VM 3.x: Oracle VM assembly for Oracle Enterprise Linux 6.x 64-bit x86

Step 9.5, Approach 2b—Create the OEL 6.3 x86-64-PVM OVM Template from the Newly Created Assembly into the OVM Repository
To create the OEL 6.3 x86-64-PVM OVM template from the newly created assembly into the OVM repository, follow these steps:
1. Choose OVM Manager → Repositories → Select OVM Repository → VM Assemblies → Select VM Assembly → Create VM template.
2. Enter and select the following (see Figure 9.14):
• Assembly VMs
• VM template name
• Description
Figure 9.14. The VM template name and description

3. As shown in Figure 9.15, the OEL 6.3 x86_64 OVM template has been successfully created and is now ready for deployment.
Figure 9.15. The OEL 6.3 x86_64 OVM template

Step 9.5, Approach 2c—Edit the Newly Created OEL 6.3 x86-64-PVM OVM Template
To edit the newly created OEL 6.3 x86-64-PVM OVM template, follow these steps:
1. Choose OVM Manager → Repositories → VM Templates → Select VM Template → Edit.
2. Modify the following options as shown in the following screens:
• Max memory (MB): 8,192
• Memory (MB): 4,096
• Max processors: 8
• Processors: 4
• Enable High Availability: Unchecked (OVM HA is incompatible with Oracle RAC)
• Networks: Add/specify the appropriate network(s)
• Boot order: Disk
• Virtual disks:
• Add virtual disk for Oracle software binaries: 25GB
Step 9.5, Approach 2d—Create Clone Customizer for the RAC Node OVM Template
To create a clone customizer for the RAC node OVM template, follow these steps:
1. Choose OVM Manager → Repositories → VM Templates → Select VM Template.
2. Press the Create Clone Customizer button.
3. Specify the name and description of the new clone customizer for the RAC 12c cluster node VMs.
4. Modify the Clone Type to Thin Clone (see Figure 9.16). This is a fast and efficient way to create new VM clone machines.
Figure 9.16. Oracle VM 3.x: Specify the virtual disks for RAC-Node-01

5. Specify the network settings for the clone customizer (if any custom modifications are required).
Step 9.5, Approach 2e—Create the RAC-Node-01 VM from the VM Template Using the Advanced Clone Customizer Method
To create the RAC-Node-01 VM from the VM template using the Advanced Clone Customizer method, do the following:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Create Virtual Machine.
2. Select the Clone from an existing VM Template option (see Figure 9.17). Enter and select the following options:
• Clone count: 1
• Repository: Select OVS repository
• VM template: Select the OEL 6.3 x86_64 template
• Server pool: Select the appropriate server pool
• Description
Figure 9.17. Cloning from the existing template

3. Press the Finish button to create the RAC-Node-01 VM. The finished product is shown in Figure 9.18.
Figure 9.18. The VM for RAC-Node-01 created from the Oracle Enterprise Linux 6.x 64-bit x86 template

Step 9.6—Edit the Newly Created VM for RAC-Node-01
To edit the VM for RAC-Node-01 follow these steps:
1. Choose OVM Manager → Servers and VMs → Select Server Pool → Select Virtual Machine → Edit Virtual Machine.
2. Modify the following options as shown in the configuration tab (Figure 9.19):
• Operating system: Oracle Linux 6
• Max memory (MB): 8,192
• Max processors: 8
• Networks: Specify the appropriate network(s)
• High Availability: Unchecked (OVM HA is incompatible with Oracle RAC)
• Boot order: Disk
• Start policy: Start on best server
• Virtual disks:
• System (virtual disk): Add another disk for Oracle binaries: 25GB
• GRID1 ASM disk group:
Qty: 6 Disks
Individual disk size: 15G
• DATA1 ASM disk group:
Qty: 6 Disks
Individual disk size: 50G
• RECO1 ASM disk group:
Qty: 1 Disk
Individual disk size: 50G
Figure 9.19. Configure the newly created VM

3. On the Network tab, add two VNICs (see Figure 9.20), one each for the public and private cluster interconnects.
Figure 9.20. Add the required VNICs

4. Finally, on the Disks tab, attach the shared virtualized disks for the ASM disk groups (see Figure 9.21).
Figure 9.21. Oracle VM 3.x: Attach the shared virtualized disks for the GRID1, DATA1, and RECO1 ASM disk groups

Step 9.7—Start Up the RAC-Node-01 VM
To start up the RAC-Node-01 VM, follow these steps:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Select Virtual Machine → Start Virtual Machine.
2. Press the Launch Console button to start the VNC console window (see Figure 9.22).
Figure 9.22. Oracle VM 3.x: Boot RAC node—First-boot interview/configuration process

3. Configure the various options in the new VM in the first-boot interview process:
• System hostname: For example, bsfrac01.bsflocal.com (12c RAC-Node-01)
• Specify VNIC device: For example, ETH0
• Boot protocol (static/DHCP): Static
• Activate VNIC on boot: YES
• IP address of the interface: For example, 192.168.2.40 (for the public network: ensure that this is an unused IP address. If you have DNS server(s), register the IP address/hostname with them)
• Netmask: For example, 255.255.255.0
• IP address of gateway: For example, 192.168.2.1
• IP address(es) of DNS server(s): For example, 192.168.2.1
• OS root password: *******
Set Up and Configure RAC-Node-01
The following sections explain how to set up and configure Node 01 for RAC 12c.
Step 9.8—Set the Network Configuration of the Private Cluster Interconnect VNIC
To set the network configuration of the private cluster interconnect VNIC, issue the following commands:
[root@bsfrac01 network-scripts]# pwd
/etc/sysconfig/network-scripts
[root@bsfrac01 network-scripts]# cp ifcfg-eth0 ifcfg-eth1
[root@bsfrac01 network-scripts]# vi ifcfg-eth1
You have new mail in /var/spool/mail/root
[root@bsfrac01 network-scripts]# cat ifcfg-eth1
DNS1=192.168.2.1
GATEWAY=192.168.2.1
NETMASK=255.255.255.0
IPADDR=192.168.3.40
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=no
DEVICE=eth1
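The new interface can be brought up and checked without a full reboot. A minimal sketch, assuming the file was saved exactly as shown above:
[root@bsfrac01 network-scripts]# ifup eth1
[root@bsfrac01 network-scripts]# ifconfig eth1 | grep "inet addr"
The second command should report the 192.168.3.40 address configured above.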
Step 9.9—Modify /etc/hosts File to Include the Relevant Entries for RAC 12c
To modify the /etc/hosts file to include the relevant entries for RAC 12c, open the file and edit it, as in the following:
[root@bsfrac01 network-scripts]# vi /etc/hosts

127.0.0.1 localhost.localdomain localhost localhost4
::1 localhost6.localdomain6 localhost6

# PUBLIC IP Addresses of 12c RAC Cluster
192.168.2.40            bsfrac01        bsfrac01.bsflocal.com
192.168.2.41            bsfrac02        bsfrac02.bsflocal.com
192.168.2.42            bsfrac03        bsfrac03.bsflocal.com
192.168.2.43            bsfrac04        bsfrac04.bsflocal.com
192.168.2.44            bsfrac05        bsfrac05.bsflocal.com

# SCAN IP Addresses of 12c RAC Cluster
192.168.2.70            bsfrac-scan    bsfrac-scan.bsflocal.com
192.168.2.71            bsfrac-scan    bsfrac-scan.bsflocal.com
192.168.2.72            bsfrac-scan    bsfrac-scan.bsflocal.com

# Virtual IP Addresses of 12c RAC Cluster
192.168.2.60            bsfrac-vip01   bsfrac-vip01.bsflocal.com
192.168.2.61            bsfrac-vip02   bsfrac-vip02.bsflocal.com
192.168.2.62            bsfrac-vip03   bsfrac-vip03.bsflocal.com
192.168.2.63            bsfrac-vip04   bsfrac-vip04.bsflocal.com
192.168.2.64            bsfrac-vip05   bsfrac-vip05.bsflocal.com

# Private Cluster Interconnect IP Addresses of 12c RAC Cluster
192.168.3.40            bsfrac-priv01   bsfrac-priv01.bsflocal.com
192.168.3.41            bsfrac-priv02   bsfrac-priv02.bsflocal.com
192.168.3.42            bsfrac-priv03   bsfrac-priv03.bsflocal.com
192.168.3.43            bsfrac-priv04   bsfrac-priv04.bsflocal.com
192.168.3.44            bsfrac-priv05   bsfrac-priv05.bsflocal.com
Note
Single Client Access Name (SCAN) listener IP information is included in the /etc/hosts file. The SCAN IPs should be registered with the appropriate DNS server(s).
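A quick sanity check that the new entries resolve locally; a minimal sketch using the example hostnames above:
[root@bsfrac01 ~]# getent hosts bsfrac01 bsfrac-priv01 bsfrac-vip01 bsfrac-scan
[root@bsfrac01 ~]# ping -c 1 bsfrac-priv01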
Step 9.10—Check for Space Requirements
Ensure that enough /tmp space is available to support RAC 12c:
[root@bsfracvx1 ~]# df -h /tmp
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_bsfracvx1-lv_root
                       26G  5.3G   20G  22% /
Step 9.11—Disable the Linux Software Firewall
Next, disable the Linux software firewall.
Note
This step is optional and should be exercised with caution. Only do it if you have ancillary hardware/software firewalls in place in the corporate landscape.
[root@bsfrac01 ~]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
2    ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0
3    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
4    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:22
5    REJECT     all  --  0.0.0.0/0            0.0.0.0/0
reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination
1    REJECT     all  --  0.0.0.0/0            0.0.0.0/0
 reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination

[root@bsfrac01 ~]# service iptables off
Usage: iptables {start|stop|restart|condrestart|status|panic|save}
[root@bsfrac01 ~]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules: c                             [  OK  ]
[root@bsfrac01 ~]# chkconfig iptables off
[root@bsfrac01 ~]# service ip6tables status
Table: filter
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     all      ::/0                 ::/0                state RELATED,ESTABLISHED
2    ACCEPT     icmpv6    ::/0                 ::/0
3    ACCEPT     all      ::/0                 ::/0
4    ACCEPT     tcp      ::/0                 ::/0                state NEW tcp dpt:22
5    REJECT     all      ::/0                 ::/0
reject-with icmp6-adm-prohibited

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination
1    REJECT     all      ::/0                 ::/0
reject-with icmp6-adm-prohibited

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination

[root@bsfrac01 ~]# service ip6tables stop
ip6tables: Flushing firewall rules:                        [  OK  ]
ip6tables: Setting chains to policy ACCEPT: filter         [  OK  ]
ip6tables: Unloading modules:                              [  OK  ]
[root@bsfrac01 ~]# chkconfig ip6tables off
Step 9.12—Configure and Restart NTPD Client
Edit/configure the /etc/ntp.conf file and restart the Network Time Protocol Daemon (NTPD) server on the RAC node VM.
$ vi /etc/ntp.conf
# Modify the following line to reflect the NTP servers with which the time
# will be synchronized
server 192.168.2.20
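Oracle Clusterware also expects ntpd to run with the slewing option (-x) so that the clock is never stepped backward. This is set in /etc/sysconfig/ntpd; a minimal sketch (the exact OPTIONS line on your build may differ):
# /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

[root@bsfrac01 ~]# service ntpd restart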
Step 9.13—Partition, Format, and Mount /u01 on the 25-GB Local Virtual Hard Disk
To partition, format, and mount /u01 on the 25-GB local virtual hard disk, proceed as follows.
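First, partition and format the new disk. A minimal sketch, assuming the 25-GB virtual disk is presented to the guest as /dev/xvdb (device names may differ in your environment):
[root@bsfrac01 /]# fdisk /dev/xvdb          # create a single primary partition spanning the disk
[root@bsfrac01 /]# mkfs.ext4 /dev/xvdb1     # format the new partition with ext4
Then create the mount point and mount the filesystem: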
[root@bsfrac01 /]# mkdir /u01
You have new mail in /var/spool/mail/root
[root@bsfrac01 /]# mount /dev/xvdb1 /u01
[root@bsfrac01 /]# df -m
Filesystem           1M-blocks      Used Available Use% Mounted on
/dev/xvda2                9985      8087      1392  86% /
tmpfs                     1940         1      1940   1% /dev/shm
/dev/xvda1                  99        50        45  53% /boot
/dev/xvdb1               25195       172     23743   1% /u01
Make the mount point persistent by modifying the /etc/fstab file:
#
# /etc/fstab
# Created by anaconda on Fri Sep  7 08:14:40 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
LABEL=/                 /                       ext4    defaults        1 1
LABEL=/boot             /boot                   ext4    defaults        1 2
/dev/xvda3              swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/xvdb1              /u01                    ext4    defaults        0 0
Step 9.14—Disable the SELINUX Option
Disable the SELINUX option by modifying the following file:
[root@bsfrac01 /]# vi  /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multilevel security protection.
SELINUXTYPE=targeted
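The SELINUX=disabled setting takes effect at the next reboot. To drop to permissive mode immediately for the current session, you can also run the following (a minimal sketch):
[root@bsfrac01 /]# setenforce 0
[root@bsfrac01 /]# getenforce
Permissive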
Step 9.15—Install VSFTPD (FTP Server)
Install VSFTPD (FTP server) by performing the following:
[root@bsfrac01 ~]# yum install vsftpd
Note
This step is optional.
Step 9.16—Install X Window System Desktop
Install the X Window System desktop by performing the following steps:
[root@bsfrac01 /]# yum groupinstall "X Window System" desktop
Loaded plugins: security
Setting up Group Process
Package 1:xorg-x11-xauth-1.0.2-7.1.el6.x86_64 already installed and latest version
Package hal-0.5.14-11.el6.x86_64 already installed and latest version
Package 1:dbus-1.2.24-7.0.1.el6_3.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package NetworkManager.x86_64 1:0.8.1-34.el6_3 will be installed
.
.
.
 openssh-server              x86_64        5.3p1-84.1.el6          ol6_latest        298 k
 rhn-check                   noarch        1.0.0-87.0.6.el6        ol6_latest         60 k
 rhn-client-tools            noarch        1.0.0-87.0.6.el6        ol6_latest        492 k
 rhn-setup                   noarch        1.0.0-87.0.6.el6        ol6_latest         96 k

Transaction Summary
=====================================================================================================================================================================================================
Install     265 Package(s)
Upgrade      19 Package(s)

Total download size: 123 M
Is this ok [y/N]: y
Downloading Packages:
(1/284): ConsoleKit-x11-0.4.1-3.el6.x86_64.rpm  |  20 kB     00:00
(2/284): DeviceKit-power-014-3.el6.x86_64.rpm   |  90 kB     00:00
.
.
.
Dependency Updated:
  libreport.x86_64 0:2.0.9-5.0.1.el6_3.2
  libreport-cli.x86_64 0:2.0.9-5.0.1.el6_3.2
  libreport-plugin-kerneloops.x86_64 0:2.0.9-5.0.1.el6_3.2
  libreport-plugin-logger.x86_64 0:2.0.9-5.0.1.el6_3.2
  libreport-plugin-mailx.x86_64 0:2.0.9-5.0.1.el6_3.2
  libreport-plugin-reportuploader.x86_64 0:2.0.9-5.0.1.el6_3.2
  libreport-python.x86_64 0:2.0.9-5.0.1.el6_3.2
  nspr.x86_64 0:4.9.2-0.el6_3.1
  nss.x86_64 0:3.13.6-2.0.1.el6_3
  nss-sysinit.x86_64 0:3.13.6-2.0.1.el6_3
  nss-tools.x86_64 0:3.13.6-2.0.1.el6_3
  nss-util.x86_64 0:3.13.6-1.el6_3
  openssh.x86_64 0:5.3p1-84.1.el6
  openssh-clients.x86_64 0:5.3p1-84.1.el6
  openssh-server.x86_64 0:5.3p1-84.1.el6
  rhn-check.noarch 0:1.0.0-87.0.6.el6
  rhn-client-tools.noarch 0:1.0.0-87.0.6.el6
  rhn-setup.noarch 0:1.0.0-87.0.6.el6
Complete!
Note
The output of the X Window System desktop installation is very long and has been abbreviated.
Modify the /etc/inittab file to start with a GUI login and reboot the system:
#id:3:initdefault:     # Change option 3 to 5, as shown in the following line
id:5:initdefault:
Step 9.17—Reboot RAC-Node-01 for All of the Preceding Setups/Configurations to Take Effect
To reboot, issue the following command:
[root@bsfrac01 network-scripts]# shutdown -r now
After a successful reboot, you will arrive at the login screen (see Figure 9.23).
Figure 9.23. Oracle VM 3.x: RAC-Node-01—OS/Linux 6.x 64-bit x86 login screen

Step 9.18—Verify the Network Settings after the Node Reboots
To verify the network settings after the reboot, do the following:
[root@bsfrac01 /]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:21:F6:00:00:01
          inet addr:192.168.2.40  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::221:f6ff:fe00:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:49103 errors:0 dropped:117 overruns:0 frame:0
          TX packets:12982 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:33876842 (32.3 MiB)  TX bytes:939705 (917.6 KiB)
          Interrupt:57

eth1      Link encap:Ethernet  HWaddr 00:21:F6:00:00:00
          inet addr:192.168.3.40  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::221:f6ff:fe00:0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:25180 errors:0 dropped:117 overruns:0 frame:0
          TX packets:198 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1316336 (1.2 MiB)  TX bytes:12163 (11.8 KiB)
          Interrupt:58

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:930 (930.0 b)  TX bytes:930 (930.0 b)
Oracle Software Preinstallation Steps on the RAC-Node-01 VM
Perform the following checks (steps 9.19–9.28) to satisfy the prerequisites for RAC 12c on Node 01.
Step 9.19—Check Space Requirement on /tmp
To check that the space requirement has been met, do the following (10 GB is recommended):
[oracle@bsfrac01 Database]$ df -h /tmp
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda2            9.8G  7.9G  1.4G  86% /
Step 9.20—Create the Required and Relevant OS Groups
Issue these commands to create the OS groups:
[root@bsfrac01 /]# groupadd -g 54321 oinstall
[root@bsfrac01 /]# groupadd -g 54322 dba
[root@bsfrac01 /]# groupadd -g 54323 oper
[root@bsfrac01 /]# groupadd -g 54324 backupdba
[root@bsfrac01 /]# groupadd -g 54325 dgdba
[root@bsfrac01 /]# groupadd -g 54326 kmdba
[root@bsfrac01 /]# groupadd -g 54327 asmdba
[root@bsfrac01 /]# groupadd -g 54328 asmoper
[root@bsfrac01 /]# groupadd -g 54329 asmadmin
Note
Some of the preceding steps are optional, and whether they should be done depends on the user’s job function; for example, DBA, DMA, storage/system administrator, or other role.
Step 9.21—Create the Oracle and Grid OS Users as the Oracle DB HOME Software Owners and Grid Infrastructure HOME Software Owners and Set Their Initial Passwords
To create the oracle and grid OS users as the Oracle DB HOME Software Owners and Grid Infrastructure HOME Software owners, respectively, and set their initial passwords, issue these commands:
[root@bsfrac01 /]# useradd -u 54321 -g oinstall -G dba,asmdba oracle
[root@bsfrac01 /]# useradd -u 54322 -g oinstall -G asmadmin,asmdba grid

[root@bsfrac01 /]# passwd grid
Changing password for user grid.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

[root@bsfrac01 /]# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
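A quick check that both users landed in the intended primary and supplementary groups; the output should reflect the groups created in step 9.20:
[root@bsfrac01 /]# id oracle
[root@bsfrac01 /]# id grid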
Step 9.22—Create the Optimal Flexible Architecture (OFA) Directory Structure for RAC 12c
As the root OS user, run the following commands:
[root@bsfrac01 /]# mkdir -p  /u01/app/12.1.0/grid
[root@bsfrac01 /]# mkdir -p /u01/app/grid
[root@bsfrac01 /]# mkdir -p /u01/app/oracle
[root@bsfrac01 /]# chown -R grid:oinstall /u01
[root@bsfrac01 /]# chown oracle:oinstall /u01/app/oracle
[root@bsfrac01 /]# chmod -R 775 /u01/
Step 9.23—Observe/Verify the Required and Relevant Permissions of the Created OFA Directory Structure
Check the required and relevant permissions set for the OFA directory structure:
[root@bsfrac01 oracle]# ls -l /u01
total 4
drwxrwxr-x 5 grid oinstall 4096 Feb 25 23:29 app
[root@bsfrac01 oracle]# ls -l /u01/app/
total 12
drwxrwxr-x 3 grid   oinstall 4096 Feb 25 23:29 12.1.0
drwxrwxr-x 2 grid   oinstall 4096 Feb 25 23:29 grid
drwxrwxr-x 2 oracle oinstall 4096 Feb 25 23:29 oracle
Step 9.24—Set Up and Configure the NTPD
Configure the NTPD:
[root@bsfrac01 ~]# service ntpd start
Starting ntpd:                                             [  OK  ]
[root@bsfrac01 ~]# chkconfig ntpd on
Step 9.25—Turn Off and Unconfigure the Avahi daemon
Do the following to turn off and unconfigure the Avahi daemon:
[root@bsfrac01 ~]# service avahi-daemon stop
Shutting down Avahi daemon:                                [  OK  ]
[root@bsfrac01 ~]# chkconfig avahi-daemon off
Step 9.26—Install Packages/Options for Linux Kernel
Within OEL, using the GUI software installer (see Figure 9.24) or the rpm command-line utility, ensure that the following packages for OEL 6.x x86_64 are installed at the listed versions or later. Additionally, download and install any ancillary packages that help with the performance of the RAC.
Figure 9.24. Oracle VM 3.x: RAC-Node-01—OS/Linux 6.x 64-bit x86 custom package/rpm installation

glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
ksh
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686

[root@bsfrac01 ~]# rpm -qa glibc*
glibc-common-2.12-1.80.el6_3.5.x86_64
glibc-devel-2.12-1.80.el6_3.5.x86_64
glibc-2.12-1.80.el6_3.5.x86_64
glibc-headers-2.12-1.80.el6_3.5.x86_64
[root@bsfrac01 ~]# rpm -qa libstdc++*
libstdc++-4.4.6-4.el6.x86_64
libstdc++-devel-4.4.6-4.el6.x86_64
[root@bsfrac01 ~]# rpm -qa gcc*
gcc-c++-4.4.6-4.el6.x86_64
gcc-4.4.6-4.el6.x86_64
[root@bsfrac01 ~]# rpm -qa ksh*
ksh-20100621-16.el6.x86_64
[root@bsfrac01 ~]# rpm -qa make*
make-3.81-20.el6.x86_64
[root@bsfrac01 ~]# rpm -qa sysstat*
sysstat-9.0.4-20.el6.x86_64
[root@bsfrac01 ~]# rpm -qa libgcc*
libgcc-4.4.6-4.el6.x86_64
[root@bsfrac01 ~]# rpm -qa libaio*
libaio-devel-0.3.107-10.el6.x86_64
libaio-0.3.107-10.el6.x86_64
[root@bsfrac01 ~]# rpm -qa binutils*
binutils-2.20.51.0.2-5.34.el6.x86_64
[root@bsfrac01 ~]# rpm -qa compat-lib*
compat-libcap1-1.10-1.x86_64
compat-libstdc++-33-3.2.3-69.el6.x86_64
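On OEL 6.x, an alternative to checking each package by hand is the Oracle preinstall RPM, which pulls in the required packages and applies many of the kernel and user-limit prerequisites. A minimal sketch, assuming the ol6_latest yum channel is configured (package availability may vary by release):
[root@bsfrac01 ~]# yum install oracle-rdbms-server-12cR1-preinstall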
Step 9.27—Create Primary Partitions for All the GRID1, DATA1, and RECO1 ASM Disk Groups
Next, create primary partitions for all the GRID1, DATA1, and RECO1 ASM disk groups:
[root@bsfrac01 ~]# fdisk /dev/xvdc
device contains neither a valid DOS partition table, nor Sun, SGI, or OSF disklabel.
Building a new DOS disklabel with disk identifier 0x6a917f21.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-6527, default 6527):
Using default value 6527
Command (m for help): w
The partition table has been altered!
Note
Repeat the preceding steps for all the ASM disks, including grid infrastructure disks.
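Partitioning each disk interactively quickly becomes tedious. The following is a minimal, non-interactive sketch; it assumes the remaining shared ASM disks appear as /dev/xvdd through /dev/xvdn, so verify the device names with fdisk -l before running it, because it writes new partition tables:
[root@bsfrac01 ~]# for d in /dev/xvd{d..n}; do
>   echo -e "n\np\n1\n\n\nw" | fdisk $d     # new primary partition 1, default start/end, write
> done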
Step 9.28—Verify the Partition Structures for the Underlying Disks in the GRID1, DATA1, and RECO1 ASM Disk Groups
Verify the partition structures for the underlying disks:
[root@bsfrac01 /]# fdisk -l
Step 9.29—Configure ASM Library on RAC-Node-01
Configure the ASM library by performing the following steps:
[root@bsfrac01 dev]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl+C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmdba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK01 /dev/xvdc1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK02 /dev/xvdd1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK03 /dev/xvde1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK04 /dev/xvdf1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK05 /dev/xvdg1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK01 /dev/xvdh1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK02 /dev/xvdi1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK03 /dev/xvdj1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK04 /dev/xvdk1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK05 /dev/xvdl1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK06 /dev/xvdm1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk RECO1DISK06 /dev/xvdn1
Writing disk header: done
Instantiating disk: done
Note
You can also choose to set up and configure UDEV rules for the ASM disks instead of ASMLib; a sketch follows the disk listing below.
Verify that all the ASM disks have been created:
[root@bsfrac01 dev]# /etc/init.d/oracleasm listdisks
DATA1DISK01
DATA1DISK02
DATA1DISK03
DATA1DISK04
DATA1DISK05
DATA1DISK06
GRID1DISK01
GRID1DISK02
GRID1DISK03
GRID1DISK04
GRID1DISK05
RECO1DISK06
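If you prefer UDEV rules over ASMLib, the following is an illustrative sketch only; the device names, owner, group, and file name are assumptions and must be adapted to your own disk layout and role separation:
# /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="xvdc1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="xvdd1", OWNER="grid", GROUP="asmadmin", MODE="0660"
# ...one rule per ASM disk partition...

[root@bsfrac01 ~]# udevadm control --reload-rules
[root@bsfrac01 ~]# udevadm trigger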
Step 9.30—Download and Stage the Oracle Software Binaries
Download, unzip, and stage the Oracle software grid and database software binaries:
[oracle@bsfrac01 Database]$ unzip -q linuxx64_database_12.1BETA_130131_1of2.zip
Repeat the unzip process for all the software binary zip files, and verify the unzipped and staged directory structure.
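When several zip files are staged, a small loop avoids repeating the command; a sketch, with the file-name pattern assumed from the beta file name shown above:
[oracle@bsfrac01 Database]$ for f in linuxx64_*12.1*.zip; do unzip -q "$f"; done
[oracle@bsfrac01 Database]$ ls -l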
Step 9.31—Establish a Save Point: Make a Backup Copy of the Ready-to-Go OVM in the Form of an OVM Template
Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM → Right-click → Clone or Move.
Step 9.32—Remove (Temporarily) the Shared Disks and Clone the Ready-to-Go OVM
Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM, and click the Edit button.
As shown in Figure 9.25, this action temporarily removes all the shared ASM virtual disks. You need to do this; otherwise, clones of these disks will be created unnecessarily during the ensuing cloning process for all the RAC nodes.
Figure 9.25. Oracle VM 3.x: RAC-Node-01—Local virtual disks after temporarily removing the virtualized shared disks for Automatic Storage Management

Step 9.33—Clone the Other Nodes of the RAC Cluster from the Ready-to-Go Node 01 VM
To clone the other nodes of the RAC, follow these steps:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM and right-click. Choose Clone or Move (see Figure 9.26).
Figure 9.26. Oracle VM 3.x: Clone the other RAC nodes from the VM from Node 01

2. Select the Create a clone of this VM option.
3. Select and enter the following options:
• Target server pool
• Description
• Clone to a: VM
• Clone count: Specify the clone count for the RAC nodes
As shown in Figure 9.27, all the nodes for a five-node RAC have been successfully created and are ready for further configuration.
Figure 9.27. Oracle VM 3.x: VMs for all nodes of the RAC have been created and cloned

Step 9.34—Reattach the Shared Disks to the RAC Node OVMs
Next reattach the shared disks to the RAC node OVMs:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM, and click the Edit button (see Figure 9.28).
Figure 9.28. Oracle VM 3.x: RAC-Node-01—Reattach the virtualized shared disks for Automatic Storage Management

2. Select the shared disks.
3. Repeat the preceding steps for all the RAC node VMs.
Step 9.35—Start Each Guest VM and Edit the Network Settings
1. Go to System → Preferences → Network Settings → IPv4 Settings.
2. Modify the IP addresses of the eth0 (public) and eth1 (private cluster interconnect) NICs, as shown in Figure 9.29.
Figure 9.29. Oracle VM 3.x: RAC-Node-01—Network configuration for the RAC node VMs

3. Repeat for all of the RAC nodes.
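Because the clones were taken from the RAC-Node-01 image, each new VM still carries Node 01's hostname and the udev-cached NIC names. A minimal per-node fix-up sketch, run as root on each clone (the hostname is an example and must match your /etc/hosts plan):
sed -i 's/^HOSTNAME=.*/HOSTNAME=bsfrac02.bsflocal.com/' /etc/sysconfig/network
rm -f /etc/udev/rules.d/70-persistent-net.rules     # regenerated on reboot with the new MAC addresses
reboot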
Install and Set Up 12c Grid Infrastructure
As you can see from the preceding sections, setting up and installing Oracle RAC is all about doing an extensive amount of homework in the right way. To summarize the activity covered in the last few sections, ensure that the following virtual infrastructure is available and ready for deployment in a brand-new RAC setup:
• Dedicated virtual network for RAC has been created and configured.
• Virtualized shared ASM disks for the GRID1 ASM disk group are created and ready for use.
• VMs that will constitute the nodes of the RAC 12c have been created, set up, and configured.
• OEL 6.x is set up and configured on RAC-Node-01 using one of two alternative approaches: installing it from scratch or using the downloadable templates for OVM for x86.
• The VMs for the other nodes of the RAC 12c have been cloned from the RAC-Node-01 VM.
It’s now time to set up Oracle grid infrastructure and get the RAC 12c bird off the ground and in the air.
Step 9.36—In OVM Manager for x86, Start All the VMs for the Oracle RAC 12c Cluster
As shown in Figure 9.30, at this point you want to start the VMs for the RAC 12c cluster.
Figure 9.30. Oracle VM 3.x: Start up the VMs for the RAC 12c cluster

Step 9.37—In the Grid Infrastructure Staging Directory, Run the OUI to Set Up the Grid Infrastructure
Enter the information and make the selections in the Wizard Entry screens of the Oracle Universal Installer (OUI), as shown in Figures 9.31 through 9.45. In some cases, you will need to edit according to the specific needs of your organization.
Figure 9.31. Enter your support credentials.

1. Enter the My Oracle Support (MOS) credentials for support on software updates and patches (see Figure 9.31), or choose to skip them.
2. Select the Install and Configure Oracle Grid Infrastructure for a Cluster option (see Figure 9.32).
Figure 9.32. Select Install and Configure Oracle Grid Infrastructure for a Cluster.

3. Select the Configure a Flex Cluster option (see Figure 9.33).
Figure 9.33. Select Configure a Flex Cluster.

4. Select the appropriate product language(s).
5. Enter the required information for Single Client Access Name (SCAN) and Grid Naming Service (GNS) (see Figure 9.34).
Figure 9.34. Enter SCAN and GNS information.

6. Enter the relevant information for the RAC 12c nodes, including for HUB and LEAF nodes (see Figure 9.35).
Figure 9.35. Enter HUB and LEAF node information.

7. Enter the required information for establishing and testing SSH connectivity and user equivalence between all the RAC nodes, as in Figure 9.36.
Figure 9.36. Enter credentials for SSH connectivity.

8. Once step 7 is done, the system tests the SSH connectivity between the nodes (see Figure 9.37).
Figure 9.37. Password-less connectivity testing in progress

9. Specify the network interfaces for public, private cluster interconnect, and ASM. The system will validate those as well (see Figure 9.38).
Figure 9.38. The system validates the public and private interfaces across the cluster nodes.

10. Select the Configure Grid Infrastructure Management Repository option (see Figure 9.39).
Figure 9.39. Select the Configure Grid Infrastructure Management Repository option.

11. If you choose No, then the message in Figure 9.40 is displayed.
Figure 9.40. Grid Infrastructure Management Repository is not selected as an option.

12. Specify the ASM disks for the GRID1 ASM disk group with a HIGH redundancy level (see Figure 9.41).
Figure 9.41. Specify the ASM disks for the GRID1 ASM disk group.

13. Enter the passwords for the Oracle SYS and ASMSNMP DB users.
14. Select the Do Not Use Intelligent Platform Management Interface (IPMI) option.
15. Specify the OS groups for ASM.
16. Enter the Oracle BASE and HOME locations.
17. Enter the Oracle inventory location (see Figure 9.42).
Figure 9.42. Enter the inventory location.

18. Enter the root OS password or sudo access credentials to automatically run the root.sh configuration scripts (see Figure 9.43).
Figure 9.43. Enter the root password.

19. Generate and run any runfixup.sh scripts to remediate any prerequisite issues (see Figure 9.44).
Figure 9.44. Fix any issues.

20. Press Install to initiate the installation process for grid infrastructure (see Figure 9.45).
Figure 9.45. Initiate the installation process.