Snap Clones
In simple terms, Snap Clone is a storage-agnostic self-service approach to rapidly creating space-efficient clones of large databases (and by and large, we’re talking terabytes or more). Now that’s probably more buzzwords in one sentence than anyone’s brain can deal with without exploding, so let’s look at some of those terms more closely:
• Storage agnostic: Snap Clone supports all storage vendors, both network attached storage (NAS) and storage area network (SAN).
• Self-service: In the XaaS world—where X can be any of infrastructure (I), middleware (MW), platform (P), or database (DB)—one of the key features is empowering the end user to do the work rather than waiting on some techie to find time in her or his otherwise busy schedule. So it’s the end user who makes the ad hoc clones here, not the storage admin.
• Rapid: People simply don’t have the time anymore to wait weeks for provisioning to happen (for that matter, they probably never did, but that’s another discussion), so you have to support the functionality to clone databases in minutes rather than the days or weeks it used to take.
• Space efficient: When you’re working with terabyte or larger databases, you may not have the storage to create full-sized clones, so you have to significantly reduce the storage footprint to start with.
The Challenges Snap Clone Addresses
Snap Clone can be used to address a number of major challenges:
• Lack of automation: Manually provisioning and cloning new databases (e.g., for test or development systems) is one area that many DBAs complain is too time consuming. The process can take days to weeks, often because of the need to coordinate the involvement of different groups, as shown in Figure 8.27.
Figure 8.27. The standard cloning workflow
When an end user, whether a developer or a QA engineer, needs a database, he or she typically has to go through an approval process like that shown in the figure, which then translates into a series of tasks for the DBA, the sysadmin, and the storage admin. The sysadmin has to provide the compute capacity, and the storage admin has to provide the space on a filer. Finally, the DBA would install the bits, create the database (optionally on Real Application Clusters), and deliver the database to the user. Clearly, this is a cumbersome and time-consuming process that needs to be improved.
• Database unfriendly solutions: Obviously, when a need like this goes looking for a solution, different people take different approaches to meeting it. There are a variety of point solutions and storage solutions out there, but the vast bulk of them are not database aware. They tend to clone storage volumes rather than databases and have no visibility into the database stack, which of course makes it hard for a DBA to triage performance issues. They also lack the ability to track configuration, compliance, and data security issues, and they have limited or no lifecycle capabilities.
• Storage issues and archaic processes: One of the main issues DBAs face is storage. Data volumes are ever increasing, particularly in these Big Data days, and the growth can often outpace your storage capacity. You can throw more disks at the problem, but it never seems to be enough, and you can end up with degraded performance if you take the route of sharing clones between users. There can also be different processes and different priorities between the storage team and the DBA team, and you may still have fixed refresh cycles, making it difficult to clone on an ad hoc basis.
In the latest release of Snap Clone, you can provide thin clones via either software solutions (Solaris ZFS and CloneDB) or hardware solutions (Sun ZFS Storage Appliance, NetApp, and EMC). Let’s touch on each of these in a bit more detail. The examples presented here use Enterprise Manager 12.1.0.4.3 with the latest plugins (specifically for Snap Clone, the important plugin is the Enterprise Manager for Storage Management plugin release 12.1.0.5.0). If you are on a slightly different version, you may see a slightly different look and feel to the user interface and available functionality.
Software Solutions
Let’s start by looking at the software-based solutions.
Snap Clone using Solaris ZFS
Snap Clone using ZFS uses a single stock Solaris 11.1+ image, which can be either physical or virtual (note that it doesn’t use the Sun ZS3 appliance). It supports both NAS and SAN. If you are using SAN, then mount the logical unit numbers (LUNs) as raw disk and format with the ZFS filesystem. It’s important to note here that this does not require any snapshot or cloning licenses from the storage vendor, as these features are available for free.
Additional features provided with this solution include compression, de-duplication, I/O caching, and so on. If you also need high availability in this configuration, that can be handled externally either via Solaris Clusters or by using the high-availability features of the underlying hypervisor.
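Under the covers, this configuration relies on ordinary ZFS snapshot and clone functionality, which Enterprise Manager drives for you once the storage is registered. A minimal sketch of the equivalent manual commands on the Solaris host, with hypothetical pool and device names, looks like this:

# Create a pool and a compressed, deduplicated filesystem for the test master data
root@solhost:~# zpool create dbpool c0t600144F0ABCD0001d0
root@solhost:~# zfs create -o compression=on -o dedup=on dbpool/testmaster
root@solhost:~# zfs set sharenfs=on dbpool/testmaster
# A thin clone is simply a writable clone of a snapshot of the test master
root@solhost:~# zfs snapshot dbpool/testmaster@baseline
root@solhost:~# zfs clone dbpool/testmaster@baseline dbpool/devclone1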
Figure 8.28 shows what the configuration looks like diagrammatically.
Figure 8.28. Snap Clone using ZFS
CloneDB Using dNFS
With CloneDB using the Direct NFS client (dNFS), you get the capability to create thin copies of a database from RMAN image copies. This approach uses the NFS v3 client that has been embedded in the database software since 11.2.0.3. Currently, this is supported for single-instance databases, but only on filesystems (i.e., ASM is not yet supported).
The advantages of this approach include the following:
• It’s easy to set up.
• No special storage software is needed.
• It works on all platforms.
• It’s both time efficient (instantaneous cloning) and space efficient (you can create multiple clones based on one backup).
• It uses dNFS to improve the performance, high availability, and scalability of NFS storage.
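Enterprise Manager performs the CloneDB orchestration for you, but it helps to know what the database home needs. A minimal sketch, with hypothetical paths, of enabling the embedded Direct NFS client in an 11.2 or later home follows:

# Enable the Direct NFS client in the database home; host and paths are hypothetical
[oracle@dbhost ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@dbhost lib]$ make -f ins_rdbms.mk dnfs_on
# After the next instance startup, query v$dnfs_servers to confirm dNFS is in use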
Hardware Solution
As mentioned previously, the hardware solution for Snap Clone is available on a range of hardware, including Sun ZFS Storage Appliance, NetApp, and EMC. The EMC configuration is the most interesting one, as Snap Clone on EMC includes support for databases on ASM, so let’s look at that example in more detail.
Using Snap Clone on ASM and EMC storage provides the ability to create “live” thin clones of databases that are on ASM. A live clone is not snapshot based; rather, it is a live copy of the database that can reside within the same cluster or in a different one. Both single-instance and RAC are supported—supported versions are 10.2.0.5 and higher of the database and 11.2 and higher of the grid infrastructure code. This functionality works on both EMC VMAX (with TimeFinder VP Snap) and VNX storage appliances.
Figure 8.29 shows what this configuration looks like diagrammatically.
Figure 8.29. Snap Clone using ASM on EMC storage
Snap Clone Setup
Now that we’ve looked at the different setups architecturally, let’s walk through the setup for Snap Clone. In this example, we use CloneDB because it requires the most straightforward setup. Refer to the Cloud Administration Guide (found under Private Cloud Setup and Administration in the EM12c documentation) for more details on the other setups.
The steps to set up Snap Clone are as follows:
1. Update the relevant plugins (or install them if you don’t already have them). You will need the following plugins (the versions mentioned here are for the 12.1.0.4.3 environment used in this example):
• Enterprise Manager for Cloud version 12.1.0.9.0
• Enterprise Manager for Oracle Cloud Framework version 12.1.0.2.0
• Enterprise Manager for Storage Management version 12.1.0.5.0
• Enterprise Manager for Oracle Database version 12.1.0.7.0
2. Set up the relevant roles, users, PaaS infrastructure zones, and database pools, as described in Chapter 5.
3. Create a database profile and service template so that self-service users can create thin clones as they require.
Let’s look at the details of step 3.
Creating a Database Profile
Creating a database profile to use Snap Clone is a little bit different from creating the profiles we looked at earlier.
1. Start from the same place (Setup → Cloud → Database → Data Sources → Data Profiles, and then click Create). In this case, we are going to create the profile using an RMAN Database Image, as shown in Figure 8.30. Note the message that also appears on this screen (at the bottom) that says Snap Clone is not enabled. This rather spurious message occurs because we don’t have any relevant hardware installed that we can use to create Snap Clone. However, it can be safely ignored because we are using CloneDB in this example.
Figure 8.30. Creating a database profile for Snap Clone, step 1
2. In this example, the database from which the profile is created is running in ARCHIVELOG mode. That means that on the next screen, shown in Figure 8.31, the backup can be done in online mode. A directory for the backup to take place in must also be specified.
Figure 8.31. Creating a database profile for Snap Clone, step 2
3. Steps 3 and 4 of the wizard are the same as in previous database profile creations in this chapter, so for brevity, they are omitted here. Now we can look at using that profile in the service template creation. Again, we start that from Setup → Cloud → Database → Service Templates, and then click the Create button. The main difference here is that once we select the database profile we just created, there will now be a Create CLONEDB option, as you can see in Figure 8.32.
Figure 8.32. Creating a service template for Snap Clone, step 1
4. The important information we need to provide to use Snap Clone with CloneDB is on the next screen. We need to tell the wizard a location that is shared using NFS. In this example, the location is /u02/copy-on-write, which has been set up as an NFS share, as shown in Figure 8.33.
Figure 8.33. Creating a service template for Snap Clone, step 2
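For the wizard to accept this location, /u02/copy-on-write must already be exported and mounted over NFS on the database host. A minimal sketch, assuming hypothetical host names, might look like the following:

# On the host exporting the copy-on-write area (host names are hypothetical)
[root@nfshost ~]# grep copy-on-write /etc/exports
/u02/copy-on-write *(rw,sync,no_root_squash)
[root@nfshost ~]# exportfs -ra
# On the database host, mount it with Oracle-recommended NFS options
[root@dbhost ~]# mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600 \
  nfshost:/u02/copy-on-write /u02/copy-on-write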
5. Again, the remaining steps of the wizard are the same as in the previous service template examples, so they are omitted here. At the end, however, you should see a message that the service template has been created successfully, as shown in Figure 8.34. Once that is complete, the self-service user can select this template from the Self Service Catalog, just as shown in previous examples in earlier chapters.
Figure 8.34. Creating a service template for Snap Clone, step 3
Summary
In this chapter, we looked at the two main types of clones built in a cloud environment using Enterprise Manager 12c—full clones and snap (or thin) clones. A full clone involves taking a complete or full copy of an existing database, which can be done in one of two ways: by taking an RMAN backup or issuing an RMAN DUPLICATE command. A snap clone takes advantage of copy-on-write technology, available through both software and hardware solutions, to rapidly create space-efficient clones of large databases. These clones contain only changed blocks; unchanged blocks are read from the test master database the clone is built from. As a result, many snap clones can be taken from a single test master without using anywhere near the amount of storage the test master takes up.
From here you can see a number of regions—General, Request Status, Incidents, Top Resource Providers, Top Requested Services, and Top Charge Trends (data is displayed in the Charge Trends region only if you have enabled chargeback), so you get a good overview of the health of your cloud infrastructure. If you want to drill down to look at more details of what service instances you have available, you can click on the number to the right of Service Instances. This link will show you exactly what service instances are defined in your cloud environment, their status, the service type, the resource provider, the service template, and even who owns it (see Figure 7.2).
Figure 7.2. Service instances
From here you can also click on the name of the service instance (for example, crm00001) to drill into the home page for that particular target, or you can simply select the row containing that target to do the following tasks:
• Start or stop the instance. Which one of these options is available depends on the status the instance is currently in. For example, if the service instance is already started, the Start option will remain grayed out.
• Examine the history of the service instance by clicking the History icon. This feature shows all the operations that have taken place against this particular service instance since it was created.
• Delete the service instance if it is no longer required. Obviously, caution should be taken before deciding to perform this operation.
You may find that the two most important links are the ones that show failed requests (the red slice of the pie chart in Figure 7.1) and the heat map (which you get to by clicking View Map on the top right of the screen shown in Figure 7.1). The failed request statuses are ones you’ll want to investigate further to see what went wrong. Sometimes, this task is as simple as an attempt to start a database that’s already started, but at other times, you’ll find more complex issues that need further investigation and follow-up. You should also check here for requests that failed due to insufficient quota or resources and investigate whether more quota or resources need to be allocated.
The heat map is a way to graphically view which resource providers (such as platform as a service [PaaS] infrastructure zones and database pools) are using the most resources. Even in the relatively simple environment we’re using as a demonstration, the heat map (see Figure 7.3) shows that the host allocation for the Salt Lake City PaaS infrastructure zone, at 66 percent utilization, is starting to reach a level that may require adding more PaaS infrastructure zones. In more complex environments, you may see resource providers that show up as red. If you select that resource provider, you’ll see what the problem is in the Metric Details region shown below the heat map, and you can drill into the Value link to discover more details.
Figure 7.3. Heat map from a simple cloud environment
There is also a menu of items that you can select from using the Oracle Cloud menu on the top left of the Cloud Home page. Several items in this menu are useful from an administration and management perspective.
Members → Topology
This menu item shows you a graphical representation of how the different items in your cloud environment are related. It is particularly useful when the environment becomes more complicated, as it allows you to see at a glance how items are related to each other. By clicking on one of the items, you can see specific details listed under the Metric History region on the bottom right (see Figure 7.4 for an example).
Figure 7.4. Configuration Topology
Resource Providers, Service Templates, Service Instances
Each of these menu items takes you to a list of the different entities. From here, it is easy to select and edit each particular entity. You can also get to each of these pages by clicking the links next to each one in the General region of the Cloud Home page. From each page, you are also able to create new items or delete existing ones. Figure 7.5 shows an example from the Resource Providers page.
Figure 7.5. The Resource Providers page
Requests
If you click on Show All under Requests, you can see a complete list of the requests that have been made in your cloud environment, as shown in Figure 7.6. From here you can do the following:
• Select an individual request and look at the details for it.
• Reschedule the request if it hasn’t already been executed.
• Delete the request.
Figure 7.6. The Requests page
If you click on Requests → Dashboard, you’ll be taken to a more graphical representation of the request trends over the past seven days, as shown in Figure 7.7. Note that you can click on the Settings spanner on the top left to change this time period to one more suitable to you (Figure 7.7, for example, shows the last 31 days’ worth of requests). This page is particularly useful for showing request trends to management.
Figure 7.7. The Request Dashboard
The Cloud Adviser
The information we’ve covered so far in this chapter is really the bread and butter, the day-to-day life of a cloud administrator. However, one of the main advantages of moving to a cloud environment is that it frees up a lot of the DBA’s time. No longer do you need to focus on the mundane tasks of creating databases, looking after backups, and so on. The time that you used to spend working on these tasks is now available for you to take on a much more important role—the cloud adviser.
What is the cloud adviser’s role? Well, it’s a role that many of us have been performing for years—the senior DBA role—expanded into the cloud environment. Since most of our business users do not have the technical background to be able to make informed decisions on technical issues, they still come to the DBA to provide that advice. Those technical issues fall into a variety of broad topics, which the rest of this chapter examines.
Security
Security is of paramount importance in the cloud world, because if you get the security model wrong in the cloud environment, you may not have an environment to administer and manage for very long! We’ve all no doubt heard of the security breaches that seem to be increasingly common in the cloud environment. Security breaches might involve stolen usernames and passwords from cloud providers and, more seriously, stolen data. Oracle database technology is used the world over to secure large volumes of confidential data—financial data, employee data, customer information, and so much more. Securing that data, and ensuring it remains secure, is an essential part of the DBA’s role. Following are some of the tools that can be used to assist with this task:
• Transparent Data Encryption (TDE): TDE is a way to stop would-be attackers from bypassing the database and reading sensitive information directly from the files that make up the Oracle database. This protection is extended to any backups that have been made of the database as well. TDE allows encryption of individual columns right up to entire tablespaces, so you can choose how much of your data to secure.
• Data Masking and Subsetting: As you move to the cloud environment, one approach that is often taken for security reasons is to migrate development and test environments to the cloud first, as these environments are often seen as lower risk. However, at the same time, you want to ensure that your developers and quality assurance (QA) engineers have environments that are as close to production as possible so they have a realistic environment to test performance on. Those two requirements may at first glance seem antithetical, but Enterprise Manager contains Data Masking and Subsetting tools that can be used to address the issue. Oracle Data Masking and Subsetting allows you to sanitize copies of production data for development and QA environments so that your production data is not compromised. It also allows you to use subsets of the data in cases where you simply do not have enough storage to take a full copy of your production data into these environments.
Of course, you need to secure your data against more than just external attacks. Most studies of security breaches indicate that these breaches are more often the work of insiders than of external hackers (regardless of the fact that the most reported-on breaches are external). So from a security perspective, you must have tools and technologies that can be used to both track and secure against internal attacks as well. Some of the Oracle products that help to address this space include Transparent Data Encryption, Oracle Audit Vault, and Oracle Database Vault.
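As a brief illustration of the column-level and tablespace-level granularity TDE offers, the following sketch uses hypothetical object names and assumes a keystore (wallet) has already been configured and opened:

[oracle@dbhost ~]$ sqlplus / as sysdba
-- Encrypt a single sensitive column
SQL> ALTER TABLE hr.employees MODIFY (salary ENCRYPT USING 'AES256');
-- Or encrypt everything stored in a dedicated tablespace
SQL> CREATE TABLESPACE secure_data DATAFILE '+DATA' SIZE 500M
  2  ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);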
Server and Database Sizing
In theory, scaling a database in the cloud world is as simple as requesting more processing capability. In reality, we all know that service providers, be they an internal IT department or an external hosting provider, will have limitations as far as the hardware they have available to them. One of the value-add roles that a DBA can perform is to determine the amount of resources required now to meet current needs as well as to scale to meet future requirements. Enterprise Manager 12c has a number of tools that can be used to assist the DBA in this arena, including the following:
• Consolidation Planner: Consolidation Planner is a tool that can be used to plan a cloud architecture for server consolidation or for any other consolidation model. It allows you to identify source and destination targets and applicable technical and functional constraints, such as where the application can reside. You can also use Consolidation Planner to determine whether source environments can be consolidated onto hardware that you have not yet purchased, providing greater confidence that machines you are looking at purchasing will be suitable for the workloads you plan to put on them.
• Chargeback: As we covered in Chapter 6, “Metering and Chargeback in Enterprise Manager 12c,” chargeback is an accounting tool that provides a way to aggregate the enormous amount of metrics data that Enterprise Manager collects, a way to report to consumers of those metrics how much they have used of those particular resources, and, if you have set it up to do so, a way for the IT department to charge those consumers for the resources they have used. However, its usefulness is not restricted to these functions. Once the chargeback application is configured and running, you accumulate a history of how many resources have been used by specific applications or databases, which in turn allows you to project some form of capacity planning figures for future growth. Obviously, you still need to talk to the application owners to get an idea if any new functionality may impact those figures, but at the very least, you have a starting point that can be used to predict future resource requirements.
Performance Tuning
As mentioned at the beginning of this chapter, contacting the DBA is all too often the first port of call for users complaining about performance, even before they can identify the database as the source of the performance issue. Of course, if we’ve ensured server and database sizing has been done correctly, we shouldn’t see too much in the way of performance issues. Nevertheless, we all know that server and database sizing is far too often a reactive rather than proactive exercise, so we still need to undertake the same sorts of performance problem identification and resolution exercises that we are familiar with from the non-cloud environment. This topic is far too large to cover here—indeed, whole books have been written about Oracle performance tuning—so we won’t go into more detail now. Suffice it to say, you will still need to perform this role but with the added complexity of not necessarily knowing beforehand what other environments may be directly impacting yours. Consequently, you should become familiar with the pages covered at the beginning of this chapter, particularly the Cloud Home. It’s there that you can get an overview of what cloud instances have been created and can drill down to see all the different service entities that have been created and removed.
Summary
Oracle Enterprise Manager Cloud Control 12c should be your tool of choice when monitoring, administering, and managing the Oracle database cloud environment. As you’ve seen over the past few chapters, it provides the capability to set up both schema as a service and database as a service as well as to configure the chargeback application. Additionally, as we’ve seen in this chapter, Enterprise Manager 12c allows you to perform the ongoing management and administration of the cloud environment you have set up. One final step remains as an often undertaken task in the cloud environment: cloning of databases from one environment to another. That’s the subject of our next chapter.
Chapter 9. Virtualizing RAC 12c (DB Clouds) on Oracle VM—Grid Infrastructure
Cloud computing promises to usher in a new era for the corporate IT universe. Every day, we hear that the cloud within the typical IT organization is inevitable and bound to happen, if not already present in some form; it is reasonable to conclude that cloud computing is only a matter of when, not if.
The cloud computing winds of change have been blowing for quite a few years now, recently picking up momentum at Oracle Corporation—almost every key Oracle product focuses on cloud computing as a paradigm. This focus is evident in the “c” (for “cloud”) that is appended to the current release number of Oracle products. Real Application Clusters (RAC) is no exception to Oracle’s turn toward cloud computing.
Virtualization, already widely implemented today, is the foundation of cloud computing. What is virtualization, and what part does it play in the cloud universe? This chapter and the next provide insights, recommendations, and a step-by-step guide to setting up virtualized RACs, with an emphasis on virtualization, cloud computing, Oracle Virtual Machine (OVM) for x86, and Oracle Enterprise Manager Cloud Control 12c (EM12c). These virtualized RACs can then be utilized within the framework of database as a service (DBaaS) for rapid and easy deployment as database cloud services.
The overlap between the material in this and other chapters is intended to reiterate important concepts as well as to present the topics in the proper context.
Following is a summary of topics presented in this chapter:
• Building Oracle database clouds: The necessary ingredients
• What is virtualization?
• What are virtual machine (VM) monitors (hypervisors)?
• Types of hypervisors
• Types of virtualization
• Oracle VM for x86—360 degrees
• Xen—Synopsis and overview
• Oracle VM—Overview and architecture
• Oracle VM templates—Synopsis and overview
• Oracle VM 3.x—A brief introduction
• Setting up virtualized Oracle RAC clusters using Oracle VM: Alternative approaches
• Set up, install, and configure 12c virtualized RAC clusters: Step-by-step setup and configuration
This chapter guides you, step by step, through installing, setting up, and configuring a virtualized RAC 12c using OVM for x86. The next chapter takes a similar approach, with one major difference—the underlying virtualization technology (hypervisor) is Oracle VirtualBox instead of OVM for x86. This information gives you the choice of using either virtualization technology or both technologies to set up virtualized Oracle RAC 12c database clouds. An overview of cloud computing and the role and relevance of virtualization from the perspective of cloud computing are also covered in both chapters. All respective versions of the hypervisors used are the latest and greatest at the time of the publication of this book.
Database Clouds Based on RAC—The Necessary Ingredients
Cloud computing can be described as “fill-in-the-blank as a service”: for example, infrastructure as a service (IaaS), platform as a service (PaaS), and database as a service. A more detailed overview of cloud computing, its various flavors, paradigms, prevalent trends, and a whole lot more are presented in the next chapter.
How do we plan for, set up, build, and configure Oracle database clouds? The short answer is OVM for x86, EM12c, and RAC. Together they make up the true database cloud solution from Oracle, especially if you are planning your own private database clouds behind your corporate firewalls. OVM for x86 is used interchangeably with OVM in this chapter and the next.
An overview of virtualization is presented in this chapter with a follow-up section on cloud computing in the next chapter.
Virtualization—360 Degrees
Virtualization stands in contrast to the physical entity in the IT universe: instead of dedicated physical resources, you work with software-defined abstractions of them. Here are some salient features and key points about virtualization:
• Virtualization is the foundation stone in the cloud computing era.
• Virtualization is an inevitability in the IT universe: the sooner you embrace it, the better off you are.
• Virtualization can be summarized as an abstraction layer.
• Virtualization has proved to be a game-changer, resulting in unprecedented server utilization.
• Virtualization enables agile availability of resources to the end user, thereby shaving considerable time from the IT provisioning life cycle.
• Virtualization in the modern day can be characterized as the gateway and roadmap to secure and elastic corporate IT scalability.
• Virtualization implies a fantastic alternative to physical reality—the possibilities are endless.
• The alternative to virtualization consists of physical hosts with a lot of useless spare capacity, resulting in many resources being underutilized.
• Although Oracle database administrators (DBAs) were slow to adopt virtualization for their databases, the trend has finally gained momentum and reached critical mass.
What Are VM Monitors (Hypervisors)?
A VM monitor, also known as a hypervisor, enables OS kernels to run and coexist as guests, thereby enabling virtualization at the OS level. Hypervisors are responsible for allocation and coordination of CPU, memory, I/O, peripheral resources, and so on, to the guest VMs.
Types of Hypervisors
There are two types of hypervisor:
• Type 1: This type is known as a native or, more commonly, bare-metal hypervisor. It installs on bare-metal hardware and does not require an OS on which to be installed. Examples are VMware ESX/vSphere, Microsoft Hyper-V, Xen, and OVM. Bare-metal hypervisors are enterprise-grade hypervisors that enable cloud computing as it is widely known and understood today.
• Type 2: This type is known as a hosted hypervisor and is installed on an already existing OS on the system: examples are OVM VirtualBox, VMware Server, and VMware Workstation. Hosted hypervisors are mostly utilized for personal use, for example, learning new technologies and colocating various OS families on your laptop.
Here are some key points and salient features about hypervisors:
• A hypervisor is at the lowest level of the stack from a technology standpoint.
• A hypervisor enables agility and rapid deployment of resources within the IT space.
• Hypervisors result in increased efficiency by merit of elastic resource consolidation.
Following are some benefits and advantages of implementing hypervisors:
• Increased resource utilization
• Fault tolerance and high availability
• Isolation and multitenant support
• Support for a wide range of popular OS families
Types of Virtualization
There are three types of virtualization prevalent in the industry today (the first two categories are explained in the following sections, as they are relevant to this chapter):
• Paravirtualization
• Hardware-assisted/full virtualization
• Partial virtualization
What Is Paravirtualization?
In paravirtualization, guest VMs use a special hypercall application binary interface (ABI) in a modified OS for performance and simplicity. The modified OS communicates with the hypervisor, and tasks are relocated from the virtual domain to the host domain.
OVM implements this type of virtualization. The Oracle Linux and Red Hat Enterprise Linux families are supported as paravirtualized guests with OVM.
Paravirtualization is generally faster than hardware-assisted virtualization, although this is not to imply that either type is slow or not fast enough.
What Is Hardware-Assisted/Full Virtualization?
Hardware-assisted virtualization is also known as full or native virtualization and requires CPU support.
This type of virtualization enables unmodified guest OS kernels to run within a simulated hardware infrastructure but generally is relatively slower than paravirtualization.
The Microsoft Windows and Oracle Solaris OS families are supported on OVM as hardware-virtualized (full) guests with paravirtualized drivers.
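If you ever need to confirm which mode a given Linux guest is actually running in, a quick check from inside the guest looks like this (the output shown is hypothetical):

# On a Xen-based platform such as OVM, this file reports the hypervisor type
[root@guest ~]# cat /sys/hypervisor/type
xen
# A paravirtualized kernel also announces itself at boot time
[root@guest ~]# dmesg | grep -i paravirtualized
Booting paravirtualized kernel on Xen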
OVM for x86—360 Degrees
OVM for x86 is a Type 1 hypervisor based on Xen, the de facto standard open-source hypervisor, now in its nth generation. Xen is a mainstream technology, widely used by dominant cloud computing providers such as Amazon and Rackspace, as well as by Oracle’s own public cloud. OVM provides both server virtualization and management components. OVM 3.x is based on Xen 4.x and has been significantly enhanced into an industrial-grade product capable of configuring, administering, managing, and supporting thousands of servers hosting both Oracle and non-Oracle applications. Some of the advances in this relatively new version include dynamic resource scheduling (DRS), high availability–enabled server pools (clusters), and dynamic power management. OVM is augmented with the Virtual Assembly Builder and Template Builder components, which combine to form a complete virtualization picture within the OVM family.
Following are some of the key points about OVM’s capabilities and some of its advantages. However, as with any technology, OVM has its fair share of nuances, most of which can be taken care of by proper configuration and by following implementation best practices.
• Server load-balancing
• Centralized network and storage management
• Physical to virtual (P2V) and virtual to virtual (V2V) conversion
• Web services API
• Support for Windows, Linux, and Solaris as guest OS families
• Agility and fast deployment with OVM templates and Oracle Virtual Assembly Builder
• Web-based GUI management
• OVM zones—multiple server and storage pools
• High availability and live migration with OVM server pools
• Running mixed heterogeneous workloads within a single consolidated machine
• Very fast—delivers near-native performance
• Simple and easy installation—low learning curve
Another nice point is that OVM is free—you pay only for affordable, low-cost support.
Note
OVM is the only virtualization offering for the x86 architecture that is certified with all major Oracle products.
Xen—Synopsis and Overview
Xen originated at Cambridge University and is the leading open-source, industry-standard hypervisor. Ian Pratt founded XenSource, the company behind Xen, which was later acquired by Citrix in 2007. Xen 4.x is the latest version as well as the underlying version for OVM 3.x.
The Xen hypervisor is the virtualization base of Amazon EC2, the market leader in the cloud computing IaaS service model. Oracle is part of the Xen Advisory Board and contributes to its development. Other members of the Xen Advisory Board include Citrix, Hewlett Packard, IBM, Intel, Novell, Oracle, and Red Hat.
OVM—Overview and Architecture
OVM is made up of two components:
• OVM Server, the Xen-based open source hypervisor component
• OVM Manager, the Java-based thin-client GUI management component
OVM Server
OVM Server is the actual hypervisor component based on Xen. It installs on bare-metal x86 hardware and does not require a preinstalled OS.
OVM boots a small 64-bit domain called DOM0, which is used for assigning, distributing, and coordinating CPU, I/O, and other resources. Guest VMs are created and configured as DOMUs.
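A quick way to see DOM0 and the guest DOMUs side by side is to list the domains from an SSH session on an OVM server. Depending on the OVM Server release, the Xen toolstack command is xm or xl; the names, IDs, and sizes below are hypothetical:

[root@bsfovs03 ~]# xm list
Name                                  ID   Mem VCPUs      State   Time(s)
Domain-0                               0   817     2     r-----  98765.4
0004fb0000060000a1b2c3d4e5f60001       3  4096     2     -b----   1234.5
0004fb0000060000a1b2c3d4e5f60002       4  4096     2     -b----    987.6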
OVM Manager
Based on WebLogic Server, OVM Manager is a Java-based management server component with a Web-based UI. It utilizes Oracle Database as a management repository and comes prepackaged with a free Oracle Database XE repository, which can be converted to any other edition of the Oracle Database server family.
Recently, with OVM 3.2.x, MySQL is now also supported as a repository database option. OVM agent processes are used on each OVM server for communication and management purposes. OVM uses server pools (or clusters) to group virtualization resources: each server pool encompasses one or more OVM servers.
What Are OVM Templates?
OVM templates, or Golden Images, are factory-packaged, preinstalled, and preconfigured VM images containing software products, complete with built-in best practices and ready to go. They provide reusability and full-stack implementation. All major Oracle products—for example, Oracle Database server, Fusion Middleware, Enterprise Linux, and RAC—are available as OVM templates.
OVM templates are the vehicle for significantly reducing installation and configuration costs in the IT landscape.
Methods of Creating OVM Templates
The following methods can be employed to create OVM templates:
• P2V conversion
• Create VM templates from existing VM images
• Create VM templates from just enough operating system (JeOS)
OVM Builder
OVM Assembly Builder provides a structured process for consolidating appliances into cohesive, reusable assemblies: you rapidly create and configure full-stack topologies and provision them onto virtualized appliances.
OVM Builder is used for creating dedicated VMs called software appliances and facilitates deployment of an entire application as a single, automatically configured unit. This tool can significantly speed the building of private clouds by creating VM assemblies and deploying OVM templates.
OVM 3.x—A Brief Introduction
OVM 3.x, the latest release, takes scalability to a whole new level. OVM 3.3, which is based on Xen 4.x, comes with a ton of new features.
Being highly scalable, this latest version of OVM includes many enhancements:
• A feature-rich Web-based UI, improved backup and recovery capability
• Simplified VM deployment, administration, and management with 64-bit DOM0
• Application-driven virtualization, up to 128 virtual CPUs, and 1 TB memory per guest VM
• Jobs-based VM operations
• Dynamic resource management
• Dynamic power management
• Comprehensive network and storage management
• Multiple-template cloning in a single step
• Over 100 factory-packaged best-practices built into OVM templates
• A centralized configuration and management solution in the form of OVM Manager
In other words, 3.x is truly an enterprise-grade, groundbreaking release. OVM 3.x is completely and fully managed by a browser-based UI provided by OVM Manager.
If you haven’t already embarked on this journey, now is a great time to upgrade and migrate your OVM infrastructures from 2.x to 3.x.
OVM 3.x: High Availability–Enabled OVM Ecosystem
OVM provides broad-based high availability across the virtualization ecosystem in the form of high availability–enabled server pools (or clusters) on shared storage.
Salient features include:
• Live migration of guest VMs
• Automatic failover/restart of guest VMs in case of server failure
• Oracle Cluster File System 2 (OCFS2)—high availability on a cluster file system
• Server pool load balancing—using a best-fit algorithm, places guest VMs on the most appropriately loaded VM server
• Clustered OVM Manager
Virtualized RAC Using OVM Templates—Approach 1
This approach is the easiest and fastest way to set up your own virtualized RAC database clusters as part of virtualized Oracle RAC database clouds. Simply download the OVM for x86 templates for RAC, install them, and in less than an hour, you have your own virtualized RAC up and running. This methodology is truly revolutionary and illustrates the beauty and power of agile provisioning of complex infrastructures and applications in cloud environments using virtualized templates.
While this approach is not covered in complete detail, the main utility used to set up, configure, and deploy a virtualized RAC from OVM templates, DeployCluster, is presented in the following section.
Use DeployCluster to Configure and Deploy the Virtualized RAC
This section walks you through using the DeployCluster tool to rapidly configure and deploy a virtualized RAC database cluster. Listing 9.1 shows the example run.
Listing 9.1. DeployCluster Tool
[root@bsfmgr01 deploycluster]# ./deploycluster.py -u admin -p password -M bsfracovm1,bsfracovm2 -N bsfrac64.ini
Oracle RAC OneCommand (v1.1.2) for Oracle VM - deploy cluster -
(c) 2011-2012 Oracle Corporation
(com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1200:v1.1.2) - v2.6.6 -
bsfmgr01.bsflocal.com (x86_64)
Invoked as root at Sat Sep 22 20:10:04 2012 (size: 37600, mtime: Sun Aug 5 12:37:58 2012)
Using: ./deploycluster.py -u admin -p **** -M bsfracovm1,bsfracovm2 -N bsfrac64.ini
INFO: Attempting to connect to Oracle VM Manager...
INFO: Oracle VM Client (3.1.1.399) protocol (1.8) CONNECTED (tcp) to
Oracle VM Manager (3.1.1.305) protocol (1.8) IP (192.168.1.51) UUID
(0004fb0000010000da73c3bcce15ca2e)
INFO: Inspecting /home/oracle/ovm3/deploycluster/bsfrac64.ini for number of nodes defined....
INFO: Detected 2 nodes in: /home/oracle/ovm3/deploycluster/bsfrac64.ini
INFO: Located a total of (2) VMs;
2 VMs with a simple name of: ['bsfracovm1', 'bsfracovm2']
INFO: Verifying all (2) VMs are in Running state
INFO: VM with a simple name of "bsfracovm1" is in Running state...
INFO: VM with a simple name of "bsfracovm2" is in Running state...
INFO: Detected that all (2) VMs specified on command have (5) common shared disks
between them (ASM_MIN_DISKS=5)
INFO: The (2) VMs passed basic sanity checks and in Running state, sending cluster details
as follows:
netconfig.ini (Network setup): /home/oracle/ovm3/deploycluster/bsfrac64.ini
buildcluster: yes
INFO: Starting to send cluster details to all (2) VM(s).....
INFO: Sending to VM with a simple name of "bsfracovm1"....
INFO: Sending to VM with a simple name of "bsfracovm2"......
INFO: Cluster details sent to (2) VMs...
Check log (default location /u01/racovm/buildcluster.log) on build VM (bsfracovm1)...
INFO: deploycluster.py completed successfully at 20:10:19 in 15.7 seconds (00m:15s)
Logfile at: /home/oracle/ovm3/deploycluster/deploycluster2.log
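The -N argument points DeployCluster at a network configuration file (bsfrac64.ini here) describing the cluster’s public, private, VIP, and SCAN addressing. The exact parameter set is documented in the RAC template’s README; the following is only a representative sketch with hypothetical values:

# Hypothetical two-node netconfig.ini; parameter names follow the RAC template documentation
NODE1=bsfracovm1
NODE1IP=192.168.2.41
NODE1PRIV=bsfracovm1-priv
NODE1PRIVIP=192.168.3.41
NODE1VIP=bsfracovm1-vip
NODE1VIPIP=192.168.2.61
NODE2=bsfracovm2
NODE2IP=192.168.2.42
NODE2PRIV=bsfracovm2-priv
NODE2PRIVIP=192.168.3.42
NODE2VIP=bsfracovm2-vip
NODE2VIPIP=192.168.2.62
# Common cluster data
PUBADAP=eth0
PUBMASK=255.255.255.0
PUBGW=192.168.2.1
PRIVADAP=eth1
PRIVMASK=255.255.255.0
RACCLUSTERNAME=bsfcluster
DOMAINNAME=bsflocal.com
DNSIP=192.168.2.1
SCANNAME=bsfrac-scan
SCANIP=192.168.2.70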
Figure 9.1 and Figure 9.2 each show parts of a sample run of the DeployCluster tool. On your monitor display, INFO: (Figure 9.1) and [ OK ] (Figure 9.2) should be green: all green means all good to go!
Figure 9.1. An example run of the DeployCluster tool for setting up RAC using Oracle VM for x86 templates
Figure 9.2. DeployCluster tool: Configuring and setting up RAC using Oracle VM for x86 templates
Note
At the time of writing, OVM for x86 templates for RAC were only available for up to version 11gR2 and not for 12c. This is the recommended approach for setting up Oracle RAC as database clouds; however, because of the absence of OVM templates for 12c, we have included the longer alternative approach outlined in the next section. The other rationale for including this approach is that it enables you to learn the specific actions required to set up and configure RAC 12c from scratch.
Set Up and Configure a Virtualized RAC Database Cloud—Approach 2
This section takes you through an alternative, step-by-step approach to setting up your own virtualized RAC 12c in OVM for x86.
Note
This chapter assumes that you already have an OVM 3.x server pool in an up-and-running state. If this is not the case, please refer to the OVM documentation to set up OVM 3.x. The following sections assume that you are familiar with basic RAC concepts (presented in earlier chapters). Also, this chapter and the next chapter are structured in a way that enables you to set up RAC database clouds in the comfort of your own home for learning purposes. Please note that the steps are identical to corporate RAC setups; however, the infrastructure is pared down to enable you to make use of hardware available at home.
The following hardware and software were used for setting up the OVM server pool for this example:
• OVM Manager and EM12c:
• (Qty: 1) 64-bit Intel x86 machine with 8 GB RAM
• OVM servers for server pool:
• (Qty: 3) 64-bit Intel x86 machines with 16 GB RAM each
• Shared storage:
• (Qty: 1) 64-bit Intel x86 machine with 8 GB RAM:
• Openfiler with 1 TB disk space available on iSCSI
Roadmap to a Virtualized RAC 12c Cluster: High-Level Steps
Following are the high-level steps to build your own virtualized RAC–based database cloud:
1. Set up and configure the required hardware and software for an OVM server pool on shared storage.
2. Prepare and plan—Do your homework.
3. Install and set up grid infrastructure.
4. Install and set up non-shared database home(s).
5. Create a RAC database.
6. Configure and set up the RAC database as a monitored target in EM12c.
All of the preceding steps are detailed, elaborated on, and/or executed in the following sections of this chapter and the next one, with several alternative options presented for some of the involved steps.
While all of the following steps apply equally to corporate environments, they are written in such a way that you can set up a virtualized database cloud environment in your home, thereby learning how to install, set up, configure, and monitor RAC with minimal hardware.
OVM: Prerequisites, Preparation, and Planning
Ensure that the following virtual infrastructure is available and ready for deployment in a brand-new RAC setup.
Step 9.1—Set Up a Server Pool in OVM for x86
This chapter assumes that you already have a basic server pool in OVM for x86 complete with OVM Manager 3.x and EM12c release 2 set up, configured, and ready to go for deploying an Oracle RAC 12c cluster (the hardware/software configuration used in this chapter and the next one is outlined in the preceding section). In case you need help with this process, be assured that it is a simple process with a low and intuitive learning curve if you follow the appropriate OVM for x86 installation and setup documentation.
The following sections detail the steps involved in configuring OVM for RAC 12c.
Step 9.2—Configure Network Time Protocol
Configure the Network Time Protocol (NTP) servers in OVM Manager and press the Push to All Servers button to propagate the NTP settings to all OVM servers in the pool.
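Consistent time across the OVM servers matters for RAC, so it is worth confirming that each server is actually synchronized after the push. A quick check from an SSH session might look like this (output is hypothetical):

[root@bsfovs03 ~]# service ntpd status
ntpd (pid  2345) is running...
[root@bsfovs03 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
*192.168.2.1     .GPS.            1 u   45   64  377    0.345    0.012   0.004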
Step 9.3—Create the Required Network for RAC 12c
Continue with the following steps:
1. Choose OVM Manager → Networking → Networks → Create (+) Button.
2. Select the Create a Hybrid Network with Bonds/Ports and VLANS option.
3. Enter the name and description of the OVM network. Select the Virtual Machine option.
4. Select the relevant ports.
5. Select the relevant VLAN segments.
6. Select the appropriate IP addressing scheme. Enter the IP addresses, net masks, and bonding options if applicable.
As shown in Figure 9.3, the new OVM network has been successfully created and is ready to be deployed and used.
Figure 9.3. Oracle VM 3.x: Dedicated network for Oracle RAC 12c
Step 9.4—Create the Required Shared Virtual Disks for the GRID1 ASM Disk Group
To create the disks, follow these steps:
1. Choose OVM Manager → Repositories → Select OVS Repository → Virtual Disks → Create (+) Button:
2. Create a virtual disk with the following options:
• Size: 15 GB
• Allocation type: Sparse allocation
• Shareable: Yes
3. Repeat the preceding process for all five GRID1 Automatic Storage Management (ASM) disks.
Note
For production environments, it is highly recommended to have physical block devices presented as virtual disks for the various ASM disk groups.
As shown in Figure 9.4, all of the virtualized shareable ASM disks for the GRID1 disk groups have been created and are now ready for use.
Figure 9.4. Oracle VM 3.x: Virtualized disks for GRID1 ASM disk group for RAC 12c
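If you want to confirm the new disks at the operating system level, they show up as sparse image files under the repository’s VirtualDisks directory on the OVM servers (the image file names below are hypothetical):

[root@bsfovs03 ~]# cd /OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/VirtualDisks
[root@bsfovs03 VirtualDisks]# ls -lsh *.img | head -2
250M -rw------- 1 root root 15G Feb 24 14:05 0004fb00001200001111111111111111.img
248M -rw------- 1 root root 15G Feb 24 14:06 0004fb00001200002222222222222222.img
# ls -s shows the space actually allocated, far less than the 15 GB sparse size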
Note
Step 9.5 has two alternative approaches, both of which are explained in quite a bit of detail next. Each approach has further substeps, which are illustrated as well.
Step 9.5, Approach 1
Step 9.5, Approach 1, is illustrated in the following substeps.
Step 9.5, Approach 1a—Create the VM for RAC Server Node 1 from Scratch Using an ISO Boot Image to Install OEL 6.x
To create the Oracle Enterprise Linux (OEL) 6.x VM for RAC using an ISO boot image, follow these steps (as shown in Figure 9.5):
1. Choose OVM Manager → Servers and VMs → Select Server Pool → Create Virtual Machine Button.
2. Select the Create a New VM option.
3. As shown in Figure 9.5, select and enter the following options for the RAC-Node-01 VM:
• Server pool
• Server pool repository
• VM description
• High Availability: Unchecked
• Operating system: Oracle Linux 6
• Domain type: Xen PVM
• Start policy: Start on best server
• Memory (MB): 2,048 (the minimum required is 4,096; however, if you are short on physical memory and are building the RAC for learning purposes, 2,048 can suffice)
• Max memory (MB): 8,192
• Processors: 2
• Max processors: 4
• Priority: 50
• Processor cap %: 100
Figure 9.5. Oracle VM 3.x: Creating a VM for RAC-Node-01
4. Select Network and then press the Add VNIC button twice to create two virtual network interface cards (VNICs) for the RAC-Node-01 VM (see Figure 9.6).
Figure 9.6. Oracle VM 3.x: Specifying virtual network interface cards for the public and private cluster interconnect networks required for the RAC
5. Choose Next to move on to Setup Networks and Arrange Disks.
6. Select and enter the following options for the VM disks (see Figure 9.7):
• CD/DVD
• Virtual disk—Press the Create (+) button.
Figure 9.7. Oracle VM 3.x: Specifying virtualized disk/optical media for RAC-Node-01
7. Select the imported ISO for Linux 6.x.
8. Select and enter the following options:
• Repository
• Virtual disk name
• Description
• Shareable: Unchecked
• Size: 25GB
• Allocation type: sparse allocation
9. Select the ISO for OEL 6.x (see Figure 9.8).
Figure 9.8. Oracle VM 3.x: Specify the ISO for Oracle Enterprise Linux 6.x 64-bit x86
10. As shown in Figure 9.9, repeat the preceding process to add/select the following disks:
• DATA1 ASM disk group:
• Qty: 6
• Individual disk size: 50GB
• RECO1 ASM disk group:
• Qty: 1
• Individual disk size: 50GB
Select the Disk boot option. Press the Finish button to create the RAC-Node-01 VM.
Figure 9.9. Oracle VM 3.x: Guest VM for RAC-Node-01
Step 9.5, Approach 1b—Import the OEL 6.x x86-64 ISO into the OVM Repository
To import the OEL 6.x x86-64 ISO image into the OVM repository, follow these steps:
1. Go to OVM Manager → Repositories → Select OVM Repository → ISOs → Import ISO Button.
2. Select and enter the following (see Figure 9.10):
• Server
• ISO download location: ftp://oracle:password@192.168.2.20/software/OEL63_x86_64/V33411-01.iso (Replace the IP address, username, and password with your own.)
Figure 9.10. Oracle VM 3.x: Specify the FTP location for the OEL 6.x 64-bit x86 VM template
Note
Ensure that the Very Secure File Transfer Protocol Daemon (VSFTPD) server (FTP service) is set up correctly and that the ISO is available at the desired location and has the correct permissions.
Note that the status of the import process shows as In Progress, with a message showing Download Virtual CDROM....
Monitor the progress of the ISO import process in an SSH session to one of the Oracle VM servers (OVS) to which the OVS repository is connected.
[root@bsfovs03 ISOs]# pwd
/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/ISOs
[root@bsfovs03 ISOs]# ls -l
total 276480
-rw-r--r-- 1 root root 282066944 Feb 24 12:57 0004fb0000150000ba1fd09b4e2bd98c.iso
Keep checking periodically after brief intervals to monitor the progress of the ISO import process.
[root@bsfovs03 ISOs]# pwd
/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/ISOs
[root@bsfovs03 ISOs]# ls -l
total 2890752
-rw-r--r-- 1 root root 2959081472 Feb 24 13:04 0004fb0000150000ba1fd09b4e2bd98c.iso
At this point, the OEL 6.x ISO has been successfully imported (see Figure 9.11). Start up the VM, boot it from the OEL 6.x ISO, and go through the steps of setting up and installing Oracle Enterprise Linux 6.x.
Figure 9.11. Oracle VM 3.x: Imported ISO for Oracle Enterprise Linux 6.x 64-bit x86 is now ready for use as a bootable virtual CD/DVD drive
Step 9.5, Approach 2
Step 9.5, Approach 2, is illustrated in the following sub-steps.
Step 9.5, Approach 2a—Create the VM for RAC Server Node 1 Using an OVM OEL 6.x Template
To create the VM for RAC Server Node using an OVM OEL 6.x template, follow these steps:
1. Download the OEL 6.x OVM 3.x template from https://edelivery.oracle.com/linux.
2. Unzip the ISO and make it available via FTP.
3. Go to OVM Manager → Repositories → Select OVM Repository → Assemblies → Import VM Assembly Button (see Figure 9.12). Enter the following:
• Server
• VM assembly download location
Note that the status of the VM assembly import process shows as In Progress with a message showing “Downloading Assembly...” and then another one showing “Unpacking Template....”
Figure 9.12. Oracle VM 3.x: Specify the FTP location of the Oracle VM assembly for Oracle Enterprise Linux 6.x 64-bit x86
Monitor the progress of the VM assembly import process within an SSH session to one of the OVS servers to which the OVS repository is connected.
[root@bsfovs03 11941bfbbc]# pwd
/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/Assemblies/11941bfbbc
[root@bsfovs03 11941bfbbc]# ls -l
total 617472
drwxr-xr-x 2 root root 3896 Feb 24 15:20 imports
-rw-r--r-- 1 root root 631541760 Oct 10 16:17 package.ova
drwxr-xr-x 2 root root 3896 Feb 24 15:21 unpacked
Keep checking periodically after brief intervals.
As shown in Figure 9.13, the OVM assembly for OEL 6.3 x86-64 has been imported successfully and is now ready for use.
Figure 9.13. Oracle VM 3.x: Oracle VM assembly for Oracle Enterprise Linux 6.x 64-bit x86
Step 9.5, Approach 2b—Create the OEL 6.3 x86-64-PVM OVM Template from the Newly Created Assembly into the OVM Repository
To create the OEL 6.3 x86-64-PVM OVM template from the newly created assembly in the OVM repository, follow these steps:
1. Choose OVM Manager → Repositories → Select OVM Repository → VM Assemblies → Select VM Assembly → Create VM template.
2. Enter and select the following (see Figure 9.14):
• Assembly VMs
• VM template name
• Description
Figure 9.14. The VM template name and description
3. As shown in Figure 9.15, the OEL 6.3 x86_64 OVM template has been successfully created and is now ready for deployment.
Figure 9.15. The OEL 6.3 x86_64 OVM template
Step 9.5, Approach 2c—Edit the Newly Created OEL 6.3 x86-64-PVM OVM Template
To edit the newly created OEL 6.3 x86-64-PVM OVM template, follow these steps:
1. Choose OVM Manager → Repositories → VM Templates → Select VM Template → Edit.
2. Modify the following options as shown in the following screens:
• Max memory (MB): 8,192
• Memory (MB): 4,096
• Max processors: 8
• Processors: 4
• Enable High Availability: Unchecked (OVM HA is incompatible with Oracle RAC)
• Networks: Add/specify the appropriate network(s)
• Boot order: Disk
• Virtual disks:
• Add virtual disk for Oracle software binaries: 25GB
Step 9.5, Approach 2d—Create Clone Customizer for the RAC Node OVM Template
To create a clone customizer for the RAC node OVM template, follow these steps:
1. Choose OVM Manager → Repositories → VM Templates → Select VM Template.
2. Press the Create Clone Customizer button.
3. Specify the name and description of the new clone customizer for the RAC 12c cluster node VMs.
4. Modify the Clone Type to Thin Clone (see Figure 9.16). This is a fast and efficient way to create new VM clone machines.
Figure 9.16. Oracle VM 3.x: Specify the virtual disks for RAC-Node-01
5. Specify the network settings for the clone customizer (if any custom modifications are required).
Step 9.5, Approach 2e—Create the RAC-Node-01 VM from the VM Template Using the Advanced Clone Customizer Method
To create the RAC-Node-01 VM from the VM template using the Advanced Clone Customizer method, do the following:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Create Virtual Machine.
2. Select the Clone from an existing VM Template option (see Figure 9.17). Enter and select the following options:
• Clone count: 1
• Repository: Select OVS repository
• VM template: Select the OEL 6.3 x86_64 template
• Server pool: Select the appropriate server pool
• Description
Figure 9.17. Cloning from the existing template
3. Press the Finish button to create the RAC-Node-01 VM. The finished product is shown in Figure 9.18.
Figure 9.18. The VM for RAC-Node-01 created from the Oracle Enterprise Linux 6.x 64-bit x86 template
Step 9.6—Edit the Newly Created VM for RAC-Node-01
To edit the VM for RAC-Node-01, follow these steps:
1. Choose OVM Manager → Servers and VMs → Select Server Pool → Select Virtual Machine → Edit Virtual Machine.
2. Modify the following options as shown in the configuration tab (Figure 9.19):
• Operating system: Oracle Linux 6
• Max memory (MB): 8,192
• Max processors: 8
• Networks: Specify the appropriate network(s)
• High Availability: Unchecked (OVM HA is incompatible with Oracle RAC)
• Boot order: Disk
• Start policy: Start on best server
• Virtual disks:
• System (virtual disk): Add another disk for Oracle binaries: 25GB
• GRID1 ASM disk group:
Qty: 6 Disks
Individual disk size: 15G
• DATA1 ASM disk group:
Qty: 6 Disks
Individual disk size: 50G
• RECO1 ASM disk group:
Qty: 1 Disk
Individual disk size: 50G
Figure 9.19. Configure the newly created VM
3. On the Network tab, add two VNICs (see Figure 9.20), one each for the public and private cluster interconnects.
Figure 9.20. Add the required VNICs
4. Finally, on the Disks tab, attach the shared virtualized disks for the ASM disk groups (see Figure 9.21).
Figure 9.21. Oracle VM 3.x: Attach the shared virtualized disks for the GRID1, DATA1, and RECO1 ASM disk groups
Step 9.7—Start Up the RAC-Node-01 VM
To start up the RAC-Node-01 VM, follow these steps:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Select Virtual Machine → Start Virtual Machine.
2. Press the Launch Console button to start the VNC console window (see Figure 9.22).
Figure 9.22. Oracle VM 3.x: Boot RAC node—First-boot interview/configuration process
3. Configure the various options in the new VM in the first-boot interview process:
• System hostname: For example, bsfrac01.bsflocal.com (12c RAC-Node-01)
• Specify VNIC device: For example, ETH0
• Boot protocol (static/DHCP): Static
• Activate VNIC on boot: YES
• IP address of the interface: For example, 192.168.2.40 (for the public network; ensure that this is an unused IP address. If you have DNS server(s), register the IP address/hostname with them)
• Netmask: For example, 255.255.255.0
• IP address of gateway: For example, 192.168.2.1
• IP address(es) of DNS server(s): For example, 192.168.2.1
• OS root password: *******
Set Up and Configure RAC-Node-01
The following sections explain how to set up and configure Node 01 for RAC 12c.
Step 9.8—Set the Network Configuration of the Private Cluster Interconnect VNIC
To set the network configuration of the private cluster interconnect VNIC, issue the following commands:
[root@bsfrac01 network-scripts]# pwd
/etc/sysconfig/network-scripts
[root@bsfrac01 network-scripts]# cp ifcfg-eth0 ifcfg-eth1
[root@bsfrac01 network-scripts]# vi ifcfg-eth1
You have new mail in /var/spool/mail/root
[root@bsfrac01 network-scripts]# cat ifcfg-eth1
DNS1=192.168.2.1
GATEWAY=192.168.2.1
NETMASK=255.255.255.0
IPADDR=192.168.3.40
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=no
DEVICE=eth1
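After saving ifcfg-eth1, the new interface can be brought up and checked without a reboot; a minimal sketch (standard EL6 network commands):
[root@bsfrac01 network-scripts]# ifup eth1
[root@bsfrac01 network-scripts]# ifconfig eth1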
Step 9.9—Modify /etc/hosts File to Include the Relevant Entries for RAC 12c
To modify the /etc/hosts file to include the relevant entries for RAC 12c, open the file and edit it, as in the following:
[root@bsfrac01 network-scripts]# vi /etc/hosts
127.0.0.1 localhost.localdomain localhost localhost4
::1 localhost6.localdomain6 localhost6
# PUBLIC IP Addresses of 12c RAC Cluster
192.168.2.40 bsfrac01 bsfrac01.bsflocal.com
192.168.2.41 bsfrac02 bsfrac02.bsflocal.com
192.168.2.42 bsfrac03 bsfrac03.bsflocal.com
192.168.2.43 bsfrac04 bsfrac04.bsflocal.com
192.168.2.44 bsfrac05 bsfrac05.bsflocal.com
# SCAN IP Addresses of 12c RAC Cluster
192.168.2.70 bsfrac-scan bsfrac-scan.bsflocal.com
192.168.2.71 bsfrac-scan bsfrac-scan.bsflocal.com
192.168.2.72 bsfrac-scan bsfrac-scan.bsflocal.com
# Virtual IP Addresses of 12c RAC Cluster
192.168.2.60 bsfrac-vip01 bsfrac-vip01.bsflocal.com
192.168.2.61 bsfrac-vip02 bsfrac-vip02.bsflocal.com
192.168.2.62 bsfrac-vip03 bsfrac-vip03.bsflocal.com
192.168.2.63 bsfrac-vip04 bsfrac-vip04.bsflocal.com
192.168.2.64 bsfrac-vip05 bsfrac-vip05.bsflocal.com
# Private Cluster Interconnect IP Addresses of 12c RAC Cluster
192.168.3.40 bsfrac-priv01 bsfrac-priv01.bsflocal.com
192.168.3.41 bsfrac-priv02 bsfrac-priv02.bsflocal.com
192.168.3.42 bsfrac-priv03 bsfrac-priv03.bsflocal.com
192.168.3.43 bsfrac-priv04 bsfrac-priv04.bsflocal.com
192.168.3.44 bsfrac-priv05 bsfrac-priv05.bsflocal.com
Note
Single Client Access Name (SCAN) listener IP information is included in the /etc/hosts file. The SCAN IPs should be registered with the appropriate DNS server(s).
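If the SCAN name has already been registered in DNS, a quick lookup confirms that it resolves to the three SCAN addresses in round-robin fashion; a minimal sketch (output depends on your DNS setup):
[root@bsfrac01 ~]# nslookup bsfrac-scan.bsflocal.com    # should return 192.168.2.70, .71, and .72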
Step 9.10—Check for Space Requirements
Ensure that enough /tmp space is available to support RAC 12c:
[root@bsfracvx1 ~]# df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_bsfracvx1-lv_root
26G 5.3G 20G 22% /
Step 9.11—Disable the Linux Software Firewall
Next, disable the Linux software firewall.
Note
This step is optional and should be exercised with caution. Only do it if you have ancillary hardware/software firewalls in place in the corporate landscape.
[root@bsfrac01 ~]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
4 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
5 REJECT all -- 0.0.0.0/0 0.0.0.0/0
reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 REJECT all -- 0.0.0.0/0 0.0.0.0/0
reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
[root@bsfrac01 ~]# service iptables off
Usage: iptables {start|stop|restart|condrestart|status|panic|save}
[root@bsfrac01 ~]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
[root@bsfrac01 ~]# chkconfig iptables off
[root@bsfrac01 ~]# service ip6tables status
Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all ::/0 ::/0 state RELATED,ESTABLISHED
2 ACCEPT icmpv6 ::/0 ::/0
3 ACCEPT all ::/0 ::/0
4 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:22
5 REJECT all ::/0 ::/0
reject-with icmp6-adm-prohibited
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 REJECT all ::/0 ::/0
reject-with icmp6-adm-prohibited
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
[root@bsfrac01 ~]# service ip6tables stop
ip6tables: Flushing firewall rules: [ OK ]
ip6tables: Setting chains to policy ACCEPT: filter [ OK ]
ip6tables: Unloading modules: [ OK ]
[root@bsfrac01 ~]# chkconfig ip6tables off
Step 9.12—Configure and Restart NTPD Client
Edit/configure the /etc/ntp.conf file and restart the Network Time Protocol Daemon (NTPD) server on the RAC node VM.
$ vi /etc/ntp.conf
# Modify the following line to reflect the NTP servers with which the time
# will be synchronized
server 192.168.2.20
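To complete the step, restart the daemon. Note that for RAC, Oracle recommends running ntpd with the slewing option (-x); a sketch of the relevant /etc/sysconfig/ntpd entry and the restart follows (the exact OPTIONS line may differ in your environment):
# /etc/sysconfig/ntpd -- add the -x (slew) option for cluster time synchronization
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"
[root@bsfrac01 ~]# service ntpd restart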
Step 9.13—Partition, Format, and Mount /u01 on the 25-GB Local Virtual Hard Disk
To partition, format, and mount /u01 on the 25-GB local virtual hard disk, start by doing the following:
[root@bsfrac01 /]# mkdir /u01
You have new mail in /var/spool/mail/root
[root@bsfrac01 /]# mount /dev/xvdb1 /u01
[root@bsfrac01 /]# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/xvda2 9985 8087 1392 86% /
tmpfs 1940 1 1940 1% /dev/shm
/dev/xvda1 99 50 45 53% /boot
/dev/xvdb1 25195 172 23743 1% /u01
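The preceding transcript assumes the 25-GB disk (presented here as /dev/xvdb) has already been partitioned and formatted. If it has not, a minimal sketch would be:
[root@bsfrac01 /]# fdisk /dev/xvdb       # create a single primary partition (n, p, 1, accept defaults, w)
[root@bsfrac01 /]# mkfs.ext4 /dev/xvdb1  # format the new partition with ext4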
Make the mount point persistent by modifying the /etc/fstab file:
#
# /etc/fstab
# Created by anaconda on Fri Sep 7 08:14:40 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
LABEL=/ / ext4 defaults 1 1
LABEL=/boot /boot ext4 defaults 1 2
/dev/xvda3 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/xvdb1 /u01 ext4 defaults 0 0
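You can confirm that the new /etc/fstab entry is valid without rebooting; a quick sketch:
[root@bsfrac01 /]# umount /u01
[root@bsfrac01 /]# mount -a
[root@bsfrac01 /]# df -h /u01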
Step 9.14—Disable the SELINUX Option
Disable the SELINUX option by modifying the following file:
[root@bsfrac01 /]# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
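Fully disabling SELinux via /etc/selinux/config only takes effect after a reboot (step 9.17). To check the current mode, and optionally drop to permissive mode immediately, you can use the standard SELinux utilities; a minimal sketch:
[root@bsfrac01 /]# getenforce
Enforcing
[root@bsfrac01 /]# setenforce 0
[root@bsfrac01 /]# getenforce
Permissive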
Step 9.15—Install VSFTPD (FTP Server)
Install VSFTPD (FTP server) by performing the following:
[root@bsfrac01 ~]# yum install vsftpd
Note
This step is optional.
Step 9.16—Install X Window System Desktop
Install the X Window System desktop by performing the following steps:
[root@bsfrac01 /]# yum groupinstall "X Window System" desktop
Loaded plugins: security
Setting up Group Process
Package 1:xorg-x11-xauth-1.0.2-7.1.el6.x86_64 already installed and latest version
Package hal-0.5.14-11.el6.x86_64 already installed and latest version
Package 1:dbus-1.2.24-7.0.1.el6_3.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package NetworkManager.x86_64 1:0.8.1-34.el6_3 will be installed
.
.
.
openssh-server     x86_64   5.3p1-84.1.el6     ol6_latest   298 k
rhn-check          noarch   1.0.0-87.0.6.el6   ol6_latest    60 k
rhn-client-tools   noarch   1.0.0-87.0.6.el6   ol6_latest   492 k
rhn-setup          noarch   1.0.0-87.0.6.el6   ol6_latest    96 k
Transaction Summary
=====================================================================================================================================================================================================
Install 265 Package(s)
Upgrade 19 Package(s)
Total download size: 123 M
Is this ok [y/N]: y
Downloading Packages:
(1/284): ConsoleKit-x11-0.4.1-3.el6.x86_64.rpm | 20 kB 00:00
(2/284): DeviceKit-power-014-3.el6.x86_64.rpm | 90 kB 00:00
.
.
.
Dependency Updated:
libreport.x86_64 0:2.0.9-5.0.1.el6_3.2
libreport-cli.x86_64 0:2.0.9-5.0.1.el6_3.2
libreport-plugin-kerneloops.x86_64 0:2.0.9-5.0.1.el6_3.2
libreport-plugin-logger.x86_64 0:2.0.9-5.0.1.el6_3.2
libreport-plugin-mailx.x86_64 0:2.0.9-5.0.1.el6_3.2
libreport-plugin-reportuploader.x86_64 0:2.0.9-5.0.1.el6_3.2
libreport-python.x86_64 0:2.0.9-5.0.1.el6_3.2
nspr.x86_64 0:4.9.2-0.el6_3.1
nss.x86_64 0:3.13.6-2.0.1.el6_3
nss-sysinit.x86_64 0:3.13.6-2.0.1.el6_3
nss-tools.x86_64 0:3.13.6-2.0.1.el6_3
nss-util.x86_64 0:3.13.6-1.el6_3
openssh.x86_64 0:5.3p1-84.1.el6
openssh-clients.x86_64 0:5.3p1-84.1.el6
openssh-server.x86_64 0:5.3p1-84.1.el6
rhn-check.noarch 0:1.0.0-87.0.6.el6
rhn-client-tools.noarch 0:1.0.0-87.0.6.el6
rhn-setup.noarch 0:1.0.0-87.0.6.el6
Complete!
Note
The output of the X Window System desktop installation is very long and has been abbreviated.
Modify the /etc/inittab file to start with a GUI login and reboot the system:
#id:3:initdefault:    # Change option 3 to 5 as shown in the following line
id:5:initdefault:
Step 9.17—Reboot RAC-Node-01 for All of the Preceding Setups/Configurations to Take Effect
To reboot, issue the following command:
[root@bsfrac01 network-scripts]# shutdown -r now
After a successful reboot, you will arrive at the login screen (see Figure 9.23).
Figure 9.23. Oracle VM 3.x: RAC-Node-01—OS/Linux 6.x 64-bit x86 login screen
Step 9.18—Verify the Network Settings after the Node Reboots
To verify the network settings after the reboot, do the following:
[root@bsfrac01 /]# ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:21:F6:00:00:01
inet addr:192.168.2.40 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::221:f6ff:fe00:1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:49103 errors:0 dropped:117 overruns:0 frame:0
TX packets:12982 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:33876842 (32.3 MiB) TX bytes:939705 (917.6 KiB)
Interrupt:57
eth1 Link encap:Ethernet HWaddr 00:21:F6:00:00:00
inet addr:192.168.3.40 Bcast:192.168.3.255 Mask:255.255.255.0
inet6 addr: fe80::221:f6ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:25180 errors:0 dropped:117 overruns:0 frame:0
TX packets:198 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1316336 (1.2 MiB) TX bytes:12163 (11.8 KiB)
Interrupt:58
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:930 (930.0 b) TX bytes:930 (930.0 b)
Oracle Software Preinstallation Steps on the RAC-Node-01 VM
Perform the following checks (steps 9.19–9.28) to satisfy the prerequisites for RAC 12c on Node 01.
Step 9.19—Check Space Requirement on /tmp
To check that the /tmp space requirement has been met, do the following (Oracle recommends at least 1 GB of free space in /tmp):
[oracle@bsfrac01 Database]$ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 9.8G 7.9G 1.4G 86% /
Step 9.20—Create the Required and Relevant OS Groups
Issue these commands to create the OS groups:
[root@bsfrac01 /]# groupadd -g 54327 asmdba
[root@bsfrac01 /]# groupadd -g 54328 asmoper
[root@bsfrac01 /]# groupadd -g 54329 asmadmin
[root@bsfrac01 /]# groupadd -g 54324 backupdba
[root@bsfrac01 /]# groupadd -g 54325 dgdba
[root@bsfrac01 /]# groupadd -g 54326 kmdba
[root@bsfrac01 /]# groupadd -g 54321 oinstall
[root@bsfrac01 /]# groupadd -g 54322 dba
[root@bsfrac01 /]# groupadd -g 54323 oper
Note
Some of the preceding steps are optional, and whether they should be done depends on the user’s job function; for example, DBA, DMA, storage/system administrator, or other role.
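To verify that the groups were created with the intended names and GIDs, you can check /etc/group; for example:
[root@bsfrac01 /]# grep -E '^(oinstall|dba|oper|asmadmin|asmdba|asmoper|backupdba|dgdba|kmdba):' /etc/group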
Step 9.21—Create the Oracle and Grid OS Users as the Oracle DB HOME Software Owners and Grid Infrastructure HOME Software Owners and Set Their Initial Passwords
To create the oracle and grid OS users as the Oracle DB HOME Software Owners and Grid Infrastructure HOME Software owners, respectively, and set their initial passwords, issue these commands:
[root@bsfrac01 /]# useradd -u 54321 -g oinstall -G dba,asmdba oracle
[root@bsfrac01 /]# useradd -u 54322 -g oinstall -G asmadmin,asmdba grid
[root@bsfrac01 /]# passwd grid
Changing password for user grid.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@bsfrac01 /]# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
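A quick check that the users and their group memberships look right; a sketch (output reflects the groups created above and may differ slightly in your environment):
[root@bsfrac01 /]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54327(asmdba)
[root@bsfrac01 /]# id grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54327(asmdba),54329(asmadmin)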
Step 9.22—Create the Optimal Flexible Architecture (OFA) Directory Structure for RAC 12c
As the root OS user, run the following commands:
[root@bsfrac01 /]# mkdir -p /u01/app/12.1.0/grid
[root@bsfrac01 /]# mkdir -p /u01/app/grid
[root@bsfrac01 /]# mkdir -p /u01/app/oracle
[root@bsfrac01 /]# chown -R grid:oinstall /u01
[root@bsfrac01 /]# chown oracle:oinstall /u01/app/oracle
[root@bsfrac01 /]# chmod -R 775 /u01/
Step 9.23—Observe/Verify the Required and Relevant Permissions of the Created OFA Directory Structure
Check the required and relevant permissions set for the OFA directory structure:
[root@bsfrac01 oracle]# ls -l /u01
total 4
drwxrwxr-x 5 grid oinstall 4096 Feb 25 23:29 app
[root@bsfrac01 oracle]# ls -l /u01/app/
total 12
drwxrwxr-x 3 grid oinstall 4096 Feb 25 23:29 12.1.0
drwxrwxr-x 2 grid oinstall 4096 Feb 25 23:29 grid
drwxrwxr-x 2 oracle oinstall 4096 Feb 25 23:29 oracle
Step 9.24—Set Up and Configure the NTPD
Configure the NTPD:
[root@bsfrac01 ~]# service ntpd start
Starting ntpd: [ OK ]
[root@bsfrac01 ~]# chkconfig ntpd on
Step 9.25—Turn Off and Unconfigure the Avahi daemon
Do the following to turn off and unconfigure the Avahi daemon:
[root@bsfrac01 ~]# service avahi-daemon stop
Shutting down Avahi daemon: [ OK ]
[root@bsfrac01 ~]# chkconfig avahi-daemon off
Step 9.26—Install Packages/Options for Linux Kernel
Within OEL, using the GUI software installer (see Figure 9.24) or the rpm/yum command-line utilities, ensure that the following packages for OEL 6.x x86_64 are installed at the listed versions or later. Additionally, download and install any ancillary packages that help with monitoring and performance of the RAC.
Figure 9.24. Oracle VM 3.x: RAC-Node-01—OS/Linux 6.x 64-bit x86 custom package/rpm installation
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
ksh
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686
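Rather than installing each rpm individually, the whole list can be installed in one pass with yum; a sketch, assuming a configured OEL 6 yum repository such as ol6_latest (the oracle-rdbms-server-12cR1-preinstall package is another option that pulls in most prerequisites automatically):
[root@bsfrac01 ~]# yum install -y binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 gcc gcc-c++ \
    glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libaio libaio.i686 libaio-devel libaio-devel.i686 \
    libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 make sysstat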
[root@bsfrac01 ~]# rpm -qa glibc*
glibc-common-2.12-1.80.el6_3.5.x86_64
glibc-devel-2.12-1.80.el6_3.5.x86_64
glibc-2.12-1.80.el6_3.5.x86_64
glibc-headers-2.12-1.80.el6_3.5.x86_64
[root@bsfrac01 ~]# rpm -qa libstdc++*
libstdc++-4.4.6-4.el6.x86_64
libstdc++-devel-4.4.6-4.el6.x86_64
[root@bsfrac01 ~]# rpm -qa gcc*
gcc-c++-4.4.6-4.el6.x86_64
gcc-4.4.6-4.el6.x86_64
[root@bsfrac01 ~]# rpm -qa ksh*
ksh-20100621-16.el6.x86_64
[root@bsfrac01 ~]# rpm -qa make*
make-3.81-20.el6.x86_64
[root@bsfrac01 ~]# rpm -qa sysstat*
sysstat-9.0.4-20.el6.x86_64
[root@bsfrac01 ~]# rpm -qa libgcc*
libgcc-4.4.6-4.el6.x86_64
[root@bsfrac01 ~]# rpm -qa libaio*
libaio-devel-0.3.107-10.el6.x86_64
libaio-0.3.107-10.el6.x86_64
[root@bsfrac01 ~]# rpm -qa binutils*
binutils-2.20.51.0.2-5.34.el6.x86_64
[root@bsfrac01 ~]# rpm -qa compat-lib*
compat-libcap1-1.10-1.x86_64
compat-libstdc++-33-3.2.3-69.el6.x86_64
Step 9.27—Create Primary Partitions for All the GRID1, DATA1, and RECO1 ASM Disk Groups
Next, create primary partitions for all the GRID1, DATA1, and RECO1 ASM disk groups:
[root@bsfrac01 ~]# fdisk /dev/xvdc
device contains neither a valid DOS partition table, nor Sun, SGI, or OSF disklabel.
Building a new DOS disklabel with disk identifier 0x6a917f21.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-6527, default 6527):
Using default value 6527
Command (m for help): w
The partition table has been altered!
Note
Repeat the preceding steps for all the ASM disks, including grid infrastructure disks.
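Rather than walking through the interactive fdisk dialog for each shared disk, the partitioning can be scripted; a sketch with parted, assuming the shared ASM disks are presented as /dev/xvdc through /dev/xvdn:
[root@bsfrac01 ~]# for d in /dev/xvd{c..n}; do parted -s "$d" mklabel msdos mkpart primary 0% 100%; done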
Step 9.28—Verify the Partition Structures for the Underlying Disks in the GRID1, DATA1, and RECO1 ASM Disk Groups
Verify the partition structures for the underlying disks:
[root@bsfrac01 /]# fdisk -l
Step 9.29—Configure ASM Library on RAC-Node-01
Configure the ASM library by performing the following steps:
[root@bsfrac01 dev]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl+C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmdba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK01 /dev/xvdc1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK02 /dev/xvdd1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK03 /dev/xvde1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK04 /dev/xvdf1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK05 /dev/xvdg1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK01 /dev/xvdh1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK02 /dev/xvdi1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK03 /dev/xvdj1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK04 /dev/xvdk1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK05 /dev/xvdl1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK06 /dev/xvdm1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk RECO1DISK06 /dev/xvdn1
Writing disk header: done
Instantiating disk: done
Note
You can also choose to set up and configure UDEV rules for the ASM disks instead of ASMLib; a sketch follows the disk listing below. To verify that all the ASM disks are visible to ASMLib, list them:
[root@bsfrac01 dev]# /etc/init.d/oracleasm listdisks
DATA1DISK01
DATA1DISK02
DATA1DISK03
DATA1DISK04
DATA1DISK05
DATA1DISK06
GRID1DISK01
GRID1DISK02
GRID1DISK03
GRID1DISK04
GRID1DISK05
RECO1DISK06
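As noted above, UDEV rules are an alternative to ASMLib for presenting the shared disks with the correct ownership and permissions. A minimal sketch of such a rule file follows; the device pattern, owner, and group mirror the ASMLib configuration used earlier and are illustrative, so adjust them to your environment:
# /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="xvd[c-n]1", OWNER="oracle", GROUP="asmdba", MODE="0660"
[root@bsfrac01 ~]# udevadm control --reload-rules
[root@bsfrac01 ~]# udevadm trigger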
Step 9.30—Download and Stage the Oracle Software Binaries
Download, unzip, and stage the Oracle Grid Infrastructure and Oracle Database software binaries:
[oracle@bsfrac01 Database]$ unzip -q linuxx64_database_12.1BETA_130131_1of2.zip
Repeat the unzip process for all the software binary zip files, and verify the unzipped and staged directory structure.
Step 9.31—Establish a Save Point: Make a Backup Copy of the Ready-to-Go OVM in the Form of an OVM Template
Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM → Right-click → Clone or Move.
Step 9.32—Remove (Temporarily) the Shared Disks and Clone the Ready-to-Go OVM
Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM, and click the Edit button.
As shown in Figure 9.25, this action temporarily removes all the shared ASM virtual disks. You need to do this; otherwise, clones of these disks will be created unnecessarily during the ensuing cloning process for all the RAC nodes.
Figure 9.25. Oracle VM 3.x: RAC-Node-01—Local virtual disks after temporarily removing the virtualized shared disks for Automatic Storage Management
Step 9.33—Clone the Other Nodes of the RAC Cluster from the Ready-to-Go Node 01 VM
To clone the other nodes of the RAC, follow these steps:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM and right-click. Choose Clone or Move (see Figure 9.26).
Figure 9.26. Oracle VM 3.x: Clone the other RAC nodes from the VM from Node 01
2. Select the Create a clone of this VM option.
3. Select and enter the following options:
• Target server pool
• Description
• Clone to a: VM
• Clone count: Specify the clone count for the RAC nodes
As shown in Figure 9.27, all the nodes for a five-node RAC have been successfully created and are ready for further configuration.
Figure 9.27. Oracle VM 3.x: VMs for all nodes of the RAC have been created and cloned
Step 9.34—Reattach the Shared Disks to the RAC Node OVMs
Next reattach the shared disks to the RAC node OVMs:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM, and click the Edit button (see Figure 9.28).
Figure 9.28. Oracle VM 3.x: RAC-Node-01—Reattach the virtualized shared disks for Automatic Storage Management
2. Select the shared disks.
3. Repeat the preceding steps for all the RAC node VMs.
Step 9.35—Start Each Guest VM and Edit the Network Settings
1. Go to System → Preferences → Network Settings → IPv4 Settings.
2. Modify the IP addresses of the eth0 (public) and eth1 (private cluster interconnect) NICs, as shown in Figure 9.29.
Figure 9.29. Oracle VM 3.x: RAC-Node-01—Network configuration for the RAC node VMs
3. Repeat for all of the RAC nodes.
Install and Set Up 12c Grid Infrastructure
As you can see from the preceding sections, setting up and installing Oracle RAC is all about doing an extensive amount of homework in the right way. To summarize the activity covered in the last few sections, ensure that the following virtual infrastructure is available and ready for deployment in a brand-new RAC setup:
• Dedicated virtual network for RAC has been created and configured.
• Virtualized shared ASM disks for the GRID1 ASM disk group are created and ready for use.
• VMs that will constitute the nodes of the RAC 12c have been created, set up, and configured.
• OEL 6.x is set up and configured on RAC-Node-01 using one of two alternative approaches: installing it from scratch or using the downloadable templates for OVM for x86.
• The VMs for the other nodes of the RAC 12c have been cloned from the RAC-Node-01 VM.
It’s now time to set up Oracle grid infrastructure and get the RAC 12c bird off the ground and in the air.
Step 9.36—In OVM Manager for x86, Start All the VMs for the Oracle RAC 12c Cluster
As shown in Figure 9.30, at this point you want to start the VMs for the RAC 12c cluster.
Figure 9.30. Oracle VM 3.x: Start up the VMs for the RAC 12c cluster
Step 9.37—In the Grid Infrastructure Staging Directory, Run the OUI to Set Up the Grid Infrastructure
Enter the information and make the selections in the Wizard Entry screens of the Oracle Universal Installer (OUI), as shown in Figures 9.31 through 9.45. In some cases, you will need to edit according to the specific needs of your organization.
Figure 9.31. Enter your support credentials.
1. Enter the My Oracle Support (MOS) credentials for support on software updates and patches (see Figure 9.31), or choose to skip them.
2. Select the Install and Configure Oracle Grid Infrastructure for a Cluster option (see Figure 9.32).
Figure 9.32. Select Install and Configure Oracle Grid Infrastructure for a Cluster.
3. Select the Configure a Flex Cluster option (see Figure 9.33).
Figure 9.33. Select Configure a Flex Cluster.
4. Select the appropriate product language(s).
5. Enter the required information for Single Client Access Name (SCAN) and Grid Naming Service (GNS) (see Figure 9.34).
Figure 9.34. Enter SCAN and GNS information.
6. Enter the relevant information for the RAC 12c nodes, including for HUB and LEAF nodes (see Figure 9.35).
Figure 9.35. Enter HUB and LEAF node information.
7. Enter the required information for establishing and testing SSH connectivity and user equivalence between all the RAC nodes, as in Figure 9.36.
Figure 9.36. Enter credentials for SSH connectivity.
8. Once step 7 is done, the system tests the SSH connectivity between the nodes (see Figure 9.37).
Figure 9.37. Password-less connectivity testing in progress
9. Specify the network interfaces for public, private cluster interconnect, and ASM. The system will validate those as well (see Figure 9.38).
Figure 9.38. The system validates the public and private interfaces across the cluster nodes.
10. Select the Configure Grid Infrastructure Management Repository option (see Figure 9.39).
Figure 9.39. Select the Configure Grid Infrastructure Management Repository option.
11. If you choose No, then the message in Figure 9.40 is displayed.
Figure 9.40. Grid Infrastructure Management Repository is not selected as an option.
12. Specify the ASM disks for the GRID1 ASM disk group with a HIGH redundancy level (see Figure 9.41).
Figure 9.41. Specify the ASM disks for the GRID1 ASM disk group.
13. Enter the passwords for the Oracle SYS and ASMSNMP DB users.
14. Select the Do Not Use Intelligent Platform Management Interface (IPMI) option.
15. Specify the OS groups for ASM.
16. Enter the Oracle BASE and HOME locations.
17. Enter the Oracle inventory location (see Figure 9.42).
Figure 9.42. Enter the inventory location.
18. Enter the root OS password or sudo access credentials to automatically run the root.sh configuration scripts (see Figure 9.43).
Figure 9.43. Enter the root password.
19. Generate and run any runfixup.sh scripts to remediate any prerequisite issues (see Figure 9.44).
Figure 9.44. Fix any issues.
20. Press Install to initiate the installation process for grid infrastructure (see Figure 9.45).
Figure 9.45. Initiate the installation process.
Software Solutions
Let’s start by looking at the software-based solutions.
Snap Clone using Solaris ZFS
Snap Clone using ZFS uses a single stock Solaris 11.1+ image, which can be either physical or virtual (note that it doesn’t use the Sun ZS3 appliance). It supports both NAS and SAN. If you are using SAN, then mount the logical unit numbers (LUNs) as raw disk and format with the ZFS filesystem. It’s important to note here that this does not require any snapshot or cloning licenses from the storage vendor, as these features are available for free.
Additional features provided with this solution include compression, de-duplication, I/O caching, and so on. If you also need high availability in this configuration, that can be handled externally either via Solaris Clusters or by using the high-availability features of the underlying hypervisor.
Figure 8.28 shows what the configuration looks like diagrammatically.
Figure 8.28. Snap Clone using ZFS
CloneDB Using dNFS
With CloneDB using Direct NFS (dNFS), you get the capability to create thin copies of a database from RMAN image copies. This approach uses the NFS v3 client (dNFS) that has been embedded in the database since release 11.2.0.3. Currently, this is supported for single-instance databases, but only on filesystems (i.e., ASM is not yet supported).
The advantages of this approach include the following:
• It’s easy to set up.
• No special storage software is needed.
• It works on all platforms.
• It’s both time efficient (instantaneous cloning) and space efficient (you can create multiple clones based on one backup).
• It uses dNFS to improve the performance, high availability, and scalability of NFS storage.
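For reference, dNFS itself is switched on at the database-home level with a relink of the Oracle binary; a minimal sketch (the prompt and hostname are illustrative):
[oracle@dbhost ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@dbhost lib]$ make -f ins_rdbms.mk dnfs_on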
Hardware Solution
As mentioned previously, the hardware solution for Snap Clone is available on a range of hardware, including Sun ZFS Storage Appliance, NetApp, and EMC. The EMC configuration is the most interesting one, as Snap Clone on EMC includes support for databases on ASM, so let’s look at that example in more detail.
Using Snap Clone on ASM and EMC storage provides the ability to create “live” thin clones of databases that are on ASM. A live clone is not snapshot based but rather a live clone of the database that can be within the same cluster or in a different one. Both single-instance and RAC are supported; supported versions are 10.2.0.5 and higher of the database and 11.2 and higher of the grid infrastructure code. This functionality works on both EMC VMAX (with TimeFinder VP Snap) and VNX storage appliances.
Figure 8.29 shows what this configuration looks like diagrammatically.
Figure 8.29. Snap Clone using ASM on EMC storage
Snap Clone Setup
Now that we’ve looked at the different setups architecturally, let’s walk through the setup for Snap Clone. In this example, we use CloneDB because it requires the most straightforward setup. Refer to the Cloud Administration Guide (found under Private Cloud Setup and Administration in the EM12c documentation) for more details on the other setups.
The steps to set up Snap Clone are as follows:
1. Update the relevant plugins (or install them if you don’t already have them). You will need the following plugins (the versions mentioned here are for the 12.1.0.4.3 environment used in this example):
• Enterprise Manager for Cloud version 12.1.0.9.0
• Enterprise Manager for Oracle Cloud Framework version 12.1.0.2.0
• Enterprise Manager for Storage Management version 12.1.0.5.0
• Enterprise Manager for Oracle Database version 12.1.0.7.0
2. Set up the relevant roles, users, PaaS infrastructure zones, and database pools, as described in Chapter 5.
3. Create a database profile and service template so that self-service users can create thin clones as they require.
Let’s look at the details of step 3.
Creating a Database Profile
Creating a database profile to use Snap Clone is a little bit different from creating the profiles we looked at earlier.
1. Start from the same place (Setup → Cloud → Database → Data Sources → Data Profiles, and then click Create). In this case, we are going to create the profile using an RMAN Database Image, as shown in Figure 8.30. Note the message that also appears on this screen (at the bottom) that says Snap Clone is not enabled. This rather spurious message occurs because we don’t have any relevant hardware installed that we can use to create Snap Clone. However, it can be safely ignored because we are using CloneDB in this example.
Figure 8.30. Creating a database profile for Snap Clone, step 1
2. In this example, the database from which the profile is created is running in ARCHIVELOG mode. That means that on the next screen, shown in Figure 8.31, the backup can be done in online mode. A directory for the backup to take place in must also be specified.
Figure 8.31. Creating a database profile for Snap Clone, step 2
3. Steps 3 and 4 of the wizard are the same as in previous database profile creations in this chapter, so for brevity, they are omitted here. Now we can look at using that profile in the service template creation. Again, we start that from Setup → Cloud → Database → Service Templates, and then click the Create button. The main difference here is that once we select the database profile we just created, there will now be a Create CLONEDB option, as you can see in Figure 8.32.
Figure 8.32. Creating a service template for Snap Clone, step 1
4. The important information we need to provide to use Snap Clone with CloneDB is on the next screen. We need to tell the wizard a location that is shared using NFS. In this example, the location is set up using /u02/copy-on-write as an NFS share, as shown in Figure 8.33 (a sketch of how such a share might be exported follows this list).
Figure 8.33. Creating a service template for Snap Clone, step 2
5. Again, the remaining steps of the wizard are the same as in the previous service template examples, so they are omitted here. At the end, however, you should see a message that the service template has been created successfully, as shown in Figure 8.34. Once that is complete, the self-service user can select this template from the Self Service Catalog, just as shown in previous examples in earlier chapters.
Figure 8.34. Creating a service template for Snap Clone, step 3
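For completeness, the NFS share referenced in step 4 is set up outside Enterprise Manager in the usual way. A minimal sketch, assuming the directory is exported from a host named nfshost and mounted on the database server (host names and export options are illustrative):
# On nfshost, in /etc/exports
/u02/copy-on-write  *(rw,no_root_squash)
[root@nfshost ~]# exportfs -ra
# On the database server
[root@dbserver ~]# mount -t nfs nfshost:/u02/copy-on-write /u02/copy-on-write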
Summary
In this chapter, we looked at the two main types of clones built in a cloud environment using Enterprise Manager 12c—full clones and snap (or thin) clones. A full clone involves taking a complete or full copy of an existing database, which can be done in one of two ways: by taking an RMAN backup or issuing an RMAN DUPLICATE command. A snap clone takes advantage of copy-on-write technology, available through both software and hardware solutions, to rapidly create space-efficient clones of large databases. These clones contain only changed blocks; unchanged blocks are read from the test master database the clone is built from. As a result, many snap clones can be taken from a single test master without using anywhere near the amount of storage the test master takes up.
From here you can see a number of regions—General, Request Status, Incidents, Top Resource Providers, Top Requested Services, and Top Charge Trends (data is displayed in the Charge Trends region only if you have enabled chargeback), so you get a good overview of the health of your cloud infrastructure. If you want to drill down to look at more details of what service instances you have available, you can click on the number to the right of Service Instances. This link will show you exactly what service instances are defined in your cloud environment, their status, the service type, the resource provider, the service template, and even who owns it (see Figure 7.2).
Figure 7.2. Service instances
From here you can also click on the name of the service instance (for example, crm00001) to drill into the home page for that particular target, or you can simply select the row containing that target to do the following tasks:
• Start or stop the instance. Which one of these options is available depends on the status the instance is currently in. For example, if the service instance is already started, the Start option will remain grayed out.
• Examine the history of the service instance by clicking the History icon. This feature shows all the operations that have taken place against this particular service instance since it was created.
• Delete the service instance if it is no longer required. Obviously, caution should be taken before deciding to perform this operation.
You may find that the two most important links are the ones that show failed requests (the red slice of the pie chart in Figure 7.1) and the heat map (which you get to by clicking View Map on the top right of the screen shown in Figure 7.1). The failed request statuses are ones you’ll want to investigate further to see what went wrong. Sometimes, this task is as simple as an attempt to start a database that’s already started, but at other times, you’ll find more complex issues that need further investigation and follow-up. You should also check here for requests that failed due to insufficient quota or resources and investigate whether more quota or resources need to be allocated.
The heat map is a way to graphically view which resource providers (such as platform as a service [PaaS] infrastructure zones and database pools) are using the most resources. Even in the relatively simple environment we’re using as a demonstration, the heat map (see Figure 7.3) shows that the host allocation for the Salt Lake City PaaS infrastructure zone, at 66 percent utilization, is starting to reach a level that may require adding more PaaS infrastructure zones. In more complex environments, you may see resource providers that show up as red. If you select that resource provider, you’ll see what the problem is in the Metric Details region shown below the heat map, and you can drill into the Value link to discover more details.
Figure 7.3. Heat map from a simple cloud environment
There is also a menu of items that you can select from using the Oracle Cloud menu on the top left of the Cloud Home page. Several items in this menu are useful from an administration and management perspective.
Members → Topology
This menu item shows you a graphical representation of how the different items in your cloud environment are related. It is particularly useful when the environment becomes more complicated, as it allows you to see at a glance how items are related to each other. By clicking on one of the items, you can see specific details listed under the Metric History region on the bottom right (see Figure 7.4 for an example).
Figure 7.4. Configuration Topology
Resource Providers, Service Templates, Service Instances
Each of these menu items takes you to a list of the different entities. From here, it is easy to select and edit each particular entity. You can also get to each of these pages by clicking the links next to each one in the General region of the Cloud Home page. From each page, you are also able to create new items or delete existing ones. Figure 7.5 shows an example from the Resource Providers page.
Figure 7.5. The Resource Providers page
Requests
If you click on Show All under Requests, you can see a complete list of the requests that have been made in your cloud environment, as shown in Figure 7.6. From here you can do the following:
• Select an individual request and look at the details for it.
• Reschedule the request if it hasn’t already been executed.
• Delete the request.
Figure 7.6. The Requests page
If you click on Requests → Dashboard, you’ll be taken to a more graphical representation of the request trends over the past seven days, as shown in Figure 7.7. Note that you can click on the Settings spanner on the top left to change this time period to one more suitable to you (Figure 7.7, for example, shows the last 31 days’ worth of requests). This page is particularly useful for showing request trends to management.
Figure 7.7. The Request Dashboard
The Cloud Adviser
The information we’ve covered so far in this chapter is really the bread and butter, the day-to-day life of a cloud administrator. However, one of the main advantages of moving to a cloud environment is that it frees up a lot of the DBA’s time. No longer do you need to focus on the mundane tasks of creating databases, looking after backups, and so on. The time that you used to spend working on these tasks is now available for you to take on a much more important role—the cloud adviser.
What is the cloud adviser’s role? Well, it’s a role that many of us have been performing for years—the senior DBA role—expanded into the cloud environment. Since most of our business users do not have the technical background to be able to make informed decisions on technical issues, they still come to the DBA to provide that advice. Those technical issues fall into a variety of broad topics, which the rest of this chapter examines.
Security
Security is of paramount importance in the cloud world, because if you get the security model wrong in the cloud environment, you may not have an environment to administer and manage for very long! We’ve all no doubt heard of the security breaches that seem to be increasingly common in the cloud environment. Security breaches might involve stolen usernames and passwords from cloud providers and, more seriously, stolen data. Oracle database technology is used the world over to secure large volumes of confidential data—financial data, employee data, customer information, and so much more. Securing that data, and ensuring it remains secure, is an essential part of the DBA’s role. Following are some of the tools that can be used to assist with this task:
• Transparent Data Encryption (TDE): TDE is a way to stop would-be attackers from bypassing the database and reading sensitive information directly from the files that make up the Oracle database. This protection is extended to any backups that have been made of the database as well. TDE allows encryption of individual columns right up to entire tablespaces, so you can choose how much of your data to secure.
• Data Masking and Subsetting: As you move to the cloud environment, one approach that is often taken for security reasons is to migrate development and test environments to the cloud first, as these environments are often seen as lower risk. However, at the same time, you want to ensure that your developers and quality assurance (QA) engineers have environments that are as close to production as possible so they have a realistic environment to test performance on. Those two requirements may at first glance seem antithetical, but Enterprise Manager contains Data Masking and Subsetting tools that can be used to address the issue. Oracle Data Masking and Subsetting allows you to sanitize copies of production data for development and QA environments so that your production data is not compromised. It also allows you to use subsets of the data in cases where you simply do not have enough storage to take a full copy of your production data into these environments.
Of course, you need to secure your data against more than just external attacks. Most studies of security breaches indicate that these breaches are more often the work of insiders than of external hackers (regardless of the fact that the most reported-on breaches are external). So from a security perspective, you must have tools and technologies that can be used to both track and secure against internal attacks as well. Some of the Oracle products that help to address this space include Transparent Data Encryption, Oracle Audit Vault, and Oracle Database Vault.
Server and Database Sizing
In theory, scaling a database in the cloud world is as simple as requesting more processing capability. In reality, we all know that service providers, be they an internal IT department or an external hosting provider, will have limitations as far as the hardware they have available to them. One of the value-add roles that a DBA can perform is to determine the amount of resources required now to meet current needs as well as to scale to meet future requirements. Enterprise Manager 12c has a number of tools that can be used to assist the DBA in this arena, including the following:
• Consolidation Planner: Consolidation Planner is a tool that can be used to plan a cloud architecture for server consolidation or for any other consolidation model. It allows you to identify source and destination targets and applicable technical and functional constraints, such as where the application can reside. You can also use Consolidation Planner to determine whether source environments can be consolidated onto hardware that you have not yet purchased, providing greater confidence that machines you are looking at purchasing will be suitable for the workloads you plan to put on them.
• Chargeback: As we covered in Chapter 6, “Metering and Chargeback in Enterprise Manager 12c,” chargeback is an accounting tool that provides a way to aggregate the enormous amount of metrics data that Enterprise Manager collects, a way to report to consumers of those metrics how much they have used of those particular resources, and, if you have set it up to do so, a way for the IT department to charge those consumers for the resources they have used. However, its usefulness is not restricted to these functions. Once the chargeback application is configured and running, you accumulate a history of how many resources have been used by specific applications or databases, which in turn allows you to project some form of capacity planning figures for future growth. Obviously, you still need to talk to the application owners to get an idea if any new functionality may impact those figures, but at the very least, you have a starting point that can be used to predict future resource requirements.
Performance Tuning
As mentioned at the beginning of this chapter, contacting the DBA is all too often the first port of call for users complaining about performance, even before they can identify the database as the source of the performance issue. Of course, if we’ve ensured server and database sizing has been done correctly, we shouldn’t see too much in the way of performance issues. Nevertheless, we all know that server and database sizing is far too often a reactive rather than proactive exercise, so we still need to undertake the same sorts of performance problem identification and resolution exercises that we are familiar with from the non-cloud environment. This topic is far too large to cover here; indeed, whole books have been written about Oracle performance tuning, so we won’t go into more detail now. Suffice it to say, you will still need to perform this role but with the added complexity of not necessarily knowing beforehand what other environments may be directly impacting yours. Consequently, you should become familiar with the pages covered at the beginning of this chapter, particularly the Cloud Home. It’s there that you can get an overview of what cloud instances have been created and can drill down to see all the different service entities that have been created and removed.
Summary
Oracle Enterprise Manager Cloud Control 12c should be your tool of choice when monitoring, administering, and managing the Oracle database cloud environment. As you’ve seen over the past few chapters, it provides the capability to set up both schema as a service and database as a service as well as to configure the chargeback application. Additionally, as we’ve seen in this chapter, Enterprise Manager 12c allows you to perform the ongoing management and administration of the cloud environment you have set up. One final step remains as an often undertaken task in the cloud environment: cloning of databases from one environment to another. That’s the subject of our next chapter.
Chapter 9. Virtualizing RAC 12c (DB Clouds) on Oracle VM—Grid Infrastructure
Cloud computing promises to usher in a new era for the corporate IT universe. Every day, we hear that the cloud within the typical IT organization is inevitable and bound to happen, if not already present in some form; it is reasonable to conclude that cloud computing is only a matter of when, not if.
The cloud computing winds of change have been blowing for quite a few years now, recently picking up momentum at Oracle Corporation—almost every key Oracle product focuses on cloud computing as a paradigm. This focus is evident in the “c” (for “cloud”) that is appended to the current release number of Oracle products. Real Application Clusters (RAC) is no exception to Oracle’s turn toward cloud computing.
Virtualization, which is widely implemented today, is the foundation of cloud computing. What is virtualization, and what part does it play in the cloud universe? This chapter and the next provide insights, recommendations, and a step-by-step guide on setting up virtualized RACs, with an emphasis on virtualization, cloud computing, Oracle Virtual Machine (OVM) for x86, and Oracle Enterprise Manager Cloud Control 12c (EM12c). These virtualized RACs can then be utilized within the framework of database as a service (DBaaS) for rapid and easy deployment as database cloud services.
The overlap between the material in this and other chapters is intended to reiterate important concepts as well as to present the topics in the proper context.
Following is a summary of topics presented in this chapter:
• Building Oracle database clouds: The necessary ingredients
• What is virtualization?
• What are virtual machine (VM) monitors (hypervisors)?
• Types of hypervisors
• Types of virtualization
• Oracle VM for x86—360 degrees
• Xen—Synopsis and overview
• Oracle VM—Overview and architecture
• Oracle VM templates—Synopsis and overview
• Oracle VM 3.x—A brief introduction
• Setting up virtualized Oracle RAC clusters using Oracle VM: Alternative approaches
• Set up, install, and configure 12c virtualized RAC clusters: Step-by-step setup and configuration
This chapter guides you, step by step, through installing, setting up, and configuring a virtualized RAC 12c using OVM for x86. The next chapter takes a similar approach, with one major difference—the underlying virtualization technology (hypervisor) is Oracle VirtualBox instead of OVM for x86. This information gives you the choice of using either virtualization technology or both technologies to set up virtualized Oracle RAC 12c database clouds. An overview of cloud computing and the role and relevance of virtualization from the perspective of cloud computing are also covered in both chapters. All respective versions of the hypervisors used are the latest and greatest at the time of the publication of this book.
Database Clouds Based on RAC—The Necessary Ingredients
Cloud computing can be described as “fill-in-the-blank as a service”: for example, infrastructure as a service (IaaS), platform as a service (PaaS), and database as a service. A more detailed overview of cloud computing, its various flavors, paradigms, prevalent trends, and a whole lot more are presented in the next chapter.
How do we plan for, set up, build, and configure Oracle database clouds? The short answer is OVM for x86, EM12c, and RAC. Together they make up the true database cloud solution from Oracle, especially if you are planning your own private database clouds behind your corporate firewalls. OVM for x86 is used interchangeably with OVM in this chapter and the next.
An overview of virtualization is presented in this chapter with a follow-up section on cloud computing in the next chapter.
Virtualization—360 Degrees
Virtualization abstracts physical entities in the IT universe and presents them as software-defined (virtual) equivalents. Here are some salient features and key points about virtualization:
• Virtualization is the foundation stone in the cloud computing era.
• Virtualization is an inevitability waiting to happen in the IT universe, one that you just can’t avoid: the sooner you embrace it, the better off you are.
• Virtualization can be summarized as an abstraction layer.
• Virtualization has proved to be a game-changer, resulting in unprecedented server utilization.
• Virtualization enables agile availability of resources to the end user, thereby shaving considerable time from the IT provisioning life cycle.
• Virtualization in the modern day can be characterized as the gateway and roadmap to secure and elastic corporate IT scalability.
• Virtualization implies a fantastic alternative to physical reality—the possibilities are endless.
• The alternative to virtualization consists of physical hosts with a lot of useless spare capacity, resulting in many resources being underutilized.
• Although Oracle database administrators (DBAs) were slow to adopt virtualization for their databases, the trend has finally gained momentum and reached critical mass.
What Are VM Monitors (Hypervisors)?
A VM monitor, also known as a hypervisor, enables OS kernels to run and coexist as guests, thereby enabling virtualization at the OS level. Hypervisors are responsible for allocation and coordination of CPU, memory, I/O, peripheral resources, and so on, to the guest VMs.
Types of Hypervisors
There are two types of hypervisor:
• Type 1: This type is known as a native or, more commonly, a bare-metal hypervisor. It installs directly on bare-metal hardware and does not require a host OS on which to be installed. Examples are VMware ESX/vSphere, Microsoft Hyper-V, Xen, and OVM. Bare-metal hypervisors are enterprise-grade hypervisors that enable cloud computing as it is widely known and understood today.
• Type 2: This type is known as a hosted hypervisor and is installed on an already existing OS on the system: examples are OVM VirtualBox, VMware Server, and VMware Workstation. Hosted hypervisors are mostly utilized for personal use, for example, learning new technologies and colocating various OS families on your laptop.
Here are some key points and salient features about hypervisors:
• A hypervisor is at the lowest level of the stack from a technology standpoint.
• A hypervisor enables agility and rapid deployment of resources within the IT space.
• Hypervisors result in increased efficiency by virtue of elastic resource consolidation.
Following are some benefits and advantages of implementing hypervisors:
• Increased resource utilization
• Fault tolerance and high availability
• Isolation and multitenant support
• Support for a wide range of popular OS families
Types of Virtualization
There are three types of virtualization prevalent in the industry today (the first two categories are explained in the following sections, as they are relevant to this chapter):
• Paravirtualization
• Hardware-assisted/full virtualization
• Partial virtualization
What Is Paravirtualization?
In paravirtualization, guest VMs use a special hypercall application binary interface (ABI) in a modified OS for performance and simplicity. The modified OS communicates with the hypervisor, and tasks are relocated from the virtual domain to the host domain.
OVM implements this type of virtualization; the Oracle Linux and Red Hat Enterprise Linux families are supported as paravirtualized guests with OVM.
Paravirtualization is generally somewhat faster than hardware-assisted virtualization, although neither approach is slow in practice.
What Is Hardware-Assisted/Full Virtualization?
Hardware-assisted virtualization is also known as full or native virtualization and requires CPU support.
This type of virtualization enables unmodified guest OS kernels to run within a simulated hardware infrastructure but generally is relatively slower than paravirtualization.
The Microsoft Windows and Oracle Solaris families are supported on OVM as hardware-virtualized (full) guests with paravirtualized drivers.
OVM for x86—360 Degrees
OVM for x86 is a Type 1 hypervisor based on Xen, the de facto standard open-source hypervisor. Xen is a mainstream technology, widely used by dominant cloud computing providers such as Amazon and Rackspace, as well as by Oracle’s own public cloud. OVM provides both server virtualization and management components. OVM 3.x is based on Xen 4.x and has been significantly enhanced to be an industrial-grade product capable of configuring, administering, managing, and supporting thousands of servers hosting both Oracle and non-Oracle applications. Some of the advances in this relatively new version include dynamic resource scheduling (DRS), high availability–enabled server pools (clusters), and dynamic power management. OVM is augmented with the Virtual Assembly Builder and Template Builder components, which combine to form a complete virtualization picture within the OVM family.
Following are some key points about OVM’s capabilities and advantages. As with any technology, OVM has its share of nuances, most of which can be addressed through proper configuration and implementation best practices.
• Server load-balancing
• Centralized network and storage management
• Physical to virtual (P2V) and virtual to virtual (V2V) conversion
• Web services API
• Support for Windows, Linux, and Solaris as guest OS families
• Agility and fast deployment with OVM templates and Oracle Virtual Assembly Builder
• Web-based GUI management
• OVM zones—multiple server and storage pools
• High availability and live migration with OVM server pools
• Running mixed heterogeneous workloads within a single consolidated machine
• Very fast—delivers near-native performance
• Simple and easy installation—low learning curve
Another nice point is that OVM is free; you pay only for optional, low-cost support.
Note
OVM is the only virtualization offering for the x86 architecture that is certified with all major Oracle products.
Xen—Synopsis and Overview
Xen originated at Cambridge University and is the leading open-source, industry-standard hypervisor. Ian Pratt founded XenSource, the company behind Xen, which was later acquired by Citrix in 2007. Xen 4.x is the latest version as well as the underlying version for OVM 3.x.
The Xen hypervisor is the virtualization base of Amazon EC2, the market leader in the cloud computing IaaS service model. Oracle is part of the Xen Advisory Board and contributes to its development. Other members of the Xen Advisory Board include Citrix, Hewlett Packard, IBM, Intel, Novell, Oracle, and Red Hat.
OVM—Overview and Architecture
OVM is made up of two components:
• OVM Server, the Xen-based open source hypervisor component
• OVM Manager, the Java-based thin-client GUI management component
OVM Server
OVM server is the actual hypervisor component based on Xen. It installs on bare-metal x86 hardware and does not require a preinstalled OS.
OVM boots a small 64-bit management domain called DOM0, which is used for assigning, distributing, and coordinating CPU, I/O, and other resources. Guest VMs are created and configured as DOMUs.
OVM Manager
Based on WebLogic Server, OVM Manager is a Java-based management server component with a Web-based UI. It utilizes Oracle Database as a management repository and comes prepackaged with a free XE edition of Oracle Database, which can later be migrated to other editions of the Oracle Database server family.
Recently, with OVM 3.2.x, MySQL is now also supported as a repository database option. OVM agent processes are used on each OVM server for communication and management purposes. OVM uses server pools (or clusters) to group virtualization resources: each server pool encompasses one or more OVM servers.
What Are OVM Templates?
OVM templates, also known as golden images, are factory-packaged, preinstalled, and preconfigured VM images containing complete software products with built-in best practices, ready to go. They provide reusability and full-stack implementation. All major Oracle products—for example, Oracle Database server, Fusion Middleware, Enterprise Linux, and RAC—are available as OVM templates.
OVM templates are a vehicle for significantly reducing installation and configuration costs across the IT landscape.
Methods of Creating OVM Templates
The following methods can be used to create OVM templates:
• P2V conversion
• Create VM templates from existing VM images
• Create VM templates from just enough operating system (JeOS)
OVM Builder
Oracle Virtual Assembly Builder provides a structured process for consolidating appliances into cohesive, reusable assemblies: you can rapidly create and configure full-stack topologies and provision them onto virtualized appliances.
OVM Assembly Builder is used to create dedicated VMs called software appliances and facilitates deployment of an entire application as a single, automatically configured unit. This tool can significantly simplify the building of private clouds by building VM assemblies and deploying OVM templates.
OVM 3.x—A Brief Introduction
OVM 3.x, the latest release, takes scalability to a whole new level and adds a long list of new features; OVM 3.3 is based on Xen 4.x.
Being highly scalable, this latest version of OVM includes many enhancements:
• A feature-rich Web-based UI, improved backup and recovery capability
• Simplified VM deployment, administration, and management with 64-bit DOM0
• Application-driven virtualization, up to 128 virtual CPUs, and 1 TB memory per guest VM
• Jobs-based VM operations
• Dynamic resource management
• Dynamic power management
• Comprehensive network and storage management
• Multiple-template cloning in a single step
• Over 100 factory-packaged best-practices built into OVM templates
• A centralized configuration and management solution in the form of OVM Manager.
In other words, 3.x is truly an enterprise-grade, groundbreaking release. OVM 3.x is fully managed through the browser-based UI provided by OVM Manager.
If you haven’t already embarked on this journey, now is a great time to upgrade and migrate your OVM infrastructures from 2.x to 3.x.
OVM 3.x: High Availability–Enabled OVM Ecosystem
OVM provides broad-based high availability across the virtualization ecosystem in the form of high availability–enabled server pools (or clusters) on shared storage.
Salient features include:
• Live migration of guest VMs
• Automatic failover/restart of guest VMs in case of server failure
• Oracle Cluster File System 2 (OCFS2)—high availability on a cluster file system
• Server pool load balancing—using a best-fit algorithm, places guest VMs on the most appropriately loaded VM server
• Clustered OVM Manager
Virtualized RAC Using OVM Templates—Approach 1
This approach is the easiest and fastest way to set up your own virtualized RAC database clusters as part of virtualized Oracle RAC database clouds. Simply download the OVM for x86 templates for RAC, install them, and in less than an hour, you have your own virtualized RAC up and running. This methodology is truly revolutionary and illustrates the beauty and power of agile provisioning of complex infrastructures and applications in cloud environments using virtualized templates.
While this approach is not covered in complete detail, the main utility used to set up, configure, and deploy a virtualized RAC from OVM templates, DeployCluster, is presented in the following section.
Use DeployCluster to Configure and Deploy the Virtualized RAC
This section walks you through using the DeployCluster tool to rapidly configure and deploy a virtualized RAC database cluster. Listing 9.1 shows the example run.
Listing 9.1. DeployCluster Tool
[root@bsfmgr01 deploycluster]# ./deploycluster.py -u admin -p password -M bsfracovm1,bsfracovm2 -N bsfrac64.ini
Oracle RAC OneCommand (v1.1.2) for Oracle VM - deploy cluster -
(c) 2011-2012 Oracle Corporation
(com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1200:v1.1.2) - v2.6.6 -
bsfmgr01.bsflocal.com (x86_64)
Invoked as root at Sat Sep 22 20:10:04 2012 (size: 37600, mtime: Sun Aug 5 12:37:58 2012)
Using: ./deploycluster.py -u admin -p **** -M bsfracovm1,bsfracovm2 -N bsfrac64.ini
INFO: Attempting to connect to Oracle VM Manager...
INFO: Oracle VM Client (3.1.1.399) protocol (1.8) CONNECTED (tcp) to
Oracle VM Manager (3.1.1.305) protocol (1.8) IP (192.168.1.51) UUID
(0004fb0000010000da73c3bcce15ca2e)
INFO: Inspecting /home/oracle/ovm3/deploycluster/bsfrac64.ini for number of nodes defined....
INFO: Detected 2 nodes in: /home/oracle/ovm3/deploycluster/bsfrac64.ini
INFO: Located a total of (2) VMs;
2 VMs with a simple name of: ['bsfracovm1', 'bsfracovm2']
INFO: Verifying all (2) VMs are in Running state
INFO: VM with a simple name of "bsfracovm1" is in Running state...
INFO: VM with a simple name of "bsfracovm2" is in Running state...
INFO: Detected that all (2) VMs specified on command have (5) common shared disks
between them (ASM_MIN_DISKS=5)
INFO: The (2) VMs passed basic sanity checks and in Running state, sending cluster details
as follows:
netconfig.ini (Network setup): /home/oracle/ovm3/deploycluster/bsfrac64.ini
buildcluster: yes
INFO: Starting to send cluster details to all (2) VM(s).....
INFO: Sending to VM with a simple name of "bsfracovm1"....
INFO: Sending to VM with a simple name of "bsfracovm2"......
INFO: Cluster details sent to (2) VMs...
Check log (default location /u01/racovm/buildcluster.log) on build VM (bsfracovm1)...
INFO: deploycluster.py completed successfully at 20:10:19 in 15.7 seconds (00m:15s)
Logfile at: /home/oracle/ovm3/deploycluster/deploycluster2.log
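If you want to follow the cluster build after deploycluster.py hands off, you can tail the build log on the first node. The log path comes from the output above; the hostname in the prompt is only an assumption about how the first RAC VM is reachable in your environment:
[root@bsfracovm1 ~]# tail -f /u01/racovm/buildcluster.log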
Figure 9.1 and Figure 9.2 each show parts of a sample run of the DeployCluster tool. On your monitor display, INFO: (Figure 9.1) and [ OK ] (Figure 9.2) should be green: all green means all good to go!
Figure 9.1. An example run of the DeployCluster tool for setting up RAC using Oracle VM for x86 templates
Figure 9.2. DeployCluster tool: Configuring and setting up RAC using Oracle VM for x86 templates
Note
At the time of writing, OVM for x86 templates for RAC were only available for up to version 11gR2 and not for 12c. This is the recommended approach for setting up Oracle RAC as database clouds; however, because of the absence of OVM templates for 12c, we have included the longer alternative approach outlined in the next section. The other rationale for including this approach is that it enables you to learn the specific actions required to set up and configure RAC 12c from scratch.
Set Up and Configure a Virtualized RAC Database Cloud—Approach 2
This section takes you through an alternative, step-by-step approach to setting up your own virtualized RAC 12c in OVM for x86.
Note
This chapter assumes that you already have an OVM 3.x server pool in an up-and-running state. If this is not the case, please refer to the OVM documentation to set up OVM 3.x. The following sections assume that you are familiar with basic RAC concepts (presented in earlier chapters). Also, this chapter and the next chapter are structured in a way that enables you to set up RAC database clouds in the comfort of your own home for learning purposes. Please note that the steps are identical to corporate RAC setups; however, the infrastructure is pared down to enable you to make use of hardware available at home.
The following hardware and software were used for setting up the OVM server pool for this example:
• OVM Manager and EM12c:
• (Qty: 1) 64-bit Intel x86 machine with 8 GB RAM
• OVM servers for server pool:
• (Qty: 3) 64-bit Intel x86 machines with 16 GB RAM each
• Shared storage:
• (Qty: 1) 64-bit Intel x86 machine with 8 GB RAM:
• Openfiler with 1 TB disk space available on iSCSI
Roadmap to a Virtualized RAC 12c Cluster: High-Level Steps
Following are the high-level steps to build your own virtualized RAC–based database cloud:
1. Set up and configure the required hardware and software for an OVM server pool on shared storage.
2. Prepare and plan—Do your homework.
3. Install and set up grid infrastructure.
4. Install and set up non-shared database home(s).
5. Create a RAC database.
6. Configure and set up the RAC database as a monitored target in EM12c.
All of the preceding steps are detailed, elaborated on, and/or executed in the following sections of this chapter and the next one, with several alternative options presented for some of the involved steps.
While all of the following steps apply equally to corporate environments, they are written in such a way that you can set up a virtualized database cloud environment in your home, thereby learning how to install, set up, configure, and monitor RAC with minimal hardware.
OVM: Prerequisites, Preparation, and Planning
Ensure that the following virtual infrastructure is available and ready for deployment in a brand-new RAC setup.
Step 9.1—Set Up a Server Pool in OVM for x86
This chapter assumes that you already have a basic server pool in OVM for x86 complete with OVM Manager 3.x and EM12c release 2 set up, configured, and ready to go for deploying an Oracle RAC 12c cluster (the hardware/software configuration used in this chapter and the next one is outlined in the preceding section). In case you need help with this process, rest assured that it is a straightforward process with a low learning curve if you follow the appropriate OVM for x86 installation and setup documentation.
The following sections detail the steps involved in configuring OVM for RAC 12c.
Step 9.2—Configure Network Time Protocol
Press the Push to All Servers button for Network Time Protocol (NTP).
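To confirm that an OVM server actually picked up the pushed NTP configuration, you can check the peer status from a shell on that server (assuming the ntpq utility from the ntp package is present on the OVM server, which is typically the case):
[root@bsfovs03 ~]# ntpq -p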
Step 9.3—Create the Required Network for RAC 12c
Continue with the following steps:
1. Choose OVM Manager → Networking → Networks → Create (+) Button.
2. Select the Create a Hybrid Network with Bonds/Ports and VLANs option.
3. Enter the name and description of the OVM network. Select the Virtual Machine option.
4. Select the relevant ports.
5. Select the relevant VLAN segments.
6. Select the appropriate IP addressing scheme. Enter the IP addresses, net masks, and bonding options if applicable.
As shown in Figure 9.3, the new OVM network has been successfully created and is ready to be deployed and used.
Figure 9.3. Oracle VM 3.x: Dedicated network for Oracle RAC 12c
Step 9.4—Create the Required Shared Virtual Disks for the GRID1 ASM Disk Group
To create the disks, follow these steps:
1. Choose OVM Manager → Repositories → Select OVS Repository → Virtual Disks → Create (+) Button:
2. Create a virtual disk with the following options:
• Size: 15 GB
• Allocation type: Sparse allocation
• Shareable: Yes
3. Repeat the preceding process for all five GRID1 Automatic Storage Management (ASM) disks.
Note
For production environments, it is highly recommended to have physical block devices presented as virtual disks for the various ASM disk groups.
As shown in Figure 9.4, all of the virtualized shareable ASM disks for the GRID1 disk groups have been created and are now ready for use.
Figure 9.4. Oracle VM 3.x: Virtualized disks for GRID1 ASM disk group for RAC 12c
Note
Step 9.5 has two alternative approaches, both of which are explained in quite a bit of detail next. Each step has further substeps, which are illustrated as well.
Step 9.5, Approach 1
Step 9.5, Approach 1, is illustrated in the following substeps.
Step 9.5, Approach 1a—Create the VM for RAC Server Node 1 from Scratch Using an ISO Boot Image to Install OEL 6.x
To create the Oracle Enterprise Linux (OEL) 6.x VM for RAC using an ISO boot image, follow these steps (as shown in Figure 9.5):
1. Choose OVM Manager → Servers and VMs → Select Server Pool → Create Virtual Machine Button.
2. Select the Create a New VM option.
3. As shown in Figure 9.5, select and enter the following options for the RAC-Node-01 VM:
• Server pool
• Server pool repository
• VM description
• High Availability: Unchecked
• Operating system: Oracle Linux 6
• Domain type: Xen PVM
• Start policy: Start on best server
• Memory (MB): 2,048 (the documented minimum is 4,096; however, if you are short on physical memory and are building the RAC for learning purposes, 2,048 can suffice)
• Max memory (MB): 8,192
• Processors: 2
• Max processors: 4
• Priority: 50
• Processor cap %: 100
Figure 9.5. Oracle VM 3.x: Creating a VM for RAC-Node-01
4. Select Network and then press the Add VNIC button twice to create two virtual network interface cards (VNICs) for the RAC-Node-01 VM (see Figure 9.6).
Figure 9.6. Oracle VM 3.x: Specifying virtual network interface cards for the public and private cluster interconnect networks required for the RAC
5. Choose Next to move on to Setup Networks and Arrange Disks.
6. Select and enter the following options for the VM disks (see Figure 9.7):
• CD/DVD
• Virtual disk—Press the Create (+) button.
Figure 9.7. Oracle VM 3.x: Specifying virtualized disk/optical media for RAC-Node-01
7. Select the imported ISO for Linux 6.x.
8. Select and enter the following options:
• Repository
• Virtual disk name
• Description
• Shareable: Unchecked
• Size: 25 GB
• Allocation type: Sparse allocation
9. Select the ISO for OEL 6.x (see Figure 9.8).
Figure 9.8. Oracle VM 3.x: Specify the ISO for Oracle Enterprise Linux 6.x 64-bit x86
10. As shown in Figure 9.9, repeat the preceding process to add/select the following disks:
• DATA1 ASM disk group:
• Qty: 6
• Individual disk size: 50GB
• RECO1 ASM disk group:
• Qty: 1
• Individual disk size: 50GB
Select the Disk boot option. Press the Finish button to create the RAC-Node-01 VM.
Figure 9.9. Oracle VM 3.x: Guest VM for RAC-Node-01
Step 9.5, Approach 1b—Import the OEL 6.x x86-64 ISO into the OVM Repository
To import the OEL 6.x x86-64 ISO image into the OVM repository, follow these steps:
1. Go to OVM Manager → Repositories → Select OVM Repository → ISOs → Import ISO Button.
2. Select and enter the following (see Figure 9.10):
• Server
• ISO download location: ftp://oracle:password@192.168.2.20/software/OEL63_x86_64/V33411-01.iso (Replace the IP address, username, and password with your own.)
Figure 9.10. Oracle VM 3.x: Specify the FTP location for the OEL 6.x 64-bit x86 VM template
Note
Ensure that the Very Secure File Transfer Protocol Daemon (VSFTPD) server (FTP service) is set up correctly and that the ISO is available at the desired location and has the correct permissions.
Note that the status of the import process shows as In Progress, with a message showing Download Virtual CDROM....
Monitor the progress of the ISO import process in an SSH session to one of the Oracle VM servers (OVS) to which the OVS repository is connected.
[root@bsfovs03 ISOs]# pwd
/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/ISOs
[root@bsfovs03 ISOs]# ls -l
total 276480
-rw-r--r-- 1 root root 282066944 Feb 24 12:57 0004fb0000150000ba1fd09b4e2bd98c.iso
Keep checking periodically after brief intervals to monitor the progress of the ISO import process.
[root@bsfovs03 ISOs]# pwd
/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/ISOs
[root@bsfovs03 ISOs]# ls -l
total 2890752
-rw-r--r-- 1 root root 2959081472 Feb 24 13:04 0004fb0000150000ba1fd09b4e2bd98c.iso
At this point, the OEL 6.x ISO has been successfully imported (see Figure 9.11). Start up the VM, boot it from the OEL 6.x ISO, and go through the steps of setting up and installing Oracle Enterprise Linux 6.x.
Figure 9.11. Oracle VM 3.x: Imported ISO for Oracle Enterprise Linux 6.x 64-bit x86 is now ready for use as a bootable virtual CD/DVD drive
Step 9.5, Approach 2
Step 9.5, Approach 2, is illustrated in the following sub-steps.
Step 9.5, Approach 2a—Create the VM for RAC Server Node 1 Using an OVM OEL 6.x Template
To create the VM for RAC Server Node using an OVM OEL 6.x template, follow these steps:
1. Download the OEL 6.x OVM 3.x template from https://edelivery.oracle.com/linux.
2. Unzip the ISO and make it available via FTP.
3. Go to OVM Manager → Repositories → Select OVM Repository → Assemblies → Import VM Assembly Button (see Figure 9.12). Enter the following:
• Server
• VM assembly download location
Note that the status of the VM assembly import process shows as In Progress with a message showing “Downloading Assembly...” and then another one showing “Unpacking Template....”
Figure 9.12. Oracle VM 3.x: Specify the FTP location of the Oracle VM assembly for Oracle Enterprise Linux 6.x 64-bit x86
Monitor the progress of the VM assembly import process within an SSH session to one of the OVS servers to which the OVS repository is connected.
[root@bsfovs03 11941bfbbc]# pwd
/OVS/Repositories/0004fb0000030000d0cb473db1b6a1ae/Assemblies/11941bfbbc
[root@bsfovs03 11941bfbbc]# ls -l
total 617472
drwxr-xr-x 2 root root 3896 Feb 24 15:20 imports
-rw-r--r-- 1 root root 631541760 Oct 10 16:17 package.ova
drwxr-xr-x 2 root root 3896 Feb 24 15:21 unpacked
Keep checking periodically after brief intervals.
As shown in Figure 9.13, the OVM assembly for OEL 6.3 x86-64 has been imported successfully and is now ready for use.
Figure 9.13. Oracle VM 3.x: Oracle VM assembly for Oracle Enterprise Linux 6.x 64-bit x86
Step 9.5, Approach 2b—Create the OEL 6.3 x86-64-PVM OVM Template from the Newly Created Assembly into the OVM Repository
To create the OEL 6.3 x86-64-PVM OVM template from the newly created assembly in the OVM repository, follow these steps:
1. Choose OVM Manager → Repositories → Select OVM Repository → VM Assemblies → Select VM Assembly → Create VM template.
2. Enter and select the following (see Figure 9.14):
• Assembly VMs
• VM template name
• Description
Figure 9.14. The VM template name and description
3. As shown in Figure 9.15, the OEL 6.3 x86_64 OVM template has been successfully created and is now ready for deployment.
Figure 9.15. The OEL 6.3 x86_64 OVM template
Step 9.5, Approach 2c—Edit the Newly Created OEL 6.3 x86-64-PVM OVM Template
To edit the newly created OEL 6.3 x86-64-PVM OVM template, follow these steps:
1. Choose OVM Manager → Repositories → VM Templates → Select VM Template → Edit.
2. Modify the following options as shown in the following screens:
• Max memory (MB): 8,192
• Memory (MB): 4,096
• Max processors: 8
• Processors: 4
• Enable High Availability: Unchecked (OVM HA is incompatible with Oracle RAC)
• Networks: Add/specify the appropriate network(s)
• Boot order: Disk
• Virtual disks:
• Add virtual disk for Oracle software binaries: 25GB
Step 9.5, Approach 2d—Create Clone Customizer for the RAC Node OVM Template
To create a clone customizer for the RAC node OVM template, follow these steps:
1. Choose OVM Manager → Repositories → VM Templates → Select VM Template.
2. Press the Create Clone Customizer button.
3. Specify the name and description of the new clone customizer for the RAC 12c cluster node VMs.
4. Modify the Clone Type to Thin Clone (see Figure 9.16). This is a fast and efficient way to create new VM clone machines.
Figure 9.16. Oracle VM 3.x: Specify the virtual disks for RAC-Node-01
5. Specify the network settings for the clone customizer (if any custom modifications are required).
Step 9.5, Approach 2e—Create the RAC-Node-01 VM from the VM Template Using the Advanced Clone Customizer Method
To create the RAC-Node-01 VM from the VM template using the Advanced Clone Customizer method, do the following:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Create Virtual Machine.
2. Select the Clone from an existing VM Template option (see Figure 9.17). Enter and select the following options:
• Clone count: 1
• Repository: Select OVS repository
• VM template: Select the OEL 6.3 x86_64 template
• Server pool: Select the appropriate server pool
• Description
Figure 9.17. Cloning from the existing template
3. Press the Finish button to create the RAC-Node-01 VM. The finished product is shown in Figure 9.18.
Figure 9.18. The VM for RAC-Node-01 created from the Oracle Enterprise Linux 6.x 64-bit x86 template
Step 9.6—Edit the Newly Created VM for RAC-Node-01
To edit the VM for RAC-Node-01, follow these steps:
1. Choose OVM Manager → Servers and VMs → Select Server Pool → Select Virtual Machine → Edit Virtual Machine.
2. Modify the following options as shown in the configuration tab (Figure 9.19):
• Operating system: Oracle Linux 6
• Max memory (MB): 8,192
• Max processors: 8
• Networks: Specify the appropriate network(s)
• High Availability: Unchecked (OVM HA is incompatible with Oracle RAC)
• Boot order: Disk
• Start policy: Start on best server
• Virtual disks:
• System (virtual disk): Add another disk for Oracle binaries: 25GB
• GRID1 ASM disk group:
Qty: 5 Disks
Individual disk size: 15 GB
• DATA1 ASM disk group:
Qty: 6 Disks
Individual disk size: 50 GB
• RECO1 ASM disk group:
Qty: 1 Disk
Individual disk size: 50 GB
Figure 9.19. Configure the newly created VM
3. On the Network tab, add two VNICs (see Figure 9.20), one each for the public and private cluster interconnects.
Figure 9.20. Add the required VNICs
4. Finally, on the Disks tab, attach the shared virtualized disks for the ASM disk groups (see Figure 9.21).
Figure 9.21. Oracle VM 3.x: Attach the shared virtualized disks for the GRID1, DATA1, and RECO1 ASM disk groups
Step 9.7—Start Up the RAC-Node-01 VM
To start up the RAC-Node-01 VM, follow these steps:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Select Virtual Machine → Start Virtual Machine.
2. Press the Launch Console button to start the VNC console window (see Figure 9.22).
Figure 9.22. Oracle VM 3.x: Boot RAC node—First-boot interview/configuration process
3. Configure the various options in the new VM in the first-boot interview process:
• System hostname: For example, bsfrac01.bsflocal.com (12c RAC-Node-01)
• Specify VNIC device: For example, ETH0
• Boot protocol (static/DHCP): Static
• Activate VNIC on boot: YES
• IP address of the interface: For example, 192.168.2.40 (for the public network; ensure that this is an unused IP address. If you have DNS server(s), register the IP address/hostname with them)
• Netmask: For example, 255.255.255.0
• IP address of gateway: For example, 192.168.2.1
• IP address(es) of DNS server(s): For example, 192.168.2.1
• OS root password: *******
Set Up and Configure RAC-Node-01
The following sections explain how to set up and configure Node 01 for RAC 12c.
Step 9.8—Set the Network Configuration of the Private Cluster Interconnect VNIC
To set the network configuration of the private cluster interconnect VNIC, issue the following commands:
[root@bsfrac01 network-scripts]# pwd
/etc/sysconfig/network-scripts
[root@bsfrac01 network-scripts]# cp ifcfg-eth0 ifcfg-eth1
[root@bsfrac01 network-scripts]# vi ifcfg-eth1
You have new mail in /var/spool/mail/root
[root@bsfrac01 network-scripts]# cat ifcfg-eth1
DNS1=192.168.2.1
GATEWAY=192.168.2.1
NETMASK=255.255.255.0
IPADDR=192.168.3.40
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=no
DEVICE=eth1
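After saving the file, bring the new interface up (or restart networking) so that the private interconnect address takes effect, and confirm it with ifconfig:
[root@bsfrac01 network-scripts]# ifup eth1
[root@bsfrac01 network-scripts]# ifconfig eth1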
Step 9.9—Modify /etc/hosts File to Include the Relevant Entries for RAC 12c
To modify the /etc/hosts file to include the relevant entries for RAC 12c, open the file and edit it, as in the following:
[root@bsfrac01 network-scripts]# vi /etc/hosts
127.0.0.1 localhost.localdomain localhost localhost4
::1 localhost6.localdomain6 localhost6
# PUBLIC IP Addresses of 12c RAC Cluster
192.168.2.40 bsfrac01 bsfrac01.bsflocal.com
192.168.2.41 bsfrac02 bsfrac02.bsflocal.com
192.168.2.42 bsfrac03 bsfrac03.bsflocal.com
192.168.2.43 bsfrac04 bsfrac04.bsflocal.com
192.168.2.44 bsfrac05 bsfrac05.bsflocal.com
# SCAN IP Addresses of 12c RAC Cluster
192.168.2.70 bsfrac-scan bsfrac-scan.bsflocal.com
192.168.2.71 bsfrac-scan bsfrac-scan.bsflocal.com
192.168.2.72 bsfrac-scan bsfrac-scan.bsflocal.com
# Virtual IP Addresses of 12c RAC Cluster
192.168.2.60 bsfrac-vip01 bsfrac-vip01.bsflocal.com
192.168.2.61 bsfrac-vip02 bsfrac-vip02.bsflocal.com
192.168.2.62 bsfrac-vip03 bsfrac-vip03.bsflocal.com
192.168.2.63 bsfrac-vip04 bsfrac-vip04.bsflocal.com
192.168.2.64 bsfrac-vip05 bsfrac-vip05.bsflocal.com
# Private Cluster Interconnect IP Addresses of 12c RAC Cluster
192.168.3.40 bsfrac-priv01 bsfrac-priv01.bsflocal.com
192.168.3.41 bsfrac-priv02 bsfrac-priv02.bsflocal.com
192.168.3.42 bsfrac-priv03 bsfrac-priv03.bsflocal.com
192.168.3.43 bsfrac-priv04 bsfrac-priv04.bsflocal.com
192.168.3.44 bsfrac-priv05 bsfrac-priv05.bsflocal.com
Note
Single Client Access Name (SCAN) listener IP information is included in the /etc/hosts file. The SCAN IPs should be registered with the appropriate DNS server(s).
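A quick sanity check that the new entries resolve on the node (getent consults /etc/hosts as well as any configured DNS servers):
[root@bsfrac01 ~]# getent hosts bsfrac01 bsfrac-vip01 bsfrac-priv01 bsfrac-scan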
Step 9.10—Check for Space Requirements
Ensure that enough /tmp space is available to support RAC 12c:
[root@bsfracvx1 ~]# df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_bsfracvx1-lv_root
26G 5.3G 20G 22% /
Step 9.11—Disable the Linux Software Firewall
Next, disable the Linux software firewall.
Note
This step is optional and should be exercised with caution. Only do it if you have ancillary hardware/software firewalls in place in the corporate landscape.
[root@bsfrac01 ~]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
4 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
5 REJECT all -- 0.0.0.0/0 0.0.0.0/0
reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 REJECT all -- 0.0.0.0/0 0.0.0.0/0
reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
[root@bsfrac01 ~]# service iptables off
Usage: iptables {start|stop|restart|condrestart|status|panic|save}
[root@bsfrac01 ~]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
[root@bsfrac01 ~]# chkconfig iptables off
[root@bsfrac01 ~]# service ip6tables status
Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all ::/0 ::/0 state RELATED,ESTABLISHED
2 ACCEPT icmpv6 ::/0 ::/0
3 ACCEPT all ::/0 ::/0
4 ACCEPT tcp ::/0 ::/0 state NEW tcp dpt:22
5 REJECT all ::/0 ::/0
reject-with icmp6-adm-prohibited
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 REJECT all ::/0 ::/0
reject-with icmp6-adm-prohibited
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
[root@bsfrac01 ~]# service ip6tables stop
ip6tables: Flushing firewall rules: [ OK ]
ip6tables: Setting chains to policy ACCEPT: filter [ OK ]
ip6tables: Unloading modules: [ OK ]
[root@bsfrac01 ~]# chkconfig ip6tables off
Step 9.12—Configure and Restart NTPD Client
Edit/configure the /etc/ntp.conf file and restart the Network Time Protocol Daemon (NTPD) server on the RAC node VM.
$ vi /etc/ntp.conf
# Modify the following line to reflect the NTP servers with which the time
# will be synchronized
server 192.168.2.20
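Oracle Clusterware’s prerequisite checks expect ntpd to be started with the -x (slewing) option when NTP is used for time synchronization. A minimal sketch of the corresponding /etc/sysconfig/ntpd entry, followed by a restart, is shown here (the exact default OPTIONS line may differ on your build):
$ vi /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
$ service ntpd restart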
Step 9.13—Partition, Format, and Mount /u01 on the 25-GB Local Virtual Hard Disk
To partition, format, and mount /u01 on the 25-GB local virtual hard disk, start by doing the following:
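If the 25-GB local virtual disk (presented to the guest as /dev/xvdb, per the mount command below) has not yet been partitioned and formatted, a minimal sketch along the lines of Step 9.27 is:
[root@bsfrac01 /]# fdisk /dev/xvdb     # create a single primary partition (n, p, 1, defaults, w)
[root@bsfrac01 /]# mkfs -t ext4 /dev/xvdb1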
[root@bsfrac01 /]# mkdir /u01
You have new mail in /var/spool/mail/root
[root@bsfrac01 /]# mount /dev/xvdb1 /u01
[root@bsfrac01 /]# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/xvda2 9985 8087 1392 86% /
tmpfs 1940 1 1940 1% /dev/shm
/dev/xvda1 99 50 45 53% /boot
/dev/xvdb1 25195 172 23743 1% /u01
Make the mount point persistent by modifying the /etc/fstab file:
#
# /etc/fstab
# Created by anaconda on Fri Sep 7 08:14:40 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
LABEL=/ / ext4 defaults 1 1
LABEL=/boot /boot ext4 defaults 1 2
/dev/xvda3 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/xvdb1 /u01 ext4 defaults 0 0
Step 9.14—Disable the SELINUX Option
Disable the SELINUX option by modifying the following file:
[root@bsfrac01 /]# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
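The configuration file change takes effect at the next reboot; to also relax SELinux immediately in the running system (to permissive mode), you can optionally run:
[root@bsfrac01 /]# setenforce 0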
Step 9.15—Install VSFTPD (FTP Server)
Install VSFTPD (FTP server) by performing the following:
[root@bsfrac01 ~]# yum install vsftpd
Note
This step is optional.
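If you do install it, start the service and enable it at boot with the standard service commands:
[root@bsfrac01 ~]# service vsftpd start
[root@bsfrac01 ~]# chkconfig vsftpd on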
Step 9.16—Install X Window System Desktop
Install the X Window System desktop by performing the following steps:
[root@bsfrac01 /]# yum groupinstall "X Window System" desktop
Loaded plugins: security
Setting up Group Process
Package 1:xorg-x11-xauth-1.0.2-7.1.el6.x86_64 already installed and latest version
Package hal-0.5.14-11.el6.x86_64 already installed and latest version
Package 1:dbus-1.2.24-7.0.1.el6_3.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package NetworkManager.x86_64 1:0.8.1-34.el6_3 will be installed
.
.
.
openssh-server x86_64 5.3p1-84.1.el6
ol6_latest 298 k
rhn-check noarch 1.0.0-87.0.6.el6
ol6_latest 60 k
rhn-client-tools noarch 1.0.0-87.0.6.el6
ol6_latest 492 k
rhn-setup noarch 1.0.0-87.0.6.el6
ol6_latest 96 k
Transaction Summary
=====================================================================================================================================================================================================
Install 265 Package(s)
Upgrade 19 Package(s)
Total download size: 123 M
Is this ok [y/N]: y
Downloading Packages:
(1/284): ConsoleKit-x11-0.4.1-3.el6.x86_64.rpm | 20 kB 00:00
(2/284): DeviceKit-power-014-3.el6.x86_64.rpm | 90 kB 00:00
.
.
.
Dependency Updated:
libreport.x86_64 0:2.0.9-5.0.1.el6_3.2 libreport-cli.x86_64 0:2.0.9-5.0.1.el6_3.2 libreport-plugin-kerneloops.x86_64 0:2.0.9-5.0.1.el6_3.2
libreport-plugin-logger.x86_64 0:2.0.9-5.0.1.el6_3.2 libreport-plugin-
mailx.x86_64 0:2.0.9-5.0.1.el6_3.2 libreport-plugin-reportuploader.x86_64 0:2.0.9-
5.0.1.el6_3.2
libreport-python.x86_64 0:2.0.9-5.0.1.el6_3.2 nspr.x86_64 0:4.9.2-
0.el6_3.1 nss.x86_64 0:3.13.6-2.0.1.el6_3
nss-sysinit.x86_64 0:3.13.6-2.0.1.el6_3 nss-tools.x86_64 0:3.13.6-2.0.1.el6_3 nss-util.x86_64 0:3.13.6-1.el6_3
openssh.x86_64 0:5.3p1-84.1.el6 openssh-clients.x86_64
0:5.3p1-84.1.el6 openssh-server.x86_64 0:5.3p1-84.1.el6
rhn-check.noarch 0:1.0.0-87.0.6.el6 rhn-client-tools.noarch
0:1.0.0-87.0.6.el6 rhn-setup.noarch 0:1.0.0-87.0.6.el6
Complete!
Note
The output of the X Window System desktop installation is very long and has been abbreviated.
Modify the /etc/inittab file to start with a GUI login and reboot the system:
# id:3:initdefault:    # Change option 3 to 5 as shown in the following line
id:5:initdefault:
Step 9.17—Reboot RAC-Node-01 for All of the Preceding Setups/Configurations to Take Effect
To reboot, issue the following command:
[root@bsfrac01 network-scripts]# shutdown -r now
After a successful reboot, you will arrive at the login screen (see Figure 9.23).
Figure 9.23. Oracle VM 3.x: RAC-Node-01—OS/Linux 6.x 64-bit x86 login screen
Step 9.18—Verify the Network Settings after the Node Reboots
To verify the network settings after the reboot, do the following:
[root@bsfrac01 /]# ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:21:F6:00:00:01
inet addr:192.168.2.40 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::221:f6ff:fe00:1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:49103 errors:0 dropped:117 overruns:0 frame:0
TX packets:12982 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:33876842 (32.3 MiB) TX bytes:939705 (917.6 KiB)
Interrupt:57
eth1 Link encap:Ethernet HWaddr 00:21:F6:00:00:00
inet addr:192.168.3.40 Bcast:192.168.3.255 Mask:255.255.255.0
inet6 addr: fe80::221:f6ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:25180 errors:0 dropped:117 overruns:0 frame:0
TX packets:198 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1316336 (1.2 MiB) TX bytes:12163 (11.8 KiB)
Interrupt:58
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:930 (930.0 b) TX bytes:930 (930.0 b)
Oracle Software Preinstallation Steps on the RAC-Node-01 VM
Perform the following checks (steps 9.19–9.28) to satisfy the prerequisites for RAC 12c on Node 01.
Step 9.19—Check Space Requirement on /tmp
To check that the space requirement has been met, do the following (10 GB is recommended):
[oracle@bsfrac01 Database]$ df -h /tmp
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 9.8G 7.9G 1.4G 86% /
Step 9.20—Create the Required and Relevant OS Groups
Issue these commands to create the OS groups:
[root@bsfrac01 /]# groupadd -g 54329 asmadmin
[root@bsfrac01 /]# groupadd -g 54328 asmoper
[root@bsfrac01 /]# groupadd -g 54327 asmdba
[root@bsfrac01 /]# groupadd -g 54324 backupdba
[root@bsfrac01 /]# groupadd -g 54325 dgdba
[root@bsfrac01 /]# groupadd -g 54326 kmdba
[root@bsfrac01 /]# groupadd -g 54321 oinstall
[root@bsfrac01 /]# groupadd -g 54322 dba
[root@bsfrac01 /]# groupadd -g 54323 oper
Note
Some of the preceding steps are optional, and whether they should be done depends on the user’s job function; for example, DBA, DMA, storage/system administrator, or other role.
Step 9.21—Create the Oracle and Grid OS Users as the Oracle DB HOME Software Owners and Grid Infrastructure HOME Software Owners and Set Their Initial Passwords
To create the oracle and grid OS users as the Oracle DB HOME Software Owners and Grid Infrastructure HOME Software owners, respectively, and set their initial passwords, issue these commands:
[root@bsfrac01 /]# useradd -u 54321 -g oinstall -G dba,asmdba oracle
[root@bsfrac01 /]# useradd -u 54322 -g oinstall -G asmadmin,asmdba grid
[root@bsfrac01 /]# passwd grid
Changing password for user grid.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@bsfrac01 /]# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
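You can quickly confirm that the users and their group memberships were created as intended:
[root@bsfrac01 /]# id oracle
[root@bsfrac01 /]# id grid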
Step 9.22—Create the Optimal Flexible Architecture (OFA) Directory Structure for RAC 12c
As the root OS user, run the following commands:
[root@bsfrac01 /]# mkdir -p /u01/app/12.1.0/grid
[root@bsfrac01 /]# mkdir -p /u01/app/grid
[root@bsfrac01 /]# mkdir -p /u01/app/oracle
[root@bsfrac01 /]# chown -R grid:oinstall /u01
[root@bsfrac01 /]# chown oracle:oinstall /u01/app/oracle
[root@bsfrac01 /]# chmod -R 775 /u01/
Step 9.23—Observe/Verify the Required and Relevant Permissions of the Created OFA Directory Structure
Check the required and relevant permissions set for the OFA directory structure:
[root@bsfrac01 oracle]# ls -l /u01
total 4
drwxrwxr-x 5 grid oinstall 4096 Feb 25 23:29 app
[root@bsfrac01 oracle]# ls -l /u01/app/
total 12
drwxrwxr-x 3 grid oinstall 4096 Feb 25 23:29 12.1.0
drwxrwxr-x 2 grid oinstall 4096 Feb 25 23:29 grid
drwxrwxr-x 2 oracle oinstall 4096 Feb 25 23:29 oracle
Step 9.24—Set Up and Configure the NTPD
Configure the NTPD:
[root@bsfrac01 ~]# service ntpd start
Starting ntpd: [ OK ]
[root@bsfrac01 ~]# chkconfig ntpd on
Step 9.25—Turn Off and Unconfigure the Avahi daemon
Do the following to turn off and unconfigure the Avahi daemon:
[root@bsfrac01 ~]# service avahi-daemon stop
Shutting down Avahi daemon: [ OK ]
[root@bsfrac01 ~]# chkconfig avahi-daemon off
Step 9.26—Install Packages/Options for Linux Kernel
Within OEL, using the GUI software installer (see Figure 9.24) or the rpm/yum command-line utilities, ensure that the following packages for OEL 6.x x86_64 are installed at the listed versions or later. Additionally, download and install any ancillary packages that help with the performance of the RAC.
Figure 9.24. Oracle VM 3.x: RAC-Node-01—OS/Linux 6.x 64-bit x86 custom package/rpm installation
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
ksh
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686
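On Oracle Linux 6 with the public yum repository configured, the oracle-rdbms-server-12cR1-preinstall package can pull in most of these package prerequisites (and set a number of kernel parameters) in a single step. Note that it also creates default users and groups, which may overlap with the manual steps above, so decide whether it fits your build before using it:
[root@bsfrac01 ~]# yum install oracle-rdbms-server-12cR1-preinstall
Either way, you can verify the installed versions with rpm -qa, as shown next.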
[root@bsfrac01 ~]# rpm -qa glibc*
glibc-common-2.12-1.80.el6_3.5.x86_64
glibc-devel-2.12-1.80.el6_3.5.x86_64
glibc-2.12-1.80.el6_3.5.x86_64
glibc-headers-2.12-1.80.el6_3.5.x86_64
[root@bsfrac01 ~]# rpm -qa libstdc++*
libstdc++-4.4.6-4.el6.x86_64
libstdc++-devel-4.4.6-4.el6.x86_64
[root@bsfrac01 ~]# rpm -qa gcc*
gcc-c++-4.4.6-4.el6.x86_64
gcc-4.4.6-4.el6.x86_64
[root@bsfrac01 ~]# rpm -qa ksh*
ksh-20100621-16.el6.x86_64
[root@bsfrac01 ~]# rpm -qa make*
make-3.81-20.el6.x86_64
[root@bsfrac01 ~]# rpm -qa sysstat*
sysstat-9.0.4-20.el6.x86_64
[root@bsfrac01 ~]# rpm -qa libgcc*
libgcc-4.4.6-4.el6.x86_64
[root@bsfrac01 ~]# rpm -qa libaio*
libaio-devel-0.3.107-10.el6.x86_64
libaio-0.3.107-10.el6.x86_64
[root@bsfrac01 ~]# rpm -qa binutils*
binutils-2.20.51.0.2-5.34.el6.x86_64
[root@bsfrac01 ~]# rpm -qa compat-lib*
compat-libcap1-1.10-1.x86_64
compat-libstdc++-33-3.2.3-69.el6.x86_64
Step 9.27—Create Primary Partitions for All the GRID1, DATA1, and RECO1 ASM Disk Groups
Next, create primary partitions for all the GRID1, DATA1, and RECO1 ASM disk groups:
[root@bsfrac01 ~]# fdisk /dev/xvdc
device contains neither a valid DOS partition table, nor Sun, SGI, or OSF disklabel.
Building a new DOS disklabel with disk identifier 0x6a917f21.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-6527, default 6527):
Using default value 6527
Command (m for help): w
The partition table has been altered!
Note
Repeat the preceding steps for all the ASM disks, including grid infrastructure disks.
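Partitioning a dozen disks interactively is tedious. One rough shortcut, shown purely as a sketch, is to feed the same fdisk answers to each remaining device from the shell; the device names below are assumptions based on the createdisk commands in Step 9.29, so double-check each one before writing a partition table:
[root@bsfrac01 ~]# for d in xvdd xvde xvdf xvdg xvdh xvdi xvdj xvdk xvdl xvdm xvdn; do echo -e "n\np\n1\n\n\nw" | fdisk /dev/$d; done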
Step 9.28—Verify the Partition Structures for the Underlying Disks in the GRID1, DATA1, and RECO1 ASM Disk Groups
Verify the partition structures for the underlying disks:
[root@bsfrac01 /]# fdisk -l
Step 9.29—Configure ASM Library on RAC-Node-01
Configure the ASM library by performing the following steps:
[root@bsfrac01 dev]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl+C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmdba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK01 /dev/xvdc1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK02 /dev/xvdd1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK03 /dev/xvde1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK04 /dev/xvdf1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk GRID1DISK05 /dev/xvdg1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK01 /dev/xvdh1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK02 /dev/xvdi1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK03 /dev/xvdj1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK04 /dev/xvdk1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK05 /dev/xvdl1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk DATA1DISK06 /dev/xvdm1
Writing disk header: done
Instantiating disk: done
[root@bsfrac01 dev]# oracleasm createdisk RECO1DISK06 /dev/xvdn1
Writing disk header: done
Instantiating disk: done
Note
You can also choose to set up and configure UDEV rules for the ASM disks instead of ASMLib; a rough sketch follows the disk listing below.
Verify that all of the ASM disks have been created:
[root@bsfrac01 dev]# /etc/init.d/oracleasm listdisks
DATA1DISK01
DATA1DISK02
DATA1DISK03
DATA1DISK04
DATA1DISK05
DATA1DISK06
GRID1DISK01
GRID1DISK02
GRID1DISK03
GRID1DISK04
GRID1DISK05
RECO1DISK06
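As mentioned in the note above, UDEV rules are an alternative to ASMLib for fixing the ownership and permissions of the ASM devices. The following is only a rough sketch: the file name is a common convention rather than a requirement, the device name is taken from the fdisk example above, and the ownership mirrors the choices made in the oracleasm configuration. Place one line per ASM partition in /etc/udev/rules.d/99-oracle-asmdevices.rules on each node:
KERNEL=="xvdc1", OWNER="oracle", GROUP="asmdba", MODE="0660"
Then reload the rules, for example with udevadm control --reload-rules followed by start_udev on OEL 6.x.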
Step 9.30—Download and Stage the Oracle Software Binaries
Download, unzip, and stage the Oracle software grid and database software binaries:
[oracle@bsfrac01 Database]$ unzip -q linuxx64_database_12.1BETA_130131_1of2.zip
Repeat the unzip process for all the software binary zip files, and verify the unzipped and staged directory structure.
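For example (the name of the second zip file is inferred from the first and may differ in your download), followed by a quick check of the staged directory:
[oracle@bsfrac01 Database]$ unzip -q linuxx64_database_12.1BETA_130131_2of2.zip
[oracle@bsfrac01 Database]$ ls -l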
Step 9.31—Establish a Save Point: Make a Backup Copy of the Ready-to-Go OVM in the Form of an OVM Template
Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM → Right-click → Clone or Move.
Step 9.32—Remove (Temporarily) the Shared Disks and Clone the Ready-to-Go OVM
Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM, and click the Edit button.
In the Edit dialog, temporarily remove all the shared ASM virtual disks, as shown in Figure 9.25. You need to do this; otherwise, clones of these disks will be created unnecessarily during the ensuing cloning process for all the RAC nodes.
Figure 9.25. Oracle VM 3.x: RAC-Node-01—Local virtual disks after temporarily removing the virtualized shared disks for Automatic Storage Management
Step 9.33—Clone the Other Nodes of the RAC Cluster from the Ready-to-Go Node 01 VM
To clone the other nodes of the RAC, follow these steps:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM and right-click. Choose Clone or Move (see Figure 9.26).
Figure 9.26. Oracle VM 3.x: Clone the other RAC nodes from the VM from Node 01
2. Select the Create a clone of this VM option.
3. Select and enter the following options:
• Target server pool
• Description
• Clone to a: VM
• Clone count: Specify the clone count for the RAC nodes
As shown in Figure 9.27, all the nodes for a five-node RAC have been successfully created and are ready for further configuration.
Figure 9.27. Oracle VM 3.x: VMs for all nodes of the RAC have been created and cloned
Step 9.34—Reattach the Shared Disks to the RAC Node OVMs
Next reattach the shared disks to the RAC node OVMs:
1. Go to OVM Manager → Servers and VMs → Select Server Pool → Select Oracle VM, and click the Edit button (see Figure 9.28).
Figure 9.28. Oracle VM 3.x: RAC-Node-01—Reattach the virtualized shared disks for Automatic Storage Management
2. Select the shared disks.
3. Repeat the preceding steps for all the RAC node VMs.
Step 9.35—Start Each Guest VM and Edit the Network Settings
To edit the network settings on each cloned RAC node guest VM, follow these steps:
1. Go to System → Preferences → Network Settings → IPv4 Settings.
2. Modify the IP addresses of the eth0 (public) and eth1 (private cluster interconnect) NICs, as shown in Figure 9.29.
Figure 9.29. Oracle VM 3.x: RAC-Node-01—Network configuration for the RAC node VMs
3. Repeat for all of the RAC nodes.
Install and Set Up 12c Grid Infrastructure
As you can see from the preceding sections, setting up and installing Oracle RAC is all about doing an extensive amount of homework in the right way. To summarize the activity covered in the last few sections, ensure that the following virtual infrastructure is available and ready for deployment in a brand-new RAC setup:
• Dedicated virtual network for RAC has been created and configured.
• Virtualized shared ASM disks for the GRID1 ASM disk group are created and ready for use.
• VMs that will constitute the nodes of the RAC 12c have been created, set up, and configured.
• OEL 6.x is set up and configured on RAC-Node-01 using one of two alternative approaches: installing it from scratch or using the downloadable templates for OVM for x86.
• The VMs for the other nodes of the RAC 12c have been cloned from the RAC-Node-01 VM.
It’s now time to set up Oracle grid infrastructure and get the RAC 12c bird off the ground and in the air.
Step 9.36—In OVM Manager for x86, Start All the VMs for the Oracle RAC 12c Cluster
As shown in Figure 9.30, at this point you want to start the VMs for the RAC 12c cluster.
Figure 9.30. Oracle VM 3.x: Start up the VMs for the RAC 12c cluster
Step 9.37—In the Grid Infrastructure Staging Directory, Run the OUI to Set Up the Grid Infrastructure
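Log in as the grid software owner, change to the grid infrastructure staging directory, and launch the installer; you can optionally run the bundled Cluster Verification Utility first to catch prerequisite problems early. The staging directory name and node list below are assumptions based on this chapter’s setup:
[grid@bsfrac01 grid]$ ./runcluvfy.sh stage -pre crsinst -n bsfrac01,bsfrac02,bsfrac03,bsfrac04,bsfrac05 -verbose
[grid@bsfrac01 grid]$ ./runInstaller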
Enter the information and make the selections in the Wizard Entry screens of the Oracle Universal Installer (OUI), as shown in Figures 9.31 through 9.45. In some cases, you will need to edit according to the specific needs of your organization.
Figure 9.31. Enter your support credentials.
1. Enter the My Oracle Support (MOS) credentials for support on software updates and patches (see Figure 9.31), or choose to skip them.
2. Select the Install and Configure Oracle Grid Infrastructure for a Cluster option (see Figure 9.32).
Figure 9.32. Select Install and Configure Oracle Grid Infrastructure for a Cluster.
3. Select the Configure a Flex Cluster option (see Figure 9.33).
Figure 9.33. Select Configure a Flex Cluster.
4. Select the appropriate product language(s).
5. Enter the required information for Single Client Access Name (SCAN) and Grid Naming Service (GNS) (see Figure 9.34).
Figure 9.34. Enter SCAN and GNS information.
6. Enter the relevant information for the RAC 12c nodes, including for HUB and LEAF nodes (see Figure 9.35).
Figure 9.35. Enter HUB and LEAF node information.
7. Enter the required information for establishing and testing SSH connectivity and user equivalence between all the RAC nodes, as in Figure 9.36.
Figure 9.36. Enter credentials for SSH connectivity.
8. Once step 7 is done, the system tests the SSH connectivity between the nodes (see Figure 9.37).
Figure 9.37. Password-less connectivity testing in progress
9. Specify the network interfaces for public, private cluster interconnect, and ASM. The system will validate those as well (see Figure 9.38).
Figure 9.38. The system validates the public and private interfaces across the cluster nodes.
10. Select the Configure Grid Infrastructure Management Repository option (see Figure 9.39).
Figure 9.39. Select the Configure Grid Infrastructure Management Repository option.
11. If you choose No, then the message in Figure 9.40 is displayed.
Figure 9.40. Grid Infrastructure Management Repository is not selected as an option.
12. Specify the ASM disks for the GRID1 ASM disk group with a HIGH redundancy level (see Figure 9.41).
Figure 9.41. Specify the ASM disks for the GRID1 ASM disk group.
13. Enter the passwords for the Oracle SYS and ASMSNMP DB users.
14. Select the Do Not Use Intelligent Platform Management Interface (IPMI) option.
15. Specify the OS groups for ASM.
16. Enter the Oracle BASE and HOME locations.
17. Enter the Oracle inventory location (see Figure 9.42).
Figure 9.42. Enter the inventory location.
18. Enter the root OS password or sudo access credentials to automatically run the root.sh configuration scripts (see Figure 9.43).
Figure 9.43. Enter the root password.
19. Generate and run any runfixup.sh scripts to remediate any prerequisite issues (see Figure 9.44).
Figure 9.44. Fix any issues.
20. Press Install to initiate the installation process for grid infrastructure (see Figure 9.45).
Figure 9.45. Initiate the installation process.