Wednesday 28 February 2018

Oracle Cloud part 6

Figure 8.14. Profile creation output

Creating a Service Template from a Database Profile
Once you have created a database profile, either by taking an RMAN backup or using an existing RMAN backup, the next step in making this profile available to the self-service user is to create a service template. It is these service templates that will populate the catalog that the self-service user can select from, which is why creating a service template is such an important step.
To create a service template using either of the profiles we’ve just discussed, you must be logged into EM12c as a user who has been granted the EM_SSA_ADMINISTRATOR role.
1. To create a template, follow the same path as you did previously to get to Data Sources (Setup → Cloud → Database), only in this case you select the next option in the list, Service Templates, as shown in Figure 8.15.
Figure 8.15. Accessing service templates

2. This takes you to the Database: Service Templates page, where you will see a list of the service templates that already exist (provided other service templates have already been created, of course!). Probably the most useful thing you can see on this screen is the number of service instances. This will tell you how many times the service template has been used to provision a database. To start the service template creation wizard, simply click the Create button, as shown in Figure 8.16.
Figure 8.16. Starting the Service Template wizard

3. The first step of the wizard, the General page, enables you to specify the following information:
• Name: The name of the service template will be selected by the self-service user from the self-service catalog, so it is wise to make it a recognizable name and to minimize the amount of technical information in the name. The reason for this second point is that self-service users are generally business users, so a name such as “3-node RAC Cluster using 11.2.0.3 with Data Guard” is completely meaningless to them.
• Description: Expand on the name a little more here. You might include the database name the clone is taken from, for example, as self-service users will normally recognize that name.
• Source Identification: This region is where you specify whether to create the database using a profile (or using an existing database if you haven’t created profiles). If you are creating a database using a profile, this is also where you specify which profile to use as well as the profile version (you can either specify the latest version or allow the user to select a version at request time). Once you specify a profile, the Shared Location field at the bottom of the page will be auto-populated (note: if you do not have an NFS mount or similar for the shared location, you will need to manually copy the files to this location before moving to step 2 of the wizard).
• Database Definition: You can either create a single-instance or RAC database using this wizard. The Database Definition region is where you specify this option along with the number of nodes if this is a RAC database. You can also specify the database SID here (or allow the user to select one at request time), as well as the domain name. In addition, you can allow the user to create standby databases, along with the main database being created, by clicking the Enable Standby Database checkbox.
• Pools and Zones: Service templates can be configured to provision databases in more than one zone. This region is where you specify one or more zones and associated resource pools that the service template can provision databases into. Once you have selected the zone and pool, the host target will be populated for you in the Reference Host field.
Figure 8.17 shows a completed example of this General page using one of the profiles we created previously. You can simply click Next to move to the next step of the wizard.
Figure 8.17. Step 2 of the Template Creation wizard

4. On the next screen, the reference host information is prepopulated for you according to the information you supplied in step 2 of the wizard. You need to make selections for the following regions:
• Storage type: You can use either Automatic Storage Management (ASM) or file system–based storage. If you choose ASM, you must specify the disk group (note that this field is mandatory, as you would expect, even though the user interface doesn’t flag it with an asterisk; if you attempt to use ASM without specifying a disk group, you’ll receive an error when you click the Next button). If you choose File System, you need to either specify a location or accept the default of $ORACLE_BASE/oradata.
• Fast Recovery: The Fast Recovery region is not mandatory, but if you want your self-service users to schedule backups and be able to restore their databases to a previous point in time, you should enable the fast recovery area, specify a location (either ASM or file system based), specify a size for the fast recovery area, and select the Enable Archiving checkbox.
• Listener Port: This is again a mandatory field. You can click the magnifying glass, and the listener port number will be populated based on the host chosen in step 2 of the wizard.
• Administrator Credentials: In this mandatory field, you must either use the same password for the SYS, SYSTEM, and DBSNMP users that will be created in the new database or specify different passwords for each user.
• Non Administrator Credentials: If you want to restrict access to the other schemas that are normally in an Oracle database, you can select those here and specify a password for them. These schemas will then be locked down so they can’t be accessed by self-service users.
• Master Account Privileges: Here you can specify a name for the master account and either add or remove privileges that will be granted to this account.
Once you have provided information for all of these areas, as shown in Figure 8.18, you can click the Next button to move to the next step of the wizard.
Figure 8.18. Step 3 of the Template Creation wizard

5. On the Initialization Parameters screen (shown in Figure 8.19), you can edit any of a range of initialization parameters for the database. Note that not all parameters can be changed here—just the subset that Oracle Corporation believes you may want to change.
Figure 8.19. Step 4 of the Template Creation wizard

6. On the fifth screen of the Template Creation wizard, you can specify the following:
• Custom Scripts for Create: These are scripts that are stored as directives in the Software Library. You can execute scripts before and after creating the service instance and can also specify a SQL script to run after the service instance creation (obviously, executing a SQL script before service instance creation doesn’t make sense). You can specify which user the SQL script will run as, too. These scripts allow you to do a lot of customization work, even to perform your own data masking routines as SQL scripts if you want.
• Custom Scripts for Delete: Just as you can specify the scripts to run before and after service instance creation, you can specify scripts to run before and after deleting a service instance.
• Target Properties: In the Target Properties region, you can specify global target properties, such as lifecycle status, department, and so on. If you do specify a global target property, you can then make it mandatory by clicking the Required checkbox. You can also lock the value you have entered for the property by clicking the padlock icon. Locking will prevent self-service users from modifying the value of the property. In the example shown in Figure 8.20, the LifeCycle Status property is set to Test, but it is left as both optional and changeable by the self-service user.
Figure 8.20. Step 5 of the Template Creation wizard

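The post-creation SQL script hook described above is where simple masking logic can live. As a purely hypothetical sketch (the table and column names are invented for illustration, not taken from the example system):

```sql
-- Hypothetical masking script stored as a Software Library directive
-- and run as the post-creation SQL step. Names are illustrative.
UPDATE customers
   SET credit_card_no = 'XXXX-XXXX-XXXX-' || SUBSTR(credit_card_no, -4);
COMMIT;
```

For anything beyond trivial masking, the dedicated Data Masking and Subsetting pack is the more robust route, but a script like this shows how much customization the hook allows.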
7. On the next screen, Roles, you need to add the roles that will be able to use this template. Enterprise Manager users assigned this role (or roles) will then have access to the template, and it will appear in the Self-Service Catalog for those users. To add a role, you simply click the Add button and select the relevant roles. In the example shown in Figure 8.21, the Developer role is added.
Figure 8.21. Step 6 of the Template Creation wizard

8. The final step of the Template Creation wizard allows you to review all the inputs you have made thus far. If you want to make any changes, you can use the Back button to go back to the step where you need to make corrections. Otherwise, you can simply click Create, as shown in Figure 8.22, to create the service template.
Figure 8.22. Step 7 of the Template Creation wizard

9. Once you have created the service template, you will see a message saying the template was created successfully, as shown in Figure 8.23.
Figure 8.23. Confirmation that the service template was created

RMAN DUPLICATE
In the previous section, we covered creating a full database clone using an RMAN backup, either a new backup done as part of the cloning procedure or an existing RMAN backup that was done for another purpose (usually your normal backup routines). There is another way to create a full database clone that does not require the use of a backup. This feature, also known as live cloning, uses an RMAN DUPLICATE command to duplicate your live database. Just as when using an RMAN backup to create a full database clone, there are a number of steps to creating a live clone:
1. Enabling the DBaaS cloud involves some common setup tasks required for setting up a private database cloud. These tasks are covered in Chapter 5, so if you need to, refresh your memory by rereading that chapter. In summary, the tasks are
a. Defining roles and assigning users.
b. Creating a PaaS infrastructure zone.
c. Creating a database pool.
d. Configuring request settings.
e. Setting quotas.
2. Creating a service template using Live Clone.
Let’s look at the details of this last step.
1. Go to the Database Cloud page by following the path Setup → Cloud → Database. From there, click Service Templates, as shown in Figure 8.24.
Figure 8.24. Accessing the Service Templates page

2. Once you are on the Service Templates page, click the Create button, as shown in Figure 8.25.
Figure 8.25. Starting the Create Template wizard

3. On this next screen, you choose to use live cloning. As you can see on the right side of Figure 8.26, instead of using a profile that was created earlier, I’ve selected Using existing Database and specified the name of the database I’m cloning from.
Figure 8.26. Step 1 of the Create Template wizard

The remaining screens are the same as they were for creating a service template using an RMAN backup, so they are not reproduced here—just look back to the section “Creating a Service Template from a Database Profile.”
There are two points that you need to keep in mind when using RMAN DUPLICATE to take a live clone:
• Because (generally) we are cloning from the production environment, the clone command will generate load on that environment. This differs from using an RMAN backup, where the backup files can be placed on another machine, moving the load to that machine.
• Because we are doing this task directly from a service template, and thus bypassing the creation of a data profile, there is no opportunity to have this cloning performed as a scheduled operation unless the self-service user changes the default start time. If immediate cloning is not desirable from a load perspective, you will need to educate the self-service user to change the start date and time.
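Under the covers, live cloning is driven by RMAN active database duplication. As a rough sketch of the equivalent manual operation (the connect strings, database names, and file paths here are illustrative, and the auxiliary instance must already be started NOMOUNT):

```
$ rman TARGET sys@prod AUXILIARY sys@clone

RMAN> DUPLICATE TARGET DATABASE TO clone
        FROM ACTIVE DATABASE
        SPFILE
          SET DB_FILE_NAME_CONVERT  '/u01/oradata/prod/','/u01/oradata/clone/'
          SET LOG_FILE_NAME_CONVERT '/u01/oradata/prod/','/u01/oradata/clone/';
```

Because FROM ACTIVE DATABASE reads the live datafiles over the network, the source database carries the I/O load of the copy, which is exactly the first caveat above.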
Snap Clones
In simple terms, Snap Clone is a storage-agnostic self-service approach to rapidly creating space-efficient clones of large databases (and by and large, we’re talking terabytes or more). Now that’s probably more buzzwords in one sentence than anyone’s brain can deal with without exploding, so let’s look at some of those terms more closely:
• Storage agnostic: Snap Clone supports all storage vendors, both network attached storage (NAS) and storage area network (SAN).
• Self-service: In the XaaS world—where X can be any of infrastructure (I), middleware (MW), platform (P), or database (DB)—one of the key features is empowering the end user to do the work rather than waiting on some techie to find time in her or his otherwise busy schedule. So it’s the end user who makes the ad hoc clones here, not the storage admin.
• Rapid: People simply don’t have the time anymore to wait weeks for provisioning to happen (for that matter, they probably never did, but that’s another discussion), so you have to support the functionality to clone databases in minutes rather than the days or weeks it used to take.
• Space efficient: When you’re working with terabyte or larger databases, you may not have the storage to create full-sized clones, so you have to significantly reduce the storage footprint to start with.
The Challenges Snap Clone Addresses
Snap Clone can be used to address a number of major challenges:
• Lack of automation: Manual tasks such as provisioning and cloning new databases (e.g., for test or development systems) are one area that many DBAs complain is too time consuming. The process can take days to weeks, often because of the need to coordinate the involvement of different groups, as shown in Figure 8.27.
Figure 8.27. The standard cloning workflow

When an end user, whether a developer or a QA engineer, needs a database, he or she typically has to go through an approval process like that shown in the figure, which then translates into a series of tasks for the DBA, the sysadmin, and the storage admin. The sysadmin has to provide the compute capacity, and the storage admin has to provide the space on a filer. Finally, the DBA installs the bits, creates the database (optionally on Real Application Clusters), and delivers it to the user. Clearly, this is a cumbersome and time-consuming process that needs to be improved.
• Database-unfriendly solutions: Obviously, when a need goes looking for a solution, different people take different approaches to meeting it. There are a variety of point solutions and storage solutions out there, but the vast bulk of them are not database aware. They tend to clone storage volumes rather than databases and have no visibility into the database stack, which of course makes it hard to triage performance issues as a DBA. They also lack the ability to track configuration, compliance, and data security issues, and they have limited or no lifecycle capabilities.
• Storage issues and archaic processes: One of the main issues DBAs face is storage. Data volumes are ever increasing, particularly in these Big Data days, and the growth can often outpace your storage capacity. You can throw more disks at the problem, but it never seems to be enough, and you can end up with degraded performance if you take the route of sharing clones between users. There can also be different processes and different priorities between the storage team and the DBA team, and you may still have fixed refresh cycles, making it difficult to clone on an ad hoc basis.
In the latest release of Snap Clone, you can provide thin clones via either software solutions (Solaris ZFS and CloneDB) or hardware solutions (Sun ZFS Storage Appliance, NetApp, and EMC). Let’s touch on each of these in a bit more detail. The examples presented here use Enterprise Manager 12.1.0.4.3 with the latest plugins (specifically for Snap Clone, the important plugin is the Enterprise Manager for Storage Management plugin release 12.1.0.5.0). If you are on a slightly different version, you may see a slightly different look and feel to the user interface and available functionality.
Software Solutions
Let’s start by looking at the software-based solutions.
Snap Clone using Solaris ZFS
Snap Clone using ZFS uses a single stock Solaris 11.1+ image, which can be either physical or virtual (note that it doesn’t use the Sun ZS3 appliance). It supports both NAS and SAN. If you are using SAN, you mount the logical unit numbers (LUNs) as raw disks and format them with the ZFS filesystem. It’s important to note here that this does not require any snapshot or cloning licenses from the storage vendor, as these features are available for free.
Additional features provided with this solution include compression, de-duplication, I/O caching, and so on. If you also need high availability in this configuration, that can be handled externally either via Solaris Clusters or by using the high-availability features of the underlying hypervisor.
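On the Solaris side, the space-efficient copies boil down to native ZFS snapshot and clone operations, which is also why no extra storage licenses are needed. A minimal sketch (the pool and dataset names are illustrative):

```shell
# Take a point-in-time, read-only snapshot of the dataset
# holding the test master's datafiles
zfs snapshot dbpool/oradata@testmaster

# Create a writable copy-on-write clone from that snapshot;
# it consumes space only as its blocks diverge from the snapshot
zfs clone dbpool/oradata@testmaster dbpool/clone01
```

Snap Clone orchestrates these operations for you; the commands are shown only to make the copy-on-write mechanics concrete.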
Figure 8.28 shows what the configuration looks like diagrammatically.
Figure 8.28. Snap Clone using ZFS

CloneDB Using dNFS
With CloneDB using Direct NFS (dNFS), you get the capability to create thin copies of a database from RMAN image copies. This approach uses the NFS v3 client that has been embedded in the database kernel since 11.2.0.3. Currently, this is supported for single-instance databases, but only on file systems (i.e., ASM is not yet supported).
The advantages of this approach include the following:
• It’s easy to set up.
• No special storage software is needed.
• It works on all platforms.
• It’s both time efficient (instantaneous cloning) and space efficient (you can create multiple clones based on one backup).
• It uses dNFS to improve the performance, high availability, and scalability of NFS storage.
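Mechanically, a CloneDB instance is a database started with CLONEDB=TRUE in its init.ora whose datafiles are redirected to sparse copy-on-write files over dNFS via the DBMS_DNFS package. A hedged sketch of the key step (the paths are illustrative, and in practice the Oracle-supplied clonedb.pl script generates this plumbing for you):

```sql
-- Run in the clone instance (mounted, started with clonedb=TRUE),
-- once per datafile: point the RMAN image copy at a sparse
-- copy-on-write file on the NFS share, then open the clone.
EXEC dbms_dnfs.clonedb_renamefile('/backup/users01.dbf', '/u02/copy-on-write/users01.dbf');
```

The sparse files on the NFS share hold only changed blocks, which is what makes multiple clones from a single image copy so space efficient.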
Hardware Solution
As mentioned previously, the hardware solution for Snap Clone is available on a range of hardware, including Sun ZFS Storage Appliance, NetApp, and EMC. The EMC configuration is the most interesting one, as Snap Clone on EMC includes support for databases on ASM, so let’s look at that example in more detail.
Using Snap Clone on ASM and EMC storage provides the ability to create “live” thin clones of databases that are on ASM. A live clone is not snapshot based; rather, it is a clone taken directly from the live database, and it can reside within the same cluster or in a different one. Both single-instance and RAC databases are supported; the supported versions are 10.2.0.5 and higher of the database and 11.2 and higher of the grid infrastructure code. This functionality works on both EMC VMAX (with TimeFinder VP Snap) and VNX storage appliances.
Figure 8.29 shows what this configuration looks like diagrammatically.
Figure 8.29. Snap Clone using ASM on EMC storage

Snap Clone Setup
Now that we’ve looked at the different setups architecturally, let’s walk through the setup for Snap Clone. In this example, we use CloneDB because it requires the most straightforward setup. Refer to the Cloud Administration Guide (found under Private Cloud Setup and Administration in the EM12c documentation) for more details on the other setups.
The steps to set up Snap Clone are as follows:
1. Update the relevant plugins (or install them if you don’t already have them). You will need the following plugins (the versions mentioned here are for the 12.1.0.4.3 environment used in this example):
• Enterprise Manager for Cloud version 12.1.0.9.0
• Enterprise Manager for Oracle Cloud Framework version 12.1.0.2.0
• Enterprise Manager for Storage Management version 12.1.0.5.0
• Enterprise Manager for Oracle Database version 12.1.0.7.0
2. Set up the relevant roles, users, PaaS infrastructure zones, and database pools, as described in Chapter 5.
3. Create a database profile and service template so that self-service users can create thin clones as they require.
Let’s look at the details of step 3.
Creating a Database Profile
Creating a database profile to use Snap Clone is a little bit different from creating the profiles we looked at earlier.
1. Start from the same place (Setup → Cloud → Database → Data Sources → Data Profiles, and then click Create). In this case, we are going to create the profile using an RMAN Database Image, as shown in Figure 8.30. Note the message that also appears on this screen (at the bottom) that says Snap Clone is not enabled. This rather spurious message occurs because we don’t have any relevant hardware installed that we can use to create Snap Clone. However, it can be safely ignored because we are using CloneDB in this example.
Figure 8.30. Creating a database profile for Snap Clone, step 1

2. In this example, the database from which the profile is created is running in ARCHIVELOG mode. That means that on the next screen, shown in Figure 8.31, the backup can be taken online. You must also specify a directory in which the backup will be placed.
Figure 8.31. Creating a database profile for Snap Clone, step 2

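As an aside, you can verify the source database’s log mode from SQL*Plus before creating the profile; switching to ARCHIVELOG mode, if it is needed, requires a brief outage:

```sql
SELECT log_mode FROM v$database;

-- If this reports NOARCHIVELOG, enable archiving from the MOUNT state:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```

If the source database is in NOARCHIVELOG mode, the wizard will only offer an offline (cold) backup at this step.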
3. Steps 3 and 4 of the wizard are the same as in previous database profile creations in this chapter, so for brevity, they are omitted here. Now we can look at using that profile in the service template creation. Again, we start that from Setup → Cloud → Database → Service Templates, and then click the Create button. The main difference here is that once we select the database profile we just created, there will now be a Create CLONEDB option, as you can see in Figure 8.32.
Figure 8.32. Creating a service template for Snap Clone, step 1

4. The important information we need to provide to use Snap Clone with CloneDB is on the next screen. We need to tell the wizard a location that is shared using NFS. In this example, the location /u02/copy-on-write is set up as an NFS share, as shown in Figure 8.33.
Figure 8.33. Creating a service template for Snap Clone, step 2

5. Again, the remaining steps of the wizard are the same as in the previous service template examples, so they are omitted here. At the end, however, you should see a message that the service template has been created successfully, as shown in Figure 8.34. Once that is complete, the self-service user can select this template from the Self Service Catalog, just as shown in previous examples in earlier chapters.
Figure 8.34. Creating a service template for Snap Clone, step 3

Summary
In this chapter, we looked at the two main types of clones built in a cloud environment using Enterprise Manager 12c—full clones and snap (or thin) clones. A full clone involves taking a complete or full copy of an existing database, which can be done in one of two ways: by taking an RMAN backup or issuing an RMAN DUPLICATE command. A snap clone takes advantage of copy-on-write technology, available through both software and hardware solutions, to rapidly create space-efficient clones of large databases. These clones contain only changed blocks; unchanged blocks are read from the test master database the clone is built from. As a result, many snap clones can be taken from a single test master without using anywhere near the amount of storage the test master takes up.


From here you can see a number of regions—General, Request Status, Incidents, Top Resource Providers, Top Requested Services, and Top Charge Trends (data is displayed in the Charge Trends region only if you have enabled chargeback), so you get a good overview of the health of your cloud infrastructure. If you want to drill down to look at more details of what service instances you have available, you can click on the number to the right of Service Instances. This link will show you exactly what service instances are defined in your cloud environment, their status, the service type, the resource provider, the service template, and even who owns it (see Figure 7.2).
Figure 7.2. Service instances

From here you can also click on the name of the service instance (for example, crm00001) to drill into the home page for that particular target, or you can simply select the row containing that target to do the following tasks:
• Start or stop the instance. Which one of these options is available depends on the status the instance is currently in. For example, if the service instance is already started, the Start option will remain grayed out.
• Examine the history of the service instance by clicking the History icon. This feature shows all the operations that have taken place against this particular service instance since it was created.
• Delete the service instance if it is no longer required. Obviously, caution should be taken before deciding to perform this operation.
You may find that the two most important links are the ones that show failed requests (the red slice of the pie chart in Figure 7.1) and the heat map (which you get to by clicking View Map on the top right of the screen shown in Figure 7.1). The failed request statuses are ones you’ll want to investigate further to see what went wrong. Sometimes, this task is as simple as an attempt to start a database that’s already started, but at other times, you’ll find more complex issues that need further investigation and follow-up. You should also check here for requests that failed due to insufficient quota or resources and investigate whether more quota or resources need to be allocated.
The heat map is a way to graphically view which resource providers (such as platform as a service [PaaS] infrastructure zones and database pools) are using the most resources. Even in the relatively simple environment we’re using as a demonstration, the heat map (see Figure 7.3) shows that the host allocation for the Salt Lake City PaaS infrastructure zone, at 66 percent utilization, is starting to reach a level that may require adding more PaaS infrastructure zones. In more complex environments, you may see resource providers that show up as red. If you select that resource provider, you’ll see what the problem is in the Metric Details region shown below the heat map, and you can drill into the Value link to discover more details.
Figure 7.3. Heat map from a simple cloud environment

There is also a menu of items that you can select from using the Oracle Cloud menu on the top left of the Cloud Home page. Several items in this menu are useful from an administration and management perspective.
Members → Topology
This menu item shows you a graphical representation of how the different items in your cloud environment are related. It is particularly useful when the environment becomes more complicated, as it allows you to see at a glance how items are related to each other. By clicking on one of the items, you can see specific details listed under the Metric History region on the bottom right (see Figure 7.4 for an example).
Figure 7.4. Configuration Topology

Resource Providers, Service Templates, Service Instances
Each of these menu items takes you to a list of the different entities. From here, it is easy to select and edit each particular entity. You can also get to each of these pages by clicking the links next to each one in the General region of the Cloud Home page. From each page, you are also able to create new items or delete existing ones. Figure 7.5 shows an example from the Resource Providers page.
Figure 7.5. The Resource Providers page

Requests
If you click on Show All under Requests, you can see a complete list of the requests that have been made in your cloud environment, as shown in Figure 7.6. From here you can do the following:
• Select an individual request and look at the details for it.
• Reschedule the request if it hasn’t already been executed.
• Delete the request.
Figure 7.6. The Requests page

If you click on Requests → Dashboard, you’ll be taken to a more graphical representation of request trends, by default over the past seven days, as shown in Figure 7.7. Note that you can click on the Settings spanner on the top left to change this time period to one more suitable to you (Figure 7.7, for example, shows the last 31 days’ worth of requests). This page is particularly useful for showing request trends to management.
Figure 7.7. The Request Dashboard

The Cloud Adviser
The information we’ve covered so far in this chapter is really the bread and butter, the day-to-day life of a cloud administrator. However, one of the main advantages of moving to a cloud environment is that it frees up a lot of the DBA’s time. No longer do you need to focus on the mundane tasks of creating databases, looking after backups, and so on. The time that you used to spend working on these tasks is now available for you to take on a much more important role—the cloud adviser.
What is the cloud adviser’s role? Well, it’s a role that many of us have been performing for years—the senior DBA role—expanded into the cloud environment. Since most of our business users do not have the technical background to be able to make informed decisions on technical issues, they still come to the DBA to provide that advice. Those technical issues fall into a variety of broad topics, which the rest of this chapter examines.
Security
Security is of paramount importance in the cloud world, because if you get the security model wrong in the cloud environment, you may not have an environment to administer and manage for very long! We’ve all no doubt heard of the security breaches that seem to be increasingly common in the cloud environment. Security breaches might involve stolen usernames and passwords from cloud providers and, more seriously, stolen data. Oracle database technology is used the world over to secure large volumes of confidential data—financial data, employee data, customer information, and so much more. Securing that data, and ensuring it remains secure, is an essential part of the DBA’s role. Following are some of the tools that can be used to assist with this task:
• Transparent Data Encryption (TDE): TDE is a way to stop would-be attackers from bypassing the database and reading sensitive information directly from the files that make up the Oracle database. This protection is extended to any backups that have been made of the database as well. TDE allows encryption of individual columns right up to entire tablespaces, so you can choose how much of your data to secure.
• Data Masking and Subsetting: As you move to the cloud environment, one approach that is often taken for security reasons is to migrate development and test environments to the cloud first, as these environments are often seen as lower risk. However, at the same time, you want to ensure that your developers and quality assurance (QA) engineers have environments that are as close to production as possible so they have a realistic environment to test performance on. Those two requirements may at first glance seem antithetical, but Enterprise Manager contains Data Masking and Subsetting tools that can be used to address the issue. Oracle Data Masking and Subsetting allows you to sanitize copies of production data for development and QA environments so that your production data is not compromised. It also allows you to use subsets of the data in cases where you simply do not have enough storage to take a full copy of your production data into these environments.
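To give a concrete flavor of the first of these tools, here is a hedged sketch of column-level and tablespace-level TDE (it assumes a TDE wallet/keystore has already been configured and opened; the object names and paths are illustrative):

```sql
-- Encrypt a single sensitive column in place
ALTER TABLE employees MODIFY (salary ENCRYPT);

-- Or create an encrypted tablespace so that everything stored in it
-- (and any backups of it) is protected at the file level
CREATE TABLESPACE secure_ts
  DATAFILE '/u01/oradata/secure01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
```

Column encryption suits a handful of known-sensitive fields, while an encrypted tablespace is the simpler choice when whole applications need protecting.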
Of course, you need to secure your data against more than just external attacks. Most studies of security breaches indicate that these breaches are more often the work of insiders than of external hackers (regardless of the fact that the most reported-on breaches are external). So from a security perspective, you must have tools and technologies that can be used to both track and secure against internal attacks as well. Some of the Oracle products that help to address this space include Transparent Data Encryption, Oracle Audit Vault, and Oracle Database Vault.
