Wednesday 28 February 2018

Oracle cloud part 5

Deployment Issues
Now that we understand the architecture and components that are used in the different consolidation models and how schema as a service is both set up and used, we need to examine some standard deployment issues that must be addressed. These issues include security, operational, resource, and fault isolation, as well as scalability and high availability. See Chapter 1, “Database as a Service Concepts—360 Degrees,” for definitions of these terms. Here we look at how each of these issues affects schema as a service.
Security Isolation when Using Schema as a Service
In the schema as a service environment, the effect of granting a privilege or role is contained to the schema where the grant was made, thus ensuring greater security. One of the main issues (if not the main issue) in a schema as a service environment is namespace collision. Namespace collision may be mistakenly resolved by creating public synonyms, which of course is not recommended from a security perspective. The result is that while schema as a service does not by itself reduce security, the decisions of the administrator can leave it less secure than PDBaaS.
For most configurations, Oracle’s out-of-the-box database security profiles are sufficient to limit access to data in the schema as a service environment. However, it is also possible to provide deeper security using functionality such as encryption, Database Vault, and Audit Vault.
Operational Isolation when Using Schema as a Service
From a backup and recovery perspective, schema as a service tablespaces can be both backed up and recovered individually, even recovered to different points in time. This capability increases the operational isolation significantly.
As more schemas are consolidated into a single database, operations that affect an ORACLE_HOME will affect more schemas. However, this drawback is offset to a certain extent by the ease with which transportable tablespaces can be used to move the schemas to a different database. Having said that, moving a schema in this way is not quite as straightforward as moving a pluggable database, so schema as a service doesn’t rank as highly as PDBaaS in that regard. In addition, schema-based consolidation lacks isolation both in database lifecycle management and in the independence of patching and upgrades.
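As a rough illustration of the transportable tablespace route, the sketch below moves a dedicated tablespace to another database using Data Pump. HR_TS, the directory object, and the file paths are all assumed names, not anything prescribed by the product.

    -- On the source database: confirm the tablespace is self-contained,
    -- then make it read-only before the export
    EXEC DBMS_TTS.TRANSPORT_SET_CHECK('HR_TS', TRUE);
    SELECT * FROM transport_set_violations;   -- should return no rows
    ALTER TABLESPACE hr_ts READ ONLY;

    $ expdp system DIRECTORY=dp_dir TRANSPORT_TABLESPACES=hr_ts DUMPFILE=hr_ts.dmp

    -- Copy the datafile(s) and the dump file to the target host, then import
    $ impdp system DIRECTORY=dp_dir DUMPFILE=hr_ts.dmp TRANSPORT_DATAFILES='/u02/oradata/test/hr_ts01.dbf'

    -- On the target database: return the tablespace to read/write
    ALTER TABLESPACE hr_ts READ WRITE;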
However, the area that makes schema as a service much more difficult from an operational perspective is namespace collision. Namespace collision occurs because a single schema cannot contain two database objects with the same name, which means there can be only one EMPLOYEES table owned by the HR user at any one point in time.
For schema as a service, this is not so much a concern at the database layer as it is at the application layer. The implementation of schema as a service in Enterprise Manager requires you to provide a schema prefix when you issue a Create Schema as a service instance request, and it creates the new schema with that prefix.
For example, if you were using the HR schema from the prod database to create a copy of the HR schema in the test database, and you provided the schema prefix MYHR, the test database would have a schema called MYHR_HR owning a copy of the prod HR database objects.
From the database perspective, then, we can indeed create multiple copies of a schema in a single database, and each schema will be named using the naming convention SCHEMA_PREFIX_ORIGINAL_SCHEMA_NAME, removing the issue of namespace collision.
However, from an application perspective, there is clearly still an issue, as any application based on the original HR schema will expect the objects to be owned by HR. There are at least three ways to address this issue (the first and third are sketched after this list):
• Private synonyms: For each user in the database who will be accessing the HR application, create a set of private synonyms for each object used in the HR application. This task would need to be performed for every object used in the HR application and for every user who would be accessing the application. Obviously, it can be done in a scripted manner but still involves manual intervention by the database administrator.
• Public synonyms: One way to avoid creating private synonyms for every user, as just shown, is to use public synonyms instead. However, by their very nature, only one public synonym with a given name can exist in the database, so this approach removes the ability to consolidate multiple copies of a schema into a single database and therefore is not really a workable resolution.
• Logon trigger: Create a logon trigger for each user who will use the application to include a statement of the form ALTER SESSION SET CURRENT_SCHEMA=MYHR_HR. Again, this would require manual intervention after the schema has been created.
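A minimal sketch of the private synonym and logon trigger approaches follows. MYHR_HR and APP_USER are illustrative names, and both statements would need to be run by a DBA (the synonym loop requires the CREATE ANY SYNONYM privilege).

    -- Option 1: generate private synonyms for one application user
    BEGIN
      FOR obj IN (SELECT object_name
                  FROM   all_objects
                  WHERE  owner = 'MYHR_HR'
                  AND    object_type IN ('TABLE', 'VIEW', 'SEQUENCE')) LOOP
        EXECUTE IMMEDIATE 'CREATE SYNONYM app_user.' || obj.object_name ||
                          ' FOR myhr_hr.' || obj.object_name;
      END LOOP;
    END;
    /

    -- Option 3: repoint name resolution at logon instead
    CREATE OR REPLACE TRIGGER app_user_set_schema
      AFTER LOGON ON app_user.SCHEMA
    BEGIN
      EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = MYHR_HR';
    END;
    /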
Of course, we could also modify the application code to change every HR schema reference to MYHR_HR, but that is rarely easily achieved. The end result is that, at the application layer, namespace collisions cause a lot of difficulty in the schema as a service paradigm.
However, prior to the advent of PDBaaS, schema as a service was the consolidation model that allowed the greatest degree of consolidation, and a number of customers have successfully used one of the preceding namespace collision resolutions in production environments.
Resource Isolation when Using Schema as a Service
Schemas created by schema as a service are just the same as any other database schema. As a result, it is quite simple to use Oracle Resource Manager to create resource consumer groups, map sessions to those groups, and then assign resources to those groups based on resource plan directives.
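As a hedged example of that approach, the PL/SQL below caps sessions connecting as the (assumed) MYHR_HR user at 25 percent CPU. All group, plan, and user names, and the percentages, are illustrative.

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
        consumer_group => 'MYHR_GROUP',
        comment        => 'Sessions for the MYHR_HR schema service');
      -- Map sessions to the group by the Oracle user they connect as
      DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
        attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
        value          => 'MYHR_HR',
        consumer_group => 'MYHR_GROUP');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN(
        plan    => 'SCHEMA_SVC_PLAN',
        comment => 'Per-schema resource limits');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'SCHEMA_SVC_PLAN',
        group_or_subplan => 'MYHR_GROUP',
        comment          => 'Cap MYHR sessions at 25% CPU',
        mgmt_p1          => 25);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'SCHEMA_SVC_PLAN',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'Everything else',
        mgmt_p1          => 75);
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
      -- Allow the user to be switched into the new group
      DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP(
        grantee_name   => 'MYHR_HR',
        consumer_group => 'MYHR_GROUP',
        grant_option   => FALSE);
    END;
    /
    -- Activate the plan:
    -- ALTER SYSTEM SET resource_manager_plan = 'SCHEMA_SVC_PLAN';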
However, because you do not know in advance which database the schemas will be created in when using schema as a service, these sorts of methods are all interventions made by the DBA after schema creation. Following are two methods that do not require this manual intervention after the service has been created:
1. Create and select workloads sensibly. If more than one workload is created, self-service application (SSA) users can specify the workload size that best meets their requirements.
2. Properly define placement constraints at the database pool level. When a database pool is created, the self-service administrator can set maximum ceilings for resource utilization as placement constraints. These constraints can define
a. The maximum number of database services for each database.
b. The maximum CPU allocation for the service request.
c. The maximum memory allocation for the service request.
The service instance will then be provisioned on the member that best satisfies these placement constraints.
Fault Isolation when Using Schema as a Service
Fault isolation in a schema as a service environment is normally provided at the schema level, so application faults in one schema will not cause other applications to fail.
However, it is still possible for login storms or an improperly configured mid-tier to impact other applications.
In addition, the more schemas that are consolidated into a single database, the more impact a fault at the database level will have. Of course, there are some faults (such as dropping a table incorrectly) that can be resolved at the schema level, thus isolating the fault from other schemas in the same database.
Once a fault has been isolated and resolved, there are two parts of the database architecture that allow fast recoverability and thus smaller mean time to repair (MTTR) in any database, including one used with schema as a service.
1. Flashback functionality, including both Flashback Drop and Flashback Table:
a. Flashback Drop allows you to reverse the effects of dropping a table, including any dependent objects such as triggers and indexes.
b. Flashback Table allows you to undo the effects of accidentally removing (or indeed adding) some or all of the contents of a table, without affecting other database objects. This feature allows you to recover from logical data corruptions (such as adding or deleting rows from the table) much more quickly than you otherwise could (both features are sketched after this list).
2. Point-in-time recoverability can be performed at the individual tablespace level, so if you have multiple schemas affected by an issue, you can issue parallel point-in-time recovery commands to improve MTTR.
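Hedged sketches of both flashback repairs follow; MYHR_HR.EMPLOYEES is an illustrative name, and the recycle bin and row movement must be enabled for the respective commands to work.

    -- Flashback Drop: restore a dropped table from the recycle bin
    FLASHBACK TABLE myhr_hr.employees TO BEFORE DROP;

    -- Flashback Table: rewind a table's contents to a recent point in time
    ALTER TABLE myhr_hr.employees ENABLE ROW MOVEMENT;
    FLASHBACK TABLE myhr_hr.employees
      TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;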
Scalability when Using Schema as a Service
Scalability is a fundamental characteristic of DBaaS architectures by virtue of their support for self-service, elasticity, and multitenancy. Oracle’s database technologies provide a number of different ways to support scalability when delivering database services, all of which are applicable in schema as a service. These include
• Resource management/quality of service.
• Addition of extra storage through such functionality as multiple Exadata Database Machine frames.
• Horizontal scaling via RAC when service demands increase beyond the capabilities of a single machine (a brief sketch follows this list).
• Scalable management resources where Enterprise Manager can add management nodes as the number of targets under management grows.
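As a hedged illustration of the RAC point, the srvctl commands below place an assumed MYHR_SVC service on two instances of a cluster database named proddb; all names are placeholders.

    $ srvctl add service -db proddb -service myhr_svc -preferred "prod1,prod2"
    $ srvctl start service -db proddb -service myhr_svc
    $ srvctl status service -db proddb -service myhr_svc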
High Availability when Using Schema as a Service
As we discussed in Chapter 1, not all consumers require the same level of availability in a cloud environment. Oracle provides different levels of availability to accommodate the unique needs of consumers in the cloud environment. Table 4.1 (reproduced from Chapter 1 for convenience) shows the availability levels offered through Oracle’s service plans.
Table 4.1. Availability Levels

Summary
In this chapter, we looked at the architecture and components that make up a consolidated environment, as well as the deployment issues that need to be faced when undertaking a consolidation exercise. We also walked through the details of setting up and using schema as a service.
From the application layer, namespace collisions can cause substantial difficulty in the schema as a service paradigm. However, prior to the advent of PDBaaS, schema as a service was the consolidation model that allowed the greatest consolidation to occur, and a number of customers have successfully used one of the namespace collision resolutions outlined in this chapter in production environments. In the next chapter, we cover using PDBaaS, which addresses the namespace collision issue very successfully.



Chapter 6. Metering and Chargeback in Enterprise Manager 12c
The previous few chapters covered setting up Schema as a Service, using the self-service portal with schema as a service, setting up Pluggable Database as a Service (PDBaaS), and using the self-service portal with PDBaaS, all of these using Oracle Enterprise Manager Cloud Control 12c. Now let’s move on to an area where you start to get more back from all of this work—metering and chargeback.
Metering is something that Enterprise Manager has done since its very first release. It’s a measurement of some form of resource—obviously, in the case of Enterprise Manager, it’s a measurement of how much of a computing resource, such as CPU, I/O, memory, or storage, has been used by an object. When I think way back to the very first release of Enterprise Manager I ever saw—the 0.76 release, whenever that was!—the thing that comes to mind most is that it had this remarkably pretty tablespace map that showed you diagrammatically just where every block in an object was in a particular tablespace. Remarkably pretty, as I said—but virtually useless, because all you could do was look at the pretty colors!
Clearly, metering has come a long, long way since that time, and if you have had Enterprise Manager up and running for some time, you now have at your fingertips metrics on so many different things that you may be lost trying to work out what you can do with it all. Well, that’s where chargeback comes into play. In simple terms, chargeback is (as the name implies) an accounting tool. In Enterprise Manager terms, it has three main functions:
• It provides a way of aggregating the enormous amount of metrics data that Enterprise Manager collects.
• It provides reports to the consumers of those resources showing how much of each metered resource they have used.
• If you have set it up to do so, it provides a way for the IT department to charge those consumers for the resources they have used.
Let’s expand on that last point a little further. Within the Chargeback application, the cloud administrator can set specific charges for specific resources. As an example, you might decide to charge $1 a month per gigabyte of memory used for a database. Those charges can be transferred to some form of billing application, such as Oracle’s Self-Service E-Billing application, and end up being charged as a real cost to the end user. In practice, however, only a small number of customers actually use it to charge a real cost to the end user. There are two reasons for that:
• First, most end users are still not in the mindset of paying for computing power in the same way as they pay for other utilities—that is, paying for the amount of computing power that is actually consumed, as we do with our gas, electricity, and phone bills.
• Second, most people have difficulty deciding just how much to charge for a “unit” (whatever that might be) of computing power. In fact, arguments over just what to charge for a unit of computing power have been known to last much longer than any meetings held to decide to actually implement chargeback!
The end result is that customers often choose to implement showback rather than chargeback. Showback is in many ways similar to chargeback: it provides reports to end users showing how much computing resource they have used and how much it would have cost them had the IT department decided to actually charge for it. In some ways, this information is just as beneficial to the IT department as to the end users, because it gives the department a much better grasp of what it needs for budgeting purposes, and it avoids the endless arguments about whether end users are being charged too much.
Terminology
Okay, let’s talk about some of the new terminology you need to understand before we implement chargeback (from now on, we use the term chargeback to cover both chargeback and showback for simplicity’s sake, and because the feature is actually called chargeback in the Enterprise Manager Cloud Control product).
Chargeback Entities
The first concept you need to understand is that of a chargeback entity. In Enterprise Manager terms, a target typically uses some form of resource, and the chargeback application calculates the cost of that resource usage. In releases prior to Enterprise Manager 12.1.0.4, the chargeback application collected configuration information and metrics for a subset of Enterprise Manager targets. In the 12.1.0.4 release, you can add chargeback support for Enterprise Manager target types for which there is no current out-of-the-box chargeback support via the use of Enterprise Manager command-line interface (EMCLI) verbs. These chargeback targets, both out-of-the-box and custom types, are collectively known as entities.
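As a sketch of that EMCLI route, the verbs below are the chargeback verbs documented around the 12.1.0.4 timeframe. Treat the exact verb and argument names as assumptions, and confirm them with emcli help in your release; the entity and plan names are placeholders.

    $ emcli login -username=sysman
    $ emcli list_chargeback_entity_types
    $ emcli add_chargeback_entity -entity_name=prodhost1 -entity_type=host
    $ emcli assign_charge_plan -entity_name=prodhost1 -entity_type=host -plan_name="HR Chargeback Plan"
    $ emcli assign_cost_center -entity_name=prodhost1 -entity_type=host -cost_center_name="SALES"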
You can get a complete list of all the items that we can set up chargeback for by clicking on Charge Plans, then Charge Item Library (see Figure 6.1).
Figure 6.1. Accessing the charge item library

You will then see a list of all the charge items, including a brief description of each item (see Figure 6.2).
Figure 6.2. Available charge items

Charge Plans
A charge plan is what Enterprise Manager uses to associate the resources being charged for and the rates at which they are charged. Two types of charge plans are available:
• Universal charge plan: The universal charge plan contains the rates for CPU, storage, and memory. Although it is called the “universal” charge plan, it isn’t really universal because it doesn’t apply to all entity types. For example, it is not applicable to J2EE applications.
• Extended charge plans: The universal charge plan is an obvious starting point, but there are many situations where entity-specific charges are required. Let’s say you have a lot of people who understand Linux, but there is a new environment being added to your data center that requires Windows knowledge. If you had to pay a contractor to come in to look after that environment because it was outside your knowledge zone, it would be fair to charge usage of the Windows environment at a higher rate. As another example, let’s say your standard environments did not use Real Application Clusters (RAC), and a new environment has come in that requires the high availability you can get from a RAC environment. RAC is, of course, a database option that you need to pay an additional license fee for, so that should be charged at a higher rate. An extended charge plan can be used to meet these sorts of requirements, as it provides greater flexibility to chargeback administrators. Extended charge plans allow you to do the following:
• Set up specific charges for specific entities
• Define rates based on configuration and usage
• Assign a flat rate regardless of configuration or usage
• Override the rates set for the universal plan
An out-of-the-box extended plan is provided that you can use as a basis for creating your own extended plans. This plan defines charges based on machine sizes for the Oracle Virtual Machine Guest entity.
Cost Centers
Obviously, when charges for resource usage are implemented, these charges must be assigned to something. In the chargeback application, the costs are assigned to a cost center. Cost centers are typically organized in a hierarchy and may correspond to different parts of an organization—for example, sales, development, human resources, and so forth—or they may correspond to different customers—for example, where you are a hosting company and host multiple customer environments. In either case, cost centers are defined as a hierarchy within the chargeback application. You can also import cost centers that have been implemented in your Lightweight Directory Access Protocol (LDAP) server if you want to use those.
Reports
The main benefit you get from using chargeback is the vast amount of information it puts at your fingertips. This information can be reported on by administrators in a variety of formats available via the Oracle BI Publisher tool, including pie charts and bar graphs, and you can drill down to charges based on a specific cost center, entity type, or resource. You can also make use of trending reports over time and can use this information to aid you in your IT budget planning. Outside the chargeback application itself, self-service users can view chargeback information related to the resources they have used within the self-service portal.
Setting Up Chargeback
Now you have an understanding of the capabilities of the chargeback application in the Enterprise Manager product suite. The next step, of course, is to set it up. Chargeback can be set up in Enterprise Manager 12c by any user with the EM_CBA_ADMIN role or, of course, as SYSMAN.
1. Once you are logged in as a user with the relevant privileges, follow the path Enterprise → Chargeback (see Figure 6.3).
Figure 6.3. Setting up chargeback, step 1

2. The default currency symbol displayed in the chargeback application is the dollar sign ($). Note that this is just a symbol and does not have any impact on chargeback calculations, but you can change it if you want by clicking on the Settings tab (see Figure 6.4).
Figure 6.4. Setting up chargeback, step 2

3. You can enter a new currency symbol here, but for this environment, leave it at the default and click Charge Plans (see Figure 6.5).
Figure 6.5. Setting up chargeback, step 3

4. The first thing to do is set rates for the universal charge plan, which covers the CPU usage, memory allocation, and storage allocation metrics. Click Set Rates (see Figure 6.6).
Figure 6.6. Setting up chargeback, step 4

5. Set the rates appropriately, and click Save (see Figure 6.7).
Figure 6.7. Setting up chargeback, step 5

6. The universal charge plan is useful, but in some situations, you may want to apply charges on other entities. That’s where an extended charge plan can be used. To create an extended charge plan, click Create, then click Plan (see Figure 6.8).
Figure 6.8. Setting up chargeback, step 6

7. Provide a meaningful name for the charge plan, then click Add to select an entity type for which you can set rates (see Figure 6.9).
Figure 6.9. Setting up chargeback, step 7

8. You can add multiple entity types to the charge plan at once, so choose Host and Oracle Pluggable Database, then click OK (see Figure 6.10).
Figure 6.10. Setting up chargeback, step 8

9. You can now set up specific configurations for each entity type by selecting the entity and then clicking Setup Configurations (see Figure 6.11).
Figure 6.11. Setting up chargeback, step 9

10. Click the Add button (see Figure 6.12).
Figure 6.12. Setting up chargeback, step 10

11. In this example, different rates are charged for different machine architectures. This is the sort of thing you would do if you wanted to charge more for maintaining machines that are outdated, or for which you have less skill and may need to bring in consultants. To do this, set the Condition Item to Machine Architecture, then click the Search button to search for the different values you can choose (see Figure 6.13).
Figure 6.13. Setting up chargeback, step 11

12. In this scenario, we want to set up three different configurations:
• Intel x86_64 as our default
• Intel i686 as more expensive because it’s outdated
• PA-RISC 64-bit as more expensive because we have less skill in that area
You can select each in turn from the list, and click OK, as in Figure 6.14.
Figure 6.14. Setting up chargeback, step 12

13. Click OK (see Figure 6.15).
Figure 6.15. Setting up chargeback, step 13

14. On the Setup Configurations pop-up, click OK again (see Figure 6.16).
Figure 6.16. Setting up chargeback, step 14

15. You can now specify multipliers for each architecture. In this example, the preferred architecture (Intel x86_64) is set at 1x, Intel i686 is set to 2x, PA-RISC 64-bit is set to 1.5x, and all other architectures are set to 3x. Next, you can click Oracle Pluggable Database to set up configurations for it (see Figure 6.17).
Figure 6.17. Setting up chargeback, step 15

16. Click Setup Configurations (see Figure 6.18).
Figure 6.18. Setting up chargeback, step 16

17. Click Add (see Figure 6.19).
Figure 6.19. Setting up chargeback, step 17

18. On the Add Configuration pop-up, select Version for the Condition Item (see Figure 6.20).
Figure 6.20. Setting up chargeback, step 18

19. In this scenario, clients who have not switched to the latest version of the database are charged more as a way of encouraging migration, so the Condition Operator and Condition Value are set to validate that. The option to charge more when using paid options such as partitioning and so on is also available. Click OK (see Figure 6.21).
Figure 6.21. Setting up chargeback, step 19

20. On the Setup Configurations pop-up, click OK (see Figure 6.22).
Figure 6.22. Setting up chargeback, step 20

21. Back on the Create Plan: HR Chargeback Plan page, you can increase the multiplier for configurations that aren’t on the latest version, and click Save (see Figure 6.23).
Figure 6.23. Setting up chargeback, step 21

22. Once the charge plans are configured satisfactorily, you can also define cost centers that the charges will be assigned to. Cost centers are normally configured in a business hierarchy, so let’s walk through the process of building that. Start by clicking the Cost Centers tab (see Figure 6.24).
Figure 6.24. Setting up chargeback, step 22

23. Cost centers can either be imported from an existing LDAP configuration (done via the Action menu) or defined in the chargeback application. In this example, they are added to the chargeback application. To do this, click Add (see Figure 6.25).
Figure 6.25. Setting up chargeback, step 23

24. On the New Cost Center pop-up, enter a cost center name and display name, then click OK (see Figure 6.26).
Figure 6.26. Setting up chargeback, step 24

25. You can do this as many times as you like. To add a cost center in a business hierarchy, first create a cost center at the top level (as shown previously), then create a cost center using the Member of radio button, and select the relevant top-level cost center, as shown here (see Figure 6.27).
Figure 6.27. Setting up chargeback, step 25

26. Once you have defined all the cost centers you need, the next step is to add the relevant entities. An entity is either a target in Enterprise Manager or a custom resource that has been added to Enterprise Manager. Custom resources can include target types for which there is no current out-of-the-box chargeback support (covered in more detail in the documentation). To add entities, click the Entities tab (see Figure 6.28).
Figure 6.28. Setting up chargeback, step 26

27. Click Add Entities (see Figure 6.29).
Figure 6.29. Setting up chargeback, step 27

28. This will start the Add Entities wizard. Click Add to select targets for chargeback (see Figure 6.30).
Figure 6.30. Setting up chargeback, step 28

29. Now it is time to add hosts. To search for hosts to add, deselect All, and select Host from the Target Type dropdown (see Figure 6.31).
Figure 6.31. Setting up chargeback, step 29

30. From the list of hosts, select one or more hosts to add as an entity, and click Select (see Figure 6.32).
Figure 6.32. Setting up chargeback, step 30

31. On the Make Assignments wizard step, select the entity name, and click Assign Plan (see Figure 6.33).
Figure 6.33. Setting up chargeback, step 31

32. Select HR Chargeback Plan, and click OK to add the chargeback plan to this host (see Figure 6.34).
Figure 6.34. Setting up chargeback, step 32

33. To add a cost center to this host, click Assign Cost Center (see Figure 6.35).
Figure 6.35. Setting up chargeback, step 33

34. Select the relevant cost center, and click OK (see Figure 6.36).
Figure 6.36. Setting up chargeback, step 34

35. Back on the Make Assignments step, click Next (see Figure 6.37).
Figure 6.37. Setting up chargeback, step 35

36. Review the settings, and if they are correct, click Submit (see Figure 6.38).
Figure 6.38. Setting up chargeback, step 36

37. You can walk through the same process to add one of the container databases and a platform as a service (PaaS) infrastructure zone as well. Once that has been done, you can schedule an on-demand data collection to start seeing the information for these entities in the chargeback application. To do that, select On-demand data collection from the Action dropdown menu (see Figure 6.39).
Figure 6.39. Setting up chargeback, step 37

38. Click Yes on the Confirmation pop-up (see Figure 6.40).
Figure 6.40. Setting up chargeback, step 38

39. In the Information message, click the link to view the job status (see Figure 6.41).
Figure 6.41. Setting up chargeback, step 39

40. Once the job status changes to 1 Succeeded, click Enterprise, then Chargeback (see Figure 6.42).
Figure 6.42. Setting up chargeback, step 40

41. You should see information being collected both in the Summary and Usage Trends regions (see Figure 6.43).
Figure 6.43. Setting up chargeback, step 41

It might seem like a very long-winded setup, but in reality, chargeback takes only a short time to configure. Of course, once it’s configured, you then have all the joys of getting money out of the end clients for their resource usage—good luck with that one, I can’t help you there!
Summary
The metering functionality that has been part of the Enterprise Manager product since its very early days allows us to both collate and report on the resource usage of particular targets within the Enterprise Manager environment. In combination with the chargeback application that was introduced with the 12c version of Enterprise Manager, you can now show business units the real cost of providing computing power. You can even charge the end users for the costs of their resource usage.



Chapter 7. Manage and Administer the Database Cloud in Enterprise Manager 12c
Now that we’ve covered in some detail how to set up and configure the various database cloud models in Enterprise Manager Cloud Control 12c, you may be thinking that there really is no need for a database administrator (DBA) in the cloud world. After all, once the pools, zones, profiles, and templates are set up, database creation (and even destruction) is now in the hands of the self-service user. Depending on whether you are using a public or private cloud model, many of the time-consuming tasks such as installation, setup, and configuration may now be in the hands of cloud providers rather than DBAs.
In fact, nothing could be further from the truth. DBAs are still very much in demand in cloud environments. Moving to a cloud environment may remove much of the tactical work that DBAs have had to manage for so many years, but that simply frees them up to move to a more strategic position where they can focus on areas that are of more value to the business.
Although the creation and destruction of databases may now lie in the hands of self-service users, it will be a long time before users take on some of the other roles of the DBA, such as performance tuning (if indeed they ever take it on!). Old habits die hard, and it has long been the habit of users to reach out immediately to the DBA when they run into a performance problem. “The database is slow!” has been the catch cry for many years, even though it is often not the fault of the database. With so many technology layers between the end user and the database, one of the main reasons that users will still turn to the DBA for performance problems is that they simply lack the knowledge and skills to identify where the problem lies, let alone to understand and address the problem.
It may be more difficult in the cloud environment for the DBA to identify what sorts of performance issues are occurring. The reason for this added difficulty will largely be that DBAs are more remote from the day-to-day running of these databases than they may have been previously. One of the main roles of the DBA in the non-cloud world is to ensure databases are up and running, available to end users, and performing well. To that end, many DBAs have developed their own scripts that they run as cron jobs in the *nix world (or batch jobs in the Windows world), or they use tools such as Enterprise Manager to perform similar sorts of functionality. Of course, DBAs traditionally have had some form of access to the databases they are looking after, either by directly logging onto the machines that hold the databases or by accessing the databases across the network. In the cloud world, self-service users can create databases (up to the quotas that they have been provided with, of course) without the DBA even knowing the databases exist, let alone where they are. Thankfully, there is a straightforward answer to this issue—the Cloud Home page.
The Cloud Home Page
Once you have completed the setup of your cloud services and self-service users have started submitting requests, the Cloud Home page is useful for getting a cohesive view of all the different service types. To access the Cloud Home page, follow the path Enterprise → Cloud → Cloud Home, and you will see a screen like that shown in Figure 7.1.
Figure 7.1. The Cloud Home page





Chapter 8. Cloning Databases in Enterprise Manager 12c
Cloning databases has been a part of the role of a database administrator (DBA) since data was first chiseled onto stone tablets. DBAs are frequently called on to both clone and refresh database copies for all sorts of business reasons. As the Oracle product set has broadened over the years, more and more ways to clone data have been introduced. This chapter restricts the discussion to the two main types of clones built in a cloud environment using Oracle Enterprise Manager Cloud Control 12c (EM12c)—full clones and snap (or thin) clones. Also, since the thrust of this entire book is on building databases in a cloud environment, the details covered here are specific to using EM12c in the cloud environment.
Full Clones
A full clone, as the name implies, involves taking a complete or full copy of an existing database. This can be done in one of two ways—by taking a Recovery Manager (RMAN) backup or by issuing an RMAN DUPLICATE command—so let’s take a look at each of those options.
RMAN Backups
Oracle’s Recovery Manager utility, more commonly known as RMAN, has been around since Oracle 8.0, so you probably need no introduction to it. RMAN backups, which are created using either backup sets or image copies, can be used to create profiles, which in turn can be used to create full database clones.
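For reference, here is a minimal sketch of the two backup formats just mentioned; the connection details are assumed, and in practice you would add your usual channel, compression, and location settings.

    $ rman target sys@prod

    RMAN> BACKUP AS BACKUPSET DATABASE PLUS ARCHIVELOG;   # backup set format
    RMAN> BACKUP AS COPY DATABASE;                        # image copy format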
Creating a full database clone using an RMAN backup in the cloud environment requires a number of steps:
1. Enabling the DBaaS cloud involves some common setup tasks required for setting up a private database cloud. These tasks are covered in Chapter 5, “Database Consolidation in Enterprise Manager 12c,” so if you need to, refresh your memory by rereading that chapter. In summary, the tasks are
a. Defining roles and assigning users.
b. Creating a platform as a service (PaaS) infrastructure zone.
c. Creating a database pool.
d. Configuring request settings.
e. Setting quotas.
2. Creating a profile either by taking an RMAN backup or using an existing RMAN backup.
3. Creating a service template based on the profile just created.
Let’s walk through the last two steps in more detail.
Creating a Profile by Taking an RMAN Backup
To create a profile by taking an RMAN backup, start on the Getting Started: Databases page.
1. Go to Setup → Cloud → Database. From here, click on the Data Sources link and ensure the Data Profiles tab is active, and then click Create, as shown in Figure 8.1.
Figure 8.1. Creating a Data Profile

2. The Create Database Provisioning Profile wizard starts, which has only four steps. Step 1 looks pretty bland when you first see it, as it simply asks you to select a reference target. This is the entity that the backup will be taken from, so in many ways, “reference source” would be a better description. When you select your reference target, the rest of the screen is populated on the basis of that target. You can create a profile based on either an ORACLE_HOME target type or a database target type. In our case, we want to create a profile based on a database, so select the specific database by clicking the magnifying glass on the first page of the wizard. That opens the screen shown in Figure 8.2, where you select the Ref database as the reference target.
Figure 8.2. Selecting the Reference Target

3. Once you click the Select button, it takes you back to a much more populated version of step 1 of the wizard. Here, you can click on the Structure and Data radio button (the Create and RMAN Backup radio buttons are selected by default), and also specify credentials. In the example shown in Figure 8.3, preferred credentials for both the host containing the database and the database itself are already created, so you can just use those and click Next.
Figure 8.3. Step 1 of the Create Profile wizard

4. Step 2 of the wizard enables you to specify some standard RMAN configuration options. If the database you are using is running in ARCHIVELOG mode, you can choose to perform either an online or an offline backup. You can also choose the number of channels to use, whether or not to use compression, the backup location, and details such as the backup filename format, backup file tag, and name of the control file. In Figure 8.4, all the settings have been left at their defaults (obviously, if you have RMAN settings you normally would use, you can change any of them), and you can just click Next.
Figure 8.4. Step 2 of the Create Profile wizard

5. On the third step of the wizard, you are prompted for Profile Information (see Figure 8.5):
• Profile Location (mandatory): This is the location in the software library where the database profile will be created.
• Profile Name (mandatory): This is a unique name for the profile.
• Description (optional): Provide a description for the profile that identifies the information you want to see in the description.
• Profile Version and Vendor (optional): These fields should be prepopulated. There is no need to change them.
• Notes (optional): Again, this field will be prepopulated. You can add or delete anything you need here.
• Schedule: In the Schedule section, you can start the profile creation immediately or schedule it for some later time. You can also set a repeat policy to repeat the profile creation regularly. If you are scheduling a repeat creation, you need to place the backup on a Network File System (NFS) directory that is shared among all pool members.
• Purge Policy: The Purge Policy region allows you to either not purge any data that’s been collected or to purge it after a certain number of days or snapshots.
• Working Directory: This field allows you to specify a directory for any temporary files to be kept as part of the profile creation. It defaults to /tmp.
Figure 8.5. Step 3 of the Create Profile wizard

6. The Review step enables you to review all of the inputs to the wizard you have made. If you are satisfied that you have made all the right choices, you can click on the Submit button to create the profile, as shown in Figure 8.6.
Figure 8.6. Step 4 of the Create Profile wizard

You are taken to the Procedure Activity page where you can watch the profile being created. Normally, I change the View Data value on the far right to refresh every 30 seconds, and choose Select All under the View menu to watch all the steps automatically refresh. If there are any problems with the profile creation, you can select the checkbox to the left of the step that fails to see more information displayed on the right of the screen. In the example we’ve walked through here, you can see in Figure 8.7 that there were no problems and the profile was created in just under 2 minutes.
Figure 8.7. Profile creation output

Creating a Profile Using an Existing RMAN Backup
Now let’s look at how to create a database profile using an existing RMAN backup. This is the more common scenario, because most people already have RMAN backups of their databases.
The steps for creating a profile using an existing RMAN backup start in exactly the same way as creating a profile by taking an RMAN backup. Start on the Getting Started: Databases page (you can reach it by following the path Setup → Cloud → Database). From here, click on the Data Sources link and ensure the Data Profiles tab is active, then click Create, as shown in Figure 8.8.
Figure 8.8. Creating a Data Profile

Again, this starts the Create Database Provisioning Profile wizard. When you select your reference target, the rest of the screen is populated on the basis of that target. In our case, we want to create a profile based on an existing RMAN backup of a database, so you can select the specific database by clicking the magnifying glass on the first page of the wizard. That opens the screen shown in Figure 8.9, where the Ref database has been selected as our reference target.
Figure 8.9. Selecting the Reference Target

Once you click the Select button, it takes you back to a much more populated version of step 1 of the wizard.
1. Here, you can again click on the Structure and Data radio button and specify credentials. In the example in Figure 8.10, preferred credentials for both the host containing the database and the database itself are already created, so just use those, and click Next. You will also notice a third option here (Export Schema Objects) that isn’t covered so far because it’s used for schema as a service, not complete clones.
Figure 8.10. Step 1 of the Create Profile wizard

2. This step takes you to a screen that shows a list of all the existing backups you can use to create the profile. You need to be a little careful when selecting the backup you want to use here, as unfortunately they’re not listed in any logical order. When you select the backup you want to use to create the profile, the backup set pieces for that backup are listed below it in a master-detail form. Click the Next button to move to the next screen, as shown in Figure 8.11.
Figure 8.11. Step 2 of the Create Profile wizard

3. On the next screen, you are again prompted for profile information, schedule, and working directory. In this case, because we are working off a single existing RMAN backup, it makes no sense to set a repeat policy, so the repeat options we had when creating a profile based on a new RMAN backup are not available. It is useful to note in the description field that this profile is taken from an existing RMAN backup, as shown in Figure 8.12, but doing so is totally optional. Once you have made any changes you want on this page, click the Next button.
Figure 8.12. Step 3 of the Create Profile wizard

4. You are taken to the review screen where you can click the Submit button again, as shown in Figure 8.13.
Figure 8.13. Step 4 of the Create Profile wizard

Again, you can change the View Data value on the far right to refresh every 30 seconds, and choose Select All under the View menu to watch all the steps automatically refresh. If there are any problems with the profile creation, you can select the checkbox to the left of the step that fails to see more information displayed on the right of the screen. In the example we’ve walked through here, you can see in Figure 8.14 that there were no problems, and the profile was created in just under 1 minute.
Now that we understand the architecture and components that are used in the different consolidation models and how schema as a service is both set up and used, we need to examine some standard deployment issues that must be addressed. These issues include security, operational, resource, and fault isolation, as well as scalability and high availability. See Chapter 1, “Database as a Service Concepts—360 Degrees,” for definitions of these terms. Here we look at how each of these issues affects schema as a service.
Security Isolation when Using Schema as a Service
In the schema as a service environment, the effect of granting a privilege or role is contained to the schema where the grant was made, thus ensuring greater security. One of the main issues (if not the main issue) in a schema as a service environment is namespace collision. Namespace collision may be mistakenly resolved by creating public synonyms, which of course is not recommended from a security perspective. The result is that while, by itself, schema as a service does not lead to reduced security, the decisions of the administrator can end up meaning security is decreased when compared to PDBaaS.
For most configurations, Oracle’s out-of-the-box database security profiles are sufficient to limit access to data in the schema as a service environment. However, it is also possible to provide deeper security using functionality such as encryption, Database Vault, and Audit Vault.
Operational Isolation when Using Schema as a Service
From a backup and recovery perspective, schema as a service tablespaces can be both backed up and recovered individually, even recovered to different points in time. This capability increases the operational isolation significantly.
As more schemas are consolidated into a single database, operations that affect an ORACLE_HOME will affect more schemas. However, this drawback is offset to a certain extent by the ease with which transportable tablespaces can be used to move the schemas to a different database. Having said that, moving a schema in this way is not quite as straightforward as moving a pluggable database, so schema as a service doesn’t rank as highly as PDBaaS for that reason. In addition, schema-based consolidation lacks isolation from both the database lifecycle management and independence of patching and upgrades perspective.
However, the area that makes schema as a service much more difficult from an operational perspective is the issue of namespace collision. Namespace collision occurs because a single database cannot contain multiple copies of the same database object in a single schema. Namespace collision means there can only be one EMPLOYEES table owned by the HR user at any one point in time.
For schema as a service, this is not so much a concern at the database layer as it is at the application layer. The implementation of schema as a service in Enterprise Manager requires you to provide a schema prefix when you issue a Create Schema as a service instance request, and it creates the new schema with that prefix.
For example, if you were using the HR schema from the prod database to create a copy of the HR schema in the test database, and you provided the schema prefix MYHR, the test database would have a schema called MYHR_HR owning a copy of the prod HR database objects.
From the database perspective, then, we can indeed create multiple copies of a schema in a single database, and each schema will be named using the naming convention SCHEMA_PREFIX_ORIGINAL_SCHEMA_NAME, removing the issue of namespace collision.
However, from an application perspective, there is clearly still an issue, as any application based off the original HR schema will expect the objects to be owned by HR. There are at least three ways to address this issue:
• Private synonyms: For each user in the database who will be accessing the HR application, create a set of private synonyms for each object used in the HR application. This task would need to be performed for every object used in the HR application and for every user who would be accessing the application. Obviously, it can be done in a scripted manner but still involves manual intervention by the database administrator.
• Public synonyms: One way to address the need for creating private synonyms for every user, as just shown, is to use public synonyms instead. However, by their very nature, only one public synonym can be created for an object owned by a specific user, so this approach removes the ability to consolidate multiple schemas into a single database and therefore is not really a resolution we can use.
• Logon trigger: Create a logon trigger for each user who will use the application to include a statement of the form ALTER SESSION SET CURRENT_SCHEMA=MYHR_HR. Again, this would require manual intervention after the schema has been created.
Of course, we could also modify the application code to change every HR schema reference to MYHR_HR, but that is rarely something that is easily achieved. The end result of this approach is that, from an application layer, namespace collisions cause a lot of difficulty in the schema as a service paradigm.
However, prior to the advent of PDBaaS, schema as a service was the consolidation model that allowed greatest consolidation to occur, and a number of customers have successfully used one of the preceding namespace collision resolutions in production environments.
Resource Isolation when Using Schema as a Service
Schemas created by schema as a service are just the same as any other database schema. As a result, it is quite simple to use Oracle Resource Manager to create resource consumer groups, map sessions to those groups, and then assign resources to those groups based on resource plan directives.
However, because you do not know which database schemas will be created in when using schema as a service, these sorts of methods are all interventions by the DBA after the schema creation. Following are two methods that do not require this manual intervention after the service has been created:
1. Create and select workloads sensibly. If more than one workload is created, SSA users can specify the workload size that will best meet their requirements.
2. Proper define placement constraints at the database pool level. When a database pool is created, the self-service administrator can set maximum ceilings for resource utilization as placement constraints. These constraints can define
a. The maximum number of database services for each database.
b. The maximum CPU allocation for the service request.
c. The maximum memory allocation for the service request.
The service instance will then be provisioned on the member that best satisfies these placement constraints.
Fault Isolation when Using Schema as a Service
Fault isolation in a schema as a service request is normally provided at the schema level, so application faults in one schema will not cause other applications to fail.
It is also possible that login storms or an improperly configured midtier will impact other applications.
In addition, the more schemas that are consolidated into a single database, the more impact a fault at the database level will have. Of course, there are some faults (such as dropping a table incorrectly) that can be resolved at the schema level, thus isolating the fault from other schemas in the same database.
Once a fault has been isolated and resolved, there are two parts of the database architecture that allow fast recoverability and thus smaller mean time to repair (MTTR) in any database, including one used with schema as a service.
1. Flashback functionality, including both Flashback Drop and Flashback Table:
a. Flashback Drop allows you to reverse the effects of dropping a table, including any dependent objects such as triggers and indexes.
b. Flashback Table allows you to undo the effects of accidentally removing (or indeed adding) some or all of the contents of a table, without affecting other database objects. This feature allows you to recover from logical data corruptions (such as adding or deleting rows from the table) much more quickly than you might otherwise.
2. Point-in-time recoverability can be performed at the individual tablespace level, so if you have multiple schemas affected by an issue, you can issue parallel point-in-time recovery commands to improve MTTR.
Scalability when Using Schema as a Service
Scalability is a fundamental characteristic of DBaaS architectures by virtue of their support for self-service, elasticity, and multitenancy. Oracle’s database technologies provide a number of different ways to support scalability when delivering database services, all of which are applicable in schema as a service. These include
• Resource management/quality of service.
• Addition of extra storage through such functionality as multiple Exadata Database Machine frames.
• Horizontal scaling via RAC when service demands increase beyond the capabilities of a single machine.
• Scalable management resources where Enterprise Manager can add management nodes as the number of targets under management grows.
High Availability when Using Schema as a Service
As we discussed in Chapter 1, not all consumers require the same level of availability in a cloud environment. Oracle provides different levels of availability to accommodate the unique needs of consumers in the cloud environment. Table 4.1 (reproduced from Chapter 1 for convenience) shows the availability levels offered through Oracle’s service plans.
Table 4.1. Availability Levels

Summary
In this chapter, we looked at the architecture and components that make up a consolidated environment, as well as the deployment issues that need to be faced when undertaking a consolidation exercise. We also walked through the details of setting up and using schema as a service.
From the application layer, namespace collisions can cause substantial difficulty in the schema as a service paradigm. However, prior to the advent of PDBaaS, schema as a service was the consolidation model that allowed the greatest consolidation to occur, and a number of customers have successfully used one of the namespace collision resolutions outlined in this chapter in production environments. In the next chapter, we cover using PDBaaS, which addresses the namespace collision issue very successfully.



Chapter 6. Metering and Chargeback in Enterprise Manager 12c
The previous few chapters covered setting up Schema as a Service, using the self-service portal with schema as a service, setting up Pluggable Database as a Service (PDBaaS), and using the self-service portal with PDBaaS, all of these using Oracle Enterprise Manager Cloud Control 12c. Now let’s move on to an area where you start to get more back from all of this work—metering and chargeback.
Metering is something that Enterprise Manager has done since its very first release. It’s a measurement of some form of resource—obviously in the case of Enterprise Manager, it’s a measurement of how much computing resources, such as CPU, I/O, memory, and storage, have been used by an object. When I think way back when to the very first release of Enterprise Manager I ever saw—the 0.76 release, whenever that was!—the thing that comes to mind most is that it had this remarkably pretty tablespace map that showed you diagrammatically just where every block in an object was in a particular tablespace. Remarkably pretty, as I said—but virtually useless because all you could do was look at the pretty colors!
Clearly, metering has come a long, long way since that time, and if you have had Enterprise Manager up and running for some time, you now have at your fingertips metrics on so many different things that you may be lost trying to work out what you can do with it all. Well, that’s where chargeback comes into play. In simple terms, chargeback is (as the name implies) an accounting tool. In Enterprise Manager terms, it has three main functions:
• It provides a way of aggregating the enormous amount of metrics data that Enterprise Manager collects.
• It provides reports to the consumers of those metrics of how much they have used of those particular metrics.
• If you have set it up to do so, it provides a way for the IT department to charge those consumers for the resources they have used.
Let’s expand on that last point a little further. Within the Chargeback application, the cloud administrator can set specific charges for specific resources. As an example, you might decide to charge $1 a month per gigabyte of memory used for a database. Those charges can be transferred to some form of billing application, such as Oracle’s Self-Service E-Billing application, and end up being charged as a real cost to the end user. However, only a small number of people are actually using it to charge a cost to the end user. There are two reasons for that:
• First, most end users are still not in the mindset of paying for computing power in the same way as they pay for other utilities—that is, paying for the amount of computing power that is actually consumed, as we do with our gas, electricity, and phone bills.
• Second, most people have difficulty deciding just how much to charge for a “unit” (whatever that might be) of computing power. In fact, arguments over just what to charge for a unit of computing power have been known to last much longer than any meetings held to decide to actually implement chargeback!
The end result is that customers often choose to implement showback rather than chargeback. Showback is in many ways similar to chargeback. It’s the ability to provide reports to end users that show how much computing resource they have used and to show them how much it would have cost the end users if the IT department had decided to actually charge for it. In some ways, this information is just as beneficial to the IT department as to the end users because it allows them to have a much better grasp on what they need to know for budgeting purposes, and it avoids the endless arguments about whether end users are being charged too much.
Terminology
Okay, let’s talk about some of the new terminology you need to understand before we implement chargeback (from now on, we use the term chargeback to cover both chargeback and showback for simplicity’s sake, and because the feature is actually called chargeback in the Enterprise Manager Cloud Control product).
Chargeback Entities
The first concept you need to understand is that of a chargeback entity. In Enterprise Manager terms, a target typically uses some form of resource, and the chargeback application calculates the cost of that resource usage. In releases prior to Enterprise Manager 12.1.0.4, the chargeback application collected configuration information and metrics for a subset of Enterprise Manager targets. In the 12.1.0.4 release, you can add chargeback support for Enterprise Manager target types for which there is no current out-of-the-box chargeback support via the use of Enterprise Manager command-line interface (EMCLI) verbs. These chargeback targets, both out-of-the-box and custom types, are collectively known as entities.
You can get a complete list of all the items that you can set up chargeback for by clicking Charge Plans, then Charge Item Library (see Figure 6.1).
Figure 6.1. Accessing the charge item library

You will then see a list of all the charge items, including a brief description of each item (see Figure 6.2).
Figure 6.2. Available charge items

Charge Plans
A charge plan is what Enterprise Manager uses to associate the resources being charged for and the rates at which they are charged. Two types of charge plans are available:
• Universal charge plan: The universal charge plan contains the rates for CPU, storage, and memory. Although it is called the “universal” charge plan, it isn’t really universal because it doesn’t apply to all entity types. For example, it is not applicable to J2EE applications.
• Extended charge plans: The universal charge plan is an obvious starting point, but there are many situations where entity-specific charges are required. Let’s say you have a lot of people who understand Linux, but a new environment being added to your data center requires Windows knowledge. If you had to pay a contractor to look after that environment because it was outside your team’s skill set, it would be fair to charge usage of the Windows environment at a higher rate. As another example, suppose your standard environments do not use Real Application Clusters (RAC), and a new environment requires the high availability that RAC provides. RAC is, of course, a database option that requires an additional license fee, so it should be charged at a higher rate. An extended charge plan can be used to meet these sorts of requirements, as it provides greater flexibility to chargeback administrators. Extended charge plans allow you to do the following:
• Set up specific charges for specific entities
• Define rates based on configuration and usage
• Assign a flat rate regardless of configuration or usage
• Override the rates set for the universal plan
An out-of-the-box extended plan is provided that you can use as a basis for creating your own extended plans. This plan defines charges based on machine sizes for the Oracle Virtual Machine Guest entity.
Cost Centers
Obviously, when charges for resource usage are implemented, these charges must be assigned to something. In the chargeback application, costs are assigned to a cost center. Cost centers are typically organized in a hierarchy and may correspond to different parts of an organization—for example, sales, development, human resources, and so forth—or to different customers—for example, where you are a hosting company and host multiple customer environments. You can also import cost centers that have been implemented in your Lightweight Directory Access Protocol (LDAP) server if you want to use those.
Reports
The main benefit you get from using chargeback is the vast amount of information it puts at your fingertips. This information can be reported on by administrators in a variety of formats available via the Oracle BI Publisher tool, including pie charts and bar graphs, and you can drill down to charges based on a specific cost center, entity type, or resource. You can also make use of trending reports over time and can use this information to aid you in your IT budget planning. Outside the chargeback application itself, self-service users can view chargeback information related to the resources they have used within the self-service portal.
Setting Up Chargeback
Now that you have an understanding of the capabilities of the chargeback application in the Enterprise Manager product suite, the next step, of course, is to set it up. Chargeback can be set up in Enterprise Manager 12c by any user with the EM_CBA_ADMIN role or, of course, as SYSMAN.
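As an aside, if you would rather not do everything as SYSMAN, a dedicated chargeback administrator can be created from the command line. This sketch assumes the standard EMCLI create_user verb; the user name and password are placeholders, so confirm the exact parameters in your release’s EMCLI reference.

# Log in to EMCLI first (URL and credentials are environment specific)
emcli login -username=sysman

# Create a hypothetical administrator holding only the chargeback admin role
emcli create_user -name="CB_ADMIN" -password="Welcome_123" -roles="EM_CBA_ADMIN"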
1. Once you are logged in as a user with the relevant privileges, follow the path Enterprise → Chargeback (see Figure 6.3).
Figure 6.3. Setting up chargeback, step 1

2. The default currency symbol displayed in the chargeback application is the dollar sign ($). Note that this is just a symbol and does not have any impact on chargeback calculations, but you can change it if you want by clicking on the Settings tab (see Figure 6.4).
Figure 6.4. Setting up chargeback, step 2

3. You can enter a new currency symbol here, but for this environment, leave it at the default and click Charge Plans (see Figure 6.5).
Figure 6.5. Setting up chargeback, step 3

4. The first thing to do is set rates for the universal charge plan, which covers the CPU usage, memory allocation, and storage allocation metrics. Click Set Rates (see Figure 6.6).
Figure 6.6. Setting up chargeback, step 4

5. Set the rates appropriately, and click Save (see Figure 6.7).
Figure 6.7. Setting up chargeback, step 5

6. The universal charge plan is useful, but in some situations, you may want to apply charges on other entities. That’s where an extended charge plan can be used. To create an extended charge plan, click Create, then click Plan (see Figure 6.8).
Figure 6.8. Setting up chargeback, step 6

7. Provide a meaningful name for the charge plan, then click Add to select an entity type for which you can set rates (see Figure 6.9).
Figure 6.9. Setting up chargeback, step 7

8. You can add multiple entity types to the charge plan at once, so choose Host and Oracle Pluggable Database, then click OK (see Figure 6.10).
Figure 6.10. Setting up chargeback, step 8

9. You can now set up specific configurations for each entity type by selecting the entity and then clicking Setup Configurations (see Figure 6.11).
Figure 6.11. Setting up chargeback, step 9

10. Click the Add button (see Figure 6.12).
Figure 6.12. Setting up chargeback, step 10

11. In this example, different rates are charged for different machine architectures. This is the sort of thing you might do if you wanted to charge more for maintaining machines that are outdated, or for which you have fewer skills in-house and may need to bring in consultants. To do this, set the Condition Item to Machine Architecture, then click the Search button to search for the different values you can choose (see Figure 6.13).
Figure 6.13. Setting up chargeback, step 11

12. In this scenario, we want to set up three different configurations:
• Intel x86_64 as our default
• Intel i686 as more expensive because it’s outdated
• PA-RISC 64-bit as more expensive because we have fewer skills in that area
You can select each in turn from the list, and click OK, as in Figure 6.14.
Figure 6.14. Setting up chargeback, step 12

13. Click OK (see Figure 6.15).
Figure 6.15. Setting up chargeback, step 13

14. On the Setup Configurations pop-up, click OK again (see Figure 6.16).
Figure 6.16. Setting up chargeback, step 14

15. You can now specify multipliers for each architecture. In this example, the preferred architecture (Intel x86_64) is left at 1x, Intel i686 is set to 2x, PA-RISC 64-bit is set to 1.5x, and all other architectures are set to 3x. So, for example, if the universal plan charged $1 per CPU per day, an Intel i686 host would accrue $2 per CPU per day. Next, you can click Oracle Pluggable Database to set up configurations for it (see Figure 6.17).
Figure 6.17. Setting up chargeback, step 15

16. Click Setup Configurations (see Figure 6.18).
Figure 6.18. Setting up chargeback, step 16

17. Click Add (see Figure 6.19).
Figure 6.19. Setting up chargeback, step 17

18. On the Add Configuration pop-up, select Version for the Condition Item (see Figure 6.20).
Figure 6.20. Setting up chargeback, step 18

19. In this scenario, clients who have not moved to the latest version of the database are charged more as a way of encouraging migration, so the Condition Operator and Condition Value are set to check the database version. You can also charge more for the use of separately licensed options such as Partitioning. Click OK (see Figure 6.21).
Figure 6.21. Setting up chargeback, step 19

20. On the Setup Configurations pop-up, click OK (see Figure 6.22).
Figure 6.22. Setting up chargeback, step 20

21. Back on the Create Plan: HR Chargeback Plan page, you can increase the multiplier for configurations that aren’t on the latest version, and click Save (see Figure 6.23).
Figure 6.23. Setting up chargeback, step 21

22. Once the charge plans are configured satisfactorily, you can also define cost centers that the charges will be assigned to. Cost centers are normally configured in a business hierarchy, so let’s walk through the process of building that. Start by clicking the Cost Centers tab (see Figure 6.24).
Figure 6.24. Setting up chargeback, step 22

23. Cost centers can either be imported from an existing LDAP configuration (done via the Action menu) or defined in the chargeback application. In this example, they are added to the chargeback application. To do this, click Add (see Figure 6.25).
Figure 6.25. Setting up chargeback, step 23

24. On the New Cost Center pop-up, enter a cost center name and display name, then click OK (see Figure 6.26).
Figure 6.26. Setting up chargeback, step 24

25. You can do this as many times as you like. To add a cost center in a business hierarchy, first create a cost center at the top level (as shown previously), then create another cost center using the Member of radio button, and select the relevant top-level cost center (see Figure 6.27).
Figure 6.27. Setting up chargeback, step 25

26. Once you have defined all the cost centers you need, the next step is to add the relevant entities. An entity is either a target in Enterprise Manager or a custom resource that has been added to Enterprise Manager. Custom resources can include target types for which there is no current out-of-the-box chargeback support (covered in more detail in the documentation). To add entities, click the Entities tab (see Figure 6.28).
Figure 6.28. Setting up chargeback, step 26

27. Click Add Entities (see Figure 6.29).
Figure 6.29. Setting up chargeback, step 27

28. This will start the Add Entities wizard. Click Add to select targets for chargeback (see Figure 6.30).
Figure 6.30. Setting up chargeback, step 28

29. Now it is time to add hosts. To search for hosts to add, deselect All, and select Host from the Target Type dropdown (see Figure 6.31).
Figure 6.31. Setting up chargeback, step 29

30. From the list of hosts, select one or more hosts to add as an entity, and click Select (see Figure 6.32).
Figure 6.32. Setting up chargeback, step 30

31. On the Make Assignments wizard step, select the entity name, and click Assign Plan (see Figure 6.33).
Figure 6.33. Setting up chargeback, step 31

32. Select HR Chargeback Plan, and click OK to add the chargeback plan to this host (see Figure 6.34).
Figure 6.34. Setting up chargeback, step 32

33. To add a cost center to this host, click Assign Cost Center (see Figure 6.35).
Figure 6.35. Setting up chargeback, step 33

34. Select the relevant cost center, and click OK (see Figure 6.36).
Figure 6.36. Setting up chargeback, step 34

35. Back on the Make Assignments step, click Next (see Figure 6.37).
Figure 6.37. Setting up chargeback, step 35

36. Review the settings, and if they are correct, click Submit (see Figure 6.38).
Figure 6.38. Setting up chargeback, step 36

37. You can walk through the same process to add one of the container databases and a platform as a service (PaaS) infrastructure zone as well. Once that has been done, you can schedule an on-demand data collection to start seeing the information for these entities in the chargeback application. To do that, select On-demand data collection from the Action dropdown menu (see Figure 6.39).
Figure 6.39. Setting up chargeback, step 37

38. Click Yes on the Confirmation pop-up (see Figure 6.40).
Figure 6.40. Setting up chargeback, step 38

39. In the Information message, click the “here” link to view the job status (see Figure 6.41).
Figure 6.41. Setting up chargeback, step 39

40. Once the job status changes to 1 Succeeded, click Enterprise, then Chargeback (see Figure 6.42).
Figure 6.42. Setting up chargeback, step 40

41. You should see information being collected both in the Summary and Usage Trends regions (see Figure 6.43).
Figure 6.43. Setting up chargeback, step 41

It might seem like a very long-winded setup, but in reality, chargeback takes only a short time to configure. Of course, once it’s configured, you then have all the joys of getting money out of the end clients for their resource usage—good luck with that one, I can’t help you there!
Summary
The metering functionality that has been part of the Enterprise Manager product since its very early days allows us to both collate and report on the resource usage of particular targets within the Enterprise Manager environment. In combination with the chargeback application that was introduced with the 12c version of Enterprise Manager, you can now show business units the real cost of providing computing power. You can even charge the end users for the costs of their resource usage.



Chapter 7. Manage and Administer the Database Cloud in Enterprise Manager 12c
Now that we’ve covered in some detail how to set up and configure the various database cloud models in Enterprise Manager Cloud Control 12c, you may be thinking that there really is no need for a database administrator (DBA) in the cloud world. After all, once the pools, zones, profiles, and templates are set up, database creation (and even destruction) is now in the hands of the self-service user. Depending on whether you are using a public or private cloud model, many of the time-consuming tasks such as installation, setup, and configuration may now be in the hands of cloud providers rather than DBAs.
In fact, nothing could be further from the truth. DBAs are still very much in demand in cloud environments. Moving to a cloud environment may remove much of the tactical work that DBAs have had to manage for so many years, but that simply frees them up to move to a more strategic position where they can focus on areas that are of more value to the business.
Although the creation and destruction of databases may now lie in the hands of self-service users, it will be a long time before users take on some of the other roles of the DBA, such as performance tuning (if indeed they ever take it on!). Old habits die hard, and it has long been the habit of users to reach out immediately to the DBA when they run into a performance problem. “The database is slow!” has been the catch cry for many years, even though it is often not the fault of the database. With so many technology layers between the end user and the database, one of the main reasons that users will still turn to the DBA for performance problems is that they simply lack the knowledge and skills to identify where the problem lies, let alone to understand and address the problem.
It may be more difficult in the cloud environment for the DBA to identify what sorts of performance issues are occurring, largely because DBAs are more remote from the day-to-day running of these databases than they were previously. One of the main roles of the DBA in the non-cloud world is to ensure databases are up and running, available to end users, and performing well. To that end, many DBAs have developed their own scripts that they run as cron jobs in the *nix world (or batch jobs in the Windows world), or they use tools such as Enterprise Manager to perform similar checks. Of course, DBAs traditionally have had some form of access to the databases they are looking after, either by directly logging onto the machines that hold the databases or by accessing the databases across the network. In the cloud world, self-service users can create databases (up to the quotas they have been provided with, of course) without the DBA even knowing the databases exist, let alone where they are. Thankfully, there is a straightforward answer to this issue—the Cloud Home page.
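Before moving on to that, here is a minimal sketch of the kind of homegrown cron check just mentioned. Everything in it (paths, SID, log location) is a placeholder, and a production version would alert someone rather than just writing to a log.

#!/bin/bash
# check_db.sh -- crude availability probe for a single database instance
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export ORACLE_SID=PROD
export PATH=$ORACLE_HOME/bin:$PATH

# A row coming back means the instance is up and accepting connections
status=$(echo "select 'DB_UP' from dual;" | sqlplus -s "/ as sysdba" 2>/dev/null)
if ! echo "$status" | grep -q DB_UP; then
    echo "$(date): $ORACLE_SID appears to be down" >> /var/log/db_check.log
fi

# Example crontab entry to run the probe every five minutes:
# */5 * * * * /home/oracle/scripts/check_db.sh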
The Cloud Home Page
Once you have completed the setup of your cloud services and self-service users have started submitting requests, the Cloud Home page is useful for getting a cohesive view of all the different service types. To access the Cloud Home page, follow the path Enterprise → Cloud → Cloud Home, and you will see a screen like the one shown in Figure 7.1.
Figure 7.1. The Cloud Home page





Chapter 8. Cloning Databases in Enterprise Manager 12c
Cloning databases has been a part of the role of a database administrator (DBA) since data was first chiseled onto stone tablets. DBAs are frequently called on to both clone and refresh database copies for all sorts of business reasons. As the Oracle product set has broadened over the years, more and more ways to clone data have been introduced. This chapter restricts the discussion to the two main types of clones built in a cloud environment using Oracle Enterprise Manager Cloud Control 12c (EM12c)—full clones and snap (or thin) clones. Also, since the thrust of this entire book is building databases in a cloud environment, the details covered here are specific to using EM12c in that environment.
Full Clones
A full clone, as the name implies, involves taking a complete or full copy of an existing database. This can be done in one of two ways—by taking a Recovery Manager (RMAN) backup or by issuing an RMAN DUPLICATE command—so let’s take a look at each of those options.
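Before walking through the wizard-driven approach, it is worth seeing roughly what the second option looks like in raw form. The following is a minimal sketch of an active database duplication; the connect strings, passwords, and file name conversion paths are placeholders, and a real duplication needs the auxiliary instance started NOMOUNT with Oracle Net configuration in place.

# Duplicate the PROD database to a new CLONE auxiliary instance over the network
rman <<'EOF'
CONNECT TARGET sys/password@PROD
CONNECT AUXILIARY sys/password@CLONE
DUPLICATE TARGET DATABASE TO CLONE
  FROM ACTIVE DATABASE
  SPFILE
    SET db_file_name_convert '/u01/oradata/PROD/','/u01/oradata/CLONE/'
    SET log_file_name_convert '/u01/oradata/PROD/','/u01/oradata/CLONE/';
EOF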
RMAN Backups
Oracle’s Recovery Manager utility, more commonly known as RMAN, has been around since Oracle 8.0, so you probably need no introduction to it. RMAN backups, which are created using either backup sets or image copies, can be used to create profiles, which in turn can be used to create full database clones.
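If you do not already have a suitable backup, a minimal sketch of taking one is shown here. The staging location is a placeholder, and your own channel and retention settings will naturally differ.

# Whole-database compressed backup, plus archived logs, to a staging area
rman target / <<'EOF'
BACKUP AS COMPRESSED BACKUPSET DATABASE
  FORMAT '/u01/staging/%U'
  PLUS ARCHIVELOG;
EOF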
Creating a full database clone using an RMAN backup in the cloud environment requires a number of steps:
1. Enabling the DBaaS cloud involves some common setup tasks required for setting up a private database cloud. These tasks are covered in Chapter 5, “Database Consolidation in Enterprise Manager 12c,” so if you need to, refresh your memory by rereading that chapter. In summary, the tasks are
a. Defining roles and assigning users.
b. Creating a platform as a service (PaaS) infrastructure zone.
c. Creating a database pool.
d. Configuring request settings.
e. Setting quotas.
2. Creating a profile either by taking an RMAN backup or using an existing RMAN backup.
3. Creating a service template based on the profile just created.
Let’s walk through the last two steps in more detail.
Creating a Profile by Taking an RMAN Backup
To create a profile by taking an RMAN backup, start on the Getting Started: Databases page.
1. Go to Setup → Cloud → Database. From here, click on the Data Sources link and ensure the Data Profiles tab is active, and then click Create, as shown in Figure 8.1.
Figure 8.1. Creating a Data Profile

2. The Create Database Provisioning Profile wizard starts, which has only four steps. Step 1 looks pretty bland when you first see it, as it simply asks you to select a reference target. This is the entity that the backup will be taken from, so in many ways, “reference source” would be a better description. When you select your reference target, the rest of the screen is populated on the basis of that target. You can create a profile based on either an ORACLE_HOME target type or a database target type. In our case, we want to create a profile based on a database, so select the specific database by clicking the magnifying glass on the first page of the wizard. That opens the screen shown in Figure 8.2, where you select the Ref database as the reference target.
Figure 8.2. Selecting the Reference Target

3. Once you click the Select button, it takes you back to a much more populated version of step 1 of the wizard. Here, you can click on the Structure and Data radio button (the Create and RMAN Backup radio buttons are selected by default), and also specify credentials. In the example shown in Figure 8.3, preferred credentials for both the host containing the database and the database itself are already created, so you can just use those and click Next.
Figure 8.3. Step 1 of the Create Profile wizard

4. Step 2 of the wizard enables you to specify some standard RMAN configuration options. If the database you are using is running in ARCHIVELOG mode, you can choose to perform either an online or an offline backup. You can also choose the number of channels to use, whether or not to use compression, the backup location, and details such as the backup filename format, backup file tag, and name of the control file. In Figure 8.4, all the settings have been left at their defaults (obviously, if you have RMAN settings you normally would use, you can change any of them), and you can just click Next.
Figure 8.4. Step 2 of the Create Profile wizard
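If you normally drive RMAN by hand, the options on this step map loosely onto persistent RMAN settings like the ones below. The values are illustrative only.

rman target / <<'EOF'
# Roughly what the channel count and compression choices correspond to
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
# And the backup location and file name format
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u01/backups/%d_%U';
EOF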

5. On the third step of the wizard, you are prompted for Profile Information (see Figure 8.5):
• Profile Location (mandatory): This is the location in the software library where the database profile will be created.
• Profile Name (mandatory): This is a unique name for the profile.
• Description (optional): Provide a description for the profile that identifies the information you want to see in the description.
• Profile Version and Vendor (optional): These fields should be prepopulated. There is no need to change them.
• Notes (optional): Again, this field will be prepopulated. You can add or delete anything you need here.
• Schedule: In the Schedule section, you can start the profile creation immediately or schedule it for some later time. You can also set a repeat policy to repeat the profile creation regularly. If you are scheduling a repeat creation, you need to place the backup on a Network File System (NFS) directory that is shared among all pool members.
• Purge Policy: The Purge Policy region allows you to either not purge any data that’s been collected or to purge it after a certain number of days or snapshots.
• Working Directory: This field allows you to specify a directory for any temporary files to be kept as part of the profile creation. It defaults to /tmp.
Figure 8.5. Step 3 of the Create Profile wizard

6. The Review step enables you to review all of the inputs you have made to the wizard. If you are satisfied that you have made all the right choices, click the Submit button to create the profile, as shown in Figure 8.6.
Figure 8.6. Step 4 of the Create Profile wizard

You are taken to the Procedure Activity page where you can watch the profile being created. Normally, I change the View Data value on the far right to refresh every 30 seconds, and choose Select All under the View menu to watch all the steps automatically refresh. If there are any problems with the profile creation, you can select the checkbox to the left of the step that fails to see more information displayed on the right of the screen. In the example we’ve walked through here, you can see in Figure 8.7 that there were no problems and the profile was created in just under 2 minutes.
Figure 8.7. Profile creation output
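Incidentally, you do not have to watch the procedure from the browser; deployment procedure runs can also be checked with EMCLI. Another hedged sketch, where the instance GUID is a placeholder:

# List recent deployment procedure runs and note the GUID of the profile creation
emcli get_instances

# Check the status of a single run
emcli get_instance_status -instance=16B15CB29C3F9E6CE040578C96093102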

Creating a Profile Using an Existing RMAN Backup
Now let’s look at how to create a database profile using an existing RMAN backup. This is the more common scenario, because most people already have RMAN backups of their databases.
The steps for creating a profile using an existing RMAN backup start in exactly the same way as creating a profile by taking an RMAN backup. Start on the Getting Started: Databases page (you can reach it by following the path Setup → Cloud → Database). From here, click on the Data Sources link and ensure the Data Profiles tab is active, then click Create, as shown in Figure 8.8.
Figure 8.8. Creating a Data Profile

Again, this starts the Create Database Provisioning Profile wizard. When you select your reference target, the rest of the screen is populated on the basis of that target. In our case, we want to create a profile based on an existing RMAN backup of a database, so you can select the specific database by clicking the magnifying glass on the first page of the wizard. That opens the screen shown in Figure 8.9, where the Ref database has been selected as our reference target.
Figure 8.9. Selecting the Reference Target

Once you click the Select button, it takes you back to a much more populated version of step 1 of the wizard.
1. Here, you can again click on the Structure and Data radio button, this time choosing to use an existing RMAN backup, and specify credentials. In the example in Figure 8.10, preferred credentials for both the host containing the database and the database itself are already created, so just use those, and click Next. You will also notice a third option here (Export Schema Objects) that hasn’t been covered so far because it’s used for schema as a service, not full clones.
Figure 8.10. Step 1 of the Create Profile wizard

2. The next screen shows a list of all the existing backups you can use to create the profile. You need to be a little careful when selecting the backup here, as unfortunately the backups aren’t listed in any logical order. When you select the backup you want to use to create the profile, the backup set pieces for that backup are listed below it in master-detail form. Click the Next button to move to the next screen, as shown in Figure 8.11.
Figure 8.11. Step 2 of the Create Profile wizard

3. On the next screen, you are again prompted for profile information, a schedule, and a working directory. In this case, because we are working from a single existing RMAN backup, it makes no sense to set a repeat policy, so the repeat options that were available when creating a profile based on a new RMAN backup are absent. It is useful (though entirely optional) to note in the description field that this profile is taken from an existing RMAN backup, as shown in Figure 8.12. Once you have made any changes you want on this page, click the Next button.
Figure 8.12. Step 3 of the Create Profile wizard

4. You are taken to the review screen where you can click the Submit button again, as shown in Figure 8.13.
Figure 8.13. Step 4 of the Create Profile wizard

Again, you can change the View Data value on the far right to refresh every 30 seconds, and choose Select All under the View menu to watch all the steps automatically refresh. If there are any problems with the profile creation, you can select the checkbox to the left of the step that fails to see more information displayed on the right of the screen. In the example we’ve walked through here, you can see in Figure 8.14 that there were no problems, and the profile was created in just under 1 minute.
