Wednesday, 28 February 2018

Oracle 12c, Part 2


DBA Responsibilities in a Traditional Environment

In the traditional database paradigm, the DBA role falls loosely into two categories: the application DBA and the operational DBA (see Figure 2.1). They work on the same database but look at it from two different perspectives. DBAs in both roles must understand the core concepts and internals of the database to perform their job. They also need to understand the various database features, how they work, and how to best use them.
Figure 2.1. DBA responsibility in a traditional model
Depending on their role or interests, DBAs may gravitate to functionality at an object level (partitioning, indexing, compression, virtual columns), to functionality useful to programmers and database users (result caches, advanced queueing, Java within the database, XML processing, etc.), or to functionality applicable at the database level and focused on infrastructure needs (Data Guard, Real Application Clusters [RAC], recovery management, encryption, etc.).
Application DBAs tend to work with databases from a logical implementation viewpoint. They look at the database from an application- and data-centric focus. They are more in tune with relational data models and star schema warehouse models, data relationships and usage, and database features directly consumable by the application team, such as partitioning, queueing, replication, Procedural Language/Structured Query Language (PL/SQL) features, and enhancements.
Operational DBAs, by contrast, tend to look at the database from a physical implementation perspective. They administer the physical aspects of the database, such as backups, database storage allocation and capacity, database server health, disaster recovery, and consolidation. Operational DBAs are focused on database features that are directly consumed by the infrastructure side of the equation. The benchmark of success is meeting key performance metrics around I/O and network latencies.
The roles of the application DBA and the operational DBA are equally important, and in many areas, they overlap. Both must understand the core concepts of the database and how it functions. Both are concerned with performance tuning, availability, and scalability.
Take performance tuning as an example. An application DBA tunes the database with the purpose of optimizing the application and its use of the database. An operational DBA tunes the database with the purpose of optimizing the physical infrastructure—reducing I/O latencies or pressure, relieving memory pressure, improving operating system performance, and fine-tuning the network.
More specifically, both DBAs tune SQL. The application DBA tunes SQL by understanding the application and how it uses the data and by working with the development team to optimize that process. The operational DBA tunes SQL by looking at Explain Plans, I/O characteristics, and latencies.
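For illustration, an operational DBA might start from the optimizer's plan for a problem statement. The table and bind names below are hypothetical; EXPLAIN PLAN and DBMS_XPLAN are standard Oracle tooling:
-- Capture the optimizer's plan for a (hypothetical) application query
EXPLAIN PLAN FOR
  SELECT o.order_id, o.order_total
  FROM   orders o
  WHERE  o.customer_id = :cust_id;
-- Display the captured plan, including access paths and cost estimates
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);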
The most successful DBAs are those who can look at the same problem from both viewpoints. The best solution is the one that balances infrastructure and application concerns.
DBAs are often seen as the gatekeepers of the databases. They see the database as “their” property and themselves as the guardians of their database. Server administrators, storage administrators, backup administrators, and network administrators share this outlook as it applies to their areas. Consequently, IT infrastructure management is often seen as a collection of siloed groups, much like an island nation of small, individual islands.
In actuality, there is a smaller and relatively unknown group of DBAs that we call database engineers. This group of DBAs straddles the fence between the various groups responsible for the IT infrastructure, and their role is similar to that of application DBAs. Database engineers work closely with the infrastructure teams—with the storage team to build adequate storage, with the server team to choose servers and operating systems to best support database operations, and with the network teams to configure the networking, all to ensure that the database performance can be maximized and optimized.

What’s Changed with DBaaS

With the evolution of cloud computing, the roles and viewpoints of the DBAs also have evolved. Cloud computing introduces a fundamental change in database ownership and responsibility. This change has a ripple effect felt throughout the entire workflow between applications and the database. As a result, each individual role is now distinct, visible, and critical in its own way.
As defined in Chapter 1, the database cloud, or database as a service (DBaaS), is all about providing a platform that frees the application development teams from having to worry about servers and hardware, allowing them to focus all of their attention on application ownership and development.
In other words, DBaaS splits the ownership and responsibility into two tracks, one at an infrastructure layer and the other at an application layer. DBaaS divides performance responsibilities by differentiating between application-owner performance and infrastructure-owner performance:
• Infrastructure teams own and are responsible for performance at a hardware and infrastructure layer.
• Application teams own performance responsibility from an application perspective, which includes how the application interacts with the database.
DBaaS divides scalability responsibilities, once again, by differentiating between application scalability and infrastructure scalability:
• Infrastructure teams own the responsibility of creating a platform that is elastic and scalable on demand. In other words, infrastructure teams own managing capacity in anticipation of growth.
• Application teams own the responsibility of creating an application that is scalable on demand and, more important, understanding and obtaining what the application needs for scaling.
Availability responsibilities also are divided by ownership. It is crucial to understand the distinction between performance and scalability, on the one hand, and availability, on the other. Performance and scalability have a direct impact on availability.
The database and infrastructure team is responsible for availability of the service itself, and the application team is responsible for availability at the application layer. This statement is broad and rather useless unless we better define and distinguish service availability and application availability.
To start with, let us agree that if the application is unresponsive to the end user, it can be considered as being unavailable. The real question is, Why was the application unresponsive to the end user? The difference between the availability of the DBaaS and the availability of the application is rooted in the problem that is making the application unavailable in the first place.
From the DBaaS team’s perspective, if the database is up and running and is able to respond to a login request, it is considered available. But that seems too generic and simplistic; therefore, we add quality of service (QoS) and the corresponding service level agreements (SLAs) to this definition of availability. The SLA might state availability requirements in terms of I/O response times and I/O throughput expectations, such as input/output operations per second (IOPS), megabytes per second (MBps), and CPU capacity. Therefore, from a DBaaS provider perspective, the database is considered available if it is up and running, accepting database login requests, and producing QoS metrics that show all SLAs are being met.
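As a rough illustration, a QoS beacon might sample the standard V$SYSMETRIC view. The metric names below ship with the database; the thresholds they would be compared against come from the SLA itself:
-- Sample the 60-second system metrics most relevant to I/O and CPU SLAs
SELECT metric_name, value, metric_unit
FROM   v$sysmetric
WHERE  group_id = 2   -- the 60-second interval metric group
AND    metric_name IN ('Physical Read Total IO Requests Per Sec',
                       'Physical Read Total Bytes Per Sec',
                       'Host CPU Utilization (%)');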
Some examples where the DBaaS provider will be responsible for the outage and for identifying and resolving the issues are:
• The archive destination space is not managed properly and fills up as a result. The database is unavailable because the archiver has insufficient space to archive and release the online redo logs.
• There was an issue at the storage area network (SAN) layer, which caused the I/O performance to drop dramatically.
• A database backup job kicked off at the wrong time or has been running for too long and is impacting the data file I/O performance.
• The database or the server crashes or aborts due to either a hardware or a software bug.
The key point is that, from a business unit perspective, availability is based on the ability to perform the business function according to the SLA. In other words, this is the availability of the entire application the business uses to deliver the function. For an application to be available, therefore, the infrastructure must be available and the application must be able to use and capitalize on the infrastructure strength. Let us take a deeper look at availability from a database perspective.
Database availability is defined in terms of the database being open and able to accept connections and respond to a beacon set of requests within a defined response-time window. Events that would constitute the database itself being down include, but are not limited to, the following:
• The database hanging in an archiver-stuck mode.
• The database failing requests due to lack of space.
• The listener being down, which makes the database unreachable even though it is up and running.
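A minimal availability beacon, as a sketch, needs little more than a successful login and confirmation that the instance is open; response-time windows and richer checks would be layered on by the monitoring tool:
-- Succeeds only if a login is possible and the instance reports itself open
SELECT status, database_status FROM v$instance;
SELECT 1 FROM dual;   -- trivial round-trip to confirm the database responds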
Once the database is deemed available, the application owners and developers are responsible for identifying and resolving the issue that is interfering with availability. The reason this rather hard line is drawn here is that the application development team has the ownership of and authority over the application design and architecture. In such cases, an availability hit is recorded against the application, and it is the application team’s responsibility to identify and remediate the situation. Following are a few examples where availability is impacted by application design and architecture:
• Excessive row-level locking and blocking causes an unresponsive application (see the diagnostic query after this list).
• The application does not manage its connection pool properly and consequently exceeds the configured maximum number of connections. As a result, the application receives an error when establishing connections and is rendered unresponsive and unavailable.
• The database load and throughput exceed the volume specified by the SLA for the solution design.
• Availability requirements of the database have increased since the original solution design.
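For the locking scenario in the first bullet, a first-pass diagnostic against the standard V$SESSION view might look like this:
-- List sessions that are currently blocked, and which session is blocking them
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;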
In terms of cloud service, these examples emphasize the need for proper and clear definition of what is called the service catalog. The service catalog describes all the responsibilities and expectations of the cloud provider as well as of the cloud subscriber. From an availability perspective, for example, the service catalog would describe
• The definition and expectations on overall SLAs related to the database infrastructure.
• The QoS and beacons used to measure and report on SLAs.
• The definition of planned outages versus unplanned outages and their durations.
Note
Defining uptime SLAs is relatively easy, but defining and developing an overarching QoS is where the difficulties arise. The QoS must also take into account the infrastructure capabilities, such as automated relocation of resources.
This discussion leads directly to the topic of feature set and capabilities of a good DBaaS solution. Not only must the solution be proactive in terms of ability to handle pure infrastructure issues, it must also be able to assist application owners in identifying, addressing, and managing application availability. In other words, a good DBaaS solution is one that provides the features and toolkits that enable the consumer and provider to be proactive. For example, the solution should have in place
• A monitoring and alerting system that is comprehensive and customizable in terms of metrics monitored and who receives the notifications.
• A dashboard system that clearly shows how the service is performing against the QoS goals and SLAs.
• A system that allows the consumers of DBaaS to scale and grow on demand.
What this really means is that DBaaS has changed the database paradigm by redefining and redrawing the lines of ownership and responsibility and, at the same time, combining the individual infrastructure fiefdoms into a single role, at least from the end user viewpoint. End users expect a service delivered at an agreed-upon quality level, with the toolkit and functionality that gives them visibility, control, and flexibility. The consumer is not—and does not want to be—aware of the mechanics of how that service is delivered.

The New Role of the Cloud DBA

In the earlier sections, we talked about application DBAs, operational DBAs, and database engineers. With the introduction of DBaaS, these roles are now spread out and sometimes even decentralized. What we mean is that some aspects of the DBA role must logically move under the umbrella of the application development teams, while others are centralized to a much greater extent:
• The application DBAs work with the application development team very closely. They are the point of integration between the developers and infrastructure team. In other words, they can speak the developers’ language as well as the infrastructure team’s language and bridge that gap.
• The operational DBAs are now part of the centralized group, handling the day-to-day care and maintenance of the underlying infrastructure for the database.
• The database engineers are the integration point between the application and operational DBAs. They are the ones who work with the application DBAs to define the QoS metrics and SLAs.
In order to understand these roles and how they map into the overall DBaaS platform and concept, we need to define the roles and responsibilities in greater detail.

The Application DBA Role in DBaaS

As we said earlier, the application DBA is a part of the application development team working closely with the developers. To be able to influence the application design and architecture, the application DBA role is now decentralized and embedded within the application development team.
The application DBA’s primary responsibility is to work with the application developers and architects early on and throughout the application development lifecycle:
• Ensure the application development team understands how databases work and uses them appropriately.
• Ensure the application development team understands how the database events can impact an application.
• Work with the application development team to understand how database events and outages are manifested to the application. This understanding enables the application development team to build in mechanisms to gracefully handle and manage these events.
• Work with the application development team to design and build the logical database using the appropriate features to maximize performance.
• Facilitate communications between the application development team and the core DBaaS teams.
• Translate the application development team’s performance, availability, and infrastructure requirements into terms and metrics the DBaaS team can use to properly size, design, and deploy a solution.
• Translate the DBaaS team’s assessments, statements, and findings such that the application development team can understand and use the information appropriately. This information includes projections for storage space, CPU requirements, I/O metrics, uptime, and availability.
In other words, application DBAs are key during the application design and development phase. Their role becomes one of ensuring that both the database and the application’s use of the database are architected and designed properly. The best way to achieve this goal is for the DBA to be embedded early on in the process and to work with the application teams proactively on potential issues. As a result, the application DBAs work closely with both the application teams and the database engineers on the DBaaS team.

The Operational DBA Role in DBaaS

The operational DBA role still remains pretty much intact and crucial in the new paradigm created by DBaaS. The operational DBA’s primary responsibility in a DBaaS environment is still one of ensuring that the ancillary processes around managing the database are managed and controlled effectively. These responsibilities include things such as the following:
• As the first line of defense for a database, triage and handle database-related alerts.
• Work with the DBaaS team to ensure that the database is being managed to meet and exceed agreed-upon QoS and SLA requirements. For example:
• Ensure database backups are being performed per the defined SLAs.
• Ensure I/O performance is compliant with the stated SLAs.
• Ensure patching and maintenance work is being performed in compliance with the end user availability SLA as well as being communicated to the end user teams.
• Ensure database performance issues caused by infrastructure problems are identified and resolved per the specified SLAs.
• Communicate with the application development team, especially the application DBA, on application-owned performance bottlenecks and issues. These issues could include, for example, excessive application-level row locking, excessive redo generation, or inefficient use of partitioning leading to query plans that perform an excessive amount of I/O.
• Communicate with the application development team when the database is coming close to utilizing all the assigned resources and needs more. Resource usage could be in terms of CPU utilization, I/O limits, and storage space.
In other words, operational DBAs perform key roles once application databases are promoted into production. They are the primary recipients of the monitoring system’s alerts and thus work very closely with the application DBAs and database engineers to ensure a database is performing as expected or better than expected.

The Database Engineer Role in DBaaS

The database engineer role is not necessarily a new role in terms of function, but it is a new role in terms of its singular focus and attention. Database engineers focus on designing, building, and deploying the overall DBaaS. The primary responsibilities of the database engineer therefore include the following:
• Define, design, and build the DBaaS tiers based on QoS and SLA requirements.
• Define the solution architecture to deliver databases based on the defined metrics.
• Define and develop the operational tools and procedures required to support the ongoing database maintenance and management.
• Define and develop the tools and dashboards that enable the service consumer to measure and validate service delivery and to manage capacity needs on demand.
• Plan and manage capacity growth and demands at the DBaaS offering level rather than at a single database level. Capacity planning efforts account for storage volume, storage performance, compute capacity, as well as network bandwidth and capacity.
Let us look at a few examples of these tasks, high availability being the first. There are many technologies and methods available to provide database-level high availability. For example, the most often used technologies include Oracle RAC, Oracle Data Guard, Oracle GoldenGate replication, storage replication, database sharding, virtualization techniques and features, and so on. One solution stack might be as follows:
• Use Oracle RAC along with Oracle Data Guard to deliver the highest level of availability.
• At the next level, deploy single-instance with Data Guard.
• At the third and lowest level, deploy single-instance databases with nightly backups.
From an I/O performance perspective, the DBaaS could provide tiered storage, starting with low-end Internet Small Computer System Interface (iSCSI) or Network File System (NFS) storage and going all the way up to high-end SAN storage using storage tiering and solid-state drives (SSDs) or flash memory drives.
For capacity and scaling, the solution could either be RAC, wherein compute capacity is added by growing the system horizontally, or it could be based on a virtualization tool, allowing for vertical scaling by adding virtual CPUs, and so on.
Another crucial aspect of the overall DBaaS is database backup. Once again, multiple options and technologies are available for performing database backups. The most common examples are Oracle Recovery Manager (RMAN)-based backups of either active or standby databases, and RMAN in conjunction with storage technology such as snapshots and mirror splits. These backups can be written to different locations or devices to meet backup speed or recovery time objective (RTO) requirements. Options for database backup destinations include dedicated backup devices, such as the Oracle Zero Data Loss Recovery Appliance or the Data Domain appliance, or cheaper SANs managed as virtual tape libraries.

Preparing to Be a Cloud DBA

Depending on the organizational structure and whether it is a private cloud or a public cloud, the following roles can be fulfilled by DBAs in different parts of the organization:
• In the case of the public cloud database provider, the database engineer and operational DBA roles are with the cloud provider, but the application DBA is embedded into the application development team.
• In the case of large institutions building a private cloud database that is shared across the entire organization, the same structure is followed.
• In the case of a private cloud, these roles may also be combined under the same organizational entity.
It is important to understand that the DBA organizational structure may morph from one structure to another based on organizational needs and demands. Figure 2.2 represents the DBA roles that exist in the DBaaS-based solution. As stated earlier, the same roles existed before but are now much more visible and distinctly defined.
Figure 2.2. DBA responsibilities in a database cloud model
As a DBA working in the current traditional model who is preparing to move into the DBaaS model, recognize that the transition is not about skill sets only. You would obviously need to possess or acquire the specific skills needed to execute your responsibilities within the role. The transition is more about mindset: moving into DBaaS or database clouds is also about delivering a service and being accountable for it.
The approach to DBaaS is more consultative and democratic than dictatorial in nature. To take it a step further, the very concept of DBaaS is based on free-market principles of supply and demand. The DBaaS provider is providing a service. End users will consume the service if it meets their needs in terms of functionality, cost, ease of use, and more. On the flip side, the provider cannot accept any requests or conditions that it cannot deliver, which leads us to the second mindset change: adaptability and flexibility (within reason).
From a technical perspective, DBAs in the DBaaS paradigm, particularly operational DBAs, still need a lot of the same skills as those in a traditional database environment. Database engineers and application DBAs will need to pick up some ancillary supporting technologies.
Application DBAs are going to work with developers more closely, so they should learn and understand some of the new technologies in the application space, development languages (Java, Ruby, etc.), technologies supporting applications such as application servers, web servers, and even new concepts such as big data and NoSQL.
Database engineers and cloud architects also need to become familiar with technologies and concepts in the infrastructure space, including server, virtualization, storage, and networking technology improvements and changes. These technologies are the underpinning of DBaaS and therefore essential to a successful database cloud service.

Summary

The purpose of this chapter was to help database technologists understand the implications of database cloud services for their environments and jobs. DBaaS is an offshoot of infrastructure as a service and platform as a service, which are well-entrenched and successful technologies.
Cloud DBAs can expect to interact with a larger set of technologies than they have in the past and therefore will need to expand their existing database-related skills.




Chapter 3. Cloud Computing with DBaaS—Benefits and Advantages over Traditional IT Computing

We have covered the basic ideology and concepts around cloud computing and understand that database as a service (DBaaS) is a special case of infrastructure as a service (IaaS) and platform as a service (PaaS). We know that with DBaaS, the roles and responsibilities of the DBAs need to be adjusted.
In this chapter, we look at the various technologies and options that Oracle has developed that make it possible to deliver DBaaS along with all of its demands and expectations. We will see how these technologies can enable and benefit an organization, allowing it to manage infrastructure resources efficiently while at the same time delivering the expected performance with resource demand elasticity on demand.

DBaaS Evolution: Pre–Database Cloud Strategies

When it comes to managing databases and the infrastructure supporting them, the focus has been on database consolidation. One approach to database consolidation is to consolidate multiple databases into one physical database, which can be done in two ways:
• Convert independent databases into dedicated schemas within a single database, or
• Use pluggable databases, a new feature of Oracle 12c, and combine multiple databases into a single, larger container database.
A second approach is to consolidate multiple databases onto a single piece of hardware, which also can be done in two ways:
• Consolidate multiple databases, as is, on fewer, larger servers and hardware, or
• Use virtualization technologies to convert physical servers to virtual servers sharing server resources via the hypervisor.
Consolidating databases as schemas has some obvious drawbacks and restrictions that make schema-level consolidation neither a scalable nor a manageable option. Schema-level consolidation assumes that
• The candidate databases do not have any object namespace collisions. We can avoid this if we qualify object names with the schema owner prefix, but that solution impacts the portability to a large extent.
• The candidate databases do not have object ownership or security privileges and permission-level collisions.
• All the candidate databases have the same performance, uptime, and availability requirements. Another key aspect to consider is the peak load times when consolidating databases. Multiple databases with the load peaking at the same time may present a capacity and scaling issue.
Object name, user, and security privilege collisions can either imply application-level modifications or expose security risks. We cannot combine a database that has an 8 a.m. to 5 p.m. uptime expectation with another that has a 24/7 uptime expectation without compromising on uptime.
An alternative approach is to consolidate multiple databases onto a physical server that is larger in terms of both memory and CPU power. Once again, we must consolidate databases that share the same uptime requirements or risk failing to meet the uptime or flexibility requirements of those databases.
In either case, we still must handle the problem of resource allocation and management. In a combined database (multitenant) environment, it’s possible that one of the databases issues a query or runs a process that can get out of control and consume more than its share of resources, negatively affecting the other tenant databases. Until now, the toolkit to measure, quantify, and throttle resource consumption across databases has had limited capabilities.
To overcome some of these challenges, a new alternative was developed: consolidating databases on a virtualized platform. Assuming a one-to-one correlation between database and the virtual server, virtualization allows us to allocate resources (CPU and memory) to a database by allocating the resources to the virtual servers they run on. However, there is one caveat to this approach: oversubscription of resources is a real possibility. Oversubscription of resources is the case where the sum of virtual resources allocated is larger than the sum of the physical resources available.
It is important to keep in mind that oversubscription almost always occurs. Not having oversubscription directly implies excess capacity not being used and thus being wasted. The key with a cloud deployment is how to combine workloads to share resources efficiently.
In other words, in a situation where the workload on the virtual servers is high across multiple virtual machines (VMs) simultaneously, resource starvation and contention occurs, which in turn can lead to performance and stability issues.
Comparatively speaking, it is relatively easy to manage memory by ensuring oversubscription does not occur, especially since the servers are capable of scaling to more than 1 TB in memory. CPU resource usage is directly related to processing workload. Processing workloads can vary by time of the day as well as by workload type. In other words, demands on and usage of CPU resources are not uniform but rather temporal and time sensitive in nature.
If we were able to add the capabilities of moving CPU resources around in a dynamic fashion, while at the same time guaranteeing performance, virtualization becomes an extremely viable alternative.

Delivering DBaaS with Oracle Technologies

Oracle has developed new or enhanced existing hardware technologies, database software options, and features as well as ancillary software options and features that are aimed specifically at delivering DBaaS. These technologies and enhancements are best used in conjunction with existing Oracle database features:
• Pluggable databases, new in Oracle 12c, to support multitenancy
• Oracle Flex Clusters and Flex Automatic Storage Management (ASM), new in Oracle 12c
• Oracle Enterprise Manager
• Oracle Engineered Systems, for example:
• The Oracle Exadata Database Machine and Oracle Database Appliance, which are specifically targeted for database implementations
• The Oracle Virtual Compute Appliance, a more generic, multipurpose converged appliance with virtualization technologies built in using Oracle Virtual Machine

What Is Database Multitenancy?

Multitenancy is a generic term referring to the situation in which multiple individual entities reside in the same physical space and share the same resources. Schema-level consolidation, server consolidation, and server virtualization represent earlier methods and techniques used to achieve multitenancy.
With Oracle 12c, Oracle introduced a new approach, with the same goal of resource optimization. This approach is called pluggable databases and represents a hybrid between schema-based and server-level consolidation. Pluggable databases represent the single most important fundamental and conceptual change in Oracle’s approach to database consolidation. The shared resource unit is an Oracle database (the container database), and the tenants are fully self-contained databases (pluggable databases).
The core concept of database-level multitenancy is not new; it has long been a part of the SQL Server and Sybase core architectures and structures. Pluggable databases are an adaptation of the SQL Server and Sybase multitenancy model to the Oracle database architecture. The fundamental concepts that differentiate Oracle in the database space (the V$ performance views and wait events model, the read consistency model, row locking, and nonescalating locks) have not changed. In other words, pluggable databases represent the best of both breeds (architectures).
Oracle’s 12c multitenancy option addresses the issues and concerns that have proven to be challenges with the server consolidation approaches available to date. Pluggable databases have taken the benefits of the various options that have been used in the past and adapted them specifically to focus the implementation on Oracle databases and at the same time eliminate the challenges of the current approaches.
Pluggable Databases—Multitenancy in Oracle 12c
The Oracle 12c database multitenancy architecture requires that we add a few new terms to the Oracle lexicon:
• The container database (CDB) is the database that is capable of supporting multitenancy.
• The noncontainer database (non-CDB) is used to identify the databases as we know them from previous releases. These databases do not support the Oracle 12c multitenancy architecture.
• A CDB has a root container that is used to manage the PDBs and to maintain system metadata and objects only. The root container is also referred to as CDB$ROOT.
• The application databases that contain user data and metadata are called pluggable databases (PDBs).
The primary goal of using multitenancy is to address some very common but key customer challenges:
• Maximize consolidation density by using the shared resources more efficiently.
• Reduce operating expenses by simplifying administration tasks. Administrative tasks that are normally performed and repeated at the non-CDB or PDB level are now performed at the CDB level. The gains are a result of consolidating repeated tasks into a single task. Examples of tasks that can be simplified include backups, patches, and upgrades.
• Simplify database provisioning.
• Simplify capacity management.
The word pluggable is used to describe multitenancy in Oracle 12c for a very specific reason: a PDB can be unplugged from one CDB and plugged into a separate CDB with little effort and impact to the rest of the system (see Figure 3.1). A PDB is a complete database from a logical and application standpoint, whereas the root container’s primary function is to store and manage the metadata needed for PDBs.
Figure 3.1. Oracle 12c multitenancy—Unplug and plug in a pluggable database
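As a sketch, an unplug/plug cycle comes down to a handful of statements; the PDB name and manifest path here are illustrative, and the NOCOPY clause assumes the target CDB can see the same data files:
-- On the source CDB: close the PDB and unplug it to an XML manifest
ALTER PLUGGABLE DATABASE sales_pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE sales_pdb UNPLUG INTO '/u01/app/oracle/manifests/sales_pdb.xml';
DROP PLUGGABLE DATABASE sales_pdb KEEP DATAFILES;
-- On the target CDB: plug the PDB in, reusing the data files in place
CREATE PLUGGABLE DATABASE sales_pdb
  USING '/u01/app/oracle/manifests/sales_pdb.xml'
  NOCOPY TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE sales_pdb OPEN;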
A single CDB can support up to 252 PDBs all connected to and managed by the root container. In order to achieve this mobility, CDBs have to be able to support the mobility of the metadata appropriately. At the same time, we also need to be able to support administering both the CDB as a whole and the individual PDBs.
Therefore, in the case of CDBs, some central structures, such as control files, online redo logs, undo tablespace, and system metadata (i.e., data dictionary), are owned by the root container. These are then either logically or physically virtualized by the CDB from the root container to the PDB (see Figure 3.2).
Figure 3.2. Oracle 12c multitenancy pluggable database model
Basic Concepts of Pluggable Databases
The resources under the CDB that are common and shared across all the PDBs go beyond just physical CPU and memory. Following are some of the core functional aspects of an Oracle database:
• Most of the background processes are shared.
• The memory structures associated with the system global area (SGA) are shared by the PDBs. The SGA for each PDB is a logically virtualized view of the SGA managed by the CDB.
• The V$ or GV$ views are based primarily on the contents of the SGA and therefore now contain a CON_ID field, which identifies the database context for the information (see the example query after this list).
• The internal structures and objects that any database requires to function are managed and maintained at the CDB level. These include SPFILEs, control files, online redo logs, standby redo logs, and undo tablespace.
• Users and privileges can be maintained at the CDB layer or can flow down to an individual PDB from the CDB.
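As a small illustration of the CON_ID plumbing, a query against the standard V$PDBS view lists the containers an instance is servicing:
-- Each PDB (and the root) is identified by its container ID
SELECT con_id, name, open_mode FROM v$pdbs;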
The chart in Figure 3.3 is taken from an Oracle white paper,1 which gives us an idea of the gains pluggable databases are capable of delivering from an overall capacity and performance standpoint.2 The comparison is between consolidation of traditional standalone databases and consolidation using pluggable databases.
1. “Oracle Multitenant on SuperCluster T5-8: Scalability Study,” April 2014, www.oracle.com/technetwork/database/multitenant/learn-more/oraclemultitenantt5-8-final-2185108.pdf.
2. “Oracle Multitenant,” June 2013, www.oracle.com/technetwork/database/multitenant-wp-12c-1949736.pdf.
Figure 3.3. Scaling with pluggable databases
Following are the key points to take away from this example:
• Using pluggable databases, we can consolidate 50 percent more databases while maintaining the same throughput (168 versus 252).
• Comparing throughput when consolidating 252 databases in each method, pluggable databases delivered an 81 percent improvement in transactions per second (TPS).
• The improved throughput was delivered with a 33 percent reduction in CPU resources required (192 cores reduced to 128 cores).
• With pluggable databases, the same throughput was delivered while eliminating approximately 63 percent of the I/O (355,000 I/O operations per second [IOPS] reduced to 131,200 IOPS).
The data dictionary of an Oracle database contains metadata about both the Oracle system structures and the application (see Figure 3.4). In the case of a CDB, the data dictionary is physically partitioned between the PDB and the root. The root contains system metadata only, and the PDB contains application-specific metadata only.
Figure 3.4. Data dictionary virtualization in a pluggable database/container database
A new, CDB-specific set of data dictionary views has been added. For each DBA_* data dictionary view, there is now a CDB_* view. The CDB_* views contain information for all objects in the CDB across all the PDBs.
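For example, run from the root container, a CDB_* view rolls the familiar DBA_* content up across all containers:
-- Count users per container; DBA_USERS would show only the current container
SELECT con_id, COUNT(*) AS user_count
FROM   cdb_users
GROUP  BY con_id
ORDER  BY con_id;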
Oracle 12c pluggable databases are compatible with all other Oracle technologies, including Oracle Real Application Cluster (RAC), Data Guard, GoldenGate, and Data Pump. In other words,
• We can use pluggable databases with an Oracle RAC-aware CDB; the PDBs then automatically become Oracle RAC databases.
• We can set up GoldenGate replication between PDBs or between a PDB and a non-CDB.
• Recovery Manager (RMAN) database backups are executed from the context of the CDB (not the PDB). However, RMAN allows us to perform a point-in-time recovery (PITR) restore of a PDB without impacting the availability of the other pluggable databases.
• Data Guard works exactly the same way but is configured and set up at the CDB level. What this means is that when we execute a switchover or failover operation, the entire CDB undergoes the role change.
The reason RMAN and Data Guard are configured and managed at a CDB level is that they are dependent on the online redo logs, which are defined and managed at the CDB level. The online redo logs (as well as the undo) are logically virtualized; that is, each redo log entry is annotated with the CON_ID of the container where the change occurred. The undo tablespace is virtualized the same way, since from a database recovery perspective, the undo and redo go hand in hand.
From an application perspective, there is no difference between a PDB and a regular database. The application will connect to the PDB directly over SQL*Net in exactly the same way as always. The use of pluggable databases does not place any requirement upon the application to be CDB, root container, or pluggable container aware.3 The security model associated with PDBs is covered later.
3. Certain limitations and requirements exist when creating CDB-level users. For example, the username for the common user must be prefixed with C## or c## and contain only ASCII or EBCDIC characters.
With the base assumption that the application has been tested and validated against a non-CDB Oracle 12c database, Oracle guarantees that the application will see no change in behavior or response from a pluggable databases versus a non-CDB database. In other words, any application SQL against its application-owned objects will execute and perform exactly the same way, PDB or not. This guarantee comes with a caveat and, in the opinion of the authors, a technically sound and valid one at that.
There is a very small, specific subset of SQL statements that are database-context aware and sensitive and that cannot be issued when connected to a PDB. These commands are almost entirely administrative in nature and will generate an ORA-65040 error code when issued against a PDB:
• ALTER SYSTEM commands modifying initialization parameters that are marked as non-PDB modifiable
• Creating a pluggable database when the user is not connected to the root container
• Attempting to close the SEED PDB
However, if the application code is designed and developed following industry-accepted best practices and standards, there is no reason it would need to execute such statements. The only exception to this statement would be if database administration tools and software, such as Enterprise Manager, should require such capabilities.
Reasons such as these are why we strongly recommend application-level regression testing prior to actually going live with PDBs.
Pluggable Databases Features
The ability to unplug a database from a CDB and plug it into another CDB by itself is a huge game changer. The benefits derive from the flexibility and simplicity inherent in this ability. The reason patching exercises are simplified with this approach is that the PDB application metadata is separated from the CDB root container metadata. Remember, the PDB inherits the core Oracle system information from the root container.
With PDBs, patching and upgrades become a lot easier and incur a lot less downtime. To patch a database, you simply unplug the PDB from the lower-version CDB and plug it into a higher-version CDB. This same unplug/plug functionality can also be used in reverse to downgrade or back out a database patch. The existing rules for downgrades and backouts, with respect to the “compatible” parameter being set at a version equal to or lower than the target, still apply. Effectively, the time and effort to create a new CDB at the higher patch level is a one-time effort that is amortized across all the databases upgraded using the CDB.
Note
In some instances, the upgrade/downgrade processing cannot be done implicitly. In such cases, the DBA is notified that a particular Oracle-provided script has to be run.
Another way to perform patching exercises is to patch a CDB that owns and manages multiple PDBs. With one patching exercise, all the PDBs within a single CDB are patched simultaneously. If a CDB contained 10 PDBs, it would take the same amount of time, effort, and outage windows to patch 10 PDBs as it would take to patch one non-CDB database. Effectively, we have saved the time, effort, and outage time of nine non-CDB patching exercises.
From an availability perspective, we can react quickly to situations such as server or CDB loss. With the simple task of unplugging and plugging, a PDB can be relocated to another server. This relocation exercise may be in response to a capacity planning decision or it may be due to server loss or the loss of a CDB.
Pluggable databases also simplify and speed up processes related to provisioning. Each CDB comes with a seed database, which is used to instantiate a new PDB within a CDB. All that needs to be provided in the “create pluggable database” command, apart from the name of the database, is a valid path prefix and creation of local administrative roles.
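As a sketch, with hypothetical names, paths, and password, provisioning a PDB from the seed comes down to two statements:
-- Instantiate a new PDB from the seed; the admin user is created by the same command
CREATE PLUGGABLE DATABASE dev_pdb
  ADMIN USER dev_admin IDENTIFIED BY "ChangeMe#1"
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/dev_pdb/');
ALTER PLUGGABLE DATABASE dev_pdb OPEN;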
PDBs also come with many options and methods of cloning a PDB to create a replica. This ability is extremely useful when it comes to development and test database refreshes. The multitenant architecture allows for
• Cloning to be created within the same CDB.
• Cloning across CDBs using Oracle SQL Net connectivity.
• Cloning based on storage-level snapshot functionality if available.
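The first two options might look like the following sketch; in Oracle 12.1 the source PDB must be opened read-only before it can be cloned, and the names and database link are illustrative:
-- Local clone within the same CDB
ALTER PLUGGABLE DATABASE dev_pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE dev_pdb OPEN READ ONLY;
CREATE PLUGGABLE DATABASE test_pdb FROM dev_pdb
  FILE_NAME_CONVERT = ('/dev_pdb/', '/test_pdb/');
-- Remote clone over SQL*Net, via a database link to the source CDB's root
CREATE PLUGGABLE DATABASE test_pdb FROM dev_pdb@remote_cdb_link;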
Another important point to note is that PDBs are fully compatible with and extend existing Oracle database functionality and features. As stated earlier, Oracle RMAN and Data Guard are supported and managed at the CDB level itself. By managing database backups and Data Guard at a CDB level, the administrative overhead is significantly reduced. Once again, the operating cost of managing backups and DR for all PDBs is reduced to that of managing one non-CDB database. This cost of administering one non-CDB is amortized across all the PDBs: managing backups of one PDB or 10 PDBs is the same.
PDBs are fully RAC compliant and compatible. The difference is that the CDB in such a case is a RAC-enabled, RAC-aware CDB. The PDBs inherit the RAC functionality when plugged into a RAC CDB. The PDBs can be made available across all the nodes or across a subset of nodes. Access to the PDBs is managed exactly as it would be in single-instance configuration using Dynamic Database Services. Combining RAC with PDBs gives us local high availability, capacity management, and scalability from Oracle RAC and the flexibility and ease of administration of PDBs.
Database resource management has been enhanced to be CDB/PDB aware. The ability to create a resource governance plan within a database has been around for a while, and since each PDB is a fully compliant database by itself, that ability carries over. With Oracle 12c, under the multitenant architecture, we can now also create resource governance plans at the CDB level. A CDB resource governance plan can be used to control the number of concurrent sessions, CPU usage, and parallel server process usage across PDBs. With Exadata, file I/O resources can also be controlled with CDB-level directives through I/O Resource Manager (IORM).
The resource governance is based on a share-and-cap model. Resources are broken into shares, and each PDB is given a set of shares to use but can also be capped on the high side of resource usage. For CDB-level plans, the resource shares and caps are assigned to the various PDBs by creating CDB plan directives.
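A sketch of a CDB plan built with the documented DBMS_RESOURCE_MANAGER procedures follows; the plan name, PDB names, shares, and caps are all illustrative:
-- Run from the root container
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'tiered_cdb_plan',
    comment => 'Gold gets 3 shares; silver gets 1 share capped at 50%');
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'tiered_cdb_plan',
    pluggable_database    => 'gold_pdb',
    shares                => 3,
    utilization_limit     => 100,
    parallel_server_limit => 100);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'tiered_cdb_plan',
    pluggable_database    => 'silver_pdb',
    shares                => 1,
    utilization_limit     => 50,
    parallel_server_limit => 50);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/
ALTER SYSTEM SET resource_manager_plan = 'tiered_cdb_plan';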
Security under Multitenancy
A pluggable database maintains and extends the already existing strong database security model to include and support Oracle 12c multitenancy. As we can see in Figure 3.5, we can have users at different levels, with access and privileges at either a CDB level or at each individual PDB level. Similarly, application accounts can be created individually for each PDB.
Figure 3.5. Security model under pluggable databases model
A user account created at the CDB level is known as a common user. All user-defined common users must be created with a name prefixed by C##. You can give a common user access to a specific subset of PDBs by granting the user the “create session” privilege in those PDBs. The privileges granted to the same common user can vary across each of the PDBs the user has access to. The common user’s schema on each PDB he or she has access to is unique at a PDB level.
Local user accounts, by contrast, are created individually at the PDB level. The standard user naming convention applies to local user names. The same local user can exist on different PDBs, but these USERIDs have no other commonality between them. Local users can access only the PDB in which they are created and have no access to any of the other PDBs or even to the CDB.
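The distinction shows up directly in the DDL. A brief sketch, with hypothetical account, password, and PDB names:
-- Common user, created from the root; the C## prefix is mandatory
CREATE USER c##ops_dba IDENTIFIED BY "StrongPwd#1" CONTAINER = ALL;
GRANT CREATE SESSION TO c##ops_dba CONTAINER = ALL;
-- Local user, visible only inside the one PDB it is created in
ALTER SESSION SET CONTAINER = sales_pdb;
CREATE USER app_owner IDENTIFIED BY "StrongPwd#2" CONTAINER = CURRENT;
GRANT CREATE SESSION TO app_owner;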
The same concept also applies to roles. A common role is a database role defined and created in the root container. A common role can be assigned only to common users, and all the privileges cascade down to the PDB level. A local role, on the other hand, is created at the individual PDB level and can contain only roles and privileges applicable to administering that individual PDB.
The same concept is used to extend the standard database auditing features and functionality. A common audit configuration is created at the CDB level and visible to and enforced for each of the PDBs within the CDB. A local audit configuration is defined and created at the individual local PDB level and therefore applies only to the specific PDB it was created in.

Other Oracle Database Technologies and Features

Pluggable databases and multitenancy are key features of Oracle 12c, especially from a consolidation and database cloud perspective. Other features of Oracle 12c are useful and applicable specifically to database cloud implementations and initiatives.
Oracle RAC functionality is core to database cloud implementations for obvious reasons. First, Oracle RAC has a lot of potential for high availability because Oracle RAC, by definition, provides the ability to withstand and survive the loss of a cluster node or a database instance. Second, Oracle RAC provides scalability and supports capacity growth: we can scale the environment out horizontally by adding additional servers to the RAC cluster.
Another important feature in the Oracle clustering space is Oracle Flex Clusters. Oracle Flex Clusters is an enhancement to the core Oracle Clusterware functionality available through the grid infrastructure. It is not an enhancement of Oracle RAC.
Oracle Flex Clusters basically contain two kinds of cluster nodes: hub nodes and leaf nodes (see Figure 3.6). Regardless of the node type, each node belongs to the basic grid infrastructure cluster and uses the private interconnect for interprocess communication. The hub nodes are tightly coupled and have access to the shared storage as well. Leaf nodes are a part of the grid infrastructure cluster but do not have access to the shared storage. Each leaf node communicates with a dedicated hub node that it connects to for data requests. In a Flex Cluster, we can have up to 64 hub nodes and a much higher number of leaf nodes.
Figure 3.6. Oracle Flex Cluster
What this means is that the hub nodes with access to the shared storage are where the databases are running. The leaf nodes are not part of the database cluster, since they do not have access to the shared storage. The leaf nodes are used to run application-level software that uses the core grid infrastructure functionality to provide fault tolerance for the application tier in much the same way as it does for the database tier.
In a Flex Cluster, the grid infrastructure provides additional functionality that allows us to change the role or mode of a node, going from a leaf node to a hub node, or vice versa. As part of the role change, the grid infrastructure Clusterware will start and stop the services as needed.
In a more traditional RAC, all nodes in the database cluster will be running their own ASM instances. Therefore, the background processes on each database node in the cluster communicate with the local ASM instances to execute and implement the I/O requests at the storage level. In the event an ASM instance is lost, the database instances would also crash. More important, during patching updates, the ASM instance is also shut down, which results in service interruptions to the databases.
Within Oracle clustering, there is another concept called Flex ASM (see Figure 3.7). Flex ASM is a solution to many of the availability-related issues discussed previously. In order to run Flex ASM, the cluster needs to be configured in Flex mode. In Flex ASM, the one-to-one relationship between ASM and database instances is decoupled. ASM is actually running on only a few of the hub nodes. The database instances are talking to the ASM instances remotely over a separate network called the ASM network. In the event an ASM instance is lost, all the database instances will be redirected to a surviving ASM instance and the loss will not cause an interruption of service.
Figure 3.7. Oracle RAC with Flex ASM
Oracle Enterprise Manager Cloud Control 12c (EM12c) is another key component to deploying Oracle database in a DBaaS model. EM12c supports all the database functionality available and makes administration much easier. In addition, EM12c is focused on delivering an administration tool that is cloud aware and includes support for a lot of the cloud computing functionality. EM12c functionality and applicability to database clouds is covered extensively in later chapters, so we do not go into much detail here.
Following are some of the other features that can be used to great advantage when it comes to deploying Oracle 12c:
• Enhanced functionality for resource management and allocation using Database Resource Manager.
• Use of instance caging to limit CPU resource allocation at a database level. This helps ensure that no single database causes a CPU bottleneck by consuming more than its fair share of resources (see the sketch after this list).
• In Oracle RAC, there is a concept of admin-managed versus policy-managed clusters. Admin-managed configurations basically use the standard Oracle RAC administration and configuration functionality. Policy-managed configurations use the concept of server pools. A server pool is an arrangement of the Oracle RAC nodes into logical groups. Database instances are then mapped to the server pool. Based on load, new servers can be added to or removed from a server pool.
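Instance caging, mentioned in the second bullet, is a two-parameter affair. A minimal sketch, with an illustrative CPU count (a resource manager plan must be active for the cage to be enforced):
-- Cap this instance at 4 CPUs; enabling a resource plan makes the cage effective
ALTER SYSTEM SET cpu_count = 4 SCOPE = BOTH;
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE = BOTH;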

Oracle Engineered Systems

Apart from the software stack, Oracle also has a complete array of hardware that is database aware, tuned specifically from the point of view of running an Oracle database. These systems are able to provide extreme levels of performance and have a significant amount of capacity as well as expansion capabilities. Depending on the requirements and expectations, one or more of these platforms may be applicable to a given scenario.
At the entry level, Oracle has the Oracle Database Appliance (ODA) X5-2. Each ODA consists of two compute nodes, and each node has two sockets of an 18-core Intel processor. Additionally, included with ODA is a storage shelf that can deliver up to 64 TB of raw space. An additional storage cabinet can be added to the configuration, which in effect doubles the raw storage capacity.
At the next level, Oracle has introduced the Virtual Compute Appliance (VCA). This is an engineered system with built-in Z File System (ZFS) storage, capable of supporting up to 25 compute nodes, and it uses Oracle Virtual Machine (OVM) as the virtualization platform. The VCA comes with an appliance manager that simplifies the administration, provisioning, and scaling of servers under the umbrella of OVM.
Last but not least, Oracle delivers the Exadata Database Machine, which is a highly specialized, Oracle database–focused black-box appliance that includes the storage, network, and compute capacity needed to run the solution. The Exadata provides a lot of database-aware features and tuning that help it deliver extreme performance. Even the storage in an Exadata is database and SQL aware, and consequently can thoroughly optimize the performance. Specific to database consolidation and database clouds, the Exadata also includes I/O Resource Manager (IORM). IORM allows the Exadata administrator to specify and control I/O prioritization at an inter-database level and not just at an intra-database level.
Another key point built into the Exadata model is its capacity scaling model. An organization can start with what is known as an eighth rack and scale up to a quarter rack, half rack, full rack, and across eight full racks. In terms of compute capacity, using the X5 model standards, that is equivalent to saying that we can start with two compute nodes with 18 cores each (eighth rack), expand to two compute nodes with 36 cores each (quarter rack), expand further to four compute nodes with 36 cores each (half rack), then to eight compute nodes with 36 cores each (full rack), and all the way up to 64 compute nodes, each with 36 cores (8 full racks daisy chained). With Exadata X5, Oracle enhanced the licensing model to be more flexible and based on capacity.

Cloud Computing with DBaaS

The hardware and software required to deliver robust, scalable, manageable DBaaS functionality using Oracle as the database platform exists. Any organization of any size can deploy a DBaaS service that will deliver on the following primary goals:
Agility: Respond to requests and resource needs in a fast, real-time manner.
Availability: Deliver a service that meets the defined goals of performance and availability.
Elasticity: Provide the ability to shrink or expand resource footprints as needed to meet computing demands.
End user reporting and accounting: Provide the ability to capture and present usage metrics, performance against key performance indicators, and so on.

Benefits and Advantages

The benefits to an organization that can successfully deploy a DBaaS solution can be measured in two ways: by operating costs and by customer satisfaction criteria.
Operating costs can be lowered by
• Deploying a database architecture that can scale in the just-in-time (JIT) model. The organization saves in infrastructure and licensing costs by not having to acquire resources well in advance of their actual need. A combination of consolidation using engineered systems and deployment of the Oracle 12c multitenant architecture, Flex Clusters, and Resource Manager is key here.
• Deploying an architecture that is agile enough to relocate idling resources to the areas of demand. A combination of policy-managed clusters and Oracle 12c multitenant architectures will drive the reduction of costs from an agility perspective.
• Deploying a solution that is standardized and has the toolkit available to monitor, alert, and administer the environment efficiently. A combination of technologies such as Oracle RAC, Oracle 12c multitenant architectures, and EM12c can be used to drive this administrative aspect of cost reduction.
• Deploying a highly resilient, flexible architecture to drive down outage windows (both planned and unplanned) and thus drive down costs associated with outages. Oracle 12c features such as RAC, Flex ASM, and multitenancy are key to driving down outage costs.
• Deploying an Oracle database–aware consolidation strategy to streamline and thereby reduce the capital expenditure in infrastructure costs. Oracle RAC and the multitenant architecture will play an essential role here.
• Deploying a user-friendly, portable system that allows end users to review, report, and manage their resource allocation. Such a deployment will drive down resource costs by eliminating wait times and streamlining processes and procedures.
Customer satisfaction will be a direct outcome of being able to design and deploy a solution that
• Reduces downtime and outages.
• Improves time to respond to user requests and questions.
• Provides the end user with tools to achieve self-sufficiency and control.
The same set of technologies that drive operating cost savings will go a long way to promoting customer satisfaction.

Challenges

The path to achieving the suggested goals and requirements requires careful upfront planning and design. As a part of the planning process, we recommend scheduling periodic checkpoints to validate that your organization is on track to achieve the target platform and environment.
Another aspect to consider during planning is the dedication of resources focused entirely on the design and build out of the new architecture. Without this focused effort, the probability for distraction is extremely high, and resources would be sucked right back into the day-to-day administrative tasks.
We also recommend that the organization plan for building and training the team in the new technologies being introduced. No matter how good the implementation, without a properly skilled staff to administer and deliver the service, the service is doomed to failure.
Standardized processes, procedures, and offerings contribute greatly to the success of DBaaS. These processes should be defined early and evangelized with the remainder of the organization from day one.

Summary

In this chapter, we looked at the various technology offerings available that can help an organization design, build, and deploy a robust DBaaS or database cloud solution. This chapter discussed several technologies that subsequent chapters will cover in more detail.
Oracle 12c has a host of new features that are perfectly suited for this purpose. The key new features included with Oracle 12c are
• Pluggable databases
• Resource Management
• Oracle Flex Clusters
• Oracle Flex ASM
EM12c not only supports these new features, but its design and architecture are focused on supporting the core concepts associated with cloud solutions in general and database clouds in particular. The proper design and implementation of this combination is the key to a successful database cloud implementation.
At an infrastructure level, Oracle-engineered systems provide a platform that can be useful in deploying a database cloud service, but we do not imply that this is the only way to implement a database cloud. Virtualization technologies such as VMware also have functionality and tools that can be used to implement a database cloud solution. The new features in Oracle Database 12c and EM12c can also be used to complement and enhance these solutions.



