Saturday, 29 October 2022

Oracle Database Migration to Oracle Cloud Infrastructure

Migrating an Oracle Database to Oracle Cloud Infrastructure (OCI)



Popular Methods to Migrate an Oracle Database to OCI

  1. Migrating an Oracle Database to OCI using Data Pump
  2. Migrating an Oracle Database to OCI using the unplug/plug and remote cloning method
  3. Migrating an Oracle Database to OCI using RMAN
  4. Migrating an Oracle Database to OCI using the Oracle Database Cloud Backup Module
  5. Migrating an Oracle Database to OCI using Oracle GoldenGate


 

 

OCI Overview

Storage

Migrating An Oracle Database to OCI using Data Pump
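The essential flow with Data Pump is an export from the source database, a transfer of the dump file to the target, and an import into the OCI database. A minimal sketch, assuming a schema named HR, the default DATA_PUMP_DIR directory, and placeholder connect identifiers:

$ expdp system@sourcedb schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_exp.log

Copy hr.dmp to the target DB system (for example with scp, or staged through OCI Object Storage), then:

$ impdp system@ocidb schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_imp.log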

Explore Oracle RAC and ASM in Less Than an Hour

 


Oracle Clusterware and ASM


Presentation by:

www.EasyReliable.COM

Phone:  +080-41156843/+91-9606734482

Email:  easyreliable@gmail.com/support@easyreliable.com

Website: http://easyreliable.com/software_training_prices/






Some RAC Terminology

       Instance, Clusterware, Storage Area Network (SAN)

       Database, Local Storage,

       Shared Storage, SCAN, SCAN listener, OCR, OLR, Voting Disk

       Raw Device,

       Cluster File System, Automatic Storage Management (ASM)

       Single-instance DB, Multi-instance DB

       SRVCTL, CRSCTL, CLUVFY, DIAG, OCSSD, CRSD

       OCRCONFIG, CRS_STAT, TAF, OCRCHECK

 

Advantages of RAC

  1. Increasing availability
  2. Increasing scalability
  3. Improving maintainability

RAC Increases Availability

Depends on definition of availability

       May achieve less unplanned downtime

       May have more time to respond to failures

       Instance failover means any node can fail without total loss of service

       Must have capacity in the cluster to survive failover

       Additional Oracle and RAC licenses

       Load can be distributed over all running nodes

RAC Increases Scalability

       Scalability is the relationship between increments of resources and workloads

       Can be any resource but with RAC normally refers to adding instances


Introduction

Cluster

A cluster consists of a group of independent but interconnected computers whose combined resources can be applied to a processing task.

Most cluster architectures use a dedicated network (the cluster interconnect) for communication and coordination between cluster nodes.

Load-balancing clusters allow a single application to balance its workload across the cluster.

Clusterware

Clusterware is software that provides various interfaces and services for a cluster. Clusterware allows the cluster to be managed as a single entity and protects the integrity of the cluster. It maintains a registry of resources so that their location is known across the cluster.

Components (Hardware)

1) Two or more nodes

2) Shared storage

3) SAN(Storage Area Network)

4) NAS(Network Attached storage)

5) DAS (Direct Attached Storage)

6) Minimum two switches

7) Minimum two NIC cards per node

8) NIC means LAN card

9) Minimum one SCSI card per node

10) LAN Cables

 

       Introduced in Oracle 10.1 (Cluster Ready Services - CRS)

       Renamed in Oracle 10.2 to Oracle Clusterware

       Cluster Manager providing

       Node membership services

       Global resource management

       High availability functions

On Linux

       Implemented using three daemons

       CRS - Cluster Ready Service

       CSS - Cluster Synchronization Service

       EVM - Event Manager

       In Oracle 10.2 includes High Availability framework

       Allows non-Oracle applications to be managed

 

In Real Application Clusters, there are multiple instances (normally one per database server node) serving a single database. The single database is stored on a set of disks that are shared between all instances.

The slide shows some of the characteristics of Real Application Clusters:

• All files are shared between all instances; this includes database files, redo logs, parameter files, etc. There is only one set of database files, i.e. there is a single database that can simply be seen from multiple nodes/instances. Each instance actually has its own set of redo log files, but they are shared and visible to all instances. This is used in case of recovery after the loss of an instance.

• The instances are on separate nodes and therefore do not share any memory. All coordination between the instances takes place over the interconnect (thick, red arrow).

• A client and its associated server process can connect to any instance to get access to the database.

• The picture only shows two instances; many more can be configured.

There are some requirements on the hardware to support Real Application Clusters:

• A shared disk storage must be available; typically this involves using a SAN.

• A high-speed interconnect must be available solely for the inter-node communication. This could e.g. be a Gigabit Ethernet.


Background Processes

Each RAC instance has a set of standard background processes, e.g.

  1. PMON
  2. SMON
  3. LGWR
  4. DBWn
  5. ARCn

RAC instances use additional background processes to support GCS and GES, including:

  1. LMON
  2. LCK0
  3. LMDn
  4. LMSn
  5. DIAG

 

In Real Application Clusters, extra background processes are started. Some of the important ones are shown on this slide:

LMON is responsible for monitoring all global resources, i.e. global cache resources and global enqueue (lock) resources. Whenever a node is added to or removed from the cluster, the LMON process redistributes the global resources across all nodes. In case of the loss of a node, this involves recovery of global resources from the surviving nodes.

The LMSn processes actually implement the Oracle Cache Fusion mechanism. They make sure only one node is modifying a block at any one time, send dirty blocks between instances if multiple instances have block modifications to perform, and construct and ship read-consistent blocks if one instance needs to read a block that has been modified by another instance.

The LMDn processes make the enqueues (e.g. row locks) – which in the non-RAC case are handled internally on one instance – global, so that they can be seen by all instances.

Global Cache Service Processes (LMSn)

LMSn processes control the flow of messages to remote instances and manage global data block access. LMSn processes also transmit block images between the buffer caches of different instances. This processing is part of the Cache Fusion feature. n ranges from 0 to 9 depending on the amount of messaging traffic.

• Global Enqueue Service Monitor (LMON)

Monitors global enqueues and resources across the cluster and performs global enqueue recovery operations. Enqueues are shared memory structures that serialize row updates.

• Global Enqueue Service Daemon (LMD)

Manages global enqueue and global resource access. Within each instance, the LMD process manages incoming remote resource requests.

Lock Process (LCK)

Manages non-Cache Fusion resource requests such as library and row cache requests.

Diagnosability Daemon (DIAG)

Captures diagnostic data about process failures within instances. The operation of this daemon is automated, and it updates an alert log file to record the activity that it performs.
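A quick way to confirm which of these RAC-specific background processes are running on an instance is to query V$BGPROCESS; a minimal sketch (the name filter is only illustrative):

SQL> SELECT name, description
       FROM v$bgprocess
      WHERE paddr <> '00'
        AND (name LIKE 'LM%' OR name IN ('LCK0', 'DIAG'));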

 

What is RAC

       Multiple instances running on separate servers (nodes)

       Single database on shared storage accessible to all nodes

       Instances exchange information over an interconnect network

 

Instances versus Databases

A RAC database includes

       one database

       More than one instance

       RAC is essentially a multi-instance database

A database is a set of files

       Located on shared storage

       Contains all persistent resources

An instance is a set of memory structures and background processes

       Contains all transient resources

       Can be started and stopped independently

 

What is a RAC Database

A RAC database is located on shared storage accessible by all instances

       Includes

       Control Files

       Data Files

       Online Redo Logs

       Server Parameter File

       May optionally include

       Archived Redo Logs

       Backups

       Flashback Logs (Oracle 10.1 and above)

       Change Tracking Writer files (Oracle 10.1 and above)


       Redo Log Files in a RAC Database


Contents similar to single instance database except

       One redo thread per instance

 

ALTER DATABASE ADD LOGFILE THREAD 2

                GROUP 3  SIZE 51200K,

                GROUP 4  SIZE 51200K;

ALTER DATABASE ENABLE PUBLIC THREAD 2;

      If using Automatic Undo Management, one UNDO tablespace per instance is also required (see the verification queries below)

       CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE  SIZE 25600K  AUTOEXTEND ON MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL;

       Additional dynamic performance views (V$, GV$ )

        created by $ORACLE_HOME/rdbms/admin/catclust.sql
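To verify that each instance has its own redo thread and undo tablespace, queries such as the following can be used (illustrative only):

SQL> SELECT thread#, group#, bytes/1024/1024 AS size_mb, status FROM v$log ORDER BY thread#, group#;

SQL> SELECT inst_id, value AS undo_tablespace FROM gv$parameter WHERE name = 'undo_tablespace';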

 

Server Parameter File 

       Introduced in Oracle 9.0.1

       Must reside on shared storage

       Shared by all RAC instances

       Binary (not text) file

       Parameters can be changed using ALTER SYSTEM

       Can be backed up using the Recovery Manager (RMAN)

       Created using

  •        CREATE SPFILE [ = 'SPFILE_NAME' ] FROM PFILE [ = 'PFILE_NAME' ];

       init.ora file on each node must contain SPFILE parameter

       SPFILE = <pathname>
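For example, with the server parameter file kept in ASM, each node's init<SID>.ora typically contains nothing but the pointer (the disk group and path below are assumptions):

SPFILE='+DATA/RAC/spfileRAC.ora'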

 

RAC Parameters

       RAC uses same parameters as single-instance

       Some must be different on each instance

       Some must be same on each instance

       Can be global or local

       *.<parameter_name> = <value>

       <sid>.<parameter_name> = <value>

       Must be set using ALTER SYSTEM statement

       ALTER SYSTEM SET parameter = value

                      [ SCOPE = MEMORY | SPFILE | BOTH ]

                       [ SID = '<sid>' ]

       ALTER SYSTEM SET parameter = value

                      [ SCOPE = MEMORY | SPFILE | BOTH ]

                      [ SID = '*' ]

 RAC Parameters

       Some parameters must be the same on each instance, including:

       ACTIVE_INSTANCE_COUNT

       ARCHIVE_LAG_TARGET

       CLUSTER_DATABASE

       CONTROL_FILES

       DB_BLOCK_SIZE

       DB_DOMAIN

       DB_FILES

       DB_NAME

       DB_RECOVERY_FILE_DEST

       DB_RECOVERY_FILE_DEST_SIZE

       DB_UNIQUE_NAME

       UNDO_MANAGEMENT

       Some parameters, if used, must be different on each instance, including (see the parameter file sketch after this list):

       THREAD

       INSTANCE_NUMBER

       INSTANCE_NAME

       UNDO_TABLESPACE
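A minimal sketch of how these rules look in a text parameter file, assuming two instances named RAC1 and RAC2 (all values are illustrative):

*.db_name='RAC'
*.cluster_database=TRUE
*.db_block_size=8192
RAC1.instance_number=1
RAC2.instance_number=2
RAC1.thread=1
RAC2.thread=2
RAC1.undo_tablespace='UNDOTBS1'
RAC2.undo_tablespace='UNDOTBS2'

The same per-instance change can be made against the spfile, for example:

ALTER SYSTEM SET undo_tablespace = 'UNDOTBS2' SCOPE=SPFILE SID='RAC2';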


Oracle Cluster Registry and Voting Disks

Oracle Clusterware includes two important components that manage configuration and node membership: Oracle Cluster Registry (OCR), which also includes the local component Oracle Local Registry (OLR), and voting disks.

These files can be stored in ASM.

       OCR manages Oracle Clusterware and Oracle RAC database configuration information. You can have up to five OCR locations.

       OLR resides on every node in the cluster and manages Oracle Clusterware configuration information for each particular node.

       Voting disks manage information about node membership.

 

Oracle Cluster Registry (OCR)

        Configuration information for Oracle Clusterware / CRS

       Introduced in Oracle 10.1

       Similar to Windows Registry

       Located on shared storage

       For node-specific resources, each cluster node has a local registry called an Oracle Local Registry (OLR).

       OCRCONFIG can be used for OCR administration

run the following command to list the backup files:

ocrconfig -showbackup

       OCRCHECK and OCRDUMP are used to troubleshoot configuration problems

Verify the integrity of OCR

# ocrcheck

The dd commands used to back up and recover voting disks in previous versions of Oracle Clusterware are not supported in Oracle Clusterware 11g release 2 (11.2). Restoring voting disks that were copied using dd or cp commands can prevent the Oracle Clusterware 11g release 2 (11.2) stack from coming up. Use the backup and restore procedures described in this chapter to ensure proper voting disk functionality.

The default location for generating backups on Linux or UNIX systems is Grid_home/cdata/cluster_name

By default, OLR is located at Grid_home/cdata/host_name.olr on each node.

Manage OLR using the OCRCHECK, OCRDUMP, and OCRCONFIG utilities as root with the -local option.

You can check the status of OLR on the local node using the OCRCHECK utility, as follows:

# ocrcheck -local

Status of Oracle Cluster Registry is as follows :

        Version                  :          3

        Total space (kbytes)     :     262132

        Used space (kbytes)      :       9200

        Available space (kbytes) :     252932

        ID                       :  604793089

        Device/File Name         : /private2/crs/cdata/localhost/dglnx6.olr

                                   Device/File integrity check succeeded

        Local OCR integrity check succeeded

 

Defines cluster resources including:

       Databases

       Instances

       RDBMS

       ASM

       Services

       Node Applications

       VIP

       ONS

       GSD

       Listener Process

 

What is a Voting Disk

       Known as the Quorum Disk / File in Oracle 9i

       Located on shared storage accessible to all instances

       Used to determine RAC instance membership

       In the event of node failure, the voting disk is used to determine which instance takes control of the cluster

       Avoids split brain

       In Oracle 10.2 and above it can be mirrored

       Odd number of copies (1, 3, 5 etc)

       In Oracle 12c Clusterware, voting disk data is backed up automatically in the OCR as part of any configuration change.

       In RAC, CSSD processes (Cluster Synchronization Services Daemon) monitor the health of RAC nodes employing two distinct heartbeats: a network heartbeat and a disk heartbeat. Healthy nodes will have continuous network and disk heartbeats exchanged between the nodes. A break in a heartbeat indicates a possible error scenario.

There are a few different scenarios possible with missing heartbeats:

       Network heartbeat is successful, but disk heartbeat is missed.

       Disk heartbeat is successful, but network heartbeat is missed.

       Both heartbeats failed.

nod1[/home/oracrs]$ crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   3a5f3f9aaeef4fe3bf736b1422f88bf5 (/dev/rhdisk6) [VOTING]

 

What is VIP

       A node application introduced in Oracle 10.1; a VIP address is defined for each node. It does not require a separate network adapter/card, as it uses the existing public interface.

       All applications connect using the Virtual IP in 10g, but from 11g onward all applications connect through the SCAN. In 12c it is called the Node VIP.

       If a node fails, the Virtual IP address is automatically relocated to a surviving node.

       This only applies to newly connecting sessions. It enables a faster failover in the event of node failure. It resolves to a name and address configured in the DNS.

Node VIP: Each database node in a Real Application Clusters environment has one node IP and one Node VIP address. The main difference between the two is that the Node VIP can move to another system if the currently owning system is down, but the node IP cannot. Whenever a new connection request is made, the SCAN listener listening on a SCAN IP address and the SCAN port is contacted on the client's behalf. Because all services on the cluster are registered with the SCAN listener, the SCAN listener replies with the address of the local listener (the Node VIP address) on the least-loaded node (each SCAN listener keeps updated cluster load statistics), and the connection is routed to that node.

1) The virtual IP address and the network name must not be currently in use (they should not be accessible by a ping command, i.e. not pingable).
2) The virtual IP address must be on the same subnet as your public IP address.
3) The virtual host name for each node should be registered with your DNS.
4) Static IP address.
5) Configured before installation for each node; the IP address and host name are currently unused (they can be registered in a DNS, but should not be accessible by a ping command).
6) It should be on the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses.
7) A virtual IP is registered in DNS, but NOT defined on the servers. It will be defined later during the Oracle Clusterware install.
8) Assign one virtual IP (VIP) address with an associated host name registered in a DNS.
9) Record the host name and VIP address in the system hosts file, /etc/hosts.

 

The VIP is not bound. It relies on the Address Resolution Protocol (ARP) to tie the VIP to the network device on the host. If that host goes down, another member of the cluster informs the network switch that the VIP is now tied to a different host. The network switch will then route any traffic for the VIP to the new host. There isn't a hard binding between the IP and the NIC when you have a VIP. This lets the VIP fail over to a different node.

node1[/home/oracrs]$  srvctl status vip -n node1

VIP node1-vip is enabled

VIP node1-vip is running on node: node1

node1[/home/oracrs]$

node1[/home/oracrs]$ srvctl config vip -n node1

VIP exists: /node1-vip/10.160.32.181/10.160.32.128/255.255.255.192/en0, hosting node node1

node1[/home/oracrs]$ 

node1[/home/oracrs]$ srvctl status nodeapps -n node1

VIP node1-vip is enabled

VIP node1-vip is running on node: node1

Network is enabled

Network is running on node: node1

GSD is disabled

GSD is not running on node: node1

ONS is enabled

ONS daemon is running on node: node1

node1[/home/oracrs]$

So what is the difference between relocating the SCAN using the srvctl relocate scan command and the SCAN listener using the srvctl relocate scan_listener command?

The difference between a SCAN VIP and a normal RAC VIP is that the RAC VIP has a node it wants to run on, and each node has one (whereas you only have 3 SCANs). If it fails over to another node, the normal VIP exists but does not accept connections, whereas the SCAN is not fixed to a node and can run on any node in the cluster (and will accept connections at any time).

Given that, the SCAN VIP will always move with the SCAN listener (otherwise it would not make any sense). Hence there is really no difference between moving the SCAN VIP (because this will trigger a relocate of the listener) and moving the SCAN listener (since this will move the VIP it depends on).

 

SCAN VIP: The SCAN name resolves to one or more IP addresses; these IP addresses are called SCAN VIPs or SCAN IPs. Each SCAN VIP has a SCAN listener generated corresponding to it, so if there are three SCAN IPs, three SCAN listeners will be generated. These SCAN listeners run on any of the nodes in the RAC environment, and two SCAN listeners can end up running on one node.

SCAN VIP – It is defined in the corporate DNS (Domain Name Service); this again uses the existing public interface.
1) Determine your cluster name. The cluster name should satisfy the following conditions:
2) The cluster name is globally unique throughout your host domain.
3) The cluster name is at least 1 character long and less than 15 characters long.
4) The cluster name must consist of the same character set used for host names: single-byte alphanumeric
characters (a to z, A to Z, and 0 to 9) and hyphens (-).

5) Define the SCAN in your corporate DNS (Domain Name Service), You must ask your network administrator to create a single name,
that resolves to three IP addresses using a round robin algorithm.
6) The IP addresses must be on the same subnet as your public network in the cluster.

 

7) SCAN VIPs must NOT be in the /etc/hosts file; they must be resolved by DNS.

$ /u01/app/11.2.0.1/grid/bin/olsnodes -c
mgrac-cluster

$ nslookup mgrac-scan
Server:         192.168.56.99
Address:        192.168.56.99#53

Name:   mgrac-scan.mgdom.com
Address: 192.168.56.81
Name:   mgrac-scan.mgdom.com
Address: 192.168.56.82
Name:   mgrac-scan.mgdom.com
Address: 192.168.56.83

$ dig mgrac-scan.mgdom.com +noall +answer

; <<>> DiG 9.6-ESV-R8 <<>> mgrac-scan.mgdom.com +noall +answer
;; global options: +cmd
mgrac-scan.mgdom.com.   600     IN      A       192.168.56.83
mgrac-scan.mgdom.com.   600     IN      A       192.168.56.81
mgrac-scan.mgdom.com.   600     IN      A       192.168.56.82

 

$ /u01/app/11.2.0.1/grid/bin/srvctl config scan
SCAN name: mgrac-scan, Network: 1/192.168.56.0/255.255.255.0/e1000g0
SCAN VIP name: scan1, IP: /mgracscan3/192.168.56.83
SCAN VIP name: scan2, IP: /mgracscan1/192.168.56.81
SCAN VIP name: scan3, IP: /mgracscan2/192.168.56.82

$ /u01/app/11.2.0.1/grid/bin/srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

Verify SCAN Ip’s Registered with resource
$ /u01/app/11.2.0.1/grid/bin/crsctl stat res ora.scan1.vip -p | egrep '^NAME|TYPE|USR_ORA_VIP|START_DEPENDENCIES|SCAN_NAME|VERSION'

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
SCAN_NAME=mgrac-scan
START_DEPENDENCIES=hard(ora.net1.network) dispersion:active(type:ora.scan_vip.type) pullup(ora.net1.network)
USR_ORA_VIP=192.168.56.83
VERSION=11.2.0.1.0

 

What is the Interconnect

       Instances communicate with each other over the interconnect (network)

       Information transferred between instances includes

       data blocks

       Locks

  • Most cases: use UDP over 1 Gigabit Ethernet
  • For large databases - Infiniband/IP or 10 Gigabit Ethernet
  • Use OS Bonding/teaming to “virtualize” interconnect
  • Set UDP send/receive buffers high enough
  • Crossover cables are not supported

Oracle Interface Configuration Tool: oifcfg

The oifcfg command-line interface helps you to define and administer network interfaces.

oifcfg is a command-line tool for both single-instance Oracle databases and Oracle RAC environments.

The oifcfg utility can direct components to use specific network interfaces.
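For example, the interfaces registered with the cluster can be listed as follows (the interface names and subnets shown are assumptions):

$ oifcfg getif
eth0  192.168.56.0  global  public
eth1  10.0.0.0  global  cluster_interconnect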

 

Private InterConnect – Requires one interface or network Adapter/card

Determine the private hostname for each node in the cluster.


1) This private hostname does not need to be resolvable through DNS and should be entered in the /etc/hosts file.
2) The private IP should NOT be accessible to servers not participating in the local cluster. (Only RAC Nodes should ping each other using Priv IP)
3) The private network should be on standalone dedicated switch(es).
4) The private network should NOT be part of a larger overall network topology.
5) The private network should be deployed on Gigabit Ethernet or better
6) It is recommended that redundant NICs are configured; for Solaris, use either Sun Trunking (OS based) or Sun IPMP (OS based)

8) Static IP address
9) Configured before installation, but on a separate, private network, with its own subnet, that is not resolvable except by other cluster member nodes.
 

10) A private IP only known to the servers in the RAC configuration, to be used by the interconnect.
11) Configure the private interconnect network interface cards to have a private node name and a private IP address.
12) This can be an interface bonded using IPMP.
13) Choose a private IP address that is in the address range 10.*.*.* or 192.168.*.*
Private     10.0.0.1     Hosts file
Private     10.0.0.2     Hosts file
14) The private IP address should be on a separate subnet than the public IP address.
15) Oracle strongly recommends using a physically separate, private network (You can use a vlan on a shared switch if you must).
16) You should ensure that the private IP addresses are reachable only by the cluster member nodes.

17) Use Non Routable network addresses for private interconnect:
Class A: 10.0.0.0 to 10.255.255.255
Class B: 172.16.0.0 to 172.31.255.255
Class C: 192.168.0.0 to 192.168.255.255
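A sample /etc/hosts for a two-node cluster reflecting these rules might look like the following (all host names and addresses are assumptions; the SCAN is intentionally left to DNS):

# Public
192.168.56.71   node1
192.168.56.72   node2
# Virtual IPs
192.168.56.81   node1-vip
192.168.56.82   node2-vip
# Private interconnect
10.0.0.1        node1-priv
10.0.0.2        node2-priv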


Why Use Shared Storage

       Mandatory for

       Database files

       Control files

       Online redo logs

       Server Parameter file (if used)

       Optional for

       Archived redo logs (recommended)

       Executables (Binaries)

       Password files

       Parameter files

       Network configuration files

    Administrative directories

       Alert Log

       Dump Files

 

What Shared Storage is Supported

1) Oracle-supplied options

       Oracle Cluster File System (OCFS)

       Automatic Storage Management (ASM)

       Both require underlying SAN or NAS

2) Raw devices

       Difficult to administer

       Cannot be used with archived redo logs

 

Internal Structures and Services

 

Resource Affinity: A dynamic change in the ownership of resources in a RAC system is called resource affinity.

Cache Coherence: Maintaining consistency of the data across all the RAC instances is called CACHE COHERENCE

Global Resource Directory (GRD)

Records current state and owner of each resource

Contains convert and write queues

Distributed across all instances in cluster

Location of most current version of the block

Role and mode of resource

In RAC, every resource is identified by its role and mode. The role of a resource can be either local or global, and the mode of the resource can be NULL, SHARE, or EXCLUSIVE.

Global Cache Services (GCS)

It plays a vital role in Cache Fusion and keeps track of the role and mode of each resource. It coordinates access to database blocks for the instances and maintains the GRD.

Global Enqueue Services (GES)

Controls access to other resources (locks) including

       library cache

       dictionary cache

 

Full Cache Fusion

       Cache Fusion increases performance and scalability

       Data is shipped directly over high speed interconnect

       Minimize disk I/O

 

Cache Fusion Architecture


  • RAC – important features
  • Movement of data or transfer of data across instances through private interconnect is called CACHE FUSION

• The cache becomes global, so global cache management is required

• Many locks become global

  • Global lock management is required

       Full Cache Fusion

       Cache-to-cache data shipping through interconnect

       Shared cache eliminates

       slow I/O
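Cache Fusion block transfers over the interconnect can be observed per instance with a query such as the following (a minimal sketch using the standard global cache statistics):

SQL> SELECT inst_id, name, value
       FROM gv$sysstat
      WHERE name IN ('gc cr blocks received', 'gc current blocks received')
      ORDER BY inst_id, name;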

 

Clusterware verification

Clusterware processes

$ ps -ef|grep -v grep |grep d.bin

oracle 9824 1 0 Jul14 ? 00:00:00 /u01/app/grid11gR2/bin/oclskd.bin

root 22161 1 0 Jul13 ? 00:00:15 /u01/app/grid11gR2/bin/ohasd.bin reboot

oracle 24161 1 0 Jul13 ? 00:00:00 /u01/app/grid11gR2/bin/mdnsd.bin

oracle 24172 1 0 Jul13 ? 00:00:00 /u01/app/grid11gR2/bin/gipcd.bin

oracle 24183 1 0 Jul13 ? 00:00:03 /u01/app/grid11gR2/bin/gpnpd.bin

oracle 24257 1 0 Jul13 ? 00:01:26 /u01/app/grid11gR2/bin/ocssd.bin

root 24309 1 0 Jul13 ? 00:00:06 /u01/app/grid11gR2/bin/octssd.bin

root 24323 1 0 Jul13 ? 00:01:03 /u01/app/grid11gR2/bin/crsd.bin reboot

root 24346 1 0 Jul13 ? 00:00:00 /u01/app/grid11gR2/bin/oclskd.bin

oracle 24374 1 0 Jul13 ? 00:00:03 /u01/app/grid11gR2/bin/evmd.bin

$ crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

$ crsctl check has

CRS-4638: Oracle High Availability Services is online

$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [12.2.0.2.0]

Clusterware processes

$ crsctl check cluster -all

**************************************************************

rat-rm2-ipfix006:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

rat-rm2-ipfix007:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

rat-rm2-ipfix008:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

$ crsctl status resource -t

NAME TARGET STATE SERVER STATE_DETAILS

------------------------------------------------

ora.DATA.dg  <-- ASM disk group (new resource)

ONLINE ONLINE rat-rm2-ipfix006

ONLINE ONLINE rat-rm2-ipfix007

ONLINE ONLINE rat-rm2-ipfix008

ora.LISTENER.lsnr

ONLINE ONLINE rat-rm2-ipfix006

ONLINE ONLINE rat-rm2-ipfix007

ONLINE ONLINE rat-rm2-ipfix008

ora.asm

ONLINE ONLINE rat-rm2-ipfix006 Started

ONLINE ONLINE rat-rm2-ipfix007 Started

ONLINE ONLINE rat-rm2-ipfix008 Started

ora.eons  <-- new resource

ONLINE ONLINE rat-rm2-ipfix006

ONLINE ONLINE rat-rm2-ipfix007

ONLINE ONLINE rat-rm2-ipfix008

ora.gsd

Root Causes of Node Eviction

  • Network heartbeat lost
  • Voting disk problems
  • cssd is not healthy
  • Oprocd
  • Hang check timer
  • Important files to review are:

a) Each instance's alert log

b) Each instance's LMON trace file

c) awr reports from all nodes leading up to the eviction

d) Each node's syslog or messages file

e) iostat output before, after, and during evictions

f) vmstat output before, after, and during evictions

g) netstat output before, after, and during evictions
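The OS-level data in points (e) to (g) can be captured around the eviction window with standard commands, for example on Linux (the intervals and counts are arbitrary):

$ iostat -x 5 12
$ vmstat 5 12
$ netstat -s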

 

Clusterware log location in 12C

node1[/u02/oracle/12.1.0/grid]$ adrci

ADRCI: Release 12.1.0.2.0 - Production on Tue Jan 2 15:07:04 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u02/oracle/NFT"

adrci> show homes

ADR Homes:

diag/asm/+asm/+ASM2

diag/crs/node1/crs

diag/rdbms/_mgmtdb/-MGMTDB

diag/tnslsnr/node1/listener

diag/tnslsnr/node1/listener_scan1

diag/tnslsnr/node1/listener_scan2

diag/tnslsnr/node1/listener_scan3

diag/tnslsnr/node1/mgmtlsnr

adrci>

Load Balancing

LOAD BALANCING: The Oracle RAC system can distribute the load over the running nodes; this feature is called load balancing.

There are two methods of load balancing

1.Client load balancing

2.Server load balancing

Client load balancing distributes new connections among Oracle RAC nodes. It enables clients to randomize connection requests among all the available listeners, so that no single server is overloaded with connection requests. It is configured at the net service name level by providing multiple descriptions in a description list or multiple addresses in an address list. For example, if a connection fails over to another node in case of failure, client load balancing ensures that the redirected connections are distributed among the other nodes in the RAC.

Configure Client-side connect-time load balancing by setting LOAD_BALANCE=ON in the corresponding client side TNS entry.

TESTRAC =

(DESCRIPTION =(ADDRESS_LIST=

(LOAD_BALANCE = ON)

(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC1-VIP)(PORT = 1521))

(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC2-VIP)(PORT = 1521)))

(CONNECT_DATA = (SERVICE_NAME = testdb.oracleracexpert.com))

)

 

Server-side Load Balancing

Server-side connection balancing

       Dependent on current workload on each node

       PMON monitors workload and updates listeners

       It does so using two important parameters, local_listener and remote_listener. The local_listener should be set to the node virtual IP address.

The PMON process communicates with other listeners and distributes processing workload among Oracle RAC nodes. It divides the connection load evenly between all available listeners and distributes new user session connection requests to the least-loaded listener(s) based on the total number of sessions that are already connected. Each listener communicates with the other listener(s) via each database instance's PMON. The PMON process registers the database with the database's local listener.

Cross-registers database with listeners on all other nodes in the cluster

PMON by default checks CPU every 3 seconds

PMON updates the listener with statistics.

 By default, PMON updates TNSLSNR every 10 minutes.

Listener routes incoming connections to the least loaded instance.

Server Side Load Balancing

TNSNAMES entry

LISTENERS_RACDB =

       (ADDRESS_LIST =

                 (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))

                 (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))

                 (ADDRESS = (PROTOCOL = TCP)(HOST = rac3-vip)(PORT = 1521))

       )
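On the database side, server-side load balancing relies on the two parameters described above; typical settings look like this (the SCAN name, VIP host, and instance name are assumptions):

ALTER SYSTEM SET remote_listener = 'rac-scan:1521' SCOPE=BOTH SID='*';
ALTER SYSTEM SET local_listener = '(ADDRESS=(PROTOCOL=TCP)(HOST=rac1-vip)(PORT=1521))' SCOPE=BOTH SID='RACDB1';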

 

What is TAF

TAF is Transparent Application Failover. It is a client-side feature that allows clients to reconnect to surviving nodes in the event of an instance failure.

       Sessions connected to a failed instance will be terminated

       Uncommitted transactions will be rolled back

       Sessions can be reconnected to another instance automatically if using TAF

       Can optionally re-execute in-progress SELECT statements

       Statement re-executed with same SCN

       Fetches resume at point of failure

       Session state is lost, including

       Session parameters

       Package variables

       Class and ADT instantiations

 

What is TAF

Failover is configured in the tnsnames.ora file; the TAF settings are placed in the CONNECT_DATA section of tnsnames.ora using the FAILOVER_MODE parameter.

TAF Configuration Option

----------------------------------------------------------

(FAILOVER_MODE=(TYPE=SELECT)(METHOD=BASIC))

TYPE parameter:

  a) SESSION – set to fail over the session; if the user's connection is lost, a new session is automatically created for the user on the backup node using the service name specified in tnsnames.ora.

  b) SELECT – set to enable users with open cursors to continue fetching on them after failure.

  c) NONE – this is the default; no failover functionality is used.

Method Parameter:

BASIC: set to establish the connection at failover time. This option requires almost no work on the backup server until failover time.

BACKUP/PRECONNECT: set to pre-establish the connection. This means a separate database connection to the secondary (backup) instance is maintained all the time while there is a connection to the primary instance. Specify a different net service name for backup instance connections. A backup should be specified when using PRECONNECT to pre-establish connections.

 

A BASIC  sample configuration would look like

TESTRAC =

(DESCRIPTION =

(LOAD_BALANCE = ON)

(FAILOVER = ON)

(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC1-VIP)(PORT = 1521))

(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC2-VIP)(PORT = 1521))

(CONNECT_DATA =

(SERVICE_NAME = testdb.oracleracexpert.com)

(FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))))

A Pre connect  sample configuration would look like

TESTDB1 =

(DESCRIPTION =

(LOAD_BALANCE = ON)

(FAILOVER = ON)

(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC1-VIP)(PORT = 1521))

(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC2-VIP)(PORT = 1521))

(CONNECT_DATA =

(SERVICE_NAME = testdb.oracleracexpert.com)

(FAILOVER_MODE = (TYPE = SELECT)(METHOD = PRECONNECT)(BACKUP=TESTDB2))

))
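Whether connected sessions are TAF-enabled, and whether they have actually failed over, can be checked from GV$SESSION, for example:

SQL> SELECT inst_id, username, failover_type, failover_method, failed_over
       FROM gv$session
      WHERE username IS NOT NULL;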

 

Scan Configuration

       SCAN gives a single point of access for clients to the cluster; there is no need to modify the TNS entry after adding or deleting a node.

       SCAN -Single Client Access Name

       Failover - Faster relocation of services

       Better Load balancing

       MTU package size of Network Adapter (NIC)

Few important points about SCAN

1. It is a better practice to have three SCAN IP addresses and three SCAN listeners to provide fault tolerance.

2. These three SCAN IP addresses can be alive on any node of the cluster. If you have more than three nodes in a cluster, then the nodes joining the cluster first will have the SCAN resources running.

3. SCAN is an abstraction layer. In a huge cluster, the client connect string does not need to specify all nodes in the cluster. Even if the topology of the cluster changes, there is no reason to change the connect string.

4. SCAN and VIP addresses must be in the same subnet. Multiple subnets can be created using listener_networks, but redirection is not contained within the same subnet.

        SCAN listeners were introduced in 11gR2. SCAN listeners also act as a load balancing mechanism to manage workload; a SCAN listener simply acts as a redirection mechanism, redirecting incoming connections to a local VIP listener.

1. A connection request is made by the application connecting to the SCAN IP address and port.

2. The SCAN listener receives the service name and redirects the connection to the VIP listener servicing that service_name. The SCAN listener uses load balancing statistics to choose a node if the service is preferred on more than one instance.

3. The VIP listener bequeaths the connection to a server process, and the connection continues to create a new DB connection.

Parameter setup

Two parameters are used to configure these listeners in the database. The parameter remote_listener is set to scan_ip:1521, and local_listener is set to a connect string connecting to the local VIP listener.

The PMON process registers the services with the listeners using local_listener and remote_listener.

Configuration of these two parameters is of paramount importance, since PMON service registration is the key to consistently successful connections.

Figure : SCAN redirection

 

Scan Configuration

When a client program connects to a RAC database through the SCAN name, a SCAN listener will accept the request and redirect the connection to a local listener. To identify a connection issue, first try to connect to each local listener through the node VIP, then try each SCAN listener through each SCAN VIP.

To test through node VIP:

sqlplus <username>/<password>@<nodename-vip.domain>:<local-listener-port>/<service-name>
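To test through the SCAN (same placeholder style as above):

sqlplus <username>/<password>@<scan-name.domain>:<scan-listener-port>/<service-name>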


 

 SCAN VIP Troubleshooting

SCAN Configuration:

       $GRID_HOME/bin/srvctl config scan

SCAN Listener Configuration:

       $GRID_HOME/bin/srvctl config scan_listener

SCAN Listener Resource Status:

        $GRID_HOME/bin/crsctl stat res -w "TYPE = ora.scan_listener.type"

       $GRID_HOME/network/admin/listener.ora

       Local and remote listener parameters

 


What is SRVCTL

 

       Utility used to manage cluster database

       Configured in Oracle Cluster Registry (OCR)

       Controls

       Database

       Instance

       ASM

       Listener

       Node Applications

       Services

       Options include

       Start / Stop

       Enable / Disable

       Add / Delete

       Show current configuration

       Show current status

 

Starting and Stopping a Database

srvctl start database -d RAC

srvctl stop database -d RAC -o immediate

Starting and Stopping an Instance

srvctl start instance -d RAC -i RAC1

srvctl stop instance -d RAC -i RAC1

Starting and Stopping ASM on a specified node

srvctl start asm -n node1

srvctl stop asm -n node1

Some other Important Utilities

       oifcfg is a command-line tool that can be used to configure network interfaces.

       ocrconfig is a command-line tool that can be used to administer the OCR and OLR.

       ocrcheck is the OCR Check tool to check the state of the OCR.

       ocrdump is the Oracle Clusterware Registry Dump tool that can be used to dump the contents of OCR.

       Oracle Enterprise Manager Database Control 11g and Enterprise Manager Grid control 11g and 12c can be used to manage the Oracle Clusterware environment.

       Start Up and Stop Clusterware

Through the OS init process, Oracle Clusterware is automatically started up when the OS starts. The clusterware can also be manually started and stopped by using the crsctl utility.

The crsctl utility provides the commands to start up the Oracle Clusterware manually:

Start the Clusterware stack on all servers in the cluster or on one or more named server in the cluster:

$ crsctl start cluster [-all | -n server_name [...]]

 

For example:

$crsctl start cluster -all

$ crsctl start cluster -n k2r720n1

Start the Oracle High Availability Services daemon (OHASD) and the Clusterware service stack together on the local server only:

$crsctl start crs

Both of these two crsctl startup commands require the root privilege on Linux/Unix to run. The 'crsctl start crs' command will fail if OHASD is already started.

The crsctl utility also provides similar commands to stop the Oracle Clusterware manually. It also requires root privilege on Linux/Unix to stop the clusterware manually.

The following command stops the Clusterware stack on the local node, on all nodes, or on specified local or remote nodes. Without the [-f] option, this command stops the resources gracefully, and with the [-f] option, the command forces the Oracle Clusterware stack to stop, along with the resources that Oracle Clusterware manages.

$ crsctl stop cluster [-all | -n server_name [...]] [-f]

The following command stops the Oracle High Availability service on the local server. Use the [-f] option to force any resources to stop, as well as to stop the Oracle High Availability service:

$ crsctl stop crs [-f]

Managing Oracle Clusterware

You can use the following command to check the cluster status:

$ crsctl check cluster [-all]

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

Check the CRS status with the following command:

$ crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

Check the OHASD status:

$GRID_HOME/bin/crsctl check has

CRS-4638: Oracle High Availability Services is online

Check the current status of all the resources using the following command:

$ crsctl status resource -t

Some other Important Utilities

RAC utilities and diagnostics include

export ORACLE_HOME=$GRID_HOME

$GRID_HOME/bin/crsctl query crs activeversion

$GRID_HOME/bin/crsctl check crs

$GRID_HOME/bin/crs_stat -t

$GRID_HOME/bin/crs_stat

$GRID_HOME/bin/oifcfg iflist -p -n

$GRID_HOME/bin/oifcfg getif

$GRID_HOME/bin/ocrcheck

cat /etc/oracle/ocr.loc /var/opt/oracle/ocr.loc

$GRID_HOME/bin/crsctl get css diagwait

$GRID_HOME/bin/srvctl config nodeapps -n `hostname | cut -d'.' -f1 | awk '{ print tolower($0); }'` -a

$GRID_HOME/OPatch/opatch lsinventory  $GRID_HOME

$GRID_HOME/bin/ocrconfig -showbackup

 

BASIC RAC Commands

1) Check database instance on both node

                ps -ef|grep pmon    (to see database status)

                srvctl status database -d <Database Name>

  

2) check listener from both nodes

                ps -ef|grep tns    (to see listener status)

3) Check cluster status as the grid owner user on a node

                 crsctl check cluster -all

                crsctl check crs  --To check status of clusterware

                crsctl status resource -t

                crs_stat -t

                crsctl stat res -t -init

4)   Check whether sessions are balanced on each instance

                select name,open_mode from v$database

                select count(1),inst_id from gv$session group by inst_id

 

5) To know IP on server

                 ifconfig -a

6) To know RAC node

                olsnodes -t -s

7) To check completed RAC database configuration

                srvctl config database -d <database_name> -a

8) To know scan related details

                srvctl config scan

                lsnrctl status LISTENER_SCAN1

9) Clusterware log locations before 12c

Clusterware alert log

$GRID_ORACLE_HOME/log/<nodename>/alert<nodename>.log

$GRID_ORACLE_HOME/log/<nodename>/cssd/ocssd.log

$GRID_ORACLE_HOME/log/<nodename>/crsd/crsd.log


ASM OVERVIEW

       ASM Overview

       Disk and disk group

        Initialization parameter file for ASM

       Basic ASM Command


ASM OVERVIEW

ASM is a volume manager and a file system for Oracle Database files that supports both single-instance Oracle Database and Oracle RAC databases. Oracle ASM was introduced in Oracle 10.1.

Oracle ASM is Oracle's recommended storage management solution and provides an alternative to conventional volume managers, file systems, and raw devices. It is a portable, high-performance cluster file system that:

       Manages Oracle database files

       Spreads data across disks to balance load

       Provides integrated mirroring across disks

 

In order to use ASM in a RAC cluster, a separate ASM instance must be running on each node. It maintains metadata in the ASM disk groups. The ASM instance must be started before the other database instances, so that the database instances can access files located in ASM storage. The database instance communicates with the ASM instance to obtain information about the files stored in ASM. ASM divides files into allocation units (AUs) and spreads the AUs for each file evenly across all the disks. One unique advantage of ASM is that mirroring is applied on a file basis, rather than on a volume basis.

ASM supports data files, log files, control files, archive logs, RMAN backup sets, and other Oracle database file types.

Disk Groups – Logical grouping of disks; determines the file mirroring options

ASM Disks – LUNs presented to ASM

ASM Files – Files that are stored in ASM disk groups are called ASM files; this includes database files and ACFS/ADVM volume files

ASM requires a special type of Oracle instance to provide the interface between a traditional Oracle instance and the file system; the ASM software components are shipped with the Oracle Database or Grid Infrastructure software and are a viable option for database storage.

ASM Key Features and Benefits

The following are some key benefits of ASM:

       ASM spreads I/O evenly across all available disk drives to prevent hot spots and maximize performance.

       ASM eliminates the need for over-provisioning and maximizes storage resource utilization, facilitating database consolidation.

       Inherent large file support.

       Performs automatic online redistribution after the incremental addition or removal of storage capacity.

       Maintains redundant copies of data to provide high availability, or leverages 3rd party RAID functionality.

       For simplicity and easier migration to ASM, an Oracle database can contain ASM and non-ASM files. Any new files can be created as ASM files whilst existing files can also be migrated to ASM.

       RMAN commands enable non-ASM managed files to be relocated to an ASM disk group.

       Enterprise Manager Database Control or Grid Control can be used to manage ASM disk and file activities.

 

There is one ASM instance per server. Even when there are several database instances on a single server, each database shares the single ASM instance on that server.

In a RAC configuration, each server in the cluster has an ASM instance.

Each of these instances communicates amongst themselves in the management and presentation of the file system.

• Each database instance coordinates file access through the ASM instance on the server on which it operates.

• Lastly, each ASM instance gathers performance-related data from its associated database instance on its server.

ASM Disk Groups

The ASM disk group is the fundamental object that ASM manages.

It consists of one or more ASM disks that provide space

ASM disks are the storage devices that are provisioned to ASM

Disk groups:

A pool of disks managed as a logical unit

       Partitions total disk space into uniform sized units

       Spreads each file evenly across all disks

 

How to Prepare Storage for ASM

An ASM disk group will be created using one of the following storage resources:

1) Raw disk partition—A raw partition can be the entire disk drive or a section of a disk drive. However, the ASM disk cannot be in a partition that includes the partition table because the partition table can be overwritten.

2) Logical unit numbers (LUNs)—Using hardware RAID functionality to create LUNs is a recommended approach. Storage hardware RAID 0+1 or RAID5, and other RAID configurations, can be provided to ASM as ASM disks.

3) Raw logical volumes (LVM)—LVMs are supported in less complicated configurations where an LVM is mapped to a LUN, or an LVM uses disks or raw partitions. LVM configurations are not recommended by Oracle because they create a duplication of functionality. Oracle also does not recommend using LVMs for mirroring because ASM already provides mirroring.

4) NFS NAS files—If you have a certified NAS device, then you can create zero-padded files in an NFS mounted directory and use those files as disk devices in an Oracle Automatic Storage Management (Oracle ASM) diskgroup.

 

Redundancy

Failure Groups – Failure groups take redundancy at the disk level a step further by creating groups containing disks attached to different controllers. If a controller fails and all of the disks associated with that controller become inaccessible, the other disks within the disk group will still be accessible as long as they are connected to a different controller. By creating failure groups within the disk group, Oracle and ASM mirror writes to disks and ensure that those writes go to disks within different failure groups, so that the loss of a controller will not impact access to the data.

 

ASM Supports three levels of redundancy

External Redundancy

       Implemented externally using storage layer

       In this configuration there are only primary extents and no mirrored extents.

Normal Redundancy

       Two copies of each extent maintained in separate failure groups

        In this configuration, for each primary extent, there is one mirrored (secondary)extent.

       Increases CPU overhead on servers

       This configuration protects against at least one disk failure. Note that in the event a disk fails in this configuration, although there is typically no outage or data loss, the system operates in a vulnerable state should a second disk fail while the old failed disk replacement has not completed. Many Oracle Database Appliance customers thus prefer the High Redundancy configuration to mitigate the lack of additional protection during this time.

High Redundancy

Three copies of each extent maintained in separate failure groups

       HIGH redundancy is the ASM redundancy level recommended to run on production systems as each primary file extent has 2 mirrored copies on different failgroups while with NORMAL redundancy each primary file extent has only one mirrored file extent in a different failgroup.

       This basically means that NORMAL redundancy protects from a single disk failure while HIGH redundancy protects from 2 simultaneous disk failures. Note that, in the specific case of Exadata, each failgroup is located on different storage servers.
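As an illustration, a normal-redundancy disk group with two failure groups on separate controllers could be created as follows (the disk group name and device paths are assumptions):

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
       FAILGROUP controller1 DISK '/dev/rhdisk5'
       FAILGROUP controller2 DISK '/dev/rhdisk6';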

 

Basic ASM Command

 

[oracrs@port3022ovl ~]$ . oraenv

ORACLE_SID = [+ASM1] ?

The Oracle base remains unchanged with value /u02/oracle/prod

[oracrs@port3022ovl ~]$ asmcmd

ASMCMD> ls -ltr

WARNING:option 'r' is deprecated for 'ls'

please use 'reverse'

State    Type    Rebal  Name

MOUNTED  EXTERN  N      OCR_VOTE/

MOUNTED  EXTERN  N      FEG_FRA/

MOUNTED  EXTERN  N      FEG_DATA/

MOUNTED  EXTERN  N      ACFS_CTRLM/

ASMCMD>

Node1[/home/oracrs]$ ASMCMD> lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N         512   4096  1048576     40959      967                0             967              0             N  ACFS_CTRLM/

MOUNTED  EXTERN  N         512   4096  1048576    614398   596144                0          596144              0             N  FSB_DATA/

MOUNTED  EXTERN  N         512   4096  1048576    102399    60494                0           60494              0             N  FUB_FRA/

MOUNTED  EXTERN  N         512   4096  1048576    307198   266540                0          266540              0             N  FUO_DATA/

MOUNTED  EXTERN  N         512   4096  1048576    102399    43013                0           43013              0             N  FUO_FRA/

MOUNTED  EXTERN  N         512   4096  1048576     10239     5655                0            5655              0             Y  OCR_VOTE/

ASMCMD>

ASMCMD> spget

+VOTING/prod-cluster/asmparameterfile/registry.253.756259253

ASMCMD>

$ env | grep ORA

ORACLE_SID=+ASM1

ORACLE_HOME=/u01/app/12.1.0/grid

Get into the ASMCMD utility by running the asmcmd command.

SQL> select instance_name, db_name, status from v$asm_client;

INSTANCE_N DB_NAME    STATUS

---------- ---------- ----------

knewdb_2   knewdb     CONNECTED

+ASM2      +ASM       CONNECTED

+APX2      +APX       CONNECTED

-MGMTDB    _mgmtdb    CONNECTED

STARTUP/SHUTDOWN

ALTER DISKGROUP MOUNT/DISMOUNT

ALTER DISKGROUP ONLINE/OFFLINE DISK

The following commands are available to SYSOPER users:

ALTER DISKGROUP REBALANCE

ALTER DISKGROUP CHECK

SELECT all V$ASM_* views

 

All other commands, such as CREATE DISKGROUP, ADD/DROP/RESIZE DISK, and so on, require the SYSDBA privilege and are not allowed with the SYSOPER privilege.

To check space on ASM

column path FORMAT a40

select group_number,disk_number,path,mode_status,name from v$asm_disk order by group_number,disk_number,path;

select group_number,name,state,type,total_mb,free_mb,usable_file_mb  from v$asm_diskgroup

select name, total_mb, free_mb, state from v$asm_diskgroup;

To Check space in DiskGroup

set lines 200 pages 200

SELECT g.group_number "Group"

, g.name "Group Name"

, g.state "State"

, g.type "Type"

, g.total_mb/1024 "Total GB"

, g.free_mb/1024 "Free GB"

, 100*(max((d.total_mb-d.free_mb)/d.total_mb)-min((d.total_mb-d.free_mb)/d.total_mb))/max((d.total_mb-d.free_mb)/d.total_mb) "Imbalance"

, 100*(max(d.total_mb)-min(d.total_mb))/max(d.total_mb) "Variance"

, 100*(min(d.free_mb/d.total_mb)) "MinFree"

, 100*(max(d.free_mb/d.total_mb)) "MaxFree"

, count(*) "DiskCnt"

FROM v$asm_disk d, v$asm_diskgroup g

WHERE d.group_number = g.group_number and

d.group_number <> 0 and

d.state = 'NORMAL' and

d.mount_status = 'CACHED'

GROUP BY g.group_number, g.name, g.state, g.type, g.total_mb, g.free_mb

ORDER BY 1;


 

     STARTUP/SHUTDOWN

ALTER DISKGROUP MOUNT/DISMOUNT

ALTER DISKGROUP ONLINE/OFFLINE DISK

The following commands are available to SYSOPER users:

ALTER DISKGROUP REBALANCE

ALTER DISKGROUP CHECK

SELECT all V$ASM_* views

All other commands, such as CREATE DISKGROUP, ADD/DROP/RESIZE DISK, and so on, require the SYSDBA privilege and are not allowed with the SYSOPER privilege.

To check space on ASM

column path FORMAT a40

select group_number,disk_number,path,mode_status,name from v$asm_disk order by group_number,disk_number,path;

select group_number,name,state,type,total_mb,free_mb,usable_file_mb  from v$asm_diskgroup

select name, total_mb, free_mb, state from v$asm_diskgroup;

 

The procedures for preparing storage resources for ASM are: 

1) Identify or create the storage devices for ASM by identifying all of the storage resource device names that you can use to create an ASM disk group. For example, on Linux systems, device names are typically presented from the /dev directory with the /dev/device_name_identifier name syntax. On AIX the devices can be listed as below:

cd /dev

ls -ltr /dev/rhdisk35 /dev/rhdisk36

    

2) Change the ownership and the permissions on the storage device resources. For example, the following steps are required on Linux or AIX systems:

2.1) Change the user and group ownership of devices to oracle:dba or grid:asmadmin

2.2) Change the device permissions to read/write

2.3) On older Linux versions, you must configure raw device binding

On AIX, for example:

chown -R grid:asmadmin /dev/rhdisk5

chown -R grid:asmadmin /dev/rhdisk6

chmod 660 /dev/rhdisk5

chmod 660 /dev/rhdisk6

After you have configured ASM, ensure that disk discovery has been configured correctly by setting the ASM_DISKSTRING initialization parameter.

Note: Setting the ownership to oracle:dba or grid:asmadmin is just one example that corresponds to the default settings. A non-default installation may require different settings. In general, the owner of the disk devices should be the same as the owner of the Oracle binary. The group ownership should be OSDBA of the ASM instance, which is defined at installation.
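To confirm (or adjust) the discovery path, a minimal sketch from the ASM instance; the '/dev/rhdisk*' pattern is only an example matching the AIX device names used here:

SQL> show parameter asm_diskstring

SQL> alter system set asm_diskstring = '/dev/rhdisk*' scope=both;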

To check the current disks attached to the relevant disk group

-----------------------------------------------------------

SELECT B.NAME, A.PATH, B.STATE, A.HEADER_STATUS, A.FREE_MB, A.TOTAL_MB

 FROM V$ASM_DISK A, V$ASM_DISKGROUP B

 where A.GROUP_NUMBER=B.GROUP_NUMBER

        AND B.NAME like '%DISKGROUPNAME%'

 order by path;

Check for the candidate disk

---------------------------

SELECT HEADER_STATUS,TOTAL_MB,PATH FROM V$ASM_DISK;

set linesize 200

column path format a40

 SQL> SELECT GROUP_NUMBER,DISK_NUMBER,REDUNDANCY,FAILGROUP ,HEADER_STATUS,state,TOTAL_MB,PATH FROM V$ASM_DISK;

GROUP_NUMBER DISK_NUMBER REDUNDA FAILGROUP                      HEADER_STATU STATE      TOTAL_MB PATH

------------ ----------- ------- ------------------------------ ------------ -------- ---------- ----------------------------------------

           0           4 UNKNOWN                                CANDIDATE    NORMAL            0 /dev/rhdisk5

           0           5 UNKNOWN                                CANDIDATE    NORMAL            0 /dev/rhdisk6

           1           0 UNKNOWN DATA1_0000                     MEMBER       NORMAL       512000 /dev/rhdisk1

           2           0 UNKNOWN FRA1_0000                      MEMBER       NORMAL       102400 /dev/rhdisk2

           3           0 UNKNOWN VOTING_0000                    MEMBER       NORMAL          500 /dev/rhdisk3

           3           1 UNKNOWN VOTING_0001                    MEMBER       NORMAL          500 /dev/rhdisk4

SELECT HEADER_STATUS,TOTAL_MB,PATH FROM V$ASM_DISK WHERE HEADER_STATUS LIKE '%CANDIDA%' ORDER BY PATH;

HEADER_STATU   TOTAL_MB PATH

------------ ---------- --------------------

CANDIDATE             0 /dev/rhdisk5

CANDIDATE             0 /dev/rhdisk6

 

Determine which disk group you want to add the disks to

--------------------------------------------

select name, total_mb, free_mb, state from v$asm_diskgroup;

NAME                             TOTAL_MB    FREE_MB STATE
------------------------------ ---------- ---------- -----------
DATA                              3014656     418080 MOUNTED
FRA                               1048576     998834 MOUNTED
OCRVOTE                             12288      11356 MOUNTED
SSDDISK                            524288     404805 MOUNTED
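To make it easier to spot which disk group has the least headroom, a percentage-free variant of the same query (a minimal sketch):

select name, total_mb, free_mb, round(100*free_mb/total_mb, 1) pct_free
from v$asm_diskgroup
order by pct_free;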

 

select dg.name dg_name, dg.state dg_state, dg.type, d.disk_number dsk_no, d.mount_status, d.header_status, d.mode_status,
       d.state, d.path, d.failgroup
from v$asm_disk d, v$asm_diskgroup dg
where dg.group_number(+) = d.group_number
order by dg_name, dsk_no;

DG_NAME       DG_STATE    TYPE       DSK_NO MOUNT_S HEADER_STATU MODE_ST STATE    PATH              FAILGROUP

--------------------- ------ ---------- ------- ------------ ------- -------- ------------------------------------

DATA1         MOUNTED     EXTERN          0 CACHED  MEMBER       ONLINE  NORMAL   /dev/rhdisk1    DATA1_0000

FRA1          MOUNTED     EXTERN          0 CACHED  MEMBER       ONLINE  NORMAL   /dev/rhdisk2    FRA1_0000

VOTING        MOUNTED     EXTERN          0 CACHED  MEMBER       ONLINE  NORMAL   /dev/rhdisk3    VOTING_0000

VOTING        MOUNTED     EXTERN          1 CACHED  MEMBER       ONLINE  NORMAL   /dev/rhdisk4    VOTING_0001

                                          4 CLOSED  CANDIDATE    ONLINE  NORMAL   /dev/rhdisk5

                                           5 CLOSED  CANDIDATE    ONLINE  NORMAL   /dev/rhdisk6

Add the candidate disks to the disk group

-----------------------------------------------

Log in as the grid user, invoke sqlplus / as sysasm, and then run the command below:

ALTER DISKGROUP FRA ADD DISK

'/dev/rhdisk5' NAME rhdisk5,

'/dev/rhdisk6' NAME rhdisk6;
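The ADD DISK triggers an automatic rebalance. You can watch its progress from the same SYSASM session; a minimal sketch:

select group_number, operation, state, power, sofar, est_work, est_minutes
from v$asm_operation;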

 

Add Disk to ASM diskgroup in Oracle 19c

Follow the steps below to add a disk to an ASM disk group in Oracle 19c.

1. Create ASM disk.
2. Check the ASM disks.
3. Add disk to ASM diskgroup.
4. Check the rebalance status.
5. Check the newly added disk in ASM Diskgroup

1. Create ASM disk.

Get the LUN name from the storage team.

LUN name: /dev/sda1

Create the new ASM disk:

$sudo oracleasm createdisk ORA_DATA_01 /dev/sda1

2. Check the ASM disks.

Check that the newly added disk is listed:

$sudo oracleasm listdisks
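As a cross-check from the ASM instance, the newly stamped ASMLib disk should show up in V$ASM_DISK with a PROVISIONED (or CANDIDATE) header status before it is added to a disk group; a sketch:

SQL> select path, header_status, os_mb from v$asm_disk where header_status in ('PROVISIONED','CANDIDATE');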

3. Add the disk to the ASM diskgroup.

$sqlplus / as sysasm

SQL> alter diskgroup ORA_DATA add disk '/dev/oracleasm/disks/ORA_DATA_01' NAME ORA_DATA_01

rebalance power 100;

4. Check the rebalance status.

$sqlplus / as sysasm

SQL> select * from v$asm_operation;

If no rows are returned, the rebalance is complete.

5. Check the newly added disk in ASM Diskgroup

set lines 999;
col diskgroup for a15
col diskname for a15
col path for a35
select a.name DiskGroup,b.name DiskName, b.total_mb, (b.total_mb-b.free_mb) Used_MB, b.free_mb,b.path,b.header_status
from v$asm_disk b, v$asm_diskgroup a
where a.group_number (+) =b.group_number
order by b.group_number,b.name;

Add Disk to ASM diskgroup in Oracle 19c - RAC with Data Guard (site example)

1) Allocate a new LUN to the RAC nodes Node1 and Node2 for database TESTPRD. The OVM/Linux team provides the WWPNs and fills in the storage proforma; the storage team then allocates the LUNs.

As long as the customer wants to retain a valid Data Guard configuration, whatever you do on the primary cluster must be mirrored with the same action on the standby node (here, Node3standby): allocate a LUN of the same size on both primary and standby.

 

So, steps 2 to 6 below are to be performed for both the RAC cluster and its standby/Data Guard node.

 

2) As root, on the first node of the cluster only, use the oracleasm program to allocate the disk to the oracrs:asmadmin owner and group (Linux team or OVM team?).

     The command is /etc/init.d/oracleasm createdisk <disk name> <path>

A step is missing here: the disks need to be partitioned first of all, via "fdisk".

 

This and the subsequent "oracleasm createdisk" command are to be performed by the DBA.

I would use the "oracleasm" executable in the path (/usr/bin) rather than "/etc/init.d/oracleasm".

3) Verify that the disk is available: # /etc/init.d/oracleasm listdisks. The output should show all ASM disks, including the new one (DBA team).

"oracleasm listdisks"

 

4) On all other nodes of the cluster, as root, scan for the new Oracle ASM disk (DBA team):

  "oracleasm scandisks"

 

5) Verify that the new disk is visible on both nodes (DBA team)

More informatively, you can get a complete overview of the ASM disks and the partitions they are assigned to via the following command:

"oracleasm listdisks | sed 's/^/oracleasm querydisk -p /' | sh "

 

This should give the same output on all cluster nodes.

 

6) Add the disk to the disk group for database TESTPRD (DBA team).

The disk group in question is called "FEO_DATA" on both primary and standby.

The easiest way to add the disk is to use the X-based GUI tool "asmca" when logged in as "oracrs".

 

The whole process is completely non-intrusive and will not affect running applications; no outage is required.
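If you prefer the command line to asmca, the equivalent SQL*Plus step is a single ALTER DISKGROUP; a sketch only, where FEO_DATA05 is a hypothetical label for the newly created ASMLib disk:

SQL> alter diskgroup FEO_DATA add disk 'ORCL:FEO_DATA05';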

 These instructions assume we are working on a multi-node RAC system. They do not cover the steps required to get the LUN added to the operating system as that can vary by the server environment.

 

    Have the storage or systems group allocate a new LUN to the RAC nodes. Ensure this is a single LUN visible to both nodes. If possible, it should appear in the same location on both servers, i.e. if it is at /dev/sdi on node 1, it should show as /dev/sdi on node 2. Disk placement is not absolutely critical when using ASMLIB, but I still recommend it: if the disks are mounted on different paths on the two nodes, it becomes much easier to make mistakes when managing the disks. If you do not use ASMLIB, you should use udev or another method of ensuring that the same disk always comes up in the same location, even after adding additional storage.
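Whichever method you use, you can verify from the ASM instance that every node discovers the same disks under the same paths; a minimal cluster-wide sketch using gv$asm_disk:

select inst_id, path, header_status, mount_status
from gv$asm_disk
order by path, inst_id;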

 

    NOTE: Do not use thin provisioning to allocate this LUN!

    Create a single primary partition containing the entire LUN and make sure it is visible on both RAC nodes.

    As root, on the first node of the cluster only, use the oracleasm program to allocate the disk to oracle.

     The command is /etc/init.d/oracleasm createdisk <disk name> <path>,

eg:/etc/init.d/oracleasm createdisk DATA03 /dev/sdi

    Use oracleasm as oracle to verify that the disk is available: # /etc/init.d/oracleasm listdisks. The output should show all ASM disks, including the new one:

 

    oracle (+ASM2)$ /etc/init.d/oracleasm listdisks

 

    DATA01

 

    DATA02

 

    DATA03

 

    FRA01

 

    GRID03

 

    GRID04

    On all other nodes of the cluster, as root, scan for the new oracle disk:

 

[root@kerbrac01 ~]# /etc/init.d/oracleasm scandisks

 

    Scanning the system for Oracle ASMLib disks: [ OK ]

    After running scandisk on the other nodes, run listdisks as oracle on each node to verify that the new disk is visible:

 

    oracle (+ASM1)$ /etc/init.d/oracleasm listdisks

 

    DATA01

    DATA02

    DATA03

    FRA01

     GRID03

 

    GRID04

    After verifying that the new disk is visible on all nodes, go back to the first node, connect as Oracle. Then set the environment to the ASM instance, and add the disk to the ASM diskgroup.


 

    oracle (racdb1)$ . oraenv

 

    ORACLE_SID = [racdb1] ? +ASM1

 

    The Oracle base has been changed from /opt/oracle to /u01/oracle

 

    kerbrac01:/home/oracle

    oracle (+ASM1)$ sqlplus / as sysasm

 

    SQL> alter diskgroup DATA01 add disk 'ORCL:DATA03';

 

    Diskgroup altered

 SQL>

    After the disk is added, ASM balances the data in the diskgroup across all of its disks so that each disk holds about the same amount of data. This operation is called rebalancing. The rebalance progress can be viewed by selecting from the v$asm_operation view:

    SQL> Select * from v$asm_operation;
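If the rebalance is estimated to take too long, its power can be raised on the fly (at the cost of more I/O); a sketch, with the value 8 only as an example:

SQL> alter diskgroup DATA01 rebalance power 8;

SQL> select operation, state, power, est_minutes from v$asm_operation;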