Thursday 12 March 2020

To add space on ASM in an HA Linux environment, step by step

To add space on ASM in an HA Linux environment
------------------------------------------
Environment Details

There is a two-node RAC database with a single-node standby in a Linux environment.

The request is to add 50 GB on the primary database servers and 50 GB on the standby database server.

We require 50 GB of disk space on the primary RAC database servers Node1 and Node2, and 50 GB on Node3 (the Data Guard node) - 100 GB in total.


This is a non-disruptive activity.

High Level Plan
--------------------

The list of actions required to add disk space at the ASM level for RAC database TestRAC (a command-level sketch of the whole sequence follows the list):

1) 50 GB new LUN allocation to the RAC nodes Node1 and Node2, for RAC database TestRAC. (Storage Team)

2) 50 GB new LUN allocation to Node3, for the standby database. (Storage Team)
3) The allocated LUNs need to be attached to each server. (OVM Team)
4) The disks need to be partitioned via "fdisk" on Node1 and Node3. (Linux Team)
5) Once the disk has been partitioned, "partprobe /dev/xvdo" should be issued on Node2 (the second primary node). (Linux Team)

and the "oracleasm scandisks" command likewise should have been issued on Node2 once "oracleasm createdisk" had been run on Node1
Has much the same effect as "partprobe" - ensures the other nodes in the cluster get to see the changes made on the first node.

6) As root, on the first node of the cluster only, use the oracleasm program to mark the disk for ASM (owner oracrs, group asmadmin). (DBA Team)
     The command is: oracleasm createdisk <disk name> <path>

7) Verify that the disk is available: # /usr/bin/oracleasm listdisks. The output should show all ASM disks, including the new one. (DBA team)
        "oracleasm listdisks"

8) On all other nodes of the cluster, as root, scan for the new Oracle ASM disk. (DBA team)
     oracleasm scandisks (second primary node)

9) Verify that the new disk is visible on both cluster nodes. (DBA team)
     "oracleasm listdisks | sed 's/^/oracleasm querydisk -p /' | sh "
      This should give the same output on all cluster nodes.

10) Add the disk to the diskgroup for database TestRAC. (DBA team)
     The diskgroup in question is called "TestRAC_DATA" on both primary and standby; the new ASM disk will be named "TestRAC_DATA3".
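
As a rough command-level sketch of steps 4 to 10 above (this assumes the new LUN shows up as /dev/xvdo on every node, as it did in this environment, and that the new ASM disk and diskgroup are named TestRAC_DATA3 and TestRAC_DATA):

    # Node1 and Node3, as root (Linux Team) - partition the new LUN
    fdisk /dev/xvdo              # n, p, 1, accept defaults, w

    # Node2, as root (Linux Team) - re-read the partition table
    partprobe /dev/xvdo

    # Node1 and Node3, as root (DBA Team) - stamp the new partition for ASM
    oracleasm createdisk TestRAC_DATA3 /dev/xvdo1

    # Node2, as root (DBA Team) - make the new ASM disk visible
    oracleasm scandisks
    oracleasm listdisks

    # On the ASM instance, as sysasm (DBA Team) - add the disk to the diskgroup
    # alter diskgroup TestRAC_DATA add disk 'ORCL:TestRAC_DATA3';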

Detailed Plan
------------------

Precheck
-----------
[oracrs@Node3 dev]$ cd /usr/bin
[oracrs@Node3 bin]$ oracleasm listdisks
ACFS_CTRLM
TestRAC_DATA1
TestRAC_DATA2
TestRAC_FRA1
OCR_VOTE01
[oracrs@Node3 bin]$


[root@Node3 init.d]# oracleasm listdisks | sed 's/^/oracleasm querydisk -p /' | sh
Disk "ACFS_CTRLM" is a valid ASM disk
/dev/xvdl1: LABEL="ACFS_CTRLM" TYPE="oracleasm"
Disk "TestRAC_DATA1" is a valid ASM disk
/dev/xvdh1: LABEL="TestRAC_DATA1" TYPE="oracleasm"
Disk "TestRAC_DATA2" is a valid ASM disk
/dev/xvdi1: LABEL="TestRAC_DATA2" TYPE="oracleasm"
Disk "TestRAC_FRA1" is a valid ASM disk
/dev/xvdj1: LABEL="TestRAC_FRA1" TYPE="oracleasm"
Disk "OCR_VOTE01" is a valid ASM disk
/dev/xvdk1: LABEL="OCR_VOTE01" TYPE="oracleasm"
[root@Node3 init.d]#




oracleasm querydisk -d TestRAC_DATA2

[root@Node3 init.d]# oracleasm querydisk -d TestRAC_DATA2
Disk "TestRAC_DATA2" is a valid ASM disk on device [202,129]
[root@Node3 init.d]#

SQL> select name,group_number,mount_status,header_status,state,name,path from v$asm_disk;

NAME                           GROUP_NUMBER MOUNT_S HEADER_STATU STATE    NAME
------------------------------ ------------ ------- ------------ -------- ------------------------------
PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ACFS_CTRLM                                1 CACHED  MEMBER       NORMAL   ACFS_CTRLM
ORCL:ACFS_CTRLM


TestRAC_DATA1                                 4 CACHED  MEMBER       NORMAL   TestRAC_DATA1
ORCL:TestRAC_DATA1

TestRAC_DATA2                                 4 CACHED  MEMBER       NORMAL   TestRAC_DATA2
ORCL:TestRAC_DATA2


NAME                           GROUP_NUMBER MOUNT_S HEADER_STATU STATE    NAME
------------------------------ ------------ ------- ------------ -------- ------------------------------
PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
TestRAC_FRA1                                  5 CACHED  MEMBER       NORMAL   TestRAC_FRA1
ORCL:TestRAC_FRA1

OCR_VOTE01                                6 CACHED  MEMBER       NORMAL   OCR_VOTE01
ORCL:OCR_VOTE01





SQL> select GROUP_NUMBER,DISK_NUMBER,TOTAL_MB,FREE_MB,path from V$ASM_DISK;

GROUP_NUMBER DISK_NUMBER   TOTAL_MB    FREE_MB
------------ ----------- ---------- ----------
PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
           1           0      40959        967
ORCL:ACFS_CTRLM

           2           0     102399      96885
           4           0     153599      70158
ORCL:TestRAC_DATA1

           4           1     153599      70147
ORCL:TestRAC_DATA2


GROUP_NUMBER DISK_NUMBER   TOTAL_MB    FREE_MB
------------ ----------- ---------- ----------
PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
           5           0     102399          0
ORCL:TestRAC_FRA1

           6           0      10239       5620
ORCL:OCR_VOTE01



SQL>

Check the newly created block device / raw device on all nodes using 'lsblk' (or 'fdisk -l').
It will show all raw devices, including the new one.
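
For example, on each node (the device name /dev/xvdo below is what the new disk turned out to be on these servers; it could differ elsewhere):

    lsblk                   # the new 50G disk should appear with no partition under it yet
    fdisk -l /dev/xvdo      # alternative check of the same device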


Node1, Node2 (primary nodes) and Node3 (Data Guard node)


[oracrs@Node3 dev]$
[oracrs@Node3 dev]$ lsblk
NAME                                 MAJ:MIN   RM  SIZE RO TYPE MOUNTPOINT
xvdb                                 202:16     0   60G  0 disk
├─xvdb1                              202:17     0  500M  0 part /boot
└─xvdb2                              202:18     0 59.5G  0 part
  ├─vg_root-lv_swap (dm-0)           253:0      0    6G  0 lvm  [SWAP]
  ├─vg_root-lv_root (dm-1)           253:1      0    2G  0 lvm  /
  ├─vg_root-lv_home (dm-5)           253:5      0    2G  0 lvm  /home
  ├─vg_root-lv_opt (dm-6)            253:6      0    4G  0 lvm  /opt
  ├─vg_root-lv_optIbmItm (dm-7)      253:7      0    4G  0 lvm  /opt/IBM/ITM
  ├─vg_root-lv_optTivoli (dm-8)      253:8      0  500M  0 lvm  /opt/Tivoli
  ├─vg_root-lv_tmp (dm-9)            253:9      0    5G  0 lvm  /tmp
  ├─vg_root-lv_usr (dm-10)           253:10     0   12G  0 lvm  /usr
  ├─vg_root-lv_var (dm-11)           253:11     0    6G  0 lvm  /var
  ├─vg_root-lv_varAdmPerfmgr (dm-12) 253:12     0  500M  0 lvm  /var/adm/perfmgr
  ├─vg_root-lv_varTmp (dm-13)        253:13     0    2G  0 lvm  /var/tmp
  ├─vg_root-lv_oswatcher (dm-14)     253:14     0    7G  0 lvm  /var/log/oswatcher
  ├─vg_root-lv_controlm (dm-15)      253:15     0    3G  0 lvm  /opt/ctm700
  └─vg_root-lv_best1 (dm-16)         253:16     0  7.1G  0 lvm  /opt/bmc/Patrol3
xvdc                                 202:32     0   90G  0 disk
├─vg_app-lv_oracle (dm-3)            253:3      0   20G  0 lvm  /oracle
└─vg_app-lv_u01 (dm-4)               253:4      0   80G  0 lvm  /u01
xvdd                                 202:48     0   60G  0 disk
└─vg_grid-lv_u02 (dm-2)              253:2      0   60G  0 lvm  /u02
xvde                                 202:64     0  100G  0 disk
└─xvde1                              202:65     0  100G  0 part
xvdf                                 202:80     0  100G  0 disk
└─xvdf1                              202:81     0  100G  0 part
xvdg                                 202:96     0  100G  0 disk
└─xvdg1                              202:97     0  100G  0 part
xvdh                                 202:112    0  150G  0 disk
└─xvdh1                              202:113    0  150G  0 part
xvdi                                 202:128    0  150G  0 disk
└─xvdi1                              202:129    0  150G  0 part
xvdj                                 202:144    0  100G  0 disk
└─xvdj1                              202:145    0  100G  0 part
xvdk                                 202:160    0   10G  0 disk
└─xvdk1                              202:161    0   10G  0 part
xvdl                                 202:176    0   40G  0 disk
└─xvdl1                              202:177    0   40G  0 part
xvdm                                 202:192    0    3G  0 disk
└─vg_root-lv_optIbmItm (dm-7)        253:7      0    4G  0 lvm  /opt/IBM/ITM
xvdn                                 202:208    0   41G  0 disk
└─vg_app-lv_u01 (dm-4)               253:4      0   80G  0 lvm  /u01
sr0                                   11:0      1 1024M  0 rom
asm!acfsvol1-110                     252:56321  0   39G  0 disk
xvdo                                 202:224    0   50G  0 disk
[oracrs@Node3 dev]$


---new disk--


[oracrs@Node3 dev]$ ls -ltr xvdo
brw-rw---- 1 root disk 202, 224 Mar 11 14:33 xvdo


Update from the Linux Team:

The disk has been added on Node1 - the partition has been created: /dev/xvdo1
The disk partition has been done on Node3 - /dev/xvdo1

If they partition the disk correctly, we would expect to see a disk partition called /dev/xvdo1
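
For reference, a minimal fdisk session to create that single partition would look something like this (assuming the whole 50 GB disk goes into one partition):

    # as root, on Node1 and Node3
    fdisk /dev/xvdo
       n         # new partition
       p         # primary
       1         # partition number 1
       <Enter>   # default first sector
       <Enter>   # default last sector - use the whole disk
       w         # write the partition table and exit

    # as root, on Node2, so it picks up the new partition table
    partprobe /dev/xvdo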


We would name the ASM disk "TestRAC_DATA3" as it is going to be added to the diskgroup "TestRAC_DATA".



The DBA tasks start now, to add space on ASM for RAC database TestRAC (standby node first). The standby database is a single node with ASM.

1) Create ASM disk

/etc/init.d/oracleasm createdisk <disk name> <path>

oracleasm createdisk TestRAC_DATA3 /dev/xvdo1

[root@Node3 bin]# ls -ltr /dev/xvdo1
brw-rw---- 1 root disk 202, 225 Mar 11 15:53 /dev/xvdo1
[root@Node3 bin]# oracleasm createdisk TestRAC_DATA3 /dev/xvdo1
Writing disk header: done
Instantiating disk: done
[root@Node3 bin]# oracleasm listdisks
ACFS_CTRLM
TestRAC_DATA1
TestRAC_DATA2
TestRAC_DATA3
TestRAC_FRA1
OCR_VOTE01
[root@Node3 bin]#

To verify the disk on the standby node:

SQL> select group_number, name from v$asm_diskgroup;
GROUP_NUMBER NAME
------------ ------------------------------
           1 ACFS_CTRLM
           5 TestRAC_FRA
           4 TestRAC_DATA
           6 OCR_VOTE
          
6 rows selected.
SQL> select name from v$asm_disk where group_number = '4';
NAME
------------------------------
TestRAC_DATA1
TestRAC_DATA2
TestRAC_DATA3
SQL>


We think we need to use the 'ORCL:' prefix, since that is how the disk paths appear in v$asm_disk:

NAME                           GROUP_NUMBER MOUNT_S HEADER_STATU STATE    NAME
------------------------------ ------------ ------- ------------ -------- ------------------------------
PATH
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
TestRAC_DATA1                                 4 CACHED  MEMBER       NORMAL   TestRAC_DATA1
ORCL:TestRAC_DATA1
TestRAC_DATA2                                 4 CACHED  MEMBER       NORMAL   TestRAC_DATA2
ORCL:TestRAC_DATA2
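
A quick way to confirm how ASM sees the new disk before it is added to the diskgroup is to query v$asm_disk by path; on an oracleasm-provisioned disk the path should carry the ORCL: prefix, and the header status is normally PROVISIONED until the disk joins a diskgroup:

    SQL> select name, header_status, path
         from v$asm_disk
         where path = 'ORCL:TestRAC_DATA3';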


To add the disk to the diskgroup


alter diskgroup TestRAC_DATA  add disk 'TestRAC_DATA3';
or
alter diskgroup TestRAC_DATA  add disk 'ORCL:TestRAC_DATA3';

or

Alternatively, instead of running the SQL, log onto Node3 as "oracrs" and type asmca.
Then right-click on diskgroup TestRAC_DATA and select "Add Disks".
From the "Add Disks" screen, select the disk TestRAC_DATA3.
Click OK.
Once the disk has been added, you are returned to the diskgroups screen, with the amount of space available in the diskgroups appropriately updated.
Then just click "Exit".
It's the way every single disk has been added to the various diskgroups... until now.
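
Tying the SQL route together, a minimal SQL*Plus sketch run against the ASM instance on Node3 would be (the rebalance runs in the background and can be watched in v$asm_operation):

    $ sqlplus / as sysasm

    SQL> alter diskgroup TestRAC_DATA add disk 'ORCL:TestRAC_DATA3';

    -- confirm the extra space has appeared
    SQL> select name, total_mb, free_mb from v$asm_diskgroup where name = 'TestRAC_DATA';

    -- watch the rebalance until it finishes (no rows returned = complete)
    SQL> select operation, state, est_minutes from v$asm_operation;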


The same process applies on the primary nodes (Node1 and Node2).

----------------------

It is the exact same process, except that you issue "oracleasm createdisk" on one VM and "oracleasm scandisks" on the other, which ensures the new storage can be seen by both VMs.
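
In other words, on the primary side the sequence would look something like this (assuming the shared LUN also appears as /dev/xvdo, partitioned as /dev/xvdo1, on Node1 and Node2):

    # Node1, as root
    oracleasm createdisk TestRAC_DATA3 /dev/xvdo1

    # Node2, as root
    partprobe /dev/xvdo
    oracleasm scandisks
    oracleasm listdisks          # TestRAC_DATA3 should now be listed on both nodes

    # then, from either node, against the ASM instance as sysasm:
    # alter diskgroup TestRAC_DATA add disk 'ORCL:TestRAC_DATA3';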
