Friday, 30 June 2023

Data Migration from One Storage Server to Another Storage Server


Migrate ASM diskgroups from one SAN to another SAN without downtime.

Summary: We have a 2-node RAC database.

Exact Steps To Migrate ASM Diskgroups To Another SAN/Disk-Array/DAS/etc Without Downtime. (Doc ID 837308.1)

We can use two methods to migrate data from one storage server to another:

A) Storage Method

1. Put the databases in BEGIN BACKUP mode (see the commands after this list).

2. Create a replication pair between the source and target ASM storage LUNs.

3. Once the sync is 100% complete, shut down the databases, applications, and host servers.

4. Split the pair, remove the source LUNs, and map the target LUNs to the current hosts.

5. Bring up the host servers and Grid Infrastructure, then start the databases.

6. Perform a sanity check and take the databases out of backup mode.
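For reference, steps 1 and 6 use the standard hot-backup commands, run against each database (a minimal sketch; adapt to your environment).

Step 1 (before creating the replication pair):

SQL> alter database begin backup;

Step 6 (after the databases are back up on the new storage):

SQL> alter database end backup;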


B) Oracle ASM Method (if the database size is approx. 100 TB)

-----------

1) Back up all your databases and validate the backups (always required to protect your data).

2) Add the new path (new disks from the new storage) to your asm_diskstring so the disks are recognized by ASM:

SQL> show parameter asm_diskstring

SQL> alter system set asm_diskstring = '/dev/rhdisk*', '/dev/rhdisknew*';

Where: '/dev/rhdisk*' are the current disks.

Where: '/dev/rhdisknew*' are the new disks.

3) Confirm that the new disks are being detected by ASM:

SQL> select path from v$asm_disk;

SQL> select * from v$asm_client;

4) The storage team will create LUNs on the new storage server and share the WWNs of the new LUNs with the Unix team.

5) The Unix team will create the new ASM disks as per the naming and size requirements.

6) The Oracle team will verify that the newly created ASM disks are visible on both nodes and that the paths of the new disks are correct.

7) Confirm the disks are readable (I/O) at the OS level (as the oracle or grid OS user) from each node.

8) Verify that KFED can read the candidate disks at the OS level (as the oracle or grid OS user) from each node.
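A minimal sketch of the checks in steps 7 and 8, assuming the new devices are visible as /dev/rhdisknew* (the paths are illustrative); run as the grid or oracle OS user on each node:

dd if=/dev/rhdisknew1 of=/dev/null bs=1048576 count=100   # basic read test at the OS level
kfed read /dev/rhdisknew1                                 # a never-used candidate disk typically reports KFBTYP_INVALID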

9) Add the first batch of 8 ASM disks to the EASY_DB_FRA and EASY_DB_DATA1 diskgroups and rebalance with power 4:

SQL> alter diskgroup <diskgroup name> add disk
'<new disk 1>',
'<new disk 2>',
'<new disk 3>',
'<new disk 4>',
.
.
.
'<new disk N>' rebalance power <#>;

10) Monitor the database for any I/O latency until the rebalance completes; reduce the power limit if latency is observed:

select * from v$asm_operation;
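If latency is observed, the power of a running rebalance can be lowered by reissuing the rebalance clause; for example (diskgroup name taken from step 9, power value illustrative):

SQL> alter diskgroup EASY_DB_DATA1 rebalance power 2;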

11) Remove the first batch of 8 old ASM data disks from the EASY_DB_FRA and EASY_DB_DATA1 diskgroups:


SQL> alter diskgroup <diskgroup name> drop disk
<disk name A>,
<disk name B>,
<disk name D>,
<disk name E>,
.
.
.
<disk name X> rebalance power <#>;

12) Monitor the database for any I/O latency until the rebalance completes; reduce the power limit if latency is observed.

13) The Oracle DBA should confirm that none of the old ASM disks remain in ASM and share the list of removed disks with the Unix team.

14) The Unix team will remove the ASM disks from the database servers as per the list shared by the DBA.
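For step 13, one way to confirm that the old disks are no longer part of any diskgroup is to check their header status; dropped disks report FORMER:

SQL> select path, header_status from v$asm_disk where header_status = 'FORMER';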


Repeat the same steps from 1 to 11 for the next batch of 8 disks (EASY_DB_DATA1).


1) Create new diskgroups for redo (NEW_REDO_01 and NEW_REDO_02)

2) Add new redo members for each thread in the new diskgroups

3) Delete the old redo members from the original diskgroups
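A minimal sketch of these redo steps; the thread/group numbers and the 1 GB size are illustrative only:

SQL> alter database add logfile thread 1 group 11 ('+NEW_REDO_01','+NEW_REDO_02') size 1g;
SQL> alter database add logfile thread 2 group 12 ('+NEW_REDO_01','+NEW_REDO_02') size 1g;

Switch logs until the old groups become INACTIVE, then drop them one by one:

SQL> alter system archive log current;
SQL> alter database drop logfile group 1;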


Activities to be performed with Downtime


1) Create a new diskgroup for the OCR and voting disks (NEW_OCRDATA)

2) Shut down the cluster and start it in exclusive mode

3) Move the OCR and voting disks to the new diskgroup

4) Shut down the clusterware and restart it normally on all nodes
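A sketch of the commands typically used for these steps, run as root. +NEW_OCRDATA is the diskgroup from step 1; +OLD_OCRDATA is a hypothetical name for the current OCR/vote diskgroup. Verify against the relevant My Oracle Support notes for your version before use:

crsctl stop crs                         # on every node
crsctl start crs -excl                  # on one node, start the stack in exclusive mode
ocrconfig -add +NEW_OCRDATA             # add the new OCR location
ocrconfig -delete +OLD_OCRDATA          # remove the old OCR location (hypothetical name)
crsctl replace votedisk +NEW_OCRDATA    # relocate the voting files
crsctl stop crs                         # stop the exclusive-mode stack
crsctl start crs                        # restart the clusterware normally on all nodes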



ASM Disk Migration to Different Storage


This is a zero-downtime activity.

Below are the 3 disks allocated on both RAC nodes, which are in turn allocated to the CRS, DATA, and FRA diskgroups:

Device Boot      Start         End      Blocks   Id  System

------------------------------------------------------------

/dev/sdb1               1         915     7349706   83  Linux

/dev/sdb2             916        1830     7349737+  83  Linux

/dev/sdb3            1831        2610     6265350   83  Linux


Below are the diskgroup details

DISKGROUP  DISKNAME       TOTAL_MB    USED_MB    FREE_MB PATH                           HEADER_STATU

---------- ------------ ---------- ---------- ---------- ------------------------------ ------------

CRS        CRS_0000           7177        396       6781 /dev/oracleasm/disks/CRS1      MEMBER

DATA       DATA_0000          7177       2009       5168 /dev/oracleasm/disks/DATA1     MEMBER

FRA        FRA_0000           6118        366       5752 /dev/oracleasm/disks/FRA1      MEMBER


We are going to replace the DATA1 disk under the DATA diskgroup with the DATA2 disk. The new disk has been allocated from the new storage. Let us first create the ASM disk:


[root@oraracn1 ~]# oracleasm createdisk DATA2 /dev/sdc1

Writing disk header: done

Instantiating disk: done

Connect to ASM via sqlplus and add the new diskstring. In our example, the new disk location is the same as that of the old disks, so there is no need to change it. If your new disks use a different path, add it to the ASM_DISKSTRING parameter, keeping both the old and new paths.


sqlplus / as sysasm

SQL> alter system set asm_diskstring = '/dev/oracleasm/disks/*', '/dev/new_loc/*';

SQL> select path from v$asm_disk;


Check the ASM disk details via the below query. The new disk's header status must be PROVISIONED:

set pages 40000 lines 120

col PATH for a30

select DISK_NUMBER,MOUNT_STATUS,HEADER_STATUS,MODE_STATUS,STATE,

PATH FROM V$ASM_DISK;


DISK_NUMBER MOUNT_S HEADER_STATU MODE_ST STATE    PATH

----------- ------- ------------ ------- -------- ------------------------------

          0 CLOSED  PROVISIONED  ONLINE  NORMAL   /dev/oracleasm/disks/DATA2

          0 CACHED  MEMBER       ONLINE  NORMAL   /dev/oracleasm/disks/FRA1

          0 CACHED  MEMBER       ONLINE  NORMAL   /dev/oracleasm/disks/DATA1

          0 CACHED  MEMBER       ONLINE  NORMAL   /dev/oracleasm/disks/CRS1


We will now add DATA2 to the DATA diskgroup and later remove the DATA1 disk.

SQL> alter diskgroup DATA add disk '/dev/oracleasm/disks/DATA2';

OR

SQL> alter diskgroup DATA add disk '/dev/oracleasm/disks/DATA2' rebalance power 20;

Diskgroup altered.


Wait for the re-balance operation to complete:

SQL> Select operation, state, est_work, est_minutes from v$asm_operation; 


OPERA STAT   EST_WORK EST_MINUTES

----- ---- ---------- -----------

REBAL RUN        1175           0

Once the re-balance operation is complete, check the disk details:


set lines 999;

col diskgroup for a10

col diskname for a12

col path for a30

select a.name DiskGroup,b.name DiskName, b.total_mb, (b.total_mb-b.free_mb) Used_MB, b.free_mb,b.path, 

b.header_status

from v$asm_disk b, v$asm_diskgroup a 

where a.group_number (+) =b.group_number 

order by b.group_number,b.name;


DISKGROUP  DISKNAME       TOTAL_MB    USED_MB    FREE_MB PATH                           HEADER_STATU

---------- ------------ ---------- ---------- ---------- ------------------------------ ------------

CRS        CRS_0000           7177        396       6781 /dev/oracleasm/disks/CRS1      MEMBER

DATA       DATA_0000          7177        838       6339 /dev/oracleasm/disks/DATA1     MEMBER

DATA       DATA_0001         10236       1183       9053 /dev/oracleasm/disks/DATA2     MEMBER

FRA        FRA_0000           6118        366       5752 /dev/oracleasm/disks/FRA1      MEMBER


Observe that both DATA1 and DATA2 are now part of the DATA diskgroup. Now we can remove the old disk DATA1 from the diskgroup:


SQL> alter diskgroup DATA drop disk 'DATA_0000'; 

Diskgroup altered.


Wait for the re-balance operation to complete:


SQL> Select operation, state, est_work, est_minutes from v$asm_operation; 


OPERA STAT   EST_WORK EST_MINUTES

----- ---- ---------- -----------

REBAL RUN         836           0

Once the re-balance operation is complete, check the disk details via the below query; the DATA1 disk must now show a header status of FORMER:


set lines 999;

col diskgroup for a10

col diskname for a12

col path for a30

select a.name DiskGroup,b.name DiskName, b.total_mb, (b.total_mb-b.free_mb) Used_MB, b.free_mb,b.path, 

b.header_status

from v$asm_disk b, v$asm_diskgroup a 

where a.group_number (+) =b.group_number 

order by b.group_number,b.name;


DISKGROUP  DISKNAME       TOTAL_MB    USED_MB    FREE_MB PATH                           HEADER_STATU

---------- ------------ ---------- ---------- ---------- ------------------------------ ------------

                                 0          0          0 /dev/oracleasm/disks/DATA1     FORMER

CRS        CRS_0000           7177        396       6781 /dev/oracleasm/disks/CRS1      MEMBER

DATA       DATA_0001         10236       1183       9053 /dev/oracleasm/disks/DATA2     MEMBER

FRA        FRA_0000           6118        366       5752 /dev/oracleasm/disks/FRA1      MEMBER


You can later choose to completely drop the DATA1 ASM disk via the below command and ask the storage team to reclaim the underlying LUN:


[root@oraracn1 ~]# oracleasm deletedisk DATA1

You can also achieve the above via ASMCA, but make sure you monitor the re-balancing manually!


As per the Oracle Support document (Doc ID 837308.1):


SOLUTION

If you plan to replace the current disks associated with your diskgroups with a new storage, this operation can be accomplished without any downtime by following the next steps:

1) Back up all your databases and validate the backups (always required to protect your data).

2) Add the new path (new disks from the new storage) to your asm_diskstring to be recognized by ASM:

Example:

SQL> alter system set asm_diskstring = '/dev/emcpowerc*' , '/dev/emcpowerh*';

Where: '/dev/emcpowerc*' are the current disks.
Where: '/dev/emcpowerh*' are the new disks.


3) Confirm that the new disks are being detected by ASM:

SQL> select path from v$asm_disk;

 

4) Validate all the new disks as described in the following document:
 

How To Add a New Disk(s) to An Existing Diskgroup on RAC Cluster or Standalone ASM Configuration (Best Practices). (Doc ID 557348.1)

 
5) Add the new disks to your desired diskgroup:

SQL> alter diskgroup <diskgroup name> add disk
'<new disk 1>',
'<new disk 2>',
'<new disk 3>',
'<new disk 4>',
.
.
.
'<new disk N>' rebalance power <#>;



6) Then wait until the rebalance operation completes:

SQL> select * from v$asm_operation;
SQL> select * from gv$asm_operation;


7) Finally, remove the old disks:

SQL> alter diskgroup <diskgroup name> drop disk
<disk name A>,
<disk name B>,
<disk name D>,
<disk name E>,
.
.
.
<disk name X>  rebalance power <#>;


8) Then wait until the rebalance operation completes:

SQL> select * from v$asm_operation;
SQL> select * from gv$asm_operation;



9) Done, your ASM diskgroups and database have been migrated to the new storage.

Note: Alternatively, we can execute the add disk and drop disk statements in one operation; that way only one rebalance operation will be started, as follows:

SQL> alter diskgroup <diskgroup name>
add disk '<new device physical name 1>', .., '<new device physical name N>'
drop disk <old disk logical name 1>, <old disk logical name 2>, ..,<old disk logical name N>
rebalance power <#>;


This is more efficient than issuing separate add disk and drop disk statements, because the data is relocated in a single rebalance pass.
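As a concrete illustration using the disk and diskgroup names from the worked example above (the power value is arbitrary):

SQL> alter diskgroup DATA
     add disk '/dev/oracleasm/disks/DATA2'
     drop disk DATA_0000
     rebalance power 8;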

 

Note 1: On 10g, a manual rebalance operation is required to restart the diskgroup rebalance and expel the disk(s), because on 10g, if something goes wrong during disk expulsion (e.g. a hang), ASM will not restart the rebalance automatically (this was enhanced in 11g and 12c). Therefore, you will need to start a manual rebalance operation as follows:
SQL> alter diskgroup <diskgroup name> rebalance power 11;
  


 

Note 2: Disks from the old SAN/Disk-Array/DAS/etc. are finally expelled from the diskgroup(s) once the rebalance operation (from the drop operation) completes and HEADER_STATUS = FORMER is reported in the v$asm_disk view.

