
Sunday, November 10, 2013

Migrating disks within coordinator disk group from one array to another.

Last week I was working on storage migration tasks in which I had to migrate the disks within the coordinator disk group from an old array to a newly deployed array.

It was my first time doing this task, so I decided to note down the steps, and after completing it successfully I decided to share them with all my friends out there!

There may be several ways to do this, but the steps below worked very well for me. If anyone has a better set of instructions, please do share.

Let's start then -

1. If VCS is running, shut it down on all nodes (-force stops VCS but leaves the applications running):

# hastop -all -force

2. Stop I/O fencing on all nodes. This removes any registration keys on the disks.

# /etc/init.d/vxfen stop   (on all 3 cluster nodes)
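
If you want to be sure fencing is really down before touching the disk group, a quick sanity check is to look at the GAB port memberships; port b (the fencing driver) should no longer be listed once vxfen is stopped:

# gabconfig -a   (on all 3 cluster nodes)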

3. Import the coordinator disk group. The file /etc/vxfendg includes the name of the disk group (for example, vxfencoorddg) that contains the coordinator disks, so use the command -

# vxdg -tfC import `cat /etc/vxfendg`
                                       OR
# vxdg -tfC import vxfencoorddg

Where:

-t specifies that the disk group is imported only until the system restarts.
-f specifies that the import is to be done forcibly, which is necessary if one or more disks are not accessible.
-C specifies that any import locks are removed.
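
Once imported, a quick check confirms the disk group is visible on the node where you ran the import (just a sanity check, not a required step):

# vxdg list | grep vxfencoorddg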


4. Turn off the coordinator attribute value for the coordinator disk group.

# vxdg -g vxfencoorddg set coordinator=off

5. Remove old disks and add new disks

    First remove N-1 disks (six of the seven here) from the fence disk group; at least one disk has to stay in place so the disk group itself survives the swap
 
       # vxdg -g vxfencoorddg rmdisk vxfencoorddg01
       # vxdg -g vxfencoorddg rmdisk vxfencoorddg02
       # vxdg -g vxfencoorddg rmdisk vxfencoorddg03
       # vxdg -g vxfencoorddg rmdisk vxfencoorddg04
       # vxdg -g vxfencoorddg rmdisk vxfencoorddg05
       # vxdg -g vxfencoorddg rmdisk vxfencoorddg06

        Then add six new disks to the fence disk group. First confirm the new LUNs are visible:

       # vxdisk list | egrep 'apevmx13_139|apevmx13_140|apevmx13_143|apevmx13_145|apevmx14_139|apevmx14_140|apevmx14_141'
    apevmx13_139 auto:cdsdisk    -            -            online
    apevmx13_140 auto:cdsdisk    -            -            online
    apevmx13_143 auto:cdsdisk    -            -            online
    apevmx13_145 auto:cdsdisk    -            -            online
    apevmx14_139 auto:cdsdisk    -            -            online
    apevmx14_140 auto:cdsdisk    -            -            online
    apevmx14_141 auto:cdsdisk    -            -            online


    # vxdg -g vxfencoorddg adddisk vxfencoorddg01=apevmx13_139
    # vxdg -g vxfencoorddg adddisk vxfencoorddg02=apevmx13_140
    # vxdg -g vxfencoorddg adddisk vxfencoorddg03=apevmx13_143
    # vxdg -g vxfencoorddg adddisk vxfencoorddg04=apevmx13_145
    # vxdg -g vxfencoorddg adddisk vxfencoorddg05=apevmx14_139
    # vxdg -g vxfencoorddg adddisk vxfencoorddg06=apevmx14_140

   
    Remove the remaining disk, which is still on the old enclosure apedmx06

    # vxdg -g vxfencoorddg rmdisk vxfencoorddg07

    Add the 7th disk from enclosure apevmx14

    # vxdg -g vxfencoorddg adddisk vxfencoorddg07=apevmx14_141
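
    Before switching the coordinator flag back on, it's worth confirming that all seven disks in the disk group now come from the new arrays (apevmx13/apevmx14 in my case):

    # vxdg -g vxfencoorddg list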

6. Set the coordinator attribute value as "on" for the coordinator disk group.

# vxdg -g vxfencoorddg set coordinator=on

7. Run disk scan on all nodes

# vxdisk scandisks   (Run on all cluster nodes)

8. Check if fencing disks are visible on all nodes

# vxdisk -o alldgs list | grep fen

9. After replacing disks in a coordinator disk group, deport the disk group:

# vxdg deport `cat /etc/vxfendg`
                                 OR
# vxdg deport vxfencoorddg

10. Verify that the fencing disk group is deported

# vxdisk -o alldgs list | grep fen

11. On each node in the cluster, start the I/O fencing driver:

# /etc/init.d/vxfen start  (on all 3 cluster nodes)

12. Start VCS on all cluster nodes.

# hastart  (on all 3 cluster nodes)
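
Once VCS is back up, a couple of quick checks confirm that the cluster and fencing are healthy again (run from any node):

# hastatus -sum
# vxfenadm -d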

That's it, these 12 steps take you through migrating the disks within the coordinator disk group from one array to another.

HTH someone!

Saturday, November 2, 2013

Adding UFS, ZFS, VxVM FS, Raw FS, LOFS to Non-Global Zone - Some useful examples

In day-to-day administration we deal with tasks like adding a raw device to a zone, delegating ZFS datasets to a non-global zone, adding a filesystem/volume, etc.

In this post I'll only be talking about the different types of filesystem operations associated with zones.

Before we start I would like to reiterate - Zones are cool and dynamic !!!

So let's start with -

Adding UFS filesystem to Non-Global Zone
____________________________________

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> set special=/dev/md/dsk/d100
zonecfg:zone1:fs> set raw=/dev/md/rdsk/d100
zonecfg:zone1:fs> set type=ufs
zonecfg:zone1:fs> add options [nodevices,logging]
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
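
One thing worth keeping in mind (using the d100 metadevice from the example above): the device must already contain a UFS filesystem, and the fs resource is mounted when the zone boots. If the metadevice is brand new, create the filesystem first and then reboot the zone:

global # newfs /dev/md/rdsk/d100
global # zoneadm -z zone1 reboot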


Adding ZFS filesystem/dataset/Volume to Non-Global Zone
__________________________________________________

Points to ponder before associating ZFS datasets with zones -

  • You can add a ZFS file system or a clone to a non-global zone, with or without delegating administrative control.
  • You can add a ZFS volume as a device to a non-global zone.
  • You cannot associate ZFS snapshots with zones.
  • A ZFS file system that is added to a non-global zone must have its mountpoint property set to legacy. If the filesystem is created in the global zone and added to the local zone via zonecfg, it may be assigned to more than one zone unless the mountpoint is set to legacy (see the example below).
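
For example, a dataset could be prepared in the global zone like this before handing it to zonecfg (dpool/oradata-u01 is the dataset used in the example that follows):

global # zfs create dpool/oradata-u01
global # zfs set mountpoint=legacy dpool/oradata-u01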

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set type=zfs
zonecfg:zone1:fs> set special=dpool/oradata-u01
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit


Adding ZFS filesystem via lofs filesystem
__________________________________________


In order to use lofs, the actual ZFS filesystem should already be mounted in the global zone; for a lofs entry, special is the global-zone path of that mount, not the dataset name.
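
A quick way to confirm that before wiring up the lofs entry (I'm assuming /oradata-u01 as the global-zone mountpoint in this example):

global # zfs get mountpoint,mounted dpool/oradata-u01
global # df -h /oradata-u01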

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set special=/oradata-u01
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit


global # mkdir -p /zoneroot/zone1/root/u01
global # mount -F lofs /oradata-u01 /zoneroot/zone1/root/u01

global # zlogin zone1 df -h /u01
Filesystem             size   used  avail capacity  Mounted on
/oradata-u01             3G    21K   3G     1%      /u01

Delegating Datasets to a Non-Global Zone
_________________________________________


global # zonecfg -z zone1
zonecfg:zone1> add dataset
zonecfg:zone1:dataset> set name=dpool/oradata-u01
zonecfg:zone1:dataset> set alias=oradata-pool
zonecfg:zone1:dataset> end


Within the zone1 zone, this file system is not accessible as dpool/oradata-u01, but as a virtual pool named oradata-pool. The zone administrator is able to set properties on the dataset, as well as create children. It allows the zone administrator to take snapshots, create clones, and otherwise control the entire namespace below the added dataset.
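
Just to illustrate (the child dataset and snapshot names below are made up for this example), once the zone is rebooted the zone administrator can work with the delegated dataset directly from inside the zone:

zone1 # zfs list
zone1 # zfs create oradata-pool/redologs
zone1 # zfs snapshot oradata-pool/redologs@before-upgrade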

Adding ZFS Volumes to a Non-Global Zone
________________________________________


global # zonecfg -z zone1
zonecfg:zone1> add device
zonecfg:zone1:device> set match=/dev/zvol/dsk/dpool/oradata/u01
zonecfg:zone1:device> end
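
The volume itself has to exist in the global zone before it is added as a device; if it doesn't yet, something like this would create it (the 10g size is just an example):

global # zfs create -V 10g dpool/oradata/u01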


Adding VxVM filesystem to Non-Global Zone
___________________________________________

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set type=vxfs
zonecfg:zone1:fs> set special=/dev/vx/dsk/oradg/u01
zonecfg:zone1:fs> set raw=/dev/vx/rdsk/oradg/u01
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
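
As with UFS, the volume should already carry a VxFS filesystem before the zone boots with this configuration. If it's a fresh volume, something along these lines would prepare it (the disk group and volume names are taken from the paths above; the 10g size is assumed):

global # vxassist -g oradg make u01 10g
global # mkfs -F vxfs /dev/vx/rdsk/oradg/u01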


Create & Add UFS filesystem on VxVM volume
___________________________________________


global # vxassist -g zone1_dg make home-ora1-zone1 1g
global # mkfs -F ufs /dev/vx/rdsk/zone1_dg/home-ora1-zone1 2097152


NOTE: 2097152 is the size of the filesystem in sectors, not the sector size (1 GB = 1073741824 bytes / 512 bytes per sector = 2097152 sectors).
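
Also make sure the target mount point exists under the zone's root path before mounting; if it doesn't, create it first:

global # mkdir -p /zones/zone1/root/home/oradata/ora1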

global # mount -F ufs /dev/vx/dsk/zone1_dg/home-ora1-zone1 /zones/zone1/root/home/oradata/ora1

=========================================================================

Adding the filesystem to Non-Global Zone
____________________________________

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set type=ufs
zonecfg:zone1:fs> set special=/dev/vx/dsk/zone1_dg/home-ora1-zone1
zonecfg:zone1:fs> set raw=/dev/vx/rdsk/zone1_dg/home-ora1-zone1
zonecfg:zone1:fs> set dir=/home/oradata/ora1
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
 

global # zlogin zone1 df -k | grep ora1
/home/oradata/ora1    986095    1041  886445     1%    /home/oradata/ora1


Adding raw device to Non-Global Zone
______________________________________


global # zonecfg -z zone1
zonecfg:zone1> add device
zonecfg:zone1:device> set match=/dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0
zonecfg:zone1:device> end
zonecfg:zone1> exit


Ideally we need to reboot the non-global zone in order to see the added raw device; however, there is a hack available to do it dynamically. See - Dynamically-adding-raw-device-to-Non-global-zone
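
After a zone reboot (or after applying the dynamic hack), a quick check from the global zone confirms the device is visible inside the zone (same device path as in the example above):

global # zlogin zone1 ls -l /dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0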

Well, this is it. Hope this helps our community friends in their day-to-day work!

BTW, in India it's the festive season - Diwali celebration time!!!!
So to all my friends - wishing you and your family a very happy, prosperous & safe Diwali.

Enjoy !!!!