
Thursday, March 13, 2014

Apache-Subversion deployment - Legacy Solaris 8, different experience!

I received a requirement wherein a customer wanted an Apache Subversion setup configured on a rather old legacy platform, a Solaris 8 server. Getting pre-built SVN binaries for Solaris 8 is almost impossible, so I eventually decided to compile and install Subversion and all of its dependent components from source code.

Installing Subversion requires a few dependencies to be satisfied first; below are the packages needed to build SVN successfully.

= Expat 2.x
= Apache 2.x with WebDAV support
= APR (Apache Portable Runtime)
= APR-UTIL
= NEON
= SQLite, compiled from the SQLite amalgamation C source, which the Subversion configure script and build require
= zlib, also compiled from source

This write-up is quite basic, but it captures the configure strings that work on Solaris 8 and could save you a lot of time. Trust me, it is a pain to get Subversion compiled on Solaris 8 along with Apache and all of its dependent components.

Compile Expat
=============


# cd expat-2.0.1
# ./configure && make && make install
# cd ..

Compiling Apache 2.x
====================


# ./configure --prefix=/usr/local/apache2 --enable-so \
--enable-mods-shared=most --enable-ssl=static \
--with-ssl=/usr/local/ssl --enable-dav --enable-dav-fs
# make && make install
# cd ..

Starting Apache Webserver
-------------------------


#/usr/local/apache2/bin/apachectl start

root@XXXX # ps -eaf | grep httpd | grep -v grep
  root 24948    1  2 21:35:49 ?        0:02 /usr/local/apache2/bin/httpd -k start
  svn 24952 24948  0 21:35:51 ?        0:00 /usr/local/apache2/bin/httpd -k start
  svn 24949 24948  0 21:35:51 ?        0:00 /usr/local/apache2/bin/httpd -k start
  svn 24950 24948  0 21:35:51 ?        0:00 /usr/local/apache2/bin/httpd -k start
  svn 24951 24948  0 21:35:51 ?        0:00 /usr/local/apache2/bin/httpd -k start
  svn 24953 24948  0 21:35:51 ?        0:00 /usr/local/apache2/bin/httpd -k start


The Apache processes are owned by user svn; you may use any user for this purpose.
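
For reference, running httpd as a dedicated account such as svn is controlled by the standard User and Group directives in httpd.conf (the account must already exist); for example:

User svn
Group svn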

Compile APR
===========


# cd subversion-1.6.6
# cd apr
# ./configure --prefix=/usr/local/apr
# make
# make install
# cd ..


Compile APR-UTIL
================


# cd apr-util
# ./configure --prefix=/usr/local/apr --with-apr=/usr/local/apr
# make
# make install
# cd ..


Compile NEON
============


# cd neon
# ./configure --prefix=/usr/local/neon  --with-ssl --with-libs=/usr/local/ssl --enable-shared
# make
# make install
# cd ..


Compile Subversion
==================


# ./configure --prefix=/usr/local/subversion \
--with-apxs=/usr/local/apache2/bin/apxs --with-apr=/usr/local/apr --with-apr-util=/usr/local/apr --with-neon=/usr/local/neon
# make
# make install


If everything succeeds, Subversion will add two modules to your Apache modules directory and append two LoadModule lines to your httpd.conf, like below -

# more /usr/local/apache2/conf/httpd.conf
....
LoadModule dav_svn_module /usr/lib/apache2/mod_dav_svn.so
LoadModule authz_svn_module /usr/lib/apache2/mod_authz_svn.so
...

Restart Your Apache Services
----------------------------

# /usr/local/apache2/bin/apachectl stop
# /usr/local/apache2/bin/apachectl start


SVN repository configuration via WebDAV
---------------------------------------

In the HTTP configuration file "/usr/local/apache2/conf/httpd.conf", put the configuration below -
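
The exact block depends on your repository layout; a typical mod_dav_svn Location block looks something like the following, where the repository path and auth file simply follow the /usr/local/svn/repos and htpasswd paths used later in this post, so adjust them to suit:

<Location /svn>
   DAV svn
   SVNPath /usr/local/svn/repos
   AuthType Basic
   AuthName "Subversion Repository"
   AuthUserFile /usr/local/svn/repos/authz
   Require valid-user
</Location>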
 
---------------------------------------------------------------------

Add users for basic authentication
==================================


# /usr/local/apache2/bin/htpasswd /usr/local/svn/repos/authz njoshi01
New password:
Re-type new password:
Adding password for user njoshi01


The above command can be used to add a user or change an existing user's password (when creating the password file for the very first time, htpasswd needs the -c flag). From now on, users must authenticate before they can access the repository contents.
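
Once authentication is in place, a quick end-to-end test is to check the repository out over HTTP with any SVN client; the hostname and URL below are placeholders for whatever you configured in the Location block:

$ svn checkout http://svnserver.example.com/svn svn-wc --username njoshi01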

--------------------------------------------------

To make sure the httpd service comes online automatically after a server reboot, we will have to create a startup script. Let's create one -

-rwxr--r--   6 root     sys          734 Feb 13 10:24 /etc/init.d/apache

#!/sbin/sh

APACHE_HOME=/usr/local/apache2
CONF_FILE=/usr/local/apache2/conf/httpd.conf
RUNDIR=/var/run
PIDFILE=${RUNDIR}/httpd.pid

if [ ! -f ${CONF_FILE} ]; then
        exit 0
fi

if [ ! -d ${RUNDIR} ]; then
        /usr/bin/mkdir -p -m 755 ${RUNDIR}
fi

case "$1" in
start)
        /bin/rm -f ${PIDFILE}
        cmdtext="starting"
        ;;
restart)
        cmdtext="restarting"
        ;;
stop)
        cmdtext="stopping"
        ;;
*)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac

echo "httpd $cmdtext."

/bin/sh -c "${APACHE_HOME}/bin/apachectl $1" 2>&1
status=$?

if [ $status != 0 ]; then
        echo "exit status $status"
        exit 1
fi
exit 0
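
For the script to actually run at boot and shutdown on Solaris 8, it also needs to be linked into the run-control directories; the sequence numbers below are just typical choices:

# ln /etc/init.d/apache /etc/rc3.d/S85apache
# ln /etc/init.d/apache /etc/rc0.d/K16apache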


PS: For Solaris 10 you would create an SMF service instead; creating the SMF manifest is not described in this write-up.

--------------------------------------------------

How do developers access the SVN repository?
=============================================

Either via Eclipse or through a web-based Subversion portal.

I hope the above notes prove helpful.

Thursday, January 2, 2014

Online migration of data from storage LUNs in a ZPOOL

Wishing you and your family a very Happy New Year; may this year bring you success, good health and wealth.

Today I will be writing about online storage (LUN) migration for a ZFS zpool. I'll be replacing the LUNs from an old array with LUNs from a new array.

The best thing about this procedure is that during the migration all the data in the ZFS file systems of zpool nsrpool remains accessible without any issues or performance lag. No impact to the production system and no downtime!!!

Each disk in the ZFS pool is the same size.

This storage migration can be done online in two ways:

1. Use attach/detach: zpool attach -f [pool] [device] [new-device], and once the resilver is done, zpool detach [pool] [old-device]
2. Or simply replace each device: zpool replace [pool] [device] [new-device]

I'm going to use the second option because I think it is the easiest way to achieve this LUN migration.

Newly allocated disks are as below,

/dev/rdsk/c6t6006016070312700CEB537F88C48E311d0s2 - 60G format
/dev/rdsk/c6t6006016070312700FA0468009348E311d0s2 - 60G format
/dev/rdsk/c6t600601606B312700FBA40FCCC0BCE211d0s2 - 60G format
/dev/rdsk/c6t600601606B3127002E623AB38B48E311d0s2 - 60G format
/dev/rdsk/c6t600601606B3127000283C9778948E311d0s2 - 60G format
/dev/rdsk/c6t6006016070312700008A87119148E311d0s2 - 60G format


I have formatted the above disks, placing everything on slice 0.

root@XXXXXX:/root# zpool status nsrpool
  pool: nsrpool
 state: ONLINE
 scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        nsrpool                                  ONLINE       0     0     0
          c6t60060160E6C31D00625335009BF1DB11d0  ONLINE       0     0     0
          c6t60060160E6C31D00C8B0F7B898F1DB11d0  ONLINE       0     0     0
          c6t60060160E6C31D000C0C7E699AF1DB11d0  ONLINE       0     0     0
          c6t60060160E6C31D00AB24671C98F1DB11d0  ONLINE       0     0     0
          c6t60060160E6C31D0060312EA39BF1DB11d0  ONLINE       0     0     0
          c6t60060160E6C31D00CCF4057397F1DB11d0  ONLINE       0     0     0

errors: No known data errors

Let's start the migration -

root@XXXXXX:/root# zpool replace nsrpool c6t60060160E6C31D00625335009BF1DB11d0 c6t6006016070312700CEB537F88C48E311d0
root@XXXXXX:/root# zpool replace nsrpool c6t60060160E6C31D00C8B0F7B898F1DB11d0 c6t6006016070312700FA0468009348E311d0
root@XXXXXX:/root# zpool replace nsrpool c6t60060160E6C31D000C0C7E699AF1DB11d0 c6t600601606B312700FBA40FCCC0BCE211d0
root@XXXXXX:/root# zpool replace nsrpool c6t60060160E6C31D00AB24671C98F1DB11d0 c6t600601606B3127002E623AB38B48E311d0
root@XXXXXX:/root# zpool replace nsrpool c6t60060160E6C31D0060312EA39BF1DB11d0 c6t600601606B3127000283C9778948E311d0
root@XXXXXX:/root# zpool replace nsrpool c6t60060160E6C31D00CCF4057397F1DB11d0 c6t6006016070312700008A87119148E311d0


Now that resilvering has started, we will have to wait.

root@XXXXXX:/root# zpool status nsrpool
  pool: nsrpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Wed Dec  4 13:04:21 2013
    44.4G scanned out of 220G at 75.1M/s, 0h39m to go
    44.4G resilvered, 20.24% done
config:

        NAME                                         STATE     READ WRITE CKSUM
        nsrpool                                    ONLINE       0     0     0
          replacing-0                              ONLINE       0     0     0
            c6t60060160E6C31D00625335009BF1DB11d0  ONLINE       0     0     0
            c6t6006016070312700CEB537F88C48E311d0  ONLINE       0     0     0  (resilvering)
          replacing-1                              ONLINE       0     0     0
            c6t60060160E6C31D00C8B0F7B898F1DB11d0  ONLINE       0     0     0
            c6t6006016070312700FA0468009348E311d0  ONLINE       0     0     0  (resilvering)
          replacing-2                              ONLINE       0     0     0
            c6t60060160E6C31D000C0C7E699AF1DB11d0  ONLINE       0     0     0
            c6t600601606B312700FBA40FCCC0BCE211d0  ONLINE       0     0     0  (resilvering)
          replacing-3                              ONLINE       0     0     0
            c6t60060160E6C31D00AB24671C98F1DB11d0  ONLINE       0     0     0
            c6t600601606B3127002E623AB38B48E311d0  ONLINE       0     0     0  (resilvering)
          replacing-4                              ONLINE       0     0     0
            c6t60060160E6C31D0060312EA39BF1DB11d0  ONLINE       0     0     0
            c6t600601606B3127000283C9778948E311d0  ONLINE       0     0     0  (resilvering)
          replacing-5                              ONLINE       0     0     0
            c6t60060160E6C31D00CCF4057397F1DB11d0  ONLINE       0     0     0
            c6t6006016070312700008A87119148E311d0  ONLINE       0     0     0  (resilvering)

errors: No known data errors


After a while the resilvering completed, and the zpool now shows the new disks - our online zpool migration is done.

root@XXXXXX:/root# zpool status nsrpool
  pool: nsrpool
 state: ONLINE
 scan: resilvered 219G in 2h55m with 0 errors on Wed Dec  4 16:00:07 2013
config:

        NAME                                       STATE     READ WRITE CKSUM
        nsrpool                                  ONLINE       0     0     0
          c6t6006016070312700CEB537F88C48E311d0  ONLINE       0     0     0
          c6t6006016070312700FA0468009348E311d0  ONLINE       0     0     0
          c6t600601606B312700FBA40FCCC0BCE211d0  ONLINE       0     0     0
          c6t600601606B3127002E623AB38B48E311d0  ONLINE       0     0     0
          c6t600601606B3127000283C9778948E311d0  ONLINE       0     0     0
          c6t6006016070312700008A87119148E311d0  ONLINE       0     0     0


It was easy! Hope this helps...

Sunday, November 10, 2013

Migrating disks within coordinator disk group from one array to another.

Last week I was working on storage migration tasks, one of which was to migrate the disks within a coordinator disk group from an old array to a newly deployed array.

It was my first time doing this task, so I decided to note down the steps, and after completing it successfully I decided to share them with all my friends out there!

There may be several ways to do this, but the steps below worked very well for me; if anyone has a better set of instructions, please do share.

Let's start then -

1. If VCS is running, stop it on all cluster nodes (the -force option leaves the applications running):

# hastop -all -force

2. Stop I/O fencing on all nodes. This removes any registration keys on the disks.

# /etc/init.d/vxfen stop   (on all 3 cluster nodes)

3. Import the coordinator disk group. The file /etc/vxfendg contains the name of the disk group (for example, vxfencoorddg) that holds the coordinator disks, so use the command -

# vxdg -tfC import `cat /etc/vxfendg`
                                       OR
# vxdg -tfC import vxfencoorddg

Where:

-t specifies that the disk group is imported only until the system restarts.
-f specifies that the import is to be done forcibly, which is necessary if one or more disks are not accessible.
-C specifies that any import blocks are removed.


4. Turn off the coordinator attribute value for the coordinator disk group.

# vxdg -g vxfencoorddg set coordinator=off

5. Remove old disks and add new disks

    First remove N-1 disks from the fence disk group (leave one disk in place so the disk group itself is not removed)
 
       # vxdg -g vxfencoorddg rmdisk vxfencoorddg01
       # vxdg -g vxfencoorddg rmdisk vxfencoorddg02
       # vxdg -g vxfencoorddg rmdisk vxfencoorddg03
       # vxdg -g vxfencoorddg rmdisk vxfencoorddg04
       # vxdg -g vxfencoorddg rmdisk vxfencoorddg05
       # vxdg -g vxfencoorddg rmdisk vxfencoorddg06

        Add the new disks to the fence disk group

       # vxdisk list | egrep 'apevmx13_139|apevmx13_140|apevmx13_143|apevmx13_145|apevmx14_139|apevmx14_140|apevmx14_141'
    apevmx13_139 auto:cdsdisk    -            -            online
    apevmx13_140 auto:cdsdisk    -            -            online
    apevmx13_143 auto:cdsdisk    -            -            online
    apevmx13_145 auto:cdsdisk    -            -            online
    apevmx14_139 auto:cdsdisk    -            -            online
    apevmx14_140 auto:cdsdisk    -            -            online
    apevmx14_141 auto:cdsdisk    -            -            online


    # vxdg -g vxfencoorddg adddisk vxfencoorddg01=apevmx13_139
    # vxdg -g vxfencoorddg adddisk vxfencoorddg02=apevmx13_140
    # vxdg -g vxfencoorddg adddisk vxfencoorddg03=apevmx13_143
    # vxdg -g vxfencoorddg adddisk vxfencoorddg04=apevmx13_145
    # vxdg -g vxfencoorddg adddisk vxfencoorddg05=apevmx14_139
    # vxdg -g vxfencoorddg adddisk vxfencoorddg06=apevmx14_140

   
    Remove the remaining one disk from enclosure apedmx06

    # vxdg -g vxfencoorddg rmdisk vxfencoorddg07

    Add the 7th disk from enclosure apevmx14

    # vxdg -g vxfencoorddg adddisk vxfencoorddg07=apevmx14_141

6. Set the coordinator attribute value as "on" for the coordinator disk group.

# vxdg -g vxfencoorddg set coordinator=on

7. Run disk scan on all nodes

# vxdisk scandisks   (Run on all cluster nodes)

8. Check if fencing disks are visible on all nodes

# vxdisk -o alldgs list | grep fen

9. After replacing disks in a coordinator disk group, deport the disk group:

# vxdg deport `cat /etc/vxfendg`
                                 OR
# vxdg deport vxfencoorddg

10. Verify if the fencing diskgroup is deported

# vxdisk -o alldgs list | grep fen

11. On each node in the cluster, start the I/O fencing driver:

# /etc/init.d/vxfen start  (on all 3 cluster nodes)

12. Run hastart on all cluster nodes.

# hastart  (on all 3 cluster nodes)
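
Once everything is back up, it is worth confirming membership on each node with the standard VCS commands (in the GAB output, port a is GAB itself, port b is I/O fencing and port h is HAD):

# gabconfig -a
# hastatus -sum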

That's it; these 12 steps take you through migrating the disks within a coordinator disk group from one array to another.

HTH someone!

Saturday, November 2, 2013

Adding UFS, ZFS, VxVM FS, Raw FS, LOFS to Non-Global Zone - Some useful examples

In day-to-day administration we deal with tasks like adding a raw device to a zone, delegating ZFS datasets to a non-global zone, adding a filesystem or volume, and so on.

In this post I'll only be talking about the different types of filesystem operations associated with zones.

Before we start I would like to reiterate - zones are cool and dynamic!!!

So let's start with -

Adding UFS filesystem to Non-Global Zone
________________________________________

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> set special=/dev/md/dsk/d100
zonecfg:zone1:fs> set raw=/dev/md/rdsk/d100
zonecfg:zone1:fs> set type=ufs
zonecfg:zone1:fs> add options [nodevices,logging]
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit


Adding ZFS filesystem/dataset/Volume to Non-Global Zone
_______________________________________________________

Points to ponder before associating ZFS datasets with zones -

  • You can add a ZFS file system or a clone to a non-global zone, with or without delegating administrative control.
  • You can add a ZFS volume as a device to a non-global zone.
  • You cannot associate ZFS snapshots with zones.
  • A ZFS file system that is added to a non-global zone must have its mountpoint property set to legacy (see the one-line example after this list); if the filesystem is created in the global zone and added to the local zone via zonecfg without that, it could end up assigned to more than one zone.
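
Setting that property is a single zfs command run in the global zone; the dataset name below is the one used in this post's examples:

global # zfs set mountpoint=legacy dpool/oradata-u01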

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set type=zfs
zonecfg:zone1:fs> set special=dpool/oradata-u01
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit


Adding ZFS filesystem via lofs filesystem
__________________________________________


In order to use lofs, the actual ZFS filesystem should already be mounted in the global zone.

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set special=dpool/oradata-u01
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit


global # mkdir -p /zoneroot/zone1/root/u01
global # mount -F lofs /rpool/oradata-u01 /zoneroot/zone1/root/u01

global # zlogin zone1 df -h /u01
Filesystem             size   used  avail capacity  Mounted on
/oradata-u01             3G    21K   3G     1%      /u01

Delegating Datasets to a Non-Global Zone
_________________________________________


global # zonecfg -z zone1
zonecfg:zone1> add dataset
zonecfg:zone1:dataset> set name=dpool/oradata-u01
zonecfg:zone1:dataset> set alias=oradata-pool
zonecfg:zone1:dataset> end


Within zone1, this file system is not accessible as dpool/oradata-u01 but as a virtual pool named oradata-pool. The zone administrator can set properties on the dataset and create children; this allows the zone administrator to take snapshots, create clones, and otherwise control the entire namespace below the added dataset.
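
For instance, once the zone has been rebooted (or the dataset otherwise becomes visible inside it), the zone administrator can work with the delegated dataset like any other pool; the child dataset and snapshot names here are purely illustrative:

zone1 # zfs list -r oradata-pool
zone1 # zfs create oradata-pool/redo
zone1 # zfs snapshot oradata-pool/redo@before-upgrade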

Adding ZFS Volumes to a Non-Global Zone
________________________________________


global # zonecfg -z zone1
zonecfg:zone1> add device
zonecfg:zone1:device> set match=/dev/zvol/dsk/dpool/oradata/u01
zonecfg:zone1:device> end


Adding VxVM filesystem to Non-Global Zone
___________________________________________

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set type=vxfs
zonecfg:zone1:fs> set special=/dev/vx/dsk/oradg/u01
zonecfg:zone1:fs> set raw=/dev/vx/rdsk/oradg/u01
zonecfg:zone1:fs> set dir=/u01
zonecfg:zone1:fs> end
zonecfg:zone1> commit
zonecfg:zone1> verify
zonecfg:zone1> exit


Create & Add UFS filesystem on VxVM volume
___________________________________________


global # vxassist -g zone1_dg make home-ora1-zone1 1g
global # mkfs -F ufs /dev/vx/rdsk/zone1_dg/home-ora1-zone1 2097152


NOTE: 2097152 is the size of the new filesystem in sectors (1 GB at 512 bytes per sector), matching the 1g volume created above.

global # mount -F ufs /dev/vx/dsk/zone1_dg/home-ora1-zone1 /zones/zone1/root/home/oradata/ora1

=========================================================================

Adding the filesystem to Non-Global Zone
____________________________________

global # zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set type=ufs
zonecfg:zone1:fs> set special=/dev/vx/dsk/zone1_dg/home-ora1-zone1
zonecfg:zone1:fs> set raw=/dev/vx/rdsk/zone1_dg/home-ora1-zone1
zonecfg:zone1:fs> set dir=/zones/zone1/root/home/oradata/ora1
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
 

global # zlogin zone1 df -k | grep ora1
/home/oradata/ora1    986095    1041  886445     1%    /home/oradata/ora1


Adding raw device to Non-Global Zone
______________________________________


global # zonecfg -z zone1
zonecfg:zone1> add device
zonecfg:zone1:device> set match=/dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0
zonecfg:zone1:device> end
zonecfg:zone1> exit


Ideally we would need to reboot the non-global zone in order to see the newly added raw device; however, there is a hack available to do it dynamically. See - Dynamically-adding-raw-device-to-Non-global-zone

Well, that's it. I hope this helps our community friends in their day-to-day work!

BTW, In India it's a festive season, Diwali celebration time!!!!
So to all my friends - wishing you and your family a very happy, prosperous and safe Diwali.

Enjoy !!!!

Tuesday, October 22, 2013

Split & Migrate VERITAS Sub-disks from one array to another.

Hi All, I'm still breathing and alive! Yes, again it's been a long time since I posted anything new; I was a bit busy and got tangled up in my daily routine.

Today I'll be writing about sub disks: splitting and migrating sub disks from one array to another.

I have a situation wherein I have to migrate a 45 GB sub disk from its existing array to a new array. From the new array I was assigned storage disks of 17 GB each, and that disk size is fixed by storage for the new array. Since the space is non-contiguous we can't join the sub disks into a single one, so the only option is to split the existing 45 GB sub disk into three smaller sub disks (each under 17 GB) and then move them onto sub disks created from the new array's disks.

So let's do it.

The first step is to initialise the new disks, bring them under VERITAS control and add them to the disk group. After doing so, start splitting the sub disk -

root:XXXXXXXXXXX:/root # vxsd -g GAPRMANdg -s 33554432 split EMC0_4-05 EMC0_4-06 EMC0_4-07

BEFORE:

sd EMC0_4-05    db_GAPRMAN-01 EMC0_4  48234496 94371840 52428800  UNIX177_4 ENA

AFTER:

sd EMC0_4-06    db_GAPRMAN-01 EMC0_4  48234496 33554432 52428800  UNIX177_4 ENA
sd EMC0_4-07    db_GAPRMAN-01 EMC0_4  81788928 60817408 85983232  UNIX177_4 ENA


Now we will split sub disk EMC0_4-07 into another two sub disks, each of size 14.5 GB.

root:XXXXXXXXXXX:/root # vxsd -g GAPRMANdg -s 30408704 split EMC0_4-07 EMC0_4-08 EMC0_4-09

BEFORE:

sd EMC0_4-06    db_GAPRMAN-01 EMC0_4  48234496 33554432 52428800  UNIX177_4 ENA
sd EMC0_4-07    db_GAPRMAN-01 EMC0_4  81788928 60817408 85983232  UNIX177_4 ENA


AFTER:

sd EMC0_4-06    db_GAPRMAN-01 EMC0_4  48234496 33554432 52428800  UNIX177_4 ENA
sd EMC0_4-08    db_GAPRMAN-01 EMC0_4  81788928 30408704 85983232  UNIX177_4 ENA
sd EMC0_4-09    db_GAPRMAN-01 EMC0_4  112197632 30408704 116391936 UNIX177_4 ENA


Create sub disks from newly allocated disks -

root:XXXXXXXXXXX:/root # vxmake -g GAPRMANdg sd EMC2_24-01 EMC2_24,0,33554432
root:XXXXXXXXXXX:/root # vxmake -g GAPRMANdg sd EMC2_25-01 EMC2_25,0,30408704
root:XXXXXXXXXXX:/root # vxmake -g GAPRMANdg sd EMC2_26-01 EMC2_26,0,30408704
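
Before moving any data, it doesn't hurt to double-check the new sub disks; vxprint with the subdisk selector lists them:

root:XXXXXXXXXXX:/root # vxprint -g GAPRMANdg -st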

Now move the data off the old sub disks to the new sub disks -

root:XXXXXXXXXXX:/root # vxsd -g GAPRMANdg -o rm mv EMC0_4-06 EMC2_24-01
root:XXXXXXXXXXX:/root # vxsd -g GAPRMANdg -o rm mv EMC0_4-08 EMC2_25-01
root:XXXXXXXXXXX:/root # vxsd -g GAPRMANdg -o rm mv EMC0_4-09 EMC2_26-01


So now the migration has been completed.

BEFORE:

sd EMC0_4-06    db_GAPRMAN-01 EMC0_4  48234496 33554432 52428800  UNIX177_4 ENA
sd EMC0_4-08    db_GAPRMAN-01 EMC0_4  81788928 30408704 85983232  UNIX177_4 ENA
sd EMC0_4-09    db_GAPRMAN-01 EMC0_4  112197632 30408704 116391936 UNIX177_4 ENA


AFTER:

sd EMC2_24-01   db_GAPRMAN-01 EMC2_24 0        33554432 52428800  UNIX168_23 ENA
sd EMC2_25-01   db_GAPRMAN-01 EMC2_25 0        30408704 85983232  UNIX168_24 ENA
sd EMC2_26-01   db_GAPRMAN-01 EMC2_26 0        30408704 116391936 UNIX168_25 ENA


The final task is to free up the old disk from the disk group and take it out of VERITAS control.

root:XXXXXXXXXXX:/root # vxdg -g GAPRMANdg rmdisk EMC0_4
root:XXXXXXXXXXX:/root # vxdiskunsetup -C UNIX177_4
root:XXXXXXXXXXX:/root # vxdisk rm UNIX177_4

Isn't that a flexible feature from VERITAS? Hope this helps someone, some day!

Monday, May 20, 2013

Increase Cluster File System - CVM Cluster

In this post I would like to discuss and demonstrate increasing a Cluster File System (CFS) in a CVM environment.

    - The requirement is to increase the filesystem by 1TB.

As a best practice in a CVM/CFS environment, the volume should be grown on the CVM master and the file system should be grown on the CFS primary. Please note that the CVM master and the CFS primary can be two different nodes.

Just as an aside, here is how to grow the volume on the CVM master and then grow the filesystem on the CFS primary -

To increase the size of the file system, execute the following on CVM master -

# vxassist -g shared_disk_group growto volume_name newlength

And then on the CFS primary node, execute -

# fsadm -F vxfs -b newsize -r device_name mount_point
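
As a concrete illustration of that two-step approach (the disk group, volume, mount point and size below are made-up example values), growing a volume to 200 GB, i.e. 419430400 sectors, would look like this. On the CVM master:

# vxassist -g datadg growto datavol 419430400

And on the CFS primary:

# fsadm -F vxfs -b 419430400 -r /dev/vx/rdsk/datadg/datavol /data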

On the other hand, if one system is both the CVM master and the CFS primary, then the "vxresize" command can be executed on that system without any issues.

The statement above holds for VERITAS versions below 3.5; from version 3.5 onwards, vxresize can be run on any node in the cluster provided the attribute 'HacliUserLevel' is set to "COMMANDROOT". The default value of this attribute is 'NONE', which prevents users from running the vxresize command from just any node in the cluster.

In my case it is set to "COMMANDROOT"

# /opt/VRTSvcs/bin/haclus -display | grep Hacli
HacliUserLevel         COMMANDROOT


But if the value of the 'HacliUserLevel' attribute is set to "NONE", then the method below can be used to change it.

To change the value to COMMANDROOT, run:

# /opt/VRTSvcs/bin/haconf -makerw
# /opt/VRTSvcs/bin/haclus -modify HacliUserLevel COMMANDROOT
# /opt/VRTSvcs/bin/haconf -dump -makero


This change allows vxresize to call the hacli command, which in turn allows the required commands to be run on any system within the cluster.

Okay, back to our requirement -

So first let's find out the CFS primary node -

# fsclustadm -v showprimary /u01-zz/oradata
XXXXX


Now let's find out the master CVM node in the cluster by -

# vxdctl -c mode
mode: enabled: cluster active - SLAVE
master: XXXXX


Since I need to increase the filesystem by 1 TB, I'm first checking whether the disk group has enough free space.

# vxassist -g dde1ZZGO0 maxsize
Maximum volume size: 2136743936 (1043332Mb)

Well, we have enough space available under disk group dde1ZZGO0.

Let's increase the FS by 1 TB now -

# vxresize -b -F vxfs -g dde1ZZGO0 ZZGO0_v0 +1000g

BEFORE:

# df -kh /u01-zz/oradata
Filesystem             size   used  avail capacity  Mounted on
/dev/vx/dsk/dde1ZZGO0/ZZGO0_v0
                       5.0T   5.0T    17G   100%    /u01-zz/oradata


AFTER:

# df -kh /u01-zz/oradata
Filesystem             size   used  avail capacity  Mounted on
/dev/vx/dsk/dde1ZZGO0/ZZGO0_v0
                       6.0T   5.0T   954G    85%    /u01-zz/oradata


Good learning about the HacliUserLevel attribute, isn't it?

Sunday, March 24, 2013

Change mount point under VCS.

It's my B'day today!!! On this occasion I can't miss the opportunity to write a blog entry; being a tech-savvy guy, I figured I should write something that would be a nice gift from me to myself.

On this special day, today I would like to share a blog entry with you all. So let's start...

Last week I was asked to change the mount point /opt/vdf to /data, along with the applicable changes in the VCS configuration, so that VCS keeps working as expected for this volume and mount resource.

The mount point that needs to be renamed to /data is as follows -

# df -kh /opt/vdf/
Filesystem size used avail capacity Mounted on
/dev/vx/dsk/vdfdg/opt_vdf_vol
            400G 165M 375G 1% /opt/vdf

In order to do so, I'll have to -

Unmount the /opt/vdf filesystem and re-mount it with /data as the mount point. Before doing so, check whether any data resides on the filesystem and whether any application has active processes running within it. If an application is using the filesystem, stop the application first and verify that no process is still holding it; if data resides on the filesystem, you can simply create a temporary filesystem with a temporary mount point and copy the data over. I trust this is a simple one and any experienced SA can certainly do it.

Now, in order to remount the filesystem with the new mount point, you first need to modify your VCS configuration a bit so that you can unmount the filesystem successfully.

Obviously, the first basic thing is to verify whether the VCS cluster configuration is currently in read-only or read-write mode. You can use the command below to check.

# haclus -display | grep -i 'readonly'
ReadOnly 1

Where,

0 = write mode
1 = read only mode

Well, the VCS configuration is currently in read-only mode, so let's switch it to write mode.

# haconf -makerw

# haclus -display | grep -i 'readonly'
ReadOnly 0

Good, now the VCS configuration is in write mode, so we can make the appropriate changes and save them.

Now change the appropriate attributes in the VCS configuration for the filesystem/mount point resource to be renamed. This may vary; in my case I'm not changing the volume name, only the mount point, so I'm modifying just the "MountPoint" attribute of the resource "optvdf_mnt".

# hares -modify optvdf_mnt MountPoint "/data"

To verify.

# hares -display optvdf_mnt | grep -i MountPoint
optvdf_mnt ArgListValues adevdf01s MountPoint 1 /data BlockDevice 1 /dev/vx/dsk/vdfdg/opt_vdf_vol FSType 1 vxfs MountOpt 1 "" FsckOpt 1 -n SnapUmount 1 0 CkptUmount 1 1 SecondLevelMonitor 1 0 SecondLevelTimeout 1 30 OptCheck 1 0 CreateMntPt 1 0 MntPtPermission 1 "" MntPtOwner 1 "" MntPtGroup 1 "" AccessPermissionChk 1 0 RecursiveMnt 1 0 VxFSMountLock 1 1
optvdf_mnt ArgListValues adevdf02s MountPoint 1 /data BlockDevice 1 /dev/vx/dsk/vdfdg/opt_vdf_vol FSType 1 vxfs MountOpt 1 "" FsckOpt 1 -n SnapUmount 1 0 CkptUmount 1 1 SecondLevelMonitor 1 0 SecondLevelTimeout 1 30 OptCheck 1 0 CreateMntPt 1 0 MntPtPermission 1 "" MntPtOwner 1 "" MntPtGroup 1 "" AccessPermissionChk 1 0 RecursiveMnt 1 0 VxFSMountLock 1 1
optvdf_mnt MountPoint global /data

At this point we are good to un-mount the filesystem with mountpoint named /opt/vdf.

# umount /opt/vdf
UX:vxfs umount: ERROR: V-3-26388: file system /opt/vdf has been mount locked

Error!!! Whenever I get an error I'm actually happy, as every error teaches something new, especially errors I have never come across before!

So why does the native Solaris "umount" command throw this error?

Well, this error occurs when the mount point is locked by VCS. This applies to cases where VCS service groups have DiskGroup resources configured with the UnMountVolumes attribute set and the volumes are mounted outside of VCS control. The purpose of the VERITAS File System (VxFS) mount lock is to prevent accidental unmounting of a VxFS file system. The feature is enabled for the VCS Mount resource by default and can be disabled by setting the Mount resource attribute VxFSMountLock to 0.

# hagrp -resources vdfapp_sg
vdfapp_dg
vdfappIP
optcdvdf_mnt
optvdf_mnt
optcdvdf_vol
optvdf_vol

# hares -display vdfapp_dg | grep -i UmountVolumes
vdfapp_dg ArgListValues adevdf01s DiskGroup 1 vdfdg StartVolumes 1 1 StopVolumes 1 1 MonitorOnly 1 0 MonitorReservation 1 0 tempUseFence 1 SCSI3 PanicSystemOnDGLoss 1 0 DiskGroupType 1 private UmountVolumes 1 0 vdfapp_dg ArgListValues adevdf02s DiskGroup 1 vdfdg StartVolumes 1 1 StopVolumes 1 1 MonitorOnly 1 0 MonitorReservation 1 0 tempUseFence 1 SCSI3 PanicSystemOnDGLoss 1 0 DiskGroupType 1 private UmountVolumes 1 0 vdfapp_dg UmountVolumes global 0

To get around this error, use the VxFS umount command to manually unmount the file system while supplying the VCS mount lock key.

# /opt/VRTS/bin/umount -o mntunlock=VCS /opt/vdf
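
One small but easy-to-miss prerequisite before the remount: the new mount point directory must exist on every node that can host this service group, so create it if needed:

# mkdir -p /data     (run on all cluster nodes)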

To mount the filesystem/volume on the new mount point, use -

# mount -F vxfs -o mntlock=VCS /dev/vx/dsk/vdfdg/opt_vdf_vol /data

# df -kh /data
Filesystem size used avail capacity Mounted on
/dev/vx/dsk/vdfdg/opt_vdf_vol
              400G  165M  375G  1%   /data

Make sure to save the configuration and make it read-only again.

# haconf -dump -makero

That's it! Done - isn't this easy? :)

Saturday, March 23, 2013

Migrate VERITAS (VxVM) to Solaris Disk Suite (SDS)

Hi there! busy.. busy.. busy... stuck with routine work.. lots of work! :)

Anyway, today I finally decided to write a blog entry about a slightly unusual task I did recently: migrating VERITAS Volume Manager (VxVM) to Solaris Disk Suite (SDS). In the past I have done several migrations - UFS to ZFS, UFS/SDS to VxVM, VxVM to ZFS - but this time I was asked to do a backward migration, VxVM to SDS. SDS is becoming obsolete, yet requirements like this still come up; anyway, it is good to do anything that is interesting and uncommon. So let's do it...

Point to ponder - make sure you have a full backup of the system you are going to operate on.

Let's first see which disks are part of bootdg.

Disk_0 auto rootdisk rootdg online c3t0d0s2
Disk_5 auto rootmirror rootdg online c0t0d0s2

And which volumes need to be converted to SVM -

# df -kh | grep bootdg
/dev/vx/dsk/bootdg/rootvol 5.9G 4.3G 1.5G 75% /
/dev/vx/dsk/bootdg/var 5.9G 4.7G 1.1G 81% /var
/dev/vx/dsk/bootdg/opt 5.9G 1.8G 4.0G 32% /opt

Here is the pre-VERITAS copy of the vfstab for reference.

root@XXXXXX# cat /etc/vfstab.prevm

#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c0t0d0s1 - - swap - no -
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no nologging
/dev/dsk/c0t0d0s5 /dev/rdsk/c0t0d0s5 /var ufs 1 no nologging
/dev/dsk/c0t0d0s6 /dev/rdsk/c0t0d0s6 /opt ufs 2 yes nologging
/dev/dsk/c0t2d0s0 /dev/rdsk/c0t2d0s0 /var/crash ufs 2 yes nologging
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -

#/dev/dsk/c3t0d0s0 is currently mounted on /.
#/dev/dsk/c3t0d0s1 is currently used by swap.
#/dev/dsk/c3t0d0s5 is currently mounted on /opt.
#/dev/dsk/c3t0d0s6 is currently mounted on /var.

Let's first unencapsulate the root disk using vxunroot.

Detach all the plexes associated with the 'rootmirror' disk, if applicable, and verify that the rootmirror plexes have been detached.

# vxprint -qhtg rootdg -s | grep -i rootmirror | awk '{print $3}' > /var/tmp/subs.plex && cat /var/tmp/subs.plex
rootvol-02
swapvol-02
opt-02
var-02

# for x in `cat /var/tmp/subs.plex`
> do
> vxplex -g rootdg dis $x
> vxprint -qhtg rootdg -p $x
> done
pl rootvol-02 - DISABLED - 12584484 CONCAT - RW
sd rootmirror-01 rootvol-02 rootmirror 0 12584484 0 Disk_5 ENA
pl swapvol-02 - DISABLED - 31458321 CONCAT - RW
sd rootmirror-02 swapvol-02 rootmirror 12584484 31458321 0 Disk_5 ENA
pl opt-02 - DISABLED - 12584484 CONCAT - RW
sd rootmirror-03 opt-02 rootmirror 44042805 12584484 0 Disk_5 ENA
pl var-02 - DISABLED - 12584484 CONCAT - RW
sd rootmirror-04 var-02 rootmirror 56627289 12584484 0 Disk_5 ENA

# /etc/vx/bin/vxunroot

VxVM vxunroot NOTICE V-5-2-1564
This operation will convert the following file systems from
   volumes to regular partitions:
opt rootvol swapvol var

   VxVM vxunroot INFO V-5-2-2011
Replacing volumes in root disk to partitions will require a system
  reboot. If you choose to continue with this operation, system
  configuration will be updated to discontinue use of the volume
  manager for your root and swap devices.

Do you wish to do this now [y,n,q,?] (default: y) y
VxVM vxunroot INFO V-5-2-287 Restoring kernel configuration...
VxVM vxunroot INFO V-5-2-78
A shutdown is now required to install the new kernel.
You can choose to shutdown now, or you can shutdown later, at your
   convenience.

Do you wish to shutdown now [y,n,q,?] (default: n) n

VxVM vxunroot INFO V-5-2-258
Please shutdown before you perform any additional volume manager
   or disk reconfiguration. To shutdown your system cd to / and type

          shutdown -g0 -y -i6

# sync;sync;sync;shutdown -g0 -y -i6

Well, after 2-3 reboots the server came back online, and I now have plain UFS filesystems for the OS volumes.

# df -kh / /var /opt
Filesystem size used avail capacity Mounted on
/dev/dsk/c3t0d0s0 5.9G 4.3G 1.5G 74% /
/dev/dsk/c3t0d0s6 5.9G 4.7G 1.1G 81% /var
/dev/dsk/c3t0d0s5 5.9G 1.8G 4.0G 32% /opt

Just to be sure, also try booting from the mirror disk once.

Now let's create a partition slice for metadb.

# format c3t0d0

selecting c3t0d0
[disk formatted]
Warning: Current Disk has mounted partitions.
/dev/dsk/c3t0d0s0 is currently mounted on /. Please see umount(1M).
/dev/dsk/c3t0d0s1 is currently used by swap. Please see swap(1M).
/dev/dsk/c3t0d0s5 is currently mounted on /opt. Please see umount(1M).
/dev/dsk/c3t0d0s6 is currently mounted on /var. Please see umount(1M).

FORMAT MENU:
                disk - select a disk
                type - select (define) a disk type
                partition - select (define) a partition table
                current - describe the current disk
                format - format and analyze the disk
                repair - repair a defective sector
                label - write label to the disk
                analyze - surface analysis
                defect - defect list management
                backup - search for backup labels
                verify - read and display labels
                save - save new disk/partition definitions
                inquiry - show vendor, product and revision
                volname - set 8-character volume name
                ![cmd] - execute [cmd], then return
                quit
format> p

PARTITION MENU:
        0 - change `0' partition
        1 - change `1' partition
        2 - change `2' partition
        3 - change `3' partition
        4 - change `4' partition
        5 - change `5' partition
        6 - change `6' partition
        7 - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name - name the current table
        print - display the current table
        label - write partition map and label to the disk
        ![cmd] - execute [cmd], then return
        quit
partition> p
Current partition table (original):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 3 - 4358 6.00GB (4356/0/0) 12584484
1 swap wu 4359 - 15247 15.00GB (10889/0/0) 31458321
2 backup wu 0 - 24619 33.92GB (24620/0/0) 71127180
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 15248 - 19603 6.00GB (4356/0/0) 12584484
6 var wm 19604 - 23959 6.00GB (4356/0/0) 12584484
7 unassigned wm 0 0 (0/0/0) 0

partition> 7
Part Tag Flag Cylinders Size Blocks
7 unassigned wm 0 0 (0/0/0) 0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 23960
Enter partition size[0b, 0c, 23960e, 0.00mb, 0.00gb]: 128mb
partition> p
Current partition table (unnamed):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 3 - 4358 6.00GB (4356/0/0) 12584484
1 swap wu 4359 - 15247 15.00GB (10889/0/0) 31458321
2 backup wu 0 - 24619 33.92GB (24620/0/0) 71127180
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 15248 - 19603 6.00GB (4356/0/0) 12584484
6 var wm 19604 - 23959 6.00GB (4356/0/0) 12584484
7 unassigned wm 23960 - 24050 128.37MB (91/0/0) 262899

partition> l
Ready to label disk, continue? yes

Good enough, now let's create metadb.

# metadb -a -f -c 3 c3t0d0s7

# metadb -i
       flags first blk block count
         a  u     16      8192     /dev/dsk/c3t0d0s7
         a  u     8208    8192     /dev/dsk/c3t0d0s7
         a  u     16400   8192     /dev/dsk/c3t0d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors

Set up SDS on the root file system.

# metainit -f d11 1 1 c3t0d0s0
d11: Concat/Stripe is setup

# metainit d10 -m d11
d10: Mirror is setup
# metaroot d10

After executing metaroot, check the changes in /etc/vfstab and /etc/system.

#live-upgrade: updated boot environment

#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
#live-upgrade::# /dev/vx/dsk/bootdg/swapvol - - swap -no nologging
/dev/dsk/c3t0d0s1 - - swap - no -
/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no nologging
/dev/dsk/c3t0d0s6 /dev/rdsk/c3t0d0s6 /var ufs 1 no nologging,nosuid
/dev/dsk/c3t0d0s5 /dev/rdsk/c3t0d0s5 /opt ufs 2 yes nologging
/dev/vx/dsk/crashdg/crashvol /dev/dsk/crashdg/crashvol /var/crash vxfs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes nosuid

* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)

Good enough.

# metastat -ac
d10           m 6.0GB d11
   d11        s 6.0GB c3t0d0s0

# metainit -f d31 1 1 c3t0d0s6
d31: Concat/Stripe is setup
# metainit d30 -m d31
d30: Mirror is setup

# metainit -f d51 1 1 c3t0d0s5
d51: Concat/Stripe is setup
# metainit d50 -m d51
d50: Mirror is setup

# metainit -f d1 1 1 c3t0d0s1
d1: Concat/Stripe is setup
# metainit d0 -m d1
d0: Mirror is setup

# metastat -ac
d0      m 15GB d1
   d1   s 15GB c3t0d0s1
d50     m 6.0GB d51
   d51  s 6.0GB c3t0d0s5
d30     m 6.0GB d31
   d31  s 6.0GB c3t0d0s6
d10     m 6.0GB d11
   d11  s 6.0GB c3t0d0s0

So in above,

d0 - swap
d10 - /
d30 - /var
d50 - /opt

Now it's time to edit the vfstab and change the disk slices into metadevices.

# vi /etc/vfstab

# cat /etc/vfstab
#live-upgrade: updated boot environment
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
#live-upgrade::# /dev/vx/dsk/bootdg/swapvol - - swap - no nologging
/dev/md/dsk/d0 - - swap - no -
/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no nologging
/dev/md/dsk/d30 /dev/md/rdsk/d30 /var ufs 1 no nologging,nosuid
/dev/md/dsk/d50 /dev/md/rdsk/d50 /opt ufs 2 yes nologging
/dev/vx/dsk/crashdg/crashvol /dev/dsk/crashdg/crashvol /var/crash vxfs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes nosuid

Reboot the box once.

Now it's time to add mirror disk to existing metadevices.

At this stage the mirror disk is still part of rootdg, and since it is the last disk in that disk group, we need to destroy rootdg.

# vxdg destroy rootdg

Unsetup the disk so it will be out of VERITAS control.

# vxdiskunsetup -C Disk_5

Cool, now I'm all set to copy the partition table from the existing disk (already part of the metadevices) onto the mirror disk.

# prtvtoc /dev/rdsk/c3t0d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2
fmthard: New volume table of contents now in place.

Fine. Set up the redundant state database replicas on the mirror disk.

# metadb -a -f -c 3 c0t0d0s7

# metadb -i

        flags    first blk    block count
     a m p luo   16           8192         /dev/dsk/c3t0d0s7
     a   p luo   8208         8192         /dev/dsk/c3t0d0s7
     a   p luo   16400        8192         /dev/dsk/c3t0d0s7
     a      u    16           8192         /dev/dsk/c0t0d0s7
     a      u    8208         8192         /dev/dsk/c0t0d0s7
     a      u    16400        8192         /dev/dsk/c0t0d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors

Create metadevices on mirror.

# metainit -f d12 1 1 c0t0d0s0
d12: Concat/Stripe is setup

# metainit -f d2 1 1 c0t0d0s1
d2: Concat/Stripe is setup

# metainit -f d52 1 1 c0t0d0s5
d52: Concat/Stripe is setup

# metainit -f d32 1 1 c0t0d0s6
d32: Concat/Stripe is setup

Attach metadevices.

# metattach d10 d12
d10: submirror d12 is attached

# metattach d30 d32
d30: submirror d32 is attached

# metattach d50 d52
d50: submirror d52 is attached

# metattach d0 d2
d0: submirror d2 is attached

# metastat -ac
d0      m 15GB d1 d2 (resync-76%)
   d1   s 15GB c3t0d0s1
   d2   s 15GB c0t0d0s1
d50     m 6.0GB d51 d52
   d51  s 6.0GB c3t0d0s5
   d52  s 6.0GB c0t0d0s5
d30     m 6.0GB d31 d32
   d31  s 6.0GB c3t0d0s6
   d32  s 6.0GB c0t0d0s6
d10     m 6.0GB d11 d12
   d11  s 6.0GB c3t0d0s0
   d12  s 6.0GB c0t0d0s0

Install boot block on both disks.

# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c3t0d0s0
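
As an optional last touch, you may want the OBP to know about both halves of the mirror so the box can boot from either disk; the device aliases below are hypothetical, so substitute whatever aliases map to c3t0d0 and c0t0d0 on your system:

# eeprom boot-device="disk0 disk5"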

Well, that completes the VxVM to SDS migration. I'm sure hardly anyone needs to do such a backward migration, but just in case, this method will certainly help you... :)

Have a good weekend!