gibbsie.org Knowledge Base

Solaris Volume Manager (SVM)

Aug 7th 2008

Sun Microsystems have had volume management available for Solaris for a number of years, and in recent releases have bundled it freely with the Solaris operating environment.  Solaris Volume Manager, or SVM, is the new marketing branding for Solstice DiskSuite (SDS), a mature product.

So what exactly is a volume manager?  A volume manager is exactly what it sounds like: it creates, maintains and manages volumes, or more precisely, disk volumes.  This allows systems administrators to create logical volumes that span multiple physical disks, and to build volumes for performance or for increased redundancy.  Another way of looking at it is software-based RAID.

Please be aware whilst reading this article that SVM commands exist in /sbin.  They are also symlinked to /usr/sbin.  We’ll refer to each command as being within /usr/sbin, since Sun Microsystems tend to refer to this path in preference to /sbin within all their documentation.

The State Database

The state database is an important part of SVM's operation.  It stores the configuration and geometry of the logical volumes created by SVM, and lives on a dedicated slice of a physical disk.  SVM demands multiple database replicas to ensure availability and works on a majority-consensus algorithm: at least half of the replicas must be available for the system to stay running, and a majority (half plus one) must be available for the system to boot.  When a disk physically fails, it is the state database replicas that are consulted to understand the volume configuration and geometry, hence they are extremely important.
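The majority rule is simple arithmetic, sketched below. This is an illustrative script, not SVM output; the replica count R=4 matches the four-replica layout created later in this article.

```shell
#!/bin/sh
# Majority-consensus arithmetic used by SVM (illustrative sketch):
# with R replicas, at least half must be available for the system to
# stay running, and half plus one are needed to boot.
R=4
RUN=$(( (R + 1) / 2 ))     # smallest whole number >= R/2
BOOT=$(( R / 2 + 1 ))      # strict majority
echo "replicas=$R run_quorum=$RUN boot_quorum=$BOOT"
```

With four replicas spread across two disks, losing one disk leaves two replicas: enough to keep running, but not the three needed to boot unattended, which is why more disks (and replicas) are better.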

State databases are easily created, checked and removed.

Create two copies of the state database on each device (giving a total of four database replicas):
/usr/sbin/metadb -a -f -c2 c0t0d0s7 c0t1d0s7

Check status of state database:
/usr/sbin/metadb -i

Remove the state database from a given device:
/usr/sbin/metadb -f -d c0t0d0s7
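Replica health lends itself to scripting. The sketch below counts replicas per slice from a captured listing; the sample lines only approximate real metadb -i output (see metadb(1M) for the actual status flag letters), and only the device field in the last column is used.

```shell
#!/bin/sh
# Count state database replicas per slice from a (sample) metadb listing.
# The flags/offset columns are illustrative; $NF is the device name.
cat <<'EOF' > /tmp/metadb.sample
a m p luo 16 8192 /dev/dsk/c0t0d0s7
a p luo 8208 8192 /dev/dsk/c0t0d0s7
a p luo 16 8192 /dev/dsk/c0t1d0s7
a p luo 8208 8192 /dev/dsk/c0t1d0s7
EOF
awk '{ n[$NF]++ } END { for (d in n) print d, n[d] }' /tmp/metadb.sample | sort
```

On a live system you would feed `/usr/sbin/metadb -i` into the awk stage instead of the sample file.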

Building a root mirror (RAID 1)

  1. Identify the slice containing the existing file system to be mirrored (e.g. c0t0d0s0)
  2. Create a new RAID 0 volume on that slice to act as the first submirror using: /usr/sbin/metainit -f d11 1 1 c0t0d0s0
    Where:
    /usr/sbin/metainit -f volumeName numberOfStripes componentsPerStripe componentNames
    -f is needed to continue if the slice is mounted;
    volumeName is the name of the logical volume being created;
    numberOfStripes is the number of stripes to create in the logical volume;
    componentsPerStripe is the number of components a stripe should have;
    componentNames are the names of the components used to create the volume, c0t0d0s0 for example.
  3. Create a second RAID 0 volume on an unused slice, say c1t1d0s0, to act as the secondary submirror.
  4. Create a one-way mirror (that is a mirror with just one submirror) by using:
    /usr/sbin/metainit volumeName -m subMirrorName
    Where:
    volumeName is the name of the logical volume being created;
    -m option means create a mirror;
    subMirrorName is the name of the component that will be the first submirror in the mirror.
  5. If the filesystem being mirrored is not the root (/) filesystem, edit the /etc/vfstab file so that the file system mount instructions refer to the mirror, not the block device.
    i.e. /dev/md/dsk/mirrorName /dev/md/rdsk/mirrorName /var ufs 2 yes -
  6. Remount the newly mirrored filesystem using one of the below methods:
    1. If the root (/) has been mirrored, execute metaroot command which updates /etc/vfstab and /etc/system to instruct the system to boot from the mirror, then reboot:
      /usr/sbin/metaroot volumeName
      lockfs -fa; sync; sync; init 6
    2. If you’ve mirrored a filesystem that is not root (/) and that cannot be unmounted, just reboot the system:
      lockfs -fa; sync; sync; init 6
    3. If you are mirroring a filesystem that can be unmounted, you can avoid a reboot: just unmount and remount the filesystem:
      umount fileSystem
      mount fileSystem
  7. Attach the second submirror by issuing the metattach command:
    /usr/sbin/metattach volumeName submirrorName
    Where:
    volumeName specifies the RAID 1 volume name to which to add the submirror
    submirrorName specifies the submirror you wish to add
  8. Install the bootloader on the second submirror’s disk so the system can boot from either half of the mirror:
    • If SPARC:
      1. installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t1d0s0
      2. Record the physical device path of the second disk (ls -l /dev/dsk/c1t1d0s0) and define an OpenBoot device alias for it, giving you an alternate boot device should the primary disk fail.
    • If x86/x64:
      1. fdisk -b /usr/lib/fs/ufs/mboot /dev/rdsk/c1t1d0p0
      2. installboot /usr/platform/i86pc/lib/fs/ufs/pboot /usr/platform/i86pc/lib/fs/ufs/bootblk /dev/rdsk/c1t1d0s2
      3. To boot from the second disk, update the boot path (on x86, eeprom edits /boot/solaris/bootenv.rc), e.g.:
         eeprom "bootpath=/pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/sd@3,0:a"
         (the device path shown is an example and will differ on your hardware)
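Step 5’s /etc/vfstab edit can be scripted with sed. The sketch below rewrites a sample /var entry to mount from a hypothetical mirror d3; the slice name c0t0d0s3 and the mirror name are illustrative, and on a real system you would back up and then edit /etc/vfstab itself.

```shell
#!/bin/sh
# Rewrite a vfstab entry so /var mounts from mirror d3 instead of the
# raw slice. Operates on a sample file here; always back up /etc/vfstab
# before editing it on a real system.
cat <<'EOF' > /tmp/vfstab.sample
/dev/dsk/c0t0d0s3 /dev/rdsk/c0t0d0s3 /var ufs 2 yes -
EOF
sed -e 's|/dev/dsk/c0t0d0s3|/dev/md/dsk/d3|' \
    -e 's|/dev/rdsk/c0t0d0s3|/dev/md/rdsk/d3|' \
    /tmp/vfstab.sample > /tmp/vfstab.new
cat /tmp/vfstab.new
```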

Building a root mirror (RAID 1): a worked example
Please use the above descriptions to understand the following. Please note that disk targets will change depending upon the hardware you are using. Similarly, the number of database replicas might change depending upon the standard your enterprise environment complies with.

  • Create a 10MB slice on a disk to store the state database replicas:
    # format
  • Copy the slice geometry across the two physical disks:
    # prtvtoc /dev/rdsk/c1t0d0s7 | fmthard -s - /dev/rdsk/c1t1d0s7
  • Create the state database replicas:
    # metadb -a -f -c 2 c1t0d0s7
    # metadb -a -c 2 c1t1d0s7
  • Define the first half of the mirror of the root disk in SVM:
    # metainit -f d10 1 1 c1t0d0s0
    # metainit -f d11 1 1 c1t0d0s1
    # metainit -f d17 1 1 c1t0d0s7
  • Define the second half of the mirror in SVM:
    # metainit d20 1 1 c1t1d0s0
    # metainit d21 1 1 c1t1d0s1
    # metainit d27 1 1 c1t1d0s7
  • Create the first half of the mirror:
    # metainit d0 -m d10
    # metainit d1 -m d11
    # metainit d7 -m d17
  • Initialise the root disk:
    # metaroot d0
  • Flush all filesystem buffers to disk and reboot:
    # lockfs -fa
    # init 6
  • Attach second half of mirror to first:
    # metattach d0 d20
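Once metattach is issued, the mirror resyncs in the background and metastat shows its progress. A sketch against captured sample output follows; the layout approximates real metastat output and the 42% figure is made up.

```shell
#!/bin/sh
# Pull resync progress out of (sample) metastat output.
cat <<'EOF' > /tmp/metastat.sample
d0: Mirror
    Submirror 0: d10
      State: Okay
    Submirror 1: d20
      State: Resyncing
    Resync in progress: 42 % done
EOF
grep 'Resync in progress' /tmp/metastat.sample
```

On a live system you would run `/usr/sbin/metastat d0` and wait for both submirrors to report Okay before relying on the mirror.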

Unmirroring a filesystem

  1. Become superuser
  2. Verify at least one submirror is in the “Okay” state:
    /usr/sbin/metastat mirrorName
  3. Detach the submirror that you want to continue using for the filesystem:
    /usr/sbin/metadetach mirrorName submirrorName
  4. Depending on the filesystem you wish to unmirror:
    1. For the root (/) filesystem, use metaroot to tell the system where to boot from:
      /usr/sbin/metaroot rootSlice
    2. For the /usr, /opt or swap filesystems, change the filesystem entry in the /etc/vfstab file to use a non-SVM device (slice).
  5. Reboot the system:
    sync;sync;reboot
  6. Clear the remaining mirror and submirrors:
    /usr/sbin/metaclear -r mirrorName

Expanding an SVM volume

  1. Check the current size of the file system on the SVM volume:
    df -h /data
  2. Compare the total size to the size of the SVM volume itself:
    /usr/sbin/metastat -c d10
  3. Is the size of the filesystem similar to the size of the SVM volume?
    • Yes: the SVM volume needs expanding first. Attach an extra X gigabytes (Xgb), then grow the filesystem. (Attaching space by size, as below, applies to soft partitions; to expand a concatenation, metattach an additional slice instead.)
      1. /usr/sbin/metattach d10 Xgb
      2. /usr/sbin/metastat -c d10
      3. /usr/sbin/growfs -M /data /dev/md/rdsk/d10
    • No: only the filesystem needs to be grown!
      1. /usr/sbin/growfs -M /data /dev/md/rdsk/d10

Checking on SVM volumes

As with the rest of SVM, checking SVM volumes is made possible by yet another ‘meta’ command.  This time it’s metastat.  The metastat command is the single command that tells you everything about volumes.

View the configuration of all volumes, or a specific volume, in concise md.tab-style format:
# /usr/sbin/metastat -p
# /usr/sbin/metastat -p d20

View detailed information, including the time of the last state change, for all volumes or a specific volume:
# /usr/sbin/metastat -t
# /usr/sbin/metastat -t d50
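The -p output is terse enough to drive scripts. The sketch below lists each mirror and its submirrors from a captured sample; the three lines echo the worked example above but are illustrative, not live output.

```shell
#!/bin/sh
# List mirrors and their submirrors from (sample) metastat -p output.
# In md.tab format, a mirror line reads: name -m submirror... pass
cat <<'EOF' > /tmp/metastat-p.sample
d0 -m d10 d20 1
d10 1 1 c1t0d0s0
d20 1 1 c1t1d0s0
EOF
awk '$2 == "-m" { printf "mirror %s:", $1
                  for (i = 3; i < NF; i++) printf " %s", $i
                  print "" }' /tmp/metastat-p.sample
```

On a live system, pipe `/usr/sbin/metastat -p` straight into the awk stage.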

