
VxVM 3 Day Brain Dump - Notes

Table of Contents
What version of VxVM does this course cover? .... 2
What this course will cover .... 2
What this course will not cover .... 2
Objectives .... 2
Questions .... 2
Schedule .... 2
Why is RAID so important? .... 3
RAID Basics .... 4
Key Concepts & Files .... 6
private region .... 10
public region .... 10
VxVM managed disks .... 10
GUIs .... 10
CLI .... 11
Complete Lab - VxVM installation .... 11
VxVM disk flags .... 11
Complete Lab - diskgroups .... 12
Building Volumes .... 12
Complete Lab - building volumes .... 13
Hot Sparing & Hot Relocation .... 13
Complete Lab - Hot Sparing & Hot Relocation .... 14
Changing Volume Layouts .... 14
Complete Lab - changing volume layout .... 14
Logging .... 14
Snapshots .... 17
Fast Mirror Resyncs/Reconnects (FMR) .... 18
Complete Lab - snapshots .... 18
VxDMP - Dynamic Multi-Pathing .... 18
Booting from a VxVM encapsulated disk .... 21
Hardware considerations .... 21
Installation / Upgrade Considerations .... 22
OS considerations .... 23
Kernel Tuning .... 26
Multiple-Initiator (MI) Disks .... 26
Transaction Locking .... 27
Configuration Recovery .... 27
Complete Lab - configuration recovery .... 27
Patches .... 28
Bugs & Escalations .... 28
Product Licensing .... 28
InfoDocs/SRDBs .... 29
Personalities .... 29
man pages .... 29
URLs .... 29
Course Labs .... 31
Course Labs Solutions .... 44
Evaluation Form (type 1) .... 58
Evaluation Form (type 2) .... 60

Page 1 of 62 http://fde.Aus/crs/ Mike Arnott - FDE (updated 26 September 2002 16:35)

What version of VxVM does this course cover?
• revised to cover VxVM 3.5 on 26 September 2002

What this course will cover
• anything that I can remember about VxVM, in any order that I remember it in
• things that you ask questions about
• anything else even remotely related that takes my interest
• anything that makes me laugh

What this course will not cover
• the various VxVM GUIs
  - no guts, no glory
  - real men don't use GUIs

Objectives
• that we all learn something
• caveats ...
  - YMMV (your mileage may vary)
  - MMMV (my mileage may vary)

Questions
• you should definitely ask them ...
  - often
  - anytime
  - about anything related to the topic
• this is a free-form course, so the areas in which the most questions are asked are the areas in which we will concentrate
• I'll let you know when enough is enough, & if you still want to continue let's take it off-line
• remember, there's no such thing as a dumb question, as I have asked them all before

Schedule
• duration of 3 days
• prompt start at 0900 (subject to change with very little notice)
• short breaks will be taken as requested
• feel free to stop me as I tend to ramble & lose track of time
• the course will go as late as necessary on day #1 & day #2 to ensure that all the course material is covered & sufficient labs are performed by the end of day #3

Why is RAID so important?
• MTBF - Mean Time Between Failures
  - 2.9GB Seagate 5400 RPM 5.25" (Tarzan tray) = 200,000 hours = 22.83 years
    - 100 units = 83.33 days per failure
    - 1000 units = 8.33 days per failure
  - 2.1GB Seagate 7200 RPM 1" (SSA) = 800,000 hours = 91.32 years
    - 100 units = 333.33 days per failure
    - 1000 units = 33.33 days per failure
  - 36GB Seagate 10000 RPM 1" (A5000, T3) = 1,200,000 hours = 136.99 years
    - 100 units = 500 days (1.37 years) per failure
    - 1000 units = 50 days per failure
• the bathtub curve
  - drives tend to fail either early or late in their theoretical life span, with a long period of stability in the interim
• disks are one of the few mechanical devices left in computer systems

Days per failure, by drive MTBF and enclosure (drives per enclosure in parentheses):

Drive            MTBF (hrs)   MP 2 (6)   D1000 (12)  A5000 (14)  A5200 (22)
                              Days/Fail  Days/Fail   Days/Fail   Days/Fail
1 Device
2.9GB 5400 RPM   200000       1388.89    694.44      595.24      378.79
2.1GB 7200 RPM   800000       5555.56    2777.78     2380.95     1515.15
36GB 10000 RPM   1200000      8333.33    4166.67     3571.43     2272.73
2 Devices
2.9GB 5400 RPM   200000       694.44     347.22      297.62      189.39
2.1GB 7200 RPM   800000       2777.78    1388.89     1190.48     757.58
36GB 10000 RPM   1200000      4166.67    2083.33     1785.71     1136.36
3 Devices
2.9GB 5400 RPM   200000       462.96     231.48      198.41      126.26
2.1GB 7200 RPM   800000       1851.85    925.93      793.65      505.05
36GB 10000 RPM   1200000      2777.78    1388.89     1190.48     757.58
4 Devices
2.9GB 5400 RPM   200000       347.22     173.61      148.81      94.70
2.1GB 7200 RPM   800000       1388.89    694.44      595.24      378.79
36GB 10000 RPM   1200000      2083.33    1041.67     892.86      568.18
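The figures in the table follow from a simple calculation: a population of N identical drives sees, on average, one failure every MTBF/N hours. A minimal sketch of that arithmetic (drive MTBFs and enclosure drive counts taken from the table above):

```python
# Expected days between failures across a fleet of N identical drives:
# one failure roughly every MTBF / N hours.

def days_per_failure(mtbf_hours: float, n_drives: int) -> float:
    return mtbf_hours / n_drives / 24.0

# Drives-per-enclosure counts, as in the table header.
enclosures = {"MP 2": 6, "D1000": 12, "A5000": 14, "A5200": 22}
drives = {
    "2.9GB 5400 RPM": 200_000,
    "2.1GB 7200 RPM": 800_000,
    "36GB 10000 RPM": 1_200_000,
}

for name, mtbf in drives.items():
    row = ["%9.2f" % days_per_failure(mtbf, n) for n in enclosures.values()]
    print(name, " ".join(row))
```

Multiplying the drive count (e.g. 2 enclosures = 2 x N drives) reproduces the "2 Devices", "3 Devices" and "4 Devices" rows.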

RAID Basics
• RAID - Redundant Array of Inexpensive/Independent Disks
• RAID 0 - concatenation & striping
  - 0% capacity loss on raw storage
  - no redundancy in case of member failure
• RAID 1 - mirroring
  - 50% capacity loss on raw storage
  - duplexing - mirroring across controllers
  - redundant against single member failure
• RAID 0+1 - mirrored stripes
  - 50% capacity loss, same as per RAID 1
  - redundant against single member failure
• RAID 1+0 - striped mirrors, also known as RAID 10
  - 50% capacity loss, same as per RAID 1
  - redundant against multiple member failures as long as no 2 failures affect the same mirrored region
• RAID 3 - dedicated parity disk (not used in VxVM)
  - 1/Nth capacity loss
  - redundant against single member failure
• RAID 5 - parity blocks distributed across all member disks
  - 1/Nth capacity loss
  - redundant against single member failure
• RAID S (ESS) - EMC RAID 5 (it's a product, not a standard)
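The single-failure redundancy of RAID 3/5 comes from XOR parity: the parity block is the XOR of the data blocks in a stripe, so any one lost block can be rebuilt by XOR-ing the survivors. A minimal sketch of the idea (byte strings stand in for disk blocks; this illustrates the maths, not VxVM's internal on-disk layout):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-sized blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks of one stripe
parity = xor_blocks(stripe)            # parity block for the stripe

# The disk holding stripe[1] fails: rebuild its contents from
# the parity block plus the surviving data blocks.
rebuilt = xor_blocks([parity, stripe[0], stripe[2]])
assert rebuilt == stripe[1]
```

This is also why write-heavy loads hurt software RAID-5: every small write implies reading old data + old parity, recomputing, and writing both back.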

RAID in a nutshell
• Fast, Cheap, Safe - pick any two ... in a black & white world
  - RAID 0 - fast, cheap
  - RAID 1 - safe, & sort of fast, relatively speaking
  - RAID 0+1, 1+0 - fast, safe
  - RAID 5 - cheap, safe

RAID Level         Fast            Cheap   Safe
RAID 0             Y               Y+      N
RAID 1             Y/N (R+, W-)    N       Y
RAID 0+1           Y+              N       Y
RAID 1+0 (aka 10)  Y+              N       Y+
RAID 5             Y/N (R+, W--)   Y       Y-
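The "Safe" edge of RAID 1+0 over RAID 0+1 can be made concrete by counting which two-disk failures actually lose data. A small enumeration over a hypothetical 6-disk layout (3 stripe columns, 2-way mirrored; the disk-to-stripe/pair assignment here is illustrative, not a VxVM allocation):

```python
from itertools import combinations

DISKS = range(6)

def raid01_survives(failed):
    # RAID 0+1: two plexes, each a 3-disk stripe; data survives
    # while at least one whole stripe is untouched.
    stripes = [{0, 1, 2}, {3, 4, 5}]
    return any(not (s & failed) for s in stripes)

def raid10_survives(failed):
    # RAID 1+0: three mirrored pairs striped together; data survives
    # while no pair has lost both of its members.
    pairs = [{0, 3}, {1, 4}, {2, 5}]
    return all(not (p <= failed) for p in pairs)

two_disk_failures = [set(c) for c in combinations(DISKS, 2)]
fatal01 = sum(not raid01_survives(f) for f in two_disk_failures)
fatal10 = sum(not raid10_survives(f) for f in two_disk_failures)
print("RAID 0+1 fatal:", fatal01, "of", len(two_disk_failures))
print("RAID 1+0 fatal:", fatal10, "of", len(two_disk_failures))
```

With this layout, 9 of the 15 possible two-disk failures kill a RAID 0+1 volume (any failure spanning both stripes), but only 3 of 15 kill the RAID 1+0 volume (both members of one mirrored pair).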

Hardware RAID
• the various RAID levels discussed previously, implemented via a dedicated hardware platform, also known as the hardware RAID controller
• all RAID is actually implemented via software; in "hardware" RAID the "computer" is a dedicated, single-purpose controller
• Sun product examples of hardware RAID controllers include:-
  - A[13]xx (Sonoma & RAID Dilbert) arrays
  - T3 (Purple) arrays
  - SRC/P re-badged DPT PCI RAID controller
• Solaris is unaware of the RAID calculations & I/O operations happening on the hardware RAID controller
• if the array has a battery-backed (non-volatile) fast-write cache & it is enabled, I/Os from Solaris are usually acknowledged as completed by the hardware RAID controller before the data reaches the physical disks
  - no negative performance impact to Solaris
• hardware RAID controllers can improve performance by working smart, & not necessarily hard, at the back end
• typically, features like write coalescence are only available using hardware RAID
• one of the downsides of hardware RAID is its inability to think beyond the box
  - for redundancy purposes it often makes sense to mirror data between chassis (arrays)
  - hardware RAID in general does not lend itself well to this purpose

Software RAID
• the various RAID levels discussed previously, implemented within the host's OS (Solaris in our case)
• Solaris is ultimately performing & controlling all of the RAID calculations & I/O operations happening on the system
• there is a small performance impact to Solaris for most RAID levels
  - some time ago the VxVM overhead was calculated at less than 1% of a single CPU
• moderate to heavy write-oriented applications do not suit software RAID-5, however, due to the significant I/O overhead in data/parity block updates
• as software RAID is capable of utilising all of the devices that Solaris can see, it does not exhibit hardware RAID's inability to think beyond the box; see "storage cocktails" below
• Sun shipped & supported software RAID volume managers include:-
  - Solstice DiskSuite (SDS)
  - Veritas Volume Manager (VxVM)
• it is not unusual to see a "storage cocktail" implemented, especially on larger servers
  - a storage cocktail refers to a mixture of storage types tailored to specific application use, e.g.
    - A3xx arrays mirrored across chassis using VxVM
    - VxVM used to split large T3 LUNs into smaller chunks
    - SDS used to manage boot disks while data is stored on A1000 LUNs

Key Concepts & Files

/etc/vx/volboot
• the VxVM bootstrap file
• it is an ASCII file, but it is highly recommended that it not be edited with a text editor as it must adhere to a very strict format
• must be exactly 512 bytes in size & is padded accordingly
• use the vxdctl command to update the file
• contains the VxVM "hostid"
  - this is commonly the same value as the Solaris nodename, not the hexadecimal Sun hostid as the name suggests
  - the VxVM hostid is not directly tied to the Solaris nodename
    - it is possible to have a completely different name for the Solaris nodename to that of the VxVM hostid (it is a little confusing that way though)
  - the hostid is used to establish disk & diskgroup ownership
  - the hostid is also used to ensure that two or more hosts that can access disks on a shared bus will not interfere with each other in their use of those disks
• can contain a list of simple disks to scan for rootdg

vxconfigd
• the VxVM configuration management daemon
• no configuration changes are possible without the vxconfigd process running & in enabled mode
• VxVM volume access will not be stopped if vxconfigd is not running
• you can check the status of vxconfigd using vxdctl mode
  - just checking that the process is running using ps is not enough

diskgroups
• a diskgroup is a collection of VxVM managed disks
• diskgroups are atomic entities
  - i.e. diskgroups are entirely self-reliant; they carry their configuration & state around with them
• there is no VxVM OS-level ASCII configuration file equivalent to the SDS md.cf ASCII configuration file
  - configuration must be saved manually by collecting vxprint output, etc.
  - the unsupported vxinfosave script can be used to store a rolling history of the VxVM configuration
• a diskgroup can be deported from a given host and imported to another host to facilitate easily moving disks & their data between hosts
  - this concept/feature is used heavily in the Sun Cluster 2.x product

• NB - be aware that not all diskgroups can be deported/imported onto different hosts without special consideration
  - bear in mind diskgroup versioning, VxVM versioning, & shared diskgroups on non-CVM (Clustered Volume Manager) systems
• at least one diskgroup is required for VxVM to function
  - rootdg must be present & functional
  - very early versions of VxVM only supported a single rootdg diskgroup
• NB - be sure to use the -g diskgroup_name option to VxVM commands wherever possible
  - as object names need only be unique within the confines of a diskgroup, using the -g dgname option ensures operations are performed on the desired object & not a like-named object in another diskgroup
• VxVM diskgroups are equivalent to SDS disksets
• unlike SDS, the use of separate diskgroups is encouraged; there are advantages in using separate diskgroups
  - the ability to easily move data between hosts
    - use the vxdg deport & vxdg import functionality to achieve this
  - in the case of diskgroup corruption, damage is limited to that diskgroup only
  - rootdg can be re-initialised without affecting non-rootdg diskgroups & volumes
  - upgrades can be effected simply without the involvement of all disks on the system
  - large numbers of disks can be managed
• as new functionality is added to VxVM over the various releases, new versions of diskgroup configuration types are created to support the additions
  - for (most of) the VxVM diskgroup versions & their supported functionality see:-
    http://fde.Aus/cgi-bin/man.show?manpage=vxdg&SRC=VxVM_3.2&SF=1m
• NB - shared diskgroups (used in CVM) do not support all normal diskgroup functionality
  - no RAID-5 volumes
  - no fsgen volumes
    - NB - do not create filesystems on volumes in shared diskgroups
  - if VxVM <= 3.1.1, no layered (professional) volumes - see BugID# 4397790
  - if VxVM >= 3.2 & using diskgroup version 90 or higher, layered (professional) volumes are supported
• vxdg upgrade dgname can be used to upgrade "older" diskgroups to allow access to the newer functionality
• NB - diskgroup versions can not be "downgraded", therefore it would be wise to hold off on upgrading diskgroups during a VxVM upgrade if a roll back to the previous VxVM version may be required
• vxdg destroy dgname is used to remove a diskgroup from the system
  - as the name suggests, this destroys the diskgroup; it does this by "blanking out" the dgname entry in the diskgroup configuration. The diskgroup can be recovered by importing the diskgroup using its diskgroup ID (assuming the diskgroup's resources have not been re-tasked).
  - WARNING - this utility is potentially highly destructive & does not request confirmation before performing the action of destroying the diskgroup

subdisks
• the smallest building block in VxVM
• can be best described as a "soft" slice or partition
• subdisks are not addressable directly from the OS

plexes
• a plex is a collection of subdisks that together form the data storage object of the volume
• commonly equated to sub-mirrors
• plexes are not addressable directly from the OS
• plexes have a type property, e.g.:-
  - concatenated
  - striped
  - RAID-5
  - log

volumes
• a virtual disk device that looks to applications and file systems like a regular disk partition device
• volumes are made up of plexes
• volumes are directly addressable from the OS
• volumes are the level at which a filesystem, swap space or database tablespace is created
• as of VxVM 3.x there are 2 types of volumes:-
  - standard
    - RAID 0+1
    - mirror of concats/stripes
  - professional (aka layered)
    - RAID 1+0
    - stripe/concat of mirrors
    - due to the VxVM architecture, significantly more objects are required to define these volumes
      - this can impact the total number of volumes able to be created within a diskgroup

sub-plexes
• newly introduced object in VxVM 3.x
• required to implement "Professional" or "layered" volumes
• best described & explained by viewing the vxprint -htr output of a RAID 1+0 volume

sub-volumes
• newly introduced object in VxVM 3.x
• required to implement "Professional" or "layered" volumes
• best described & explained by viewing the vxprint -htr output of a RAID 1+0 volume

Volume layout - RAID 0+1 versus RAID 1+0 (aka Professional or Layered)

[Diagram] RAID 0+1: volume -> 2 mirrored plexes, each plex a stripe of 3 subdisks.
9 objects required (1 volume + 2 plexes + 6 subdisks).

[Diagram] RAID 1+0: volume -> 1 striped plex -> 3 subvolumes, each subvolume a mirror of 2 subplexes holding 1 subdisk each.
17 objects required (1 volume + 1 plex + 3 subvolumes + 6 subplexes + 6 subdisks).
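The object counts in the diagram generalise. For m-way mirroring over n stripe columns, a standard (RAID 0+1) volume needs 1 volume + m plexes + m*n subdisks, while a layered (RAID 1+0) volume needs 1 volume + 1 striped plex + n subvolumes + n*m subplexes + n*m subdisks. These formulas are read off the diagram above, not from VxVM documentation; a quick check against the diagram's figures:

```python
def objects_raid01(nmirror: int, ncol: int) -> int:
    # volume + one plex per mirror + one subdisk per column per plex
    return 1 + nmirror + nmirror * ncol

def objects_raid10(nmirror: int, ncol: int) -> int:
    # volume + striped plex + one subvolume per column,
    # each holding nmirror subplexes and nmirror subdisks
    return 1 + 1 + ncol + ncol * nmirror + ncol * nmirror

print(objects_raid01(2, 3))   # 2-way mirror of a 3-column stripe: 9 objects
print(objects_raid10(2, 3))   # 3-column stripe of 2-way mirrors: 17 objects
```

The roughly 2x object count is why layered volumes eat into the per-diskgroup configuration space faster, as noted in the volumes section above.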

private region
• uses VTOC tag# 15 to identify the slice on vxconfigd probe of disks
• usually slice# 3 on sliced disks
• seen on sliced & simple (though not as a separate slice) disks
• used for VxVM housekeeping tasks & is not used for data storage
• contains, among other things:-
  - disk name & ID
  - diskgroup name & ID
  - diskgroup configuration copy (space is reserved, the copy is not necessarily active)
• maintained by the vxconfigd daemon

public region
• VTOC tag# 14 is used to denote a public region slice
• usually slice# 4 on sliced disks
• seen on sliced, simple & nopriv disks
• used for data storage, i.e. the area where subdisks are created
• maintained by VxVM commands for volume creation & maintenance

VxVM managed disks

sliced
• named so because the private & public regions are defined as separate Solaris slices (partitions)
• default disk type
• can be automatically discovered on boot

simple
• named so because the private & public regions are defined within the one Solaris slice (partition)
  - i.e. no separate Solaris slices for the private & public regions are defined
• are volatile objects; will disappear after reboot if not in use & not defined in the volboot file
• can not be automatically discovered on boot without assistance from either sliced disks or the volboot file

nopriv
• named so because they have no private region at all, i.e. they consist only of a public region
• their configuration is held & maintained by another disk with a private region (a proxy)
• avoid nopriv disks like the plague
• can not be automatically discovered on boot without assistance from either sliced disks or the volboot file
• nopriv disks will not be supported in the future & the functionality will be phased out (current thinking)

GUIs
• vxva - X Windows interface, available on SEVM 2.0 - SEVM 2.6
• vxvm / vmsa / vea - Java interface, available on SEVM 2.6 & VxVM 3.0 or greater

CLI

• use vxassist to create, add to & relayout volumes

  • vxassist -g datadg make oraclelogs 200m layout=stripe nmirror=2 nstripe=5

• use vxtask to monitor operations

  • if using vxtask on layered (professional) volumes, be aware that the percentage complete figure can relate to the mirrored pair in resync & not the entire volume

• use vxdctl list to view VxVM configuration information from the /etc/vx/volboot file

• use vxdctl enable to scan for disks newly added to the system or to enable failed DMP paths

• use vxdctl initdmp to recreate all the DMP nodes in the /dev/vx/[r]dmp directories

• use vxreattach to reconnect disks in a failed was: state that were just temporarily unavailable at the time of VxVM start-up

  • you'll need to ensure that Solaris can see the disk first, see format(1m), drvconfig(1m), disks(1m), devlinks(1m) &/or devfsadm(1m)

  • you'll also need to ensure that VxVM can see the disk, use vxdctl enable

• use vxrecover to manually start recovery operations, e.g. RAID-5 & mirror resyncs
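Putting those pieces together, a typical recovery sequence for a disk that was only temporarily unavailable might look like the following sketch (the disk, diskgroup & device names are hypothetical; exact options vary by VxVM & Solaris version):

```shell
# 1. make sure Solaris can see the disk again (Solaris >= 7)
devfsadm

# 2. make VxVM rescan for the device
vxdctl enable

# 3. reattach the "failed was:" disk to its disk media record
vxreattach c1t2d0s2

# 4. kick off resyncs for any stale plexes, in the background
vxrecover -g datadg -sb

# 5. watch progress
vxtask list
```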

Menu-Driven Functions

• vxdiskadm allows character-based, menu-driven access to most core VxVM maintenance tasks

• will not run if VxVM /var directories are not accessible

• supports:-

  • adding a disk

  • removing & replacing a failed disk

  • mirroring a disk's volumes

  • moving volumes off a disk

  • importing/deporting a diskgroup

  • encapsulating a disk

Complete Lab - VxVM installation

VxVM disk flags

• online - disk has a valid private region

• error - disk is not under VxVM control

  • contrary to what the flag name suggests, there is normally no issue with the disk at all

  • VxVM is making the rather arrogant assumption that it must manage all disks on the system

• failing - a (perhaps transient) I/O error has occurred on this disk

  • this flag is set automatically by VxVM when the disk has experienced an I/O failure, it must be manually cleared via the GUI or by using the vxedit command

• spare - preferred disk in Hot Relocation operations

  • this is a manually set flag


• removed was: - disk was manually removed from the VxVM configuration

  • typically you see this after using vxdiskadm to remove a disk for replacement

• failed was: - complete disk failure, usually automatically discovered

  • typically you see this after a disk has completely failed from VxVM's perspective, often after a reboot

• reserved - will not be used by the vxassist command to make new volumes or relocate

  • manually set flag, use vxedit set reserved=[on|off] diskname to alter

• altused - the alternate config copy in the private region is being used

  • this flag is usually the harbinger of bad news

  • this flag is/was the calling card of RM6 LUNs no longer being accessible to VxVM

  • see FIN# I0511

• invalid - private area exists, but the info in it is not a valid configuration

• nohotuse - this disk will not be used to relocate data onto

  • new flag in VxVM 3.1

• shared - shared (multiple, simultaneous imports to different hosts) diskgroup, should only be seen in Sun Cluster (Clustered Volume Manager) environments
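Several of the flags above are toggled with vxedit; a sketch (the diskgroup & disk names are hypothetical, field names as per the notes above):

```shell
# clear a transient failing flag after verifying the disk is healthy
vxedit -g datadg set failing=off datadg03

# nominate a disk as a dedicated Hot Spare
vxedit -g datadg set spare=on datadg05

# exclude a disk from vxassist allocations
vxedit -g datadg set reserved=on datadg06
```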

Complete Lab - diskgroups

Building Volumes

top down

• use the vxassist command

• defaults (both sane & insane) can be configured via the /etc/default/vxassist file

• all of the GUIs use the top down method, most are just graphical front ends to VxVM CLI utilities

• this is the supported method, i.e. you can log bugs against these utilities when something doesn't work

bottom up

• vxmake, vxsd, vxplex, vxvol, etc.

• unsupported method of volume creation

  • NB only the method is unsupported, the actual commands/utilities used are supported

• see BugID# 4011269 - vxva can build volumes which will not start on reboot

• see BugID# 4078591 - vxmake - volume is larger than the plex

• see BugID# 1247206 - some striped volumes, when created "bottom-up", cannot be started

• make sure you know what you are doing if using this method

  • most "bugs" found when using this method are actually user errors which the bottom up utilities don't sanity check

  • a classic example of this would be striped volumes that are created larger than the contiguous space in their component plexes


• use extreme caution if a volume that you have created via the bottom up approach does not start automatically without the use of the "force" option

  • fix the problem before placing data on this volume

Complete Lab - building volumes

Hot Sparing & Hot Relocation

• choose either Hot Sparing or Hot Relocation

  • do not attempt to enable both simultaneously

  • see the last lines of /etc/rc2.d/S95vxvm-recover for disabling or enabling an alternate drive failure recovery mechanism

• NB only redundant volumes can be automatically recovered, regardless of the "Hot" disk method in use

Hot Sparing

• the only option on VxVM versions prior to 2.3

• optional on VxVM versions higher than 2.3 (though a potential supportability issue)

• only kicks in on a full disk failure, i.e. I/O errors to both the public & private region of a disk (theoretical)

• only spares out to dedicated Hot Spare disk/s

• on a sparing out operation, the Hot Spare disk disappears (in VxVM terms) as it completely assumes the personality of the disk it replaces

• uses the vxsparecheck script/daemon

Hot Relocation

• default on VxVM versions 2.3 or greater

• enabled by default

• think of it as Hot Sparing by subdisk

• NB only subdisks on redundant volumes will be relocated, i.e. relocation only kicks in if data on the subdisk to be relocated can be automatically regenerated from a copy on another mirror or from RAID-5 data & parity information

• if a subdisk in a log plex is to be relocated, a new log plex will be created to perform the relocation

• uses Hot Spare disks in preference before using any free space in the diskgroup

• without modifications, the default vxrelocd script is capable of splitting a subdisk into multiple pieces on relocation

  • see the VxVM Release Notes on how to disable the splitting of subdisks during relocation

• VxVM 3.1 (or higher) introduces the unrelocate function vxunreloc

  • prior to vxunreloc, putting subdisks back in place was a manual process or done with utility scripts (like vxunrelocate, vxreconstruct)

  • extra fields were added to the subdisk structure in the diskgroup private region database to accommodate unrelocation

  • to support unrelocation, diskgroups must be of version 70 or higher

  • see vxdg upgrade for more information on upgrading diskgroups


• NB only one (1) unrelocation step per subdisk is supported, i.e. if a subdisk is relocated due to an I/O failure & then relocated again because of another I/O failure, an unrelocation operation will only return the subdisk to the location it was first relocated to... no further unrelocations can be performed on that subdisk

• NB subdisks will be unrelocated to their original disks but not necessarily to their original location on that disk

• NB when vxunreloc unrelocates subdisks, the subdisks will not revert to their original names

• NB similarly to when subdisks in log plexes are relocated, when subdisks in log plexes are unrelocated new plexes are created to complete the operation

• VxVM 2.3 introduced the failing flag & while this flag is not set by the Hot Relocation operation, it is not unusual to see this flag set on a disk that has been Hot Relocated from

• uses the vxrelocd script/daemon
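As a sketch, unrelocating subdisks back to their original disk with the VxVM >= 3.1 vxunreloc utility (the diskgroup & disk media names are hypothetical; check the vxunreloc man page for your version's exact options):

```shell
# relocated subdisks record their original disk media name; move them back
vxunreloc -g datadg datadg03

# force the unrelocation even if the original offsets are now occupied
vxunreloc -g datadg -f datadg03
```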

Complete Lab - Hot Sparing & Hot Relocation

Changing Volume Layouts

• one of the advantages of VxVM has always been the ability to perform most reconfiguration operations on-line without the need to schedule an outage for the storage system

  • growing or shrinking volumes

    • in stripes, this is performed by increasing column lengths, not by increasing the number of columns

  • adding or removing mirrors (plexes) from volumes

  • adding or removing DRL & RAID-5 logging devices

• as of VxVM 3.x, it is now possible to change the base configuration of a volume on-line

  • this feature does not require a separate licence

  • transform RAID-5 volumes to RAID 1+0 - known as relayout

  • add or remove columns from striped volumes - known as relayout

  • change concatenated volumes into striped volumes - known as relayout

  • transform RAID 0+1 volumes to RAID 1+0 volumes - known as conversion

• see the following commands for more information:-

  • vxassist relayout

  • vxassist convert

  • vxrelayout

• this functionality may require additional temporary storage capacity
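The relayout/conversion operations above might be driven like this (a sketch; the volume & diskgroup names are hypothetical, syntax as per VxVM 3.x vxassist):

```shell
# add columns to a striped volume (relayout)
vxassist -g datadg relayout datavol layout=stripe ncol=5

# transform RAID 0+1 (mirror-stripe) into RAID 1+0 (stripe-mirror) - conversion
vxassist -g datadg convert datavol layout=stripe-mirror

# check on a relayout in flight
vxrelayout -g datadg status datavol
```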

Complete Lab - changing volume layout

Logging

• VxVM employs two types of I/O transaction logging mechanisms

• the only features these two types of logging have in common are:-

  • they have the word "logging" in their name

  • they are part of VxVM


RAID-5 Logs

• ensure data/parity calculation protection in the event of a system crash or abnormal volume termination

• NB - degraded RAID-5 volumes will not auto-start at system boot

  • all degraded RAID-5 volumes must be manually started (see vxvol(1m)) on system reboot

  • having a RAID-5 log attached to the RAID-5 volume will not alter this fact

• to support enough concurrent access to the RAID-5 array, the log should be several times the stripe size of the RAID-5 plex

  • in the example below the RAID-5 log plex is 45 times the size of the stripe width

  • plex length / (number of columns * stripe unit width)

  • 4320/(3*32) = 45

• it is possible, though not good practice, to configure RAID-5 volumes without logs

  • the documentation recommends at least two RAID-5 log plexes per RAID-5 volume

    • this can be a performance penalty to an already slow volume type (but if you're worried about write performance you shouldn't be using RAID-5 anyway)

    • most installations use only one RAID-5 log plex

  • RAID-5 volumes without logs are subject to silent data/parity corruption in the event of a system crash

  • you have to have a very good reason not to use RAID-5 logs

• as RAID-5 parity is only used (& never automatically checked) in the event of a member disk failure, it is advisable to run the vxr5check(1m) utility on a regular basis to verify RAID-5 parity is, in fact, good

  • personally, I've never seen a customer, or anybody for that matter, actually do this

  • perhaps an RFE to have it performed automatically, as RM6 parityck(1m) does, is warranted

• RAID-5 logs appear as separate plexes with a state of "LOG" in vxprint output

• example vxprint -ht output

Disk group: ddg

DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL  PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT   NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE

v  r0           RAID-5       ENABLED  ACTIVE   20480    RAID     -
pl r0-01        r0           ENABLED  ACTIVE   21568    RAID     3/32      RW
sd ddg01-01     r0-01        ddg01    0        10800    0/0      c1t2d0    ENA
sd ddg02-01     r0-01        ddg02    0        10800    1/0      c1t2d1    ENA
sd ddg03-01     r0-01        ddg03    0        10800    2/0      c1t3d0    ENA
pl r0-02        r0           ENABLED  LOG      4320     CONCAT   -         RW
sd ddg04-01     r0-02        ddg04    0        4320     0        c1t4d0    ENA
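The log-sizing arithmetic above can be checked with plain shell arithmetic, using the figures from the example vxprint output:

```shell
# RAID-5 log sizing check: log plex length vs. full stripe width
log_len=4320        # LOG plex length in sectors (plex r0-02)
ncol=3              # RAID-5 columns (NCOL/WID field: 3/32)
sunit=32            # stripe unit width in sectors

stripe_width=$((ncol * sunit))
ratio=$((log_len / stripe_width))
echo "log plex is ${ratio}x the stripe width"   # prints 45
```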


Dirty Region Logs (DRLs)

• provide faster mirror resynchronisation in the event of a system crash or abnormal system termination only

• DRLs are a bitmap of "dirty" regions of the mirrored volume

  • in the event of an abnormal volume termination, the resynchronisation process only resynchronises the data block regions marked as "dirty" in the DRLs & not the entire volume

• do not provide SDS metaoffline/metaonline type partial synchronisation

  • see Fast Mirror Resynchronisation (FMR) in VxVM 3.1 or higher for this

  • FMR requires a separate, additional cost, purchasable licence

• additional I/O overhead for potentially little benefit as DRLs only have effect on abnormal volume termination, e.g.:-

  • system crash / panic / hang

  • loss of power to system

• DRLs are usually quite small in size

  • the default sizing is 2 sectors (at 512 bytes/sector) per 2 GB of volume size

  • the minimum DRL size is 2 sectors

  • the larger the DRL, the more granular the dirty region bitmap becomes

    • beware of making DRLs too large as only a certain number of regions can be dirty at any given time & throttling of performance could be induced if using too large a DRL

• DRLs have no beneficial effect on 1-way mirrors, though it is possible to set this up

• it is unsupported to apply DRLs to core-OS volumes ... /, /usr, /var, etc.

• DRLs usually appear as separate logging plexes, but it is possible (using bottom up commands) to attach a logging subdisk within a data plex

  • plex m0-03 in the example vxprint -ht output below shows a logging plex

  • subdisk z in the example vxprint -ht output below shows a logging subdisk within a data plex


• example vxprint -ht output

V  NAME         USETYPE      KSTATE   STATE    LENGTH   READPOL  PREFPLEX
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT   NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE

v  m0           fsgen        ENABLED  ACTIVE   20480    SELECT   -
pl m0-01        m0           ENABLED  ACTIVE   21600    CONCAT   -         RW
sd ddg04-02     m0-01        ddg04    4320     21600    0        c1t4d0    ENA
pl m0-02        m0           ENABLED  ACTIVE   21600    CONCAT   -         RW
sd ddg05-01     m0-02        ddg05    4320     21600    0        c1t4d1    ENA
pl m0-03        m0           ENABLED  ACTIVE   LOGONLY  CONCAT   -         RW
sd ddg04-03     m0-03        ddg04    25920    5        LOG      c1t4d0    ENA

v  m1           fsgen        ENABLED  ACTIVE   20480    SELECT   -
pl m1-01        m1           ENABLED  ACTIVE   21600    CONCAT   -         RW
sd z            m1-01        ddg05    0        4320     LOG      c1t4d1    ENA
sd ddg01-02     m1-01        ddg01    10800    21600    0        c1t2d0    ENA
pl m1-02        m1           ENABLED  ACTIVE   21600    CONCAT   -         RW
sd ddg02-02     m1-02        ddg02    10800    21600    0        c1t2d1    ENA
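The default DRL sizing rule quoted earlier (2 sectors per 2 GB of volume, minimum 2 sectors) can be sketched as shell arithmetic; the exact rounding VxVM applies is an assumption here:

```shell
# approximate default DRL size for a given volume size
vol_gb=20                            # hypothetical 20 GB volume

drl_sectors=$(( (vol_gb / 2) * 2 ))  # 2 sectors per 2 GB of volume
if [ "$drl_sectors" -lt 2 ]; then
    drl_sectors=2                    # enforce the 2-sector minimum
fi
echo "DRL: ${drl_sectors} sectors ($((drl_sectors * 512)) bytes)"
```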


Snapshots

• a VxVM snapshot is an additional write-only plex which is synched up & then broken off at a given point to form its own volume

• once the snapshot plex is broken off & the volume is formed, the data can be used independently

  • the data is a full, read/write image

  • once broken off, the snapshot volume no longer has any relationship with the volume it was snapped from

• commonly used practice for:-

  • reducing database down time during backups, as the database only needs to be shut down while the snapshot plex is broken off & not for the entire duration of the backup process

  • creating a test data image for software development & testing

• the entire VxVM volume is duplicated regardless of the volume contents & free/used space within the volume

  • requires free space in the diskgroup equal to the size of the volume being snapped

• do not confuse this function with VxFS snapshots

  • Veritas in their infinite wisdom named the 2 different technologies the same

  • VxFS snapshots (or snapfs' as I like to refer to them) are done at the filesystem level, not at the volume level

• snapshots of RAID-5 volumes are not supported in VxVM < 3.x

• it is a common request for the ability to move snapshot volumes into another diskgroup to allow deport & import onto another host for backup, testing, etc. purposes

  • the bad news ...

    • on VxVM <= 3.1.1 you will need to perform the steps as documented by SRDB# 14882

    • this is a manual process which entails saving volume configurations, destroying the volumes, removing the disks from the diskgroup, creating a new diskgroup & re-creating the volumes' logical structure ... perfectly functional but fraught with danger for the unwary

    • technically, this procedure is unsupported due to the risk of reconstructing the volumes' logical structure on the wrong disks

  • the good news ...

    • VxVM >= 3.2 (using diskgroup version 90 or higher) introduces the new vxdg subcommands move, split & join

    • these commands automate the procedure of moving disks, & the volumes contained therein, between diskgroups, essentially making SRDB# 14882 obsolete

    • the caveat is that the disks to be moved must be self-contained

  • the bad news about the good news ...

    • unfortunately the new vxdg subcommands move, split & join require a separate purchasable FMR licence, i.e. it costs extra, see FMR below
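As a sketch, the full snapshot cycle plus the VxVM >= 3.2 diskgroup split (volume, diskgroup & snapshot names are hypothetical; the split subcommand needs the FMR licence as noted):

```shell
# 1. attach & synchronise the snapshot plex (can run well ahead of time)
vxassist -g datadg snapstart oravol

# 2. quiesce the application, then break the snapshot off as its own volume
vxassist -g datadg snapshot oravol oravol-snap

# 3. (VxVM >= 3.2, dg version >= 90) split the snapshot's disks into a new
#    diskgroup & hand it to another host for backup
vxdg split datadg snapdg oravol-snap
vxdg deport snapdg
# ... on the backup host: vxdg import snapdg
```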


Fast Mirror Resyncs/Reconnects (FMR)

• Fast Mirror Resynchronisations are a new feature in VxVM 3.1

• FMR is an extension of the existing snapshot feature

• NB it is a licensable feature, i.e. it's a chargeable item

• provides SDS metaoffline/metaonline type functionality &, with VxVM >= 3.2 (using diskgroup version 90 or higher), the new vxdg subcommands move, split & join

• changes between the master volume & the FMR snapshot are tracked via an in-memory bitmap

  • accordingly, a system reboot will force a full mirror resync on plex reattachment

• VxVM 3.2 (using diskgroup version 90 or higher) introduces Persistent FMRs (PFMR)

  • changes between the master volume & the PFMR snapshot are tracked via a disk volume known as a Data Change Object (DCO)

  • as the name persistent suggests, this change tracking mechanism will survive a system reboot

• it is possible to force resynchronisations to be from the FMR snapshot back to the master volume

  • a possible use is simplified data upgrade rollbacks

• FMR is not supported on RAID-5 volumes

Complete Lab - snapshots

VxDMP - Dynamic Multi-Pathing

what it does

• generally speaking, VxDMP provides "intelligent" load balancing across the multiple paths to a disk on a round-robin basis

• active/active arrays

  • all working I/O paths to the same disk/LUN are used simultaneously

  • A5x00 arrays are active/active

• active/passive arrays

  • only one I/O path is used of the multiple possible I/O paths to the same disk/LUN, the other path(s) are only used in the case of primary path failure

  • T3-ES "partner pairs" are active/passive

    • to avoid "LUN thrashing" DMP does not round-robin I/Os down the multiple paths to a T3-ES

    • ensure VxVM >= 3.1, with the T3-ES aware VxDMP driver, is used with T3-ES units

      • Sun officially supports T3-ES with VxVM 3.0.4, but support for T3 arrays as active/passive is not specifically mentioned in the VxVM documentation until VxVM 3.1

      • my advice is to play safe & for T3-ES use VxVM >= 3.1

• VxDMP handles transparent failover of I/Os on a failed data path

  • when using VxDMP in VxVM < 3.0, once a failed I/O path is repaired the path must be manually brought on-line by using vxdctl enable, stopping & restarting vxconfigd, or performing a reboot

  • by default, VxDMP in VxVM >= 3.0 introduced the automatic probing of failed paths at regular intervals


  • once a failed I/O path is found to be repaired, the path, by default, is reactivated

  • to check the status of the VxDMP Restore Daemon use vxdmpadm stat restored

  • for more information see the section Administering Dynamic Multipathing in the VxVM Administrator's Guide

how it works

• disks with multiple paths are identified by hardware serial numbers, or WWNs for FC-AL disks

• VxVM masks additional paths to multi-pathed disks

  • the OS still sees multiple devices for the same disk though

  • the OS seeing multiple devices may change in Solaris >= 9 or with the use of MPXIO, aka Sun StorEdge Traffic Manager (STMS)

when should it be used?

• enabled by default at time of installation unless Sun Alternate Pathing (AP) packages are installed

• it is always enabled with VxVM >= 3.1.1

• unless there is a specific need to disable DMP, leave it enabled

  • it seems that Veritas does all their product testing with DMP enabled

  • past experience has shown that you are more likely to run into a corner-case bug if DMP is disabled - see BugID# 4187714

  • there was one VxVM version (& a Sun SEVM patch) where the system would not boot unless DMP was enabled - see BugID# 4203247

• if VxVM < 3.0.2 is in use & RM6 LUNs are under VxVM control then DMP should be disabled

  • see FIN# I0511

how do I disable DMP if I need to?

• only applies to VxVM < 3.1.1; VxVM >= 3.1.1 requires DMP to be enabled or VxVM will not function

  • for VxVM >= 3.1.1 see the vxdmpadm command or Chris Kiessling's excellent tutorial at http://storage.east/vxvm/vxdmp_311_tutorial.html

• see InfoDoc# 18314 ... not just an /etc/system edit

• after disabling vxdmp, be aware that patching VxVM will re-install the vxdmp driver in /kernel/drv &/or /kernel/drv/sparcv9

  • if DMP is disabled, after patching VxVM always check that the vxdmp drivers have not been re-installed

  • after patching VxVM & performing a reboot, perform a modinfo | grep vxdmp to ensure the vxdmp driver has not been loaded

• be careful when deciding to disable DMP, there are bugs against various VxVM versions with both DMP enabled & disabled

  • moral of the story ... disabling DMP is not a panacea for all VxVM issues

how do I enable DMP if I need to?

• only applies to VxVM < 3.1.1

  • for VxVM >= 3.1.1 see the vxdmpadm command for disabling DMP on a path by path basis or Chris Kiessling's excellent tutorial at http://storage.East/vxvm/vxdmp_311_tutorial.html

• see InfoDoc# 19639 ... not just an /etc/system edit

vxdmpadm

• utility to allow the off-lining/on-lining of DMP paths to disks

• primarily designed to allow VxVM interaction/coexistence with ...

  • Alternate Pathing (AP) - this is VxVM version dependent

  • STORtools

• command options & syntax vary from version to version

• will be used a lot more frequently in VxVM >= 3.1.1 as DMP cannot be disabled
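A sketch of typical vxdmpadm usage (the controller name is hypothetical & the options vary between VxVM versions, as noted above):

```shell
# list controllers & the DMP state of their paths
vxdmpadm listctlr all

# show the subpaths behind one controller
vxdmpadm getsubpaths ctlr=c2

# take a controller's paths off-line (e.g. for STORtools loop testing)
vxdmpadm disable ctlr=c2

# and bring them back
vxdmpadm enable ctlr=c2
```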


Booting from a VxVM encapsulated disk

• VxVM is not a core volume manager, i.e. it is not built into the Solaris kernel

  • Solaris boots via its standard method

    • the mirrored copy of the boot disk is not available at this time

      • if the primary boot disk fails early in the boot process, the secondary (mirror) boot disk must be manually booted from

      • ensure boot device aliases are correct before a failure is encountered

  • VxVM core volumes are started by an rcS.d script

    • if VxVM cannot start its core volumes it will halt the system

  • Solaris mounts the core OS filesystems on VxVM volumes

• using a non-mirrored VxVM boot disk as a rootdg placeholder, while commonly practiced, is to be avoided if at all possible

  • an extra level of complexity for zero redundancy benefit

  • it is possible for the only plex in rootvol to be placed in an unstartable state

    • can lead to cases of a healthy, bootable OS being unable to boot because of a spurious VxVM issue

    • a difficult situation to recover from

Hardware considerations

• when booting off fibre-attached disk (A5K, E3500 internal disks, T3) ...

  • place boot mirrors on separate FC-AL loops

  • see FIN# I0421 for FC-AL root disk replacement & data restoration

  • do not mix SBus & PCI (ifp driver) hosts on the same FC-AL loop (may change with the release of the new PCI FC-AL cards)

• do not place A[13]xx LUNs in rootdg

• a disk that does not have a slice 2 cannot be initialised into the VxVM configuration

• when upgrading disk or array firmware ...

  • boot from CD-ROM or network, if possible, as this saves the long & tortured (& only partially documented) procedure of disabling VxVM to facilitate the firmware upgrade

• ensure the OBP variable use-nvramrc? is set to true

  • there may be problems after encapsulation or root disk replacement if use-nvramrc? is not set

  • define manual root disk & root mirror devaliases as VxVM can & will "mess" with its own vx-diskname entries, especially during a VxVM upgrade

• use sliced disks where possible to facilitate the ability to "shuffle" disks & controller numbers around without upsetting the VxVM configuration

  • it is possible that two (2) reboots may be required before all devices come back on-line

  • if the device nodes for the "slots" that the disks have moved to don't already exist, a reconfiguration boot (boot -r) will be required to configure the "new" disks into Solaris before VxVM can find & use the disks


Installation / Upgrade Considerations

• if encapsulating the boot disk

  • ensure all core OS filesystems & partitions reside on the boot disk

    • /, /usr, /var, primary swap

  • as a precaution ...

    • set alt (tag #9) tags on non-core OS partitions/filesystems

      • do not leave the default tag of unassigned (tag #0) in place

    • save the boot disk VTOC away before encapsulation

    • take a copy of /etc/vfstab before encapsulation

• some of the VxVM documentation states that to upgrade VxVM, the new VxVM packages should simply be installed over the existing packages

  • this results in VRTSvxvm, VRTSvxvm.1, VRTSvxvm.2 type packages in the package database

    • messy SysAdmin practice

    • may affect patching?

  • follow the procedure to upgrade VxVM with a Solaris upgrade instead (although don't do the Solaris upgrade unless you were planning to) as this properly removes the existing packages before installing the new VxVM packages

• using SDS to manage boot disks

  • this is popular in some geos as it affords a simpler recovery mechanism should there be an issue that requires the boot disk to be brought out from under a volume manager's control

  • the downside is that the customer must learn at least two pieces of volume management software

  • unfortunately VxVM does not recognise a SDS-managed disk & vice-versa

    • there is the potential to scribble over disks that are in-use, if due care is not exercised

• unmirrored VxVM-managed boot disks are not, generally speaking, a good idea

  • introduces an extra level of complexity with no perceivable benefit

  • makes recovery more difficult in the event of an OS disk failure

• use correct tags on root disk partitions before encapsulating

  • swap may not be configured properly if not tagged as type sw

  • conversely, if filesystem slices are tagged as sw, volumes may be created incorrectly

• configuring rootdg on simple slices

  • is officially unsupported by Veritas & hence, by association, is unsupported by Sun

  • this functionality does work however (just don't do a vxdctl init without re-adding the rootdg slices) & is used in common practice

  • was formerly documented as part of the Sun Cluster installation documentation

• if a VxVM version change introduces a diskgroup functionality enhancement, any existing diskgroups may have to be manually upgraded to the latest version to access that additional functionality

  • can be done on-line without an outage, very painless

  • see the vxdg upgrade command


  • non-reversible procedure, i.e. don't upgrade if you think you might have to roll back your VxVM upgrade
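Upgrading a diskgroup version is a one-liner (the diskgroup name is hypothetical):

```shell
# show the current diskgroup version
vxdg list datadg | grep version

# upgrade the diskgroup to the highest version this VxVM release supports
vxdg upgrade datadg
```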

• the maximum number of objects (disks, volumes, plexes, subdisks, etc.) in a diskgroup is dictated by the size of the diskgroup's smallest private region

  • it is possible, if using lots of VxVM objects, to max out a diskgroup

  • increasing the diskgroup private region size is not a good option, not only is it painful to implement (often requiring a full backup, reconfigure & restore) but it can also lead to support issues when replacing a failed disk

  • if you max out a diskgroup, the best & safest option is to create an additional diskgroup & create your new objects there

  • bear in mind also that the larger a diskgroup is, the longer it takes to import

  • large diskgroups can have problems being automatically imported on reboot due to time-outs

• number of active diskgroup configuration copies

  • by default all disks with a private region have space reserved for a diskgroup config copy

  • by default "enough" diskgroup config copies are enabled across controllers, targets & disks to ensure that the failure of a given controller will not render the diskgroup inaccessible

  • if a disk that is holding a diskgroup config copy fails, another disk will take its place by enabling its config copy

    • usually a message will be logged to this effect

  • you can manually set the number of diskgroup config copies

    • vxedit set nconfig=0 dgname (default, where VxVM configures "enough")

    • vxedit set nconfig=N dgname (only allow N config copies)

    • vxedit set nconfig=-1 dgname (all disks should have config copies)

  • VxVM does not allow the user to control where diskgroup config copies are placed

    • it is theoretically possible to manipulate VxVM to influence its placement of diskgroup config copies, but for practical purposes it is not a viable option

OS considerations

• do not grow core OS volumes

  • /, /usr, /var, /opt, primary swap, etc.

  • the system may not boot

  • upgrades will require significant manual intervention to complete if core OS volumes are grown

• do not concat, stripe or RAID-5 core OS volumes

  • for exactly the same reasons as why you wouldn't grow core OS filesystems

• ensure all core OS filesystems are on the root disk if encapsulating

  • VxVM assumes that all core OS filesystems reside on the one root disk

  • if core OS filesystems are not all on the root disk, upgrades become problematic as the entire OS cannot be unencapsulated using the standard upgrade procedure

• gen & fsgen usage types for volumes

  • usage types are used to indicate to VxVM what sort of data may be contained within the volume


  • defines a particular class of rules for operating on the volume

  • gen volumes are assumed to contain "raw" type information, e.g. swap space, ORACLE tablespace

  • fsgen volumes are assumed to contain filesystems, e.g. UFS, VxFS

  • effort should be made to ensure that the volume usage type selected on volume creation is accurate for the volume's projected usage

    • a volume's usage type cannot be changed without destroying & recreating the volume

• having the /opt filesystem outside of rootdg & not on the root disk can cause issues with OS upgrades

  • if possible keep /opt on the root disk

  • not a major issue but should be taken into consideration

• how are dump devices handled?

  • Solaris <= 2.6

    • VxVM defines its own dump device from an available swap slice in rootdg

    • it is possible for dump devices in this situation to round-robin, causing valid dumps to be missed by savecore(1m) & overwritten by mirror resync or normal system paging operation

    • to avoid the round-robin effect, implement the /etc/rcS.d/S35newdumpdevice script available from http://spider.Aus/utils/

  • Solaris >= 7

    • protocol for dumpadm(1m) handling from the /etc/rc2.d/S85vxvm-startup2 script:-

# Upon encapsulation, will save /etc/dumpadm.conf & dumpadm output
# IF dedicated partition, will remain so.
# IF swap partition, will adjust to first swap added by VM.
# IF subsequently user changes dump partition, will start using that.
# IF user needs to change back to using swap, will need to remove
# CONFORIG file

• what to do with swap volumes ...

• the primary swap device must be located in rootdg & on the boot disk

• this rule primarily applies to Solaris <= 2.6, as the recommended procedure for Solaris >= 7 is to assign a dedicated dump device

• additional swap devices can be in any diskgroup

• some of the VxVM documentation states that all swap must be in rootdg; this is not the case

• if non-rootdg swap devices are listed in /etc/vfstab for mounting at boot you will see some error messages from VxVM as the system boots

• the errors appear because Solaris is asking VxVM to access swap volumes which are on diskgroups yet to be imported/started

• these messages can be safely ignored as the swap volumes are mounted later in the boot process when their diskgroups are available to the system

• it is common practice to define swap volumes within application diskgroups as the swap is really required by the application & not by the OS directly

• via the diskgroup deport/import function, this allows application swap to move along with the application itself

• if not using VxVM to manage the system boot disks, swap volume disks can be a good placeholder for a minimal rootdg
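For illustration, a non-rootdg swap volume is listed in /etc/vfstab like any other swap device; a hypothetical entry for a volume swapvol in an application diskgroup appdg would look something like:

```
/dev/vx/dsk/appdg/swapvol   -   -   swap   -   no   -
```

An entry of this kind is what triggers the harmless boot-time error messages noted above, since appdg is not yet imported when the entry is first processed.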


• swap is part of the virtual memory system; to Solaris an error in swap is equivalent to an uncorrectable memory error & a panic will ensue

• highly recommend customers mirror swap volumes

• VxVM failure notifications are pretty much a one-off deal

• e-mail sent to root (by default)

• minimal information is logged to syslogd

• the unsupported utility vdisk.healthcheck may be useful here as it continually advises of an issue until the situation is rectified

• VxVM device nodes /dev/vx/[r]dsk/* are built on the fly

• use vxedit to set owner, group & permissions for things like raw database tablespace

• this information will be written to the diskgroup configuration & hence survive reboots

• using the Solaris chown, chgrp & chmod commands is a temporary fix & will not survive a reboot or diskgroup deport/import
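A sketch of the persistent approach, with hypothetical diskgroup, volume, user and group names; the command is echoed as a dry run rather than run against a live configuration:

```shell
# Dry-run sketch: vxedit records ownership/permissions in the diskgroup
# configuration, so they survive reboots and deport/import cycles.
# oradg, tblspc01, oracle and dba are all hypothetical names.
CMD="vxedit -g oradg set user=oracle group=dba mode=0660 tblspc01"
echo "$CMD"
```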

• the minor numbers of VxVM volumes are determined by the diskgroup in which they reside

• it is possible for two hosts to assign the same diskgroup minor id range to different diskgroups

• if ever these two diskgroups (sharing the same minor number range) are imported onto the same host there will be a conflict

• this is temporarily resolved by VxVM automatically reminoring the diskgroup being imported to be in a very high range

• use the vxdg reminor command to permanently resolve VxVM minor number conflicts

• NB do not use the temporary minor number automatically generated by VxVM as a basis for a new minor number

• the temporary number is very high, near the top of the maximum allowable minor number range, & will probably prevent additional volumes from being created in the diskgroup
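A sketch of the permanent fix, assuming a hypothetical diskgroup appdg and a deliberately chosen base minor number; the command is echoed as a dry run:

```shell
# Dry-run sketch: reminor the diskgroup to a low, unused base rather than
# keeping the very high temporary base VxVM assigned on import.
DG=appdg
BASE=40000   # hypothetical base; leave headroom for future volumes

CMD="vxdg reminor $DG $BASE"
echo "$CMD"
```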

• on systems that have their rootdg on shared storage along with the rootdg of other systems, a warning message will be seen on VxVM start-up as vxconfigd probes all the attached disks

• this message is indicative of the booting system's VxVM discovering another diskgroup (usually from another system) called rootdg

• this message can be safely ignored

• see SRDB# 19092

• having multiple rootdgs with the same VxVM hostid attached to the same system is unsupported

• VxVM gets confused about which rootdg to use on boot

• don't tell customers as it is a nightmare to support, but there is a workaround:-

• use different VxVM hostids on each rootdg

• will need to manually import non-rootdg diskgroups on reboot


Kernel Tuning

• VxVM tunable parameter applicability will vary from VxVM version to VxVM version

• do not assume tunable parameters documented for VxVM version N will work, or not cause issues, when applied to VxVM version N+1

• /etc/system tuning parameters can usually be found in the VxVM Administrator's Reference Guide

• ensure you consult the reference guide for the version of VxVM in use

• the following information is straight out of the Volume Manager Support Readiness Training - Advanced Concepts - Tuning Guidelines documentation

• tuning VxVM may increase performance on larger systems at the expense of valuable resources

• by default VxVM is tuned to run on the smallest supported configurations

• don't ask me why, it just is

• tuning can adversely affect overall system performance if care is not taken

• performance is restricted by what hardware is attached and how it is configured

• do not make changes to VxVM tunables without benchmarking the system beforehand

• use the same logic that should be applied to Solaris kernel tuning
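As an illustration of the format only — the tunable below (vol_maxio, a vxio driver parameter) and its value are examples, and must be verified against the Administrator's Reference Guide for the VxVM version in use before being added to /etc/system:

```
* Example VxVM tunable entry - verify name and value against the
* Administrator's Reference Guide for the installed VxVM version.
set vxio:vol_maxio=2048
```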

Multiple-Initiator (MI) Disks

• only supported on SSA, A5K & T3 outside of a Sun Cluster environment

• multi-initiator non-fibre disk configurations (A[13]xx, Multipacks, D1000) are only supported in a Sun Cluster framework

• VxVM does not use SCSI reserve commands or any hardware/OS locking for MI disks

• VxVM uses its own software-based locking mechanism

• writes the VxVM hostid of the current owning host into the disk's private region

• writes the VxVM hostid of the current owning host into the diskgroup's configuration

• a force over-ride diskgroup import option is available for forcibly importing a diskgroup from a "crashed" host

• if a diskgroup is simultaneously imported on more than one host (a forcible import was probably performed from host B when host A wasn't really down)

• you will see Duplicate record in configuration errors caused by each host's vxconfigd updating the same private regions

• if this message is seen the diskgroup is usually corrupted beyond repair

• you will have to recover the diskgroup configuration from saved VxVM configuration output


Transaction Locking

• when VxVM is performing an operation on an object it will set a tutil flag to indicate to other VxVM utilities that the object is currently busy (in use)

• in the event of a utility terminating abnormally this tutil flag may not be cleared

• typically you will observe an error similar to the following:-

• vxvm:vxplex: ERROR: Plex v1-02 in volume v1 is locked by another utility

• exhibit extreme caution before manually clearing these flags as object corruption can occur

• use ps to check that no resync processes are still running for the given object

• if VxVM >= 3.0, try checking the vxtask list output for status too

• you can use vxedit set or vxmend clear all to clear these flags
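The checks and the clearing command can be sketched as follows, with hypothetical diskgroup and plex names; the commands are echoed as a dry run, since clearing a lock on an object that is genuinely busy can corrupt it:

```shell
# Dry-run sketch: verify nothing is still working on the object before
# clearing a stale tutil lock. ddg and v1-02 are hypothetical names.
CHECK_PS="ps -ef | grep vx"              # any resync process still running?
CHECK_TASK="vxtask list"                 # VxVM >= 3.0: check task status too
CLEAR="vxmend -g ddg clear all v1-02"    # only once you are sure it is stale

echo "$CHECK_PS"; echo "$CHECK_TASK"; echo "$CLEAR"
```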

• putil0, putil1, putil2

• permanent flags (non-volatile, will survive a reboot)

• commonly used for commenting an object, e.g. "Block0" on the boot block

• tutil0, tutil1, tutil2

• temporary flags (volatile, will not survive a reboot)

• to see an example of the locking mechanism, attach a plex to a volume & suspend the process, then check the vxprint output for the volume & observe the utility field entries

Configuration Recovery

• while not as straightforward as SDS, it is possible to completely recover a VxVM configuration without data loss if you have prepared in advance

• assumes the underlying data was unaffected by the catastrophe

• core OS volumes in rootdg are an exception due to the encapsulation process

• yet another good reason to encourage the use of separate diskgroups

• the vxinfosave script (officially unsupported but in common use) will automate this process

• the raw data required to rebuild a VxVM configuration by hand would include the following:-

• vxprtvtoc output for every VxVM managed disk on the system

• used as potential input to the vxedvtoc command

• vxprint -g dgname -vpshm output for each VxVM diskgroup

• vxdisk list output

• the vxmake command is used to relay the diskgroup configuration back over the disks

• NB vxmake is an atomic command, it either succeeds or fails in its entirety, i.e. either all objects are created or none are

• very large diskgroup configurations may need to be broken down into smaller chunks as there is a timeout value associated with vxmake successfully completing its task
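The save and rebuild steps above can be sketched as below, with a hypothetical diskgroup ddg and file locations; the commands are echoed as a dry run:

```shell
# Dry-run sketch: save a vxmake-readable description of the diskgroup,
# plus supporting disk information, then replay it with vxmake.
# Diskgroup name and file paths are hypothetical.
DG=ddg

SAVE="vxprint -g $DG -vpshm > /var/tmp/$DG.vxmake"   # vxmake-format description
DISKS="vxdisk list > /var/tmp/vxdisk.out"            # supporting disk detail
REBUILD="vxmake -g $DG -d /var/tmp/$DG.vxmake"       # atomic replay of the config

echo "$SAVE"; echo "$DISKS"; echo "$REBUILD"
```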

Complete Lab - configuration recovery


Patches

• for SEVM, patches came from Sun

• available on SunSolve

• very comprehensive patches, sometimes incorporating several VxVM version upgrades

• for VxVM >= 3.0, patches come directly from Veritas

• in patchadd format

• replicated on SunSolve (after a while)

• the Veritas VxVM patching model is a bit of a departure from the Sun model

• historically, Veritas have not released patches, they have only released minor "dot" upgrades

• historically "point patches" have been released but not in patchadd format

• VxVM upgrades will probably be much more frequent than SEVM upgrades were, as VxVM patches now seem to be point patches in patchadd format

Bugs & Escalations

• the Bugtraq category for SEVM & VxVM bugs is sevm

• CTE opens a Veritas "incident" on an escalation

Product Licensing

• use vxserial or vxlicense to install or examine volume manager licences

• NB do not use vxserial to maintain licences if the VxVM version has the vxlicense binary, as the two are not 100% compatible

• Veritas uses the Elan Licence Manager and accordingly licence files are stored in the /etc/vx/elm directory

• with VxVM >= 3.2 the licence software is included in a separate package, VRTSlic

• NB this package must be installed before any of the other VxVM packages

• SSA – hardware is the licence, full licence (including striping & RAID-5) on disks within the SSA, concatenate & mirror only on other disks

• A5K – hardware is the licence, full licence (including striping & RAID-5) on disks within the A5K, concatenate & mirror only on other disks

• T3 - a nightmare, WorkGroup/Enterprise & attached platform dependent

• requires key obtained from Sun Licence Centre

• Unipack, Multipack, D1000, A[13]xx series - VxVM is a purchasable extra

• vxserial or vxlicense output on systems using hardware device keys (SSA or A5K) may show an expiration date

• this key will automatically renew, & extend the expiration date, during normal system operation when VxVM probes the devices to ensure they are still connected

• every now & again VxVM patches break the hardware device key licencing

• e.g. at one stage, a certain VxVM patch level meant you could have either your SSAs or your A5Ks attached to a system & recognised, but not both at the same time - BugID# 4100943


• if this ever happens, apply the VxVM demo licence to the system until you can get it sorted out

http://webhome.ebay/tpsoftware/products/vxvm/licenses/vmFCSlicense

http://spider.Aus/storage/files/vxvm/lic/vxvm.license.key

InfoDocs/SRDBs

• InfoDoc# 14820 - SEVM - How to recover a primary boot disk.

• InfoDoc# 18314 - How to disable DMP

• InfoDoc# 19639 - How to re-enable DMP

• InfoDoc# 21725 - Unencapsulating VxVM boot disks [Third Party Recommendation]

• SRDB# 14882 - Moving volumes from one diskgroup to another

• SRDB# 19092 - SEVM - warning at boot time concerning "group ID differs".

• SRDB# 25063 - VxVM - Error: vxvm:vxdg: disk public region is too small

Personalities

• anything VxVM-related authored by the people listed below is the genuine article, superseding software or functionality excepted; these guys really know their stuff

• Chris "ck" Kiessling (East)

• Ken Booth (UK)

• anything VxVM-related authored by the people listed below is generally pretty good info, superseding software or functionality excepted; these people only make the odd mistake

• Brian Wong (Eng)

• Terrie Douglas (Eng)

• Joe Harman (EBay) - Joe is especially good value on SSA & A5K related matters

man pages

• most of the VxVM manual pages are on-line at fde.Aus

• many other manual pages are also on-line here

http://fde.Aus/cgi-bin/man

http://fde.Aus/cgi-bin/list.man.pages

URLs

General pages

• A Case for Redundant Arrays of Inexpensive Disks (RAID)

http://sunsite.berkeley.edu/Dienst/UI/2.0/Describe/ncstrl.ucb/CSD-87-391

Veritas Software (corporation) pages

• VxVM product information page

http://www.veritas.com/us/products/volumemanager/

• VxVM Technical Notes (like Sun's InfoDocs & SRDBs)


http://seer.support.veritas.com/tnotes/volumeman/

• Veritas' Knowledge Base (SunSolve-type system)

http://support.veritas.com/menu_ddProduct_VOLUMEMAN.htm

Chris Kiessling's pages

• excellent page for VxVM & other storage-related resources

http://storage.east/ck/

• Chris' well received VxVM Advanced TOI

http://storage.east/ck/vxvmclass.html

• various InfoDocs & SRDBs written by Chris

http://storage.east/ck/srdbs.html

• a VxVM/VxFS support matrix table

http://storage.east/ck/vmtable.html

UK Data Centre Storage Group page

• good tips, tricks, cribs & howto's page run by Ken Booth

http://service.uk/vxvm/

E10K RAS AP & SEVM Interaction page

• a bit old but a good backgrounder

http://esp.west/pubs/ras_companion/ap_sevm.html


Course Labs

Do not use a GUI to perform any of the lab exercises, CLI or ASCII output utilities only!

Lab - VxVM installation

1. install the VxVM packages onto your lab system

• install the VxVM version provided by the instructor

• you can choose “Enclosure Based Naming” at your discretion

• perform a Custom installation

• if you are not using an array that provides a VxVM hardware license, install the temporary license from the location provided

2. perform the VxVM installation

• configure 1 disk into rootdg

• do not encapsulate your boot disk

• leave the remaining disks alone

3. reboot your system if instructed to

4. examine VxVM's view of your system's disks

• command used: _______________________________________________________________

5. examine VxVM's view of your system's diskgroups

• command used: _______________________________________________________________

6. examine VxVM's configuration daemon status

• command used: _______________________________________________________________

7. examine VxVM's loaded licences

• command used: _______________________________________________________________

8. examine VxVM's /etc/vx/volboot file

• what is the size of the file in bytes: _______________________________________________

9. create a new diskgroup with the remaining free disks, except for one, in your array

• leave one (1) free disk

• do not encapsulate your boot disk

• name the diskgroup ddg

• commands used: _______________________________________________

10. re-examine VxVM's view of your system's disks

11. re-examine VxVM's view of your system's diskgroups

12. examine a VxVM disk's detailed information

• what is the slice number, offset & length of the disk's private region?

• slice: _______________ offset: _______________ length: _______________

• how many active configuration & log copies does the disk's private region have?

• configuration: _______________ log: _______________


Lab - diskgroups

1. create a diskgroup called test containing a single disk

• remove a disk from the ddg or rootdg diskgroup if necessary

2. create a volume called v0 in the test diskgroup

• use vxassist -g test make v0 10m

3. execute a vxdisk list command

• paying particular attention to the DISK & GROUP columns, what do you notice about the disks in your test diskgroup?

4. change the owner & group, using chown on the device entry for volume v0 in the test diskgroup, to be sys, sys

5. verify the device file's owner & group have changed

6. deport the test diskgroup

7. execute a vxdisk list command

• paying particular attention to the DISK & GROUP columns, what do you notice about the disks in your test diskgroup?

8. execute a vxdisk -o alldgs list command

• paying particular attention to the DISK & GROUP columns, what do you notice about the disks in your test diskgroup now?

9. import the test diskgroup

10. what is the owner & group of the volume v0 in the test diskgroup?

• why?

11. newfs(1m) the v0 volume in the test diskgroup

12. deport the test diskgroup

13. import the test diskgroup by the new name of toast

14. start all the volumes in the toast diskgroup

15. destroy the toast diskgroup

• notice anything scary?

16. execute a vxdisk -o alldgs list command

• what do you notice about the disks in your toast diskgroup now?


Lab - building volumes

1. create a 10MB volume in the ddg diskgroup called v0

2. examine the layout of the volume just created

3. add a mirror to the v0 volume in ddg

4. examine the volume layout with the added mirror

5. how is the mirroring implemented in the VxVM volume v0?

6. create a mirrored 10MB volume in the ddg diskgroup called v1

• use only one command to complete the task

7. create a RAID-5 10MB volume in the ddg diskgroup called r0

• if you can't create the volume, explain why

8. add a log to the r0 volume in ddg

• if you can't add the log, explain why

9. check the parity on your RAID-5 volume

10. rebuild the parity on your RAID-5 volume

• tip - see the vxvol(1m) man page in the RAID-5 Usage Type section

11. create a mirrored, 3-way striped 10MB volume in the ddg diskgroup called v2

• use only one command to complete the task

12. create a mirrored, 3-way striped 2GB volume in the ddg diskgroup called p0

• use only one command to complete the task

• while the volume is being created, use vxtask list to check on the status

13. examine the layout of the volume just created

• what is different about the layout compared to volume v2?


Lab - Hot Sparing & Hot Relocation

1. identify which method, if any, of automated disk recovery (i.e. Hot Sparing or Hot Relocation) your system is using

• how can you tell?

2. save the partition table of a disk in the ddg diskgroup that contains VxVM subdisks out to a file

3. remove the public region slice of the disk

• this used to simulate a bad block on a hard disk

• since about VxVM 3.2 it seems to completely kill the disk

• to simulate a total hard disk failure both the public & private regions usually have to be removed

• this seemed to change with VxVM 3.2 also

4. create a filesystem on at least one of the volumes having a subdisk on the disk whose public region you just removed

5. monitor the console, examine the /var/adm/messages file, read the root e-mail messages

• what do you notice?

6. if you have any other volumes (with subdisks on the failing disk) which you did not access, what do you notice about their subdisks?

• your results will differ depending on the version of VxVM you are using

7. after the recovery operation has completed, what state is the volume in?

8. what state is the disk in from which we removed the public region?

• your results will differ depending on the version of VxVM you are using

9. perform the VxVM steps of the service procedure to replace the failing disk

• remove the private region slice of the disk also to simulate the installation of a new disk

10. restore the configuration to as it was before the disk failure

• use only one VxVM command to achieve this

11. do you notice anything about the subdisks affected by the configuration restoration?


Lab - changing volume layout

1. change the layout of volume v0 to be a 3-way RAID-5 volume

• confirm & examine the volume layout changes

2. change the layout of volume v0 to be a concatenated, mirrored volume

• confirm & examine the volume layout changes

3. change the layout of volume v0 to the complementary RAID-0, RAID-1 combination

• i.e. if the volume is RAID 0+1 change it to RAID 1+0, if the volume is RAID 1+0 change it to RAID 0+1

4. what do you notice about the time taken to convert as opposed to the time taken to relayout?

5. why do you think this is?

6. create a new 500MB non-mirrored volume in the ddg diskgroup called t0

7. add a mirror to this volume & use the time(1) command to time the operation

• note the time taken to complete the operation

8. add another mirror to the t0 volume but this time alter the options, use the command below

• time vxassist -o slow=1 -g ddg mirror t0

• note the time taken to complete the operation

• what do you notice about the time between the two mirror adding operations?

9. create a 2GB, 3-way stripe, mirrored volume called p1 in the ddg diskgroup

10. stop the p1 volume in the ddg diskgroup

• what happens to the sub-volumes?

11. remove the p1 volume & all its components with a single command


Lab - snapshots

1. create a snapshot of the v2 volume in diskgroup ddg

• view the v2 volume's configuration

2. what has changed with the v2 volume?

3. what is the mode of the snap plex?

4. what is the size of the snap plex?

5. abort the snapshot of the v2 volume in diskgroup ddg

• view the v2 volume's configuration

6. what has changed with the v2 volume?

7. create a snapshot of the v2 volume in diskgroup ddg

8. break off the snapshot of the v2 volume in diskgroup ddg

• view the v2 volume's configuration

9. what new volume has been created?

10. what relationship does this new volume now have to the v2 volume?


Lab - configuration recovery

1. save the VxVM configuration information for the entire system out to a file

• hint - save it in a format suitable for reading by humans as we will be using it later for comparison with our recovered configuration

2. save all of the VxVM (i.e. volume, plex, subdisk, etc.) configuration information for the ddg diskgroup in a format suitable for recovery via the vxmake(1m) command

3. remove all of the volumes & their component objects from the ddg diskgroup

• hint - see the lab solution for a script to do the job

• hint - you may need to run the script twice to cater for professional (aka layered) volumes

4. verify the diskgroup ddg is now empty

5. recover all of the VxVM objects in diskgroup ddg from the configuration information saved earlier

• hint - don't forget to start the volumes

6. save the VxVM configuration information for the entire system out to a file

• NB use a different file to that used in step 1

7. compare the two saved VxVM configuration information files

8. do they match?

9. if they do not match, why don't they match?

10. if they do not match, can the recovered configuration be made to match the saved configuration?

11. what part of the configuration of the ddg diskgroup did we not recover?

12. why is this an important point to note?


Lab - root disk encapsulation

1. ensure your existing disk in rootdg is the same size or larger than your system's boot disk

2. rename the existing rootdg disk to be called rootmir

3. encapsulate your boot disk into VxVM control

• name your boot disk rootpri

4. reboot your machine as instructed

• note the steps VxVM goes through to encapsulate the boot disk

5. use the eeprom command to examine your machine's settings

• what do you notice?

6. what value is the use-nvramrc? variable set to?

7. mirror your boot disk to the other disk in rootdg

8. use the eeprom command to examine your machine's settings again

• what do you notice?

9. ensure the use-nvramrc? variable is set to true

10. check the boot-device setting lists both vx-rootpri & vx-rootmir

11. turn off your machine's auto-boot capability

12. shutdown your machine to the OBP prompt

13. boot from the boot disk mirror

14. when you get a login prompt, login and run the iostat -xn 5 command

• what do you notice about the primary boot disk & the mirror boot disk?


Lab - root disk recovery

1. note the layout of your primary boot disk rootpri

2. simulate a primary root disk failure

• remove all slices, except slice 2, from your primary boot disk rootpri

3. if your disk hasn't been failed by VxVM yet, prompt it into action by doing some I/O to the operating system

4. did VxVM "hot relocate" any of your volumes' subdisks?

• if so, why?

• if not, why not?

5. use InfoDoc# 14820 to recover the primary boot disk

• note the references to particular VxVM (& SEVM) versions & patch levels for applicability of the steps

6. compare the layout of the recovered disk to the original layout noted in step 1

• what do you notice?


Lab - moving volumes between diskgroups

1. assume you have no FMR licence & so you can not use the vxdg move, vxdg split or vxdg join commands in VxVM >= 3.2

2. remove all existing volumes from the ddg diskgroup

3. create a 10MB 2-way mirrored volume called v0 in the ddg diskgroup

4. create a filesystem on the v0 volume

5. mount the v0 volume & copy the /etc/passwd file to the mount directory

6. unmount the v0 volume

7. take a snapshot of the v0 volume to a volume named s0

8. mount the s0 volume & verify it contains a copy of the /etc/passwd file

9. unmount the s0 volume

10. move the s0 volume from the ddg diskgroup into a new diskgroup called edg

• this will require multiple steps to be performed & commands to be issued

11. mount the s0 volume in the edg diskgroup & verify it contains a copy of the /etc/passwd file


Lab - changing objects

1. rename volume v0 to be m0 in the ddg diskgroup

• tip - ensure the volume is not in use before attempting this, i.e. the volume is not mounted, does not have swap defined on it & is not in use as database raw table space

2. examine the vxprint output, is the name change to the volume recursively applied to its objects?

3. rename a disk containing subdisks in the ddg diskgroup to be called fred

4. examine the vxprint output, is the name change to the disk recursively applied to its objects?

5. change the size of volume m0 to be 5MB

• NB do not resize the volume's plexes or subdisks, simply alter the volume's length

6. use vxassist to grow the m0 volume to be 20MB

7. use vxassist to grow the m0 volume by 10MB onto different disk drives than those which it currently occupies

• verify via vxprint that the m0 volume has been grown using a different set of disks to those previously in use

8. select a disk that currently has a subdisk in use by the m0 volume & issue a single vxassist command to move all subdisks in use by volume m0 from that disk

9. set the putil2 flag on volume m0 to be "my volume"

10. verify the putil2 flag has been set on volume m0 by using the vxprint command

11. list only the name, object type & putil2 flag of each VxVM object in your VxVM configuration using one vxprint command

12. establish what free space is available within the ddg diskgroup on the fred disk

13. make a subdisk on disk fred of 10 blocks in length within the free space determined in the previous step

14. use only one vxprint command to list all VxVM objects that are orphaned, i.e. they are not associated to a parent object

• tip - look at the -e option to vxprint


Lab - building volumes bottom up

1. remove all objects from the ddg diskgroup

2. in the ddg diskgroup, create a 10MB subdisk named bill

3. create another 10MB subdisk named ben on a different disk

4. create another 10MB subdisk named zebedee on a different disk

5. create another 10MB subdisk named dougal on a different disk

6. create a plex called arthur that contains the subdisks bill & ben in a concatenated configuration

7. create a plex called martha that contains the subdisks zebedee & dougal in a striped configuration with a 200KB interleave

8. create a volume, which will not hold a filesystem, called boing containing the arthur plex

9. start the boing volume

10. what is the size of the boing volume?

11. what is the usable size of the martha plex & why?

• tip - use vxprint -g ddg -l martha

12. add the martha plex to the boing volume to create a 2-way mirrored volume

• tip - you will need to alter the size of the volume to be the smallest usable size of all of the member plexes


Lab - bits & bobs

1. add a third plex to the boing volume in diskgroup ddg, creating a 3-way mirror

2. use vxprint to examine the new layout of the boing volume, what sort of plex did vxassist add to create the third plex, concatenated or striped?

3. break off the arthur plex from the boing volume

4. issue a vxprint -ht command & note the presentation of the arthur plex

5. did vxprint indicate an error because the arthur plex was an orphan, i.e. not associated with a volume?

6. create a volume called break to contain the orphaned arthur plex

• configure VxVM to know that this new volume will contain a filesystem

7. use vxvol -g ddg startall to start all the volumes in the ddg diskgroup

• does the break volume start?

• if not, why not?

8. use vxvol -g ddg start break to start the break volume in the ddg diskgroup

• does the break volume start?

• if so, why?

9. add another plex to the break volume creating a 2-way mirror

10. rename the boing volume to be called scratch

11. rename the break volume to be called boing

12. there is a minor number conflict between the ddg diskgroup & the odg diskgroup on another machine, change the minor number range of the ddg diskgroup on this machine to start at 131067

13. create the following 1MB volumes in the ddg diskgroup:-

• v0

• v1

• v2

• v3

14. what happened when you tried to create the volumes?

15. why did you get this error?

16. find the cause & fix for this error message in the VxVM documentation, or via any other support resource

17. change the minor numbers for the ddg diskgroup to start at 39000

18. determine how much free space there is in the ddg diskgroup's configuration database

19. create a new subdisk called zzz of 1 block in length in the ddg diskgroup

20. how much space in the ddg diskgroup's configuration did creating the zzz subdisk consume?


Course Labs Solutions

Lab solution - VxVM installation

1. pkgadd(1m) the VxVM packages

2. use the /usr/sbin/vxinstall script

• use the Custom installation option, not Automatic option

3. reboot the system

4. use the vxdisk list command

5. use the vxdg list command

6. use the vxdctl mode command

7. use the vxserial -p, vxlicense -p, /sbin/vxlicrep (depending on the VxVM version) or vxdctl license command (all versions)

8. /etc/vx/volboot should be 512 bytes in size

9. use the vxdiskadm menu driven utility or the following commands:-

• /etc/vx/bin/vxdisksetup -i cXtYdZ

• one command for each disk to be added

• vxdg init ddg ddg01=cXtYdZ

• vxdg -g ddg adddisk ddg02=cXtYd(Z+1)

• one command for each disk to be added

10. use the vxdisk list command

11. use the vxdg list command

12. use the vxdisk list diskname command to examine the information


Lab solution - diskgroups

1. use vxdg init test test01=cXtYdZ

2. use vxassist -g test make v0 10m

3. the disks appear as online & have a name (DISK) and diskgroup (GROUP) assigned

4. use chown sys:sys /dev/vx/rdsk/test/v0

5. use ls -l /dev/vx/rdsk/test/v0

6. use vxdg deport test

7. the disks appear as online but do not have any name (DISK) or diskgroup (GROUP) assigned

8. the disks appear as online but do not have a name (DISK) assigned, the diskgroup (GROUP) name however appears in parentheses

9. use vxdg import test

10. ownership & group has reverted to root:root

• because the permissions of volumes are stored in the diskgroup's configuration

• we used chown(1) not vxedit(1m) to update the ownership/group

• the changes are not retained across reboots or deport/import operations unless vxedit(1m) is used

11. having problems?

• take a look at the vxprint -ht output for your volume, what do you notice about the volume state?

• you need to start your volumes manually when doing an import

• use vxvol -g test startall

• your newfs /dev/vx/rdsk/test/v0 should work fine now

12. use vxdg deport test

13. use vxdg -n toast import test

14. use vxvol -g toast startall

15. use vxdg destroy toast

• did you notice how VxVM didn't care that you had started volumes in the diskgroup when it blew the diskgroup away?

16. the disks appear as online but do not have any name (DISK) or diskgroup (GROUP) assigned, the diskgroup appears to no longer exist

• it is possible to recover the diskgroup as long as the diskgroup's resources are not re-tasked

• the diskgroup can be recovered by importing the diskgroup using its diskgroup ID


Lab solution - building volumes

1. use vxassist -g ddg make v0 10m

2. use vxprint -ht

3. use vxassist -g ddg mirror v0

4. use vxprint -ht

5. VxVM implements mirroring by creating an additional plex within the volume

6. use vxassist -g ddg make v1 10m nmirror=2

7. use vxassist -g ddg make r0 10m layout=raid5

• by default you will require a minimum of four (4) disks to create a RAID-5 volume

• it is possible that you do not have enough disks to spread the default number of columns, three (3), & the (separate) log across
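As a back-of-the-envelope check of the "four (4) disks" figure, a sketch assuming the default layout of three columns plus one dedicated log disk (the variable names are illustrative, not VxVM terms):

```python
# Minimum disks for the default vxassist RAID-5 layout described above.
# Assumption: 3 data/parity columns, each on its own disk, plus a
# separate disk for the RAID-5 log.
ncols = 3       # default number of RAID-5 columns
log_disks = 1   # the log must not share a disk with a column
min_disks = ncols + log_disks
print(min_disks)  # -> 4
```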

8. use vxassist -g ddg addlog r0

• again, you may not have a disk with free space which is not already in use by the volume you are attempting to attach a log to

9. use /etc/vx/bin/vxr5check -v -g ddg r0

10. you need to stop the volume, set it to state EMPTY & restart the volume

• NB - this process does not harm the volume's data

• use vxvol -g ddg stop r0

• use vxmend -g ddg fix empty r0

• use vxvol -g ddg start r0

11. use vxassist -g ddg make v2 10m layout=stripe nstripe=3 nmirror=2

12. use vxassist -g ddg make p0 2g layout=stripe nstripe=3 nmirror=2

• notice how VxVM defaults to building a RAID 1+0 volume

• about time too as SDS has had this functionality for ages

13. VxVM has created, without being specifically requested to, a layered (aka professional) volume

• p0 is a RAID 1+0 volume whereas v2 is a RAID 0+1 volume


Lab solution - Hot Sparing & Hot Relocation

1. there are several steps:-

• check what is enabled in the /etc/rc2.d/S95vxvm-recover VxVM startup file

• check ps -aef output for which VxVM recovery daemon is running

• if vxrelocd appears then the system is running Hot Relocation

• if vxsparecheck appears then the system is running Hot Sparing

• if using Hot Sparing check that each diskgroup has a disk flagged as being spare

2. use prtvtoc /dev/rdsk/cXtYdZs2 >vtoc.cXtYdZs2

3. use format(1m) or fmthard(1m) to remove the slice tagged 14 (usually slice 4)

4. use newfs /dev/vx/rdsk/ddg/volume

5. you should see messages to the console & in /var/adm/messages for:-

• a plex write error

• a plex detach notice (on mirrors)

• a subdisk failure message

• relocation/sparing messages in root e-mail

• NB you may get a panic if there was a RAID-5 volume that took an I/O error ... this is not normal & is a bug, I have yet to find the BugID# (if there is one) so please feel free to update me if you run across it

6. depending on the version of VxVM you may find:-

1. these subdisks were not relocated

• an I/O is required to fail before a subdisk is relocated

• if Hot Sparing is in use, the entire disk contents are evacuated to a spare disk

2. these subdisks were relocated

• only a single I/O to any resident subdisk is required to fail before all resident subdisks are relocated

• though this functionality could be because of the type of failure simulation used

3. Veritas has an annoying habit of changing this sort of functionality without documenting it anywhere, especially somewhere obvious like the Release Notes :-(

7. the volume & plexes should have returned to optimal state, usually this is ENABLED & ACTIVE

8. the disk should now be in a state of FAILING

• beware that some VxVM versions will only show this state flag in vxdisk list output & not in vxprint -ht output

• also beware that some VxVM versions will only show this state flag in vxprint -ht output & not in vxdisk list output ... just to keep you on your toes ;-)

• in some cases the disk could have completely failed & be in a failed was: state

9. use vxdiskadm & perform the following steps:-

• 4. Remove a disk for replacement

• 5. Replace a failed or removed disk

• if this doesn't work, you will have to remove the private region slice, tag# 15, from the disk's VTOC


• NB there may be a bug in some versions of VxVM where you may have to run vxrecover manually, it seems if any relocation operation fails in the diskgroup vxrecover is not run automatically ... this is not normal functionality & is a bug, I have yet to find the BugID# (if there is one) so please feel free to update me if you run across it

10. use /etc/vx/bin/vxunreloc -g ddg diskname (where diskname is the name of the disk that VxVM Hot Relocated from, i.e. the failing/failed disk)

• there is no equivalent for Hot Sparing, probably vxevac is the closest thing

11. the most obvious thing about the subdisks that have been unrelocated is that the subdisk names have changed

• you may also notice that log plexes are recreated & obtain new names accordingly, there is no reversion done on the log plex names


Lab solution - changing volume layout

1. use vxassist -g ddg relayout v0 layout=raid5 nstripe=3

2. use vxassist -g ddg relayout v0 layout=mirror-concat nmirror=2 to create a simple concatenated volume

• using vxassist -g ddg relayout v0 layout=concat-mirror nmirror=2 create a layered concatenated volume

• VxVM 3.1.1 appears to have a bug in it that makes it relayout to a RAID 1+0 (concat-mirror) volume even when a RAID 0+1 (mirror-concat) volume is specified

3. use vxassist -g ddg convert v0 layout=mirror-concat to convert from RAID 1+0 to RAID 0+1 or vxassist -g ddg convert v0 layout=concat-mirror

4. the time taken to convert is fixed & relatively short whereas the time taken to relayout is proportional to the conversion being undertaken

5. the time to convert is (relatively) fixed because no data is being moved, the only thing that is being changed is the way in which the data is being accessed by VxVM

6. use vxassist -g ddg make t0 500m

7. use time vxassist -g ddg mirror t0

8. time taken is different, it may be higher it may be lower

• the iosize & slow options can be used to great effect in speeding up resync operations but their effectiveness varies significantly in different VxVM versions

• rule of thumb - the older the VxVM version, the more it's worth playing with the iosize & slow options

9. use vxassist -g ddg make p1 2g layout=stripe-mirror nstripe=3 nmirror=2

10. use vxvol -g ddg stop p1

• the sub-volumes remain ENABLED & ACTIVE ... this is a known “feature”

11. use vxassist -g ddg remove volume p1

• all the volume's sub-volumes are removed also

• NB not all VxVM versions support vxassist remove functionality, older VxVM versions will require you to stop the volume & its sub-volumes & use vxedit -g ddg -r rm p1


Lab solution - snapshots

1. use vxassist -g ddg snapstart v2

2. an additional plex has been added to the volume

3. the snap plex is write-only

• note the WO mode of the plex in the vxprint -ht output

4. the same size (cylinder alignment not withstanding) as the parent volume

• it's essentially a “write-only” third mirror, hence it has to be the same size

5. use vxassist -g ddg snapabort v2

6. the snap plex has been detached & its component objects destroyed

7. use vxassist -g ddg snapstart v2

8. use vxassist -g ddg snapshot v2

9. the SNAP-v2 volume has been created

10. none, it is now a completely independent volume


Lab solution - configuration recovery

1. use vxprint -ht >ht.save

2. use vxprint -g ddg -vpshm >ddg.vxmake

3. use ...

for v in `vxprint -g ddg -vF %name`
do
    vxvol -g ddg stop $v
    vxassist -g ddg remove volume $v
done
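A defensive variant of the loop above (a sketch; the DRYRUN wrapper and the hard-coded volume names are illustrative, not part of the original commands) prints every destructive command before anything actually runs:

```shell
#!/bin/sh
# Dry-run wrapper around the stop/remove loop: with DRYRUN=echo each
# command is printed instead of executed; set DRYRUN= to run for real.
DRYRUN=echo
for v in v0 v1 v2    # in practice: `vxprint -g ddg -vF %name`
do
    $DRYRUN vxvol -g ddg stop $v
    $DRYRUN vxassist -g ddg remove volume $v
done
```

Reviewing the printed commands first is cheap insurance before an operation that removes every volume in the diskgroup.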

4. use vxprint -ht to show only the disks remain in the ddg diskgroup

5. use vxmake -g ddg -d ddg.vxmake

• the volumes will not be started yet & this will cause the comparison to fail

6. use vxprint -ht >ht.new

7. use diff ht.new ht.save

• no output from diff(1) means the files match exactly, hence the configurations match exactly
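The comparison leans on diff(1) printing nothing, and exiting 0, only for identical files; a throwaway illustration with fake files (the sample content is made up, not real vxprint output):

```shell
#!/bin/sh
# diff(1) is silent and exits 0 only when the two files are identical,
# which is what makes it a usable configuration-comparison test.
printf 'v0 ENABLED ACTIVE\n' > ht.save
printf 'v0 ENABLED ACTIVE\n' > ht.new
if diff ht.new ht.save >/dev/null
then
    echo "configurations match"
else
    echo "configurations differ"
fi
rm -f ht.save ht.new
```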

8. they should match

• if they don't match, perhaps you have forgotten to start the volumes?

9. if they do not match, you have probably only forgotten to start the volumes

10. if the only difference in the configurations is the volumes aren't started, then use

for v in `vxprint -g ddg -vF %name`
do
    vxvol -g ddg start $v &
done

• processes are started in the background as vxvol normally does not return until the volume is started

• you can use vxtask to monitor the progress of the resynchronisations

• NB if you are 100%, totally sure, can absolutely guarantee that all a volume's plexes are in sync, you can use vxvol -g ddg init active $v in place of the vxvol start

• this is much quicker as no resyncs are performed but no consistency checking is done by VxVM & you run the risk of making mismatched mirrors available to the system
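The reason for the trailing & in the start loop can be demonstrated with any long-running command (sleep stands in for the vxvol start here; the volume names are hypothetical):

```shell
#!/bin/sh
# Each start runs in the background, then wait blocks until all of
# them finish -- the same pattern as backgrounding the vxvol starts.
for v in v0 v1 v2
do
    sleep 1 &    # stand-in for: vxvol -g ddg start $v &
done
wait
echo "all volumes started"
```

Without the & the starts would run one after another, and the total wall time would be the sum of every resynchronisation rather than (roughly) the longest one.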

11. we didn't destroy or recover the actual VxVM disks in the ddg diskgroup

12. it is an important point to note because there is no magic vxmake command to recreate the VxVM disks

• NB for the vxmake recovery of the VxVM objects to work, the VxVM disks must have exactly the same names as when the configuration was saved


Lab solution - root disk encapsulation

1. use format to check the disk sizes

2. use the vxedit -g rootdg rename existing rootmir command

3. the easiest way to do this is via the vxdiskadm utility

• use option 2 Encapsulate one or more disks

4. reboot as instructed

5. VxVM should have inserted a device alias for the boot disk

• something like devalias vx-rootpri /path_to_disk

6. it should be set to false

• what we are trying to show here is that although VxVM sets up devalias entries it does not set the use-nvramrc? variable to true to allow their use :-(

• if the use-nvramrc? variable was already true, it was probably already set that way, i.e. VxVM did not do it

7. the easiest way to do this is via the vxdiskadm utility

• use option 6 Mirror volumes on a disk

8. a new devalias should have been added for the rootmir disk

9. use eeprom "use-nvramrc?=true"

10. use eeprom "boot-device" to check the setting

• use eeprom "boot-device=vx-rootpri vx-rootmir" to set correctly if necessary

11. use eeprom "auto-boot?=false"

12. use init 0

13. use boot vx-rootmir

14. the disks are probably undergoing a resynchronisation

• NB the disks may not necessarily be resynchronising in the direction you would expect, i.e. mirror boot disk to primary boot disk (in this case)


Lab solution - root disk recovery

1. save a copy of the disk's VTOC using the prtvtoc command

2. use format or prtvtoc & fmthard to remove all but slice 2

3. surely you don't need help doing this one!

4. check the vxprint output for rootdg & see if any VxVM disks other than rootpri & rootmir are in use in rootdg

• if so ...

• vxrelocd was running & there was spare, eligible space within the rootdg diskgroup to relocate to

• if not ...

• vxrelocd was running & there was no spare, eligible space within the rootdg diskgroup to relocate to

• vxrelocd was not running

5. look up the documentation on SunSolve or use the supplied copy in the course handouts

6. all of the sizes of the partitions should have remained the same but their numbering & location on disk may be completely different


Lab solution - moving volumes between diskgroups

1. bummer!

2. use:-

• unmount any filesystems mounted on volumes in the ddg diskgroup

• remove swap from any volumes in the ddg diskgroup

• vxvol -g ddg stopall

• remove volumes recursively, either of the following two steps but not both

• for each volume vxedit -g ddg -r rm volname

• for each volume vxassist -g ddg remove volume volname

3. use vxassist -g ddg make v0 10m nmirror=2

4. use newfs /dev/vx/rdsk/ddg/v0

5. use:-

• mount -F ufs /dev/vx/dsk/ddg/v0 /mnt

• cp /etc/passwd /mnt

6. use umount /mnt

7. use:-

• vxassist -g ddg snapstart v0

• vxassist -g ddg snapwait v0

• vxassist -o name=s0 -g ddg snapshot v0

8. use mount -F ufs /dev/vx/dsk/ddg/s0 /mnt

9. use umount /mnt

10. assuming volume s0 resides on disk ddg03 (c1t1d0), use:-

• vxprint -g ddg -vpshm s0 >/var/tmp/s0.cfg

• vxassist -g ddg remove volume s0

• vxdg -g ddg rmdisk ddg03

• vxdg init edg ddg03=c1t1d0

• vxmake -g edg -d /var/tmp/s0.cfg

• vxvol -g edg start s0

11. use mount -F ufs /dev/vx/dsk/edg/s0 /mnt


Lab solution - changing objects

1. use vxedit -g ddg rename v0 m0

2. no, all other objects' names remained unchanged

3. use vxedit -g ddg rename diskname fred

4. no, all other objects' names remained unchanged

5. use vxvol -g ddg stop m0 to stop the volume, then use vxvol -f -g ddg set len=10240 m0 to change the length to be 5MB

6. use vxvol -g ddg start m0 to start the volume, then use vxassist -g ddg growto m0 20m or use vxassist -g ddg growby m0 15m (don't forget that we changed the volume's size to be 5MB)

7. use vxassist -g ddg growby m0 10m diskX diskY where diskX & diskY are not the disks that volume m0 currently resides on

8. use vxassist -g ddg move m0 ! diskX

9. use vxedit -g ddg set putil2="my volume" m0

10. use vxprint -g ddg -F "name=%name putil2=%putil2" m0

11. use vxprint -Aht -F "name=%name type=%type putil2=%putil2"

12. use vxdg -g ddg free fred

13. use vxmake -g ddg sd barney len=10 offset=N disk=fred

14. use vxprint -Ae "! assoc"


Lab solution - building volumes bottom up

1. use:-

• vxvol -g ddg stopall

• vxassist -g ddg remove volume m0

• vxedit -g ddg rm barney

2. use vxmake -g ddg sd bill offset=X len=10m diskY

3. use vxmake -g ddg sd ben offset=A len=10m diskB

4. use vxmake -g ddg sd zebedee offset=C len=10m diskD

5. use vxmake -g ddg sd dougal offset=E len=10m diskF

6. use vxmake -g ddg plex arthur sd=bill,ben layout=concat

7. use vxmake -g ddg plex martha sd=zebedee,dougal layout=stripe stwidth=200k

8. use vxmake -g ddg -U gen vol boing plex=arthur

9. use vxvol -g ddg start boing

10. volume boing is 40960 blocks

11. the usable plex length is whatever value the contiglen field shows in the vxprint -l output

• in my test this value was 40880 blocks

• the plex is "short" because the usable length of striped & RAID-5 plexes must correlate to a multiple of the stripe width of the plex

• NB concatenated plexes have no such restrictions
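The rounding behind the "short" plex can be sketched as follows (the numbers are illustrative; they do not try to reproduce the 40880-block figure above, which depends on the exact subdisk geometry in your configuration):

```python
# Usable length of a striped plex: each column contributes only whole
# stripe units, so the column length is rounded down to a multiple of
# the stripe width before being summed across the columns.

def usable_stripe_len(col_len, ncols, stwidth):
    """All lengths in 512-byte blocks."""
    full_units = col_len // stwidth        # whole stripe units per column
    return full_units * stwidth * ncols

# Two 20480-block (10 MB) columns with a 400-block (200k) stripe width:
print(usable_stripe_len(20480, 2, 400))   # -> 40800, not the full 40960
```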

12. use:-

• vxvol -f -g ddg set len=40880 boing

• vxplex -g ddg att boing martha


Lab solution - bits & bobs

1. use vxassist -g ddg mirror boing

2. when I tested this, vxassist added a concatenated plex

• basically what I am trying to show here is that unless you explicitly ask for a certain type of plex to be created & attached vxassist will take a best guess

3. use vxplex -g ddg dis arthur

4. use vxprint -ht

5. no error message should have been observed as vxprint is not designed to sanity check a configuration

• vast amounts of disk space can disappear from configurations in this way

6. use vxmake -g ddg -U fsgen vol break plex=arthur

7. the break volume should not start, this is because, from the vxvol man page, "This operation will not start uninitialized volumes"

8. the break volume should start with this command as the volume was explicitly specified

9. use vxassist -g ddg mirror break

10. use vxedit -g ddg rename boing scratch

11. use vxedit -g ddg rename break boing

12. use vxdg -g ddg reminor 131067

13. use vxassist -g ddg make vN 1m for each of v0 .. v3

14. you should have received a "Too many volumes" error

15. well, in this case it was because I got you to renumber your diskgroup towards the very upper limit of available minor numbers & VxVM ran out of minor numbers to assign to your new volume

16. SunSolve provides a couple of possibilities but essentially they're all red herrings, in this case anyway ... welcome to the world of VxVM :-)
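A rough sketch of the arithmetic behind steps 12-16 (the 131071 ceiling is an assumption about the minor-number space on the Solaris releases concerned, not a figure from these notes):

```python
# Why reminoring ddg to 131067 leaves almost no room for new volumes.
# Assumption: device minor numbers top out at 131071 (2**17 - 1).
MAX_MINOR = 131071

def minors_available(base):
    """Minor numbers usable by a diskgroup reminored to start at base."""
    return max(0, MAX_MINOR - base + 1)

print(minors_available(131067))  # -> 5: barely a handful of volumes
print(minors_available(39000))   # -> 92072: plenty after step 17
```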

17. use vxdg -g ddg reminor 39000

18. use vxdg list ddg & note the free versus permlen figures

19. use vxmake -g ddg sd zzz len=1 offset=0

20. it will vary, as not all objects consume 1 block in the configuration database, extra blocks may not be required as there may be enough free space in the existing number of blocks in use to store the new object's configuration

# eof


Evaluation Form (type 1)

Course Date: _____________________________ Location: __________________________

• By attending this course I learned:-

• a lot

• a fair bit

• a bit

• not enough to justify 3 days out of my schedule

• This course could be improved by:-

• the course is OK as it is

• additions:- _______________________________________________________________

• deletions:- _______________________________________________________________

• The duration of this course is:-

• too long

• a little long

• about right

• a little short

• too short to cover the topics in the detail I require

• The labs in this course:-

• were very useful

• were OK

• could do with some additions (specify)

_______________________________________________________________

• could do with some improvement (specify)

_______________________________________________________________

• were totally useless

• I would recommend this course to:-

• anybody

• specific audiences (specify)

_______________________________________________________________

• nobody


Evaluation Form (type 2)

Course Date: _____________________________ Location: __________________________

Name (optional): ___________________________________________________

Comments (optional):

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________

__________________________________________________________________________________


Troubleshooting

• reminor a diskgroup to be very high

• will not be able to create new volumes

• ask to determine problem & resolve

• put swap volumes in non-rootdg diskgroup

• note warning messages on boot

• touch install-db file & reboot

• /etc/vx/reconfig.d/state.d/install-db

• touch upgrade file

• /VXVM3.0-UPGRADE/.start_runed

• remove vxdmp forceload from /etc/system

• only a problem on encapsulated root disks

• deport a diskgroup to different host

• remove public region from a disk

• remove public & private region from a disk

• remove slice 2 from a disk

• kill vxconfigd with -9

• try doing a vxprint

• try making any config changes

• change vxconfigd mode to disabled

• change hostid of diskgroup on deport

• 2 diskgroups with same name when deported

• change VxVM hostid in volboot

• does it boot

• do non-rootdg diskgroups import

• remove volboot file

• change the size of the volboot file

• change VTOC of rootdg disk

• remove /var/spool/locks directory

• try running vxdiskadm

• vxconfigd

• run vxsparecheck & ask why VxVM is not relocating on failure

• kill vxrelocd & ask why VxVM is not relocating on failure

• make volume 10mb & plex 100mb & ask to spot mistake & fix

• configure RAID5 volume & put the volume in degraded mode & halt system

• why won't volume auto-start?

• make a volume longer than its component plexes


• may not work in VxVM >= 3.0

• make both plexes of a mirror stale

• give both diskgroups the same minor ids

• 2 rootdg diskgroups with same hostid

• set failing flag on 1 disk & ask to determine why vxassist fails

• set reserved flag on 1 disk & ask to determine why vxassist fails

• put nmirror=30 layout=mirror in /etc/default/vxassist

• fix a disk in NODEVICE state

• fail a disk, relocate & unrelocate

• use vxinfosave to rebuild a diskgroup
