
3 Ways to Unencapsulate



3 ways to unencapsulate (including manual unencapsulation)

Article: TECH157465 | Created: 2011-04-05 | Updated: 2011-11-23
Article URL: http://www.symantec.com/docs/TECH157465

Article Type: Technical Solution


Issue

This document describes 3 ways to unencapsulate the root disk.

Solution

There are 3 ways of unencapsulating the root disk:

[1.] Unencapsulating the simple way via 'vxunroot'
     (/usr/lib/vxvm/bin/vxunroot).
[2.] Unencapsulating manually while Veritas Volume Manager
     (VxVM) is up and running, using either the 'vxva'
     GUI or the 'vxedit' CLI.
[3.] Unencapsulating manually while booted from cdrom.

    NOTE: Options [1.] and [2.] require that you be booted from
          either the root disk or the mirror disk and that VxVM
          be up and running at that time.
          Because they require VxVM to be up and running, there
          may be instances when options [1.] and [2.] are not
          feasible, in which case option [3.] is the best choice.
          Below is a step-by-step procedure for unencapsulating
          the root disk manually while booted from cdrom.

=-=-=-=-=-=-=-=-=
STEPS TO FOLLOW:
=-=-=-=-=-=-=-=-=

1.  Boot the system from your Solaris CD media in your cdrom drive.
    First, shut down the system to get to the Open Boot PROM level.

    # /usr/sbin/shutdown -y -g0 -i0

    At The OBP (Open Boot PROM) level issue the following command:

    ok boot cdrom -sw

       where "-s" means single-user mode and "-w",
       writeable mode.

        

2.  As an added precaution, if you feel you need to back up the
    data from your boot drive partitions, you may do so for the
    so-called "big-4" partitions: /, swap, /usr and /var.
    (Since swap is a tmpfs and not a ufs, you do NOT need to
    back it up.) This backup is NOT strictly necessary, since
    you would essentially be backing up the same data that is on
    the Veritas volumes (created when you encapsulated the boot
    drive). If you were to restore this data, you would STILL
    have to go thru this same process of unencapsulating and
    re-encapsulating your boot drive.

    Why only the big-4 partitions, you ask, and not
    "data/application" partitions? Precisely because when you
    encapsulate a root disk, the Solaris hard partitions for the
    "big-4" partitions ALWAYS get retained even after the
    encapsulation process; the other "non-big-4" partitions get
    removed, and Volume Manager then assigns its own 2 partitions -
    the "private" and "public" regions (usually slices 3 and 4,
    BUT this really depends on the free or available partitions
    prior to the encapsulation process).
    For non-root disks, Volume Manager retains nothing of the
    Solaris partitioning scheme and instead again creates its own
    2 partitions - the "private" and "public" regions - and blows
    away all other partitions. When I say "blows away all other
    partitions" I don't necessarily mean that data gets lost or
    moved. In both instances DATA NEVER GETS MOVED or LOST!
    This will be discussed in detail later on.

    While still booted from cdrom, do the following:

    # mt -f /dev/rmt/0n status
    # ufsdump 0f /dev/rmt/0n /dev/rdsk/c#t#d#s0
      (example slice for /)
    # ufsdump 0f /dev/rmt/0n /dev/rdsk/c#t#d#s5
      (example slice for /var)
    # ufsdump 0f /dev/rmt/0n /dev/rdsk/c#t#d#s6
      (example slice for /usr)

3.  Fsck the root filesystem before mounting it.

    # fsck /dev/rdsk/c#t#d#s0

      where c#t#d#s0 is your root partition,
      say, c0t0d0s0 as an example.


4.  Mount the root filesystem on /a.

    # mount /dev/dsk/c#t#d#s0 /a

      where c#t#d#s0 is still your root partition,
      say, c0t0d0s0 as an example.

    

5.  Make copies of your /a/etc/system & /a/etc/vfstab files so that
    you have backups to reference or revert to in the event that the
    edits you are about to make need to be undone.

    # cd /a/etc
    # cp system system.ORIG
    # cp vfstab vfstab.ORIG

6.  Edit your /a/etc/system & /a/etc/vfstab files back to their original
    states prior to the encapsulation process.

    You will find a /a/etc/vfstab.prevm file. This is your original
    vfstab file before Veritas Volume Manager (VxVM) took control of
    your boot drive. Use this file to rebuild your current vfstab file
    so that it reflects the Solaris hard partitions for the boot slices
    (/, swap, /usr, /var, /opt as examples), rather than the Volume
    Manager volumes.

    # cp /a/etc/vfstab.prevm /a/etc/vfstab

    NOTE: Make sure that the vfstab file reflects the old Solaris hard
          partitions for the boot disk. Ensure that there are NO
          "/dev/vx" devices at this point within this file. If there are
          any "/dev/vx" devices, comment them out with a "#".

    Edit your /a/etc/system file and remove the 2 entries listed below:

    rootdev:/pseudo/vxio@0:0
    set vxio:vol_rootdev_is_volume=1

    # vi /a/etc/system

      * vxvm_START (do not remove)
      forceload: drv/vxdmp
      forceload: drv/vxio
      forceload: drv/vxspec
      forceload: drv/sd
      forceload: drv/esp
      forceload: drv/dma
      forceload: drv/sbi
      forceload: drv/io-unit
      forceload: drv/sf
      forceload: drv/pln
      forceload: drv/soc
      forceload: drv/socal
      rootdev:/pseudo/vxio@0:0
      set vxio:vol_rootdev_is_volume=1
      * vxvm_END (do not remove)

    NOTE: REMOVE by deleting ONLY the following lines:

          rootdev:/pseudo/vxio@0:0
          set vxio:vol_rootdev_is_volume=1


          You MUST remove them and not just comment them
          out via an "*". Use the "dd" (delete line) function of
          the "vi" editor. Do NOT touch the other entries!
          These 2 entries will automatically be put back in
          once the root drive is re-encapsulated.
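    For reference, the same deletion can also be scripted rather than
    done interactively in vi. The following is only a sketch (not from
    the original procedure): it assumes the two entries appear exactly
    as shown above, and it operates on a sample copy instead of the
    real /a/etc/system.

    ```shell
    # Sketch: build a sample /etc/system fragment, then delete ONLY the
    # two rootdev/vxio entries, leaving the forceload lines untouched.
    cat > /tmp/system.sample <<'EOF'
    * vxvm_START (do not remove)
    forceload: drv/vxdmp
    forceload: drv/vxio
    rootdev:/pseudo/vxio@0:0
    set vxio:vol_rootdev_is_volume=1
    * vxvm_END (do not remove)
    EOF

    # Delete the two offending lines into a new copy (no in-place edit).
    sed -e '/^rootdev:\/pseudo\/vxio@0:0/d' \
        -e '/^set vxio:vol_rootdev_is_volume=1/d' \
        /tmp/system.sample > /tmp/system.edited

    remaining=$(wc -l < /tmp/system.edited)
    ```

    On the real file you would review /tmp/system.edited and only then
    copy it over /a/etc/system, keeping the .ORIG backup from step 5.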

7.  Since the root drive's big-4 partitions (/, swap, /usr, /var)
    ALWAYS get retained even after an encapsulation process, you
    will NEED to recover the hard partitions for the following:

    7.A. "Non-big-4" partitions that are part of the primary boot disk.
         (Example: /opt, /home, /export/home, assuming that these
         partitions all reside on the primary boot drive together with
         the big-4 partitions.) Even if the non-big-4 partitions
         are NOT needed for the boot-up process, you are at least
         ensuring that the next time you re-encapsulate this primary
         root disk, you will have the SAME exact partitioning scheme.

    7.B. "Non-rootdisk" partitions like data/application partitions.         (Example: /home, /apps, /oracle, /export/home all located         on a non-rootdisk, non-primary drive assuming that they          were ENCAPSULATED too!!!!). This however is NOT mandatory         and NOT required since for obvious reasons these partitions         are really NOT needed for the boot-up process under regular         Solaris hard partitions. What I am ensuring here is the fact          that you are at least ensuring reverting back to the original         hard partitions and know where the data previously was prior          to the encapsulation process. This ALSO works well in the         event that you split up your big-4 partitions on two separate         drives and encapsulated BOTH drives. Say, "/" and "/usr on         a drive called "c0t0d0" and "/var", "swap" and a data partition         called "/home" on a drive called "c0t1d0". Again, to be         identical for all drives, you recover ALL hard partitions         (big-4 obviously gets retained so no need to recover them)         even for non-primary, non-root disks.                  Why do the "big-4" partitions get retained when you encapsulate         the boot disk? Because the encapsulation binary/script looks         for a hardcoded tag and flag such as:                  PARTITION   TAG   MOUNT_POINT         0           2      /         2           5      backup  <--- NEVER TOUCHED AS THIS IS         1           3      swap         THE WHOLE DRIVE.         4           7      /var         6           4      /usr                 And since this is labeled for the root disk, when you encapsulate         it, these Solaris hard partitions get retained and DO NOT get         CHANGED even after they become defined under Veritas Volume Manager         volumes.        7.A. 
Instructions For Recovering Hard PartitionsFfor "Non-big-4"         Partitions On The Primary Boot Disk:         -----------------------------------------------------------             Refer to /a/etc/vfstab.prevm (which you've converted to be your         /a/etc/vfstab) and /a/etc/vx/reconfig.d/disk.d/c#t#d#/vtoc          files to figure out what filesystems were mounted on the original          boot drive, and how large each partition was or what the hard

Page 5: 3 Ways to Unencapsulate

         cylinder boundaries were for those partitions.

         NOTES: - "vfstab.prevm" contains your original "vfstab" file before                   VxVM took control of your boot disk.                - "vtoc" contains your original boot disk Solaris hard                   partition table before VxVM was used to encapsulate it.                - c#t#d# is your boot disk device name say c0t0d0 as                  a concrete example.                      To recover the hard partitions for your Non-big-4 partitions         on your primary boot disk you have 3 options:           --------     OPTION 1: Recovering A Single Slice/Partition Via 'fmthard -d':    --------             # /usr/sbin/fmthard -d part:tag:flag:start:size /dev/rdsk/c#t#d#s2         where:         <part> is the "SLICE" column in the "vtoc" file.         <tag> is the "TAG" column in the "vtoc" file,               stripping off the "0x" and retaining the last number only               on the right side. Example 0x2 = 2 for the "tag" field.         <flag> is the "FLAG" column in the "vtoc" file,                stripping off the "0x2" and retaining the last two                numbers on the right side. Example 0x201 = 01 for the                "flag" field.         <start> is  the number from the "START" column in the "vtoc"                 file corresponding to the starting sector number of                  the partition.         <size> is the number in the "SIZE" column in the "vtoc" file                corresponding to the number of sectors in the partition.

    Example:
    ========
    Here is an example of /a/etc/vx/reconfig.d/disk.d/c0t0d0/vtoc:

    #THE PARTITIONING OF /dev/rdsk/c0t0d0s2 IS AS FOLLOWS :
    #SLICE     TAG  FLAGS    START     SIZE
    0         0x2  0x200              0          66080
    1         0x3  0x201          66080          66080
    2         0x5  0x201              0        1044960
    3         0x0  0x200         132160         141120
    4         0x0  0x200         273280         564480
    5         0x0  0x200         837760         206080
    6         0x0  0x200              0              0
    7         0x0  0x200              0              0
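    As an aside, the tag/flag conversion described above (stripping the
    "0x" prefixes) can be sketched with a small awk helper. This helper
    is purely illustrative and not part of VxVM; it turns one vtoc line
    into the part:tag:flag:start:size string that 'fmthard -d' expects:

    ```shell
    # Sketch: convert a vtoc line (SLICE TAG FLAGS START SIZE) into the
    # part:tag:flag:start:size argument for 'fmthard -d'.
    # Sample input is the slice 4 line from the vtoc example above.
    arg=$(echo "4 0x0 0x200 273280 564480" | awk '{
        sub(/^0x/, "", $2)                  # TAG:   0x0   -> 0
        flag = substr($3, length($3) - 1)   # FLAGS: 0x200 -> 00 (last 2 digits)
        printf "%s:%s:%s:%s:%s\n", $1, $2, flag, $4, $5
    }')
    echo "$arg"   # -> 4:0:00:273280:564480
    ```

    The resulting string matches the fmthard argument used in the
    worked example that follows.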

    In this example, let's say /opt (a Non-big-4 partition) used
    to be on slice 4 (before being encapsulated) and needed to
    be recovered (obtaining the hard cylinder boundaries), so we do:

    # fmthard -d 4:0:00:273280:564480 /dev/rdsk/c0t0d0s2

    This will partition slice 4, with the default flag and tag,
    the starting block of 273280 in sectors and a total size
    of 564480 blocks (also in sectors).

    ---------
    OPTION 2: Recovering A Single Slice/Partition Via 'format' Utility:
    ---------


    Use Solaris' "format" utility to repartition and recover the hard
    partition for your Non-big-4 partition. The trick here is to
    translate the <start> and <size> numbers, which are in blocks
    (sectors), into cylinders so that they can be entered into the
    format -> partition utility as the starting cylinder and the
    partition size in cylinders.

    The formula is:

    <start> / <sectors/cylinder> = starting cylinder
    <size>  / <sectors/cylinder> = size in cylinders

    where:

    <start> is the number from the "START" column in the vtoc file.
    <size> is the number in the "SIZE" column in the vtoc file.
    <sectors/cylinder> is the value obtained via the 'prtvtoc' output.

    Example:
    ========
    # prtvtoc /dev/rdsk/c0t0d0s2

      /dev/rdsk/c0t0d0s2 partition map
      *
      * Dimensions:
      *     512 bytes/sector
      *      80 sectors/track
      *      19 tracks/cylinder
      *    1520 sectors/cylinder
      *    3500 cylinders
      *    2733 accessible cylinders

    In our example, the <sectors/cylinder> is 1520. The <start> is 273280
    and the <size> is 564480. So:

    273280 / 1520 = 179 cylinders (rounded down)
    564480 / 1520 = 371 cylinders (rounded down)

    Using the "format" utility you can then repartition it by entering
    the appropriate values:

    # format

      Choose the drive.

      format> pa
      partition> pr
      partition> 4
      Enter partition id tag: opt
      Enter partition permission flags: wm
      Enter new starting cylinder: 179
      Enter partition size: 371c

    ---------
    OPTION 3: Recovering The Entire Partition Table From The "vtoc"
    ---------  Of That Disk Via 'fmthard -s':

         To set/change the entire "vtoc" of the disk, use:

    # /usr/sbin/fmthard -s vtocfile /dev/rdsk/c#t#d#s2

         where:


         <vtocfile> is in the following format:

                    part:tag:flag:start:size

       Example:
       ========
       In our example of /a/etc/vx/reconfig.d/disk.d/c0t0d0/vtoc:

       #THE PARTITIONING OF /dev/rdsk/c0t0d0s2 IS AS FOLLOWS :
       #SLICE     TAG  FLAGS    START     SIZE
        0         0x2  0x200              0          66080
        1         0x3  0x201          66080          66080
        2         0x5  0x201              0        1044960
        3         0x0  0x200         132160         141120
        4         0x0  0x200         273280         564480
        5         0x0  0x200         837760         206080
        6         0x0  0x200              0              0
        7         0x0  0x200              0              0

       # cp /a/etc/vx/reconfig.d/disk.d/c0t0d0/vtoc /tmp/vtocfile
       # vi /tmp/vtocfile

       Edit the file and make it look like the following:

          0      2   00              0          66080
          1      3   01          66080          66080
          2      5   01              0        1044960
          3      0   00         132160         141120
          4      0   00         273280         564480
          5      0   00         837760         206080
          6      0   00              0              0
          7      0   00              0              0

       Then apply it to the disk:

       # /usr/sbin/fmthard -s /tmp/vtocfile /dev/rdsk/c0t0d0s2

    7.B. Instructions For Recovering Hard Partitions For "Non-big-4"
         Partitions On Non-rootdisk:
         -----------------------------------------------------------

         Same as Step 7.A. (you can use any of the three options
         listed), assuming that you reference the correct "vtoc" file
         for the particular drive. As long as the drive had been
         encapsulated, it should have a corresponding "vtoc" file in:

         /a/etc/vx/reconfig.d/disk.d/c#t#d#/vtoc

8.  Using the 'format' utility, remove the public & private region
    partitions of VxVM.

    You must zero out these slices (usually the tag is a "-" and
    the flag is "wu").

    First, verify whether VxVM's private and public region partitions
    are still there and intact via the 'prtvtoc' output.

    Make sure the pub/priv areas are there; prtvtoc shows their
    tag values as the numbers 14 and 15:
    14 is the public area.
    15 is the private area.

    As an example for a disk c3t2d0,

# prtvtoc /dev/rdsk/c3t2d0s2


    * /dev/rdsk/c3t2d0s2 partition map
    *
    * Dimensions:
    *     512 bytes/sector
    *      80 sectors/track
    *       7 tracks/cylinder
    *     560 sectors/cylinder
    *    2500 cylinders
    *    1866 accessible cylinders
    *
    * Flags:
    *   1: unmountable
    *  10: read-only
    *
    *                          First     Sector    Last
    * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
           2      5    01          0   1044960   1044959
           3     15    01          0      1120      1119
           4     14    01       1120   1043840   1044959

    Based on this output you can clearly see that tag 15 is VxVM's
    private region, which is located on slice 3, and tag 14 is VxVM's
    public region, which is located on slice 4.

    Now, using the 'format' utility, fix and zero out these slices.
    Again, as noted above, "format" usually lists the public and
    private regions with a "-" for the Tag and "wu" for the Flag.

    Example:
    ========

    # format

      Choose the disk c3t2d0.

      format> pa
      partition> pr

      Part      Tag    Flag     Cylinders        Size            Blocks
      3          -     wu       0 -    1        0.55MB    (2/0/0)    1120
      4          -     wu       2 - 1865      509.69MB    (1864/0/0) 1043840

      Make the Tag "unassigned" and the Flag "wm":

      partition> 3
      Enter partition id tag: unassigned
      Enter partition permission flags: wm
      Enter new starting cylinder: 0
      Enter partition size: 0

      partition> 4
      Enter partition id tag: unassigned
      Enter partition permission flags: wm
      Enter new starting cylinder: 0
      Enter partition size: 0

      It will NOW look like:

      Part      Tag    Flag     Cylinders        Size        Blocks
      3  unassigned    wm          0            (0/0/0)       0
      4  unassigned    wm          0            (0/0/0)       0

      You have now effectively removed VxVM's public and private       region partitions.
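      A quick scripted sanity check (a sketch, not part of the original
      procedure) can confirm that no slice still carries tag 14 or 15.
      The heredoc below stands in for live 'prtvtoc /dev/rdsk/c3t2d0s2'
      output after the fix; comment lines in prtvtoc output begin with "*":

      ```shell
      # Sketch: count data lines whose Tag column is 14 (public) or
      # 15 (private). After zeroing the slices, the count should be 0.
      leftover=$(awk '$1 != "*" && ($2 == 14 || $2 == 15)' <<'EOF' | wc -l
      *                          First     Sector    Last
      * Partition  Tag  Flags    Sector     Count    Sector
             2      5    01          0   1044960   1044959
      EOF
      )
      echo "$leftover"   # should print 0 when no VxVM regions remain
      ```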


9.  Create a /a/etc/vx/reconfig.d/state.d/install-db file. 

    # touch /a/etc/vx/reconfig.d/state.d/install-db

    WHY? Because IF the root disk contains mirrors and the system
    boots up, the mirrors will get resynced, corrupting the changes
    we just made. The VxVM rc scripts in /etc/init.d check for this
    file; if it exists, they prevent VxVM from starting and enabling
    itself.
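    The gatekeeping logic in those rc scripts amounts to a simple
    existence test on this file. The following is a paraphrased sketch
    (NOT the actual Symantec script), using a temporary directory in
    place of /etc/vx/reconfig.d/state.d:

    ```shell
    # Sketch: how an rc script can refuse to start VxVM when the
    # install-db marker file exists. A temp dir stands in for
    # /etc/vx/reconfig.d/state.d so this can run anywhere.
    statedir=$(mktemp -d)
    touch "$statedir/install-db"

    if [ -f "$statedir/install-db" ]; then
        vxvm_started=no    # rc script would exit here, VxVM stays down
    else
        vxvm_started=yes   # normal startup path
    fi
    echo "$vxvm_started"
    ```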

10. Reboot the system.

    # /usr/sbin/reboot

11. You will NOW be booted on regular Solaris hard partitions,
    with Veritas Volume Manager NOT running or disabled.
    Verify via:

    # df -k
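    As a scripted variant of this check (a sketch; the first column of
    'df -k' output is assumed to be the device, as on Solaris), count
    the filesystems still mounted from /dev/vx devices. The heredoc
    stands in for live 'df -k' output:

    ```shell
    # Sketch: count mounts coming from /dev/vx devices.
    # After a successful unencapsulation the count should be 0.
    vxmounts=$(awk '$1 ~ /^\/dev\/vx\//' <<'EOF' | wc -l
    Filesystem            kbytes    used   avail capacity  Mounted on
    /dev/dsk/c0t0d0s0      66080    30000   36080    46%   /
    /dev/dsk/c0t0d0s6     564480   200000  364480    36%   /usr
    EOF
    )
    ```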

    You should see that all your devices are being mounted
    as "/dev/dsk" as opposed to "/dev/vx/dsk".

    Remove the /etc/vx/reconfig.d/state.d/install-db file
    (and the "root-done" file, only if it exists!).

    # rm /etc/vx/reconfig.d/state.d/install-db
    # rm /etc/vx/reconfig.d/state.d/root-done

12. Determine which scenario you have for your "rootdg" disk group
    configuration.

    SCENARIO A. You created the primary and the mirror disks to be the ONLY                disks residing in the "rootdg" disk group while all other                disks and volumes reside in different disk groups like                 "datadg" for example.

    SCENARIO B. You created the primary and the mirror disks, including
                all other disks and volumes, to ALL be in the "rootdg"
                disk group.

    -----------
    SCENARIO A:
    -----------

    If Scenario A, you will need to run 'vxinstall' and choose
    to encapsulate your boot disk. In this scenario, since the only
    disks residing under rootdg are your boot and mirror drives, it
    is SAFE to run "vxinstall" and NOT worry about losing any data
    or non-root volumes. This will re-create your rootdg disk group,
    hence the need to be certain that the boot and mirror drives
    were the only disks in this disk group.
    The command is:


    12.A.1 Run "vxinstall".

           # /usr/sbin/vxinstall

           - Choose 'Custom Install'.
           - Choose 'Encapsulate Boot Drive c#t#d#'.
           - Choose to 'Leave other disks alone', as it will go
             thru all your other controllers as well, assuming
             that you did NOT create a '/etc/vx/disks.exclude'
             file, which essentially disallows VxVM from touching
             the disks defined within it.
           - Multiple reboots will then occur. Be sure you get
             to the point where it says: "The system must be
             shutdown... Shutdown now?"
             You can answer "yes" to this or choose to initiate an
             'init 6' or a 'reboot' command later on.

    12.A.2 Once the reboot occurs, what happens is:

           - /etc/vfstab will be updated to reflect "/dev/vx" devices
             defined under volumes for the boot partitions such as
             (/, swap, /var, /usr, /opt).
           - /etc/system will get back the 2 entries previously removed:
             rootdev:/pseudo/vxio@0:0
             set vxio:vol_rootdev_is_volume=1
           - /etc/vx/reconfig.d/state.d/install-db gets automatically
             removed.

    12.A.3 The system will now be booted with VxVM started and enabled.
           You can verify this via 'vxdctl mode'; the output should
           say 'enabled'.

           # vxdctl mode
             mode: enabled

    (See DETAILED steps discussed in a later step, Step 14.)

    -----------
    SCENARIO B:
    -----------

    If Scenario B, you will need to go thru the so-called "E-38"
    (also known as Appendix B.8.24 in the SEVM 2.6 Installation Guide)
    Volume Recovery Procedure below.

    E-38 Volume Recovery:
    =====================
    To recover your Volume Manager configuration, follow these steps:

    12.B.1. Remove the /etc/vx/reconfig.d/state.d/install-db file
            (and the "root-done" file if it exists).

            # rm -rf /etc/vx/reconfig.d/state.d/install-db
            # rm -rf /etc/vx/reconfig.d/state.d/root-done

    12.B.2. Start VxVM I/O daemons.

    # vxiod set 10

    12.B.3. Start the volume manager configuration daemon, vxconfigd             in disabled mode.


    # vxconfigd -m disable      

    12.B.4. Initialize the vxconfigd daemon.

    # vxdctl init

            NOTE: In some cases, if you happen to have re-installed
                  your VxVM packages on a particular host, or retained
                  the same packages on the same host BUT renamed the
                  hostname of the said host, you will have to run
                  'vxdctl init old_hostname', where
                  old_hostname is obtained from the 'hostid' field
                  of a 'vxprivutil list' output. Since this entails
                  recovering your existing disks/volumes under the
                  disk group called "rootdg" AND other disks/volumes
                  in other disk groups, the command is run only
                  once. A reboot will then cause your /etc/vx/volboot
                  file to match your new hostname, so there should be
                  no cause for alarm in reverting to your old hostname.
                  Compare the 'hostid' field in your /etc/vx/volboot
                  with your /etc/hostname. They MUST match. If they do
                  not, follow the example below.

        Explicit Example:
        =================

        Obtain the slice number of VxVM's private region partition.

        # prtvtoc /dev/rdsk/c3t2d0s2

          * /dev/rdsk/c3t2d0s2 partition map
          *
          * Dimensions:
          *     512 bytes/sector
          *      80 sectors/track
          *       7 tracks/cylinder
          *     560 sectors/cylinder
          *    2500 cylinders
          *    1866 accessible cylinders
          *
          * Flags:
          *   1: unmountable
          *  10: read-only
          *
          *                          First     Sector    Last
          * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
                 2      5    01          0   1044960   1044959
                 3     15    01          0      1120      1119
                 4     14    01       1120   1043840   1044959

        Tag 15 (VxVM private region) is on slice 3.

        # /usr/lib/vxvm/diag.d/vxprivutil list /dev/rdsk/c3t2d0s3

          diskid:  941127537.11942.lucky-sw
          group:   name=richdg id=942324120.12495.lucky-sw
          flags:   private autoimport
          hostid:  lucky-sw
          version: 2.1
          iosize:  512
          public:  slice=4 offset=0 len=1043840
          private: slice=3 offset=1 len=1119


        The 'hostid' field for this disk is "lucky-sw".
        Armed with this information, you will then run:

        # vxdctl init lucky-sw
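        The hostid comparison described in the NOTE above can be
        sketched as follows. This is illustrative only: the sample
        data mimics the 'vxprivutil list' output shown above rather
        than reading the real /etc/vx/volboot file, whose exact
        format may differ.

        ```shell
        # Sketch: pull the hostid field out of vxprivutil-style output
        # and compare it against the running hostname.
        volboot_hostid=$(awk '$1 == "hostid:" { print $2 }' <<'EOF'
        diskid:  941127537.11942.lucky-sw
        hostid:  lucky-sw
        EOF
        )
        current_hostname=$(hostname)
        if [ "$volboot_hostid" != "$current_hostname" ]; then
            echo "hostid mismatch: would run 'vxdctl init $volboot_hostid'"
        fi
        ```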

    12.B.5. Enable the vxconfigd daemon.

     # vxdctl enable

    12.B.6. Verify that vxconfigd is enabled.

    # vxdctl mode
      mode: enabled

    12.B.7 After performing the "E-38" procedure, verify that you see
           all your Volume Manager volumes as well as disk groups via:

    # vxprint -ht
    # vxdisk list

             OR you can open your 'vxva' GUI interface and look              at your volumes.

13. Perform "rootability" cleanup.

    In order to perform "rootability" cleanup, the VxVM daemons
    should be up and running. They however require that a default
    "rootdg" disk group and at least one disk be present within that
    disk group BEFORE they can be properly started/enabled!

    -----------
    SCENARIO A:
    -----------

    13.A. If SCENARIO A applies, you are essentially rebuilding
          your "rootdg" disk group, so there is NO need to clean up
          anything.

    13.A.1 Run "vxinstall".

           # /usr/sbin/vxinstall

           - Choose 'Custom Install'.
           - Choose 'Encapsulate Boot Drive c#t#d#'.
           - Choose to 'Leave other disks alone', as it will go
             thru all your other controllers as well, assuming
             that you did NOT create a '/etc/vx/disks.exclude'
             file, which essentially disallows VxVM from touching
             the disks defined within it.
           - Multiple reboots will then occur. Be sure you get
             to the point where it says: "The system must be
             shutdown... Shutdown now?"
             You can answer "yes" to this or choose to initiate an
             'init 6' or a 'reboot' command later on.

    13.A.2 Once the reboot occurs, what happens is:

           - /etc/vfstab will be updated to reflect "/dev/vx" devices
             defined under volumes for the boot partitions such as
             (/, swap, /var, /usr, /opt).
           - /etc/system will get back the 2 entries previously removed:
             rootdev:/pseudo/vxio@0:0


             set vxio:vol_rootdev_is_volume=1
           - /etc/vx/reconfig.d/state.d/install-db gets automatically
             removed.

    13.A.3 The system will now be booted with VxVM started and enabled.
           You can verify this via 'vxdctl mode'; the output should
           say 'enabled'.

           # vxdctl mode
             mode: enabled

    (See DETAILED steps in the next step, Step 14.)

    -----------
    SCENARIO B:
    -----------

    13.B. If SCENARIO B applies, you will have to perform a
          "rootability" cleanup.

    ----
    CLI:
    ----

    13.B.1 Verify whether "old" volumes for the root disk, from
           before the unencapsulation was done, still show up.

           # vxprint -ht

           Look for volumes that used to be mounted on filesystems
           that were reverted back to Solaris hard partitions. Be
           aware that since you are performing "unencapsulation"
           and reverting back to Solaris hard partitions, any
           volumes that used to house these filesystems become
           "hanging" or "unused" volumes that appear in your disk
           group BUT have no functionality at all and are not
           associated with any filesystem.

           Look for the following volumes (listed below are the
           most commonly used naming conventions for the root drive
           volumes):
           - "rootvol" volume, which contains the root filesystem.
           - "swapvol" volume, which contains the swap area.
           - "usr" volume, which contains the /usr filesystem.
           - "var" volume, which contains the /var filesystem.
           - "opt" volume, which contains the /opt filesystem.

    13.B.2 Recursively remove these root drive volumes.

           # vxedit -fr rm rootvol
           # vxedit -fr rm swapvol
           # vxedit -fr rm usr
           # vxedit -fr rm var
           # vxedit -fr rm opt

           NOTE: Repeat the 'vxedit -fr' for any root or non-root
                 disk volumes that were unencapsulated or reverted
                 back to Solaris hard partitions. The "-fr" means
                 forcibly and recursively remove them, since these
                 volumes become "hanging" volumes and will be
                 re-created afterwards via a "re-encapsulation"
                 process.

    13.B.3 IS OPTIONAL!!!


    13.B.3 You can choose to remove the root and mirror
           Volume Manager disks if you intend to start from
           scratch and have the ability to choose your own
           root and mirror drives. In doing so, you will
           have to RE-INITIALIZE the 2 drives and put them
           into the rootdg disk group once more.

           # vxdisk list

           Look for the Volume Manager disk names for the boot
           and mirror disks.

           In this example, let's say the primary boot drive
           is called "rootdisk" and the secondary mirror
           drive is called "disk01".

           Remove them via:

           # vxdg rmdisk rootdisk
           # vxdg rmdisk disk01

    ----
    GUI:
    ----

    13.B.1 Bring up the "vxva" GUI.

           # /opt/SUNWvxva/bin/vxva

    13.B.2 Click and choose the "rootdg" disk group icon.
           Choose all root disk volumes (the root disk's mirror is
           reflected by the root disk volume having 2 plexes, or a
           label "V" indicating that it's a volume).
           - Highlight the rootvol, swapvol, usr, var, opt volumes.
           Basic Ops -> Volume Operations -> Remove Volumes Recursively

           NOTE: Repeat this for any root or non-root disk volumes
                 that were unencapsulated or reverted back to Solaris
                 hard partitions. This forcibly and recursively removes
                 them, since these volumes become "hanging" volumes and
                 will be re-created afterwards via a "re-encapsulation"
                 process.

    13.B.3 IS OPTIONAL!!!

    13.B.3 You can choose to remove the root and mirror
           Volume Manager disks if you intend to start from
           scratch and have the ability to choose your own
           root and mirror drives. In doing so, you will
           have to RE-INITIALIZE the 2 drives and put them
           into the rootdg disk group once more.

           If you choose to remove the root and mirror
           Volume Manager disks, then do the following:
           - Highlight the root and mirror Volume Manager disks
             (label is a "D").
           Advanced Ops -> Disk Operations -> Remove

14. You have NOW SUCCESSFULLY UNENCAPSULATED your root disk!
    Congratulations. You may now re-encapsulate and re-mirror


    your root disk. Follow the steps below:

    SCENARIO A.

    In SCENARIO A., re-encapsulation of the boot disk is done via
    'vxinstall', as already discussed earlier. Here is the complete
    walk-thru once more:

    14.A.1 Run "vxinstall".
           # /usr/sbin/vxinstall
             - Choose 'Custom Install'.
             - Choose 'Encapsulate Boot Drive c#t#d#'.
             - Choose to 'Leave other disks alone', as it will go
               thru all your other controllers as well, assuming
               that you did NOT create an '/etc/vx/disks.exclude'
               file that disallows VxVM from touching the
               disks defined within this file.
             - Multiple reboots will then occur. Be sure you get
               to the point where it says: "The system must be
               shutdown... Shutdown now?"
               You can answer "yes" to this or choose to initiate an
               'init 6' or a 'reboot' command later on.

    14.A.2 Once the reboot occurs, the following happens:
           - /etc/vfstab is updated to reflect "/dev/vx" devices
             defined under volumes for the boot partitions such as
             (/, swap, /var, /usr, /opt).
           - /etc/system gets back the 2 entries previously removed:
             rootdev:/pseudo/vxio@0:0
             set vxio:vol_rootdev_is_volume=1
           - /etc/vx/reconfig.d/state.d/install-db gets automatically
             removed.

    14.A.3 The system will now be booted with VxVM started and enabled.
           You can verify this via 'vxdctl mode'; the output
           should say 'enabled'.
           # vxdctl mode
             Mode: enabled

    14.A.4 Verify that you see your other disk groups (aside from rootdg)
           and the volumes under them via 'vxprint -ht', 'vxdisk list', or
           the 'vxva' GUI.
           # vxprint -ht
             Look for the list of the disk groups, and the volumes and
             disks that were not part of the "rootdg" disk group.
             They should show up, including your existing "rootdg"
             disk group and its volumes and disks.

           # vxdisk list
             An example entry of a drive called "newdg03" that is part
             of a disk group called "newdg" and being seen properly:
             c3t1d0s2     sliced    newdg03      newdg        online

           # vxva
             Look at the GUI and you should see icons for your
             other disk groups. If you choose them, look at the
             volumes and disks being seen within that particular
             disk group.

    14.A.5 Since a vxinstall was run and we chose to encapsulate
           your boot disk, typically the mirror disk will show up
           as already being initialized but still not part of
           any disk group and having no Volume Manager disk name


           associated with it. For a better description, this is what
           it would look like for the mirror disk in a
           'vxdisk list' output:

           c#t#d#s2      sliced       -          -           online

           This indicates that 'c#t#d#s2' has been initialized but
           is not part of any disk group and has no Volume Manager
           disk name.

           Re-initialize it, or just add it back into the rootdg
           disk group.

           # vxdiskadm
             - Choose option #1 (Add or initialize one or more disks).
             - Choose the 'c#t#d#' for the mirror drive.
             - It might then tell you that the drive has already
               been initialized, and will ask whether to re-initialize.
               You can say either "yes" or "no" to this; just
               answer "yes" to re-initialize it.
             - It will then prompt you to enter your own Volume
               Manager disk name for this drive, or you can choose
               the default name of "diskXX" (for example, disk01)
               assigned by VxVM.

    14.A.6 Re-mirror your root disk to this re-initialized drive,
           which will now become your mirror disk, via:
           # vxdiskadm
             - Choose option #6 (Mirror volumes on a disk).
             - Follow the prompts again; it is "user-friendly"
               and will ask for the source drive and the
               destination/mirror drive. As an example, say a
               primary boot disk c0t0d0 called "rootdisk" is being
               mirrored to a secondary mirror disk called "disk01".
             - 'vxdiskadm option #6' will do all the mirroring
               for all the root disk volumes, which are the rootvol,
               swapvol, usr, var and opt volumes. You DO NOT need to
               do each volume manually and separately.
           This is the ease-of-use
           and simplicity of using the 'vxdiskadm' utility.

    14.A.7 Check to see if all other volumes from other disk groups
           (non-rootdg) are "started" or not via:
           # vxinfo
             - A vxinfo with no flags will list out "root disk"
               volumes only.
           # vxinfo -g {disk_group_name}
             where {disk_group_name} is "datadg" for example.
             Mode should be: "started".

           If not, start up all your volumes via:
           # vxvol startall
             OR individually start them up via:
           # vxvol start {volume_name}
             where {volume_name} is "vol01" for example.

    14.A.8 Lastly, edit your /etc/vfstab and put back all the mounts
           for these volumes' (/dev/vx) devices. You can manually
           mount them too if you want to verify the integrity of the
           data on these volumes, which SHOULD ALL be GOOD.
           - Edit vfstab using the "vi" editor:
             As an example, adding a particular volume back into this


             file:
             /dev/vx/dsk/newdg/vol01 /dev/vx/rdsk/newdg/vol01 /home  ufs  3  yes  -
             Do this for all other volumes, since the only entries you'll
             have in here will be for your root disk volumes.

    14.A.9 Re-mount all non-root-disk volumes (non-rootdg disk group).
           # mountall
             or manually mount them by hand one at a time via:
             FOR UFS:
           # mount /dev/vx/dsk/newdg/vol01 /home
             FOR VXFS:
           # mount -F vxfs /dev/vx/dsk/newdg/vol01 /home
             or reboot the host via:
           # reboot
             Rebooting one last time ensures that everything is
             in an "okay" state: "ENABLED ACTIVE" from a
             'vxprint -ht' perspective and GUI normalcy from
             the 'vxva' GUI.

           NOTE: In the event that you are unable to mount the
                 volumes even after starting them, ensure that
                 you run 'fsck' on the volumes.
                 Example:
                 FOR UFS:
                 # fsck /dev/vx/rdsk/newdg/vol01
                 FOR VXFS:
                 # fsck -F vxfs -o full,nolog /dev/vx/rdsk/newdg/vol01
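Typing the vfstab entry by hand invites field-separator mistakes, so it can help to generate the line with printf. A small sketch using the example names from this article (newdg, vol01, /home are illustrative only; substitute your own disk group, volume, and mount point):

```shell
# Sketch: emit a tab-separated /etc/vfstab line for a VxVM volume.
# dg, vol and mnt reuse this article's example names.
dg=newdg; vol=vol01; mnt=/home
printf '/dev/vx/dsk/%s/%s\t/dev/vx/rdsk/%s/%s\t%s\tufs\t3\tyes\t-\n' \
    "$dg" "$vol" "$dg" "$vol" "$mnt"
```

Redirect the output with '>>' onto the end of /etc/vfstab once you have checked it, rather than editing in place.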

    SCENARIO B.

    In SCENARIO B., re-encapsulation is done via the 'vxdiskadm'
    utility. (PLEASE REFERENCE INFODOC ID 15838 - HOW TO
    ENCAPSULATE BOOT DISK USING 'VXDISKADM').

    14.B.1 If you chose to REMOVE the Volume Manager disks
           as well (following the optional step, Step 13.B.3:
           say, "rootdisk" for the primary drive and "disk01"
           for the mirror, which were removed via 'vxdg rmdisk rootdisk'
           and 'vxdg rmdisk disk01'), then do the following:

           First, re-initialize and add the two drives back into the
           rootdg disk group. As an EXAMPLE again, let's say the
           primary drive c0t0d0, which will be called "rootdisk",
           and the mirror drive c0t1d0, which will be called "disk01",
           are to be initialized and added back into the rootdg disk
           group. As this is an "EXAMPLE-ONLY" basis, choose the
           appropriate c#t#d# and Volume Manager disk names, or use
           the defaults assigned by VxVM.

           # vxdiskadm
             - Choose option #1 (Add or initialize one or more disks)
               and initialize the root and mirror disks.
             - Follow the prompts: choose to initialize each c#t#d#,
               give their VxVM disk names, and tell vxdiskadm that
               they are to be added into the rootdg disk group.
               When in doubt which drives you want to initialize,
               always do a 'list' to list out the drives that can be
               selected. In our example, we should choose "c0t0d0"
               and name it "rootdisk" for the primary disk, and
               choose "c0t1d0" and name it "disk01" for the


               mirror disk.

    14.B.2 Verify that both disks now show up under the rootdg
           disk group in the 'vxdisk list' output.
           # vxdisk list
             EXAMPLE:
             Primary c0t0d0 called "rootdisk".
             Mirror  c0t1d0 called "disk01".
             OUTPUT WILL BE:
             c0t0d0s2   sliced   rootdisk  rootdg   online
             c0t1d0s2   sliced   disk01    rootdg   online
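On typical VxVM installs, Steps 14.B.1 and 14.B.2 can also be scripted rather than driven through vxdiskadm's menus, using 'vxdisksetup' and 'vxdg adddisk'. A hedged dry-run sketch with the article's example device names (verify the vxdisksetup path on your release before clearing the DRYRUN=echo guard); the verification step filters a saved copy of the expected 'vxdisk list' output via a here-document so the sketch is safe to run anywhere:

```shell
# Dry-run sketch: scripted alternative to vxdiskadm option #1.
# DRYRUN=echo prints each command instead of running it.
DRYRUN=echo
$DRYRUN /etc/vx/bin/vxdisksetup -i c0t0d0
$DRYRUN /etc/vx/bin/vxdisksetup -i c0t1d0
$DRYRUN vxdg -g rootdg adddisk rootdisk=c0t0d0
$DRYRUN vxdg -g rootdg adddisk disk01=c0t1d0

# Then confirm rootdg membership; the here-document stands in for the
# live 'vxdisk list' output.
awk '$4 == "rootdg" && $5 == "online" { print $3, "OK" }' <<'EOF'
c0t0d0s2   sliced   rootdisk  rootdg   online
c0t1d0s2   sliced   disk01    rootdg   online
EOF
```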

    NOTE: If the optional step, Step 13.B.3, was NOT done, then skip
          Steps 14.B.1 and 14.B.2 and instead follow Step 14.B.3!!!

    14.B.3 Re-encapsulate the root disk using the 'vxdiskadm'
           utility.
           # vxdiskadm
             - Choose option #2 (Encapsulate one or more disks)
               to re-encapsulate the root disk that was
               previously initialized.
             - Again, follow the prompts; they should be
               "user-friendly" and will direct you what to choose
               next.

    14.B.4 Re-mirror the root disk to the designated mirror disk.
             - Choose option #6 (Mirror volumes on a disk).
             - Follow the prompts again; it is "user-friendly"
               and will ask for the source drive and the
               destination/mirror drive. As an example, say a
               primary boot disk c0t0d0 called "rootdisk" is being
               mirrored to a secondary mirror disk called "disk01".
             - 'vxdiskadm option #6' will do all the mirroring
               for all the root disk volumes, which are the rootvol,
               swapvol, usr, var and opt volumes. You DO NOT need to
               do each volume manually and separately. This is the
               ease-of-use and simplicity of using the 'vxdiskadm'
               utility.

    14.B.5 Check to see if all other volumes from other disk groups
           (non-rootdg) are "started" or not via:
           # vxinfo
             - A vxinfo with no flags will list out "root disk"
               volumes only.
           # vxinfo -g {disk_group_name}
             where {disk_group_name} is "datadg" for example.
             Mode should be: "started".
           If the mode says "startable" or anything that
           doesn't specifically say "started", then start
           up all your volumes via:
           # vxvol startall
             OR individually start them up via:
           # vxvol start {volume_name}
             where {volume_name} is "vol01" for example.

    14.B.6 Lastly, edit your /etc/vfstab and put back all the mounts
           for these volumes' (/dev/vx) devices. You can manually
           mount them too if you want to verify the integrity of the
           data on these volumes, which SHOULD ALL be GOOD.


           - Edit the /etc/vfstab using the "vi" editor:
             As an example, adding a particular volume back into this
             file:
             /dev/vx/dsk/newdg/vol01 /dev/vx/rdsk/newdg/vol01 /home  ufs  3  yes  -
             Do this for all other volumes, since the only entries you'll
             have in here will be for your root disk volumes.

    14.B.7 Re-mount all non-root-disk volumes (non-rootdg disk group).
           # mountall
             or manually mount them by hand one at a time via:
             FOR UFS:
           # mount /dev/vx/dsk/newdg/vol01 /home
             FOR VXFS:
           # mount -F vxfs /dev/vx/dsk/newdg/vol01 /home
             or reboot the host via:
           # reboot
             Rebooting one last time ensures that everything is
             in an "okay" state: "ENABLED ACTIVE" from a
             'vxprint -ht' perspective and GUI normalcy from
             the 'vxva' GUI.

           NOTE: In the event that you are unable to mount the
                 volumes even after starting them, ensure that
                 you run 'fsck' on the volumes.
                 Example:
                 FOR UFS:
                 # fsck /dev/vx/rdsk/newdg/vol01
                 FOR VXFS:
                 # fsck -F vxfs -o full,nolog /dev/vx/rdsk/newdg/vol01
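The check-and-start pattern from Steps 14.A.7 and 14.B.5 can be scripted across every non-rootdg disk group at once. A dry-run sketch (the disk group names are placeholders; on a live host, feed the loop from the output of 'vxdg list', and clear the DRYRUN=echo guard to execute):

```shell
# Dry-run sketch: start all volumes in each non-rootdg disk group.
# The group names below are examples only; DRYRUN=echo prints each
# command instead of running it.
DRYRUN=echo
for dg in datadg newdg; do
    $DRYRUN vxvol -g "$dg" startall
done
```

After the loop, re-check each group with 'vxinfo -g {disk_group_name}' to confirm every volume reports "started".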

ENCAPSULATION PROCESS FOR A ROOT AND NON-ROOT DISK EXPLAINED:
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Encapsulating the Root Disk:
============================

Why encapsulate a root disk? Two reasons:
1) To mirror it.
2) To have a disk in rootdg.

What do you need to encapsulate the root disk?
Two slices with no cylinders assigned to them.
Either:
  a) Two cylinders that are not part of any slice.
     These 2 cylinders must be at the beginning or
     at the end of the disk.
  or
  b) A 'swap' partition that VxVM can take
     2 cylinders away from.
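A quick way to spot candidate free slices is to scan the 'format' partition listing for entries with a zero block count. The sketch below filters a saved copy of such a listing (a here-document stands in for the live 'format' output, so it is safe to run anywhere):

```shell
# Sketch: list slices with no cylinders assigned, i.e. block count
# (0/0/0), from a saved 'format' partition table.
awk '/\(0\/0\/0\)/ { print "free slice:", $1 }' <<'EOF'
0 unassigned    wm       0 -  203      100.41MB    (204/0/0)
5 unassigned    wm       0               0         (0/0/0)
6 unassigned    wm       0               0         (0/0/0)
7 unassigned    wm       0               0         (0/0/0)
EOF
```

With the sample data above, slices 5, 6 and 7 are reported free, which satisfies the "two slices with no cylinders" requirement.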

      Before and after 'format' outputs.


         -----BEFORE-----     --------AFTER--------
     0   /          0-100     /               0-100
     1   swap     101-200     swap          101-200
     2   backup     0-502     backup          0-502
     3   /usr     201-300     /usr          201-300
     4   /opt     301-400     -               0
     5   /var     401-500     /var          401-500
     6   <free>     0         14 (public)     0-502
     7   <free>     0         15 (private)  501-502

What's the difference between encapsulating root vs non-root disks?
- A non-root disk loses all hard partitions.
- A root disk retains 4 (the "big 4") hard partitions, but all
  others are removed.

However, on both root & non-root disks, data is never MOVED.

When you have a root-disk encapsulated:

- The /etc/vfstab file is modified to mount volumes instead of
  hard partitions. Below the mounts, it puts comments containing
  the original partitions:

  #NOTE: volume rootvol (/) encapsulated partition c0t3d0s0
  #NOTE: volume swapvol (swap) encapsulated partition c0t3d0s1
  #NOTE: volume opt (/opt) encapsulated partition c0t3d0s4
  #NOTE: volume usr (/usr) encapsulated partition c0t3d0s5
  #NOTE: volume var (/var) encapsulated partition c0t3d0s6

- Creates the /etc/vx/reconfig.d/state.d/root-done file  (not used for anything?)

- Creates a copy of the /etc/vfstab file in /etc/vfstab.prevm.
  WARNING: This file, once created, is never modified.
  In other words, if new entries are added to the 'real'
  /etc/vfstab file, this one will be out of date.
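Because /etc/vfstab.prevm can silently go stale, a simple comparison flags the drift. Below is a hedged helper sketch; 'check_vfstab_drift' is a hypothetical function name, not a VxVM command, and on a live host you would pass /etc/vfstab.prevm and /etc/vfstab as its two arguments:

```shell
# Sketch: warn when the vfstab snapshot no longer matches the live
# file. check_vfstab_drift is a made-up helper for illustration.
check_vfstab_drift() {   # usage: check_vfstab_drift <snapshot> <live>
    if [ -f "$1" ] && ! cmp -s "$1" "$2"; then
        echo "WARNING: $1 is out of date relative to $2"
    fi
}
```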

- The /etc/system file contains two additional lines:
  rootdev:/pseudo/vxio@0:0
  set vxio:vol_rootdev_is_volume=1
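Those two lines are the marker the startup scripts look for, so a script can test for root encapsulation the same way. A hedged sketch; 'root_is_encapsulated' is a made-up helper name, and on a live host you would pass /etc/system (a sample file is used for illustration here):

```shell
# Sketch: report whether root is encapsulated by looking for the
# vol_rootdev_is_volume marker. root_is_encapsulated is a
# hypothetical helper, not a VxVM command.
root_is_encapsulated() {   # usage: root_is_encapsulated /etc/system
    grep -q 'vol_rootdev_is_volume=1' "$1"
}
```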

- Even if you create an install-db file, vxconfigd will start up
  anyway! Why? Because if root is under VxVM control, it HAS to.
  The /etc/init startup scripts check the /etc/system file to
  determine if root is encapsulated.

Encapsulating Non-root Disks:
=============================

Why encapsulate? Many reasons...
- To mirror it.
- To grow it (concatenate).
- To "change" it to a striped volume.

Run 'vxdiskadm' (option #2) to encapsulate any disk (either a root or non-root disk).


Why does it have to reboot afterwards?
- It has to repartition the drive.
- It has to change the /etc/vfstab file and must mount the "vx"
  devices (/dev/vx/dsk/rootdg/vol01) instead of the hard
  partitions (/dev/dsk/c0t3d0s0).

What do you need to encapsulate?
- Two slices with no cylinders assigned to them.
- Two cylinders that are not part of any slice.
  These 2 cylinders must be at the beginning or
  at the end of the disk.

What happens AFTER you encapsulate a non-root disk?

Before:

# df -kl
Filesystem            kbytes    used   avail capacity Mounted on
/dev/dsk/c1t1d4s0      96455       9   86806     0%    /data0
/dev/dsk/c1t1d4s1     192423       9  173174     0%    /data1
/dev/dsk/c1t1d4s3     288391       9  259552     0%    /data3
/dev/dsk/c1t1d4s4     384847       9  346358     0%    /data4

# format
Part      Tag    Flag     Cylinders        Size       Blocks
0 unassigned    wm       0 -  203      100.41MB    (204/0/0)
1 unassigned    wm     204 -  610      200.32MB    (407/0/0)
2 backup        wu       0 - 2035     1002.09MB    (2036/0/0)
3 unassigned    wm     611 - 1220      300.23MB    (610/0/0)
4 unassigned    wm    1221 - 2033      400.15MB    (813/0/0)
5 unassigned    wm       0               0         (0/0/0)
6 unassigned    wm       0               0         (0/0/0)
7 unassigned    wm       0               0         (0/0/0)

# prtvtoc /dev/rdsk/c1t1d4s2
*                         First    Sector    Last
* Partition Tag  Flags    Sector   Count     Sector  Mount Directory
     0        0   00           0    205632   205631    /data0
     1        0   01      205632    410256   615887    /data1
     2        0   00           0   2052288  2052287
     3        0   01      615888    614880  1230767    /data3
     4        0   01     1230768    819504  2050271    /data4

Internals: The encapsulation process creates the following files: 

# ll /etc/vx/reconfig.d/state.d
total 0
-rw-rw-rw-  1 root   other   0 Feb 13 13:18 init-cap-part
-rw-rw-rw-  1 root   other   0 Feb 13 13:18 reconfig

# more /etc/vx/reconfig.d/disks-cap-part
c1t1d4

# ls -l /etc/vx/reconfig.d/disk.d/c1t1d4
total 8
-rw-rw-rw-  1 root    other     7 Feb 13 13:18 dmname
-rw-rw-rw-  1 root    other  1143 Feb 13 13:18 newpart
-rw-rw-rw-  1 root    other   476 Feb 13 13:18 vtoc


# more /etc/vx/reconfig.d/disk.d/c1t1d4/dmname
disk01

# more /etc/vx/reconfig.d/disk.d/c1t1d4/newpart
#volume manager partitioning for drive c1t1d4
0 0x0 0x000        0        0
1 0x0 0x000        0        0
2 0x0 0x200        0  2052288
3 0x0 0x000        0        0
4 0x0 0x000        0        0
5 0x0 0x000        0        0
6 0xe 0x201        0  2052288
7 0xf 0x201  2050272     2016
#vxmake vol data0 plex=data0-%%00 usetype=gen
#vxmake plex data0-%%00 sd=disk01-B0,disk01-%%00
#vxmake sd disk01-%%00 disk=disk01 offset=0 len=205631
#vxmake sd disk01-B0 disk=disk01 offset=2050271 len=1 putil0=Block0 comment="Remap of block 0"
#vxvol start data0
#rename c1t1d4s0 data0
#vxmake vol data1 plex=data1-%%01 usetype=gen
#vxmake plex data1-%%01 sd=disk01-%%01
#vxmake sd disk01-%%01 disk=disk01 offset=205631 len=410256
#vxvol start data1
#rename c1t1d4s1 data1
#vxmake vol data3 plex=data3-%%02 usetype=gen
#vxmake plex data3-%%02 sd=disk01-%%02
#vxmake sd disk01-%%02 disk=disk01 offset=615887 len=614880
#vxvol start data3
#rename c1t1d4s3 data3
#vxmake vol data4 plex=data4-%%03 usetype=gen
#vxmake plex data4-%%03 sd=disk01-%%03
#vxmake sd disk01-%%03 disk=disk01 offset=1230767 len=819504
#vxvol start data4
#rename c1t1d4s4 data4

# more /etc/vx/reconfig.d/disk.d/c1t1d4/vtoc
#THE PARTITIONING OF /dev/rdsk/c1t1d4s2 IS AS FOLLOWS :
#SLICE     TAG  FLAGS    START     SIZE
 0         0x0  0x200          0   205632
 1         0x0  0x201     205632   410256
 2         0x0  0x200          0  2052288
 3         0x0  0x201     615888   614880
 4         0x0  0x201    1230768   819504
 5         0x0  0x000          0        0
 6         0x0  0x000          0        0
 7         0x0  0x000          0        0

After Reboot: 

After rebooting the system, you end up with four new volumes. The names of the volumes will be "data0", "data1", "data3", and "data4".

# df -kl
Filesystem            kbytes    used   avail capacity Mounted on
/dev/vx/dsk/data0      96455       9   86806     0%    /data0
/dev/vx/dsk/data1     192423       9  173174     0%    /data1
/dev/vx/dsk/data3     288391       9  259552     0%    /data3
/dev/vx/dsk/data4     384847       9  346358     0%    /data4


# vxprint -ht
DM NAME  DEVICE  TYPE    PRIVLEN  PUBLEN   STATE
V  NAME  USETYPE KSTATE  STATE    LENGTH   READPOL  PREFPLEX
PL NAME  VOLUME  KSTATE  STATE    LENGTH   LAYOUT   NCOL/WID MODE
SD NAME  PLEX    DISK    DISKOFFS LENGTH  [COL/]OFF DEVICE   MODE

dm disk01      c1t1d4s2     sliced   2015     2050272  -
dm disk02      c1t0d4s2     sliced   2015     2050272  -
dm disk03      c1t3d4s2     sliced   2015     2050272  -

v  data0       gen          ENABLED  ACTIVE   205632   ROUND    -
pl data0-01    data0        ENABLED  ACTIVE   205632   CONCAT   -  RW
sd disk01-B0   data0-01     disk01   2050271  1        0   c1t1d4  ENA
sd disk01-04   data0-01     disk01   0        205631   1   c1t1d4  ENA

v  data1       gen          ENABLED  ACTIVE   410256   ROUND    -
pl data1-01    data1        ENABLED  ACTIVE   410256   CONCAT   -  RW
sd disk01-03   data1-01     disk01   205631   410256   0   c1t1d4  ENA

v  data3       gen          ENABLED  ACTIVE   614880   ROUND    -
pl data3-01    data3        ENABLED  ACTIVE   614880   CONCAT   -  RW
sd disk01-02   data3-01     disk01   615887   614880   0   c1t1d4  ENA

v  data4       gen          ENABLED  ACTIVE   819504   ROUND    -
pl data4-01    data4        ENABLED  ACTIVE   819504   CONCAT   -  RW
sd disk01-01   data4-01     disk01   1230767  819504   0   c1t1d4  ENA

# format
Part      Tag    Flag     Cylinders        Size       Blocks
0 unassigned    wm       0               0         (0/0/0)
1 unassigned    wm       0               0         (0/0/0)
2 backup        wm       0 - 2035     1002.09MB    (2036/0/0)
3 unassigned    wm       0               0         (0/0/0)
4 unassigned    wm       0               0         (0/0/0)
5 unassigned    wm       0               0         (0/0/0)
6          -    wu       0 - 2035     1002.09MB    (2036/0/0)
7          -    wu    2034 - 2035        0.98MB    (2/0/0)

# prtvtoc /dev/rdsk/c1t1d4s2
*                          First     Sector   Last
* Partition  Tag  Flags    Sector    Count    Sector  Mount Directory
    2          0    00           0   2052288  2052287
    6          14   01           0   2052288  2052287
    7          15   01     2050272      2016  2052287

Article URL http://www.symantec.com/docs/TECH157465