Veritas Storage Foundation and High Availability Solutions Application Note: Support for Logical Domains with 5.0 Maintenance Pack 1 October 2008



Veritas Storage Foundation and High Availability Solutions Application Note

Copyright © 2008 Symantec Corporation. All rights reserved.

Storage Foundation and High Availability Solutions 5.0 Maintenance Pack 1

Symantec, the Symantec logo, Veritas, and Veritas Storage Foundation are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

The product described in this document is distributed under licenses restricting its use, copying, distribution, and decompilation/reverse engineering. No part of this document may be reproduced in any form by any means without prior written authorization of Symantec Corporation and its licensors, if any.

THIS DOCUMENTATION IS PROVIDED “AS IS” AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.

The Licensed Software and Documentation are deemed to be “commercial computer software” and “commercial computer software documentation” as defined in FAR Sections 12.212 and DFARS Section 227.7202.

Symantec Corporation
20330 Stevens Creek Blvd.
Cupertino, CA 95014
www.symantec.com


Third-party legal notices

Third-party software may be recommended, distributed, embedded, or bundled with this Symantec product. Such third-party software is licensed separately by its copyright holder. All third-party copyrights associated with this product are listed in the Veritas Storage Foundation 5.0 Release Notes.

The Veritas Storage Foundation 5.0 Release Notes can be viewed at the following URL:
http://entsupport.symantec.com/docs/283886

Solaris is a trademark of Sun Microsystems, Inc.

Licensing and registration

Veritas Storage Foundation is a licensed product. See the Veritas Storage Foundation Installation Guide for license installation instructions.

Technical support

For technical assistance, visit http://www.symantec.com/enterprise/support/assistance_care.jsp and select Product Support. Select a product and use the Knowledge Base search feature to access resources such as TechNotes, product alerts, software downloads, hardware compatibility lists, and our customer email notification service.


Contents

Chapter 1  Storage Foundation and High Availability Solutions Support for Solaris Logical Domains

    Introduction
    New features
        Solaris Logical Domains
        Storage Foundation and High Availability Solutions feature support for Logical Domains
        Terminology for Logical Domains
    Reference architecture for Storage Foundation
    How Storage Foundation and High Availability Solutions works in LDoms
    System requirements
        Solaris operating system requirements
        Sun hardware requirements
        Veritas software requirements
        Veritas patches
        Veritas Storage Foundation features restrictions
        Localization
    Component product release notes
        Storage Foundation
        High Availability
    Product licensing
    Installing Storage Foundation in a LDom environment
        Installing and configuring the LDom software and domains
        Installing Storage Foundation in the control domain
        Installing VxFS in the guest domain using pkgadd
        Verifying the configuration
    Migrating a VxVM disk group from a non-LDom environment to an LDom environment
    Provisioning storage for a Guest LDom
        Provisioning VxVM volumes as data disks for guest LDoms
        Provisioning VxFS files as boot disks for guest LDoms
    Using VxVM snapshots for cloning LDom boot disks
    Software limitations
        I/O devices cannot be added dynamically
        VxVM cannot be used in the guest domain
        A VxVM volume exported to a guest LDom appears as a single slice
        Binding a whole disk which is under VxVM control fails silently
        A DMP metanode cannot be used to export a whole disk to a guest logical domain
        The eeprom command cannot be used to reset EEPROM values to null

Chapter 2  Using multiple nodes in an LDom environment

    Clustering using cluster volume manager (CVM)
    Installing Storage Foundation on multiple nodes in a LDom environment
        Reconfiguring the clustering agents for CVM
    CVM in the control domain for providing high availability

Chapter 3  Configuring Logical Domains for high availability using Veritas Cluster Server

    About Veritas Cluster Server in an LDom environment
    Installing VCS in an LDom environment
        VCS requirements
        VCS prerequisites
        VCS limitations
        Installation instructions for VCS
    About configuring VCS in an LDom environment
    Configuration scenarios
        Network configuration
        Storage configurations
    Creating the service groups
        Creating an LDom service group using VCS Service Group Configuration Wizard for Sun's LDoms
        Verifying a service group failover
    Configuring VCS to manage applications in guest domains
        Creating a logical domain
        Installing and configuring one-node VCS inside the logical domain
        Installing and configuring VCS inside the control domain
    About VCS agent for LDoms
        LDom agent


Chapter 1

Storage Foundation and High Availability Solutions Support for Solaris Logical Domains

This chapter contains the following:

■ Introduction

■ New features

■ Reference architecture for Storage Foundation

■ How Storage Foundation and High Availability Solutions works in LDoms

■ System requirements

■ Component product release notes

■ Product licensing

■ Installing Storage Foundation in a LDom environment

■ Migrating a VxVM disk group from a non-LDom environment to an LDom environment

■ Provisioning storage for a Guest LDom

■ Using VxVM snapshots for cloning LDom boot disks

■ Software limitations


Introduction

This document provides release information about support for Solaris Logical Domains (LDoms) using the products in the Veritas Storage Foundation and High Availability Solutions 5.0 Maintenance Pack 1 (MP1) Solaris product line.

Support for Solaris Logical Domains (LDoms) is also available in later releases of Veritas Storage Foundation and High Availability Solutions.

Review this entire document before installing your Veritas Storage Foundation and High Availability products.

For information about Veritas Storage Foundation and High Availability Solutions 5.0 and 5.0 Maintenance Pack 1, refer to:

■ Veritas Cluster Server Release Notes 5.0 for Solaris

■ Veritas Cluster Server Release Notes 5.0 MP1 for Solaris

■ Veritas Storage Foundation Release Notes 5.0 for Solaris

■ Veritas Storage Foundation Release Notes 5.0 MP1 for Solaris

For information about installing Veritas Storage Foundation 5.0, refer to the following documentation:

■ Veritas Storage Foundation Installation Guide 5.0 for Solaris

■ Veritas Cluster Server Installation Guide 5.0 for Solaris

For further information about installing Veritas Cluster Server 5.0, see “Installation instructions for VCS” on page 41.

Table 1-1 Storage Foundation and High Availability Solutions 5.0 and 5.0 Maintenance Pack 1 information

Late Breaking News (LBN), which has additions to the Release Notes: http://entsupport.symantec.com/docs/281987

Hardware compatibility list (HCL): http://entsupport.symantec.com/docs/283161

Hardware TechNote: http://entsupport.symantec.com/docs/283282


New features

Support for the new Logical Domain feature from Sun Microsystems has been incorporated into this release of Veritas Storage Foundation and High Availability Solutions.

Solaris Logical Domains

Logical Domains (LDoms) is a virtualization technology on the Solaris SPARC platform that enables the creation of independent virtual machine environments on the same physical system. This allows you to consolidate and centrally manage your workloads on one system.

The Veritas Storage Foundation 5.0 MP1 release supports Solaris Logical Domains. Support for Solaris Logical Domains (LDoms) is also available in later releases of Veritas Storage Foundation and High Availability Solutions.

For more information about Solaris Logical Domains, refer to the Sun documentation:

Beginners Guide to LDoms: Understanding and Deploying Logical Domains for Logical Domains 1.0 Release.

For installation and configuration information for Solaris Logical Domains, refer to the Sun documentation:

Logical Domains (LDoms) 1.0 Administration Guide, and Logical Domains (LDoms) 1.0 Release Notes.

Sun provides regular updates and patches for the Solaris Logical Domains feature. Contact Sun for details.


Storage Foundation and High Availability Solutions feature support for Logical Domains

Standardization of tools

Independent of how an operating system is hosted, consistent storage management tools save an administrator time and reduce the complexity of the environment.

Storage Foundation in the control domain provides the same command set, storage namespace, and environment as in a non-virtual environment.

Array migration

Data migration for Storage Foundation can be executed in a central location, migrating all storage from an array utilized by Storage Foundation managed hosts.

This powerful, centralized data migration functionality is available with Storage Foundation Manager 1.1 and later.

Moving storage between physical and virtual environments

Storage Foundation can make painful migrations of data from physical to virtual environments easier and safer to execute.

With Storage Foundation there is no need to copy any data from source to destination, but rather the administrator reassigns the same storage (or a copy of it for a test migration) to the virtual environment.


Terminology for Logical Domains

The following terminology is helpful in configuring the Veritas software in Logical Domains.

Table 1-2 Logical Domains terminology

LDom: Logical Domain or Virtual Machine with its own operating system, resources, and identity within the same physical host.

Hypervisor: A firmware layer that provides a set of hardware-specific support functions to the operating systems running inside LDoms through a stable interface, known as the sun4v architecture. It is interposed between the operating system and the hardware layer.

Logical Domains Manager: Software that communicates with the Hypervisor and logical domains to sequence changes, such as the removal of resources or creation of a logical domain. The Logical Domains Manager provides an administrative interface and keeps track of the mapping between the physical and virtual devices in a system.

Control domain: The primary domain, which provides a configuration platform to the system for the setup and teardown of logical domains. Executes Logical Domains Manager software to govern logical domain creation and assignment of physical resources.

I/O domain: Controls direct, physical access to input/output devices, such as PCI Express cards, storage units, and network devices. The default I/O domain is the control domain.

Guest domain: Utilizes virtual devices offered by control and I/O domains and operates under the management of the control domain.

Virtual devices: Physical system hardware, including CPU, memory, and I/O devices that are abstracted by the Hypervisor and presented to logical domains within the platform.

Logical Domains Channel (LDC): A point-to-point, full-duplex link created by the Hypervisor. LDCs provide a data path between virtual devices and guest domains and establish virtual networks between logical domains.


Reference architecture for Storage Foundation

Figure 1-1 illustrates the architecture of Storage Foundation with Solaris Logical Domains.

Figure 1-1 Block level view of VxVM and VxFS in LDoms environment


How Storage Foundation and High Availability Solutions works in LDoms

Storage Foundation and High Availability Solutions supports Solaris Logical Domains in single-node, multiple-node, and multiple-node high availability configurations.

■ For a single node configuration, VxVM (including DMP) is placed in the control domain, and VxFS is placed in the guest domain.

■ For clustered nodes, CVM is placed in the control domain, and VxFS is placed in the guest domain.

For more information about multiple-node support for Solaris Logical Domains, see “Using multiple nodes in an LDom environment” on page 33.

■ For clustered nodes in a highly available environment, install Veritas Cluster Server (VCS) in the control domain.

For more information about providing high availability for multiple nodes, see “Configuring Logical Domains for high availability using Veritas Cluster Server” on page 39.

■ VxFS drivers in the guest domain cannot currently interact with the VxVM drivers in the control domain. This renders some features that require direct VxVM-VxFS coordination unusable in such a configuration. The features that are not supported are listed in the section “Veritas Storage Foundation features restrictions”.

Note: VxFS can also be placed in the control domain, but there will be no co-ordination between the two VxFS instances in the guest and the control domain.


System requirements

This section describes the system requirements for this release.

Solaris operating system requirements

Veritas Storage Foundation 5.0 MP1 with support for Logical Domains is supported on the following Solaris operating systems:

■ Solaris 10 (SPARC Platform), update 4

■ LDom software from Sun, version 1.0.1 or later

http://www.sun.com/servers/coolthreads/ldoms/get.jsp

Sun hardware requirements

Veritas Storage Foundation 5.0 MP1 with support for Logical Domains is supported on the following hardware:

■ Sun Fire and SPARC Enterprise T1000 Servers

■ Sun Fire and SPARC Enterprise T2000 Servers

Sun supports additional platforms for the Solaris Logical Domains feature. Contact Sun for details.

Veritas software requirements

Solaris Logical Domains are supported only with the following Veritas software:

■ Veritas Storage Foundation 5.0 MP1 or later

The following Veritas products are supported in this release:

■ VxVM

■ CVM

■ VxFS

■ Veritas Cluster Server 5.0 MP1 or later


Veritas patches

The following Veritas patches or hotfixes are required for support with Solaris Logical Domains:

■ Veritas Cluster Server requires Veritas patch 128055-01.

■ The 5.0 product installation scripts for Solaris fail in the ssh communications phase if the node prints a system banner when you ssh to that node.

This issue was found on Solaris 10 Update 3 and Update 4 with the Solaris Logical Domains software installed. The Solaris Logical Domains software uses ssh as the default means of communication with the control domain.

You need to apply a hotfix patch if a banner such as the following is in /etc/issue:

    This system is for the use of authorized users only. Individuals
    using this computer system without authority, or in excess of their
    authority, are subject to having all of their activities on this
    system monitored and recorded by system personnel.

    In the course of monitoring individuals improperly using this
    system, or in the course of system maintenance, the activities of
    authorized users may also be monitored.

    Anyone using this system expressly consents to such monitoring and
    is advised that if such monitoring reveals possible evidence of
    criminal activity, system personnel may provide the evidence of
    such monitoring to law enforcement officials.

Problem resolution: Install hotfix 292592 from the following location:

http://support.veritas.com/docs/292592


Veritas Storage Foundation features restrictions

The following Veritas Storage Foundation software features may be restricted in a Solaris LDom guest environment:

■ VxVM volume snapshots

Due to the inability of VxFS in the guest domain to coordinate with VxVM in the control domain, taking a data consistent snapshot of a VxVM volume containing a VxFS file system requires shutting down the application and unmounting the file system before taking the snapshot.

■ Resizing VxVM volumes and any File System on top of the volume with vxresize

Resizing a file system in the guest whose underlying device is backed by a VxVM volume in the control domain requires resizing the VxVM volume and the file system individually.

■ Resizing VxVM volumes and VxFS together with vxresize.

Resizing a VxFS file system in the guest whose underlying device is backed by a VxVM volume in the control domain requires resizing the VxVM volume and the VxFS file system individually.

Growing a VxFS file system in the guest whose underlying device is backed by a VxVM volume requires you to first grow the volume in the control domain using the vxassist command, and then the file system in the guest LDom using the fsadm command (a hypothetical command sequence appears after this list).

Shrinking a VxFS file system, on the other hand, requires you to first shrink the file system in the guest LDom using fsadm and then the volume in the control domain using vxassist. Using vxassist requires the "-f" force option of the command, as in the following example:

# vxassist -g [diskgroup] -f shrinkto volume length

Caution: Do not shrink the underlying volume beyond the size of the VxFS file system in the guest as this can lead to data loss.

■ Exporting a volume set to a guest LDom and trying to read/write the volume set is not currently supported.

■ Veritas Volume Replicator is not supported in an LDoms environment.
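
As an example of the resize restriction described above, a hedged sketch of the grow sequence, assuming the volume datavol1 in disk group datadg (names reused from the provisioning examples later in this document) backs the guest device mounted at /mnt, and that both are grown to 2 GB (4194304 sectors):

primary# vxassist -g datadg growto datavol1 2g

ldom1# fsadm -F vxfs -b 4194304 /mnt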

The following Veritas VxFS software features are not supported in a Solaris LDom guest environment:

■ Multi-Volume Filesets/DST

■ File-Level Smartsync


■ The following VxFS tunables will not be set to their default values based on the underlying volume layout, due to VxFS being in the guest LDom and VxVM installed in the control domain:

■ read_pref_io/write_pref_io

■ read_nstream/write_nstream

If desired, you can set the values of these tunables based on the underlying volume layout in the /etc/vx/tunefstab file (a hypothetical entry appears after this list).

For more information, refer to the section “Tuning I/O” in the Veritas File System Administrator's Guide for version 5.0.

■ Storage Foundation Cluster File System is not recommended in an LDoms environment.
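
As an example of setting these tunables, a hypothetical /etc/vx/tunefstab entry in the guest domain, assuming the VxFS file system is on device c0d1s0 and the underlying VxVM volume in the control domain is striped across four columns with a 64 KB stripe unit (the device name, the values, and the comma-separated entry format are illustrative assumptions, not defaults):

/dev/dsk/c0d1s0 read_pref_io=65536,write_pref_io=65536,read_nstream=4,write_nstream=4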

Localization

This Application Note is not localized. It is available in English only.

Component product release notes

Storage Foundation

Release notes for component products in all versions of the Veritas Storage Foundation are located under the storage_foundation/release_notes directory of the Veritas Storage Foundation disc. It is important that you read the relevant component product release notes before installing any version of Veritas Storage Foundation:

■ Veritas Storage Foundation Release Notes (sf_notes.pdf)

■ Veritas Cluster Server Release Notes (vcs_notes.pdf)

Because product release notes are not installed by any packages, Symantec recommends that you copy them to the /opt/VRTSproduct_name/doc directory after the product installation so that they are available for future reference.

High Availability

Find the Veritas Cluster Server release notes in the cluster_server/release_notes directory of the product disc.


Product licensing

Symantec’s pricing policy changes when the products are used in an LDom virtual machine environment. Contact Symantec sales for more information.


Installing Storage Foundation in a LDom environment

To install Storage Foundation in a Solaris Logical Domains environment, follow the procedures in this section.

To install Veritas Cluster Server in a Solaris Logical Domains environment, see “Installing VCS in an LDom environment” on page 40.

To install Storage Foundation in a Solaris Logical Domains environment, you must complete the following operations:

■ “Installing and configuring the LDom software and domains”

■ “Installing Storage Foundation in the control domain”

■ “Installing VxFS in the guest domain using pkgadd”

■ “Verifying the configuration”

Installing and configuring the LDom software and domains

Refer to the Sun documentation for instructions about installing and configuring the Logical Domain software and configuring the control and guest domains.

See Logical Domains (LDoms) 1.0 Administration Guide.

See Logical Domains (LDoms) 1.0.1 Administration Guide.

Installing Storage Foundation in the control domain

If you are installing Veritas Storage Foundation for the first time, you must first install version 5.0 and then upgrade to MP1.

Note: A Solaris Logical Domains environment is not supported with Storage Foundation 5.0. Storage Foundation 5.0 MP1 is required.

Use the procedures in the Veritas installation documentation and Release Notes to install Storage Foundation to the control domain.

Install version 5.0 first.

See Veritas Storage Foundation Installation Guide 5.0 for Solaris.

See Veritas Storage Foundation Release Notes 5.0 for Solaris.

Then, upgrade to 5.0 MP1.

See Veritas Storage Foundation Release Notes 5.0 MP1 for Solaris.


Installing VxFS in the guest domain using pkgadd

VxFS must be installed in the guest domain using pkgadd.

To install VxFS 5.0 in the guest domain using pkgadd

1 Copy the VxFS packages from the DVD mounted in the control domain to a writable location in the guest domain (for example, a directory named pkgs) where you can uncompress and extract them.

The VxFS packages are in the following directory on the DVD:

file_system/packages

2 The VxFS packages are compressed using GNU compression. Uncompress the packages in the pkgs directory using the gunzip command:

# gunzip pkgs/*.gz

3 Change to the pkgs directory that now contains the VxFS packages:

# cd pkgs

Use tar to extract the packages. You should extract each package individually using a command such as:

# tar xvf VRTSvxfs.tar

Repeat the command for each package.

4 Install the packages:

# pkgadd -d . VRTSvlic VRTSvxfs VRTSfsman VRTSfsdoc \
VRTSfssdk VRTSfsmnd

5 Do not reboot. Rebooting is not necessary at this time, even if the script prompts you to reboot.

To upgrade VxFS to 5.0 MP1 in the guest domain

1 Copy the VxFS patches from the DVD mounted in the control domain to a writable location in the guest domain (for example, a directory named patches) where you can uncompress and extract them.

The VxFS patch for Solaris 10 is in the following file on the DVD:

file_system/patches/123202-02.tar.gz

2 The VxFS patches are compressed using GNU compression. Uncompress the patches in the patches directory using the gunzip command:

# gunzip patches/*.gz

3 Change to the patches directory that now contains the VxFS patches:

# cd patches

Use tar to extract the patches. You should extract each patch individually using a command such as:

# tar xvf 123202-02.tar


Repeat the command for each VxFS patch.

4 Use the patchadd command to add the patches.

# patchadd -M patches 123202-02

5 Reboot the system.

Verifying the configuration

Verify the configuration of Logical Domains in the control domain and the guest domain. Refer to the Sun documentation for details.

See Logical Domains (LDoms) 1.0 Administration Guide.

Verify the Storage Foundation installation in both the control domain and the guest domain.

Caution: Only VxFS should be installed in the guest domain. Verify that no other Storage Foundation packages are installed there.

See Veritas Storage Foundation Installation Guide 5.0 for Solaris.

See Veritas Storage Foundation Release Notes 5.0 for Solaris.

See Veritas Storage Foundation Release Notes 5.0 MP1 for Solaris.

Migrating a VxVM disk group from a non-LDom environment to an LDom environment

Use the following procedure to migrate a VxVM disk group from a non-LDom environment to an LDom environment.

Follow the “Moving disk groups between systems” procedure in the VxVM Administrator's Guide to migrate the disk group.
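
A hedged sketch of that migration, using dg-name as a placeholder disk group name (see the VxVM Administrator's Guide for the complete procedure and any required options):

On the source (non-LDom) host, stop the volumes and deport the disk group:

# vxvol -g dg-name stopall
# vxdg deport dg-name

In the control domain of the target LDom host, rescan the devices, import the disk group, and start its volumes:

primary# vxdctl enable
primary# vxdg import dg-name
primary# vxvol -g dg-name startall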

The VxVM disk group is imported in the control domain of the target LDom host, and its volumes are visible from inside the control domain. Then follow these additional steps for LDoms.

In this example, the control domain is named “primary” and the guest domain is named “ldom1.” The prompts in each step show in which domain to run the command.

To create virtual disks on top of the VxVM data volumes using the ldm command

1 In the control domain (primary) configure a service exporting the VxVM volume as a virtual disk.

primary# ldm add-vdiskserverdevice /dev/vx/dsk/dg-name/vol-name \
bootdisk1-vol@primary-vds0

2 Add the exported disk to a guest LDom.

primary# ldm add-vdisk vdisk1 bootdisk1-vol@primary-vds0 ldom1

3 Start the guest domain, and make sure the new virtual disk is visible.

primary# ldm bind ldom1
primary# ldm start ldom1

4 You might also have to run the devfsadm command in the guest domain.

ldom1# devfsadm -C

In this example, the new disk appears as /dev/[r]dsk/c0d1s0.

ldom1# ls -l /dev/dsk/c0d1s0
lrwxrwxrwx 1 root root 62 Sep 11 13:30 /dev/dsk/c0d1s0 -> ../../devices/virtual-devices@100/channel-devices@200/disk@1:a

Note: With Solaris 10 Update 4, a VxVM volume shows up as a single slice in the guest LDom.

Refer to “Software limitations” on page 28, or to the LDoms 1.0 release notes from Sun (“Virtual Disk Server Should Export ZFS Volumes as Full Disks,” Bug ID 6514091), for more details.

5 Mount the file system on the disk to access the application data. Use the mount command that matches the type of file system on the disk:

ldom1# mount -F vxfs /dev/dsk/c0d1s0 /mnt
ldom1# mount -F ufs /dev/dsk/c0d1s0 /mnt

Caution: After the “volume as a single slice” limitation is fixed by Sun, a volume will show up as a full disk in the guest by default. In that case, the Virtual Disk Client driver writes a VTOC on block 0 of the virtual disk, which results in a write to block 0 of the VxVM volume. This can potentially cause data corruption, because block 0 of the VxVM volume contains user data. Sun will provide an option in the LDom CLI to export a volume as a single-slice disk. Always use this option in the migration scenario, because the VxVM volume already contains user data at block 0.

Provisioning storage for a Guest LDom

Use the following procedure to provision storage for a Guest LDom. Both boot disks and data disks can be provisioned.


Provisioning VxVM volumes as data disks for guest LDoms

Use the following procedure to use VxVM volumes as data disks (virtual disks) for guest LDoms.

VxFS can be used as the file system on top of these disks.

In this example, the control domain is named “primary” and the guest domain is named “ldom1.” The prompts in each step show in which domain to run the command.

To provision VxVM volumes as data disks

1 Create a VxVM disk group (datadg in this example) with some disks allocated to it.

primary# vxdg init datadg TagmaStore-USP0_29 TagmaStore-USP0_30

2 Create a VxVM volume of the desired layout (in this example, creating a simple volume).

primary# vxassist -g datadg make datavol1 500m

3 Configure a service exporting the volume datavol1 as a virtual disk.

primary# ldm add-vdiskserverdevice /dev/vx/dsk/datadg/datavol1 \
bootdisk1-vol@primary-vds0

4 Add the exported disk to a guest domain.

primary# ldm add-vdisk vdisk1 bootdisk1-vol@primary-vds0 ldom1

5 Start the guest domain, and make sure the new virtual disk is visible.

primary# ldm bind ldom1
primary# ldm start ldom1

6 You might also have to run the devfsadm command in the guest domain.

ldom1# devfsadm -C

To create a VxFS file system on top of the new virtual disk

1 Make the file system. The disk is c0d1s0 in this example.

ldom1# mkfs -F vxfs /dev/rdsk/c0d1s0

2 Mount the file system.

ldom1# mount -F vxfs /dev/dsk/c0d1s0 /mnt

3 Verify that the file system has been created.

ldom1# df -hl -F vxfs
Filesystem            size   used  avail capacity  Mounted on
/dev/dsk/c0d1s0       500M   2.2M   467M     1%    /mnt

Provisioning VxFS files as boot disks for guest LDoms

Use the following procedure to provision boot disks for a guest domain.


Because VxVM volumes currently show up as “single slice disks” in the guest LDoms, they cannot be used as boot disks for the guests. However, a large VxFS file can be used to provision a boot disk for a guest LDom, because a file appears as a whole disk in the guest LDom.

The following process gives the outline of how a VxFS file can be used as a boot disk.

In this example, the control domain is named “primary” and the guest domain is named “ldom1.” The prompts in each step show in which domain to run the command.

To provision VxFS files as boot disks for guest LDoms

1 On the control domain, create a VxVM volume of a size that is recommended for Solaris 10 OS installation. In this example, a 7GB volume is created.

primary# vxassist -g bootdisk-dg make bootdisk-vol 7g

2 Create a VxFS file system on top of the volume and mount it.

primary# mkfs -F vxfs /dev/vx/rdsk/bootdisk-dg/bootdisk-vol
primary# mount -F vxfs /dev/vx/dsk/bootdisk-dg/bootdisk-vol /fs1

3 Create a large file of size 6 GB on this file system.

primary# mkfile 6G /fs1/bootimage1

4 Configure a service exporting the file /fs1/bootimage1 as a virtual disk.

primary# ldm add-vdiskserverdevice /fs1/bootimage1 \
bootdisk1-vol@primary-vds0

5 Add the exported disk to ldom1.

primary# ldm add-vdisk vdisk1 bootdisk1-vol@primary-vds0 ldom1

6 Follow Sun's recommended steps to install and boot a guest domain, and use the virtual disk vdisk1 as the boot disk during the net install.

Using VxVM snapshots for cloning LDom boot disks

This procedure highlights the steps to clone the boot disk of an existing LDom using VxVM snapshots.

The example uses third-mirror break-off snapshots.

Refer to “Provisioning VxFS files as boot disks for guest LDoms” on page 23 for details on how to provision such a boot disk for a guest LDom.

Figure 1-2 illustrates the example names used in the following procedure.


Figure 1-2 Example of using VxVM snapshots for cloning LDom boot disks

Before this procedure, ldom1 has its boot disk contained in a large file (/fs1/bootimage1) in a VxFS file system which is mounted on top of a VxVM volume.

This procedure involves the following steps:

■ Cloning the LDom configuration to form a new LDom configuration.

This step is a Solaris LDom procedure and can be achieved using the following commands (a hedged example sequence appears after this list):

# ldm list-constraints -x
# ldm add-domain -i

Refer to Solaris LDoms documentation for more details about how to carry out this step.

■ After cloning the configuration, clone the boot disk and provision it to the new LDom.

To create a new LDom with a configuration different from that of ldom1, skip the step of cloning the configuration and just create the desired LDom configuration separately.
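
For example, one hedged command sequence for the configuration cloning step (the XML file names and the new domain name ldom2 are placeholders; refer to the Sun documentation for the authoritative procedure):

primary# ldm list-constraints -x ldom1 > ldom1.xml
primary# cp ldom1.xml ldom2.xml

Edit ldom2.xml to change the domain name from ldom1 to ldom2, then create the new domain from the edited file:

primary# ldm add-domain -i ldom2.xml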


To clone the boot disk using VxVM snapshots

1 Create a third-mirror breakoff snapshot of the source volume bootdisk1-vol. To create the snapshot, you can either take some of the existing ACTIVE plexes in the volume, or you can use the following command to add new snapshot mirrors to the volume:

primary# vxsnap [-b] [-g diskgroup] addmir volume [nmirror=N] \
[alloc=storage_attributes]

By default, the vxsnap addmir command adds one snapshot mirror to a volume unless you use the nmirror attribute to specify a different number of mirrors. The mirrors remain in the SNAPATT state until they are fully synchronized. The -b option can be used to perform the synchronization in the background. Once synchronized, the mirrors are placed in the SNAPDONE state.

For example, the following command adds two mirrors to the volume bootdisk1-vol on disks mydg10 and mydg11:

primary# vxsnap -g mydg addmir bootdisk1-vol nmirror=2 \
alloc=mydg10,mydg11

If you specify the -b option to the vxsnap addmir command, you can use the vxsnap snapwait command to wait for synchronization of the snapshot plexes to complete, as shown in this example:

primary# vxsnap -g mydg snapwait bootdisk1-vol nmirror=2

2 To create a third-mirror break-off snapshot, use the following form of the vxsnap make command.

Caution: Shut down the guest domain before executing the vxsnap command to take the snapshot.

primary# vxsnap [-g diskgroup] make source=volume[/newvol=snapvol] \
{/plex=plex1[,plex2,...]|/nmirror=number}

Either of the following attributes may be specified to create the new snapshot volume, snapvol, by breaking off one or more existing plexes in the original volume:

plex Specifies the plexes in the existing volume that are to be broken off. This attribute can only be used with plexes that are in the ACTIVE state.

nmirror Specifies how many plexes are to be broken off. This attribute can only be used with plexes that are in the SNAPDONE state. (Such plexes could have been added to the volume by using the vxsnap addmir command.)


Snapshots that are created from one or more ACTIVE or SNAPDONE plexes in the volume are already synchronized by definition.

For backup purposes, a snapshot volume with one plex should be sufficient.

3 Use fsck (or some utility appropriate for the application running on the volume) to clean the temporary volume’s contents. For example, you can use this command with a VxFS file system:

primary# fsck -F vxfs /dev/vx/rdsk/diskgroup/snapshot

4 Mount the VxFS file system on the snapshot volume.

primary# mount -F vxfs /dev/vx/dsk/bootdisk-dg/SNAP-bootdisk1-vol \
/snapshot1/

This file system will contain a copy of the golden boot image file /fs1/bootimage1.

The cloned file is visible on the primary.

primary# ls -l /snapshot1/bootimage1
-rw------T 1 root root 6442450944 Sep 4 12:40 /snapshot1/bootimage1

5 Verify that the checksums of the original file and the copy are the same.

primary# cksum /fs1/bootimage1
primary# cksum /snapshot1/bootimage1

6 Configure a service exporting the file /snapshot1/bootimage1 as a virtual disk.

primary# ldm add-vdiskserverdevice /snapshot1/bootimage1 \
vdisk2@primary-vds0

7 Add the exported disk to ldom1 first.

primary# ldm add-vdisk vdisk2 vdisk2@primary-vds0 ldom1

8 Start ldom1 and boot ldom1 from its primary boot disk vdisk1.

primary# ldm bind ldom1
primary# ldm start ldom1

9 You may have to run the devfsadm -C command to create the device nodes for the newly added virtual disk (vdisk2).

ldom1# devfsadm -C

In this example, the device entries for vdisk2 are c0d2s#.

ldom1# ls /dev/dsk/c0d2s*
/dev/dsk/c0d2s0 /dev/dsk/c0d2s2 /dev/dsk/c0d2s4 /dev/dsk/c0d2s6
/dev/dsk/c0d2s1 /dev/dsk/c0d2s3 /dev/dsk/c0d2s5 /dev/dsk/c0d2s7

10 Mount the root file system of c0d2s0 and modify the /etc/vfstab entries such that all c#d#s# entries are changed to c0d0s#. This is necessary because ldom2 is a new LDom and the first disk in the OS device tree is always named c0d0s#.
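
For example, a hedged sketch of this step, assuming the cloned boot image contains a UFS root file system and using /mnt as a temporary mount point in ldom1 (both are assumptions, not part of the original procedure):

ldom1# mount -F ufs /dev/dsk/c0d2s0 /mnt
ldom1# vi /mnt/etc/vfstab

In the vfstab file, change each c#d#s# device entry to the corresponding c0d0s# entry, and then save the file.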

11 After the vfstab has been changed, unmount the file system and unbind vdisk2 from ldom1.

primary# ldm remove-vdisk vdisk2 ldom1


12 Bind vdisk2 to ldom2 and then start and boot ldom2.

Now ldom2 will boot from the cloned disk but will look like ldom1 as the hostname and IP address are still that of ldom1.

primary# ldm add-vdisk vdisk2 vdisk2@primary-vds0 ldom2
primary# ldm bind ldom2
primary# ldm start ldom2

13 After booting ldom2, it appears as ldom1 on the console:

ldom1 console login:

This is because the other host-specific parameters like hostname and IP address are still that of ldom1.

To change these, bring ldom2 to single-user mode and run sys-unconfig.

14 After running sys-unconfig, reboot ldom2.

During the reboot, the OS will prompt you to configure the host-specific parameters like hostname and IP address, which you need to enter corresponding to ldom2.

15 After you have specified all these parameters, the LDom ldom2 will boot up successfully.

Software limitations

This section describes some of the limitations of the Solaris Logical Domains software and how those limitations affect the functionality of the Veritas Storage Foundation products.

I/O devices cannot be added dynamically

I/O devices cannot be added dynamically to guest LDoms. Adding an I/O device requires shutting down and rebooting the LDom.

Due to this limitation, VxVM volumes or VxFS files cannot be added to a guest LDom without shutting down and rebooting the guest LDom.


VxVM cannot be used in the guest domain

(Tracked as Sun bug ID 6437722.)

The Guest LDom currently does not support the following disk-based ioctls:

USCSICMD

DKIOCINFO

DKIOCGMEDIAINFO

Due to this, SCSI inquiry does not work from the guest, rendering VxVM/DMP unusable in the guest domain.

A VxVM volume exported to a guest LDom appears as a single slice

A VxVM volume, when exported to the guest, appears as a single slice in the guest.

Example:

# ls -l /dev/dsk/c0d2s0
lrwxrwxrwx 1 root root 62 Jul 18 22:20 /dev/dsk/c0d2s0 -> ../../devices/virtual-devices@100/channel-devices@200/disk@2:a

Due to this limitation, a VxVM volume cannot be used as a boot disk for a guest LDom.

Binding a whole disk which is under VxVM control fails silently

(Tracked as Sun bug ID 6528156.)

When trying to bind a whole disk which is under VxVM control (and whose multipathing is done by DMP), the bind fails silently.

This is because the virtual disk server (vds) driver tries to issue an exclusive open on the disk device, which fails because the device is already held open by DMP.

DMP always keeps an online disk device open via the current primary path in case of A/P arrays, and via all ACTIVE/ENABLED paths in case of A/A arrays.

Sun states that the open with EXCLUSIVE flag will be removed in the subsequent release of LDoms; then this problem will no longer occur.

Workaround: If Veritas Volume Manager (VxVM) is installed in the control domain, then before exporting a full disk to a guest LDom, you first need to disable the exclusive open done by the vds driver by setting the kernel global variable "vd_open_flags" to "0x3".

You can disable the exclusive open on the running system with the following command:

echo 'vd_open_flags/W 0x3' | mdb -kw


You also need to add the change in /etc/system to make it persistent across reboots:

set vds:vd_open_flags = 0x3

Note: This is a temporary workaround until the SUN bug listed above is fixed and delivered in a patch.

A DMP metanode cannot be used to export a whole disk to a guest logical domain

(Tracked as Sun bug ID 6528156.)

When a virtual disk is bound using the DMP metanode to a guest LDom, the bind fails silently due to the exclusive open issue in the current Logical Domains software.

Because the open on the DMP metanode fails, DMP internally disables all the underlying paths of that disk device. These paths, however, are enabled when the restore daemon then probes the paths.

A DMP metanode cannot be used to successfully export a whole disk to a guest LDom.

Sun states that the open with EXCLUSIVE flag will be removed in the subsequent release of LDoms; then this problem will no longer occur.

Workaround: If Veritas Volume Manager (VxVM) is installed in the control domain, then before exporting a full disk to a guest LDom, you first need to disable the exclusive open done by the vds driver by setting the kernel global variable "vd_open_flags" to "0x3".

You can disable the exclusive open on the running system with the following command:

echo 'vd_open_flags/W 0x3' | mdb -kw

You also need to add the change in /etc/system to make it persistent across reboots:

set vds:vd_open_flags = 0x3

Note: This is a temporary workaround until the SUN bug listed above is fixed and delivered in a patch.


The eeprom command cannot be used to reset EEPROM values to null

Because eeprom(1M) has some issues in the control domain of an LDoms system, setting a devalias entry for an alternate boot disk using the eeprom command does not work in the control domain.

The eeprom(1M) command cannot be used to reset EEPROM values to null in Logical Domains systems.

The following example shows what happens if you attempt to reset EEPROM values to null in Logical Domains systems:

primary# eeprom boot-file=
eeprom: OPROMSETOPT: Invalid argument
boot-file: invalid property.

The same command works correctly on non-Logical Domains systems, as shown in this example:

# eeprom boot-file=
# eeprom boot-file
boot-file: data not available.


Chapter 2

Using multiple nodes in an LDom environment

This chapter contains the following:

■ Clustering using cluster volume manager (CVM)

■ Installing Storage Foundation on multiple nodes in a LDom environment

■ CVM in the control domain for providing high availability


Clustering using cluster volume manager (CVM)

The Veritas Volume Manager cluster functionality (CVM) makes logical volumes and raw device applications accessible throughout a cluster.

For clustered nodes, CVM is placed in the control domain, and VxFS is placed in the guest domain.

The cluster functionality of Veritas Volume Manager (CVM) allows up to 16 nodes in a cluster to simultaneously access and manage a set of disks under VxVM control (VM disks). The same logical view of the disk configuration, and of any changes to it, is available on all the nodes. When the cluster functionality is enabled, all the nodes in the cluster can share VxVM objects.

See “CVM in the control domain for providing high availability” on page 36.

Installing Storage Foundation on multiple nodes in a LDom environment

To install Storage Foundation on multiple nodes in a Solaris Logical Domains environment, you must complete the following operations (the same as on a single node):

■ “Installing and configuring the LDom software and domains” on page 19

■ “Installing Storage Foundation in the control domain” on page 19

■ “Installing VxFS in the guest domain using pkgadd” on page 20

■ “Verifying the configuration” on page 21

Reconfiguring the clustering agents for CVM

For a Storage Foundation CVM configuration, the following additional configuration steps are necessary:

■ “Removing the vxfsckd resource” on page 34

■ “Creating CVMVolDg in a group” on page 35

Removing the vxfsckd resource

After configuring Storage Foundation and CVM, complete the following steps to remove the vxfsckd resource.


To remove the vxfsckd resource

1 Make the configuration writeable:

# haconf -makerw

2 Delete the resource:

# hares -delete vxfsckd

3 Make the configuration read-only:

# haconf -dump -makero

4 Stop VCS on all nodes in the cluster:

# hastop -all

5 Restart VCS. Run this command on each node in the cluster:

# hastart

Creating CVMVolDg in a group

The following procedure creates a CVMVolDg resource in a given service group.

To create CVMVolDg

1 Make the configuration writeable:

# haconf -makerw

2 Add the CVMVolDg resource:

# hares -add <name of resource> CVMVolDg <name of group>

3 Add the disk group name to the resource (sdg1 in this example):

# hares -modify <name of resource> CVMDiskGroup sdg1

4 Make the attribute local to the system:

# hares -local <name of resource> CVMActivation

5 Set the activation value for the resource. Repeat this step for each node in the cluster:

# hares -modify <name of resource> CVMActivation \
<activation value> -sys <nodename>

6 If you want to monitor volumes, complete this step; otherwise skip it. In a database environment, Symantec suggests the use of volume monitoring:

# hares -modify <name of resource> CVMVolume \
-add <name of volume>

7 Modify the resource so that a failure of this resource does not bring down the entire group:

# hares -modify <name of resource> Critical 0

8 Enable the resource (cvmvoldg1 in this example):

# hares -modify cvmvoldg1 Enabled 1


9 Make the configuration read-only:

# haconf -dump -makero

10 Verify the configuration:

# hacf -verify /etc/VRTSvcs/conf/config

11 The resource is now included in the main.cf file.
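
For reference, a hypothetical main.cf fragment that the preceding steps could produce. The resource, disk group, volume, and node names, and the sw (shared-write) activation value, are illustrative placeholders rather than output of the procedure:

CVMVolDg cvmvoldg1 (
        Critical = 0
        CVMDiskGroup = sdg1
        CVMVolume = { datavol1 }
        CVMActivation @node1 = sw
        CVMActivation @node2 = sw
        )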

CVM in the control domain for providing high availability

The main advantage of clusters is protection against hardware failure. Should the primary node fail or otherwise become unavailable, applications can continue to run by transferring their execution to standby nodes in the cluster.

CVM can be deployed in the control domains of multiple physical hosts running LDoms providing high availability of the control domain.

Figure 2-3 illustrates a CVM configuration.

Figure 2-3 CVM configuration in a Solaris LDom environment

If a control domain encounters a hardware or software failure causing it to shut down, all applications running in the guest LDoms on that host also are affected.


These applications can be failed over and restarted inside guests running on another active node of the cluster.

Caution: With the I/O domain reboot feature introduced in LDoms 1.0.1, when the control domain reboots, any I/O being done by the guest domain is queued up and resumes once the control domain comes back up. See the Logical Domains (LDoms) 1.0.1 Release Notes from Sun. Due to this behavior, applications running in the guests may resume or time out based on individual application settings. It is the user's responsibility to decide whether the application should be restarted on another guest (on the failed-over control domain). There is a potential data corruption scenario if the underlying shared volumes are accessed from both guests simultaneously.

Shared volumes and their snapshots can be used as a backing store for guest LDoms.

Note: The ability to take online snapshots is currently inhibited because the file system in the guest cannot coordinate with the VxVM drivers in the control domain. Make sure that the volume whose snapshot is being taken is closed before the snapshot is taken.

The following example procedure shows how snapshots of shared volumes are administered in such an environment.

Consider the following scenario:

■ datavol1 is a shared volume being used by guest LDom ldom1 and c0d1s0 is the front end for this volume visible from ldom1.

To take a snapshot of datavol1

1 Unmount any VxFS file system if it exists on c0d1s0.

2 Stop and unbind ldom1.

primary# ldm stop ldom1
primary# ldm unbind ldom1

This ensures that all the file system metadata is flushed down to the backend volume datavol1.

3 Create a snapshot of datavol1.

Refer to the “Creating and managing third-mirror break-off snapshots” section of the VxVM Administrator’s Guide for details.
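
A hedged sketch of this step, assuming datavol1 resides in a disk group named datadg and reusing the vxsnap commands shown in “Using VxVM snapshots for cloning LDom boot disks” (the disk group and snapshot volume names are placeholders):

primary# vxsnap -b -g datadg addmir datavol1
primary# vxsnap -g datadg snapwait datavol1 nmirror=1
primary# vxsnap -g datadg make source=datavol1/newvol=SNAP-datavol1/nmirror=1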

4 Once the snapshot operation is complete, rebind and restart ldom1.

primary# ldm bind ldom1
primary# ldm start ldom1


5 Once the LDom ldom1 boots, remount the VxFS file system back on c0d1s0.


Chapter 3

Configuring Logical Domains for high availability using Veritas Cluster Server

This chapter contains the following:

■ About Veritas Cluster Server in an LDom environment

■ Installing VCS in an LDom environment

■ About configuring VCS in an LDom environment

■ Configuration scenarios

■ Creating the service groups

■ Configuring VCS to manage applications in guest domains

■ About VCS agent for LDoms


About Veritas Cluster Server in an LDom environment

Use Veritas Cluster Server (VCS) and patch 128055-01 to ensure high availability for a Sun Microsystems Logical Domain (LDom). Use VCS to monitor LDoms, their storage, and switches. If any component (resource) goes down, VCS can move the LDom and all its dependent resources to a running node.

Installing VCS in an LDom environment

Install VCS in the control domain of a Solaris 10 system.

VCS requirements

For installation requirements, see “System requirements” on page 14.

VCS requires shared storage that is visible across all the nodes in the cluster. Configure each LDom on a node. The LDom’s boot device and application data must reside on shared storage.

VCS prerequisites

This document assumes a working knowledge of VCS.

Review the prerequisites in the following documents to help ensure a smooth VCS installation:

■ Veritas Cluster Server Release Notes

Find this in the cluster_server/release_notes directory of the product disc.

■ Veritas Cluster Server Installation Guide

Find this in the cluster_server/docs directory of the product disc.

Unless otherwise noted, all references to other documents refer to the Veritas Cluster Server documents version 5.0 for Solaris.


VCS limitations

The following limitations apply to using VCS in an LDom environment:

■ VCS does not support the use of alternate I/O domains as the use of alternate I/O domains can result in the loss of high availability.

■ This release of VCS does not support attaching raw physical disks or slices to LDoms. Such configurations may cause data corruption either during an LDom failover or if you try to manually bring up LDom on different systems.

For details on supported storage configurations, see “Storage configurations” on page 42.

■ Each LDom configured under VCS must have at least two VCPUs. With one VCPU, the control domain always registers 100% CPU utilization for the LDom. This is an LDom software issue.

Installation instructions for VCS

Install VCS in the primary control domain.

See “VCS prerequisites” on page 40.

About configuring VCS in an LDom environment

When you configure VCS in your LDom environment, VCS monitors the health of the LDom and its supporting components. The LDom agent monitors the LDom. If the agent detects that the LDom resource has failed, the agent moves the LDom resource and all the resources on which it depends to another physical node.

When you configure VCS in an LDom environment, you need some specific information about the LDom, network, and the storage devices that the LDom requires to run. You need to know the following information about your LDom:

■ The name of the LDom

■ The names of the primary network interfaces for each node

■ The virtual switch that the LDom uses

■ The name and type of storage that the LDom uses


Configuration scenarios

Figure 3-1 shows the basic dependencies for an LDom resource.

Figure 3-1 An LDom resource depends on storage and network resources

Network configuration

Use the NIC agent to monitor the primary network interface, whether it is virtual or physical. Use the interface name that the ifconfig command reports.

Figure 3-2 is an example of an LDom service group. The LDom resource requires both network (NIC) and storage (Volume and DiskGroup) resources.

For more information about the NIC agent, refer to the Veritas Cluster Server Bundled Agents Reference Guide.
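For reference, a NIC resource definition in the main.cf file might look like the following sketch. The resource and device names (ldg1-vsw0 and vsw0) follow the wizard examples later in this chapter and are assumptions for your environment:

NIC ldg1-vsw0 (
    Device = vsw0
    )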

Storage configurations

Depending on your storage configuration, use a combination of the Volume, DiskGroup, and Mount agents to monitor storage for LDoms.

Note that VCS in an LDom environment supports only volumes or flat files in volumes that are managed by VxVM.

Veritas Volume Manager (VxVM) exposed volumes

Veritas Volume Manager (VxVM) exposed volumes are the recommended storage solution for LDoms in a VCS environment. Use the Volume and DiskGroup agents to monitor a VxVM volume. VCS with VxVM provides superior protection for your highly available applications.

Figure 3-2 shows an LDom resource that depends on a Volume and DiskGroup resource.


Figure 3-2 The LDom resource can depend on many resources, or just the NIC, Volume, and DiskGroup resources, depending on the environment

For more information about the Volume and DiskGroup agents, refer to the Veritas Cluster Server Bundled Agents Reference Guide.

Image files

Use the Mount, Volume, and DiskGroup agents to monitor an image file.

Figure 3-3 shows how the Mount agent works with different storage resources.

Figure 3-3 The Mount resource in conjunction with different storage resources

For more information about the Mount agent, refer to the Veritas Cluster Server Bundled Agents Reference Guide.

Creating the service groups

The Veritas Cluster Server Service Group Configuration Wizard for Sun’s LDoms is a script-based tool that enables creation of an LDom-specific service group.

You can also create and manage service groups using Veritas Cluster Server Management Console, Cluster Manager (Java Console), or through the command line.


For complete information about using and managing service groups, either through CLI or GUI, refer to the Veritas Cluster Server User’s Guide.

Creating an LDom service group using the VCS Service Group Configuration Wizard for Sun’s LDoms

You can use the Veritas Cluster Server Service Group Configuration Wizard for Sun’s LDoms to quickly provision a service group for an LDom. You must create the LDom or edit its configuration appropriately on each node in the SystemList before you enable all its resources.

Prerequisites for using the wizard

The following prerequisites apply to using the wizard to provision a service group:

■ VCS must be running for the wizard to create the LDom service group.

■ The LDom must use only the primary domain services.

■ The LDom must not be configured in VCS.

■ The LDom must be running to determine its correct configuration.

■ The LDom must be attached only to volumes or flat files in volumes that are managed by VxVM.

■ The LDom resource type must exist in VCS.

■ All storage devices should be shared devices.

■ If flat files (image files) are attached to the LDom for storage, the mount points of their devices must not be in the /etc/vfstab file.

■ The service group and resource names that the wizard creates must not already exist in VCS.

Saving and using the generated command set executable file

At certain points when you use the wizard, you can choose to save the set of VCS commands for service group creation in an executable file. You can then edit the executable to customize service group creation for LDoms: modify resources, resource names, the service group name, attributes, and dependencies.

The wizard creates the file in the /tmp directory and gives it a .sh extension. After you edit the file for the configuration that you want, you can then execute the file from any running node in a VCS cluster. The executable creates a service group on the node where you run it with the values that you have specified. The file consists mainly of hagrp and hares commands.
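The exact contents of the file depend on your configuration. A minimal sketch of what the generated commands might look like, using the example names from this chapter, follows:

hagrp -add SG-ldg1
hagrp -modify SG-ldg1 SystemList sysA 0 sysB 1
hares -add ldom-ldg1 LDom SG-ldg1
hares -modify ldom-ldg1 LDomName ldg1
hares -add ldg1-vsw0 NIC SG-ldg1
hares -modify ldg1-vsw0 Device vsw0
hares -link ldom-ldg1 ldg1-vsw0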


Note that if VCS is not running when you run the wizard, the wizard saves the file for your later use.

For more information on using the hagrp and hares commands, refer to the Veritas Cluster Server User’s Guide.

Starting and using the wizard

Note that no “back” function exists for this wizard. If you make a mistake on a step, quit and restart the wizard.

To use the wizard

1 After patch installation, run the hawizard ldom command to start the wizard.

# hawizard ldom

2 Review the prerequisites.

3 If you have multiple LDoms configured on the node, the wizard prompts you to select the LDom that you want to configure. Select the number that corresponds to the LDom. Choose LDom 1 in the following example:

The following LDoms are configured on the localhost:
----------------------------------------------------
1) ldg1
2) ldg2
q) Quit
Choose an LDom to configure in VCS [1-2,q]: 1

Press the Enter key to continue.

4 Review the assigned resource names. Where needed, the wizard creates multiple resources. Once the groups and resources are created, their names cannot be changed unless you delete the groups or resources and recreate them. You can also stop VCS and edit the main.cf.

Assigned resource name                Device name

LDom: ldom-ldg1                       LDomName = ldg1

NIC: ldg1-vsw0                        Device = vsw0

DiskGroup: control-dg                 DiskGroup = control-dg

Volume: control-dg-v1                 DiskGroup = control-dg
                                      Volume = v1

Volume: control-dg-v2                 DiskGroup = control-dg
                                      Volume = v2

Volume: control-dg-v3                 DiskGroup = control-dg
                                      Volume = v3

Mount: mnt_mnt_control-dg_volume3     MountPoint = /mnt/control-dg/volume3
                                      BlockDevice = /dev/vx/dsk/control-dg/v3
                                      FSType = vxfs


5 Review the generated resource dependencies.

If the wizard discovers unfulfilled prerequisites, it gives you the option to save the commands. If you answer y, the wizard saves a file that contains the exact commands to re-create the LDom service group that you configured in step 2 through step 4, and then quits. You can edit and reuse this file.

See “Saving and using the generated command set executable file” on page 44.

6 When the wizard prompts you, enter y to configure the service group.

Do you want to configure the service group? [y,n,h,q]: y

If you answer n, you have the option to save a file that contains the exact commands to re-create the LDom service group that you configured in step 2 through step 5. You can edit and reuse this file.

See “Saving and using the generated command set executable file” on page 44.

7 You are now prompted to select the failover nodes for the service group (the system list):

VCS systems:
------------
1) sysA
2) sysB
a) All systems
q) Quit

Enter space separated VCS systems for SG-ldg1::SystemList [1-2,a,h,q]: 1 2

Select the nodes that correspond to the numbers presented. In the example above, sysA corresponds to 1 and sysB corresponds to 2. Press the a key to select all the systems. If you enter q, the wizard saves the command file in the /tmp directory. You can edit and reuse this file.

See “Saving and using the generated command set executable file” on page 44.

8 Press the c key to continue.



9 Review the output as the wizard builds the service group. The service group is offline until you complete the post-wizard tasks and bring it online.

Post-wizard tasks

After you have run the wizard, before you bring the service group online, return to any Mount resources and set their FsckOpt attributes.
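For example, assuming the Mount resource name from the earlier wizard output, you might set the attribute from the command line as follows (%-y is one common value; choose the fsck options that suit your environment):

# hares -modify mnt_mnt_control-dg_volume3 FsckOpt %-y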

When you want VCS to automatically create LDoms on nodes, you must set the value of the CfgFile attribute in the LDom agent. If you already have LDoms created on other nodes in the cluster, the LDoms must have the same LDom name.
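For example, one way to produce such an XML file is to export the constraints of the running LDom and then point the CfgFile attribute at the result. The file path shown here is only an example:

primary# ldm list-constraints -x ldg1 > /etc/VRTSvcs/conf/ldg1.xml
primary# hares -modify ldom-ldg1 CfgFile /etc/VRTSvcs/conf/ldg1.xml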

Make any other resource configuration changes as required.

Verifying a service group failover

Verify the configuration in different situations.

Using a switch command

Switch the LDom to another node in the cluster to make sure the service group fails over. If all the resources are properly configured, the service group shuts down on the first node and comes up on the second node.
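For example, assuming the service group name from the earlier wizard example, you can switch the group from sysA to sysB as follows:

# hagrp -switch SG-ldg1 -to sysB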

Other verification scenarios

In all of these verification scenarios, you are stopping or moving an LDom, or stopping a resource for that LDom. VCS should detect the failure, or the movement, and either fail over the affected LDom or take no action. The following list presents some quick testing scenarios:

■ From outside of VCS control, stop the LDom. VCS should fail the LDom over to the other node.

■ Boot the LDom through VCS by entering a hagrp -online command. Move the LDom to another node by shutting it down through VCS on the node where the LDom is running. Boot the LDom outside of VCS control on the other node—the service group comes online on that node.

Configuring VCS to manage applications in guest domains

You can configure VCS in the control domain and the guest domains to enable VCS in the control domain to manage applications in the guest domains.


You must install and configure VCS in at least two control domains to form a VCS cluster.

Follow these steps to configure VCS inside the control domain and guest domains:

■ “Creating a logical domain” on page 48

■ “Installing and configuring one-node VCS inside the logical domain” on page 48

■ “Installing and configuring VCS inside the control domain” on page 49

Creating a logical domain

You must perform the following steps to create a logical domain:

To create a logical domain

1 On one node, create a disk group (dg1) and a volume that are managed by Veritas Volume Manager (VxVM) on shared storage. The shared storage must be visible from all the VCS nodes.
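A sketch of this step follows; the disk access name (c1t1d0) and the volume size are placeholders for your environment, and the disk must already be initialized for VxVM use:

vxdg init dg1 dg1disk01=c1t1d0
vxassist -g dg1 make vol1 12g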

2 Use the following command to create a Veritas File System (VxFS) on top of the volume:

mkfs -F vxfs /dev/vx/rdsk/dg1/vol1

3 Mount the volume on a directory on one of the systems using the following command:

mount -F vxfs /dev/vx/dsk/dg1/vol1 /mnt/dg1/vol1

4 Create a 10 GB flat file in the volume using the following command:

mkfile 10G /mnt/dg1/vol1/bootfile

5 Create an LDom (ldom1) on each system with an identical configuration, using the flat file as the boot device.
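A sketch of the commands for this step follows; the VCPU count, the memory size, and the virtual disk and switch service names (primary-vds0 and primary-vsw0) are assumptions that must match the services configured in your control domain:

primary# ldm add-domain ldom1
primary# ldm add-vcpu 2 ldom1
primary# ldm add-memory 4G ldom1
primary# ldm add-vdsdev /mnt/dg1/vol1/bootfile vol1@primary-vds0
primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldom1
primary# ldm add-vnet vnet1 primary-vsw0 ldom1
primary# ldm bind ldom1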

6 Install the operating system in the LDom.

Installing and configuring one-node VCS inside the logical domain

Perform the following steps to install and configure one-node VCS inside the logical domain:

To install and configure one-node VCS inside the logical domain

1 Install one-node VCS (no kernel components required) in the guest domain.

2 Start the VCS engine.
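For example, you can start the engine in single-node mode as follows:

# hastart -onenode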

3 Configure a VCS service group (lsg1) for the application. The ManualOps attribute of the service group must remain set to true, the default value.


4 Add a VCS user (lsg1-admin) with the minimum privilege of group operator for the VCS service group (lsg1).
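For example, assuming the user name shown above, you might add the user and grant the privilege from the command line as follows. The hauser -add command prompts you for the new user’s password.

# haconf -makerw
# hauser -add lsg1-admin
# hagrp -modify lsg1 Operators -add lsg1-admin
# haconf -dump -makero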

Refer to the Veritas Cluster Server Installation Guide to perform a single-node VCS installation in the logical domain.

Installing and configuring VCS inside the control domain

After you install VCS in the control domain, you must create separate service groups for the RemoteGroup resource and the LDom resource with an online global firm dependency.

Note: If you create the RemoteGroup resource as part of the LDom service group, the RemoteGroup resource state remains UNKNOWN while the LDom is down. As a result, VCS does not probe the service group and cannot bring the LDom online. The online global firm dependency between the service groups allows VCS to fail over a faulted child LDom service group independently of the state of the parent RemoteGroup service group.

Perform the following steps to install and configure VCS inside the control domain:

To install and configure VCS inside the control domain

1 Install and configure VCS in the control domain.

The process for installing VCS in the control domain is very similar to the regular installation of VCS. However, you must specify the name of the control domain for the name of the host where you want to install VCS.

Refer to Veritas Cluster Server Installation Guide to install VCS in control domains.

2 Verify that the VCS engine is running.

3 Configure a RemoteGroup resource (rsg1) for the VCS service group (lsg1) that was configured in step 3 on page 48.

4 Create an LDom service group (csg1).

5 Configure a NIC resource (vsw0) for the public virtual switch of ldom1 that caters to the guest domain.

6 Configure Mount (bootmnt), DiskGroup (dg1), and Volume (vol1) resources for the boot device.

7 Configure an LDom resource (ldom1) for the guest domain.

8 Set the value of the ToleranceLimit attribute of the resource or type to 1.


9 Set the value of the OfflineWaitLimit attribute of the resource or type to 1.
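For example, you can set these values at the resource type level as follows (alternatively, override them on the individual resource with hares -override before modifying them):

# hatype -modify LDom ToleranceLimit 1
# hatype -modify LDom OfflineWaitLimit 1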

10 Create the dependencies between the resources as shown in Figure 3-4.
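A sketch of the corresponding commands follows, using the resource and group names from this procedure; the RemoteGroup service group is assumed to be named rsg1, as shown in Figure 3-4:

# hares -link ldom1 vsw0
# hares -link ldom1 bootmnt
# hares -link bootmnt vol1
# hares -link vol1 dg1
# hagrp -link rsg1 csg1 online global firm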

Figure 3-4 Resource dependency diagram

RemoteGroup resource definition

The resource definition for the RemoteGroup resource is as follows:

RemoteGroup rsg1 (
    GroupName = lsg1
    IpAddress = <IP address of ldom1>
    ControlMode = OnOff
    Username = lsg1-admin
    Password = <lsg1-admin's password>
    )

Figure 3-4 shows the control domain and the logical domain (ldom1). In the control domain, the LDom service group (csg1) contains the LDom resource, which depends on the NIC, Mount, Volume, and DiskGroup resources. The RemoteGroup service group (rsg1) has an online global firm dependency on the LDom service group (csg1), and its RemoteGroup resource monitors the application service group (lsg1) that runs inside the logical domain.


About VCS agent for LDoms

Use the LDom agent to monitor and manage LDoms.

For information on the Mount, Volume, and DiskGroup agents, refer to the Veritas Cluster Server Bundled Agents Reference Guide.

LDom agent

The LDom agent brings LDoms online, takes them offline, and monitors them.

Limitations

The LDom agent requires at least two VCPUs per LDom.

Dependencies

The LDom resource depends on the NIC resource. It can also depend on the Volume or Mount resources, depending on the environment.

Network resources

Use the NIC agent to monitor the network adapter for the LDom.

Storage resources

■ Veritas Volume Manager (VxVM) exposed volumes

Use the Volume and DiskGroup agents to monitor a VxVM volume.

■ Image file

Use the Mount, Volume, and DiskGroup agents to monitor an image file.

■ Primary network interface

Use the NIC agent to monitor the primary network interface, whether it is virtual or physical.

Agent functions

Online Starts the LDom.

Offline Stops the LDom.

Monitor Monitors the status of the LDom.

Clean Stops the LDom forcefully.


State definitions

ONLINE     Indicates that the LDom is up and running.

OFFLINE    Indicates that the LDom is down.

FAULTED    Indicates that the LDom is down when VCS expects it to be up and running. 100% CPU utilization of the LDom is detected as a fault.

UNKNOWN    Indicates that the agent cannot determine the LDom’s state. A configuration problem likely exists in the VCS resource or the LDom.

Attributes

Table 3-1 Required attributes

Required attribute    Description

LDomName              The name of the LDom that you want to monitor.
                      Type-dimension: string-scalar
                      Default: n/a
                      Example: ldom1

Table 3-2 Optional attributes

Optional attribute    Description

CfgFile               The absolute location of the XML file that contains the LDom configuration. The Online agent function uses this file to create LDoms as necessary. Refer to the ldm(1M) man page for information on this file.
                      Type-dimension: string-scalar
                      Default: n/a

NumCPU                The number of virtual CPUs that you want to attach to the LDom when it is online. If you set this to a positive value, the agent detaches all of the VCPUs when the service group goes offline. Do not reset this value to zero after setting it to 1.
                      Type-dimension: integer-scalar
                      Default: 0


Resource type definition

type LDom (
    static keylist RegList = { NumCPU }
    static str AgentFile = "/opt/VRTSvcs/bin/Script50Agent"
    static str ArgList[] = { LDomName, CfgFile, NumCPU }
    int NumCPU
    str LDomName
    str CfgFile
)

Sample configurations

LDom ldg1 (
    LDomName = ldg1
    )
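A second sketch showing the optional attributes in use follows; the XML file path is a hypothetical example:

LDom ldg1 (
    LDomName = ldg1
    CfgFile = "/etc/VRTSvcs/conf/ldg1.xml"
    NumCPU = 4
    )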

