8/2/2019 Oracle Solaris Virtualization
1/31
Solaris Virtualization (Zones/Containers and LDOMs)

Solaris 10 Virtualization (Zones/Containers and LDOMs) - Building and Maintaining (v1.1)

This document covers steps in managing Solaris virtualized environments (Zones/Containers and LDOMs).

Jeronimo M. Mulato
12/8/2010
Table of Contents

ZFS - An Introduction
    zpool - Create a regular raid-z zpool named pool1 with 3 disks
    zpool - Create a mirrored zpool named pool1 with 4 disks
    zpool - Adding a mirror to zfs storage pool pool1 with 2 disks
    zpool - List available storage pools
    zpool - List all pool properties for pool1
    zpool - Destroy a zfs storage pool
    zpool - Export a zfs storage pool, pool1
    zpool - Import a zfs storage pool, pool1
    zpool - Upgrading a zfs storage pool to the current version
    zpool - Managing/Adding hot spares
    zpool - Create a zfs storage pool with mirrored separate intent logs
    zpool - Adding cache devices to a zfs storage pool
    zpool - Remove a mirrored device
    zpool - Recovering a Faulted zfs pool
    zpool - Reverting a zpool disk back to a regular disk
    zfs - hide from df command
    zfs - mount to a pre-defined mount point (zfs managed)
    zfs - mount to a pre-defined mount point (legacy managed)
    zfs - set limits/quota on a zfs filesystem
    zfs - destroy a zfs filesystem
    zfs - making a snapshot
    zfs - rolling back
    zfs - removing a snapshot
ZONES (aka Containers)
    Easy Steps in creating a Zone
    Recommendations on Zone Build
    Sample Zone Build
        Steps
    Zone Cloning (Magic)
    Zone Gotchas
    Zones Resource (Memory Capping)
        Viewing capped swap memory
        Viewing capped locked memory
        Viewing zones capped memory usage
        Change max-swap resource dynamically
        Change max-locked-memory resource dynamically
        Change physical-memory capped resource dynamically
        To change capped memory resource permanently
    Zone Resource (Storage Devices|Network)
        Adding resource
        Removing resource
Logical Domains (LDOMs)
    Prepping up for Primary LDOM (Control Domain)
        Installing Firmware patch on the T5240 System Controller
    Prepping up for Guest LDOMs
        Assumptions/Recommendations
    Creating Guest LDOMs
        Steps in Creating Guest LDOMs
    Sample LDOM build (after primary or control domain has been created)
        Steps
    Customizing guest LDOM
    Assigning specific virtual console to Guest LDOM
    Removing Guest LDOM
    Updating LDOM Software
        Precautions before LDOM Software Upgrade
        Steps on Upgrading LDOM software
    LDOM Gotchas
        Housekeeping LDOM guests
        T5240 Additional Patches
        Manual Primary LDOM creation
        Manual Guest LDOM creation
Appendix 1
Appendix 2
This document is intended to provide support information for managing Solaris virtualized environments, or Virtual Machines (VMs), whether zones/containers or LDOMs. For the purposes of this document, zones and containers refer to the same technology and implementation and will simply be called zones. See below for a basic comparison of LDOMs and Zones.

Description                    LDOMs                              Zones
Hardware-specific              Yes. Currently runs only on        No. As long as Solaris 10 runs,
implementation                 T-Series servers.                  you can implement zones.
Full hardware virtualization   Yes. Hardware resources are        No.
                               assigned specifically to an LDOM
                               and are totally isolated.
Runs zones                     Yes. An LDOM can run zones.        No.

Solaris LDOMs are virtualized environments assisted by hardware. Currently, LDOMs run only on Sun/Oracle SPARC T-Series servers (servers having the SPARC T-series CPU chip), whereas zones (or containers) are not restricted to specific hardware.

Prior to delving into the topics of zones and LDOMs, an introduction to ZFS and its facilities is presented. Many of the examples make use of ZFS in both virtualization implementations.
ZFS - An Introduction

ZFS is a combined file system and logical volume manager designed by Sun Microsystems (now Oracle). It features support for high storage capacities and integrates filesystem and volume management concepts such as snapshots, clones, raid-z, NFS, SMBFS, continuous integrity checking, and automatic repair. Discussed below are the ways of managing and using ZFS.

Creating a zpool is the first step in making and using ZFS. Prior to making the pool, you have to decide which type of pool to create: raid 0 (concatenated/striped), raid 1 (mirrored), raidz (single-parity raid), raidz2 (double-parity raid), or raidz3 (triple-parity raid).

To create a zpool, just enter the command:

zpool create <pool-name> <vdev-specification>

To illustrate creating zpools, examples are given below.
zpool Create a regular raid-z zpool named pool1 with 3 disks
zpool create pool1 raidz c0t0d0 c0t1d0 c0t2d0
If you want to create raid-z2, you need at least 3 drives. For raid-z3, it is only advisable with 5 drives or more.
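As a quick sanity check when sizing these pools, the usable capacity of a raidz vdev is roughly (number of disks - parity disks) x disk size. A minimal sketch, using a helper name of our own (this is not a ZFS command):

```shell
# Rough usable capacity of a raidz vdev, ignoring metadata overhead.
# Assumes all member disks are the same size.
raidz_usable() {
  disks=$1; parity=$2; disk_gb=$3
  echo $(( (disks - parity) * disk_gb ))
}

raidz_usable 3 1 500   # 3-disk raidz1 of 500gb disks -> 1000
raidz_usable 5 3 500   # 5-disk raidz3 of 500gb disks -> 1000
```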
zpool Create a mirrored zpool named pool1 with 4 disks
zpool create pool1 mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
zpool Adding mirror to zfs storage pool, pool1 with 2 disks
zpool add pool1 mirror c0t3d0 c0t4d0
zpool List available storage pools
zpool list
zpool List all pool properties for pool1
zpool get all pool1
zpool Destroy a zfs storage pool
zpool destroy -f pool1
zpool Export a zfs storage pool, pool1
zpool export pool1
zpool Import a zfs storage pool, pool1
zpool import pool1
zpool Upgrading zfs storage pool to current version
zpool upgrade -a
zpool Managing/Adding hot spares
zpool create pool1 raidz c0t0d0 c0t1d0 c0t2d0 spare c0t3d0
zpool replace pool1 c0t0d0 c0t3d0 (this replaces the c0t0d0 with c0t3d0 in pool1)
zpool remove pool1 c0t2d0 (this removes c0t2d0 from pool1)
zpool Create zfs storage pool with mirrored separate intent logs
zpool create pool1 mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 log mirror c0t4d0 c0t5d0
zpool Adding cache devices to zfs storage pool
zpool add pool1 cache c1t0d0 c1t1d0
Normally the cache devices you add are fast SSD drives. They extend the in-memory ARC cache onto the fast SSD devices. You can monitor their use with:
zpool iostat -v pool1 5
zpool Remove a mirrored device
zpool remove pool1 mirror-1
zpool Recovering a Faulted zfs pool
If the pool is faulted but the zpool status command reports it as recoverable, issue the following command:
zpool clear -F pool1
If the pool configuration was not cached, use zpool import with the recovery-mode flag as follows:
zpool import -F pool1
zpool - reverting a zpool disk back to a regular disk.
format -e
The -e option allows extended format operations on the disk. Choose the disk to revert and issue the label command to relabel it to SMI format. NOTE: zpool labels disks in EFI format.
zfs - hide from df command
zfs set canmount=off <pool>/<filesystem>
zfs set canmount=on <pool>/<filesystem>    ** reverse
zfs mount to a pre-defined mount point (zfs managed)
zfs get mounted <pool>/<filesystem>
zfs set mountpoint=<mountpoint> <pool>/<filesystem>
zfs mount to a pre-defined mount point (legacy managed)
zfs create pool1/autonomy
mkdir -p /apps/autonomy
zfs set mountpoint=legacy pool1/autonomy
mount -F zfs pool1/autonomy /apps/autonomy
NOTE: You have to manually edit /etc/vfstab to enable auto mounting.
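For reference, a legacy-managed dataset needs a vfstab line like the one below. The sketch writes the entry to a scratch file rather than the real /etc/vfstab; the dataset and mount point are the ones from the example above:

```shell
# Write the legacy mount entry to a demo file; the real target is /etc/vfstab.
# Fields: device-to-mount, device-to-fsck, mount-point, fs-type, fsck-pass,
# mount-at-boot, options.
cat > /tmp/vfstab.demo <<'EOF'
pool1/autonomy  -  /apps/autonomy  zfs  -  yes  -
EOF
grep autonomy /tmp/vfstab.demo
```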
zfs set limits/quota on a zfs filesystem
zfs set quota=<n>m|<n>g <pool>/<filesystem>
zfs destroy a zfs filesystem
zfs destroy <pool>/<filesystem>
zfs making snapshot
zfs snapshot <pool>/<filesystem>@<snapshot-name>
zfs - rolling back
zfs list
zfs list -H -t snapshot
zfs rollback <pool>/<filesystem>@<snapshot-name>
zfs - removing snapshot
zfs list -H -t snapshot
zfs destroy <pool>/<filesystem>@<snapshot-name>
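A common practice is to date-stamp snapshot names so rollback targets are easy to identify. A small sketch (the helper name is ours, not a ZFS command):

```shell
# Build a dated snapshot name for a dataset, e.g. pool1/autonomy@backup-20101208.
snapname() {
  echo "$1@backup-$(date +%Y%m%d)"
}

snapname pool1/autonomy
# The result would then be passed to, e.g.: zfs snapshot "$(snapname pool1/autonomy)"
```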
ZONES (aka Containers)

Zones are soft virtual environments which are lighter weight than a full VM. With zones, you're not running a different OS but a copy of the same OS within the constructs of a VM. Resources such as CPU and memory are all shared with the global zone. Resource restrictions may be imposed on the local zones using mechanisms such as fair share scheduling and memory capping. There are two implementations of zones: one is called sparse root and the other is called whole root.

With the sparse root implementation, four directories in the global zone's root filesystem are inherited by the local zones: /lib, /platform, /sbin, and /usr. In this model, all packages that are installed on the global zone are made available to the sparse root zone.

With the whole root implementation, a copy of the global zone is made for the local zone. Even the package database is made local to the zone. Hence, after the zone creation, you can install additional packages on the global zone or in the local whole root zone independently of each other.

The choice of whole root versus sparse root depends on the resource-control and administrative trade-off. Whole root maximizes administrative control (independence and isolation) at the cost of more disk space, while sparse root optimizes the efficient sharing of executables and resources with a smaller disk footprint at the cost of administrative independence.
Easy Steps in creating a Zone
1. Create a zone configuration template.
global-zone# vi <template-file>
2. Define the zone configuration
global-zone# zonecfg -z <zonename> -f <template-file>
3. Install the non-global zone
global-zone# zoneadm -z <zonename> install
global-zone# zoneadm -z <zonename> ready
4. Create sysidcfg for system and naming service convention (this is optional)
global-zone# cd <zonepath>/root/etc
global-zone# vi sysidcfg
5. Implement resource management policies (this is optional). Note that you can incorporate this in the configuration template.
6. Boot the non-global zone
global-zone# zoneadm -z <zonename> boot
7. Log in to the non-global zone
global-zone# zlogin -C <zonename> (To exit the console, use ~.)
8. Implement SOE/SSM implementation
Recommendations on Zone Build
1. If you have separate disk space you want to assign to the zones implementation, it is advisable to create the zones on a ZFS filesystem.
2. Always use the Fair Share Scheduler when configuring zones. This allows you to assign cpu-shares to a zone and prevents a zone from monopolizing the whole CPU resource.
3. Always use the memory capping daemon. This allows capping memory, swap, and locked memory for zones.
CAUTION: DO NOT SET memory capping on the global zone as this may impact system availability.
4. Use the given template (see the Appendix) when making the zone.
Sample Zone Build
In this example we will create two zones with the following configuration:
- The global zone has 4 CPUs, 8gb memory and 32gb swap. Drive c1t2d0 will be used for the ZFS pool.
- Each non-global zone will have a boot drive of size 30gb.
- Non-global zone1 will have an IP of 169.185.220.15 and will share/use eri0 as its network interface.
- Non-global zone2 will have an IP of 169.185.220.16 and will share/use eri0 as its network interface.
- Use the Fair Share Scheduler (FSS) to allow assigning CPU shares to each zone. This is to safeguard and prevent a zone from monopolizing the entire CPU resource.
- The global zone will have 40 CPU shares.
- The non-global zones will each have 30 CPU shares and will also run with FSS as the scheduler.
- Memory capping will be implemented for each non-global zone (2gb memory and 8gb swap).
Steps:
1. Update the system's default scheduler to the Fair Share Scheduler, FSS.
Global-zone# dispadmin -d FSS
2. Set all processes on the running global zone to run under FSS.
Global-zone# priocntl -s -c FSS -i all
3. Start the rcap daemon to enable memory capping. This can be done using the command:
Global-zone# svcadm enable rcap
4. Set the global zone cpu-shares as follows:
Global-zone# zonecfg -z global
zonecfg:global> set cpu-shares=40
zonecfg:global> commit
zonecfg:global> exit
The above sets the global zone cpu-shares. However, to take effect, it would require a reboot. To bypass the reboot, issue the following command:
Global-zone# prctl -n zone.cpu-shares -v 40 -r -i zone global
5. Create a ZFS pool/filesystem for zonepath assignment: zonespool/zone1 for zone1, and zonespool/zone2 for zone2. Give each zfs filesystem a quota of 30gb.
Global-zone# zpool create -f zonespool c1t2d0
Global-zone# zfs create zonespool/zone1
Global-zone# zfs create zonespool/zone2
Global-zone# zfs set quota=30g zonespool/zone1
Global-zone# zfs set quota=30g zonespool/zone2
6. Create the zone template for both zone1 and zone2. For zone1, the template,
/var/tmp/zone1.cfg, will contain the following:
create -b
set zonepath=/zonespool/zone1
set autoboot=true
set scheduling-class=FSS
add net
set address=169.185.220.15
set defrouter=169.185.220.1
set physical=eri0
end
add rctl
set name=zone.cpu-shares
set value=(priv=privileged,limit=20,action=none)
end
add capped-memory
set physical=2g
set swap=8g
end
For zone2, the template, /var/tmp/zone2.cfg will contain the following:
create -b
set zonepath=/zonespool/zone2
set autoboot=true
set scheduling-class=FSS
add net
set address=169.185.220.16
set defrouter=169.185.220.1
set physical=eri0
end
add rctl
set name=zone.cpu-shares
set value=(priv=privileged,limit=20,action=none)
end
add capped-memory
set physical=2g
set swap=8g
end
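The zone1 and zone2 templates above differ only in the zonepath and IP address, so they can be generated rather than hand-edited. A sketch (the gen_zonecfg helper name is ours):

```shell
# Emit a zone template to /tmp/<zonename>.cfg; only the zonepath and
# address vary between zones in this build.
gen_zonecfg() {
  zone=$1; ip=$2
  cat > /tmp/"$zone".cfg <<EOF
create -b
set zonepath=/zonespool/$zone
set autoboot=true
set scheduling-class=FSS
add net
set address=$ip
set defrouter=169.185.220.1
set physical=eri0
end
add rctl
set name=zone.cpu-shares
set value=(priv=privileged,limit=20,action=none)
end
add capped-memory
set physical=2g
set swap=8g
end
EOF
}

gen_zonecfg zone1 169.185.220.15
gen_zonecfg zone2 169.185.220.16
```

Each generated file is then fed to zonecfg exactly as in the next step, e.g. zonecfg -z zone1 -f /tmp/zone1.cfg.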
Configure the zones using the following commands:
zonecfg -z zone1 -f /var/tmp/zone1.cfg
zonecfg -z zone2 -f /var/tmp/zone2.cfg
To check/verify the zones' configuration, use the following commands:
zoneadm list -cv
echo info | zonecfg -z zone1
echo info | zonecfg -z zone2
7. Start the zone install. This will copy the necessary files to the zones' location.
zoneadm -z zone1 install
zoneadm -z zone2 install
8. Upon confirmation of the zones' install, issue the commands to make the zones ready for use:
zoneadm -z zone1 ready
zoneadm -z zone2 ready
9. Create a sysidcfg template file for each zone (/var/tmp/sysidcfg.zone1 and /var/tmp/sysidcfg.zone2) and copy it to the local zone root location as follows:
cp /var/tmp/sysidcfg.zone1 /zonespool/zone1/root/etc
cp /var/tmp/sysidcfg.zone2 /zonespool/zone2/root/etc
10. Boot the zones using the following commands:
zoneadm -z zone1 boot
zoneadm -z zone2 boot
11. Log in to the zones and start your environment customization.
Zone Cloning (Magic)
After you've completed one zone install and customization, you don't need to redo the same steps. If you want to create a copy of the same virtual machine (another local zone), you can use cloning. To clone a zone, follow the steps below:
1. Create the new zone target location:
Global-zone# zfs create zonespool/zone3
Global-zone# zfs set quota=30g zonespool/zone3
2. Create the new zone template and configure the new zone. As an example, well create zone3
with the following template -> /var/tmp/zone3.cfg, to contain the following:
create -b
set zonepath=/zonespool/zone3
set autoboot=true
set scheduling-class=FSS
add net
set address=169.185.220.17
set defrouter=169.185.220.1
set physical=eri0
end
add rctl
set name=zone.cpu-shares
set value=(priv=privileged,limit=20,action=none)
end
add capped-memory
set physical=2g
set swap=8g
end
Configure the new zone using the following command:
zonecfg -z zone3 -f /var/tmp/zone3.cfg
3. Halt the original (source) zone to clone.
Global-zone# zlogin zone1 halt
4. Start the cloning as follows:
Global-zone# zoneadm -z zone3 clone zone1
5. Create a sysidcfg template file for the new zone (/var/tmp/sysidcfg.zone3) and copy it to the local zone root location as follows:
cp /var/tmp/sysidcfg.zone3 /zonespool/zone3/root/etc
6. Start the zones.
Global-zone# zoneadm -z zone1 boot
Global-zone# zoneadm -z zone3 boot
Zone Gotchas
1. If you are configuring multiple zones, it is recommended to allocate the CPU shares as follows:
Global = 20
Non-global zones = 80/(number of non-global zones)
2. You may decide to mirror a ZFS pool that was created with a single drive. In the example given earlier, we already have a ZFS pool named zonespool with c1t2d0 as a member. To make this a mirror, simply issue the following command:
zpool attach zonespool c1t2d0 c1t3d0
3. If IPMP is configured on a physical interface in the global zone, and one of the interfaces (the primary active) is used as the shared network interface of a local/non-global zone, then IPMP is automatically picked up in the non-global zone, meaning you do not need to configure IPMP in the non-global zone.
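The allocation rule in gotcha 1 can be sketched as a quick calculation (the helper name is ours):

```shell
# Global zone keeps 20 shares; the remaining 80 are split evenly
# among the non-global zones.
zone_shares() {
  echo $(( 80 / $1 ))
}

zone_shares 4   # 4 non-global zones -> 20 shares each
zone_shares 2   # 2 non-global zones -> 40 shares each
```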
Zones Resource (Memory Capping)
Viewing capped swap memory
/bin/prctl -n zone.max-swap `pgrep -z <zonename> init`
Viewing capped locked memory
/bin/prctl -n zone.max-locked-memory `pgrep -z <zonename> init`
Viewing zones capped memory usage
rcapstat -z 1 1
Change max-swap resource dynamically
prctl -n zone.max-swap -r -v 200m `pgrep -z <zonename> init`
Change max-locked-memory resource dynamically
prctl -n zone.max-locked-memory -r -v 200m `pgrep -z <zonename> init`
Change physical-memory capped resource dynamically
rcapadm -z <zonename> -m 100m
To change capped memory resource permanently
zonecfg:zone> select capped-memory
zonecfg:zone:capped-memory> set physical=100m
zonecfg:zone:capped-memory> end
zonecfg:zone> commit
Zone Resource (Storage Devices|Network)
Adding resource
Dataset (ZFS filesystems are treated as datasets):
If the primary goal is to delegate the administration of storage to a zone, then ZFS supports adding datasets to a non-global zone. Unlike adding a filesystem, this makes the ZFS filesystem itself visible within the configured zone. The zone administrator can set file system properties as well as create children. In addition, the zone administrator can take snapshots, create clones, and otherwise control the entire file system hierarchy. To use this facility, use the following commands:
zonecfg:zone> add dataset
zonecfg:zone:dataset> set name=pool/filesys
zonecfg:zone:dataset> end
zonecfg:zone> verify
zonecfg:zone> commit
zonecfg:zone> end
ZFS filesystem:
Use this if the goal is solely to share space with the global zone.
zonecfg:zone> add fs
zonecfg:zone:fs> set type=zfs
zonecfg:zone:fs> set special=<pool>/<filesystem>
zonecfg:zone:fs> set dir=<mountpoint>
zonecfg:zone:fs> end
zonecfg:zone> verify
zonecfg:zone> commit
zonecfg:zone> end
Fs:
zonecfg:zone> add fs
zonecfg:zone:fs> set dir=<mountpoint>
zonecfg:zone:fs> set special=/dev/dsk/c#t#d#s#
zonecfg:zone:fs> set raw=/dev/rdsk/c#t#d#s#
zonecfg:zone:fs> set type=ufs
zonecfg:zone:fs> add options logging
zonecfg:zone:fs> end
zonecfg:zone> verify
zonecfg:zone> commit
zonecfg:zone> end
Inherit-pkg-dir:
zonecfg:zone> add inherit-pkg-dir
zonecfg:zone:inherit-pkg-dir> set dir=<directory>
zonecfg:zone:inherit-pkg-dir> end
zonecfg:zone> verify
zonecfg:zone> commit
zonecfg:zone> end
Export home directory from global zone to local zone:
zonecfg:zone> add fs
zonecfg:zone:fs> set dir=/export/home
zonecfg:zone:fs> set special=/export/home
zonecfg:zone:fs> set type=lofs
zonecfg:zone:fs> set options=nodevices
zonecfg:zone:fs> end
zonecfg:zone> verify
zonecfg:zone> commit
zonecfg:zone> exit
net:
zonecfg:zone> add net
zonecfg:zone:net> set physical=<interface>
zonecfg:zone:net> set address=<ip-address>
zonecfg:zone:net> end
zonecfg:zone> verify
zonecfg:zone> commit
zonecfg:zone> exit
Removing resource
Storage:
zonecfg:zone> remove fs dir=<mountpoint>
zonecfg:zone> verify
zonecfg:zone> commit
zonecfg:zone> exit
Net:
zonecfg:zone> remove net physical=<interface>
zonecfg:zone> verify
zonecfg:zone> commit
zonecfg:zone> exit
Logical Domains (LDOMs)

LDOMs are VMs assisted by hardware in which you can run a separate operating system in each domain. Unlike zones, LDOMs operate independently of each other, except for the control and resource domains, which propagate and control which resources are given or assigned to the guest domains. Each LDOM is assigned its own resources such as CPU, memory, network, and disk space. Note that, unlike zones, LDOMs are totally isolated from sharing CPU and memory.
Prepping up for Primary LDOM (Control Domain)
1. Make sure that you're on the latest Solaris 10 LDOM patch (138888-02).
2. Install the LDOM software package.
3. Make sure you have the correct firmware. This can be done by running showhost from the ALOM session, or the following from the ILOM session:
-> show /HOST
Patch       Platform   Firmware Version   LDOM Version
139434-01   T2000      6.7.0              1.1
139439-02   T5x20      7.2.1.b            1.1
139444-01   T5240      7.2.0              1.1
139446-01   T5440      7.2.0              1.1
If you do not have the correct firmware, you need to apply the firmware patch.
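The table above maps platform to firmware patch. As a convenience, it can be encoded as a small lookup; the function name is ours, and the numbers are only those listed above for LDOM 1.1:

```shell
# Look up the LDOM 1.1 firmware patch for a given T-series platform
# (values taken from the table above).
fw_patch() {
  case "$1" in
    T2000) echo 139434-01 ;;
    T5x20) echo 139439-02 ;;
    T5240) echo 139444-01 ;;
    T5440) echo 139446-01 ;;
    *)     echo "unknown platform" ;;
  esac
}

fw_patch T5240   # -> 139444-01
```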
Installing Firmware patch on the T5240 System Controller
a. Copy the patch to the server (control domain) to be patched. Unpack the patch in any directory on the control domain, then copy the following files to /tmp:
sysfwdownload
firmware*.pkg
NOTE: The firmware package file name is different for individual platforms.
b. Afterwards, execute the following commands:
cd /tmp
./sysfwdownload ./<firmware-package>.pkg
Shutdown the domain controller using the commands:
shutdown -y -i0 -g0
or
halt
c. Once at the ok prompt, get to the system controller (ALOM or ILOM) using the key combination #.
d. Once on the ILOM, you need to log in as admin to get to the ALOM prompt (sc>). To get there, first log in as root, which gives you the standard ILOM prompt (->). Then issue the following command to create an admin user, which provides the default ALOM prompt:
-> create /SP/users/admin role=Administrator cli_mode=alom
Creating user...
Enter new password: *******
Enter new password again: *******
Created /SP/users/admin
-> logout
Then log in as admin to get the ALOM prompt.
e. Issue the following command:
sc> poweroff
f. Ensure the keyswitch is set to NORMAL.
sc> setkeyswitch -y normal
g. Perform the flash update:
sc> flashupdate -s 127.0.0.1
h. After the flash update, be sure to reset the ALOM/ILOM using the command:
sc> resetsc
i. Log back in to the ALOM prompt and issue the command:
sc> poweron
j. Log out and log back in to the ILOM prompt, then get back to the domain controller console as usual:
-> start /SP/console
Once it's up, you're now ready to LDOM!
4. Create the primary domain (also known as the control domain). This domain will be controlling
and manipulating all other domains. To do this, issue the command:
# Create Virtual Disk Server (vds)
ldm add-vds primary-vds0 primary
# Create Virtual Console concentrator server (vcc)
ldm add-vcc port-range=5000-5100 primary-vcc0 primary
# Create Virtual Switch server (vsw)
ldm add-vsw net-dev=nxge0 primary-vsw0 primary
# Assign 8 virtual CPU
ldm set-vcpu 8 primary
# Assign 0 MAU (Math Unit)
ldm set-mau 0 primary
# Assign 4gb memory
ldm set-memory 4096m primary
# Assign the ldom config to the service processor
ldm add-spconfig initial
The above commands, which can be put into a script, assign 8 vCPU (virtual CPUs), 0 MAU (math
units), and 4 GB of memory to the primary domain. They also use the first instance of the network
interface (assumed nxge0) as the primary network interface for use by the primary domain and the
guest domains. NOTE that you can change this later as needed.
After the command execution, you need to reboot the box to boot in LDOM mode.
shutdown -y -i6 -g0
5. When the system comes back up, it will be configured with LDOM mode enabled. Confirm using
the command:
#/opt/SUNWldm/bin/ldm list
It should show the initial LDOM, named primary, indicating the control domain.
Prepping up for Guest LDOMs
The above command set basically sets up the initial LDOM (the primary LDOM). It's usually set up
with the following basic services:
- VCC (virtual console concentrator)
- VSW (virtual switch)
- and VDS (virtual disk service)
After which you would need to customize these services based on the requirements of the guest
LDOM(s).
Assumptions/Recommendations:
- 4 network connections (make sure that physical networks are connected on two different
network interfaces and separate cards; say one internal and one external).
Make sure that a virtual switch has been set up for each physical network connection. To
create a virtual switch, issue the following command:
/opt/SUNWldm/bin/ldm add-vsw net-dev=<net-device> primary-vswX primary
After the creation of the virtual switches, you need to remove the original entries for the
physical interfaces and put in the virtual interfaces instead. You simply map the file entries and
rename the physical network with the virtual switch network. As an example, if nxge0 was
used for primary-vsw0, then you need to rename the file hostname.nxge0 to
hostname.vsw0. Until the virtual switches have been made the active interfaces (verify with
ifconfig -a), communication between the control domain and the guest domains is not possible.
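The rename step above can be sketched as a dry run that only prints the commands to execute (the interface names are assumed for illustration; adjust to your NICs):

```shell
#!/bin/sh
# Dry-run sketch (interface names assumed): print the hostname-file renames
# that migrate each physical NIC's configuration to its virtual switch.
rename_cmds() {
  i=0
  for nic in nxge0 nxge1 nxge2 nxge3; do
    echo "mv /etc/hostname.${nic} /etc/hostname.vsw${i}"
    i=$((i + 1))
  done
}
rename_cmds
```

Review the printed commands, then run them (followed by an unplumb of the physical interfaces and a plumb of the vsw interfaces) during a maintenance window.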
Creating Guest LDOMs
Before creating the guest LDOMs, make sure to have the following information.
- Guest LDOM name
- How much vCPU to allocate? (3 cores for production/cob; 1 core for dev/uat. Each core has 8
threads. Hence, for three cores, you would need to specify 24 vCPU and for a single core, 8
vCPU).
- How much memory to allocate? (4GB for each core assigned. Hence, for a three core LDOM, you
would need to specify 12 GB for the LDOM; for a single core LDOM, you would need to specify
4GB).
- How much disk space to allocate? (30 GB for boot and 25 GB for apps.) Make sure that the space is
allocated for the guest LDOM.
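The sizing rules above reduce to simple arithmetic; a minimal sketch (the domain name gdomain is illustrative):

```shell
#!/bin/sh
# Sizing sketch per the rules above: 8 vCPU (threads) per core and
# 4 GB of memory per core. "gdomain" is an illustrative domain name.
cores=3                 # production/cob guest (use 1 for dev/uat)
vcpu=$((cores * 8))     # 3 cores -> 24 vCPU
mem_gb=$((cores * 4))   # 3 cores -> 12 GB
echo "ldm add-vcpu ${vcpu} gdomain"
echo "ldm add-memory ${mem_gb}g gdomain"
```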
Steps in Creating Guest LDOMs
From the control domain (sample is a production/cob guest domain):
1. Set up the shell and the proper path to be used for LDOM administration.
exec ksh -o vi
export PS1="$LOGNAME@`uname -n` [\$PWD] > "
export PATH=$PATH:/opt/SUNWldm/bin
2. Create the LDOM guest virtual disk. Using ZFS, and for simplicity, we'll create a
disk/image file which will act as our guest LDOM boot disk. As an example:
zfs create ldomroot/ldom
cd /ldomroot/ldom
mkfile 30g bootdisk.img
3. Create an entry for the bootdisk image to be used for the LDOM guest.
ldm add-vdsdev /ldomroot/ldom/bootdisk.img gdomain_boot@primary-vds0
4. Create the LDOM guest using the command:
ldm add-domain gdomain
ldm add-vcpu 24 gdomain
ldm add-memory 12g gdomain
ldm add-vnet vnet1 primary-vsw1 gdomain
ldm add-vdsdev /ldomroot/ldom/bootdisk.img gdomain_boot@primary-vds0
ldm add-vdisk vdisk0 gdomain_boot@primary-vds0 gdomain
ldm bind gdomain
ldm set-var auto-boot\?=false gdomain
5. Start the LDOM guest
ldm start gdomain
Login to the LDOM guest. To login, you need to see which telnet port to use for the virtual
console of the newly created LDOM guest. Use the command below to identify the virtual port.
ldm list
The command above will indicate each LDOM and its console port. As an example, if you
see port 5000 for the ldom gdomain, issue the command:
telnet localhost 5000
To exit from the virtual console, use the sequence Ctrl-], then exit.
From the ldom guest, issue the command:
ok> boot
Then login as root and run sys-unconfig to rename the LDOM to its new name.
Now that you've successfully configured the LDOM guest, you may want to customize it. Follow
the steps below for additional LDOM customization.
6. Execute the following command:
ldm add-spconfig working
The above command saves the current system configuration (both the LDOM primary and the
guests) to the service processor.
**** NOTE ****
Always run the command ldm add-spconfig when in doubt to save the current configuration.
LDOM guest : Adding a virtual network
Halt the guest domain.
From the control domain:
ldm stop <ldom>
ldm unbind <ldom>
ldm add-vnet vnetX <virtual-switch> <ldom>
where X is the virtual network number being assigned.
ldm bind <ldom>
ldm start <ldom>
LDOM guest: Removing a virtual network
Halt the guest domain.
From the control domain:
ldm stop <ldom>
ldm unbind <ldom>
ldm remove-vnet vnetX <ldom>
where X is the virtual network number to be removed.
ldm bind <ldom>
ldm start <ldom>
LDOM guest : Adding virtual disk using file
Halt the guest domain.
From the control domain:
mkfile <size> <image-file>
ldm add-vdsdev <image-file> <ldom>_<name>@primary-vds0
ldm stop <ldom>
ldm unbind <ldom>
ldm add-vdisk vdiskX <ldom>_<name>@primary-vds0 <ldom>
where X is the virtual disk number to be assigned.
ldm bind <ldom>
ldm start <ldom>
LDOM guest : Adding virtual disk using SAN
If SAN is made available, it is best to put the LDOM guest on SAN for performance
reasons. You need to register the actual raw device (with slice 2) to the virtual disk server.
Halt the guest domain.
From the control domain:
ldm add-vdsdev /dev/rdsk/c2t1d0s2 gdomain_boot@primary-vds0
ldm stop <ldom>
ldm unbind <ldom>
ldm add-vdisk vdiskX <ldom>_<name>@primary-vds0 <ldom>
where X is the virtual disk number to be assigned.
From the syntax above:
ldm add-vdisk vdisk1 gdomain_boot@primary-vds0 gdomain
ldm bind <ldom>
ldm start <ldom>
LDOM guest: Removing a virtual disk
Halt the guest domain.
From the control domain:
ldm stop <ldom>
ldm unbind <ldom>
ldm remove-vdisk vdiskX <ldom>
where X is the virtual disk number to be removed.
ldm bind <ldom>
ldm start <ldom>
Sample LDOM build (after primary or control domain has been created):
Assumptions:
Server : T5240
Network : nxge2, nxge3, nxge4, and nxge5
Disks : LDOM guests will be assigned the following disks for both OS and apps (c1t2d0, c1t3d0,
c1t4d0, c1t5d0, c1t6d0, and c1t7d0, each 146 GB)
Steps:
1. Confirm primary LDOM
# exec ksh -o vi
# export PATH=$PATH:/opt/SUNWldm/bin
# ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
Primary active -n-cv- SP 8 4G 0.2% 7h 52m
#
2. Confirm the primary LDOM services
# ldm list-services
VCC
NAME LDOM PORT-RANGE
primary-vcc0 primary 5000-5255
VSW
NAME LDOM MAC NET-DEV DEVICE DEFAULT-VLAN-ID PVID
VID MODE
primary-vsw0 primary 00:14:4f:f9:d8:76 nxge2 switch@0 1 1
VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
#
In the above we see that the create_primary script initializes the first active network interface as
the network device for the primary virtual switch 0.
You would need to add the other virtual network interfaces as follows:
ldm add-vsw net-dev=nxge3 primary-vsw1 primary
ldm add-vsw net-dev=nxge4 primary-vsw2 primary
ldm add-vsw net-dev=nxge5 primary-vsw3 primary
Upon assignment of the virtual switches, you need to make them active and remove the physical
networks from the active list. This is done by issuing the commands below, and it ensures
communication between the control domain and the guest domains.
for i in 0 1 2 3
do
ifconfig vsw${i} plumb
done
for i in 2 3 4 5
do
ifconfig nxge${i} unplumb
done
ifconfig vsw0 netmask + broadcast + up
ifconfig vsw1 netmask + broadcast + up
ifconfig vsw2 netmask + broadcast + up
ifconfig vsw3 netmask + broadcast + up
3. Create the disk device to be used by the guest domains. In our case, ZFS (RAIDZ) is used for
flexibility. Note that guests would be assigned a separate filesystem. See below:
zpool create -f ldomroot raidz c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
If you make a mistake, you may remove the zpool and recreate the pool as follows:
zpool destroy ldomroot
zpool create -f ldomroot raidz c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
NOTE : destroy option in zpool should be used with caution and care.
Create separate ZFS filesystems for each individual LDOM guest. It's also advisable to create a
separate ZFS filesystem for the apps.
zfs create ldomroot/ldom1
zfs create ldomroot/ldom2
zfs create ldomroot/ldom3
zfs create ldomroot/ldom4
zfs create ldomroot/apps
By separating the filesystem for the LDOM apps, we can take a ZFS snapshot of the guest
LDOM OS disk filesystem prior to applying kernel patches. This way, we can revert to the
previous version of the OS if we need to (i.e., a failed kernel patch that needs to be backed
out).
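A minimal sketch of that snapshot workflow, printed as a dry run (dataset names taken from the example above):

```shell
#!/bin/sh
# Dry-run sketch: print the ZFS snapshot to take on each guest OS dataset
# before patching, and the rollback used only when a patch must be backed out.
snap_cmds() {
  for ds in ldomroot/ldom1 ldomroot/ldom2 ldomroot/ldom3 ldomroot/ldom4; do
    echo "zfs snapshot ${ds}@prepatch"
  done
  # run the rollback only if the kernel patch on that guest must be backed out
  echo "zfs rollback ldomroot/ldom1@prepatch"
}
snap_cmds
```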
4. Copy the gold build image for each LDOM guest as follows:
cp /export/temp/bootdisk.img /ldomroot/ldom1
cp /export/temp/bootdisk.img /ldomroot/ldom2
cp /export/temp/bootdisk.img /ldomroot/ldom3
cp /export/temp/bootdisk.img /ldomroot/ldom4
5. Assign the virtual disk to the service
ldm add-vdsdev /ldomroot/ldom1/bootdisk.img gdom01_boot@primary-vds0
ldm add-vdsdev /ldomroot/ldom2/bootdisk.img gdom02_boot@primary-vds0
ldm add-vdsdev /ldomroot/ldom3/bootdisk.img gdom03_boot@primary-vds0
ldm add-vdsdev /ldomroot/ldom4/bootdisk.img gdom04_boot@primary-vds0
6. Create each individual LDOM guest using the command:
/opt/IHSldmcfg/bin/create_guest
Provide the initial config as follows:
gdom01
24 vCPU
12g Memory
gdom01_boot@primary-vds0 as primary drive (vdisk0)
primary-vsw0 as the primary network interface (vnet0)
gdom02
24 vCPU
12g Memory
gdom02_boot@primary-vds0 as primary drive (vdisk0)
primary-vsw1 as the primary network interface (vnet0)
gdom03
24 vCPU
12g Memory
gdom03_boot@primary-vds0 as primary drive (vdisk0)
primary-vsw0 as the primary network interface (vnet0)
gdom04
24 vCPU
12g Memory
gdom04_boot@primary-vds0 as primary drive (vdisk0)
primary-vsw0 as the primary network interface (vnet0)
7. Start the guest LDOM
ldm start <ldom>
8. Login to the LDOM via the virtual console concentrator (VCC) using the port info provided by
the command ldm list. Note the port number and login using the telnet command as
follows:
telnet localhost <port>
9. Login as root (password is x!tra123), then issue the command:
/usr/sbin/sys-unconfig
The above will unconfigure the new LDOM guest, then reboot.
10. From the primary domain, execute the following command:
ldm add-spconfig working
The above command saves the current system configuration (both the LDOM primary and the
guests) to the service processor.
**** NOTE ****
Always run the command ldm add-spconfig when in doubt to save the current configuration.
Customizing guest LDOM
1. From the control domain, add the virtual network
ldm add-vnet vnet1 primary-vsw1 gdom01
Confirm the vnet addition using the command:
ldm list-bindings gdom01
2. Make sure to unbind and re-bind the new config prior to restart.
ldm unbind gdom01
ldm bind gdom01
ldm start gdom01
3. Configure IPMP
4. Continue with the other LDOM using the same steps above. In configuring IPMP, use the
following table to effectively utilize the networks:
LDOM Guest Vnet0 Vnet1 Primary Active
gdom01 primary-vsw0 (nxge2) primary-vsw2 (nxge4) primary-vsw0 (nxge2)
gdom02 primary-vsw2 (nxge4) primary-vsw0 (nxge2) primary-vsw2 (nxge4)
gdom03 primary-vsw1 (nxge3) primary-vsw3 (nxge5) primary-vsw1 (nxge3)
gdom04 primary-vsw3 (nxge5) primary-vsw1 (nxge3) primary-vsw3 (nxge5)
With the above assignment, the load is split actively across all four physical interfaces.
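The table above can be expressed as a dry-run script that prints the implied add-vnet commands (names taken from the table; nothing is executed):

```shell
#!/bin/sh
# Dry-run sketch: print the ldm add-vnet commands implied by the table above.
add_vnets() {  # args: domain vsw-for-vnet0 vsw-for-vnet1
  echo "ldm add-vnet vnet0 ${2} ${1}"
  echo "ldm add-vnet vnet1 ${3} ${1}"
}
add_vnets gdom01 primary-vsw0 primary-vsw2
add_vnets gdom02 primary-vsw2 primary-vsw0
add_vnets gdom03 primary-vsw1 primary-vsw3
add_vnets gdom04 primary-vsw3 primary-vsw1
```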
5. To add the application disks to the guest LDOMs, the guests need to be stopped. Then, from
the control domain, create the application disk assignment on the apps filesystem for all
the LDOMs, add the disks, and restart.
cd /ldomroot/apps
for i in 1 2 3 4
do
ldm stop gdom0${i}
ldm unbind gdom0${i}
mkfile 25g ldom${i}_apps.img
ldm add-vdsdev /ldomroot/apps/ldom${i}_apps.img rncardsweb0${i}_apps@primary-vds0
ldm add-vdisk vdisk1 rncardsweb0${i}_apps@primary-vds0 gdom0${i}
ldm bind gdom0${i}
ldm start gdom0${i}
done
Connect to each guest LDOM and do a device reconfigure using the command reboot -- -r, or
simply issue the command devfsadm -C. Also note that the new devices need to be
partitioned prior to use.
Assigning specific virtual console to Guest LDOM
ldm set-vcons port=<port> <ldom>
Note that you need to specify this command only for unbound guest domains.
Removing Guest LDOM
ldm remove-domain <ldom>
ldm remove-spconfig <config-name>
Updating LDOM Software
See the matrix below for updating the LDOM package from 1.1 to 1.2. ***** When updating to LDOM 1.2,
note that all LDOMs (control or guest) should adhere to the minimum memory requirement of
12 Mbytes. *****
Patch Platform Firmware Version LDOM Version
139434-03 T2000 6.7.4 1.2
139439-04 T5x20 7.2.2.e 1.2
139444-03 T5240 7.2.2.e 1.2
139446-03 T5440 7.2.2.e 1.2
Precautions before LDOM Software Upgrade
Prior to updating the LDOM Software, be sure to take the following precautions:
1. Backup/Save the Autosave Configuration Directories
Whenever you upgrade the OS or the LDOM software on the control domain, you must save
and restore the Logical Domains autosave configuration data, which is found in the
/var/opt/SUNWldm/autosave-<autosave-name> directories.
You can use tar or cpio to save and restore the entire contents of the directories.
NOTE: Each autosave directory includes a timestamp for the last SP configuration update for
the related configuration. If you restore the autosave files, the timestamp might be out of
sync. In this case, the restored autosave configurations are shown in their previous state,
either [newer] or up to date.
To save:
mkdir -p /root/ldom
cd /root/ldom
tar cvf autosave.tar /var/opt/SUNWldm/autosave-*
To restore, be sure to remove the existing autosave directories to ensure a clean restore
operation:
cd /root/ldom
rm -rf /var/opt/SUNWldm/autosave-*
tar xvf autosave.tar
2. Backup/Save the Logical Domain Constraints Database File
Whenever you upgrade the OS or LDOM package on the control domain, you must save and
restore the Logical Domains constraints database file, which can be found at
/var/opt/SUNWldm/ldom-db.xml.
NOTE: Also save and restore the /var/opt/SUNWldm/ldom-db.xml file when you perform
any other operation that is destructive to the control domain's file data, such as a disk swap.
**** When performing an OS Live Upgrade, be sure to preserve the Logical Domains
Constraints Database File. ***** This can be done by simply adding the following line to the
/etc/lu/synclist file:
/var/opt/SUNWldm/ldom-db.xml OVERWRITE
This will cause the database to be copied automatically from the active boot environment to
the new boot environment when switching boot environments.
3. Make sure that the current working LDOM configuration is saved in the Service Processor
using the command:
ldm add-spconfig working
Steps on Upgrading LDOM software
1. Stop and unbind the guest LDOMS.
2. Flash update the system firmware.
3. Disable the Logical Domains Manager daemon (ldmd)
svcadm disable ldmd
4. Follow the steps below
a. Remove the old SUNWldm package.
pkgrm SUNWldm
b. Add the new SUNWldm package
pkgadd -d ./SUNWldm
c. Use ldm list to verify that the Logical Domain Manager is running.
5. Reboot or issue init 6.
Once the system comes up, verify that the guest domains come up and do a checkout.
6. In some cases vntsd does not come up and you lose console communication with the guest LDOMs.
Issue the command:
svcadm enable vntsd
WARNING:
If you lose some of a guest LDOM's configuration resources (i.e., vnetX or vdiskX), don't try to
remove and re-create the LDOM. It's easier to just re-add the resource using the commands within the
build scripts for that particular LDOM.
LDOM Gotchas:
Housekeeping LDOM guests
1. Create an XML backup copy of the LDOM guest as follows:
ldm ls-constraints -x <ldom> > <ldom>.xml
This creates a backup XML file in case the guest domain needs to be re-created. To re-create
the guest domain using the XML file, issue the following commands:
ldm add-domain -i <ldom>.xml
ldm bind <ldom>
ldm start <ldom>
Edit the guest XML file and replace the placeholder lines with the guest LDOM's reported MAC
address and hostid. To get the guest LDOM's MAC and hostid, execute the command:
ldm list-domain -l <ldom> | more
In the ldom_info section of the XML file:
Replace auto-allocated with the reported MAC address.
Replace 0xffffffff with the reported hostid of the LDOM guest.
2. From the control domain create an output LDOM resource list and LDOM services using the
following commands:
ldm list-bindings > ldm_list_bindings_.out
ldm list-services > ldm_list-services.out
3. Create a link name indicating the respective LDOM filesystem. As an example, ldom1 is
rncardsweb01
cd /ldomroot
ln -s ldom1 gdom01
4. Make sure to finalize guest LDOM builds with an unbind/bind. This guarantees that the
resource list gets saved on the system controller.
NOTE: LDOM resources can be added and deleted dynamically on the fly. To guarantee safe
permanent resource assignments on the controller, an unbind/bind has to occur.
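A dry-run sketch of that finalize sequence (domain names assumed from the earlier example; the script only prints the commands):

```shell
#!/bin/sh
# Dry-run sketch (domain names assumed): print the stop/unbind/bind/start
# sequence that finalizes each guest so its resources persist on the SC.
finalize_cmds() {
  for d in gdom01 gdom02 gdom03 gdom04; do
    echo "ldm stop ${d}"
    echo "ldm unbind ${d}"
    echo "ldm bind ${d}"
    echo "ldm start ${d}"
  done
}
finalize_cmds
```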
5. Make sure that the file permissions of the disk images for the guest LDOMs are set as follows:
chmod 1600 <disk-image>
which should display the following permission:
-rw------T 1 root root bootdisk.img
6. It is best to prepare a guest LDOM build script containing the information on how the
LDOM and the primary services were made available. This eases re-creating the LDOM
guest and the services. The following files are given as examples:
===========================================================
===========================================================
===================================
/root/ldom/scripts/setup_ldm_services.sh
===================================
#!/bin/ksh
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/opt/SUNWldm/bin
# Configure the OS virtual disks
ldm add-vdsdev /ldomroot/ldom1/bootdisk.img ldom1_boot@primary-vds0
# Configure the apps virtual disks
ldm add-vdsdev /ldomroot/ldom1/apps.img ldom1_apps@primary-vds0
# Configure the Virtual Switches
ldm add-vsw net-dev=nxge3 primary-vsw0 primary
ldm add-vsw net-dev=nxge4 primary-vsw1 primary
ldm add-vsw net-dev=nxge2 primary-vsw2 primary
ldm add-vsw net-dev=nxge6 primary-vsw3 primary
===========================================================
===========================================================
===================================
/root/ldom/scripts/build_ldom1.sh
===================================
#!/bin/ksh
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/opt/SUNWldm/bin
# Add the main virtual disks
# ldm add-vdisk vdisk0 ldom1_boot@primary-vds0 ldom1
# Add the virtual disks
ldm add-vdisk vdisk1 ldom1_apps@primary-vds0 ldom1
ldm add-vdisk vdisk2 ldom1_logs@primary-vds0 ldom1
# Add the virtual networks
ldm add-vnet vnet0 primary-vsw0 ldom1
ldm add-vnet vnet1 primary-vsw1 ldom1
ldm bind ldom1
ldm unbind ldom1
ldm bind ldom1
ldm start ldom1
===========================================================
===========================================================
T5240 Additional Patches
139555-08.zip
140796-01.zip
140899-01.zip
141016-01.zip
A script to transfer the files is as follows:
cd /export/PATCHES/LDOM
dsh -q ${1} mkdir /var/tmp/LDOM_PATCH
for i in 139555-08.zip 140796-01.zip 140899-01.zip 141016-01.zip
do
scp ${i} ${1}:/var/tmp/LDOM_PATCH
done
Manual Primary LDOM Creation
1. Create the primary virtual disk server
ldm add-vds primary-vds0 primary
2. Create the primary virtual console switch
ldm add-vcc port-range=5000-5100 primary-vcc0 primary
3. Create the primary virtual switch with the initial network
ldm add-vsw net-dev=nxgeX primary-vsw0 primary
4. Set the MAU/vCPU/memory
ldm set-mau 0 primary
ldm set-vcpu 4 primary
ldm set-memory 1024m primary
NOTE: DO NOT assign MAUs to the primary domain (leave the count at 0), as doing so turns off LDOM dynamic reconfiguration.
5. Create and use the new configuration
ldm add-spconfig initial
6. Reboot
7. Enable the virtual network terminal server daemon
svcadm enable vntsd
8. Execute the following command:
ldm add-spconfig working
The above command saves the current system configuration (both the LDOM primary and the
guests) to the service processor.
**** NOTE ****
Always run the command ldm add-spconfig when in doubt to save the current configuration.
Manual Guest LDOM Creation
1. Add the guest domain
ldm add-domain myldom1
2. Set the guest LDOM vCPU, memory, MAU
ldm add-vcpu 12 myldom1
ldm add-memory 1g myldom1
ldm add-mau 1 myldom1
3. Add virtual network
ldm add-vnet vnet0 primary-vsw0 myldom1
4. Add virtual disk device
ldm add-vdsdev /dev/rdsk/c0t1d0s2 vol1@primary-vds0
5. Add the virtual disk
ldm add-vdisk vdisk0 vol1@primary-vds0 myldom1
6. Set the guest LDOM OBP
ldm set-variable auto-boot\?=false myldom1
ldm set-variable boot-device=/virtual-devices@100/channel-devices@200/disk@0 myldom1
7. Bind the resources to the guest ldom
ldm bind myldom1
8. Start the guest ldom
ldm start myldom1
9. From the primary ldom, execute the following command:
ldm add-spconfig working
The above command saves the current system configuration (both the LDOM primary and the
guests) to the service processor.
**** NOTE ****
Always run the command ldm add-spconfig when in doubt to save the current configuration.
Appendix 1. Template for the zone configuration:
create -b
set zonepath=
set autoboot=true
set scheduling-class=FSS
add net
set address=
set defrouter=
set physical=
end
add rctl
set name=zone.cpu-shares
set value=(priv=privileged,limit=,action=none)
end
add capped-memory
set physical=
set swap=
set locked=
end
Appendix 2. Template for sysidcfg for the local zones:
system_locale=C
timezone=US/Eastern
terminal=xterm
security_policy=NONE
root_password=x!tra123
name_service=NONE
network_interface=primary {hostname=
netmask=
protocol_ipv6=no
default_route=}
nfs4_domain=dynamic
Recommended