Configuration of Virtual Storage on POWER6


VIOS for AIX Storage Administrators

Speaker: Janel Barfield, PowerHA Senior Software Engineer
Email: [email protected]

Agenda
- Virtual storage configuration concepts
- Describe and configure virtual SCSI
- Configure new file-backed virtual devices
- Configure NPIV resources
- Answer questions as time permits (email any questions to [email protected])


Virtual Storage Configuration Concepts
Virtual SCSI devices can be backed by many types of physical storage device:
- physical volume
- logical volume
- file
- optical device
- tape

Virtual optical devices can also be created. They are used by the client like physical optical drives, but are implemented as files on the VIO server.

Virtual I/O Server version 2.1 introduces N_Port ID Virtualization (NPIV), which gives virtual clients visibility to the physical SAN storage.

Virtual SCSI Overview
[Diagram: three client partitions connect through the hypervisor to a Virtual I/O Server, where virtual target devices map server virtual adapters onto physical storage.]

The red connections show two clients accessing the same physical storage via two different server adapters and virtual target devices. The blue connection shows multiple target devices attached to a single server adapter.

Legend:
- PHY: physical adapter
- S: VSCSI server virtual adapter
- C: VSCSI client virtual adapter
- VTD: virtual target device


Virtual SCSI Configuration (1 of 3)
1) Define a virtual SCSI server adapter in the VIO Server partition and a client adapter in the AIX or Linux partition
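One way to add such an adapter pair dynamically is the HMC chhwres command (a sketch; the managed system, partition names, and slot numbers are hypothetical, and the partition profiles still need matching adapter definitions for the change to persist):

hscroot@hmc:~> chhwres -m sys1 -r virtualio --rsubtype scsi -o a -p VIOS -s 7 -a "adapter_type=server,remote_lpar_name=node3,remote_slot_num=6"
hscroot@hmc:~> chhwres -m sys1 -r virtualio --rsubtype scsi -o a -p node3 -s 6 -a "adapter_type=client,remote_lpar_name=VIOS,remote_slot_num=7"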

2) Check availability of virtual SCSI server adapters on the VIO Server:

$ lsdev -virtual
name    status     description
vasi0   Available  Virtual Asynchronous Services Interface (VASI)
vhost0  Available  Virtual SCSI Server Adapter
vsa0    Available  LPAR Virtual Serial Adapter

Virtual SCSI Configuration (2 of 3)
3) On the VIO Server, define storage resources

To create a volume group:
$ mkvg [-f] [-vg VolumeGroup] PhysicalVolume ...

To create a logical volume:
$ mklv [-lv NewLogicalVolume | -prefix Prefix] VolumeGroup Size [PhysicalVolume ...]

To create a storage pool:
$ mksp [-f] StoragePool PhysicalVolume ...

To create a backing device from available space in a storage pool:
$ mkbdsp [-sp StoragePool] Size [-bd BackingDevice] -vadapter ServerVirtualSCSIAdapter
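For example, a minimal worked sequence that builds a volume group and carves out a 20 GB logical volume for later use as a backing device (the disk, volume group, and LV names are hypothetical):

$ mkvg -vg datavg hdisk2
datavg
$ mklv -lv client1_lv datavg 20G
client1_lv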


Virtual SCSI Configuration (3 of 3)
4) On the VIO Server, define virtual target devices:
$ mkvdev -vdev TargetDevice -vadapter VirtualServerAdapter [-dev DeviceName]

For example:

$ mkvdev -vdev hdisk3 -vadapter vhost0
vtscsi0 Available
$ mkvdev -vdev lv10 -vadapter vhost0
vtscsi1 Available
$ mkvdev -vdev cd0 -vadapter vhost0
vtopt0 Available

Check the target devices with lsdev:

$ lsdev -virtual
name     status     description
vtscsi0  Available  Virtual Target Device - Disk
vtscsi1  Available  Virtual Target Device - Logical Volume
vtopt0   Available  Virtual Target Device - Optical Media

5) Boot the client, or run cfgmgr on it, to use the new virtual devices
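On the AIX client, the new devices then surface as ordinary SCSI devices (a sketch; device numbering and location codes will vary):

# cfgmgr
# lsdev -Cc disk
hdisk0 Available  Virtual SCSI Disk Drive
# lsdev -Cc adapter | grep vscsi
vscsi0 Available  Virtual SCSI Client Adapter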

View Configuration with lsmap
Use lsmap from the VIO Server to verify the mapping of virtual targets:

$ lsmap -vadapter vhost0
SVSA            Physloc                       Client Partition ID
--------------- ----------------------------- -------------------
vhost0          U9111.520.10F191F-V3-C6       0x00000003

VTD              vtscsi0
LUN              0x8100000000000000
Backing device   hdisk3
Physloc          U787A.001.DNZ00G0-P1-T10-L8-L0

VTD              vtscsi1
LUN              0x8200000000000000
Backing device   lv10
Physloc

VTD              vtopt0
LUN              0x8300000000000000
Backing device   cd0
Physloc

The client LPAR ID appears under Client Partition ID, the server slot ID is encoded in the SVSA Physloc (C6), and each VTD entry shows its LUN ID, backing device, and (for physical volumes) physical location code.


View Configuration with lshwres
Use lshwres from the HMC for a system-wide view of the virtual I/O configuration (or view it from the HMC GUI):

hscroot@skylab-hmc:~> lshwres -r virtualio --rsubtype scsi -m skylab
lpar_name=VIOS,lpar_id=1,slot_num=7,state=1,is_required=1,adapter_type=server,remote_lpar_id=4,remote_lpar_name=node3,remote_slot_num=6,"backing_devices=drc_name=U787F.001.DPM0ZFL-P1-T10-L4-L0/log_unit_num=0x8100000000000000/device_name=hdisk1,drc_name=U787F.001.DPM0ZFL-P1-T10-L5-L0/log_unit_num=0x8200000000000000/"
lpar_name=node3,lpar_id=4,slot_num=6,state=1,is_required=1,adapter_type=client,remote_lpar_id=1,remote_lpar_name=VIOS,remote_slot_num=7,backing_devices=none


Virtual Target Device Example
[Diagram: a POWER6 system running a VIOS and a client, LPAR1. On the VIOS, the clientVG volume group (hdisk5-hdisk7, reached through SAN FC adapters fcs0/fcs1 and internal storage on sas0), the logical volume cl_lv, and the optical device cd0 back the virtual target devices vtscsi0, vtscsi1, and vtopt0 on vhost0. Through the POWER Hypervisor, LPAR1 sees them on vscsi0 as hdisk0, hdisk1, and cd0.]


File-Backed Virtual Devices
File-backed (FB) virtual device types:
- File-backed disk devices: files created in storage pools can be used as hdisks on the client
- File-backed optical media devices: create a Virtual Media Repository that can be stocked with DVD-ROM/RAM media; clients use images stored in the repository as cd0 devices with media

FB virtual device characteristics:
- Read-only FB devices can be shared by multiple clients
- Bootable: FB devices appear in SMS
- Reside in FB storage pools (mount directory = /var/vio/storagepools/LV_NAME)
- Granularity as small as 1 MB or as large as the parent logical volume

FB virtual devices are new as of Virtual I/O Server V1.5.

Creating File-Backed Virtual Disks
Files on the Virtual I/O Server can be used as backing storage:
1. Create a volume group (mkvg) or storage pool (mksp -f)
2. Create an FB disk storage pool (mksp -fb) inside the volume group/storage pool
3. Create a device in the pool (mkbdsp) and map it to a vadapter
4. The client associated with that vadapter sees the new FB device as an hdisk

[Diagram: a volume group/storage pool containing hdisk(s) holds an FB disk storage pool, whose FB virtual disks back the target devices.]


Create FB Virtual Disks Example (1 of 2)
Create a new volume group/logical volume storage pool (newvg):

$ mkvg -vg newvg hdisk1
    OR
$ mksp -f newvg hdisk1

Create a new 10 GB FB storage pool (fbpool) inside the logical volume storage pool newvg:

$ mksp -fb fbpool -sp newvg -size 10g
fbpool
File system created successfully.
10444276 kilobytes total disk space.
New File System size is 20971520

Create a new file device of a given size, create the VTD, and map it to a vhost adapter. Here a new 30 MB file called fb_disk1 is created; the resulting VTD is named vtscsi3 and is mapped to vhost3:

$ mkbdsp -sp fbpool 30m -bd fb_disk1 -vadapter vhost3
Creating file "fb_disk1" in storage pool "fbpool".
Assigning file "fb_disk1" as a backing device.
vtscsi3 Available
fb_disk1
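To undo such a mapping and reclaim the space, the rmvdev and rmbdsp commands can be used (a hedged sketch based on the VIOS 2.1 command set):

$ rmvdev -vtd vtscsi3
$ rmbdsp -sp fbpool -bd fb_disk1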


Create FB Virtual Disks Example (2 of 2)
View the mapping with the new backing device:

$ lsmap -vadapter vhost3
SVSA            Physloc                       Client Partition ID
--------------- ----------------------------- -------------------
vhost3          U8203.E4A.10CD1F1-V1-C15      0x00000000

VTD              vtscsi3
Status           Available
LUN              0x8100000000000000
Backing device   /var/vio/storagepools/fbpool/fb_disk1
Physloc


Create FB Virtual Optical Device (1 of 2)
Create a volume group/logical volume storage pool (medrep):

$ mkvg -vg medrep hdisk4
    OR
$ mksp -f medrep hdisk4

Create a 10 GB Virtual Media Repository in the LV pool:

$ mkrep -sp medrep -size 10G
Virtual Media Repository Created
Repository created within "VMLibrary_LV" logical volume

Create media (aixopt1) in the repository from a file; media can be blank, loaded from a cd# device, or taken from a file:

$ mkvopt -name aixopt1 -file dvd.product.iso -ro
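For the other two source types, mkvopt can create blank media of a given size or copy an image from an optical drive (a sketch; the media names and size are hypothetical):

$ mkvopt -name blankopt -size 4g
$ mkvopt -name cdcopy -dev cd0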


Create FB Virtual Optical Device (2 of 2)
View the repository and its contents:

$ lsrep
Size(mb)  Free(mb)  Parent Pool  Parent Size  Parent Free
10198     6532      medrep       69888        59648

Name     File Size  Optical  Access
aixopt1  3666       None     ro

Create the FB virtual optical device and map it to a vhost adapter (the new VTD is named vtopt0):

$ mkvdev -fbo -vadapter vhost4
vtopt0 Available

Load the image into the media device (use the unloadopt command to unload):

$ loadopt -vtd vtopt0 -disk aixopt1 -ro
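To swap media later, unload the current image before loading another (a brief sketch; aixopt2 is a hypothetical second image):

$ unloadopt -vtd vtopt0
$ loadopt -vtd vtopt0 -disk aixopt2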


Viewing FB Configuration from the HMC
HMC command line example:

hmc:~> lshwres -m hurston -r virtualio --rsubtype scsi
lpar_name=VIOS,lpar_id=1,slot_num=16,state=1,is_required=0,adapter_type=server,remote_lpar_id=any,remote_lpar_name=,remote_slot_num=any,"backing_devices=""0x8100000000000000//""""/var/vio/VMLibrary/aixopt1"""""""
. . .

FB Device Command Examples (1 of 2)
List the repository and any contents:

$ lsrep
Size(mb)  Free(mb)  Parent Pool  Parent Size  Parent Free
10198     6532      medrep       69888        59648

Name     File Size  Optical  Access
aixopt1  3666       vtopt0   ro

List the storage pools; notice both LVPOOL and FBPOOL types:

$ lssp
Pool    Size(mb)  Free(mb)  Alloc Size(mb)  BDs  Type
rootvg  69888     44544     128             1    LVPOOL
NewVG   69888     59648     64              0    LVPOOL
medrep  69888     59648     64              0    LVPOOL
fbpool  10199     6072      64              2    FBPOOL

List volume groups/storage pools (LVPOOL type only):

$ lsvg
rootvg
NewVG
medrep

FB Device Command Examples (2 of 2)
List LVPOOL details:

$ lssp -detail -sp NewVG
Name    PVID              Size(mb)
hdisk3  000cd1f195f987df  69888

List FBPOOL details:

$ lssp -bd -sp fbpool
Name      Size(mb)  VTD      SVSA
fb_disk1  30        vtscsi3  vhost3
fb_disk2  4096      vtscsi4  vhost3

Show all mounts, including the FB devices:

$ mount
node  mounted            mounted over                  vfs     date          options
----- ------------------ ----------------------------- ------- ------------- ---------------
      /dev/hd4           /                             jfs2    Apr 18 13:01  rw,log=/dev/hd8
      /dev/hd2           /usr                          jfs2    Apr 18 13:01  rw,log=/dev/hd8
      /dev/hd9var        /var                          jfs2    Apr 18 13:01  rw,log=/dev/hd8
      /dev/hd3           /tmp                          jfs2    Apr 18 13:01  rw,log=/dev/hd8
      /dev/hd1           /home                         jfs2    Apr 18 13:01  rw,log=/dev/hd8
      /proc              /proc                         procfs  Apr 18 13:01  rw
      /dev/hd10opt       /opt                          jfs2    Apr 18 13:01  rw,log=/dev/hd8
      /dev/fbpool        /var/vio/storagepools/fbpool  jfs2    Apr 28 12:04  rw,log=INLINE
      /dev/VMLibrary_LV  /var/vio/VMLibrary            jfs2    Apr 28 14:36  rw,log=INLINE


File-Backed Virtual Devices Example
Configure a file-backed virtual disk and a file-backed virtual optical device.

[Diagram: on the VIOS, physical disks hdisk0-hdisk2 hold rootvg, stpool1 (an LV storage pool), fbpool1 (an FB storage pool containing fb_disk1 and fb_disk2), and medrep (a Virtual Media Repository stocked with cl_mksysb, AIX53_iso, and AIX61_iso). fb_disk1 backs vtscsi2, and a repository image is loaded into vtopt1, both mapped through vhost1. Across the POWER Hypervisor, LPAR1 sees the devices on vscsi1 as an hdisk and cd1.]


N_Port ID Virtualization (NPIV)
NPIV is an industry-standard technology that allows one physical Fibre Channel adapter to be shared under multiple unique worldwide port names (WWPNs):
- Assign at least one 8 Gigabit PCI Express Dual Port Fibre Channel Adapter to the Virtual I/O Server
- Create a virtual client and server Fibre Channel adapter pair in each partition profile through the HMC (or IVM)
  - Always a one-to-one relationship: each virtual Fibre Channel server adapter on the Virtual I/O Server partition connects to one virtual Fibre Channel client adapter on a virtual I/O client partition
- Each virtual Fibre Channel client adapter receives a pair of unique WWPNs
  - The pair is critical, and both must be zoned (the second WWPN is used for Live Partition Mobility)
- Virtual Fibre Channel server adapters are mapped to physical ports on the physical Fibre Channel adapter on the VIO server
- Using the SAN tools of the SAN switch vendor, zone your NPIV-enabled switch so that the WWPNs the HMC created for the virtual Fibre Channel client adapters are in a zone with the WWPNs of your storage device, just as in a physical storage environment


NPIV Requirements
Hardware:
- POWER6 hardware
- A minimum system firmware level of EL340_039 for the IBM Power 520 and Power 550, and EM340_036 for the IBM Power 560 and IBM Power 570
- A minimum of one 8 Gigabit PCI Express Dual Port Fibre Channel Adapter (Feature Code 5735)
- An NPIV-enabled SAN switch
  - Only the first SAN switch attached to the Fibre Channel adapter in the Virtual I/O Server needs to be NPIV capable; other switches in your SAN environment do not

Software:
- HMC V7.3.4, or later
- Virtual I/O Server Version 2.1 with Fix Pack 20.1, or later
- AIX 5.3 TL9, or later; AIX 6.1 TL2, or later
- SDD 1.7.2.0 + PTF 1.7.2.2
- SDDPCM 2.2.0.0 + PTF v2.2.0.6, or SDDPCM 2.4.0.0 + PTF v2.4.0.1
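A quick way to check the software side of this list (a sketch; the output values shown are illustrative only):

$ ioslevel                  (on the VIOS; should report 2.1 or later)
2.1.0.10-FP-20.1
# oslevel -s                (on the AIX client; should report 5300-09 or 6100-02, or later)
6100-02-01-0847
hscroot@hmc:~> lshmc -V     (on the HMC; should report V7.3.4 or later)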


NPIV Configuration Basics
1. Create virtual Fibre Channel adapters from the HMC for the VIO server and client partitions
   - Creating the client adapter generates a pair of unique WWPNs for the virtual client adapter
   - The WWPNs are based on a unique 6-digit prefix that comes with the managed system, which includes 32,000 pairs of WWPNs that are not reused (you must purchase more if you run out)

2. Map the virtual Fibre Channel server adapters on the VIO server to the physical port of the physical Fibre Channel adapter with the vfcmap command on the VIO server
3. Zone and map the WWPNs of the client virtual Fibre Channel adapter to the correct LUNs from the SAN switch and storage manager
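The generated WWPN pairs can be read back from the HMC command line (a sketch; the managed system name and the WWPN values are hypothetical):

hscroot@hmc:~> lshwres -r virtualio --rsubtype fc --level lpar -m sys1 -F lpar_name,slot_num,wwpns
AIX61,31,"c050760000140000,c050760000140001"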


SAN Switch Configuration for NPIV Support
On the SAN switch, two things need to be done before it can be used for NPIV (see the session sketch below):
1. Update the firmware to a minimum level of Fabric OS (FOS) 5.3.0; to check the level of Fabric OS on the switch, log on to the switch and run the version command
2. Enable the NPIV capability on each port of the SAN switch with the portCfgNPIVPort command (for example, to enable NPIV on port 16: portCfgNPIVPort 16, 1); the portcfgshow command lists information for all ports
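A consolidated session using only the commands named above might look like this (the prompt is illustrative; output is omitted):

switch:admin> version              (confirm FOS 5.3.0 or later)
switch:admin> portCfgNPIVPort 16, 1
switch:admin> portcfgshow          (verify NPIV capability is ON for port 16)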


Creating Virtual Fibre Channel Adapters
Create a virtual Fibre Channel server adapter, then create a virtual Fibre Channel client adapter. These HMC dialogs look very much like the dialogs used to create virtual SCSI adapters.


Create Mapping from Virtual to Physical Fibre Channel Adapters on the VIOS (1 of 2)
The lsdev -dev vfchost* command lists all available virtual Fibre Channel server adapters in the VIO server:

$ lsdev -dev vfchost*
name      status     description
vfchost0  Available  Virtual FC Server Adapter

The lsdev -dev fcs* command lists all available physical Fibre Channel adapters in the VIO server:

$ lsdev -dev fcs*
name  status     description
fcs2  Available  8Gb PCI Express Dual Port FC Adapter
fcs3  Available  8Gb PCI Express Dual Port FC Adapter

Run the lsnports command to check the NPIV readiness of the Fibre Channel adapter and the SAN switch (fabric should be 1):

$ lsnports
name  physloc                     fabric  tports  aports  swwpns  awwpns
fcs3  U789D.001.DQDYKYW-P1-C6-T2  1       64      63      2048    2046


Create Mapping from Virtual to Physical Fibre Channel Adapters on the VIOS (2 of 2)
Map the virtual Fibre Channel server adapter to the physical Fibre Channel adapter with the vfcmap command:

$ vfcmap -vadapter vfchost0 -fcp fcs3

List the mappings with the lsmap -npiv command:

$ lsmap -npiv -vadapter vfchost0
Name      Physloc                   ClntID  ClntName  ClntOS
--------- ------------------------- ------- --------- -------
vfchost0  U9117.MMA.101F170-V1-C31  3       AIX61     AIX

Status:LOGGED_IN
FC name:fcs3    FC loc code:U789D.001.DQDYKYW-P1-C6-T2


Create Zoning in the SAN Switch for the Client (1 of 2)
Get the WWPN of the virtual Fibre Channel client adapter created in the virtual I/O client partition; from the HMC, look at the virtual adapter properties.

Log on to your SAN switch and create a new zoning configuration or customize an existing one:
- The zoneshow command, available on the IBM 2109-F32 switch, lists the existing zones
- To add the WWPN c0:50:76:00:0a:fe:00:14 to the zone named vios1, run:

zoneadd "vios1", "c0:50:76:00:0a:fe:00:14"

To save and enable the new zoning, run the cfgsave and cfgenable npiv commands.

Create Zoning in the SAN Switch for the Client (2 of 2)
With the zoneshow command, you can check that the added WWPN is active:

zoneshow
Defined configuration:
 cfg:  npiv   vios1; vios2
 zone: vios1  20:32:00:a0:b8:11:a6:62; c0:50:76:00:0a:fe:00:18;
              c0:50:76:00:0a:fe:00:14
 zone: vios2  c0:50:76:00:0a:fe:00:12; 20:43:00:a0:b8:11:a6:62
Effective configuration:
 cfg:  npiv
 zone: vios1  20:32:00:a0:b8:11:a6:62
              c0:50:76:00:0a:fe:00:18
              c0:50:76:00:0a:fe:00:14
 zone: vios2  c0:50:76:00:0a:fe:00:12
              20:43:00:a0:b8:11:a6:62

After you have finished the zoning, map the LUN device(s) to the WWPN from the SAN storage manager application.


Viewing NPIV Storage Access from the Client
List all virtual Fibre Channel client adapters in the virtual I/O client partition:

# lsdev -l fcs*
fcs0 Available 31-T1 Virtual Fibre Channel Client Adapter

Disks attached through the virtual adapter are visible with lspv; view the paths to each virtual disk with lspath:

# lspv
hdisk0  00c1f170e327afa7  rootvg  active
hdisk1  00c1f170e170fbb2  None
hdisk2  none              None

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk1 vscsi0
Enabled hdisk0 vscsi1
Enabled hdisk2 fscsi0

Use the mpio_get_config command to get more detailed information. For example:

# mpio_get_config -A
Storage Subsystem worldwide name: 60ab800114632000048ed17e
Storage Subsystem Name = 'ITSO_DS4800'
hdisk   LUN #  Ownership      User Label
hdisk2  0      A (preferred)  NPIV_AIX61

Implementing Redundancy with NPIV
- You can create multiple paths from a LUN in the SAN to a virtual client via multiple virtual Fibre Channel adapters
- You can create multiple paths from a LUN in the SAN to an AIX client using a combination of virtual and physical Fibre Channel adapters
- Set the path priority, hcheck_interval, and hcheck_mode attributes for disks and paths when using MPIO (see the sketch below)
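A hedged example of setting those MPIO attributes on the AIX client (the device and parent adapter names are hypothetical; -P defers the disk change until the next reboot when the disk is in use):

# chdev -l hdisk2 -a hcheck_interval=60 -a hcheck_mode=nonactive -P
# chpath -l hdisk2 -p fscsi0 -a priority=2
# lspath -l hdisk2 -F "status parent path_id"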


Conclusion
There are many options available for providing virtual storage to AIX clients on POWER5 and POWER6 systems:
- Virtual SCSI devices (supported on POWER5 and POWER6)
- Virtual Fibre Channel adapters (supported on POWER6)

Consult the PowerVM Virtualization Managing and Monitoring Redbooks publication for detailed information about common use cases and configuration details:
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/sg247590.html?OpenDocument

