8/2/2019 VMware Implementation With IBM Midrange System Storage
International Technical Support Organization
VMware Implementation with IBM Midrange System Storage
January 2010
Draft Document for Review January 28, 2010 12:50 am 4609edno.fm
REDP-4609-00
Copyright International Business Machines Corporation 2010. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
First Edition (January 2010)
This edition applies to: VMware ESX 4.0 Server, IBM Midrange Storage DS5000 running v7.60 firmware, and IBM DS Storage Manager v10.60.
This document created or updated on January 28, 2010.
Note: Before using this information and the product it supports, read the information in Notices on page vii.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
The team that wrote this paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Part 1. Planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 1. Introduction of IBM VMware Midrange Storage Solutions . . . . . . . . . . . . . . 3
1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 IBM VMware Storage Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 VMware ESX Server Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.1 Overview of using VMware ESX Server with SAN . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.2 Benefits of Using VMware ESX Server with SAN . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.3 VMware ESX Server and SAN Use Cases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Overview of VMware Consolidated Backup (VCB) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 Overview of VMware vCenter Site Recovery Manager (SRM) . . . . . . . . . . . . . . . . . . . . 9
Chapter 2. Security Design of the VMware Infrastructure Architecture. . . . . . . . . . . . 13
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Virtualization Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 CPU Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4 Memory Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5 Virtual Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6 Service Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.7 Virtual Networking Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.8 Virtual Switches. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.9 Virtual Switch VLANs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.10 Virtual Ports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.11 Virtual Network Adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.12 Virtual Switch Isolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.13 Virtual Switch Correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.14 Virtualized Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.15 SAN Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.16 VMware vCenter Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Chapter 3. Planning the VMware Storage System Design . . . . . . . . . . . . . . . . . . . . . . 27
3.1 VMware ESX Server Storage Structure: Disk Virtualization . . . . . . . . . . . . . . . . . . . . . 28
3.1.1 Local disk usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.1.2 SAN disk usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.1.3 Disk virtualization with VMFS volumes and .vmdk files . . . . . . . . . . . . . . . . . . . . 29
3.1.4 VMFS access mode - Public Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.5 vSphere Server .vmdk modes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.6 Specifics of Using SAN Arrays with VMware ESX Server . . . . . . . . . . . . . . . . . . 30
3.1.7 Host Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.1.8 Levels of Indirection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.2 Which IBM Midrange Storage Subsystem should be used in a VMware implementation? 32
3.3 Overview of IBM Midrange Storage Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.3.1 Positioning the IBM Midrange Storage Systems. . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.3.2 DS4000 and DS5000 Series product comparison . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4 Storage Subsystem Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.4.1 Segment Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.4.2 Calculating Optimal Segment Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.4.3 Improvements in Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.4.4 Enabling Cache Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.4.5 Aligning File System Partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.4.6 Premium Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.4.7 Considering Individual Virtual Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.4.8 Determining the Best RAID Level for Logical Drives and Arrays . . . . . . . . . . . . . 39
3.4.9 Server Consolidation Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.4.10 VMware ESX Server Storage Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.4.11 Configurations by Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.4.12 Zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Chapter 4. Planning the VMware Server Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.1 Considering the VMware Server Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.1.1 Minimum Server Requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.1.2 Maximum Physical Machine Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.1.3 Recommendations for Enhanced Performance . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.1.4 Considering the Server Hardware Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.1.5 General Performance and Sizing Considerations. . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2 Operating System Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2.1 Buffering the I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2.2 Aligning Host I/O with RAID Striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.2.3 Locating Recommendations for the Host Bus Adapter Settings . . . . . . . . . . . . . . 59
4.2.4 Recommendations for Fibre Channel Switch Settings . . . . . . . . . . . . . . . . . . . . . 59
4.2.5 Using Command Tag Queuing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.2.6 Analyzing I/O Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.2.7 Using VMFS for Spanning Across Multiple LUNs . . . . . . . . . . . . . . . . . . . . . . . . . 60
Part 2. Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Chapter 5. VMware ESX Server and Storage Configuration . . . . . . . . . . . . . . . . . . . . . 63
5.1 Storage Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.1.1 Notes on mapping LUNs to a storage partition . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.1.2 Steps for verifying the storage configuration for VMware . . . . . . . . . . . . . . . . . . . 66
5.2 Installing the VMware ESX Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.2.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.2.2 Hardware Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.2.3 Software Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.2.4 Connecting to VMware vSphere Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5.2.5 Post-Install Server Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.2.6 Configuring VMware ESX Server Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5.2.7 Create additional Virtual Switches for guests connectivity. . . . . . . . . . . . . . . . . 109
5.2.8 Creating Virtual Machines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
5.2.9 Additional VMware ESX Server Storage Configuration . . . . . . . . . . . . . . . . . . . 121
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Referenced Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
How to get IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX
DS4000
DS8000
FlashCopy
IBM
Redbooks
Redbooks (logo)
System p
System Storage
System Storage DS
System x
Tivoli
XIV
The following terms are trademarks of other companies:
AMD, the AMD Arrow logo, and combinations thereof, are trademarks of Advanced Micro Devices, Inc.
Emulex, and the Emulex logo are trademarks or registered trademarks of Emulex Corporation.
Fusion-MPT, LSI, LSI Logic, MegaRAID, and the LSI logo are trademarks or registered trademarks of LSI Corporation.
Snapshot, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other
countries.
Novell, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and other countries.
Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or its affiliates.
QLogic, and the QLogic logo are registered trademarks of QLogic Corporation. SANblade is a registered trademark in the United States.
Red Hat, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in the U.S. and other countries.
Virtual SMP, VMotion, VMware, the VMware "boxes" logo and design are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions.
Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This document is a compilation of best practices for planning, designing, implementing, and maintaining IBM Midrange storage solutions and, more specifically, configurations for a VMware ESX and VMware ESXi Server based host environment. Setting up an IBM Midrange Storage Subsystem can be a challenging task, and the principal objective of this book is to give users a sufficient overview to effectively enable SAN storage with VMware. No single configuration is satisfactory for every application or situation, but careful planning and consideration make a VMware implementation effective. Although this document is derived from an actual setup and verification, note that it has not been stress tested or tested for all possible use cases, and it was used in a limited configuration assessment.
Note: Because of the highly customizable nature of a VMware ESX Host environment, you must take into consideration your specific environment and equipment to achieve optimal performance from an IBM Midrange Storage Subsystem. When weighing the recommendations in this document, start with the first principles of I/O performance tuning, and keep in mind that each environment is unique; the correct settings depend on the specific goals, configurations, and demands of the specific environment.
Much of the content for this document is derived from the LSI version of the same document and the Best Practices for Running VMware ESX 3.5 on an IBM DS5000 Storage System whitepaper, available at the following link:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101347
The team that wrote this paper

This paper was produced by a team of specialists from around the world working at the International Technical Support Organization (ITSO), Austin Center.
Sangam Racherla is an IT Specialist and Project Leader working at the International Technical Support Organization, San Jose Center. He holds a degree in electronics and communication engineering and has nine years of experience in the IT field. He has been with the International Technical Support Organization for the past six years and has extensive experience installing and supporting the ITSO lab equipment for various Redbooks publication projects. His areas of expertise include Microsoft Windows, Linux, AIX, System x, and System p servers and various SAN and storage products.
Corne Lottering is a Systems Storage Sales Specialist in the IBM Sub-Saharan Africa Growth Market Region for the Systems and Technology Group. His primary focus is sales in the Central African countries, but he also provides pre-sales support to the Business Partner community across Africa. He has been with IBM for nine years and has experience in a wide variety of storage technologies, including the DS4000, DS5000, DS8000, and XIV products, IBM SAN switches, IBM Tape Systems, and storage software. Since joining IBM, he has been responsible for various implementation and support projects for customers across Africa.
John Sexton is a Certified Consulting IT Specialist, based in Auckland, New Zealand, and has over 20 years of experience working in IT. He has worked at IBM for the last 13 years. His areas of expertise include IBM eServer pSeries, AIX, HACMP, virtualization,
storage, TSM, SAN, SVC, and business continuity. He provides pre-sales support and technical services for clients throughout New Zealand, including consulting, solution implementation, troubleshooting, performance monitoring, system migration, and training. Prior to joining IBM in New Zealand, John worked in the United Kingdom supporting and maintaining systems in the financial and advertising industries.
Pablo Pedrazas is a Hardware Specialist working with Power Servers and Storage Products at the IBM Argentina Support Center, doing post-sales second-level support for Spanish-speaking Latin American countries in the Maintenance & Technical Support Organization. He has 21 years of experience in the IT industry, developing expertise in UNIX servers and storage products. He holds a bachelor's degree in Computer Science and a Master of Science in Information Technology and Telecommunications Management from the EOI of Madrid.
Chris Bogdanowicz has over 20 years of experience in the IT industry. He joined Sequent Computer Systems 15 years ago, initially specializing in symmetric multiprocessing UNIX platforms and later NUMA-Q technology. He remained in a support role when Sequent merged with IBM in 1999. He is currently a member of the IBM MTS SAN and midrange storage hardware support team in the UK. In addition, he is part of a Virtual EMEA Team (VET) providing Level 2 support for DS4000 and DS5000 products within Europe. He also maintains a keen interest in performance and configuration issues through participation in the Storage Solution Expert (SSE) program.
Alexander Watson is a Senior IT Specialist for Storage ATS Americas in the United States. He is a Subject Matter Expert on SAN switches and the DS4000 products. He has over ten years of experience in planning, managing, designing, implementing, problem analysis, and tuning of SAN environments. He has worked at IBM for ten years. His areas of expertise include SAN fabric networking, Open System Storage I/O, and the IBM Midrange Storage Subsystems family of products.
Bruce Allworth is a Senior IT Specialist working in IBM Americas Storage Advanced Technical Support (ATS). He is a Subject Matter Expert and the ATS Team Leader for the DS5000, DS4000, and DS3000 product lines. He has many years of experience with these products, including management, solution design, advanced problem determination, and disaster recovery. He works closely with various IBM divisions and LSI in launching new products, creating critical documentation, including Technical and Delivery Assessment Checklists, and developing and delivering technical training for a wide range of audiences.
Frank Schubert is an IBM Certified Systems Expert and Education Specialist for DS4000 Storage systems. He works for IBM Global Technology Services (GTS) in the Technical Education and Competence Center (TECC) in Mainz, Germany. His focus is on deploying education and training IBM service personnel in EMEA to maintain, service, and implement IBM storage products such as DS4000/DS5000 and N series. He has been with IBM for the last 14 years and has gained storage experience since 2003 in different support roles.
Alessio Bagnaresi is a Senior Solution Architect and Technical Sales Manager at Infracom, a major IBM Business Partner in Italy. He is currently working on customer assessments and proofs of concept covering desktop, server, and storage virtualization, consolidation, infrastructure optimization, and platform management. He is certified on several platforms, such as AIX, Linux, VMware, Citrix, Xen, Tivoli Software, and IBM Enterprise System Storage products. His job includes the planning, design, and delivery of Platform Management, Business Continuity, Disaster Recovery, Backup/Restore, and Storage/Server/Desktop Virtualization solutions involving IBM Director, IBM System p, System x, and System Storage platforms (mostly covering IBM SAN Volume Controller, IBM DS4000/DS5000 Midrange Storage Server, IBM DS8000 Enterprise Storage, and IBM N series). Earlier in his career, he worked at IBM as a Cross-Brand System Architect. He handled customer projects
involving Server Consolidation (PowerVM, VMware, Hyper-V, and Xen), Business Continuity (DS8000 Advanced Copy Services, PowerHA XD, AIX Cross-Site Mirroring, DB2 High Availability and Disaster Recovery, DS4000/DS5000 Enhanced Remote Mirror), Disaster Recovery (TSM DRM, ProtecTier TS7650G), and Storage Virtualization (SVC and N series).
The authors would like to express their thanks to the following people, whose expertise and support were integral to the writing of this Redpaper:
Doris Konieczny
Harold Pike
Pete Urbisci
Scott Rainwater
Michael D Roll
Mark Brougher
Bill Wilson
Alex Osuna
Jon Tate
Bertrand Dufrasne
Richard Hutzler
Georgia L Mann (author of the Best Practices for Running VMware ESX 3.5 on an IBM DS5000 Storage System whitepaper)
IBM
Amanda Ryan
Stacey Dershem
Brad Breault
LSI Corporation
Brian Steffler
Jed Bless
Brocade
Thanks to the following people for their contributions to this project:
Alex Osuna
Jon Tate
Bertrand Dufrasne
Ann Lund
A special mention must go to the authors of the LSI version of this document.
Jamal Boudi
Fred Eason
Bob Houser
Bob Lai
Ryan Leonard
LSI Corporation
Become a published author
Join us for a two- to six-week residency program! Help write a book dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You will have the opportunity to team with IBM technical professionals, Business Partners, and
Clients.
Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you
will develop a network of contacts in IBM development labs, and increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an e-mail to:
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Part 1 Planning
Part 1 provides the conceptual framework for understanding IBM Midrange Storage Systems in a Storage Area Network and VMware environment and includes recommendations, hints, and tips for the physical installation, cabling, and zoning. Although no performance figures are included, we discuss the performance and tuning of various components and features to guide you when working with IBM Midrange Storage.
Before you start any configuration of the IBM Midrange Storage Subsystem in a VMware environment, you must understand the following concepts to guide you in your planning:
Recognizing the IBM Midrange Storage Subsystem feature set
Balancing drive-side performance
Understanding the segment size of logical drives
Knowing about storage system cache improvements
Comprehending file system alignment
Knowing how to allocate logical drives for ESX Host and vSphere
Recognizing server hardware architecture
Identifying specific ESX Host and vSphere settings
The following chapters will assist you in planning for the optimal design of your implementation.
Chapter 1. Introduction of IBM VMware Midrange Storage Solutions

This chapter provides an introduction to the IBM VMware Midrange Storage Solutions and an overview of the components involved.
1.1 Overview
Many businesses and enterprises have implemented VMware or have plans to implement VMware. VMware provides more efficient use of assets and lower costs by consolidating servers and storage. Applications that previously had been running on under-utilized dedicated physical servers are migrated to their own virtual machine, or virtual server, that is part of a VMware ESX cluster or a virtual infrastructure.

As part of this consolidation, asset utilization typically can be increased from under 10 percent to over 85 percent. Applications that previously had dedicated internal storage now can use a shared networked storage system that pools storage to all of the virtual machines and their applications. Backup, restore, and disaster recovery become more effective and easier to manage. Because of the consolidated applications and their mixed workloads, the storage system must deliver balanced, high performance in order to support existing IT service level agreements. The IBM Midrange Storage Systems provide an effective means to that end.
IBM Midrange Storage Systems are designed to deliver reliable performance for mixed applications, including transaction and sequential workloads. These workloads include applications that are typical of a virtual infrastructure, including email, database, web server, file server, data warehouse, and backup profiles. IBM offers a complete line of storage systems, from entry-level through midrange to enterprise-level systems, that are certified to work with VMware ESX Server.
The following items describe the IBM Midrange Storage Systems available from IBM, referred to throughout this book as the DS Series. These storage subsystems are discussed in greater detail in Chapter 3, "Planning the VMware Storage System Design" on page 27. All of these systems offer shared storage that enables all of VMware's advanced functionality, for example VMware Distributed Resource Scheduler (DRS), VMware vCenter Site Recovery Manager (SRM), and VMware High Availability (HA).
The IBM DS4700 and DS4800 storage subsystems are Fibre Channel storage systems that offer outstanding performance with advanced copy premium features such as FlashCopy, VolumeCopy, and Enhanced Remote Mirroring. This is the first DS system that supports SRM (Site Recovery Manager).
The IBM DS5000 storage systems offer the highest performance and the most scalability, expandability, and investment protection currently available in the IBM Midrange portfolio. The IBM DS5000 storage subsystem offers enterprise-class features and availability. This storage system can handle the largest and most demanding virtual infrastructure workloads. The IBM DS5000 storage systems are available with up to 448-disk-drive capability and the latest in host connectivity, including Fibre Channel and iSCSI. This system supports SRM (Site Recovery Manager).
1.2 IBM VMware Storage Solutions
Many companies consider and employ VMware virtualization solutions to reduce IT costs while increasing the efficiency, utilization, and flexibility of their hardware. In fact, 100,000 customers have deployed VMware, including 90% of Fortune 1000 businesses. Yet maximizing the operational benefits from virtualization requires network storage that helps optimize the VMware infrastructure.
The IBM Storage solutions for VMware offer customers:
Flexibility: Support for iSCSI and Fibre Channel shared storage, plus HBA and storage port multi-pathing and boot from SAN.
Performance: Outstanding high-performance block-level storage that scales with VMware's VMFS file system; independently verified high performance by the SPC-1 and SPC-2 (Storage Performance Council) benchmarks; and balanced performance delivered by the IBM Midrange Storage Systems for mixed applications running in a virtual infrastructure.
Horizontal scalability: From entry-level through midrange to enterprise-class network storage with commonality of platform and storage management.
Hot backup and quick recovery: Non-disruptive backup solutions using Tivoli and NetBackup, with and without VCB (VMware Consolidated Backup). Quick recovery at the file or virtual machine level.
Disaster recovery: DS4000 and DS5000 Enhanced Remote Mirror offering affordable disaster recovery with automatic failover in conjunction with VMware vCenter Site Recovery Manager (SRM).
Affordability: Low TCO shared storage with included IBM Storage Manager software and no separate software maintenance fees; cost-effective tiered storage within the same storage system, leveraging Fibre Channel drives for high performance and SATA drives for economical capacity.
Efficiency: Data services features such as FlashCopy and VolumeCopy enable VMware centralized backup to disk and eliminate backup windows, as well as provide the required network storage for VMware ESX Server features such as VMware vMotion, VMware Storage vMotion, Resource Pools, VMware Distributed Resource Scheduler (DRS), and VMware High Availability.
VMware vSphere includes components and operations essential for managing virtual machines. The following components form part of the new VMware vSphere suite:
VMware ESX/ESXi server
VMware vCenter Server
Datastore
Host Agent
1.3 VMware ESX Server Architecture
VMware ESX Server is virtual infrastructure partitioning software designed for server consolidation, rapid deployment of new servers, increased availability, and simplified management, helping to improve hardware utilization and save space, IT staffing, and hardware costs.
Many people may have had earlier experience with VMware's virtualization products in the form of VMware Workstation or VMware GSX Server. VMware ESX Server is quite different from these products in that it runs directly on the hardware, offering a mainframe-class virtualization software platform that enables the deployment of multiple, secure, independent virtual machines on a single physical server.
VMware ESX Server allows several instances of operating systems, such as Windows Server 2003, Windows Server 2008, Red Hat and (Novell) SuSE Linux, and more, to run in partitions independent of one another. This technology is therefore a key software enabler for server consolidation, providing the ability to move existing, unmodified applications and operating system environments from a large number of older systems onto a smaller number of new high-performance System x platforms.

Real cost savings can be achieved by allowing for a reduction in the number of physical systems to manage, saving floor space and rack space, reducing power consumption, and eliminating the headaches associated with consolidating dissimilar operating systems and applications that require their own OS instance.
The architecture of VMware ESX Server is shown in Figure 1-1.
Figure 1-1 VMware ESX Server Architecture
Additionally, VMware ESX Server helps you build cost-effective, high-availability solutions by using failover clustering between virtual machines. Until now, system partitioning (the ability of one server to run multiple operating systems simultaneously) has been the domain of mainframes and other large midrange servers. But with VMware ESX Server, dynamic, logical partitioning can be enabled on IBM System x systems.
Instead of deploying multiple servers scattered around a company, each running a single application, the servers can be consolidated physically while enhancing system availability at the same time. VMware ESX Server allows each server to run multiple operating systems and applications in virtual machines, providing centralized IT management. Because these virtual machines are completely isolated from one another, if one were to go down, it would not affect the others.
This means that not only is VMware ESX Server software great for optimizing hardware usage, it can also give the added benefits of higher availability and scalability.
1.3.1 Overview of using VMware ESX Server with SAN
A storage area network (SAN) is a highly effective means to support and provision VMware products. Consideration should be given to a SAN's high-performance characteristics and feature functions such as FlashCopy, VolumeCopy, and mirroring. The configuration of a SAN requires careful consideration of components, including host bus adapters (HBAs) in the host servers, SAN switches, storage processors, disks, and storage disk arrays. A SAN topology has at least one switch present to form a SAN fabric.
1.3.2 Benefits of Using VMware ESX Server with SAN
Using a SAN with VMware ESX Server allows you to improve data accessibility and system recovery:
Effectively store data redundantly and eliminate single points of failure.
Data centers can quickly recover from system failures.
VMware ESX Server systems provide multipathing by default and automatically support it for virtual machines.
Using a SAN with VMware ESX Server systems extends failure resistance to servers.
Using VMware ESX Server with a SAN makes high availability and automatic load balancing affordable for more applications than if dedicated hardware is used to provide standby services:
Because shared central storage is available, building virtual machine clusters that use MSCS becomes possible.
If virtual machines are used as standby systems for existing physical servers, shared storage is essential and a viable solution.
Use VMware vMotion capabilities to migrate virtual machines seamlessly from one host to another.
Using VMware High Availability (HA) in conjunction with a SAN for a cold standby solution guarantees an immediate, automatic failure response.
Use VMware Distributed Resource Scheduler (DRS) to migrate virtual machines from one host to another for load balancing.
In VMware DRS clusters, you can put a VMware ESX Server host into maintenance mode to have the system migrate all running virtual machines to other VMware ESX Server hosts.
The transportability and encapsulation of VMware virtual machines complements the shared nature of SAN storage. When virtual machines are located on SAN-based storage, you can shut down a virtual machine on one server and power it up on another server, or suspend it on one server and resume operation on another server on the same network, in a matter of minutes. This ability allows you to migrate computing resources while maintaining consistent shared access.
1.3.3 VMware ESX Server and SAN Use Cases
Using VMware ESX Server systems in conjunction with SAN is effective for the following tasks:
Maintenance with zero downtime: When performing maintenance, use VMware DRS or VMware vMotion to migrate virtual machines to other servers.
Load balancing: Use VMware vMotion or VMware DRS to migrate virtual machines to other hosts for load balancing.
Storage consolidation and simplification of storage layout: Host-based storage is not the most effective way to use available storage. Shared storage is more manageable for allocation and recovery.
Disaster recovery: Having all data stored on a SAN can greatly facilitate remote storage of data backups.
1.4 Overview of VMware Consolidated Backup (VCB)
VMware Consolidated Backup enables LAN-free backup of virtual machines from a centralized proxy server.
Figure 1-2 VMware Consolidated Backup
VMware Consolidated Backup allows you to:
Integrate with existing backup tools and technologies already in place.
Perform full and incremental file backups of virtual machines.
Perform full image backup of virtual machines.
Centrally manage backups to simplify management of IT resources.
Improve Performance with Centralized Virtual Machine Backup:
Eliminate backup traffic from your network to improve the performance of production virtual machines.
Eliminate backup traffic with LAN-free virtual machine backup utilizing tape devices.
Reduce the load on the VMware ESX Server and allow it to run more virtual machines.
VMware Consolidated Backup is designed for all editions of VMware Infrastructure and is supported with all editions of VMware vSphere. For the next generation of VMware Consolidated Backup, optimized for VMware vSphere, see the vStorage APIs for Data Protection.
The following steps provide a short overview of the actual VCB backup process (see Figure 1-2 on page 8):

1. The VCB proxy server opens communication with vCenter Server (port 443, not 902).
2. A call is made to initiate a snapshot of a VM, as outlined in step 4.
3. The hostd daemon on the VMware ESX Server owning the guest responds to the request by quiescing the VM (running Pre-Freeze-Script.bat as well, if applicable).
4. The hostd creates a snapshot, and a disk buffer delta.vmdk file is created to contain all writes. A snapshot file gets created (quickly) as well.
5. The hostd instructs VM tools to run the post-thaw script, and the VMDK file is opened for export.
6. The copy or export is staged on the proxy server.
7. The backup software exports the data, or the data is copied to disk, for example: C:\mnt\
8. Data is deposited on disk (and can be moved via scripts or other third-party applications).
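As a rough illustration, the eight steps above can be modeled as a simple sequence. Everything in this sketch is hypothetical — the class and method names are invented for illustration and do not correspond to real VCB or vCenter APIs:

```python
# Illustrative model of the VCB backup sequence described above.
# All names are hypothetical; real VCB drives these steps through
# vCenter and the ESX hostd daemon, not through a Python API.

class VcbBackupModel:
    def __init__(self):
        self.log = []

    def run(self, vm_name):
        self.log.append(f"connect vCenter:443 for {vm_name}")        # step 1
        self.log.append("request snapshot")                          # step 2
        self.log.append("hostd quiesces guest (pre-freeze script)")  # step 3
        self.log.append("create delta.vmdk + snapshot file")         # step 4
        self.log.append("post-thaw script; open VMDK for export")    # step 5
        self.log.append(r"stage export on proxy at C:\mnt")          # steps 6-7
        self.log.append("backup software moves data off disk")       # step 8
        return self.log
```

The point of the model is the strict ordering: the guest is quiesced and snapshotted before any data leaves the host, so the exported image is crash-consistent while the VM keeps running against the delta file.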
For additional information follow this link:
http://www.vmware.com/support/vi3/doc/releasenotes_vcb103u1.html
1.5 Overview of VMware vCenter Site Recovery Manager (SRM)
VMware vCenter Site Recovery Manager (SRM) provides business continuity and disaster recovery protection for virtual environments. Protection can extend from individual replicated datastores to an entire virtual site. VMware's virtualization of the data center offers advantages that can be applied to business continuity and disaster recovery:

The entire state of a virtual machine (memory, disk images, I/O, and device state) is encapsulated. Encapsulation enables the state of a virtual machine to be saved to a file. Saving the state of a virtual machine to a file allows the transfer of an entire virtual machine to another host.
Hardware independence eliminates the need for a complete replication of hardware at the recovery site. Hardware running VMware ESX Server at one site can provide business continuity and disaster recovery protection for hardware running VMware ESX Server at another site. This eliminates the cost of purchasing and maintaining a system that sits idle until disaster strikes.

Hardware independence allows an image of the system at the protected site to boot from disk at the recovery site in minutes or hours instead of days.
SRM leverages array-based replication between a protected site and a recovery site, such as the IBM DS Enhanced Remote Mirroring functionality. The workflow that is built into SRM automatically discovers which datastores are set up for replication between the protected and recovery sites. SRM can be configured to support bi-directional protection between two sites.

SRM provides protection for the operating systems and applications encapsulated by the virtual machines running on VMware ESX Server.
An SRM server must be installed at the protected site and at the recovery site. The protected and recovery sites must each be managed by their own vCenter Server. The SRM server uses the extensibility of the vCenter Server to provide:

1. Access control
2. Authorization
3. Custom events
4. Event-triggered alarms
Figure 1-3 Data Recovery
SRM has the following prerequisites:

A vCenter Server installed at the protected site.
A vCenter Server installed at the recovery site.
Pre-configured array-based replication between the protected site and the recovery site.
Network configuration that allows TCP connectivity between SRM servers and vCenter servers.
An Oracle or SQL Server database that uses ODBC for connectivity in the protected site and in the recovery site.
An SRM license installed on the VC license server at the protected site and the recovery site.
Figure 1-4 SRM Layout
For additional information please follow this link:
http://www.vmware.com/products/srm/overview.html
For more detailed information, please visit the IBM DS Series Portal, which contains updated product materials and guides:
http://www.ibmdsseries.com/
ESG White Paper: Automated, Affordable DR Solutions with DS5000 & VMware SRM
http://www.ibmdsseries.com/index.php?option=com_docman&task=doc_download&gid=329&Itemid=239
ESG White paper: DS5000 Real-world Performance for Virtualized Systems
http://ibmdsseries.com/index.php?option=com_docman&task=doc_download&gid=167&Itemid=239
Chapter 2. Security Design of the VMware Infrastructure Architecture
This chapter covers the security design and associated items of the VMware Infrastructure Architecture.
VMware Infrastructure Architecture and Security Features
From a security perspective, VMware Infrastructure consists of several major components:
The virtualization layer, consisting of the VMkernel and the virtual machine monitor
The virtual machines
The VMware ESX Server service console
The VMware ESX Server virtual networking layer
Virtual storage
vCenter
2.2 Virtualization Layer
VMware ESX Server presents a generic x86 platform by virtualizing four key hardware components: processor, memory, disk, and network. An operating system is then installed into this virtualized platform. The virtualization layer, or VMkernel, is a kernel designed by VMware specifically to run virtual machines. It controls the hardware utilized by VMware ESX Server hosts and schedules the allocation of hardware resources among the virtual machines. Because the VMkernel is fully dedicated to supporting virtual machines and is not used for other purposes, the interface to the VMkernel is strictly limited to the API required to manage virtual machines. There are no public interfaces to the VMkernel, and it cannot execute arbitrary code.
The VMkernel alternates among all the virtual machines on the host in running the virtual machine instructions on the processor. Every time a virtual machine's execution is stopped, a context switch occurs. During the context switch, the processor register values are saved and the new context is loaded. When a given virtual machine's turn comes around again, the corresponding register state is restored.
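The save-and-restore cycle described above can be sketched as a toy round-robin scheduler. This is a conceptual model only; the register set and function names are invented for illustration and bear no relation to actual VMkernel internals:

```python
# Conceptual sketch of round-robin scheduling with per-VM register
# context save/restore, as described above. Not real VMkernel code.

from collections import deque

class VmContext:
    def __init__(self, name):
        self.name = name
        self.registers = {"eip": 0, "esp": 0}  # invented register subset

def run_slice(vm, steps):
    # "Execute" the VM for one time slice: advance its saved
    # instruction pointer. The real work happens on the CPU;
    # here we only model that the context makes progress.
    vm.registers["eip"] += steps

def schedule(vms, slices):
    ready = deque(vms)
    for _ in range(slices):
        vm = ready.popleft()   # restore this VM's saved context
        run_slice(vm, 1)       # run until its turn ends
        ready.append(vm)       # save context; next VM's turn
    return {vm.name: vm.registers["eip"] for vm in vms}
```

Running two contexts over four slices gives each VM two turns, illustrating how every VM's register state survives being repeatedly switched off and back onto the processor.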
Each virtual machine has an associated virtual machine monitor (VMM). The VMM uses binary translation to modify the guest operating system kernel code so it can run in a less-privileged processor ring. This is analogous to what a Java virtual machine does using just-in-time translation. Additionally, the VMM virtualizes a chip set for the guest operating system to run on. The device drivers in the guest cooperate with the VMM to access the devices in the virtual chip set. The VMM passes requests to the VMkernel to complete the device virtualization and support the requested operation.
2.3 CPU Virtualization
Binary translation is a powerful technique that can provide CPU virtualization with high performance. The VMM uses a translator with the following properties:

Binary: Input is binary x86 code, not source code.
Dynamic: Translation happens at run time, interleaved with execution of the generated code.
Note: The VMM utilized by VMware ESX Server is the same as the one used by other VMware products that run on host operating systems, such as VMware Workstation. Therefore, all comments related to the VMM apply to all VMware virtualization products.
On demand: Code is translated only when it is about to execute. This eliminates the need to differentiate code and data.
System level: The translator makes no assumptions about the code running in the virtual machine. Rules are set by the x86 architecture, not by a higher-level application binary interface.
Subsetting: The translator's input is the full x86 instruction set, including all privileged instructions; output is a safe subset (mostly user-mode instructions).
Adaptive: Translated code is adjusted in response to virtual machine behavior changes to improve overall efficiency.
During normal operation, the translator reads the virtual machine's memory at the address indicated by the virtual machine program counter, classifying the bytes as prefixes, opcodes, or operands to produce intermediate representation objects. Each intermediate representation object represents one guest instruction. The translator accumulates intermediate representation objects into a translation unit, stopping at 12 instructions or a terminating instruction (usually flow control). Buffer overflow attacks usually exploit code that operates on unconstrained input without doing a length check; the classical example is a string that represents the name of something.
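A minimal sketch of the accumulation rule described above — stop at 12 instructions or at a flow-control terminator — might look like this. The instruction representation and the set of terminators are assumptions made for illustration:

```python
# Sketch of accumulating decoded guest instructions into a translation
# unit, stopping at 12 instructions or at a terminating flow-control
# instruction, as the text describes. The mnemonics are illustrative.

MAX_UNIT = 12
FLOW_CONTROL = {"jmp", "call", "ret"}  # assumed terminator set

def build_translation_unit(instruction_stream):
    unit = []
    for insn in instruction_stream:
        unit.append(insn)
        if insn in FLOW_CONTROL or len(unit) == MAX_UNIT:
            break  # the bounded unit size keeps input length constrained
    return unit
```

The hard bound on unit size is the relevant security property: the translator never accumulates unconstrained input, which is exactly the condition buffer overflow attacks exploit.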
Similar design principles are applied throughout the VMM code. There are few places where the VMM operates on data specified by the guest operating system, so the scope for buffer overflows is much smaller than in a general-purpose operating system. In addition, VMware programmers develop the software with awareness of the importance of programming in a secure manner. This approach to software development greatly reduces the chance that vulnerabilities will be overlooked. To provide an extra layer of security, the VMM supports the buffer overflow prevention capabilities built in to most Intel and AMD CPUs, known as the NX or XD bit.

Intel's hyperthreading technology allows two process threads to execute on the same CPU package. These threads can share the memory cache on the processor. Malicious software can exploit this feature by having one thread monitor the execution of another thread, possibly allowing theft of cryptographic keys. VMware ESX Server virtual machines do not provide hyperthreading technology to the guest operating system. VMware ESX Server, however, can utilize hyperthreading to run two different virtual machines simultaneously on the same physical processor. Because virtual machines do not necessarily run on the same processor continuously, it is more challenging to exploit the vulnerability discussed above. However, if you want a virtual machine to be protected against the small chance of this type of attack, VMware ESX Server provides an option to isolate a virtual machine from hyperthreading. VMware knowledge base article 1728 provides further details on this topic.

Hardware manufacturers have begun to incorporate CPU virtualization capabilities into processors. Although the first generation of these processors does not perform as well as VMware's software-based binary translator, VMware will continue to work with the manufacturers and make appropriate use of their technology as it evolves.
2.4 Memory Virtualization
The RAM allocated to a virtual machine by the VMM is defined by the virtual machine's BIOS settings. The memory is allocated by the VMkernel when it defines the resources to be used by the virtual machine. A guest operating system uses physical memory allocated to it by the VMkernel and defined in the virtual machine's configuration file.
The operating system that executes within a virtual machine expects a zero-based physical
address space, as provided by real hardware. The VMM gives each virtual machine the illusion that it is using such an address space, virtualizing physical memory by adding an
extra level of address translation. A machine address refers to actual hardware memory, while a physical address is a software abstraction used to provide the illusion of hardware memory to a virtual machine. This paper uses "physical" in quotation marks to highlight this deviation from the usual meaning of the term.
The VMM maintains a pmap data structure for each virtual machine to translate "physical" page numbers (PPNs) to machine page numbers (MPNs). Virtual machine instructions that manipulate guest operating system page tables or translation lookaside buffer contents are intercepted, preventing updates to the hardware memory management unit. Separate shadow page tables, which contain virtual-to-machine page mappings, are maintained for use by the processor and are kept consistent with the physical-to-machine mappings in the pmap. This approach permits ordinary memory references to execute without additional overhead, since the hardware translation lookaside buffer caches direct virtual-to-machine address translations read from the shadow page table. As memory management capabilities are enabled in hardware, VMware will take full advantage of the new capabilities while maintaining the same strict adherence to isolation.
The extra level of indirection in the memory system is extremely powerful. The server can remap a "physical" page by changing its PPN-to-MPN mapping in a manner that is completely transparent to the virtual machine. It also allows the VMM to interpose on guest memory accesses. Any attempt by the operating system or any application running inside a virtual machine to address memory outside of what has been allocated by the VMM would cause a fault to be delivered to the guest operating system, typically resulting in an immediate system crash, panic, or halt in the virtual machine, depending on the operating system. This is often termed hyperspacing: a malicious guest operating system attempts I/O to an address space that is outside normal boundaries.
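The pmap-style extra level of translation and the fault behavior described above can be sketched as follows. The class names and the dictionary-based mapping are illustrative assumptions, not actual VMM data structures:

```python
# Sketch of the pmap-style extra level of address translation described
# above: guest "physical" page numbers (PPNs) map to machine page
# numbers (MPNs), and any access outside the allocated range faults.

class HyperspaceFault(Exception):
    """Guest touched a 'physical' page it was never allocated."""

class Pmap:
    def __init__(self, ppn_to_mpn):
        self._map = dict(ppn_to_mpn)

    def translate(self, ppn):
        try:
            return self._map[ppn]
        except KeyError:
            # In the real system this fault is delivered to the guest OS,
            # typically crashing or halting the virtual machine.
            raise HyperspaceFault(ppn)

    def remap(self, ppn, new_mpn):
        # Transparent remapping: the guest still sees the same PPN.
        self._map[ppn] = new_mpn
```

Note how `remap` changes the backing machine page without the guest's PPN view changing at all — the transparency property the text highlights.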
When a virtual machine needs memory, each memory page is zeroed out by the VMkernel before being handed to the virtual machine. Normally, the virtual machine then has exclusive use of the memory page, and no other virtual machine can touch it or even see it. The exception is when transparent page sharing is in effect.
Transparent page sharing is a technique for using memory resources more efficiently. Memory pages that are identical in two or more virtual machines are stored once in the host system's RAM, and each of the virtual machines has read-only access. Such shared pages are common, for example, if many virtual machines on the same host run the same operating system. As soon as any one virtual machine tries to modify a shared page, it gets its own private copy. Because shared memory pages are marked copy-on-write, it is impossible for one virtual machine to leak private information to another through this mechanism. Transparent page sharing is controlled by the VMkernel and VMM and cannot be compromised by virtual machines. It can also be disabled on a per-host or per-virtual machine basis.
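A toy model of transparent page sharing with copy-on-write might look like this; the content-hash approach and class names are assumptions made for illustration only:

```python
# Toy model of transparent page sharing with copy-on-write, as
# described above: identical pages are stored once, and a write
# produces a new private mapping instead of mutating the shared page.

import hashlib

class PageStore:
    def __init__(self):
        self.shared = {}  # content hash -> page bytes (stored once)

    def map_page(self, content):
        # Identical content from any VM collapses to one stored page;
        # the returned key acts as a read-only reference.
        key = hashlib.sha256(content).hexdigest()
        self.shared.setdefault(key, content)
        return key

    def write(self, key, new_content):
        # Copy-on-write: the shared page is never modified in place;
        # the writer simply gets a reference to its own new page.
        return self.map_page(new_content)
```

Because a write always yields a new mapping rather than mutating the shared page, one VM's modification can never be observed by another — the copy-on-write property the text describes.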
2.5 Virtual Machines
Virtual machines are the containers in which guest operating systems and their applications run. By design, all VMware virtual machines are isolated from one another. Virtual machine isolation is imperceptible to the guest operating system. Even a user with system administrator privileges or kernel system-level access on a virtual machine's guest operating system cannot breach this layer of isolation to access another virtual machine without privileges explicitly granted by the VMware ESX Server system administrator.
This isolation enables multiple virtual machines to run securely while sharing hardware and ensures both their ability to access hardware and their uninterrupted performance. For
example, if a guest operating system running in a virtual machine crashes, other virtual machines on the same VMware ESX Server host continue to run. The guest operating system crash has no effect on:
The ability of users to access the other virtual machines
The ability of the running virtual machines to access the resources they need
The performance of the other virtual machines
Each virtual machine is isolated from other virtual machines running on the same hardware.While virtual machines share physical resources such as CPU, memory, and I/O devices, a
guest operating system in an individual virtual machine cannot detect any device other thanthe virtual devices made available to it.
Because the VMkernel and VMM mediate access to the physical resources and all physicalhardware access takes place through the VMkernel, virtual machines cannot circumvent this
level of isolation. Just as a physical machine can communicate with other machines in anetwork only through a network adapter, a virtual machine can communicate with other virtual
machines running on the same VMware ESX Server host only through a virtual switch. Further, a virtual machine communicates with the physical network, including virtual
machines on other VMware ESX Server hosts, only through a physical network adapter.
In considering virtual machine isolation in a network context, you can apply these rules:
- If a virtual machine does not share a virtual switch with any other virtual machine, it is completely isolated from other virtual networks within the host.
- If no physical network adapter is configured for a virtual machine, the virtual machine is completely isolated from any physical networks.
- If you use the same safeguards (firewalls, antivirus software, and so forth) to protect a virtual machine from the network as you would for a physical machine, the virtual machine is as secure as the physical machine would be.
You can further protect virtual machines by setting up resource reservations and limits on the VMware ESX Server host. For example, through the fine-grained resource controls available in VMware ESX Server, you can configure a virtual machine so that it always gets at least 10 percent of the VMware ESX Server host's CPU resources, but never more than 20 percent. Resource reservations and limits protect virtual machines from performance degradation if another virtual machine tries to consume too many resources on shared hardware. For example, if one of the virtual machines on a VMware ESX Server host is incapacitated by a denial-of-service or distributed denial-of-service attack, a resource limit on that machine prevents the attack from taking up so many hardware resources that the other virtual machines are also affected. Similarly, a resource reservation on each of the virtual machines ensures that, in the event of high resource demands by the virtual machine targeted by the denial-of-service attack, all the other virtual machines still have enough resources to operate.
By default, VMware ESX Server imposes a form of resource reservation by applying a
distribution algorithm that divides the available host resources equally among the virtualmachines while keeping a certain percentage of resources for use by system components,such as the service console. This default behavior provides a degree of natural protectionfrom denial-of-service and distributed denial-of-service attacks. You set specific resource
reservations and limits on an individual basis if you want to customize the default behavior sothe distribution is not equal across all virtual machines on the host.
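As a sketch only, per-virtual-machine reservations and limits of this kind appear in the virtual machine's configuration (.vmx) file through the sched.* scheduler parameters. The parameter names below follow VMware's documented scheduler options; the values are purely illustrative, and in practice you would set them through the VMware Virtual Infrastructure Client rather than by editing the file:

```
sched.cpu.min = "500"
sched.cpu.max = "1000"
sched.cpu.shares = "normal"
sched.mem.min = "512"
sched.mem.max = "1024"
```

Here sched.cpu.min and sched.cpu.max express a CPU reservation and limit in MHz, and sched.mem.min and sched.mem.max express a memory reservation and limit in MB.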
2.6 Service Console
The VMware ESX Server service console provides an execution environment to monitor andadminister the entire VMware ESX Server host. The service console operating system is areduced version of Red Hat Enterprise Linux. Because it has been stripped of functionality
not necessary for interacting with the VMware ESX Server virtualization layer, not all
vulnerabilities of this distribution apply to the service console. VMware monitors and tracks allknown security exploits that apply to this particular reduced version and issues customupdates as and when needed.
If the service console is compromised, the virtual machines it interacts with might also be
compromised. This is analogous to an intruder gaining access to the ILOM service console ofa physical server. To minimize the risk of an attack through the service console, VMware ESX
Server protects the service console with a firewall. In addition, here are some of the otherways VMware ESX Server minimizes risks to the service console:
- VMware ESX Server runs only services essential to managing its functions, and the Linux distribution is limited to the features required to run VMware ESX Server.
- By default, VMware ESX Server is installed with a high security setting, which means that all outbound ports are closed and the only inbound ports that are open are those required for interactions with clients such as the VMware Virtual Infrastructure Client. VMware recommends that you keep this security setting unless the service console is connected to a trusted network.
- All communications from clients are encrypted through SSL by default. The SSL connection uses 256-bit AES block encryption and 1024-bit RSA key encryption.
- The Tomcat Web service, used internally by VMware ESX Server to support access to the service console by Web clients such as VMware Virtual Infrastructure Web Access, has been modified to run only those functions required for administration and monitoring by a Web client.
- VMware monitors all security alerts that could affect service console security and, if needed, issues a security patch, as it would for any other security vulnerability that could affect VMware ESX Server hosts. VMware provides security patches for Red Hat Enterprise Linux 3, Update 6 and later as they become available.
- Insecure services such as FTP and Telnet are not installed, and the ports for these services are closed by default.
- The number of applications that use a setuid or setgid flag has been minimized.
- VMware ESX Server supports SNMPv1, and the management information base is read-only. Nothing can be set through SNMP management calls.
Although the service console provides an avenue by which virtual machines can be manipulated, VMware ESX Server is designed to enable the administrator to place it on an entirely isolated internal network, through a separate VLAN or even an entirely separate network adapter. Thus, the risk of compromise can be managed in a straightforward way.
2.7 Virtual Networking Layer
The virtual networking layer consists of the virtual network devices through which virtual
machines and the service console interface with the rest of the network. VMware ESX Serverrelies on the virtual networking layer to support communications between virtual machinesand their users. In addition, VMware ESX Server hosts use the virtual networking layer to
communicate with iSCSI SANs, NAS storage, and so forth. The virtual networking layerincludes virtual network adapters and the virtual switches.
2.8 Virtual Switches
The networking stack was completely rewritten for VMware ESX Server using a modular
design for maximum flexibility. A virtual switch is built to order at run time from a collection ofsmall functional units, such as:
- The core Layer 2 forwarding engine
- VLAN tagging, stripping, and filtering units
- Virtual port capabilities specific to a particular adapter or a specific port on a virtual switch
- Layer 2 security, checksum, and segmentation offload units
When the virtual switch is built at run time, VMware ESX Server loads only those components
it needs. It installs and runs only what is actually needed to support the specific physical andvirtual Ethernet adapter types used in the configuration. This means the system pays the
lowest possible cost in complexity and hence makes the assurance of a secure architectureall the more possible.
2.9 Virtual Switch VLANs
VMware ESX Server supports IEEE 802.1Q VLANs, which you can use to further protect the virtual machine network, service console, or storage configuration. The VLAN driver is written by VMware software engineers according to the IEEE specification. VLANs let you segment a physical network so that two machines on the same physical network cannot send packets to or receive packets from each other unless they are on the same VLAN. There are three configuration modes for tagging (and untagging) virtual machine frames:
- Virtual machine guest tagging (VGT mode): You may install an 802.1Q VLAN trunking driver inside the virtual machine, and tags are preserved between the virtual machine networking stack and the external switch when frames are passed to or from virtual switches.
- External switch tagging (EST mode): You may use external switches for VLAN tagging. This is similar to a physical network, and the VLAN configuration is normally transparent to each individual physical server.
- Virtual switch tagging (VST mode): In this mode, you provision one port group on a virtual switch for each VLAN, then attach the virtual machine's virtual adapter to the port group instead of to the virtual switch directly. The virtual switch port group tags all outbound frames and removes tags for all inbound frames. It also ensures that frames on one VLAN do not leak into a different VLAN.
2.10 Virtual Ports
The virtual ports in VMware ESX Server provide a rich control channel for communication
with the virtual Ethernet adapters attached to them. VMware ESX Server virtual ports know authoritatively what the configured receive filters are for the virtual Ethernet adapters attached to them, which means no learning is required to populate forwarding tables.
Virtual ports also know authoritatively the hard configuration of the virtual Ethernet adapters attached to them. This capability makes it possible to set such policies as forbidding MAC address changes by the guest and rejecting forged MAC address transmission, because the virtual switch port knows for sure what is "burned into ROM" (actually, stored in the configuration file, outside the control of the guest operating system).

The policies available in virtual ports are much harder to implement with physical switches, if they are possible at all: either someone must manually program the ACLs into the switch port, or you must rely on such weak assumptions as "the first MAC address seen is correct."
The port groups used in VMware ESX Server do not have a counterpart in physical networks. You can think of them as templates for creating virtual ports with particular sets of specifications. Because virtual machines move from host to host, VMware ESX Server needs a way to specify, through a layer of indirection, that a given virtual machine should have a particular type of connectivity on every host on which it might run. Port groups provide this layer of indirection, enabling VMware Infrastructure to provide consistent network access to a virtual machine, wherever it runs.
Port groups are user-named objects that contain enough configuration information to provide
persistent and consistent network access for virtual Ethernet adapters:
- Virtual switch name
- VLAN IDs and policies for tagging and filtering
- Teaming policy
- Layer 2 security options
- Traffic shaping parameters
Thus, port groups provide a powerful way to define and enforce security policies for virtualnetworking.
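The template idea can be sketched in illustrative Python (not VMware code; the class fields mirror the configuration list above, and all names are invented):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PortGroup:
    """A user-named template: enough configuration to create an identically
    configured virtual port on whichever host a VM happens to run."""
    name: str
    vswitch: str
    vlan_id: int
    teaming_policy: str = "default"
    allow_mac_changes: bool = False    # a Layer 2 security option

def create_port(port_group, host):
    # Instantiating the template yields the same port configuration on any host.
    return {"host": host, "config": port_group}

pg = PortGroup(name="Production", vswitch="vSwitch1", vlan_id=105)
port_before = create_port(pg, "esx-host-01")
port_after = create_port(pg, "esx-host-02")   # e.g., after a VMotion migration
```

However the virtual machine moves between hosts, every port created from the same port group carries the same VLAN, teaming, and security settings.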
2.11 Virtual Network Adapters
VMware Infrastructure provides several types of virtual network adapters that guest operating
systems can use. The choice of adapter depends upon factors such as support by the guestoperating system and performance, but all of them share these characteristics:
- They have their own MAC addresses and unicast/multicast/broadcast filters.
- They are strictly layered Ethernet adapter devices.
- They interact with the low-level VMkernel layer stack via a common API.
Virtual Ethernet adapters connect to virtual ports when you power on the virtual machine on
which the adapters are configured, when you take an explicit action to connect the device, orwhen you migrate a virtual machine using VMotion. A virtual Ethernet adapter updates thevirtual switch port with MAC filtering information when it is initialized and whenever it
changes. A virtual port may ignore any requests from the virtual Ethernet adapter that would violate the Layer 2 security policy in effect for the port.
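The port-side enforcement can be illustrated with a small Python sketch (a conceptual model, not VMware code; all names are invented):

```python
class VirtualPort:
    """Illustrative model of a virtual port enforcing its Layer 2 policy."""
    def __init__(self, configured_mac, allow_mac_changes=False):
        self.configured_mac = configured_mac        # from the VM's config file,
        self.allow_mac_changes = allow_mac_changes  # outside guest control
        self.effective_mac = configured_mac

    def request_mac_change(self, new_mac):
        # A request that violates the policy is simply ignored by the port.
        if not self.allow_mac_changes:
            return False
        self.effective_mac = new_mac
        return True

port = VirtualPort("00:50:56:aa:bb:cc")               # MAC changes forbidden
accepted = port.request_mac_change("00:50:56:de:ad:00")
```

Because the port, not the guest, holds the authoritative configuration, the guest's attempted MAC change has no effect on the filters the switch applies.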
2.12 Virtual Switch Isolation
A common cause of traffic leaks in the world of physical switches is cascading, which is often needed because physical switches have a limited number of ports. Because virtual switches provide all the ports you need in one switch, there is no code to connect virtual switches.
VMware ESX Server provides no path for network data to go between virtual switches at all.
Therefore, it is relatively easy for VMware ESX Server to avoid accidental violations ofnetwork isolation or violations that result from malicious software running in a virtual machineor a malicious user. In other words, the VMware ESX Server system does not have
complicated and potentially failure-prone logic to make sure that only the right traffic travelsfrom one virtual switch to another; instead, it simply does not implement any path that anytraffic could use to travel between virtual switches. Furthermore, virtual switches cannot share
physical Ethernet adapters, so there is no way to fool the Ethernet adapter into doingloopback or something similar that would cause a leak between virtual switches.
In addition, each virtual switch has its own forwarding table, and there is no mechanism in the code to allow an entry in one table to point to a port on another virtual switch. In other words, every destination the switch looks up must match ports on the same virtual switch as the port where the frame originated, even if other virtual switches' lookup tables contain entries for that address.
A would-be attacker would likely have to find a remote code execution bug in the VMkernel to circumvent virtual switch isolation. Because VMware ESX Server parses so little of the frame data (primarily just the Ethernet header), this would be difficult.
There are natural limits to this isolation. If you connect the uplinks of two virtual switches
together, or if you bridge two virtual switches with software running in a virtual machine, youopen the door to the same kinds of problems you might see in physical switches.
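The per-switch forwarding tables can be sketched in illustrative Python (not VMware code; names invented). A lookup can only ever resolve to a port on the same switch, because there is simply no path to any other switch's table:

```python
class VirtualSwitch:
    def __init__(self, name):
        self.name = name
        self.table = {}                 # MAC address -> local port

    def attach(self, mac, port):
        # Populated authoritatively from port configuration, never learned.
        self.table[mac] = port

    def forward(self, dest_mac):
        """Resolve a destination against this switch's own table only."""
        return self.table.get(dest_mac)  # never consults another switch

sw1, sw2 = VirtualSwitch("vSwitch1"), VirtualSwitch("vSwitch2")
sw1.attach("00:50:56:aa:00:01", "port1")
sw2.attach("00:50:56:aa:00:02", "port2")
reachable = sw1.forward("00:50:56:aa:00:01")
isolated = sw1.forward("00:50:56:aa:00:02")   # exists only on sw2
```

A destination that lives on another virtual switch simply fails to resolve; there is no fallback logic that could be subverted.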
2.13 Virtual Switch Correctness
It is important to ensure that virtual machines or other nodes on the network cannot affect the
behavior of the virtual switch.
VMware ESX Server guards against such influences in the following ways:
- Virtual switches do not learn from the network in order to populate their forwarding tables. This eliminates a likely vector for denial-of-service (DoS) or leakage attacks, either as a direct DoS attempt or, more likely, as a side effect of some other attack, such as a worm or virus, as it scans for vulnerable hosts to infect.
- Virtual switches make private copies of any frame data used to make forwarding or filtering decisions. This is a critical feature and is unique to virtual switches.
It is important to ensure that frames are contained within the appropriate VLAN on a virtualswitch. VMware ESX Server does so in the following ways:
- VLAN data is carried outside the frame as it passes through the virtual switch. Filtering is a simple integer comparison. This is really just a special case of the general principle that the system should not trust user-accessible data.
- Virtual switches have no dynamic trunking support.
- Virtual switches have no support for what is referred to as native VLAN.
Dynamic trunking and native VLAN are features in which an attacker may find vulnerabilitiesthat could open isolation leaks. This is not to say that these features are inherently insecure,
but even if they are implemented securely, their complexity may lead to misconfiguration and open an attack vector.
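The two correctness mechanisms above, private copies of frame data and integer VLAN comparison, can be sketched together in illustrative Python (a conceptual model, not VMware code; frame fields are invented):

```python
def forwarding_decision(frame):
    """Decide forwarding using private copies of the relevant fields only."""
    vlan = int(frame["vlan"])     # VLAN filtering is a simple integer comparison
    dest = str(frame["dest"])     # private copy of the destination address
    return {"vlan": vlan, "dest": dest}

frame = {"vlan": 105, "dest": "00:50:56:aa:00:01", "payload": "data"}
decision = forwarding_decision(frame)
frame["vlan"] = 42                # guest mutates its buffer after the handoff
```

Because the decision was made on private copies, the guest's later tampering with its own frame buffer cannot retroactively change where the frame goes.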
2.14 Virtualized Storage
VMware ESX Server implements a streamlined path to provide high-speed and isolated I/O
for performance-critical network and disk devices. An I/O request issued by a guest operatingsystem first goes to the appropriate driver in the virtual machine. For storage controllers,
VMware ESX Server emulates LSI Logic or BusLogic SCSI devices, so the correspondingdriver loaded into the guest operating system is either an LSI Logic or a BusLogic driver. The
driver typically turns the I/O requests into accesses to I/O ports to communicate to the virtualdevices using privileged IA-32 IN and OUT instructions. These instructions are trapped by thevirtual machine monitor, and then handled by device emulation code in the virtual machine
monitor based on the specific I/O port being accessed. The virtual machine monitor then calls device-independent network or disk code to process the I/O. For disk I/O, VMware ESX
Server maintains a queue of pending requests per virtual machine for each target SCSIdevice. The disk I/O requests for a single target are processed in round-robin fashion across
virtual machines by default. The I/O requests are then sent down to the device driver loadedinto VMware ESX Server for the specific device on the physical machine.
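The default round-robin servicing of per-VM request queues can be sketched in illustrative Python (not VMware code; names and request labels are invented):

```python
from collections import deque

def round_robin_service(queues):
    """Service one pending request per VM in turn until all queues drain."""
    order = deque(queues.keys())
    while order:
        vm = order.popleft()
        if queues[vm]:
            yield vm, queues[vm].popleft()
            order.append(vm)            # the VM goes to the back of the rotation

# Two VMs with pending requests for the same target SCSI device.
queues = {"vm1": deque(["read-a1", "read-a2"]), "vm2": deque(["write-b1"])}
serviced = list(round_robin_service(queues))
```

Each virtual machine gets a turn before any other machine gets a second one, so a VM with a deep queue cannot starve its neighbors on the same target.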
2.15 SAN Security
A host running VMware ESX Server is attached to a Fibre Channel SAN in the same way thatany other host is. It uses Fibre Channel HBAs, with the drivers for those HBAs installed in thesoftware layer that interacts directly with the hardware. In environments that do not include
virtualization software, the drivers are installed on the operating system, but for VMware ESXServer, the drivers are installed in the VMware ESX Server VMkernel. VMware ESX Server
also includes VMware Virtual Machine File System (VMware VMFS), a distributed file system
and volume manager that creates and manages virtual volumes on top of the LUNs that are presented to the VMware ESX Server host. Those virtual volumes, usually referred to as virtual disks, are allocated to specific virtual machines.
Virtual machines have no knowledge or understanding of Fibre Channel. The only storage
available to virtual machines is on SCSI devices. Put another way, a virtual machine does not have virtual Fibre Channel HBAs but only virtual SCSI adapters. Each virtual machine is able to see only the virtual disks that are presented to it on its virtual SCSI adapters. This isolation is complete, with regard to both security and performance. A VMware virtual machine has no visibility into the WWN (worldwide name), the physical Fibre Channel HBAs,
or even the target ID or other information about the LUNs upon which its virtual disks reside.The virtual machine is isolated to such a degree that software executing in the virtual machine
cannot even detect that it is running on a SAN fabric. Even multipathing is handled in a way
that is transparent to a virtual machine. Furthermore, virtual machines can be configured to limit the bandwidth they use to communicate with storage devices. This prevents the possibility of a denial-of-service attack against other virtual machines on the same host by one virtual machine taking over the Fibre Channel HBA.
Consider the example of running a Microsoft Windows operating system inside a VMwarevirtual machine. The virtual machine sees only the virtual disks chosen by the ESX Server
administrator at the time the virtual machine is configured. This operation of configuring a virtual machine to see only certain virtual disks is effectively LUN masking in the virtualized environment. It has the same security benefits as LUN masking in the physical world; it is just
done with a different set of tools.
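The masking behavior can be sketched in illustrative Python (a conceptual model, not VMware code; class and disk names are invented):

```python
class VirtualDiskAllocator:
    """Illustrative model: each VM can enumerate only the virtual disks
    allocated to it, never the underlying LUNs or Fibre Channel details."""
    def __init__(self):
        self.allocations = {}              # VM name -> list of virtual disks

    def allocate(self, vm, vdisk):
        self.allocations.setdefault(vm, []).append(vdisk)

    def visible_disks(self, vm):
        # This lookup is the whole of what the VM can "see" of the SAN.
        return list(self.allocations.get(vm, []))

alloc = VirtualDiskAllocator()
alloc.allocate("winvm", "winvm-disk0.vmdk")
alloc.allocate("linuxvm", "linuxvm-disk0.vmdk")
seen = alloc.visible_disks("winvm")
```

The Windows virtual machine in the example above enumerates only its own virtual disk; the other VM's disk, and everything below the VMFS layer, is simply not part of its world.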