Analysis: Extend a Fibre Channel SAN and Leverage Virtual Infrastructure via iSCSI




    Author: Jack Fegreus, Ph.D.

    Chief Technology Officer

    openBench Labs

    http://www.openBench.com

    October 30, 2007

Jack Fegreus is Chief Technology Officer at openBench Labs, which consults with a number of independent publications. He currently serves as CTO of Strategic Communications, Editorial Director of Open magazine and contributes to InfoStor and Virtualization Strategy. He has served as Editor in Chief of Data Storage, BackOffice CTO, Client/Server Today, and Digital Review. Previously Jack served as a consultant to Demax Software and was IT Director at Riley Stoker Corp. Jack holds a Ph.D. in Mathematics and worked on the application of computers to symbolic logic.


Table of Contents

Executive Summary

Assessment Scenario

Real Performance, Virtual Advantage

Concentrator Value


Executive Summary

For cost-conscious IT decision makers, StoneFly Storage Concentrators incorporate a virtualization engine for storage provisioning and management in order to add another important advantage: the ability to cut operating costs.

For iSCSI storage networking over standard Gigabit Ethernet connections, the StoneFly Storage Concentrator i4000 is an appliance for providing storage provisioning using iSCSI over an Ethernet LAN. Via the StoneFusion OS, a specialized OS built on the Linux kernel, the StoneFly iSCSI Storage Concentrator integrates the power of an iSCSI router with extensive management services. As a result, this StoneFly appliance presents IT with an exceptional mechanism for extending the benefits of an existing Fibre Channel SAN to a much broader base of clients. Not the least of these extended clients are virtual machines (VMs) running in a VMware Virtual Infrastructure (VI).

IT can quickly install one or more of the StoneFly Storage Concentrators utilizing existing Ethernet and FC infrastructure. Once installed, IT can leverage the concentrator's storage-provisioning engine to provide advanced storage management, business continuity, and disaster recovery functions. In particular, StoneFusion is quite robust in providing storage virtualization, both synchronous and asynchronous mirroring, snapshots, and active/active clustering. Moreover, IT can leverage the appliance's support for heterogeneous hosts and storage devices to increase the utilization of storage resources via storage pooling.

openBench Labs Test Briefing: StoneFly Storage Concentrator i4000

1) Logical volume management services: The StoneFly Storage Concentrator presents administrators with a uniform logical representation of physical storage resources to simplify operations.

2) Web-based GUI for storage provisioning: System administrators create iSCSI target volumes by allocating blocks of storage and authorize the use of those volumes by individual host systems via an HTML interface resident on the Storage Concentrator.

3) Higher I/O operations per second: Intelligent iSCSI storage packet routing processes data and commands concurrently, increasing system efficiency and storage throughput.

4) Volume copying: To support content distribution, such as the distribution of a VM from a template, a copy volume function makes an exact copy of a spanned volume, a mirror volume, or a Snapshot Live Volume.

5) Image mirroring: To support business continuity functions with no single point of failure, StoneFly Reflection provides administrators with an easy way to create, detach, reattach, and promote mirror images of volumes.


Maximizing storage resource utilization is extremely important for CIOs, who are frequently under the gun to provide a more demonstrably responsive IT infrastructure to meet rapidly accelerating changes in business cycles. As a result of that pressure, IT must frequently deploy new resources or repurpose existing resources. More importantly, it is not the acquisition of resources so much as the management of those resources that is the biggest driver of IT costs. The general rule of thumb is that operating costs for managing storage on a per-gigabyte basis are three to ten times greater than the capital costs of storage acquisition. That's because provisioning and management tasks associated with storage resources are highly labor-intensive and often burdened by bureaucratic inefficiencies.

With regard to IT management costs, the 2006 McKinsey survey of senior IT executives revealed that systems and storage virtualization had become critically important to CIOs. What makes virtualization a top-of-mind proposition for CIOs today is the ability of virtual devices to be isolated from the constraints of physical limitations. By separating function from physical implementation, IT can manage that resource as a generic device based on its function. That means system administrators can narrow their operations focus from a plethora of proprietary devices to a limited number of generic resource pools.

That's why system and storage virtualization share the spotlight in the McKinsey CIO survey. What's more, deriving the maximal benefits from system virtualization in a VI environment requires storage virtualization as a necessary prerequisite. The issues of availability and mobility of both a VM and its data play an important role in such daily operational tasks as load balancing and system testing. Not surprisingly, VM availability and mobility really rise to the forefront in a disaster recovery scenario. The image of files stranded on storage directly attached to a nonfunctional server makes a bad poster for high availability.

SAN technology has long been the premier means of consolidating storage resources and streamlining management in large data centers. Nonetheless, storage virtualization for physical servers and commercial operating systems, such as Microsoft Windows and Linux, is burdened with complexity because most commercial operating systems assume exclusive ownership of storage volumes.

Storage virtualization in a VI environment, however, is a much simpler proposition as the file system for VMware ESX, dubbed VMFS, eliminates the burning issue of exclusive volume ownership. By handling distributed


file locking between systems, VMFS renders the issue of volume ownership moot. That opens the door to using iSCSI to extend the benefits of physical and functional separation via a cost-effective lightweight SAN. As a result,

    iSCSI has become de rigueur in large datacenters for ESX servers.

More importantly for cost-conscious IT decision makers, StoneFly Storage Concentrators incorporate a storage virtualization engine for storage provisioning and management in order to add another important advantage: the ability to cut operating costs. System administrators can use the StoneFusion management GUI to perform critical storage management tasks from virtualization to the creation of volume copies and snapshots and even the configuration of synchronous and asynchronous mirrors. As a result, a system administrator servicing an iSCSI client can directly handle the labor-intensive storage management tasks that would normally require coordination with a storage administrator.


Assessment Scenario

By performing all partitioning and management functions for virtual storage volumes on the iSCSI concentrator and not on the FC array, openBench Labs was able to leverage key capabilities of StoneFusion to reduce operating costs by enabling system administrators to carry out tasks that normally require coordination with a storage administrator.

    STAND-ALONE PHYSICAL SERVER TESTING

To assess the StoneFly Storage Concentrator i4000, openBench Labs set up two test scenarios. In the initial scenario, we concentrated on determining performance parameters for traditional physical servers. In this scenario, we ran Windows Server 2003 SP2 and Novell SUSE Linux Enterprise Server (SLES) 10 SP1 on an HP ProLiant ML350 G3 server. This server sported a 2.4GHz Xeon processor, 2GB of RAM, and an embedded Gigabit Ethernet TOE. We also installed a QLogic 4050 hardware iSCSI HBA.

In our second scenario, we used our initial test results as a template for server consolidation. Utilizing two quad-processor servers running ESX 3.0.1, openBench Labs tested iSCSI performance on an ESX host server in supporting a VM datastore hosting a virtual work volume. These tests were done in the context of replacing an HP ProLiant ML350 G3 server with a VM. In addition, we tested the volume copy and advanced image management functionality of StoneFusion in


our VI environment. In those tests, we assessed the StoneFusion functions as a means of enhancing the distribution of VM operating systems from templates and bolstering business continuity for disaster recovery.

Along with our StoneFly i4000 iSCSI Storage Concentrator on the iSCSI side of our SAN fabric, we employed a NETGEAR GSM7324 Layer 3 managed Gigabit Ethernet switch and several QLogic 4050 iSCSI HBAs. We employed the QLogic iSCSI HBA to maximize throughput from the StoneFly i4000 by eliminating all of the overhead associated with iSCSI packet processing.

On the Fibre Channel side of our fabric, we utilized a QLogic SANbox 9200 switch, an nStor 4540 storage array, and an IBM DS4100 storage array. We chose the IBM TotalStorage DS4100 as the primary array for providing backend storage for two reasons: its large storage capacity and its robust I/O caching capability.

To support numerous iSCSI client systems, storage capacity is often a primary concern when configuring an iSCSI fabric. Using low-cost, high-capacity SATA drives, we were able to configure our IBM DS4100 array with 3.2TB of storage. From that pool, we assigned 1.6TB to the StoneFly i4000 in bulk via a single LUN.


The StoneFusion management GUI provides a "Discover" button, which is used to launch a process that automatically discovers new storage resources. What's more, StoneFusion also automatically discovers any HTML-based management utilities. That provided us with the ability to bring up StorView, the storage management GUI for the nStor FC-FC array, directly from within StoneFusion.


For our tests, however, rapid response to excessively high numbers of I/O operations per second (IOPS) trumps capacity. That's because our oblLoad benchmark generates high numbers of IOPS to stress all the components of a SAN fabric. With respect to our analysis, the IBM DS4100 provides an excellent balance of capacity with I/O responsiveness. For I/O performance, our DS4100 sports two independent controllers, each of which features a highly configurable 1GB cache and dual 2Gbit FC ports.

By performing all partitioning and management functions for virtual storage volumes on the iSCSI concentrator and not on the FC array, openBench Labs was able to leverage key capabilities of StoneFusion to reduce operating costs by enabling system administrators to carry out tasks that normally require coordination with a storage administrator. In particular, we were able to consolidate storage from multiple FC arrays into a pool that could be managed from the StoneFly i4000.


For a uniform test environment, we configured all volumes that would be used in benchmark tests using 1.6TB of storage imported from an IBM DS4100 array. In particular, we consumed 750GB in creating a number of 25GB partitions to support VM operating systems and 50GB partitions to support user data for applications on both VM and physical systems. More importantly, we could now use all of the advanced provisioning features that are part of the StoneFusion OS. This proved to be extremely important when working with VMs.


More importantly, we were able to configure logical volumes, dubbed resource targets in the iSCSI vernacular, and export them to client systems without any regard for the sources of the blocks within the pool.
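To make that pooling concept concrete, here is a minimal Python sketch of the general idea: blocks imported from one or more backend LUNs are treated as a single pool, and logical volumes are carved out of that pool without regard to where their extents physically reside. This is purely an illustration of the technique; the class names, LUN name, and volume names are hypothetical and do not reflect StoneFusion's internal design.

```python
# Conceptual sketch (not StoneFusion code): pool blocks imported from backend
# LUNs and carve iSCSI target volumes out of the pool.
from dataclasses import dataclass, field

@dataclass
class BackendLun:
    name: str          # e.g., the bulk LUN imported from an FC array
    size_gb: int
    used_gb: int = 0

@dataclass
class StoragePool:
    luns: list = field(default_factory=list)
    volumes: dict = field(default_factory=dict)

    def add_lun(self, lun: BackendLun) -> None:
        self.luns.append(lun)

    def create_volume(self, name: str, size_gb: int) -> dict:
        """Allocate extents for an iSCSI target volume from any backend LUNs."""
        extents, remaining = [], size_gb
        for lun in self.luns:
            take = min(lun.size_gb - lun.used_gb, remaining)
            if take > 0:
                extents.append((lun.name, take))
                lun.used_gb += take
                remaining -= take
            if remaining == 0:
                break
        if remaining:
            raise ValueError(f"pool exhausted: {remaining}GB short for {name}")
        self.volumes[name] = extents
        return {"target": name, "extents": extents}

pool = StoragePool()
pool.add_lun(BackendLun("DS4100-LUN0", 1600))   # bulk FC storage assigned to the concentrator
print(pool.create_volume("VM-Win02", 25))       # a 25GB OS volume for a VM
print(pool.create_volume("Data01", 50))         # a 50GB data volume
```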

To maintain consistency in benchmark performance, which is highly dependent on the disk drive characteristics, controller caching, and RAID configuration associated with the underlying storage array, openBench Labs created all volumes that would be used for performance benchmarking explicitly with disk blocks imported via the 1.6TB LUN from the DS4100 array.

Using the StoneFusion management GUI, we provisioned logical volumes for benchmarking manually. In this way, we had complete control over the source of disk blocks from the resource pool of FC-based storage that had been created on the StoneFly i4000.

    BENCHMARK BASICS

Like all other storage transport protocols, iSCSI performance has two dimensions: data throughput, which is typically measured in MB per second, and data accessibility, which is measured in I/O operations completed per second (IOPS). To assess overall iSCSI performance, we ran our oblDisk and oblLoad benchmarks, which measure throughput and accessibility respectively.

The oblDisk benchmark simulates high-end multimedia I/O operations, especially video-related operations, by reading data sequentially using a range of I/O request sizes from 4KB to 128KB. In contrast, the oblLoad benchmark simulates database access in a high-volume, transaction-processing environment using small, typically 8KB, I/O requests, which are random within defined localities. In particular, oblLoad measures the total number of IOPS that can be completed with the constraint that average response time never exceeds 100ms. In so doing, oblLoad generates much more overhead for a host system than oblDisk.

As a system running oblLoad generates greater numbers of IOPS, a storage system that can keep pace fulfilling those requests will in turn create more overhead on the requesting system, which must process more




network packets and SCSI commands. To eliminate this overhead from our host server, we installed a QLogic iSCSI HBA for use in physical server tests. In addition to the TCP packet processing that a TOE offloads, the QLogic HBA also handles the processing of the embedded SCSI packets.

The oblLoad benchmark launches an increasing number of disk I/O daemons that initiate a series of read/write requests, typically 8KB in size. One portion of the requests is directed at a fixed hot spot representing the index tables of a database. The remaining portion is randomly distributed over the entire volume.
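As a rough illustration of that access pattern, the following Python sketch launches a growing number of worker processes that issue small random reads, directs a fixed share of them at a hot-spot region, and reports IOPS together with average response time against the 100ms cap. It is a read-only stand-in written for this discussion, not the oblLoad code; the device path, hot-spot size, request mix, and run time are all assumed values.

```python
# Illustrative, read-only sketch of the access pattern described above.
import random, time
from multiprocessing import Process, Queue

VOLUME = "/dev/sdb"          # hypothetical raw test volume; adjust before running
IO_SIZE = 8 * 1024           # 8KB requests
HOT_FRACTION = 0.3           # share of requests aimed at the hot spot
HOT_SPOT_BYTES = 256 << 20   # 256MB region standing in for index tables
RUNTIME_S = 10

def daemon(vol_bytes: int, results: Queue) -> None:
    ops, total_latency = 0, 0.0
    with open(VOLUME, "rb", buffering=0) as dev:
        end = time.time() + RUNTIME_S
        while time.time() < end:
            if random.random() < HOT_FRACTION:
                offset = random.randrange(0, HOT_SPOT_BYTES, IO_SIZE)
            else:
                offset = random.randrange(0, vol_bytes - IO_SIZE, IO_SIZE)
            start = time.time()
            dev.seek(offset)
            dev.read(IO_SIZE)
            total_latency += time.time() - start
            ops += 1
    results.put((ops, total_latency))

if __name__ == "__main__":
    vol_bytes = 50 << 30                      # assume a 50GB test volume
    for daemons in (1, 2, 4, 8, 16):          # increasing number of I/O daemons
        results = Queue()
        procs = [Process(target=daemon, args=(vol_bytes, results)) for _ in range(daemons)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        ops, lat = 0, 0.0
        for _ in procs:
            o, l = results.get()
            ops, lat = ops + o, lat + l
        avg_ms = 1000 * lat / ops if ops else 0.0
        flag = "  (over 100 ms cap)" if avg_ms > 100 else ""
        print(f"{daemons:2d} daemons: {ops / RUNTIME_S:8.0f} IOPS, avg {avg_ms:5.1f} ms{flag}")
```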

That hot spot provides a means to test the caching capabilities of the underlying storage system. As the number of disk daemons increases, so too should the effectiveness of the array controller's caching increase within the hot spot. As earlier noted, the IBM DS4100 storage system's robust ability to support the dynamic tuning of cache performance is precisely why we chose that array to support our tests.

    VIRTUAL CONSOLIDATION

The standalone tests on the HP ProLiant ML350 G3 server also provided an interesting case study for server consolidation through system, storage and network virtualization. Virtualization extends the power of IT to innovate by providing the means to leverage logical representations of resources. Whether through aggregation or deconstruction, virtualized resources are not restricted by physical configuration, implementation, or geographic location. That makes a virtual representation more powerful and able to provide greater benefits than the original physical configuration. When maximally exploited by IT, virtualization becomes a platform for innovation for which the benefits move far beyond basic reductions in the total cost of ownership (TCO).

Scattered application servers and data storage systems often reduce administrator productivity and increase vulnerability. In responding to those issues, many sites began consolidating physical servers into farms of 1U and 2U servers in rows of racks. Nonetheless, that wave of consolidation did little to help improve resource utilization and often made matters worse by creating serious environmental issues centered on power and cooling.

As a result, IT is moving away from physical server consolidation and toward virtual server consolidation. With 4 to 8 virtual servers running on a single physical server, IT can centralize resources, address growing


datacenter environmental issues, and make dramatic improvements in resource utilization. What's more, system virtualization compounds the opportunities to leverage both the operational and performance efficiencies of a SAN.

To assess the performance of the StoneFly i4000 in a VI environment, openBench Labs set up two quad-processor servers: an HP ProLiant DL580 G3 and a Dell 1900. Both servers ran VMware ESX v3.0.1 and hosted from one to four simultaneous VMs that were running either Windows Server 2003 SP2 or SUSE Linux Enterprise Server 10 SP1.

For IT to get the maximum value from a VM, any constraints that bind that VM to a physical server should be avoided. First and foremost, there will be the need to handle load balancing and failover of virtual machines. In addition, there will be the need to move VM configurations in and out of development, test, and production environments. What's more, VMotion now makes it easy to move virtual machines dynamically among host servers running ESX 3.


That means all virtual machines on all physical hosts must be capable of accessing the same storage resources, and that makes a SAN essential. Nonetheless, it is the advanced capabilities of VMware to leverage SAN storage that make a lightweight iSCSI SAN an almost defining characteristic for VMware sites.

On each server running ESX, we set up a virtual switch-based LAN using two gigabit TOEs, which were teamed by ESX. Similarly, the StoneFly i4000 automatically teamed its two TOEs.

On the ESX server's virtual LAN, we created a VMware kernel port for the VMware software initiator to enable iSCSI connections. In addition, we also installed a QLogic iSCSI HBA on each ESX server. Within the VI console, the iSCSI HBA immediately appeared as an iSCSI-based Storage Adapter. Through either the hardware HBA or the software initiator, ESX handled every iSCSI connection.

The StoneFly i4000 also distinguished each of the iSCSI initiators on each of the ESX servers as separate hosts. As a result, we were able to use the StoneFly management GUI to assign read-write access rights for volumes explicitly to either the ESX server's software initiator or the QLogic iSCSI HBA. In turn, the VI Client properly displayed every iSCSI target exported from the StoneFly i4000 as connected to the appropriate iSCSI host initiator. What's more, as we




created more volumes on the StoneFly i4000 and granted access to an initiator associated with a particular ESX server, a rescan of storage adapters on the VI Client would make them visible.

StoneFusion uses the unique ID of each iSCSI initiator on a client host as the primary means to control access to virtual volumes. With a QLogic iSCSI HBA installed on our HP ProLiant DL580 server, the VMware software initiator and the iSCSI HBA appeared as separately addressable hosts. When authorizing access to a volume, the Challenge-Handshake Authentication Protocol (CHAP) can be invoked in conjunction with the iSCSI initiator ID for added security. For our volume Win02, which contained a VM running Windows Server 2003, we granted full access to both of our ESX servers via their VMware iSCSI initiator. The VMFS DLM ensured that only one server at a time could open and start the Win02 VM image.
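That access-control model can be sketched in a few lines of Python: each volume carries a list of initiator IQNs that are allowed to use it, and a CHAP check (per RFC 1994) can optionally be layered on top of the initiator ID. This is a conceptual illustration only; the IQNs, volume names, and secret are invented, and the code does not represent StoneFusion's actual implementation.

```python
# Conceptual sketch of initiator-based access control with an optional CHAP check.
import hashlib, hmac, os

acl = {
    # volume   -> {initiator IQN: access}; all IQNs here are hypothetical
    "VM-Win02": {
        "iqn.1998-01.com.vmware:esx1-swiscsi": "read-write",
        "iqn.1998-01.com.vmware:esx2-swiscsi": "read-write",
    },
    "Data01": {
        "iqn.2000-04.com.qlogic:qla4050.esx1": "read-write",
    },
}

chap_secrets = {"iqn.1998-01.com.vmware:esx1-swiscsi": b"example-secret"}

def chap_response(secret: bytes, challenge: bytes, ident: int) -> bytes:
    # CHAP response per RFC 1994: MD5 over identifier + secret + challenge.
    return hashlib.md5(bytes([ident]) + secret + challenge).digest()

def authorize(volume: str, initiator: str, response: bytes = b"",
              challenge: bytes = b"", ident: int = 0) -> str:
    access = acl.get(volume, {}).get(initiator)
    if access is None:
        return "denied: unknown initiator"
    secret = chap_secrets.get(initiator)
    if secret is not None:
        expected = chap_response(secret, challenge, ident)
        if not hmac.compare_digest(expected, response):
            return "denied: CHAP failure"
    return access

challenge, ident = os.urandom(16), 1
resp = chap_response(b"example-secret", challenge, ident)
print(authorize("VM-Win02", "iqn.1998-01.com.vmware:esx1-swiscsi", resp, challenge, ident))
print(authorize("VM-Win02", "iqn.2000-04.com.qlogic:qla4050.esx1"))
```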

On host servers running VMware ESX 3, physical resources are aggregated and presented to system administrators as shared pools of uniform devices. All of the target iSCSI volumes exported to either the software iSCSI initiator or the iSCSI HBA were pooled by the ESX server and presented to the virtual machines as direct-attached SCSI disks.

Whether connected to the ESX server via the VMware initiator or the QLogic iSCSI HBA, all storage resources, such as our VM-Win02 volume, were aggregated into a virtual storage pool under ESX and presented to the VMs as direct-attached SCSI disks.

More importantly, storage virtualization in a VMware Virtual Infrastructure (VI) environment is a far less complex proposition than storage virtualization in an FC SAN using physical systems. Commercial operating systems, such as Microsoft Windows and Linux, assume exclusive ownership of their storage volumes. As a result, neither Windows nor Linux incorporates a distributed file locking mechanism in its file system. A distributed lock manager (DLM) is essential if multiple systems are to maintain a consistent view of a volume's contents. Without a DLM, virtualization of volume ownership is the only means of preventing the corruption of disk volumes. That has made SAN management the exclusive domain of storage administrators at most enterprise-class sites working with physical systems.

On the other hand, the file system for ESX, dubbed VMFS, has a built-in mechanism to handle distributed file locking. Thanks to that mechanism, exclusive volume ownership is not a burning issue in a VI environment. What's more, VMFS avoids the massive overhead that a DLM typically imposes: VMFS simply treats each disk volume as a single-file image in a way that is loosely analogous to an ISO-formatted CD-ROM. When a VM's OS mounts a disk, it opens a disk-image file;




    VMFS locks that file; and the VM's OS gains exclusive ownership of thedisk volume.
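A loose analogy for that behavior, assuming a POSIX host rather than VMFS itself, is an exclusive advisory lock on the disk-image file: whichever host or process takes the lock first owns the image, and any later attempt to open it for use fails cleanly. The disk-image file name below is hypothetical.

```python
# Conceptual analogy only (not VMFS internals): a per-image advisory lock lets
# exactly one opener "power on" a VM disk image at a time.
import errno, fcntl

def power_on(disk_image_path: str):
    """Open a VM disk image and take an exclusive, non-blocking lock on it."""
    handle = open(disk_image_path, "r+b")
    try:
        fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError as exc:
        handle.close()
        if exc.errno in (errno.EACCES, errno.EAGAIN):
            raise RuntimeError("image already locked: another owner has this VM") from exc
        raise
    return handle   # holding the handle keeps the lock; closing it releases ownership

if __name__ == "__main__":
    vm_disk = "Win02.vmdk"                    # hypothetical disk-image file
    open(vm_disk, "ab").close()               # make sure the example file exists
    owner = power_on(vm_disk)                 # first open succeeds and takes the lock
    try:
        power_on(vm_disk)                     # a second open of the same image...
    except RuntimeError as err:               # ...is refused while the lock is held
        print(err)
    owner.close()
```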

With the issue of volume ownership moot, iSCSI becomes a perfect way to extend the benefits of physical and functional separation via a more cost-effective, easy-to-manage, lightweight IP SAN fabric. That has made iSCSI de rigueur for ESX servers in large datacenters.

By using the StoneFly i4000 Storage Concentrator running the StoneFusion OS to anchor an iSCSI fabric, IT can limit the involvement of storage administrators with the iSCSI fabric. A storage administrator will only be needed to provision the iSCSI concentrator with bulk storage from an FC SAN array. System administrators can easily manage the storage provisioning needs of their iSCSI client systems, including ESX servers, by invoking the storage provisioning functions within StoneFusion.

Real Performance, Virtual Advantage

With both physical and virtual systems sustaining loads of 10,000 IOPS using 8KB data packets, the StoneFly i4000 provided exceptional performance in routing FC data traffic over a 1-Gb Ethernet fabric via iSCSI.

PHYSICAL BASELINE

We began testing on an HP ProLiant ML350 G3 server running Windows Server 2003. Thanks to Microsoft's freely available software initiator, systems running a Windows OS have become the premier platform for iSCSI. Though far less prevalent than the Microsoft iSCSI initiator, the Microsoft Internet Storage Name Service (iSNS) is also supported by the StoneFusion OS. By registering with iSNS, the StoneFly i4000 ensures automatic discovery by the Microsoft initiator.


Using StoneFusion's management GUI, openBench Labs was able to invoke a rich collection of storage management utilities. Among these utilities are a number of high-availability tools to create copies and maintain mirror images of volumes. Within a small VI environment, system administrators can also utilize these tools in conjunction with the basic VI client software to provide simple VM template management capabilities that would normally require an additional server running the VMware Virtual Center.


  • 8/14/2019 Analysis: Extend a Fibre Channel SAN and Leverage Virtual Infrastructure via iSCSI

    16/24

The QLogic iSCSI HBA also supports iSNS, so it too will discover the StoneFly i4000 automatically. What's more, the QLogic iSCSI HBA offloads all iSCSI packet processing (a TOE only offloads the processing of the TCP packets that encapsulate the SCSI command packets) and thereby provides a distinct edge in processing IOPS. This is very significant for maximizing performance of the StoneFly i4000, which was able to sustain a load of 10,000 IOPS with 8KB data requests.

On Linux, the push for iSCSI has lagged behind Windows. The merging of the Linux-iSCSI project into the Open-iSCSI project in 2005 has helped to quicken the pace of adoption by providing Linux distributions with a universal iSCSI option to include within their packages.


[Figure: oblLoad v2.0, I/Os per second vs. number of daemon processes (0 to 20); StoneFly i4000 iSCSI Concentrator, IBM DS4100 storage array, HP ML350 G3 server running Windows Server 2003 SP2; series: QLA4050 iSCSI HBA and MS Initiator with Ethernet TOE; y-axis 0 to 10,000 IOPS.]

IOPS throughput patterns for oblLoad using the QLogic HBA and the server's embedded TOE were remarkably similar. Absolute performance measured in total IOPS, however, was distinctly higher for the QLogic iSCSI HBA. This was especially true for small numbers of daemons, which is the time that the host is most sensitive to changes in overhead. With more than 12 daemons, the difference in the number of IOPS completed varied by less than 2%.

[Figure: oblLoad v2.0, I/Os per second vs. number of daemon processes (0 to 20); StoneFly i4000 iSCSI Concentrator, IBM DS4100 storage array, HP ML350 G3 server running SUSE Linux Enterprise Server 10 SP1; series: QLA4050 iSCSI HBA (8KB I/O), Open-iSCSI Initiator with Ethernet TOE (8KB I/O), and QLA4050 iSCSI HBA (64KB I/O); y-axis 0 to 900 IOPS.]

We observed a very different pattern in IOPS performance on SLES. Using the Open-iSCSI initiator, IOPS performance rose steadily as the number of oblLoad daemons rose to six. In contrast, IOPS performance with the QLogic iSCSI HBA continued to rise beyond 6 daemons, as performance diverged dramatically. More importantly, IOPS performance is invariant with the size of I/O requests because of the way the Linux kernel bundles I/O. Using large 64KB I/O requests, IOPS performance was little different from 8KB I/O. The implications for applications that rely on large-block I/O, such as OLAP, are significant.


The new Open-iSCSI package is partitioned into user and kernel components. In user space, command line interface (CLI) modules handle configuration and control, which is still a very manual task that requires each iSCSI target portal to be explicitly defined. More importantly, the developers classify the current Open-iSCSI release as "semi-stable." As a result, the initiator remains an optional component in most Linux server distributions.

SLES 10 attempts to improve the usability of the Open-iSCSI initiator by adding a GUI within its YAST system management framework to simplify iSCSI resource configuration for system administrators. Every time we tried to configure the initiator via YAST, however, our server crashed. On the other hand, the Open-iSCSI CLI modules worked perfectly and made short work of connecting the server to the StoneFly i4000.
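For reference, the CLI workflow amounts to a target discovery followed by a login for each discovered target. The sketch below drives those two steps from Python using the standard iscsiadm sendtargets and login invocations; the portal address is a placeholder, and the exact output format of the discovery step may vary between Open-iSCSI releases.

```python
# Sketch of driving the Open-iSCSI CLI from a script rather than YAST.
import subprocess

PORTAL = "192.168.1.50:3260"        # hypothetical address of the StoneFly i4000

def run(cmd):
    print("#", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1) Ask the concentrator which targets this initiator may see.
targets = run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
print(targets)

# 2) Log in to each discovered target so the kernel surfaces it as a SCSI disk.
for line in targets.splitlines():
    portal, _, iqn = line.partition(" ")
    if iqn:
        run(["iscsiadm", "-m", "node", "-T", iqn.strip(),
             "-p", portal.split(",")[0], "--login"])
```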

Nonetheless, peak IOPS performance for iSCSI on SLES 10, even with the QLogic iSCSI HBA, trailed peak iSCSI performance on Windows Server 2003 by an order of magnitude. This is a function of the way Linux bundles I/O and has nothing to do with the StoneFly i4000. It is, however, a condition that the StoneFly i4000 can exploit.

The StoneFusion OS is tuned for high data throughput. As a result, we were able to run oblLoad with 64KB I/O requests, which can be found in multi-dimensional Business Intelligence application scenarios, and measure the same level of IOPS while moving 8 times more data.

The ability to deliver high data throughput levels is particularly important in supporting high-end multimedia applications, especially when dealing with streaming video. Both Linux and Windows client systems were able to stream large multi-gigabyte files sequentially at wire speed (1Gbps) through the StoneFly i4000.

[Figure: oblDisk v3.0, throughput in MB per second vs. unbuffered sequential read size (0 to 128KB); StoneFly i4000 iSCSI Concentrator, IBM DS4100 storage array, HP ML350 G3 server; series: Windows Server 2003 with QLA4050 iSCSI HBA and SUSE Linux Enterprise Server 10 with QLA4050 iSCSI HBA; y-axis 0 to 150 MB per second.]

For sequential I/O, the bundling of requests by Linux can be leveraged into a distinct advantage using the StoneFly i4000, which can stream data at wire speed. Using the oblDisk benchmark to read very large files sequentially, the only factor that limited throughput was the client's ability to accept data coming from the StoneFly i4000.
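A rough stand-in for that kind of measurement, not the oblDisk code itself, is shown below: it times sequential reads of one large file at several request sizes and reports MB/s for each. The file path and read volume are assumptions, and unlike oblDisk it does not bypass the OS page cache, which would require O_DIRECT and aligned buffers.

```python
# Time unbuffered-style sequential reads at several request sizes and report MB/s.
import os, time

TEST_FILE = "/mnt/iscsi-vol/bigfile.bin"     # hypothetical file on the iSCSI volume
SIZES_KB = [4, 8, 16, 32, 64, 128]
READ_BYTES = 512 << 20                       # read 512MB per pass

for kb in SIZES_KB:
    req = kb * 1024
    fd = os.open(TEST_FILE, os.O_RDONLY)     # note: the page cache is not bypassed here
    done = 0
    start = time.time()
    while done < READ_BYTES:
        chunk = os.read(fd, req)
        if not chunk:
            break                            # reached end of file
        done += len(chunk)
    os.close(fd)
    elapsed = time.time() - start
    print(f"{kb:4d}KB requests: {done / (1 << 20) / elapsed:7.1f} MB/s")
```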

    VIRTUALIZATION AND SAN SYMBIOSIS

In the final phase of testing of the StoneFly i4000, openBench Labs utilized two quad-processor servers to run a VMware Infrastructure 3




environment. This advanced third-generation platform virtualizes an entire IT infrastructure including servers, storage, and networks. For the openBench Labs test scenario, we focused our attention on the problem of consolidating four servers along the lines of our HP ProLiant ML350 G3 system on a single quad-processor server, such as an HP ProLiant DL580 G3 or a Dell PowerEdge 1900.

The VMware ESX Server provides two ways to make virtual storage volumes accessible to virtual machines. The first way is to use a VMFS datastore to encapsulate a VM's disk, in a way that is analogous to a CD-ROM image file. The VM disk is a single large VMFS file that is presented to the VM's OS as a SCSI disk drive, which contains a file system with many individual files. In this scheme, VMFS provides a distributed lock manager (DLM) for the VMFS volume and its content of VM disk images. With a DLM, a datastore can contain multiple VM disk files that are accessed by multiple ESX Servers.

The OS of the VM issues I/O commands to what appears to be a local SCSI drive connected to a local SCSI controller. In practice, the block read/write requests are passed to the VMkernel, where a physical device driver, such as the driver for the QLogic iSCSI HBA, forwards the read/write requests and directs them to the actual physical hardware device.

That scheme of employing a DLM can put I/O loads on a VMFS-formatted volume that are significantly higher than the loads on a volume in a single-host, single-operating-system environment. To meet those loads, VMFS has been tuned as a high-performance file system for storing large, monolithic virtual disk files. Tuning an array for a particular application becomes irrelevant when using a VM disk file. When a VM's files are encapsulated in a specially formatted disk file, the fine-grain storage tuning associated with a physical machine loses its relevance. The effectiveness of the VMFS tuning scheme would immediately become evident when we tested IOPS performance on a Linux VM.

The alternative to VMFS is to use a raw LUN formatted with a native file system associated with the virtual machine (VM). Using a raw device as though it were a VMFS-hosted file requires a VMFS-hosted pointer file to redirect I/O requests from a VMFS volume to the raw LUN. This scheme is dubbed Raw Device Mapping (RDM). What drives the RDM scenario is the need to share data with external physical machines.

    While openBench Labs ran functionality tests of RDM volumes, we


chose to utilize unique VMware datastores to encapsulate single virtual volumes in our benchmark tests. Given that the default block size for VMFS is 1MB, we followed two fundamental rules of thumb in provisioning backend storage for the StoneFly i4000:

1. Put as many spindles into the underlying FC array as possible.
2. Make the FC array's stripe size as large as possible.

In particular, we utilized 7-drive arrays with a stripe size of 256KB, the default for high-end UNIX systems, in the IBM DS4100 storage system. With our storage system sporting two independent disk controllers with a 1GB cache, we garnered a significant boost in our IOPS performance tests by exploiting read-ahead track caching. As a result, the issues at hand for performance became the ability of the StoneFly i4000 to pass that backend FC throughput forward over iSCSI and the ability of the clients' hardware and software initiators to keep pace with the storage concentrator.
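The arithmetic behind those rules of thumb is simple enough to show directly. Assuming one drive's worth of capacity goes to parity in the 7-drive arrays described above, a quick calculation relates a 1MB VMFS block to the 256KB stripe unit and to the full-stripe width:

```python
# Quick arithmetic only; the data/parity split is an assumption, not a measured fact.
VMFS_BLOCK = 1024        # KB, default VMFS block size
STRIPE_UNIT = 256        # KB per drive, the stripe size used on the DS4100
DRIVES = 7               # drives per array
DATA_DRIVES = DRIVES - 1 # assume one drive's worth of capacity holds parity

full_stripe = STRIPE_UNIT * DATA_DRIVES
print(f"stripe units per 1MB VMFS block: {VMFS_BLOCK // STRIPE_UNIT}")
print(f"full-stripe width: {full_stripe}KB "
      f"({VMFS_BLOCK / full_stripe:.2f} VMFS blocks per full stripe)")
```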

    BLURRING REAL AND VIRTUAL DIFFERENCES

In provisioning 50GB logical drives for testing, ESX would create a sparse file within the specified VMFS volume. Once the virtual machine environment was provisioned, we repeated the stand-alone server tests for each OS with a single virtual machine running on the server. To measure scalability, openBench Labs repeated the tests on multiple virtual machines.

We began testing iSCSI performance on a VMware ESX Server with virtual machines running Windows Server 2003 SP2. With a 50GB datastore mounted via the QLogic HBA, the number of IOPS completed by oblLoad was virtually identical to the number completed on our base HP ProLiant ML350 server system running Windows Server 2003 SP2.


[Figure: oblLoad v2.0, I/Os per second vs. number of daemon processes (0 to 20); StoneFly i4000 iSCSI Concentrator, IBM DS4100 storage array, HP ML350 G3 server running VMware ESX Server with a Windows Server 2003 SP2 virtual machine; series: QLA4050 HBA on Windows Server 2003, QLA4050 HBA on VMware ESX Server, and VMware Initiator with Ethernet TOE; y-axis 0 to 10,000 IOPS.]

In terms of IOPS performance, utilizing the QLogic iSCSI HBA on ESX and then virtualizing the volume as a direct-attached SCSI drive provided the same level of performance as measured using the iSCSI HBA with a physical Windows server. Without the iSCSI HBA, performance did not reflect the boost in performance that the StoneFly i4000 was able to pass on from the IBM DS4100 array.


By far, the most extraordinary results occurred when we ran SUSE Linux Enterprise Server (SLES) 10 SP1 within a VM. In this case, IOPS performance improved with both the QLogic iSCSI HBA and with the VMware iSCSI initiator in conjunction with the Ethernet TOE as compared to running a physical server.

With a VM running SLES, however, that boost to VMFS performance propelled IOPS well beyond what we had measured with a physical machine. While the basic pattern for IOPS throughput remained the same, the net result was a throughput level that was often 200-to-250% higher for any given number of oblLoad disk daemons.

[Figure: oblLoad v2.0, I/Os per second vs. number of daemon processes (0 to 20); StoneFly i4000 iSCSI Concentrator, IBM DS4100 storage array, HP ML350 G3 server running VMware ESX Server with a SUSE Linux Enterprise Server 10 SP1 virtual machine; series: QLA4050 HBA on SLES 10, QLA4050 HBA on VMware ESX Server, and VMware Initiator with Ethernet TOE; y-axis 0 to 2,000 IOPS.]

Using a ReiserFS-formatted data volume contained in a VMFS datastore, IOPS performance on a VM exceeded that of a physical server even when ESX utilized its software initiator and the physical server employed a hardware iSCSI HBA. In particular, IOPS performance rose by upwards of 200% over a physical server when we used the VMware iSCSI initiator. The jump in performance was on the order of 300% using the QLogic iSCSI HBA on the ESX server.

With both physical and virtual systems sustaining loads of 10,000 IOPS using 8KB data packets, the StoneFly i4000 provided exceptional performance in routing FC data traffic over a 1-Gb Ethernet fabric via iSCSI. Nonetheless, it was in the added provisioning features of StoneFusion that the StoneFly i4000 made the biggest impact in managing a VI environment.

In a VI environment, one of the key efficiencies for IT operations is the notion of a template installation. Since the prime goal of systems virtualization is to maximize resource utilization, multiple VMs will be running on a host server at any instance in time. To avoid the overhead of installing multiple instances of an OS, VMware supports the concept of creating an OS installation template and then cloning that template the next time that the OS is to be installed. In a VI environment, the creation of templates is handled by the VMware Virtual Center software, which requires a separate system running Windows Server along with a commercial database, such as SQL Server or Oracle, to keep track of all disk images.



Similar functionality can be leveraged using the StoneFly i4000 Storage Concentrator through the StoneFusion image management functions for volumes. While best practices call for maintaining offline template volumes for this task, we were able to use any volume at any time, provided that we were able to take that volume offline.

To clone a volume image, we first needed to shut down all VMs running on that virtual volume and close any iSCSI sessions that were open for that volume with any ESX servers. Once this was done, we could begin the rather simple process of adding a mirror image to the volume, which is normally done to provide for high availability in either a disaster/recovery or a backup scenario.

The creation of a mirror is a remarkably fast and efficient process under StoneFusion. We monitored the FC switch port that was connected to the StoneFly i4000 during the process of creating a mirror. Read and write data throughput remained fully synchronized during the process as reads and writes took place in lockstep at a pace of 45MB per second each, which resulted in a full duplex I/O throughput rate of 95MB per second. At that rate, the process of generating an OS clone complete with any additional software applications was merely a matter of minutes.
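That "matter of minutes" claim is easy to sanity-check: at the observed 45MB per second mirror write rate, the sketch below estimates resynchronization time for the 25GB OS volumes and 50GB data volumes used in our configuration.

```python
# Back-of-the-envelope estimate of mirror resynchronization time.
def mirror_sync_minutes(volume_gb: float, write_mb_per_s: float) -> float:
    return volume_gb * 1024 / write_mb_per_s / 60

for size_gb in (25, 50):
    print(f"{size_gb}GB volume at 45MB/s: "
          f"{mirror_sync_minutes(size_gb, 45):.1f} minutes")
```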

Adding a mirror image to a volume is a relatively trivial task within the StoneFusion management GUI. To create a clone of our VM-Win02 volume, we only needed to identify the volume and determine the number of mirrors to create. Once that was done, it was just as easy to detach the newly created mirror and promote the new image as VM-Win03 in order to create a new independent, stand-alone volume.

Monitoring the backend FC SAN traffic of the StoneFly i4000 at the QLogic SANbox switch revealed the efficiency of StoneFusion when creating a mirror for our VM-Win02 volume. Full duplex reads and writes were running at 95MB per second. Even more remarkable was our inability to discern any imbalance or difference between read and write traffic coming to and from the i4000 Storage Concentrator.


Once the StoneFly image creation process had completed, we simply authorized access to the new volume for our ESX servers. Next, by initiating a re-scan of the appropriate storage adapter on each ESX server, the VMFS-formatted volume was automatically made a member of the storage resource pool on each ESX server and identified as a snapshot of Win02.

In the final stage of the process, we browsed the VMFS datastore and added the cloned VM to the pool of virtual machines on each ESX server. On powering on the new VM for the first time, the ESX server would recognize that this VM had an existing identifier and would request confirmation that it should either retain or create a new ID for this VM. Once that was completed, we were done with the process of creating a new VM.

Once the clone of virtual volume VM-Win02 was successfully connected to one of our ESX servers, we added the copied OS to the inventory pool of VMs as oblVM-Win03. When that VM was started for the first time, the ESX server recognized the ID of the new VM as belonging to its source VM, oblVM-Win02. At that point the ESX server would ask whether this VM was a copy and whether it should create a new ID.

Concentrator Value

ESX system administrators can leverage the high-availability functions of the StoneFusion OS, including the creation of snapshots and mirrors, to generate and maintain OS templates and distribute data files as VMs are migrated in a VI environment.

    DOING IT

For CIOs today, two top-of-mind propositions are resource consolidation and resource virtualization. Both are considered to be excellent ways to reduce IT operations costs through efficient and effective utilization of IT resources, extending from capital equipment to human capital. Via the StoneFusion OS storage-provisioning engine, the StoneFly i4000 Storage Concentrator can directly help raise the utilization rate of FC storage while extending the benefits of storage virtualization to a broad array of new client systems over Ethernet.

With resource virtualization, IT can separate the functions of resources from the physical implementations of resources. This makes it possible for IT to concentrate on managing a small number of generic pools rather than a broad array of proprietary devices, making it far easier to create rules and procedures for utilization. That decoupling also allows storage resources to be physically distributed and yet centrally managed in a virtual storage pool. As a result, SANs allow administrators to more easily take advantage of robust reliability, availability and scalability (RAS) features for data protection and recovery, such as snapshots and replication.

That synergy makes virtualization of systems, storage, and networks a holistic necessity. Nonetheless, SAN infrastructure costs have historically presented a significant hurdle to SAN adoption and expansion. As a result, the benefits of SAN architecture have not been spread beyond servers in computer centers.


StoneFly i4000 Storage Concentrator Quick ROI

1) Aggregate and Manage FC Array Storage for Better Resource Utilization
2) Extended iSCSI Provisioning Functionality
3) Advanced HA Functionality Including Snapshots and Mirrors
4) Fibre Channel Path Management and Automatic Ethernet TOE Teaming
5) 10,000 IOPS Benchmark Throughput (8KB Requests with Windows Server 2003)
6) 133MB/s Benchmark Sequential I/O Throughput (SUSE Linux Enterprise Server 10)


Traditional storage virtualization on an FC SAN, however, is a far more complex proposition than storage virtualization in a VMware Virtual Infrastructure (VI) environment. Traditional operating systems assume exclusive ownership of their storage volumes. Unlike ESX, their file systems do not include a distributed file locking mechanism and a way to provide multiple systems with a consistent view of a volume's contents. That makes storage virtualization an important component of SAN management and the exclusive domain of storage administrators at enterprise-class sites.

On the other hand, exclusive volume ownership is not an issue for ESX servers, since VMFS handles distributed file locking. In addition, the files in a VMFS volume are single-file images of VM disks. This means that when a VM mounts a disk image, VMFS locks that image as a VMFS file and the VM has exclusive ownership of its disk volume.

With the issue of ownership moot for VMFS datastores, iSCSI becomes a perfect way to cost-effectively extend the benefits of physical and functional separation from an FC SAN. With the StoneFly i4000, that functionality can be further leveraged by allowing system administrators to take on many of the storage provisioning tasks that normally require coordination with a storage administrator. What's more, StoneFusion's built-in advanced RAS storage management features make it easy to create virtual-disk templates for VM operating systems in order to standardize IT configurations and simplify system provisioning.

By initially provisioning bulk storage to the StoneFly i4000, interaction with storage administrators is minimized as ESX system administrators can address all of the iSCSI issues, including data security. On top of that, ESX system administrators can leverage the high-availability functions of the StoneFusion OS, such as snapshots and mirroring, and apply those features to the creation and maintenance of OS templates, and to the distribution of data files as VMs are migrated in a VI environment. As a result, the StoneFly i4000 can open the door to all of the advanced features of a VI environment while constraining the costs of operations management.
