HP Virtualization


  • 7/31/2019 HP Virtualization

    1/96

    HP and Client Virtualization

Planning a Microsoft Windows 7 migration? How much of your corporate data is at the airport today in a lost or stolen laptop? What is the cost per year to manage your desktops? Are you prepared to support the upcoming always-on workforce?

HP Client Virtualization can help customers achieve the goals of IT and workforce support without compromising performance, operating costs, information security, or user experience, using the HP Client Virtualization Reference Architectures. These reference architectures provide:

Simplicity: with an integrated data center solution for rapid installation/startup and easy ongoing operations

Self-contained and modular server, storage, and networking architecture; no virtualization data egresses the rack

3x improvement in IT productivity

Optimization: a tested solution with the right combination of compute, storage, networking, and system management tuned for Client Virtualization efficiency

Scalable performance, enhanced security, always available

60% less rack space compared to competitors

95% fewer NICs, HBAs, and switches; 65% lower cost; 40% less power for LAN/SAN connections

Flexibility: with options to scale up and/or scale out to meet precise customer requirements

Flexible solution for all workers in an organization, from task workers to PC power users

Support for up to 7,800 Virtual Desktop Infrastructure (VDI) users and 6,400 Citrix XenApp connections in three racks, using the different desktop delivery methods offered by Citrix XenDesktop with FlexCast technology and leveraging Microsoft Hyper-V Dynamic Memory

Unmatched price/performance with both direct attached (DAS) and SAS tiered storage in a single rack (50% cheaper than SAN)
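To illustrate the scale-out arithmetic behind these capacity figures, a brief sketch follows. The per-rack numbers are simply the published three-rack totals divided evenly, an assumption for illustration rather than official sizing guidance:

```python
from math import ceil

# Published three-rack totals from the reference architecture.
VDI_USERS_3_RACKS = 7800
XENAPP_CONN_3_RACKS = 6400

# Assumed even split per rack (illustrative only).
VDI_PER_RACK = VDI_USERS_3_RACKS / 3        # 2600 VDI users
XENAPP_PER_RACK = XENAPP_CONN_3_RACKS / 3   # ~2133 XenApp connections

def racks_needed(vdi_users, xenapp_connections):
    """Whole racks required to cover both workloads, scaling out by rack."""
    return max(ceil(vdi_users / VDI_PER_RACK),
               ceil(xenapp_connections / XENAPP_PER_RACK))

print(racks_needed(7800, 6400))  # 3 racks, matching the published totals
```

Real sizing depends on worker profiles and hardware configuration, so the function above only captures the "scale out by whole racks" idea.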

By adopting Client Virtualization, IT can drive new levels of flexibility, security, control, cost savings, management simplification, and power reduction, as well as meet some of the business's top initiatives. The idea is simple: remove the traditional dependency between the end user and the compute device by managing the OS, applications, and data separately from the core compute resource. The results can enable new levels of IT agility, flexibility, and control.

The complete reference architecture is a tool for HP VirtualSystem, a strategic portfolio of infrastructure solutions that serves as the foundation for your virtualized workloads. Based on HP Converged Infrastructure (CI), HP VirtualSystem utilizes market-leading capabilities from Citrix and Microsoft to centralize administrative tasks, improve scalability, optimize workloads, and reduce complexity.

    Purpose of this document

    This document serves three primary functions.

Give IT decision makers, architects, and implementation specialists an overview of how HP, along with Citrix and Microsoft, approaches Client Virtualization, and how the joint solutions they bring to market enable simpler, optimized, and more flexible IT.

Outline the steps required to configure and deploy the hardware platform in an optimized fashion to support Citrix XenDesktop as an enterprise-level desktop virtualization implementation.

Assist IT planners and architects with understanding storage patterning and tiering within the context of the overall architecture.


This document does not discuss the in-depth implementation steps to install and configure Citrix and Microsoft software unless they directly affect the successful deployment of the overall platform.

    Abbreviations and naming conventions

Table 1 is a list of abbreviations and names used throughout this document and their intended meaning.

    Table 1. Abbreviations and names used in this document

    Convention Definition

    SCVMM System Center Virtual Machine Manager

    MS RDP Microsoft Remote Desktop Protocol

    SSD Solid State Drives

    VDI Virtual Desktop Infrastructure

    OA Onboard Administrator

    LUN Logical Unit Number

IOPs Input/output operations per second

    POD The scaling unit of this reference architecture

    SIM HP Systems Insight Manager

    RBSU ROM Based Setup Utility

    Target audience

This document is targeted at IT architects and engineers who plan to implement Citrix XenDesktop on Windows Server 2008 R2 SP1 and are interested in understanding the unique capabilities and solutions that HP, Citrix, and Microsoft bring to the Client Virtualization market, as well as how a viable, enterprise-level desktop virtualization solution is crafted. This document is one in a series of reference architecture documents available at http://www.hp.com/go/cv.

    Skillset

It is expected that the installer utilizing this document will be familiar with server, networking, and storage principles and have skills with Microsoft virtualization. The installer should also be familiar with HP BladeSystem. Familiarity with Client Virtualization and the various desktop and application delivery model concepts and definitions is helpful, but not necessary.

Software used for this document

This document references numerous software components. The acceptable version of each OS and the versions of software used for testing are listed in this section.

    Hypervisor hosts

    Components Software description

    OS Microsoft Windows Server 2008 R2 SP1


    Management server operating systems

    Components Software description

SCVMM  Microsoft Windows Server 2008

HP Systems Insight Manager (SIM) server  Microsoft Windows Server 2008

HP P4000 Central Management Console server  Microsoft Windows 7 Professional, x64

Microsoft SQL Server servers [1]  Microsoft Windows Server 2008

    Management software

    Components Software description

VM Management  System Center Virtual Machine Manager (SCVMM)

HP Systems Insight Manager  HP Systems Insight Manager 6.0

HP P4000 SAN/iQ Centralized Management Console  HP P4000 SAN/iQ Centralized Management Console (CMC) 9.0

Microsoft SQL Server 2008  Microsoft SQL Server 2008 Enterprise edition, x64

    XenDesktop 5.0 components

    Components Software description

    Desktop Delivery Controller XD 5.0, broker software

    Setup Wizard Citrix XenDesktop 5 Setup Wizard

    Provisioning Server Citrix Provisioning Services 5.6 SP1

    Citrix Web Interface Citrix Web Interface 5.4

    Citrix Licensing Citrix Licensing 11.6.1

    [1] It is assumed that an existing SQL Server cluster will be used to host the necessary databases.


    Firmware revisions

    Components Version

    HP Onboard Administrator 3.30

    HP Virtual Connect 3.18

HP ProLiant Server System ROM  Varies by server

HP SAS Switch  2.2.15.0

HP Integrated Lights-Out 3 (iLO 3)  1.20

HP 600 Modular Disk Array (MDS600)  2.66

    End user virtual machines

    Components Software description

    Operating System Microsoft Windows 7, x64

    Connection Protocol Microsoft RDP and Citrix ICA


The Enterprise Client Virtualization Reference Architecture for HP VirtualSystem

The Enterprise Client Virtualization Reference Architecture for HP VirtualSystem is shown in Figure 1. As discussed in the document entitled Virtual Desktop Infrastructure for the Enterprise (http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-4348ENW), HP focuses on Client Virtualization as a whole, with VDI as a specific implementation of Client Virtualization technologies. In the image below, that means the VDI session is represented by the Compute Resource.

    Figure 1: The HP Converged Infrastructure for Client Virtualization Reference Architecture for Desktop Virtualization

From the endpoint device the user interfaces with, to the backend data center servers and storage, HP has the hardware and management capabilities for a complete end-to-end infrastructure. Client Virtualization is much more than delivering a virtual desktop to an endpoint device. There are multiple methods of delivering a desktop and applications to a user or users, and Client Virtualization is inclusive of multiple desktop and application delivery options. No one option will sufficiently satisfy a complete organization. Some users can share a virtual desktop (session virtualization), which is served up with Microsoft Remote Desktop Services, formerly known as Terminal Services, while other users may require a more secure and personalized desktop environment, but still not require dedicated control over their desktop. Some users do not need the capability to install software or make changes to the underlying operating system, but still need a separate, isolated desktop when they log in. Nothing needs to be maintained in these desktops between login sessions; the user receives a clean, fresh desktop at each login. Profile management can be used for user virtualization, allowing users to customize their environment without making changes to the desktop. User customizations like drive and printer mappings, desktop layout, color schemes, and preferences are loaded into the desktop at user login. This can be accomplished using non-persistent virtual machines (VMs) in a Virtual Desktop configuration.

Along with the non-persistent users, there are persistent users. These users need to preserve operating system and application installation changes across logins, and may require administrator access rights to their virtual desktops. These users will either have dedicated VMs, one for each user, creating a large storage footprint, or the users may start with the same base image file and utilize smaller differential files to maintain their personalities.

Whether using session-based desktops, persistent workers, or non-persistent workers, virtualization of applications should be implemented for better management and performance. Using tools like Citrix XenApp, a key component of XenDesktop, and Microsoft Application Virtualization (App-V) to virtualize and deliver applications allows offloading the running of applications to dedicated servers, decreasing the load in the VMs being used to support the virtual desktops.

The Citrix approach to delivering multiple types of virtual desktops and applications, whether hosted or local, is its FlexCast delivery technology. Using Citrix XenDesktop with FlexCast on Microsoft Windows Server 2008 R2 SP1 and HP hardware offers a complete Client Virtualization solution. This document will focus on using the Hosted VDI (commonly known as VDI) delivery model of Citrix FlexCast to create the HP Enterprise Reference Architecture for Client Virtualization with Citrix XenDesktop 5 and Microsoft Windows Server 2008 R2 SP1. The document will also touch on other FlexCast delivery technologies, such as Hosted Shared desktops and On-Demand applications, to show how a complete Client Virtualization solution could be built by starting with this VDI reference architecture.

    Why HP, Citrix, and Microsoft for Client Virtualization

Great solutions deliver value at a level that cobbled-together components and poorly coordinated partnerships cannot approach. The HP Enterprise Reference Architecture for Client Virtualization with Citrix XenDesktop 5 on Microsoft Windows Server 2008 R2 SP1 brings together three companies that have partnered for many years and understand the value and process of partnering.

The partners

HP, Microsoft, and Citrix all have long, strong relationships around partnering. The HP and Microsoft global strategic alliance is one of the longest-standing alliances of its kind in the industry. The goal is helping businesses around the world improve services through the use of innovative technologies. HP and Microsoft have more than 25 years of market leadership and technical innovation.

Since 1996, HP and Citrix have shared a close, collaborative relationship, being mutual customers as well as partners. HP and Citrix work together to deliver joint engineering solutions, with a dedicated HP team supporting Citrix sales, operations, marketing, consulting and integration services, and technical development. HP supports Citrix StorageLink technology to simplify storage management.

HP offers a full suite of products and services to support Citrix solutions. HP ProLiant and BladeSystem servers, HP P4000 storage technology, and HP Networking all provide solutions specifically designed to support Citrix solutions. HP thin clients are certified as Citrix Ready and provide support for the latest HDX and HDX 3D protocols. HP management tools provide a single pane of glass for managing the reference architecture as part of an overall IT environment. HP is a leading global system integrator, with hundreds of Citrix-certified professionals with deep experience implementing Citrix and HP solutions. HP Technology Services provides strategic assessment, solution design and deployment, and migration services for Citrix products. HP Enterprise Services Client Virtualization Service provides application and desktop virtualization as a managed service based on XenDesktop.


For Citrix and Microsoft, 2011 marks the 22nd anniversary of the Citrix/Microsoft partnership. Citrix builds on Windows as its innovation platform and continues to expand upon the successful alignment pioneered through the collaboration between Microsoft and Citrix in the application delivery marketplace. Most recently, Citrix and Microsoft have joined forces again to deliver joint desktop virtualization offerings, a market now dominated by these joint Citrix-Microsoft solutions. In recognition of the outstanding infrastructure solutions that Citrix brings to the Microsoft marketplace, Microsoft has awarded its annual Global Infrastructure Partner of the Year award to Citrix four out of the last eight years.

More information about the HP/Microsoft partnership can be found at www.hp.com/go/microsoft. For information about the HP/Citrix partnership, go to www.hp.com/go/citrix.

    HP

HP brings a self-contained and modular hardware solution, providing performance within the enclosure with integrated tools that give you enhanced visibility and failure prevention notifications. With everything in a rack, the involvement of multiple IT teams is limited or not required. The rack has redundant networks for connecting to the data center management and production links. All iSCSI network traffic, virtualized application traffic, and VM provisioning traffic stays within the rack. With the storage within the rack, the storage team is not required to manage or be involved in the storage configuration.

When looking at networking, the HP Virtual Connect Flex-10 modules and the FlexNICs on the HP BladeSystem servers offer a tremendous reduction in external network ports. Each blade has two NIC ports, and each port represents four physical NICs with a combined bandwidth of 10Gb. The NICs can be teamed across the ports to create network redundancy, all without adding more NICs or switches to the configuration.
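The Flex-10 port carving described above can be expressed as simple allocation arithmetic. A hedged sketch follows; the traffic-class names and per-FlexNIC bandwidths are hypothetical examples, not HP-prescribed values:

```python
# Each 10Gb Flex-10 port can be carved into up to four FlexNICs whose
# combined bandwidth must not exceed the port's 10Gb. The class names and
# allocations below are illustrative assumptions only.
PORT_BANDWIDTH_GB = 10
MAX_FLEXNICS_PER_PORT = 4

def partition_port(allocations):
    """Validate a FlexNIC allocation plan (name -> Gb) for one port."""
    if len(allocations) > MAX_FLEXNICS_PER_PORT:
        raise ValueError("at most four FlexNICs per physical port")
    total = sum(allocations.values())
    if total > PORT_BANDWIDTH_GB:
        raise ValueError(f"plan totals {total}Gb, exceeds the 10Gb port")
    return allocations

# A hypothetical plan: management, production, iSCSI, and migration traffic.
plan = partition_port({"management": 1, "production": 4,
                       "iscsi": 4, "live_migration": 1})
print(sum(plan.values()))  # 10
```

The check mirrors why fewer external ports are needed: several traffic classes share one physical 10Gb link instead of each requiring a dedicated NIC.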

At the management level, HP offers the ability to manage many systems with one core infrastructure, using the Onboard Administrator of the BladeSystem enclosure to manage the blades, enclosure, SAS switches, and Virtual Connect modules. Additionally, HP Insight Control software can manage all of the servers and hardware, providing failure prevention notifications; and, to highlight the partnerships, HP Insight Control is fully integrated with Microsoft System Center management software.

In creating the reference architecture (RA), HP looked at an optimally sized and engineered set of hardware that leverages HP's Converged Infrastructure to pull everything together end-to-end, running Microsoft Windows Server 2008 R2 SP1 as a solid software base and Citrix XenDesktop 5 to give users the best possible user experience.

    Microsoft

    Microsoft Windows Server 2008 R2 SP1

Microsoft released Windows Server 2008 R2 SP1 in early 2011 with several major enhancements, including a major improvement in the performance of Hyper-V. The most important enhancement for building the reference architectures is Dynamic Memory. Dynamic Memory allows utilization of physical memory to its fullest capacity without sacrificing performance, enabling the use of all physical memory in the server. For this RA, all VMs had Dynamic Memory configured. The release of SP1 has also seen great improvements in performance, as well as the introduction of RemoteFX, designed to bring a full Windows 7 Aero experience to the VDI user. More information about utilizing RemoteFX can be found at www.hp.com/go/cv. Note that RemoteFX is not supported in a Server Core installation of Windows Server 2008 R2 SP1.
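The density benefit of Dynamic Memory can be shown with back-of-the-envelope arithmetic. The host RAM, static allocation, and startup RAM figures below are hypothetical assumptions chosen for the sketch, not tested sizing values:

```python
# With static memory, each VM reserves its full allocation up front; with
# Dynamic Memory, the host initially commits roughly the startup RAM per VM
# and grows allocations on demand. All figures are illustrative assumptions.
def vm_density(host_ram_gb, per_vm_gb):
    """How many VMs fit if each commits per_vm_gb of host RAM."""
    return int(host_ram_gb // per_vm_gb)

HOST_RAM_GB = 96       # hypothetical blade memory configuration
STATIC_VM_GB = 2       # hypothetical static allocation per Windows 7 VM
DM_STARTUP_GB = 0.75   # hypothetical Dynamic Memory startup RAM

print(vm_density(HOST_RAM_GB, STATIC_VM_GB))   # 48 VMs with static memory
print(vm_density(HOST_RAM_GB, DM_STARTUP_GB))  # 128 VMs at startup with Dynamic Memory
```

In practice the achievable density also depends on working-set growth, the configured maximum memory, and hypervisor overhead, so this only illustrates the direction of the benefit.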


    Citrix

    Citrix XenDesktop

Citrix XenDesktop transforms Windows desktops into an on-demand service available to any user, on any device, anywhere. XenDesktop quickly and securely delivers any type of virtual desktop or Windows, web, and SaaS application to all the latest PCs, Macs, tablets, smartphones, laptops, and thin clients, all with a high-definition HDX user experience. FlexCast delivery technology enables IT to optimize the performance, security, and cost of virtual desktops for any type of user, including task workers, mobile workers, power users, and contractors. XenDesktop helps IT rapidly adapt to business initiatives, such as offshoring, M&A, and branch expansion, by simplifying desktop delivery and enabling user self-service. The open, scalable, and proven architecture simplifies management, support, and integration.

    Benefits of Citrix XenDesktop

    Citrix XenDesktop key features include:

Any device, anywhere with Receiver. Today's digital workforce demands the flexibility to work from anywhere, at any time, using any device they'd like. Leveraging Citrix Receiver as a lightweight universal client, XenDesktop users can access their desktops and corporate applications from the latest tablets, smartphones, PCs, Macs, or thin clients. This enables virtual workstyles, business continuity, and user mobility.

HDX user experience. XenDesktop 5 delivers an HDX user experience on any device, over any network, while using up to 90% less bandwidth compared to competing solutions. With HDX, the desktop experience rivals a local PC, even when using multimedia, real-time collaboration, USB peripherals, and 3D graphics. Integrated WAN optimization capabilities boost network efficiency and performance, even over challenging, high-latency links.

Beyond VDI with FlexCast. Different types of workers across the enterprise have varying performance and personalization requirements. Some require offline mobility of laptops, others need simplicity and standardization, while still others need high performance and a fully personalized desktop. XenDesktop can meet all these requirements in a single solution with the unique Citrix FlexCast delivery technology. With FlexCast, IT can deliver every type of virtual desktop, hosted or local, optimized to meet the performance, security, and mobility requirements of each individual user.

Any Windows, web, or SaaS app. With XenDesktop, you can provide your workforce with any type of application they need, including Windows, web, and SaaS apps. For Windows apps, XenDesktop includes XenApp, the on-demand application delivery solution that enables any Windows app to be virtualized, centralized, and managed in the data center and instantly delivered as a service to users anywhere, on any device. For web and SaaS apps, Receiver seamlessly integrates them into a single interface, so users only need to log on once to have secure access to all their applications.

Open, scalable, proven. With numerous awards, industry-validated scalability, and over 10,000 Citrix Ready products, XenDesktop 5 provides a powerful desktop computing infrastructure that's easier than ever to manage. The open architecture works with your existing hypervisor, storage, Microsoft, and system management infrastructures, with complete integration and automation via the comprehensive SDK.

Single-instance management. XenDesktop enables IT to separate the device, OS, applications, and user personalization, and maintain single master images of each. Instead of juggling thousands of static desktop images, IT can manage and update the OS and apps once, from one location. Imagine being able to centrally upgrade the entire enterprise to Windows 7 in a weekend, instead of months. Single-instance management dramatically reduces ongoing patch and upgrade maintenance efforts, and cuts data center storage costs by up to 90 percent by eliminating redundant copies.


Data security and access control. With XenDesktop, users can access desktops and applications from any location or device, while IT uses policies that control where data is kept. XenDesktop can prevent data from residing on endpoints by centrally controlling information in the data center. In addition, XenDesktop can ensure that any application data that must reside on the endpoint is protected with XenVault technology. Extensive access control and security policies ensure that intellectual property is protected and regulatory compliance requirements are met.

    What this document produces

Utilizing components of Citrix FlexCast delivery technology, this document will help construct a platform capable of supporting more than 1,200 task workers leveraging direct-attached storage, and more than 400 productivity workers connected to an HP P4800 G2 SAN for BladeSystem.

Before diving deep into how the HP platform, when combined with Citrix XenDesktop, creates a robust, enterprise-ready VDI solution, it is first necessary to define VDI. HP's view of VDI is captured in Figure 2: an end user, from a client device, accesses a brokering mechanism, which provides a desktop over a connection protocol to the end user.

    Figure 2: The HP Approach for VDI


This desktop OS is, at logon, combined with the user's personality, application and data settings, and applications to create a runtime VDI instance, as in Figure 3.

    Figure 3: The VDI runtime instance

The entire application stack must be housed on resilient, cost-effective, and scalable infrastructure that can be managed by a minimal number of resources. There are many different terms used to define the types of VMs associated with VDI; in this document, persistent/non-persistent will be used. A persistent VM saves changes across logins. The user usually has admin rights to the VM and can make changes, add software, and customize the VM as needed. For a non-persistent VM, any changes or modifications are lost when the user logs out, and at login the user is always presented with a pristine, fresh VM. Customization of non-persistent VMs is handled by user virtualization utilizing Citrix Profile Management. The use of non-persistent VMs minimizes the amount of SAN storage required, allows for the use of Direct Attached Storage (DAS), and minimizes the amount of data required to be backed up.
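The storage implications of the persistent versus non-persistent choice can be sketched with simple arithmetic. The image and differential sizes below are hypothetical assumptions for illustration, not measured values:

```python
# Hedged sketch: storage footprint of dedicated full images for persistent
# users versus one shared base image plus per-user differential files.
def full_clone_gb(users, image_gb):
    """Each user gets a complete dedicated image."""
    return users * image_gb

def differential_gb(users, image_gb, diff_gb):
    """One shared base image plus a small differential file per user."""
    return image_gb + users * diff_gb

USERS, IMAGE_GB, DIFF_GB = 400, 40, 4  # hypothetical sizing inputs

print(full_clone_gb(USERS, IMAGE_GB))             # 16000 GB
print(differential_gb(USERS, IMAGE_GB, DIFF_GB))  # 1640 GB
```

The order-of-magnitude gap is why shared base images with differentials, and non-persistent VMs generally, keep SAN and backup requirements small.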


Figure 4 shows the Citrix XenDesktop architecture software stack. The XenApp component of XenDesktop provides the application virtualization layer. The Desktop Delivery Controller server acts as the broker, and desktops are delivered over the network via Citrix HDX or Microsoft RDP. Citrix allows multiple models for managing user data within the overall ecosystem. HP recommends selecting a mechanism for user virtualization that minimizes the network impact from the movement of user files and settings and allows for the customization of the user's environment based on a number of factors, including location, operating system, and device.

    Figure 4: Citrix XenDesktop on HP Converged Infrastructure


Figure 5 below shows the networks required to configure the platform for XenDesktop and where the various components reside. Note the dual-homed storage management approach, which allows all storage traffic to remain within the Virtual Connect domain, reducing complexity and involvement from multiple teams.

    Figure 5: A Citrix XenDesktop specific implementation viewed from an overall network standpoint

    Citrix and Client Virtualization

    Why Citrix XenDesktop 5

Many IT organizations are looking for a better way to manage desktops. The continuous cycle of imaging, patching, and upgrading a myriad of physical devices dispersed throughout the organization is costly, time consuming, and frustrating. With the ever-increasing push to be more agile and flexible, IT organizations are increasingly looking to desktop virtualization as an alternative to traditional desktop management solutions. Citrix XenDesktop helps organizations deliver on their key priorities to simplify management, increase flexibility, improve security, and lower costs with the following market-leading technologies and features.

    FlexCast delivery technology

Different types of workers across the enterprise have varying performance and personalization requirements. Some require simplicity and standardization, while others need high performance or a fully personalized desktop. XenDesktop can meet all these requirements in a single solution with the unique Citrix FlexCast delivery technology. With FlexCast, IT can deliver every type of virtual desktop, hosted or local, physical or virtual, each specifically tailored to meet the performance, security, and flexibility requirements of each individual user.

Hosted Shared Desktops provide a locked-down, streamlined, and standardized environment with a core set of applications, ideally suited for task workers where personalization is not needed or allowed.

Hosted VDI Desktops offer a personalized Windows desktop experience, typically needed by office workers, which can be securely delivered over any network to any device.

Streamed Virtual Hard Drive (VHD) Desktops leverage the local processing power of rich clients, while providing centralized single-image management of the desktop. These types of desktops are often used in computer labs and training facilities, and when users require local processing for certain applications or peripherals.

Local VM Desktops extend the benefits of centralized, single-instance management to mobile workers who need to use their laptops offline. When they are able to connect to a suitable network, changes to the OS, apps, and user data are automatically synchronized with the data center.

    Modular architecture

The Citrix XenDesktop modular architecture provides the foundation for building a scalable desktop virtualization infrastructure. It creates a single design for a data center, integrating all FlexCast models.

    The modular architecture consists of three main modules:

Control Module manages user access and virtual desktop allocation, containing components like the XenDesktop Controllers, SQL database, License Server, and Web Interface.

Desktop Modules contain a module for each of the above-mentioned FlexCast models, managing physical endpoints, XenApp servers, hypervisor pools, physical machines, etc.

Imaging Module provides the virtual desktops with the master desktop image, managing installed images, Provisioning Server, and Machine Creation Services.

For a detailed description of the modular architecture, please refer to the Citrix XenDesktop 5 Reference Architecture document at http://support.citrix.com/article/CTX127587.

    Desktop provisioning technologies

    Provisioning Server (PVS)

Citrix Provisioning Server provides images to physical and virtual desktops. Desktops utilize network booting to obtain the image, and only portions of the desktop images are streamed across the network as needed. Provisioning Server does require additional server resources, which can be either physical or virtual servers depending on the capacity requirements and hardware configuration. Also, Provisioning Server does not require the desktop to be virtualized, as it can deliver desktop images to physical desktops.

    Machine Creation Services (MCS)

Citrix Machine Creation Services was introduced in XenDesktop 5 and provides powerful provisioning and lifecycle management of hosted virtual desktop machines. As it is integrated directly into XenDesktop, no additional servers or connections are required, making the use of MCS simple for even the smallest deployments. MCS delivers storage savings by building virtual machines from a common master image and only storing differences for persistent desktops. This enables administrators to apply updates to the master image once and have those changes applied to all existing virtual machines without the need to re-provision.


    Machine Creation Services and Provisioning Services

The decision between utilizing Machine Creation Services desktops or Provisioning Services desktops will be based on the overall architecture. If there are plans to utilize other FlexCast options, like Streamed VHD or Hosted Shared Desktops, the Provisioning Services infrastructure will already be in place, and expanding it to include streamed desktops is inconsequential. However, if the implementation is focused on the use of Hosted VDI desktops only, then Machine Creation Services might be a better option, as it requires fewer infrastructure servers.

Rack layout

Figure 6 shows the overall rack layout.

    Figure 6: Citrix XenDesktop/Microsoft Windows Server 2008 R2 SP1/HP BladeSystem RA (front and back)


Figure 7 shows the overall function of each component in the rack, leveraging different blade servers to support VDI desktops, and both DAS and P4800 SAN to support persistent and non-persistent VDI sessions using both PVS and MCS. This also includes XenApp servers for session-based applications and session-based desktops.

    Figure 7: Hardware platform being created for this document


The two management blades are running Windows Server 2008 R2 SP1 Hyper-V with Microsoft Failover Clustering and Cluster Shared Volumes configured. This allows for high availability and live migration of the management VMs. The following VMs are running on the management servers:

Web Interface server
Desktop Delivery Controller (DDC)
SCVMM administration server

The server VMs reside on the P4800 as shared storage to the management servers to allow for HA. Six BL490c blades are configured for MCS and persistent VDI users, and use the P4800 as storage. Twelve BL460c servers supporting task workers are used for non-persistent VDI users. Two BL460c servers are configured as PVS servers for redundancy. A single PVS server can handle up to 5,000 connections; however, for HA two servers are configured. Also, eight BL460c servers are configured to run XenApp for application virtualization.
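As a quick cross-check of the layout above, the compute blade counts can be tallied; the labels below are shorthand for the roles described in the text:

```python
# Compute blade allocation as described above.
blade_roles = {
    "management (Hyper-V failover cluster)": 2,
    "BL490c - MCS persistent VDI hosts": 6,
    "BL460c - non-persistent VDI (task worker) hosts": 12,
    "BL460c - Provisioning Services (redundant pair)": 2,
    "BL460c - XenApp application virtualization": 8,
}

total_blades = sum(blade_roles.values())
print(total_blades)  # 30
```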

NOTE: SQL is required by multiple applications including the DDC, PVS, and SCVMM servers. It is assumed the data center has a clustered SQL configuration already running. If not, then additional servers are required to support a clustered SQL implementation.


Figure 8 shows cabling for the platform outlined in this document. This minimal configuration shows four cables supporting all users. Redundant 10GbE is dedicated to production and management traffic via a pair of cables. The enclosures communicate via a highly available 10GbE bi-directional network that carries migration and storage traffic without egressing. This minimizes network team involvement while enhancing flexibility. This configuration can be expanded to include a 10GbE uplink to the core from each enclosure, enhancing availability.

Figure 8: Minimal total cabling within the rack required to support all users and hosts within the rack. Optionally a second set of uplinks to the core may be defined in the lower enclosure.

    Configuring the platform

External Insight Control

Prior to configuring the platform, you will need to decide where you will locate your Insight Control suite. As a rule, external servers are recommended for VDI environments that will not monitor the desktop virtual machines, as they scale easily across numerous sets of hardware. This reduces the number of management servers required and minimizes licensing costs. For other implementation scenarios it is recommended that the Insight Control software is installed within the enclosure on a management host. If you are using the Insight Control plugins for Microsoft System Center, the required software will be installed within the VC domain.


    Configuring the enclosures

Once the infrastructure is physically in place, it needs to be configured to work within a VDI environment. The setup is straightforward and can be accomplished via a single web browser session.

Configuration settings for the Onboard Administrator (OA) will vary from customer to customer and thus they are not highlighted here in depth. Appendix B offers a sample script to aid with OA configuration and can be used to build a script customized to your environment.

There are a couple of steps that must be undertaken to ensure that your infrastructure is optimized to work with your storage in a lights-out data center. This involves setting the startup timings for the various interconnects and servers within the infrastructure stack.

For both enclosures, log on to your OA. In the left column, expand Enclosure Information, then Enclosure Settings, and finally click on Device Power Sequence. Ensure that the Interconnect Bays tab is highlighted. Set the power on for your SAS switches to Enabled and the delay to 180 seconds as in Figure 9. This step should be done for any enclosure that contains SAS switches. This ensures that in the event of a catastrophic power event, the disks in the P4800 G2 SAN or the MDS600 will have time to fully initialize prior to the SAS switches communicating with them.

    Figure 9: Setting the SAS switch power on delay


From the same page, set the Virtual Connect Flex-10 power on to Enabled and set the delay for some point beyond the time when the SAS switches will power on. A time of 210 seconds is generally acceptable. Click on Apply when finished to save settings as in Figure 10.

    Figure 10: Configuring the power on sequence of the Flex-10 modules within the enclosures

Highlight the Device Bays tab by clicking on it. Set power-on timings for any P4460sb G2 blades or persistent-user hosts that are attached to storage to 240 seconds. Click on Apply when finished. Set remaining hosts to power on at some point beyond this.

    Before proceeding to the next section, ensure your enclosures are fully configured for yourenvironment.

Creating a Virtual Connect domain with stacked enclosures

This section will focus on configuring the Virtual Connect domain. The simplest way to do this is to undertake the configuration with either a minimal number of servers or no servers within the enclosures. It is recommended that you begin the configuration with the enclosure that will house the P4800 G2 SAN.

To begin, you should launch the Virtual Connect Manager console by either clicking the link from the OA interface or by entering the IP address or hostname directly into your browser. Log onto the VC modules using the information provided on the asset tag that came with the primary module as in Figure 11.

NOTE: It is recommended that you carry out the configurations in this section without the enclosures stacked, but the stacking will need to be in place prior to completing the import of the second enclosure. Plan on connecting the modules prior to importing the second enclosure.


    Figure 11: Virtual Connect Manager logon screen

Once logged in, you'll be presented with the Domain Setup Wizard. Click on Next as in Figure 12 to proceed with setup.

    Figure 12: The initial Domain Setup Wizard screen


You will be asked for the Administrator credentials for the local enclosure as in Figure 13. Enter the appropriate information and click on Next.

    Figure 13: Establishing communication with the local enclosure

At the resulting screen choose Create a new Virtual Connect domain by importing this enclosure and click on Next as in Figure 14.

    Figure 14: Importing the first enclosure


When asked, click on Yes as in Figure 15 to confirm that you wish to import the enclosure.

    Figure 15: Confirm importing the enclosure

You should receive a success message as in Figure 16 that highlights the successful import of the enclosure. Click on Next to proceed.

    Figure 16: Enclosure import success screen


At the next screen, you will be asked to assign a name to the Virtual Connect domain. Keep scaling in mind as you do this. A moderate-sized domain with 4,000 users can potentially be housed within a single Virtual Connect domain. Very large implementations may require multiple domains. If you will scale to very large numbers, a naming convention that scales is advisable.

    Enter the name of the domain in the text box as in Figure 17 and then click on Next to proceed.

    Figure 17: Virtual Connect domain naming screen

Configure local user accounts at the Local User Accounts screen. Ensure that you change the default Administrator password. When done with this section, click on Next as in Figure 18.

    Figure 18: Configuring local user accounts within Virtual Connect Manager


This will complete the initial domain configuration. Check the box to Start the Network Setup Wizard as in Figure 19. Click Finish to start configuring the network.

    Figure 19: Final screen of the initial configuration

    Configuring the network

The next screen to appear is the initial Network Setup Wizard screen. Click on Next to proceed as in Figure 20.

    Figure 20: Initial network setup screen


Click Next. At the Virtual Connect MAC Address screen, choose to use the static MAC addresses of the adapters rather than Virtual Connect assigned MAC addresses as in Figure 21. Click on Next to proceed when done.

    Figure 21: Virtual Connect MAC Address settings

Select Map VLAN Tags as in Figure 22. You may change this setting to be optimized for your environment, but this document assumes mapped tags. Click on Next when done.

    Figure 22: Configuring how VLAN tags are handled


At the next screen, you will choose to create a new network connection. The connections used in this document create shared uplink sets. This initial set will be linked externally and will carry both the management and production networks. Choose Connection with uplink(s) carrying multiple networks (using VLAN tagging). Click on Next to proceed as in Figure 23. You will need to know your network information, including VLAN numbers, to complete this section.

    Figure 23: Defining the network connections


Provide a name for the uplink set and grant the connection the two network ports that are cabled from the Virtual Connect modules to the network core. Add in the management and production networks as shown in Figure 24. Click on Apply to proceed.

    Figure 24: Configuring external networks

You will be returned to the network setup screen. Once again choose Connection with uplink(s) carrying multiple networks (using VLAN tagging). Click on Next to proceed as in Figure 25.

    Figure 25: Setting up the second uplink set


This second set of uplinks will carry the migration and iSCSI networks. These networks will not egress the Virtual Connect domain. Give a name to the uplink set and define your networks along with their VLAN IDs, but do not assign uplink ports as in Figure 26. This will ensure the traffic stays inside the domain. Click on Next to proceed.

    Figure 26: Defining the internal network


When you return to the Defined Networks screen, choose No, I have defined all available networks as in Figure 27. Click on Next to continue.

    Figure 27: Final defined networks screen

Click on Finish at the final wizard screen. This will take you to the home screen for the Virtual Connect Manager as in Figure 28. This completes the initial setup of the Virtual Connect domain.

    Figure 28: The initial Virtual Connect Manager screen


    Defining profiles for hosts

Virtual Connect allows you to build a network profile for a device bay within the enclosure. No server need be present. The profile will be assigned to any server that is placed into the bay. This profile configures the networks and the bandwidth associated with the onboard FlexNICs. The following recommendations work for all ProLiant servers, but if you are using the ProLiant BL620c G7 you will have twice as many NICs to work with (up to 16 FlexNICs). You may wish to maximize bandwidth accordingly with these adapters.

Table 2 reiterates the networks created for the Virtual Connect domain as well as how they are assigned for hypervisor and management hosts.

Table 2. Virtual Connect networks and paths

Network      External or Internal    VLAN tagged    Uplink Set
Production   External                Yes            External
Migration    Internal                No             Internal
iSCSI        Internal                No             Internal
Management   External                Yes            External

The device bays for the P4800 G2 SAN are simply configured with two adapters of 10GbE bandwidth each. Both adapters are assigned to the iSCSI network.

    HP suggests the following bandwidth allocations for each network as in Table 3.

Table 3. Bandwidth recommendations for hypervisor and management host profiles

Network      Assigned Bandwidth
Production   1.5 Gb/s
Migration    2 Gb/s
iSCSI        6 Gb/s
Management   500 Mb/s
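Since each network is carved from the host's FlexNICs, with one FlexNIC per network on each 10Gb physical port, a quick sanity check (a sketch using the Table 3 figures) confirms the allocations exactly fill a port's available bandwidth:

```python
# Table 3 allocations in Gb/s (Management is 500 Mb/s = 0.5 Gb/s).
allocations = {
    "Production": 1.5,
    "Migration": 2.0,
    "iSCSI": 6.0,
    "Management": 0.5,
}

# One FlexNIC per network is carved from each 10Gb physical port, so the
# four allocations must fit within a single port's 10 Gb/s of bandwidth.
per_port_total = sum(allocations.values())
assert per_port_total <= 10.0
print(per_port_total)  # 10.0
```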


To begin the process of defining and assigning the server profiles, click on Define and then Server Profile as in Figure 29.

    Figure 29: Define a server profile via dropdown menu


You will create a single profile for the hypervisor and management hosts as in Figure 30. This profile will be copied and assigned to each device bay.

Right-click on the Ethernet Adapter Connections and choose Add Connection. You will do this eight times. Assign two adapters to each network defined in Table 3 above and assign the bandwidth to those adapters as shown. Do not assign the profile to any bay, as this will serve as your master profile. Click on Apply when finished.

    Figure 30: Configuring the profile for hypervisor and management hosts


    Figure 31 shows the screen as it appears once the networks are properly defined.

    Figure 31: The host profile for hypervisor and management hosts


Repeat the prior process to define a second profile to be copied to any slot where a P4460sb G2 storage blade resides. Assign the full bandwidth of 10Gb to each of two adapters. Do not create any extra Ethernet connections. Click on Apply as in Figure 32 to save the profile.

    Figure 32: Master profile to be copied to P4800 storage blades


    Importing the second enclosure

With the configuration of the initial enclosure and VC domain complete, additional enclosures can be incorporated into the domain. In the left column, click on Domain Enclosures and then click on the Domain Enclosures tab. Click on the Find button and enter the information for your second enclosure as in Figure 33.

    Figure 33: Find enclosure screen


At the next screen, click the check box next to the second enclosure and choose the Import button as in Figure 34.

    Figure 34: Import the second enclosure into the domain


    You should receive an enclosure import success screen as in Figure 35 below.

    Figure 35: Enclosure import success


Optionally, you may click on the Domain IP Address tab and assign a single IP address from which to manage the entire domain as in Figure 36.

    Figure 36: Configuring an IP address for the domain


Be sure to back up your configuration as in Figure 37 prior to proceeding. This will provide you with a baseline to return to.

    Figure 37: Domain settings backup screen

With your domain configured, copy the server profiles you created and assign them to the desired bays. Once complete, back up your entire domain configuration again for a baseline configuration with profiles in place that can be used to restore to a starting point if needed.

    Setting up management hosts

If you have not done so already, now is the time to insert your hosts into their respective locations within the enclosures. Because Virtual Connect works with the concept of a device bay rather than an actual server, the hosts have not been needed up to this point.

    Installing Windows Server 2008 R2 SP1

Except for the P4800 controller blades, the remaining physical servers are installed with Microsoft Windows Server 2008 R2 SP1 and the Hyper-V role is enabled. However, do not install the Hyper-V role on the PVS and XenApp servers unless you plan to virtualize them. Whether to install the full Windows Server or Server Core is up to the installer's preference. When running only the Hyper-V role, Hyper-V is less affected by Windows updates and patches, so the benefit of running in Server Core mode is uptime and reliability. When running in Server Core mode, the server can be managed either from the command line using sconfig.cmd or by using the Server Management tools on a full Windows installation on another server; Server Core is fully supported by SCVMM. It is recommended to install at least one of the management servers with the full Windows Server in order to run GUI-based applications from within the infrastructure.
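As a sketch of the role installation, on a full installation the Hyper-V role can be enabled from PowerShell via the ServerManager module, while Server Core uses DISM; commands are shown as for Windows Server 2008 R2, so verify the feature name against your build:

```powershell
# Full installation: enable the Hyper-V role with the ServerManager module
# (the host reboots to finish enabling the hypervisor).
Import-Module ServerManager
Add-WindowsFeature Hyper-V -Restart

# Server Core: the equivalent from the command line using DISM.
# dism /online /enable-feature /featurename:Microsoft-Hyper-V
```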

Configure the onboard disks as a RAID10 set in Online ROM Configuration for Arrays (ORCA) and ensure they are set as the boot volume in the ROM-Based Setup Utility (RBSU).
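If you prefer to script this step rather than using the ORCA menus, a similar logical drive can be created with the Array Configuration Utility CLI (hpacucli), assuming that utility is present in your image; the controller slot number below is illustrative:

```shell
# Create a RAID 1+0 logical drive from the unassigned onboard disks on the
# Smart Array controller (slot number is illustrative; check with "ctrl all show").
hpacucli ctrl slot=0 create type=ld drives=allunassigned raid=1+0

# Verify the new logical drive before setting it as the boot volume in RBSU.
hpacucli ctrl slot=0 ld all show
```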


    NOTE: RemoteFX is only supported in a full installation of Microsoft Windows Server 2008 R2 SP1.

    License each server as appropriate.

    Configure the management servers

Ensure the management servers have been installed with Windows Server 2008 R2 SP1 and the Hyper-V role has been configured. These servers need to be installed first, before any other servers or storage can be configured.

    Configure networks

If you followed the recommended configuration advice for the setup of the Virtual Connect domain (in the Creating a Virtual Connect domain with stacked enclosures section of this document), this section will complete the networking configuration through to the SCVMM. Configure your networks for each hypervisor host as in Table 4.

Table 4. Network configuration for hypervisor hosts

Network      Bandwidth   Function
Management   500 Mb/s    Management
Production   1.5 Gb/s    Production network for protocol, application, and user traffic
Special_net  2 Gb/s      Migration traffic
iSCSI        6 Gb/s      iSCSI network

Once the management servers have been installed, Hyper-V configured, and the necessary networks created within Hyper-V, the next step is to create a basic VM to allow for management of the P4800. It is recommended to create a dual-homed management VM, but it is not necessary. The management console for the P4800 can be installed directly onto one of the management servers if it is running the full Windows installation, but best practices suggest creating a separate VM that can be migrated between management servers.

    Dual-homed management VM

At this point there is no access to the P4800 as no external networks are defined. To address this, create a management VM and assign it two Ethernet adapters, the first on the management network and the second on the iSCSI network. Install the operating system into this VM. You may choose from a variety of operating systems supported by the P4000 Centralized Management Console (CMC). For the purpose of this document we installed a copy of Microsoft Windows 7 Professional and granted it a single vCPU with 1GB of RAM. This VM should be installed on the local data store. Once the VM has been installed and patched, install the P4000 CMC. You may choose to migrate it to a shared storage volume once the management hosts are fully configured.


    Deploying storage

    Configuring the P4800 G2 SAN for BladeSystem

In order to access and manage the P4800 G2 you must first set IP addresses for the P4460sb G2 storage blades. For each blade, perform the following steps to configure the initial network settings. It is assumed you will not have DHCP available on the private storage network. If you are configuring storage traffic to egress the enclosure and are running DHCP, you can skip ahead.

1. Log onto the blade from the iLO. The iLO for each blade can be launched from within the Onboard Administrator as long as the OA user has appropriate permissions. If not, use the asset tag on the P4460sb to locate the iLO name and administrator password.

2. From the command line, type the word Start.
3. Choose Network TCP/IP Settings from the available options.
4. Choose a single adapter to configure.
5. At the Network Settings screen, enter the IP information for the node.

    When you have completed these steps for each P4460sb proceed to the next section.

Configuring the SAN

With P4000 SANs, there is a hierarchy of relationships between nodes and between SANs that should be understood by the installer. In order to create a P4800 G2 SAN, you will need to define the nodes, a cluster, and a management group.

    A node in a P4000 SAN is an individual storage server, in this case, an HP P4460sb G2.

    A cluster is a group of nodes combined to form a SAN.

A management group houses one or more clusters/SANs and serves as the management point for those devices.

Launch the HP P4000 Centralized Management Console by logging into your management VM and clicking the icon.


    Detecting nodes

You will locate the two (2) P4460sb nodes that you just configured. Figure 38 shows the initial wizard for identifying nodes.

    Figure 38: CMC find nodes wizard

Click on the Find button to proceed.

    You can now walk through the wizard adding nodes by IP address or finding them via mask.

Once you have validated that all nodes are present in the CMC, you can move on to the next section to create the management group.


    Creating the management group

When maintaining an internal iSCSI network, each Virtual Connect domain must have its own CMC and management group. The management group is the highest level from which the administrator will manage and maintain the P4800 SAN.

To create the first management group, click on the Management Groups, Clusters, and Volumes Wizard at the Welcome screen of the CMC as shown in Figure 39.

    Figure 39: CMC Welcome screen


Click on the Next button when the wizard starts. This will take you to the Choose a Management Group screen as in Figure 40.

    Figure 40: Choose a Management Group screen

Select the New Management Group radio button and then click on the Next button. This will take you to the Management Group Name screen. Assign a name to the group and ensure all P4460sb nodes are selected prior to clicking on the Next button. Figure 41 shows the screen.

    Figure 41: Name the management group and choose the nodes


It will take time for the management group creation to complete. When the wizard finishes, click on Next to continue.

    Figure 42 shows the resulting screen where you will be asked to add an administrative user.

    Figure 42: Creating the administrative user

Enter the requested information to create the administrative user and then click on Next. You will have the opportunity to create more users in the CMC after the initial installation.

At the next screen, enter an NTP server on the iSCSI network if available and click on Next. If unavailable, manually set the time. An NTP server is highly recommended.

Immediately after, you will be asked to configure DNS information for email notifications. Enter the information requested and click on Next.

    Enter the SMTP information for email configuration and click Next.

    The next screen will begin the process of cluster creation described in the following section.


    Create the cluster

At the Create a Cluster screen, select the radio button to choose a Standard Cluster as in Figure 43.

    Figure 43: Create a standard cluster

    Click on the Next button once you are done.

At the next screen, enter a cluster name and verify all P4460sb nodes are highlighted. Click on the Next button.


    At the next screen, you will be asked to assign a virtual IP address for the cluster as in Figure 44.

Click on Add and enter a Virtual IP Address and Subnet Mask on the private iSCSI network. This will serve as the target address for your hypervisor-side iSCSI configuration.

    Figure 44: Select a virtual IP address

Click on Next when done.

At the resulting screen, check the box in the lower right corner that says Skip Volume Creation and then click Finish. You will create volumes in another section.

To create an Adaptive Load Balancing (ALB) bond on the first P4460sb G2 node, you need to click the plus next to the node and select TCP/IP Network as in Figure 45.

    Figure 45: Highlight TCP/IP Network


Highlight both adapters in the right TCP/IP tab, right-click, and select New Bond. Define the IP address of the bond. When done, it should appear as in Figure 46. Repeat this until every P4460sb node in the cluster has an ALB bond defined.

    Figure 46: New ALB bond screen

    Once done, close all windows.

At this point, the sixty (60) day evaluation period for SAN/iQ 9.0 begins. You will need to license and register each of the P4460sb nodes within this sixty (60) day period.

    Configuring Hyper-V hosts to access SAN

Preparing hosts for shared storage

The management hosts, SCVMM virtual machines, and the MCS Hyper-V hosts require access to the P4800. To access the P4800, these servers need to have the Microsoft iSCSI Initiator enabled.

If running a full Windows installation, go to Start -> Administrative Tools -> iSCSI Initiator. If using Server Core, from the command line enter the command iscsicpl.exe. If prompted, start the Microsoft iSCSI service, as shown in Figure 47.

    Figure 47: Configuration Information


Once the service has started, click on the Configuration tab. This will show the iSCSI Initiator Name associated with the server (Figure 48).

    Figure 48: Configuration Information

You may choose to alter this name and make it simpler by eliminating the characters after the ':', or leave it as is. Copy down this iSCSI name; it will be needed later. The servers and associated initiator names must be added to the P4800 cluster before the hosts can connect. Before the iSCSI Initiator can be fully configured on the servers, the associated volumes and access must be granted on the P4800 management group.

    Configuring the management group for hosts

Configuring the P4000 storage and hosts for iSCSI communication is a two-part process. Each hypervisor host must have its software-based iSCSI initiator enabled and pointed at the target address of the P4800. The P4800 must have each host that will access it defined in its host list by a logical name and iSCSI initiator name. This section covers the configuration of servers within the CMC.

From the CMC virtual machine, start a session with the CMC as the administrative user. Highlight the management group you created earlier and log in if prompted.

    Right-click on Servers in the CMC and select New Server as in Figure 49.


    Figure 49: Adding a new server from the CMC

    The resulting window as in Figure 50 appears.

    Figure 50: The New Server window in the CMC

Enter a name for the server (the hostname of the server works well), a brief description of the host, and then enter the initiator node name for your host that was saved earlier. If you are using CHAP, you should configure it at this time. Click on OK when done.

This process will need to be repeated for every host that will attach to the P4800 G2. Currently only the management servers have been installed. Once the remaining servers have been installed, you will need to repeat this process for the MCS Hyper-V hosts and the SCVMM VMs.


    Configuring and attaching storage for management hosts

You will need to create an initial set of volumes on the P4800 G2 SAN that will house your management VMs. A volume of 350 GB will be created to hold the management VMs, and a second volume of 300 GB will be utilized by the SCVMM server VMs as a shared cluster volume to hold ISO images, templates, VHD files, and VM information.

From the CMC, verify the management servers have been properly defined in the servers section before proceeding.

From the CMC, expand the cluster and click on Volumes (0) and Snapshots (0). Right-click to create a New Volume as in Figure 51.

    Figure 51: Volumes and snapshots in the CMC

Click the drop-down labeled Tasks to be presented with options. From the drop-down, select the option for New Volume.

In the New Volume window under the Basic tab, enter a volume name and short description. Enter a volume size of 350GB. This volume will house the management VMs. Figure 52 shows the window.

    Figure 52: New Volume window


Once you have entered the data, click on the Advanced tab. Ensure you have selected your cluster and RAID-10 replication, and then click the radio button for Thin Provisioning.

    Figure 53: The Advanced tab of the New Volume window

Click on the OK button when done.

    Repeat this process to create the other management volume.

When all volumes have been created, return to the Servers section of the CMC under the main management group. You will initially assign the volumes you just created to the first management host. In this document this host is in device bay 1.

Right-click on your first management server and choose Assign and Unassign Volumes and Snapshots as in Figure 54.

    Figure 54: Server options

A window will appear with the volumes you have defined. Select the appropriate volumes to assign to the host by selecting the check boxes under the Assigned column.

You will repeat these steps when you create your other volumes after the other servers have been installed.


NOTE: You may script the creation and assignment of volumes using the CLIQ utility shipped on your P4000 software DVD. See Appendix C of this document for samples.
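As an illustration of what such a script can look like, a CLIQ session to create and assign the 350 GB management volume might resemble the following; the address, credentials, and object names are placeholders, and the parameters should be verified against the CLIQ documentation for your SAN/iQ release (Appendix C contains the actual samples):

```shell
# Create the thin-provisioned 350 GB management volume on the cluster.
# Placeholders: 10.0.1.50 = management group IP, names and credentials are examples.
cliq createVolume volumeName=MgmtVMs clusterName=VDICluster size=350GB \
    thinProvision=1 login=10.0.1.50 userName=admin passWord=secret

# Assign the volume to the first management host defined in the Servers section.
cliq assignVolumeToServer volumeName=MgmtVMs serverName=MgmtHost1 \
    login=10.0.1.50 userName=admin passWord=secret
```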

    Finish configuration of management servers

Once the volumes have been assigned, go back to the management server and run the iSCSI Initiator again.

Click on the Discovery tab, then on the Discover Portal button (Figure 55).

    Figure 55: Discovery

    Enter the IP Address of the P4800. Click OK.


Select the Targets tab. The volume(s) should now be listed and shown as Inactive (Figure 56).

    Figure 56: Targets tab

    Click on the Connect button.


Click Add this connection to the list of Favorite Targets. If the Multipath I/O (MPIO) feature has been installed on the server, then select the Enable multi-path check box (Figure 57).

    Figure 57: Connect to Target

Adding the volume to the Favorite Targets means the server will attempt to connect to the volume when the server restarts.

This process needs to be repeated for the other management server, and will need to be repeated for the SCVMM VMs and the MCS Hyper-V hosts.

    Setting up DAS for non-persistent VM hosts

The steps in this section will help you configure direct attached storage for your hypervisor hosts that will be part of the non-persistent users pool. This direct attached storage will house the write cache files used by PVS as well as temporary files that will be eliminated when users log off. The number of disks you assign each host will vary based on your expected I/O profile. This section assumes four 15K RPM LFF Hot Plug SAS disks for each host, creating a total of twelve (12) volumes. When mapping the drives to the server, follow the zoning rule as described in Figure 58 and in the documentation provided with your SAS switches.


    Figure 58: SAS switch zoning rules by blade slot

Launch the Virtual SAS Manager by highlighting a SAS switch and clicking on Management Console. The following screen appears as in Figure 59. Highlight the Zone Groups and then click on Create Zone Group.

    Figure 59: HP Virtual SAS Manager


You will highlight between 4 and 6 disks based on expected I/O patterns for the individual hosts. Figure 60 highlights the selection of 4 disks for the new Zone Group. Click on OK once you have selected the disks and assigned a name to the zone group.

    Figure 60: Assigning drives to a Zone Group


Repeat the process until you have Zone Groups defined for your DAS hosts. Figure 61 shows four (4) Zone Groups that have been created. Click on Save Changes prior to proceeding.

    Figure 61: Zone Groups created


For each device bay for which you have created a Zone Group, highlight the device bay and then click Modify Zone Access as in Figure 62.

    Figure 62: Modifying zone access by device bay


Select the Zone Group that belongs to the device bay by clicking on the check box next to it. Click on OK to complete the assignment as in Figure 63.

    Figure 63: Assigning Zone Group to a device bay

When booting, the server can boot from either the internal drives of the server attached to the P410i controller, or from the drives just assigned using the P700m controller. To define where your server will boot from, you will need to change the settings in the RBSU to boot from the P700m array if so desired. Use the ORCA for the P700m, not the P410i controller, to configure the disks you assigned in this section as a RAID10 set.

For non-persistent users a file will be created for each VM to hold the page file and client-side write cache for the provisioning server. By default, the page file size is 1.5 times memory and the client-side write file is a minimum of 5 GB. For a task worker this means a 1.5 GB page file + 5 GB; a 6.5 GB file will be created for each task worker supported on a server. From a performance and space consideration, you may want to put drives in the server, mirror those drives with RAID10, and install Windows Server 2008 R2 SP1 to the internal drives. Whichever path is chosen, verify that RBSU is set to boot from the correct device.
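The sizing rule above can be sketched in a few lines of Python as a sanity check on DAS capacity. This is an illustrative sketch: the 4 x 450 GB RAID10 figures are assumptions drawn from the drive options in this reference architecture, and actual per-host density depends on your I/O profile, not just raw capacity.

```python
# Per-VM space on DAS for a non-persistent (PVS) desktop:
# page file = 1.5 x assigned memory, plus a minimum 5 GB client-side write cache.
def per_vm_das_gb(vm_memory_gb, write_cache_gb=5.0):
    return 1.5 * vm_memory_gb + write_cache_gb

task_worker_gb = per_vm_das_gb(1.0)      # 1 GB task-worker VM
print(task_worker_gb)                    # 6.5

# Assumed example: four 450 GB drives in RAID10 yield ~900 GB usable,
# so a rough upper bound on task workers per host by capacity alone:
usable_gb = 4 * 450 / 2                  # RAID10 halves raw capacity
print(int(usable_gb // task_worker_gb))  # 138
```

Capacity is rarely the limiting factor here; spindle I/O usually constrains the host before space does.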


Installation of servers: physical and virtual

This section looks at the physical and virtual servers that need to be installed, the role of each, and the memory and disk configurations associated with each server.

    Physical Machines

Server Type  Role                 No. of Servers  CPUs (Sockets x Cores)  Memory  Hard Disks    NICs
BL460c G7    XenApp Servers       8               2x6                     32GB    2             4
BL490c G7    Hyper-V for MCS VMs  6               2x6                     144GB   2             4
BL460c G7    Hyper-V for PVS VMs  12              2x6                     96GB    8 (from DAS)  4
BL460c G7    Management Servers   2               2x6                     96GB    2             4
BL460c G7    PVS                  2               2x6                     96GB    2             4

    Virtual Machines

Role                          vCPUs  Memory  Hard Disk  NICs
Desktop Delivery Controller   2      4GB     40GB       1
SCVMM                         2      4GB     40GB       1
Windows 7 Desktop base image  1      1.5GB   40GB       1
Web Interface VM              1      1.5GB   40GB       1

Desktop Delivery Controller (DDC) VM:

    Operating System: Windows Server 2008 R2 SP1
    XenDesktop 5 Desktop Delivery Controller
    Desktop Studio Console
    Systems Center Virtual Machine Manager Administration Console
    Desktop Director
    Citrix Web Interface 5.4
    Citrix Licensing 11.6.1

Microsoft System Center Virtual Machine Manager (SCVMM) VM:

    Windows Server 2008 R2 SP1
    Systems Center Virtual Machine Manager 2008
    SQL Server 2008 (required, assumed to be installed elsewhere)

Web Interface Server VM:

    Windows Server 2008 R2 SP1
    Internet Information Services (IIS)
    Web Interface 5.4


    Setting up the infrastructure

    Installing Windows Server 2008 R2 SP1

Now that storage has been completed, the remaining physical servers can be installed with Windows Server 2008 R2 SP1. Once the physical servers have been installed, create the remaining server VMs on the management servers:

    Two SCVMM VMs
    Two DDC VMs
    One Web Interface VM

These VMs require a full installation of Windows Server 2008 R2 Enterprise SP1 (do not install as Server Core) due to the graphical applications that will be running on them. Dynamic Memory was not configured for these server VMs; they were each configured as stated in the table above.

For redundancy, NIC teaming can be configured for all of the physical servers. However, to build redundant networks when utilizing the Microsoft iSCSI Initiator and Hyper-V, you must enable and use MPIO. To accomplish this, give both iSCSI network NICs separate IP addresses, install the MPIO DSM, and select Enable multi-path in the iSCSI Initiator.

Clustering needs to be configured on the management servers to allow for migration and HA of the management VMs. When utilizing Microsoft Failover Clustering and Hyper-V, always configure Cluster Shared Volumes.

    Setting up management VMs

In this document, each management server with the exception of Microsoft SQL Server is housed in a virtual machine. You may choose to virtualize SQL Server. It is not virtualized in this configuration because the assumption is that a large, redundant SQL Server entity exists on an accessible network that is sized to accommodate all of the databases required for this configuration. Keeping the management components virtualized ensures that the majority of the overall architecture is made highly available via standard Hyper-V practices and reduces the server count required to manage the overall stack.

Each of the following management VMs should be created based on the best practices of the software vendor. The vendor's installation instructions should be followed to produce an optimized VM for the particular application service. These VMs should be created on the first management host that was installed, and the VMs should be stored in the 200 GB management servers volume. You may do a live migration to migrate the CMC VM volumes to this storage if you wish.

    SCVMM VMs

For performance, multiple SCVMM servers are required. For best performance the number of VMs for each SCVMM server should be in the range of 700. For this RA, two SCVMM servers are utilized. One VM is installed on each of the management servers. Store the VM VHD files on the Cluster Shared Volumes associated with the management servers. This will allow for migration and HA of the SCVMM management VMs. Once the operating system has been installed, install SCVMM and the SCVMM administrator console into each VM. Configure each administrator console to connect to the local host as the SCVMM server.

Install the Failover Clustering feature into both SCVMM VMs. Configure Cluster Shared Volumes between the two VMs to access the volume defined on the P4800. This storage will be used to hold the necessary library files for templates, images, and ISOs for installation.

Documentation for installing SCVMM can be found at http://technet.microsoft.com/en-us/library/cc917964.aspx.


    HP Insight Control Plugins for Microsoft System Center

HP Insight Control for Microsoft System Center provides seamless integration of the unique ProLiant and BladeSystem manageability features into the Microsoft System Center consoles. By integrating the server management features of HP ProLiant and HP BladeSystem into Microsoft System Center consoles, administrators can gain greater control of their technology environments.

    Failover Manager

HP P4000 SANs utilize a Failover Manager (FOM) to ensure that data remains available across management groups in the event of a single node failure. If you want to run multi-site, then follow the recommendations of the P4000 Multi-Site HA/DR Solution Pack user guide at http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02063195/c02063195.pdf.

    Understanding storage for XenDesktop

The following sections will highlight how storage is attached to support persistent and non-persistent pools. Figure 64 shows the storage connectivity of the persistent VM hosts, where each Hyper-V host supports an image as well as the differential and identity disks for the associated VMs.

    Figure 64: Persistent VMs and the relationship between hosts and volumes

Figure 65 shows an overview of how storage will connect to volumes supporting non-persistent VM pools. The master image is held by the provisioning server, and can be stored on a volume on the


SAN or on local drives of the server. A single PVS server can support up to 5000 connections, with approximately 400 connections per NIC. Each host is assumed to hold 95-100 task workers per the sizing numbers HP has calculated in the document entitled Virtual Desktop Infrastructure for the Enterprise at http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-4348ENW. Each Page File/Write Cache is 1.5 times memory plus 5 GB.
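Those figures allow a quick back-of-the-envelope capacity check. This is an illustrative sketch only: the 12-host count comes from the server table earlier in this document, and 100 VMs per host is the upper end of the quoted 95-100 range.

```python
import math

hosts = 12              # BL460c G7 Hyper-V hosts streaming PVS desktops
vms_per_host = 100      # upper end of the 95-100 task workers per host
conn_per_nic = 400      # approximate PVS connections per NIC
conn_per_server = 5000  # maximum connections per PVS server

total_streams = hosts * vms_per_host
print(total_streams)                               # 1200
print(math.ceil(total_streams / conn_per_nic))     # NICs needed for streaming: 3
print(math.ceil(total_streams / conn_per_server))  # PVS servers by capacity: 1
# The reference architecture nevertheless deploys two PVS servers for redundancy.
```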

    Figure 65: Non-persistent VMs and the relationship between hosts

For persistent VMs, MCS is used. The master image will be 40 GB in size, and a replica of the master image is copied to each volume to be created. Then the differential and identity disks for the VMs associated with that volume will be created during XenDesktop configuration. Each differential file associated with the image could grow to be the same size as the master image if not managed correctly. HP recommends consulting the Citrix documentation for managing the size of differential files with MCS. For planning purposes, 20 GB will be allocated to hold a differential file and its associated identity disk for each VM created.

HP suggests aligning the volumes with approximately 30-35 VMs per volume. Determining the number of volumes is simple math:

Assume there is a total of 420 VMs planned; 14 volumes would be sufficient (420/30 = 14). To determine the total amount of space required, the equation is:

(Number of VMs * (VM Differential Size + 300MB)) + (Number of Volumes * (2 * Master Image Size))

The 300MB is to allow space per VM for each identity disk associated with the differential disks. The Number of Volumes * (2 * Master Image Size) allows for space for a master image and a copy of


the master image per volume. In a worst case scenario, if the differential files were to grow to match the size of the master image file, the total space required would be 420 * (40 GB + 300 MB) + (14 * (2 * 40 GB)), approximately 17.9 TB of required space. For our sizing, we assumed a maximum of 20 GB per differential disk, thereby requiring 9.5 TB of disk space. To calculate each volume size:

For this document, our volume size is 9.5 TB / 14, approximately 680 GB for each volume.
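The sizing equation above can be expressed as a short Python function. The totals below land near the document's rounded figures; exact values shift slightly with the GB-to-TB conversion and rounding used, so treat this as a planning sketch rather than an authoritative calculator.

```python
import math

def mcs_storage_gb(num_vms, diff_gb, master_gb=40, vms_per_volume=30, identity_gb=0.3):
    """Total MCS space: per-VM differential plus identity disk, plus a
    master image and one replica (2 x master) for each volume."""
    volumes = math.ceil(num_vms / vms_per_volume)
    total_gb = num_vms * (diff_gb + identity_gb) + volumes * (2 * master_gb)
    return total_gb, volumes

worst_gb, volumes = mcs_storage_gb(420, diff_gb=40)  # differentials grown to master size
planned_gb, _ = mcs_storage_gb(420, diff_gb=20)      # managed 20 GB differentials

print(volumes)                      # 14
print(round(worst_gb / 1024, 1))    # ~17.6 TB (the document rounds to 17.9 TB)
print(round(planned_gb / 1024, 1))  # ~9.4 TB (the document's 9.5 TB)
print(round(planned_gb / volumes))  # ~689 GB per volume (the document uses ~680 GB)
```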

It should be noted that all P4800 SANs are ready for thin provisioning from initialization. This allows for overprovisioning of space to ensure that storage is not constrained by physical limits that don't always make sense in VDI environments. This means volumes can be sized for a 100% match between the master image and the differential files. The installer must understand that growth must be accommodated and reacted to when thin provisioning is used.

    Bill of materials

This section shows the equipment needed to build the sample configuration contained in this document. It does not include clients, operating system, alternative application virtualization technology, user virtualization, or application costs as those are unique to each implementation. Some items related to power and overall infrastructure may need to be customized to meet customer requirements.

    Core Blade Infrastructure

    Quantity Part Number Description

    2 507019-B21 HP BladeSystem c7000 Enclosure with 3 LCD

    2 413379-B21 Single Phase Power Module

    2 517521-B21 6x Power supply bundle

    2 517520-B21 6x Active Cool Fan Bundle

    2 456204-B21 c7000 Redundant Onboard Administrator

    4 455880-B21 Virtual Connect Flex-10 Ethernet Module for HP BladeSystem

    Rack and Power

    Quantity Part Number Description

    1 AF002A 10642 G2 (42U) Rack Cabinet - Shock Pallet*

    1 AF009A HP 10642 G2 Front Door

    1 AF054A 10642 G2 (42U) Side Panels (set of two) (Graphite Metallic)

    - Customer Choice Power distribution unit

    - Customer Choice Uninterruptable power supply


    Management

    Quantity Part Number Description

    2 603718-B21 HP ProLiant BL460c G7 CTO Blade

2 610859-L21 HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit

2 610859-B21 HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit

4 512545-B21 HP 72GB 6G SAS 15K SFF DP HDD

    24 500662-B21 HP 8GB Dual Rank x4 PC3-10600 DIMMs

    Optional External Management Host (Virtualized)

    Quantity Part Number Description

    1 579240-001 HP ProLiant DL360 G7 E5640 1P

Persistent VDI Hypervisor Hosts (Processors may be substituted to match the configurations found in the document Virtual Desktop Infrastructure for the Enterprise.)

    Quantity Part Number Description

    6 603719-B21 ProLiant BL490c G7 CTO Blade

    6 603600-L21 HP BL490c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit

    6 603600-B21 HP BL490c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit

    6 572075-B21 60GB 3G SATA SFF Non-hot plug SSD

    108 500662-B21 HP 8GB Dual Rank x4 PC3-10600 DIMMs

Non-persistent Hypervisor Hosts (Processors may be substituted to match the configurations found in the document Virtual Desktop Infrastructure for the Enterprise.)

    Quantity Part Number Description

    12 603718-B21 ProLiant BL460c G7 CTO Blade

    12 610859-L21 HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit

12 610859-B21 HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit

    144 500662-B21 HP 8GB Dual Rank x4 PC3-10600 DIMMs

    12 508226-B21 HP Smart Array P700m SAS Controller

    12 452348-B21 HP Smart Array P-Series Low Profile Battery


    P4800 G2 SAN for BladeSystem

    Quantity Part Number Description

    1 BV931A HP P4800 G2 SAN Solution for BladeSystem

    1 AJ865A HP 3Gb SAS BL Switch Dual Pack

    2 Customer Choice Mini-SAS Cable

    1 HF383E P4000 Training Module

    Direct Attached Storage

    Quantity Part Number Description

    1 AJ866A HP MDS600 with two Dual Port IO Module System

    2 AP763A HP MDS600 Dual I/O Module Option Kit

    1 AF502B C-13 Offset Power Cord Kit

    48 516816-B21 HP 450GB SAS 3.5 15K DP HDD

    1 AJ865A HP 3Gb SAS BL Switch Dual Pack

    4 Customer Choice Mini-SAS Cable

NOTE: The drive count should reflect the number of drives (four or six) planned for each of the direct attached storage hosts within the reference architecture.

    User Data Storage

    Quantity Part Number Description

    2 BV871A HP X3800 G2 Gateway

    1 BQ890A HP P4500 G2 120TB MDL SAS Scalable Capacity SAN Solution

    HP Software

    Quantity Part Number Description

    2 TC277AAE HP Insight Control for BladeSystem 16 Server license

    1 436222-B21 HP Insight Software Media Kit

    VAR HP Client Automation, Enterprise

    Installing and configuring XenDesktop 5

Installing XenDesktop 5 will require installing the Desktop Delivery Controller (DDC), Provisioning Server (PVS), and XenApp.

The process for installing XenDesktop 5 can also be found at http://edocs.citrix.com.


The first step is to install the Desktop Delivery Controller. You must install the SCVMM Administrator Console prior to installing the DDC. For more information visit: http://technet.microsoft.com/en-us/library/bb740758.aspx.

When installing the DDC, select all on the Components to Install screen. The license server for the Citrix configuration will run on the DDC. Changes to the firewall may be required depending on your firewall settings. Please consult the XenDesktop 5 Product Documentation (http://edocs.citrix.com) for firewall recommendations.

Once the DDC has finished installation, go to the Start menu and launch the Desktop Studio console to configure the desktop deployment. This will define the XenDesktop site, licensing, and database options as well as define Microsoft Virtualization as the host type. When prompted, specify the SCVMM server address and credentials to authenticate to the SCVMM server.

For Citrix Licensing configuration visit: http://support.citrix.com/proddocs/index.jsp?topic=/licensing/lic-licensing-115.html

For more information about using an existing SQL database visit: http://support.citrix.com/article/CTX128008

Once the first DDC VM has been installed, repeat the process on the second DDC VM, joining it to the farm created when the first DDC was configured.

Since the RA is using both PVS and MCS (Machine Creation Services) to deploy the VMs, multiple catalogs will need to be configured with the DDC. For PVS, no clustering is required as DAS storage will be used to support the write cache files; this process will be defined later in this document. For MCS, a Microsoft Cluster Shared Volume is required to support each master image that will be created. This will determine the number of DDC groups required to support MCS.

    XenApp

When installing XenApp there are two possibilities. The first is to install XenApp on bare-metal servers, requiring eight servers to be running XenApp to support the RA. The XenApp servers can also be virtualized if so desired. Supporting a load similar to a single server running bare metal requires four (4) VMs, each with 4 vCPUs and 8 GB of memory. For this document, the XenApp servers were installed on bare metal, with no virtualization.

NOTE: If you choose to run XenApp virtualized, additional volumes will need to be created and configured on the P4800.

For instructions for implementing XenApp, refer to the Citrix eDocs website: http://support.citrix.com/proddocs/topic/xenapp6-w2k8/ps-install-config-wrapper.html.

    Only the XenApp server role was installed using the Server Role Administrator in this exercise.

In addition to the installation, the following applications were installed and published as hosted applications:

    Microsoft Word 2007
    Microsoft Outlook 2007
    Microsoft Excel 2007
    Microsoft PowerPoint 2007
    Microsoft Visio 2007

For this exercise, virtual desktops leveraged applications by using the Citrix Online Plugin via XenApp. To allow for this functionality, additional configurations were made to the Web Interface.

    1. Open the Desktop Studio Console on the Desktop Delivery Controller.


2. Expand the Access folder, expand Web Interface, right click on XenApp Services Site and click on Manage Server Farms.

3. Remove any currently configured farms, and then click Add.
4. Specify the name of the farm and add all respective XenApp servers into the farm, Figure 66, then click OK.

    Figure 66: Selecting XenApp servers

5. Verify all settings are correct and click OK, Figure 67.

    Figure 67: Setting Verification


6. Right click on the site and choose Configure Authentication Methods.
7. Verify that Pass-through and Prompt are enabled, Figure 68.
8. Set Pass-through as the default authentication method and click OK.

Figure 68: Configuring authentication methods

9. Desktops will now pass through credentials to XenApp to enumerate applications within the desktop session.

    Windows 7 Base Image

Once the DDC has been installed, create the base image file(s) that will be used for provisioning. The same base image file can be used for both PVS and MCS. The Windows 7 Optimization Guide from Citrix was used to optimize the desktop delivery. In addition, the following steps were taken to improve performance:

1. Create a virtual machine in SCVMM or Hyper-V with the following:
   Desired HD size, normally 40GB for Windows 7
   1.5GB RAM
   Legacy Hyper-V NIC (required to network boot VMs)

2. Boot the VM with Microsoft Windows 7 media.
3. Install Windows 7.
4. Verify Hyper-V Integration Services have been installed.
5. Add machine to domain.

When creating the base image a value of 1.5 GB was used for memory. For the VDI VMs in the template, Dynamic Memory was configured with a minimum of 512 MB and the maximum dependent on the type of worker: 1 GB for task workers, 1.5 GB for productivity users, and 2.0 GB for knowledge users.
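Combining those Dynamic Memory maximums with the earlier sizing rule (write cache/page file = 1.5 x memory + 5 GB) gives the per-VM DAS allocation by worker type. A small illustrative sketch; the dictionary keys are descriptive labels, not terms from any Citrix or Microsoft API.

```python
# Dynamic Memory settings per worker type (values from this document).
worker_memory_gb = {
    "task":         {"min": 0.5, "max": 1.0},
    "productivity": {"min": 0.5, "max": 1.5},
    "knowledge":    {"min": 0.5, "max": 2.0},
}

# Write cache/page file per VM, sized at the Dynamic Memory maximum:
cache_gb = {w: 1.5 * m["max"] + 5 for w, m in worker_memory_gb.items()}
print(cache_gb)  # {'task': 6.5, 'productivity': 7.25, 'knowledge': 8.0}
```

Sizing the file at the maximum is conservative; a VM ballooned down by Dynamic Memory will use less.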

Once the Windows 7 VM has been created, XenApp support with the Citrix Online Plugin is installed, along with any additional software desired in the image. As a final step, install the Virtual Desktop Agent into the VM.

To install the Virtual Desktop Agent, attach the XenDesktop5.iso to the Windows 7 VM using SCVMM. Once the application has started select Install Virtual Desktop Agent, then select


Advanced Install. In the Advanced Install, select Virtual Desktop Agent and Support for XenApp Application Delivery, then specify the URL of the XenApp Services Site.

Manually specify the DDC controller location, and allow for XenDesktop Performance Optimizations, User Desktop shadowing, and Real Time Monitoring.

Once the installation has completed, the VM can be shut down. Two copies need to be made: one to be the PVS master image and one to be the MCS master image.

    Machine Creation Services

Machine Creation Services will be used to create the persistent VMs on the SAN storage. All of the servers that will be supporting the persistent users should be installed with Windows Server 2008 R2 SP1 with the Hyper-V role enabled, and configured into a cluster using Cluster Shared Volumes for each volume that was created on the P4800 to support the persistent VMs. The VM to be used as the MCS master image should have its properties modified to enable Dynamic Memory and set the maximum memory limit for the VM as defined by the user type. Task wo