
Resource Defragmentation using Market-Driven Allocation in Virtual Desktop Clouds

Prasad Calyam, Sripriya Seetharam, Baisravan HomChaudhuri, Manish Kumar
University of Missouri-Columbia, USA; University of Toledo, USA.

Email: [email protected], [email protected], [email protected], [email protected]

Abstract—Similar to memory or disk fragmentation in personal computers, emerging “virtual desktop cloud” (VDC) services experience the problem of data center resource fragmentation, which occurs due to on-the-fly provisioning of virtual desktop (VD) resources. Irregular resource holes due to fragmentation lead to sub-optimal VD resource allocations and cause: (a) decreased user quality of experience (QoE), and (b) increased operational costs for VDC service providers. In this paper, we address this problem by developing a novel, optimal “Market-Driven Provisioning and Placement” (MDPP) scheme that is based upon distributed optimization principles. The MDPP scheme channelizes the inherently distributed nature of the resource allocation problem by capturing VD resource bids via a virtual market to explore soft spots in the problem space, and consequently defragments a VDC through cost-aware, utility-maximal VD re-allocations or migrations. Through extensive simulations of VD request allocations to multiple data centers for diverse VD application and user QoE profiles, we demonstrate that our MDPP scheme outperforms existing schemes that are largely based on centralized optimization principles. Moreover, the MDPP scheme can achieve high VDC performance and scalability, measurable in terms of a ‘Net Utility’ metric, even when VD resource location constraints are imposed to meet orthogonal security objectives.

I. INTRODUCTION

User communities in retail, education and other economy sectors are increasingly transitioning “physical distributed desktops” that have dedicated hardware for traditional offline applications (e.g., MS Office, Media Player, Matlab) into “virtual desktop clouds” (VDCs) that are accessible via thin-clients. Cloud Service Providers (CSPs) have recognized this demand, and have recently started supporting on-demand, ‘pay-as-you-go’ Desktop-as-a-Service (DaaS) offerings (e.g., AWS WorkSpaces [1]) that augment their other common offerings such as Infrastructure-as-a-Service (IaaS) and Software-as-a-Service (SaaS).

One of the critical factors that determines the success of a DaaS offering is the delivery of satisfactory user quality of experience (QoE). In order to comply with service level agreements (SLAs) that map to user QoE needs, CSPs provision resources in an ‘on-the-fly’ manner for every incoming virtual desktop (VD) request. The resource provisioning is performed across multiple data centers, and the particular placement of a VD request in a data center could be based upon considerations such as cost, load or latency. Similar to memory or disk fragmentation in personal computers, VDC services experience the problem of data center resource fragmentation, where

This material is based upon work supported by the National Science Foundation under award number CNS-1347889 and VMware. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or VMware.

Fig. 1. Simple illustration of a virtual desktop cloud market

‘opportunistic’ resource placements over time lead to careless packing of VDs as in the “tetris effect” [2].

The opportunistic placements do not perform in-depth, computationally-intensive and time-consuming checks to determine whether every placement decision is globally optimal and leads to maximal utility in a VDC; they use high-level information about resource status in data centers to rapidly make placement decisions. Such opportunistic VD resource allocations are desirable since they reduce user wait time for VD access after request initiation. However, due to workload dynamics, login/logout trends of users, and possible user mobility, opportunistic placements over time create irregular resource capacity holes in data centers, and may also cause many VDs to be placed in highly sub-optimal locations. For instance, users may be connected to VDs that are allocated only the minimum resources due to load issues, even when additional resources are available at a different data center. In extreme cases, users’ VD requests may also be rejected when there are many resource capacity holes in one or more data centers. Such situations can thus impact user QoE for customers, and cause increased operational costs for CSPs due to their impact on performance and scalability.

In this paper, we address the above combinatorial problem in VDCs by developing a novel, optimal “Market-Driven Provisioning and Placement” (MDPP) scheme that is based upon distributed optimization principles. Our scheme uses virtual market economics and a consumer-driven resource pricing strategy to optimize resource allocations, an approach that has been successful in other large-scale complex networked systems such as power grids [3], [4]. Fig. 1 shows a simple illustration of how we apply such a cross-disciplinary strategy in the context of a VDC market. In our VDC system, VD users behave as buyers who require maximized utility for the


resources allocated to them, while the CSPs behave as sellers trying to provide optimal resources at low cost. The thin-client VD requests individually bid for optimal resources based on the user QoE needs of an application. For example, a thin-client user may request high network bandwidth to be provisioned for high-definition video playback in a Media Player as part of a Distance Learning use case. Alternately, a thin-client user may request high memory to be provisioned for computer-aided design model manipulation with Matlab as part of an Engineering Site use case. With MDPP, bids only need the VDC provider’s price information and do not require detailed knowledge of cost models, which is confidential in a VDC. Thus, MDPP aims to satisfy the bids of all VD requests in a distributed manner with minimal information exchange, while ensuring allocations do not exceed available VDC resources.

In our previous work [5], we showed that defragmentation is an important optimization step to improve VDC performance and scalability, measurable in terms of a ‘Net Utility’ metric. For a given online saturated state after opportunistic resource allocations, we showed how a centralized greedy heuristic with utility gain ordering (i.e., “Utility-ordered Provisioning and Placement” (UOPP)) coupled with a ‘migration cost-benefit analysis’ led to improvements in the Net Utility by up to 30%. However, due to the greedy heuristic approach, our UOPP scheme overcompensates for the high dimensionality of the inherent problem, and is still a sub-optimal solution. In contrast, the MDPP scheme has been designed to channelize the inherently distributed nature of the resource allocation problem by capturing VD resource bids via a virtual market to explore soft spots in the problem space. Consequently, our MDPP scheme can optimally defragment a VDC through cost-aware, utility-maximal VD re-allocations or migrations.

Through extensive simulations of VD request allocations to multiple data centers for diverse VD application and user QoE profiles, we demonstrate that our MDPP scheme outperforms existing schemes, including our earlier UOPP scheme, which are largely based on centralized optimization principles. The VD application profiles of different user groups that we use in our simulations to determine resource provisioning requirements are obtained from a real VDC testbed [6], [7]. Moreover, we show that the MDPP scheme can achieve high VDC performance measured in terms of Net Utility, even when VD resource location constraints are imposed to meet orthogonal security objectives, e.g., in cases where VDs with highly-confidential applications/data should be specifically placed on hosts that have no, or very few, public-facing interfaces.

The remainder of this paper is organized as follows: In Section II, we describe related work on cloud resource allocation and optimization. In Section III, we formally describe the defragmentation solution objective, while reflecting on the involved complexity. In Section IV, we present details of our MDPP scheme. In Section V, we present a performance comparison of our MDPP scheme along with UOPP and other centrally-driven schemes. Section VI concludes the paper.

II. RELATED WORK

There have been several works on opportunistic cloud resource allocation and migration optimization in the context of provisioning and placement of virtual machines [8] - [11]. A fair resource allocation scheme is proposed in [8], where user requests are handled centrally and are statically placed in a round-robin manner across data centers based on resource dominance levels. Works such as [9] and [10] propose dynamic placement solutions that involve virtual machine migrations between data centers for global optimization. The authors in [9] use reinforcement learning to automate virtual machine (re)configuration in response to changing application demands or resource supply, whereas the authors in [10] develop an online-configuration scheme that features a genetic algorithm to dynamically reallocate virtual machines across data centers by predicting future application workloads, with the aim to conserve overall power consumption. A genetic algorithm approach is also used in [11] to make resource allocations based on network topology and application requirements for data-intensive workloads. These works are targeted mainly at user workloads in virtual machines that can be modeled as transactions with predictable resource consumption characteristics. In comparison, our work considers utility functions of real-time workloads of VD applications, which have distinctly bursty resource consumption trends [6].

Resource fragmentation is a problem that has been studied extensively in the storage and memory systems literature of traditional operating systems for many years. However, only in recent years have works emerged, such as [5], [12] and [13], that have begun to address defragmentation strategies in the context of cloud platforms. The authors in [12] developed a periodic defragmentation scheme that is triggered in a manner that minimizes the cost involved in migrations necessary to consolidate virtual machines (and also resource capacity holes) onto a minimum number of servers. The authors in [13] seek to dynamically allocate resources to VMs, including those that host virtual desktops, using an iterative divvy algorithm. Their goal is to improve scalability and compliance with resource sharing policies within public clouds. Given that the general resource defragmentation problem is NP-hard in nature, heuristic and meta-heuristic based approaches in prior literature can suffer in terms of complexity as the total number of decision variables (i.e., resources, constraints) increases, and it is often challenging to define a single system-wide metric for the optimization process [3], [14].

In the distributed optimization approach used in our MDPP scheme, we resolve scalability issues by using a work sharing model that can handle large dimensions of decision variables. In addition, we use a Net Utility metric for a VDC system that not only measures the server-side utility for a given number of provisioned VDs, but also considers the level of network health that can impact the overall user QoE. The core idea of our work is to decompose the overall Net Utility optimization into sub-problems. We solve the sub-problems in a decentralized manner using price-based bidding mechanisms seen in market-driven techniques that have been successfully applied to power grids [3], [4], which have resource allocation challenges comparable to the case of a VDC market. Decomposition methods can be broadly classified into primal decomposition [15] and dual decomposition [16]. The former method is based on decomposition of the original primal problem, while the latter is based on decomposition of the Lagrangian dual problem [17]. In our work, we model the resource defragmentation problem as a dual decomposition problem where the primal variables are the resources, while the dual variables correspond to the prices of the resources.


III. PROBLEM MOTIVATION

In this section, we first describe the online opportunistic provisioning and placement scheme that can be used to saturate data center resources for a large-enough set of VD requests. Following this, we illustrate the problem of offline defragmentation formally and define the solution objective.

A. Online Opportunistic Resource Allocation

In our previous works [6] and [7], we developed and evaluated different opportunistic placement schemes that allocate resources on-the-fly for every incoming VD request. To perform provisioning of VDs, the VDC relies on information regarding user groups (e.g., Campus Computer Lab, Distance Learning Site, Engineering Site) and their application profiles that are obtained via toolkits for resource consumption and network health measurements. The profiles indicate the minimum and maximum resource amounts (i.e., Rmin and Rmax) that correlate to user QoE; resource provisioning below Rmin violates SLAs (users find a VD unusable), and resource provisioning above Rmax does not produce any user-perceivable performance benefit. The profiles thus allow ‘desktop pooling’ corresponding to the resource requirements of applications in different user groups. After addressing the provisioning requirements, a data center location Ll is selected with an aim to maximize Net Utility (UNet) by using one of four basic placement schemes: ‘Least Cost’, ‘Least Latency’, ‘Least Load’, and ‘Least Joint’, which are described in the following:

1) Least Cost: The Least Cost scheme considers the costs involved in resource price and power consumption at data center Ll for reserving resources for a given user. The scheme places VDs on data centers such that every VD is assigned the Rmin amount of resources in order to decrease cost and increase the energy efficiency of a VDC. Though the user QoE perceived by the VD user may not be optimal, the scheme increases the UNet of the system since it reduces operational costs and allocates an energy-conserving amount of resources.

2) Least Latency: Least Latency is a utility maximization scheme which maximizes user QoE by placing VDs, with close to Rmax resources, on the data centers nearest to request locations. In other words, the perceived user QoE is inversely proportional to latency (the greater the latency, the lesser the user QoE). The scheme provides a high UNet due to increased user QoE with latency-awareness between thin-client sites and data centers.

3) Least Load: The Least Load scheme reduces the number of VDs served on a particular data center, and every VD is allocated Rmax resources in a greedy manner. Also, the scheme tries to avoid an imbalance on a data center by migrating a heavily burdened data center’s VD request load onto another data center which has low resource utilization. Though the user QoE is high through the load-awareness, the scheme results in a low UNet value as the number of VD requests served is minimal due to Rmax allocations.

4) Least Joint: The Least Joint scheme combines the optimizations of the above three placement schemes and tries to achieve a higher online VD placement utility. The scheme considers all the factors that affect the VDC system’s performance, such as server-side utility, resource price, energy cost, network latency, and network/server-side load balancing.
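The Rmin/Rmax application profiles underlying all four schemes can be sketched as a simple utility curve. The piecewise-linear ramp below is an illustrative assumption, as are the numeric bounds; the paper's actual utility functions Ug(R) come from measured application profiles:

```python
def utility(r, r_min, r_max):
    """Utility of provisioning r resource units to a VD with profile (r_min, r_max).

    Below r_min the SLA is violated (the VD is unusable); above r_max there is
    no user-perceivable benefit; the linear ramp in between is an assumption.
    """
    if r < r_min:
        return 0.0
    if r >= r_max:
        return 1.0
    return (r - r_min) / (r_max - r_min)

# Hypothetical profile for one desktop pool: Rmin = 2, Rmax = 8 resource units.
print(utility(1.0, 2.0, 8.0))  # 0.0 -- below Rmin, SLA violation
print(utility(5.0, 2.0, 8.0))  # 0.5 -- partial benefit
print(utility(9.0, 2.0, 8.0))  # 1.0 -- beyond Rmax, saturated
```

Under this shape, a 'Least Cost' placement would provision each VD at r_min, while 'Least Load' would provision at r_max.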

The detailed pseudocode of the Least Joint algorithm is shown in Algorithm 1. Given that we do not know the nature of arrival of random VD requests, we provision the VDs sequentially in an online manner by executing the algorithm to determine the data center location Ll and the resources Rset (Rmin < Rset < Rmax) required by a VD request vdi. The algorithm chooses the network path providing maximum end-to-end bandwidth between the user site and a data center in an opportunistic manner. In a virtualized network aggregate within a VDC, a single physical network can be broken down into several logical networks {Pi,1, Pi,2 ... Pi,k} available from a user edge-switch i to every data center, where each path is a set {I1, I2 ... Ik} of interconnects that forms the path. We choose the path that provides the maximum standard deviation from the required bandwidth usage Bnetwork as the candidate path from a user site to the preferred data center.

Such a data center Ll selection is similar to the Hill Climbing Algorithm implemented in [18] that also uses the concept of maximum standard deviation. U^new_l and U^cur_l represent the new and current utility values at a data center Ll, and ∆Ul represents the increase in utility at Ll. The U^new_l at Ll is calculated using the utility function Ugi(Ri,l) for path Pi,k. If U^new_l is greater than the cost involved in reserving the resources on the data center (expressed as a factor of the amount of resources needed) and the new ∆Ul is greater than its current local value, ∆Ul is updated as the current maximum utility increase. Once the above steps are performed iteratively for all the available data centers, the data center Ll that gives rise to the maximal ∆Ul is selected for placing the VD request.

B. The Defragmentation Problem

As mentioned in Section I and as evident from Algorithm 1, the opportunistic placements do not perform in-depth, computationally-intensive and time-consuming checks to determine whether every placement decision is globally optimal and leads to maximal utility in a VDC; they use high-level information about resource status in data centers to rapidly make placement decisions and provide a lower response time for user access to a VD. We can envisage a case where the above online opportunistic scheme can be used to handle VD requests till the VDC system reaches a saturation state, beyond which any additional VDs allocated will violate SLAs.

Fig. 2 illustrates an example VDC saturation state scenario, where resource fragmentation can be seen to impact the performance and scalability within a VDC system. Fig. 2(a) shows a situation when a VD request vd15 belonging to a particular user group (uniquely identified by the size and color pattern) that is best suited for maximizing quality at Location L2 is allocated by a VDC’s ‘Unified Resource Broker’ to Location L1 due to resource unavailability at L2. This creates a situation where the placement is sub-optimal if the resources available at L1 are not adequate for high VD performance. Also, Fig. 2(b) shows a situation where a VD request vd21 is rejected due to lack of resources at both L1 and L2 in the VDC saturation state (i.e., there is no resource capacity hole large enough to accommodate vd21). At this point, the VD application workload dynamics, login/logout trends of users, and possible user mobility could get the VDC system out of the saturation state. For example, Fig. 2(c) shows a situation where one of the VDs, vd17, already placed at L2 departs (i.e.,


Algorithm 1 Least Joint placement for handling a VD request
1: Input: VD request vdi, Utility function Ug(R), list of available data centers {L1, L2 ... Ll} each containing n virtual desktops {vd1, vd2, ..., vdn} and m resources {R1, R2 ... Rm}, list of loop-free paths {Pi,1, Pi,2 ... Pi,k}, predicted bandwidth usage Bnetwork for vdi.
2: Output: Resource allocation Ri,l at data center Ll and path Pi,k for a new VD request vdi that maximizes ∆Ul
3: begin procedure
4:   max∆U = 0
5:   for each data center Ll do
6:     /*Step-1: Network path determination*/
7:     for each path Pi,k to data center Ll do
8:       if ∀Ij ∈ Pi,k, bandwidth at Ij >= Bnetwork then
9:         Add Pi,k to the candidate paths set Cp
10:      end if
11:    end for
12:    Select path Pi,k which provides the maximum standard deviation from Bnetwork among all Pi,k ∈ Cp
13:    /*Step-2: Utility estimation*/
14:    U^new_l = Ugi(Ri,l) for vdi at Ll given path Pi,k
15:    /*Step-3: Data center selection*/
16:    if U^new_l > cost involved in allocating vdi at Ll then
17:      ∆Ul = U^new_l − U^cur_l
18:      if ∆Ul > max∆U then
19:        max∆U = ∆Ul
20:      end if
21:    end if
22:  end for
23:  Place vdi at Ll which gives max∆U
24:  /*Step-4: VD instantiation*/
25:  Allocate Ri,l to vdi on Ll with Pi,k
26:  Update the selected data center utility U^cur_l = U^new_l
27: end procedure
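The pseudocode above can be condensed into a runnable sketch. The dictionary layout for data centers, the bottleneck-bandwidth summary standing in for the per-interconnect check, and the headroom rule standing in for the maximum-standard-deviation path choice are all simplifying assumptions:

```python
def least_joint_place(vd, data_centers):
    """Pick the data center (and path) that maximizes the utility gain dU_l.

    `data_centers` is a list of dicts with keys (illustrative layout):
      'name'    : data center label
      'paths'   : list of (path_id, bottleneck_bandwidth) candidates
      'u_cur'   : current utility at the data center
      'u_new_fn': callable(vd, path) -> utility if vd were placed here
      'cost_fn' : callable(vd) -> cost of reserving vd's resources here
    `vd` is a dict with 'b_network' (predicted bandwidth usage).
    """
    best, max_du = None, 0.0
    for dc in data_centers:
        # Step 1: keep only paths whose bottleneck link meets B_network
        # (stands in for checking every interconnect on the path).
        candidates = [p for p in dc['paths'] if p[1] >= vd['b_network']]
        if not candidates:
            continue
        # Path with the most headroom over the required usage (a simplified
        # proxy for the paper's maximum-standard-deviation rule).
        path = max(candidates, key=lambda p: p[1] - vd['b_network'])
        # Step 2: utility estimation.
        u_new = dc['u_new_fn'](vd, path)
        # Step 3: data-center selection.
        if u_new > dc['cost_fn'](vd):
            du = u_new - dc['u_cur']
            if du > max_du:
                max_du, best = du, (dc, path)
    if best is None:
        return None  # request rejected: no feasible, beneficial placement
    dc, path = best
    # Step 4: VD instantiation -- commit the allocation.
    dc['u_cur'] = dc['u_new_fn'](vd, path)
    return dc['name'], path[0]
```

For example, a request whose predicted bandwidth exceeds every path's bottleneck returns None, mirroring a rejection in the saturation state.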

Fig. 2. Scenario showing resource fragmentation causes, and defragmentation benefits in VDCs

user is finished with application tasks and the VD is swapped out), creating a resource hole or fragment. In this scenario, we can solve the previous fragmentation problem by migrating vd15 that was originally placed at L1. In Fig. 2(d), since vd15 migrated to L2, defragmented resources are now available at L1 to accept the earlier rejected request vd21.

Although VDs can be migrated to new locations in order to improve the Net Utility as seen in the previous scenario, we make a cautionary remark that migrating a VD costs energy and resources, and we need to consider the costs and benefits of migration before we make the decision for a VD to migrate. More specifically, we need to migrate only those VDs that generate a positive benefit after incurring migration costs, which can be simply modeled as shown in Eqn. 1, where: (i) a snapshot image creation cost (Sci(dcc)) is incurred at the current data center, (ii) a deployment cost (Dci(dcf)) is incurred for the VD profile and data to be moved to the other (future) data center, and (iii) a network cost (Nci(t)) is incurred for the VD depending on, e.g., the time of day.

Mci = Sci(dcc) + Dci(dcf) + Nci(t)    (1)
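A minimal sketch of the Eqn. 1 cost model and the accompanying cost-benefit check; the component values below are illustrative constants, not the paper's actual cost functions of dc_c, dc_f and t:

```python
def migration_cost(snapshot_cost, deployment_cost, network_cost):
    """Mc_i = Sc_i(dc_c) + Dc_i(dc_f) + Nc_i(t), per Eqn. 1."""
    return snapshot_cost + deployment_cost + network_cost

def should_migrate(utility_gain, mc):
    """Migrate only when the utility gain outweighs the migration cost."""
    return utility_gain > mc

# Illustrative numbers: snapshotting at dc_c, deploying at dc_f, network at time t.
mc = migration_cost(snapshot_cost=0.10, deployment_cost=0.15, network_cost=0.05)
print(should_migrate(utility_gain=0.5, mc=mc))  # True  -- positive net benefit
print(should_migrate(utility_gain=0.2, mc=mc))  # False -- migration not worthwhile
```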

In general, defragmentation is a combinatorial problem where optimal resource allocation depends on the order of VD arrival, and is a directed instance of the Generalized Assignment Problem [19]. To state the problem more formally, we provide the following definitions for a generic scenario comprising n resource users (i.e., users with VD requests) and m resource providers (i.e., a CSP with data centers):

• Let ri, where i = 1 ... n, be the m-dimensional vector representing the resource request from the ith resource user, where rimin ≤ ri ≤ rimax

• Let bj, where j = 1 ... m, be the capacity available to resource provider j

• Since the total amount of resources is fixed in the VDC, ∑_{i=1}^{n} rij ≤ bj ∀j

The objective is then to find the optimal resource allocation (r1, r2, r3, · · · , rn) which will minimize the cost function:

min f(r) = ∑_{i=1}^{n} fi(ri)

where

fi(ri) = −Uri + Mci

The optimization function can be further simplified by applying Lagrangian relaxation with the help of the dual variable (λ) as follows:

min f(r) = ∑_{i=1}^{n} fi(ri) + λj^T (∑_{i=1}^{n} rij − bj)

s.t. λ ≥ 0

We need to solve the above relaxed problem in a distributed fashion, where each resource user i solves fi(ri) + λ^T ri.
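The relaxed problem lends itself to a price-update loop: each user best-responds to the current price λ, and a coordinator adjusts λ by a projected subgradient step on the capacity violation. The linear per-user cost f_i(r_i) = −u_i · r_i (no migration term), the closed-form best response, and the step size are all illustrative assumptions:

```python
def best_response(u_i, lam, r_min, r_max):
    """User i minimizes f_i(r_i) + lam * r_i = (lam - u_i) * r_i over [r_min, r_max]."""
    return r_max if u_i > lam else r_min

def dual_decomposition(utils, capacity, r_min=0.0, r_max=1.0,
                       step=0.1, iters=200):
    """Iterate the price lam until aggregate demand roughly matches capacity b_j."""
    lam = 0.0
    for _ in range(iters):
        demand = sum(best_response(u, lam, r_min, r_max) for u in utils)
        # Projected subgradient step: raise price if over capacity, keep lam >= 0.
        lam = max(0.0, lam + step * (demand - capacity))
    return lam, [best_response(u, lam, r_min, r_max) for u in utils]

# Three users valuing the resource differently, with capacity for roughly two:
lam, alloc = dual_decomposition(utils=[0.9, 0.6, 0.3], capacity=2.0)
```

Note that only the price λ is exchanged between the coordinator and the users, matching the paper's point that bids need only price information and not the provider's confidential cost models.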


IV. MARKET-DRIVEN PROVISIONING AND PLACEMENT FOR DEFRAGMENTATION

In this section, we present details of our MDPP scheme that we propose as the defragmentation solution within VDCs. We start with an approach preliminaries statement, and next present the algorithm for distributed market-driven optimization to improve the Net Utility of a VDC system that is in a saturated state, and is experiencing dynamic changes in supply and demand characteristics due to VD request arrivals as well as deallocations after completion of users’ VD application tasks.

A. Approach Preliminaries

Our solution approach is based upon the seminal result of economic theory, viz., given proper conditions, a market will produce an optimal distribution of resources with a minimum of transaction costs. Our VDC market can be considered to be a system with locally interacting agents that are buying and selling with limited information exchange, in order to achieve an overall global behavior of fair resource allocation and stable prices. Moreover, our market-driven approach is additionally suitable for VDC systems due to its flexibility in handling cases where there are dynamic changes in the demand/supply relation (i.e., VD requests are continuously arriving and getting swapped out). We remark that our primary objective in this paper is not to explore game-theoretic properties, but rather to focus on realistic, intuitive and understandable behavior based on everyday experience in a VDC service environment. We thus make the simplifying assumption that agents do not speculate about how their own behavior or that of other agents will affect the market.

B. Distributed Market-Driven Optimization

In real-world market scenarios, we can view utility maximization as an agent-level process of determining ideal resource quantities that provide maximum gain to the agent. Similarly, the cost minimization process captures the buying agent’s sentiment to explore the minimum amount it has to pay to maximize the profit out of the bargain for the identified ideal resources. The utility maximization step in a VDC system corresponds to a behavior in which every virtual desktop (vdi) searches for the ideal resource provisioning in order to maximize its utility, while the cost minimization step relates to determining a location where the least cost of resource consumption is observed. We propose a “Bid” (B) metric which folds the utility maximization and cost minimization into one quantity. It allows for identification of the ideal resource requirements ri for vdi, and finds an optimal data center l by selecting the data center which produces the minimal “Bid” value, where the “Bid” value per data center is computed by the formulation given below:

$$B_{ij} = \min_{r_{ij}} \left\{ -U_i(r_{ij}) + \lambda_j^T r_{ij} + \sum_{g=1}^{N_g} (Q_i - Q_g)^2 + Mc_{cf} + \sum_{i \in N_{g_i}} (U_i - U_j)^2 \right\} \qquad (2)$$

Hence, every $vd_i$ computes its final "Bid" value ($B_i = \min_j \{B_{ij}\}$) by iterating over all data centers.

Eqn. 2 consists of several major components that are explained in detail below:

• Utility ($U_i$): We model the utility of $vd_i$ as a measure of both the provisioning and placement decisions. The provisioning information is supplied via a resource request vector ($r_i$), which is an $m$-dimensional vector; each dimension refers to one type of resource available in the system. The placement information is inherent since we model a "Bid" value for every data center.

• Resource Usage Cost ($\lambda_j^T r_{ij}$): The resource usage cost is a measure of the cost ($C_{ij}$) associated with accessing the resources given by the resource request vector ($r_i$) from data center $j$. The price vector ($\lambda_j$) is also an $m$-dimensional vector, where each dimension refers to the price of one unit of one type of resource at data center $j$. Hence, the dot product of the price vector with the resource request vector produces the total resource usage cost of accessing $r_i$ resources from data center $j$.

• Quality Constraint ($\sum_{g=1}^{N_g} (Q_i - Q_g)^2$): We use this additional constraint for the intra-usergroup quality criterion, which focuses on preserving equal user QoE levels, or quality fairness, for the VDs belonging to the same usergroup or desktop pool. We compute the quality as a composite function of the resource provision vector ($r_i$).

• Migration Cost ($Mc_{cf}$): The migration cost, as depicted in Eqn. 1, captures the cost of migrating $vd_i$ from the previously allocated data center $c$ to the current data center under consideration $f$.

• Utility Constraint ($\sum_{i \in N_{g_i}} (U_i - U_j)^2$): The utility constraint is similar to the quality constraint and enforces equal utility for all virtual desktops belonging to the same usergroup, where $N_{g_i}$ is the set of virtual desktops in $vd_i$'s usergroup.

Each of these components contributes linearly to the final "Bid" value computed by a virtual desktop $vd_i$. Therefore, at the end of the first step, we have an $n \times m$ Bid matrix, where $n$ is the number of virtual desktops and $m$ is the number of resources, which reflects the resource provisioned vectors for all virtual desktops. The Bid matrix is passed to the VDC's Unified Resource Broker (URB) (whose role in dispatching VD allocations is illustrated in Fig. 2) for further processing and resource allocation to particular data centers.
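To make the bidding step concrete, the following minimal Python sketch evaluates Eqn. 2 for one VD across candidate data centers. The utility, quality, price and migration-cost values are illustrative assumptions, and for simplicity the bid is evaluated at a fixed resource vector rather than minimized over it; the target data center is then chosen as $B_i = \min_j B_{ij}$.

```python
def bid_value(u_i, r_i, lam_j, q_i, q_group, mig_cost, u_group):
    """Eqn. 2 terms: negative utility, resource usage cost (dot product of
    prices and requested resources), quality and utility fairness penalties
    within the usergroup, and the migration cost."""
    usage_cost = sum(p * r for p, r in zip(lam_j, r_i))
    quality_penalty = sum((q_i - q_g) ** 2 for q_g in q_group)
    utility_penalty = sum((u_i - u_g) ** 2 for u_g in u_group)
    return -u_i + usage_cost + quality_penalty + mig_cost + utility_penalty

# One VD requesting (CPU GHz, RAM GB, bandwidth Mbps) bids at two data centers.
r_i = [1.5, 1.2, 1.5]
prices = {"dc1": [0.2, 0.1, 0.3], "dc2": [0.4, 0.2, 0.1]}  # lambda_j per data center
bids = {dc: bid_value(u_i=5.0, r_i=r_i, lam_j=lam, q_i=0.9,
                      q_group=[0.9, 0.8], mig_cost=0.5, u_group=[5.0, 4.5])
        for dc, lam in prices.items()}
target = min(bids, key=bids.get)  # the VD is placed at its least-bid data center
```

Note that the negative utility term makes lower bids correspond to higher net gain, so the minimum over data centers picks the most attractive placement.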

C. Determining the Maximum Net Utility and Cost-Minimal Migration Set

The market-driven approach in our MDPP scheme is an iterative approach and is strongly dependent on the terminating criterion. Our MDPP scheme implementation uses complete satisfaction of the Bid matrix as the terminating criterion. The pseudocode of our MDPP scheme is shown in Algorithm 2, which iterates until the bids (resource requests) from all virtual desktops are completely met. Note that the URB is the only entity that has complete knowledge of each and every incoming bid request. Hence, the terminating criterion is embedded in the URB's core logic, and thus the URB acts as the decision maker on final allocations. If the URB fails to satisfy all the requests in the Bid matrix, then it rejects the current Bid matrix and asks all virtual desktops to submit their bids again, until all bids are satisfied.


The URB also works as a liaison between the data centers (or resource providers) and the virtual desktops (or resource consumers). An important aspect of the URB being an intermediator is to communicate the price per resource at every data center to all virtual desktops. In every iteration, the price associated with resources at every data center fluctuates based on the demand and supply relation for the resources at the corresponding data centers (i.e., VD request arrivals as well as deallocations). The URB tracks all available resources at every data center, and it also computes the demand for the resources at every data center based on the Bid matrix. Hence, the URB is the sole judiciary to modify the price per unit of resource at every data center. We formulate the price update rule as a sub-gradient method based on the Lagrangian relaxed problem. The new price vector, which is an $m$-dimensional vector for data center $j$, is computed as follows:

$$\lambda_j(t+1) = \max\left( 0, \; \lambda_j(t) + \alpha \left( \sum_{i=1}^{n_j} r_{ij} - b_j \right) \right) \qquad (3)$$

The update of the Lagrangian multipliers, or the prices of the resources, as shown in Eqn. 3 is in the sub-gradient direction, which in economic terms is an update according to the demand and supply of the resources. The max term is introduced since the prices of the resources, or the Lagrangian multipliers, are always non-negative. The term $\alpha$ in the above equation is the step size, or rate of price update, which is chosen according to the guidelines in [20]. The prices stabilize when $\left(\sum_{i=1}^{n_j} r_{ij} - b_j\right) \rightarrow 0$, which indicates balance in demand ($r_{ij}$) and supply ($b_j$).
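A minimal sketch of the sub-gradient price update of Eqn. 3 follows; the step size, capacities, and demand vectors are illustrative assumptions.

```python
def update_prices(lam, demands, capacity, alpha=0.1):
    """Eqn. 3: move each resource price along the sub-gradient
    (aggregate demand minus supply), clamped at zero."""
    total_demand = [sum(d) for d in zip(*demands)]  # per-resource demand at this data center
    return [max(0.0, p + alpha * (td - b))
            for p, td, b in zip(lam, total_demand, capacity)]

# Two VDs demanding (CPU, RAM, BW) at one data center with capacity b_j.
demands = [[1.5, 1.2, 1.5], [0.5, 0.8, 0.2]]
capacity = [3.0, 1.5, 2.0]
lam = update_prices([0.2, 0.1, 0.3], demands, capacity)
# RAM is over-demanded (2.0 > 1.5), so its price rises; CPU and BW prices fall.
```

This mirrors the economic interpretation in the text: over-demanded resources become more expensive, discouraging further bids for them in the next iteration.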

Algorithm 2 Market-Driven Placement and Provisioning
 1: Input: Set of VD requests placed in the system {vd_1, ..., vd_n}
 2: Output: Resource reallocations R_{i,j} for VDs vd_i, i ∈ {1 ... n}, that maximize Net Utility U_net
 3: begin procedure
 4: repeat
 5:    Pool the prices per resource available at all data centers into a single price matrix {RP_{1l}, ..., RP_{Jl}}
 6:    Calculate the bid (B_{il}) for each vd_i using the Lagrangian relaxation, for each data center dc_l:
       B_{il} = min { −U_i(r_{il}) + λ_l^T r_{il} + Σ_{g=1}^{N_g} (Q_i − Q_g)^2 + Mc_{cf} + Σ_{i∈N_{g_i}} (U_i − U_l)^2 }
 7:    Calculate the final bid (B_i) for each vd_i: B_i = min_l { B_{il} }
 8:    Update the price vector λ_l at every data center per Eqn. 3
 9: until all bids are satisfied
10: end procedure
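Putting the pieces together, one iteration of the URB loop in Algorithm 2 can be sketched as below. This is a toy model with illustrative utilities, prices and capacities; the fairness and migration terms of Eqn. 2 are omitted for brevity, and bid satisfaction is checked as a simple capacity test.

```python
def mdpp_round(requests, prices, capacities):
    """One Algorithm 2 iteration: every VD bids at every data center,
    each VD is tentatively placed at its lowest-bid data center, and the
    URB then checks whether all placements fit within capacity."""
    assignment = {}
    for vd, (demand, utility) in requests.items():
        bids = {dc: -utility + sum(p * r for p, r in zip(lam, demand))
                for dc, lam in prices.items()}
        assignment[vd] = min(bids, key=bids.get)  # B_i = min_l {B_il}
    satisfied = all(
        all(sum(requests[vd][0][k] for vd, dc2 in assignment.items() if dc2 == dc) <= cap[k]
            for k in range(len(cap)))
        for dc, cap in capacities.items())
    return assignment, satisfied

# Two VDs requesting (CPU, RAM); if not satisfied, the URB would update
# prices via Eqn. 3 and ask the VDs to re-bid.
requests = {"vd1": ([1.0, 0.5], 5.0), "vd2": ([0.5, 0.5], 4.0)}
prices = {"dc1": [0.2, 0.1], "dc2": [0.5, 0.4]}
capacities = {"dc1": [2.0, 2.0], "dc2": [2.0, 2.0]}
assignment, satisfied = mdpp_round(requests, prices, capacities)
```

In the full scheme, the repeat-until loop re-runs this round with updated prices until the satisfaction check passes.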

D. Comparison with Centrally-Driven Heuristic Schemes

In this subsection, we compare our MDPP scheme's resource allocation approach with centrally-driven heuristic optimization schemes, which include our previous UOPP scheme [5].

Fig. 3. UOPP example defragmentation case

For a given online saturated state after opportunistic resource allocations, our previous UOPP scheme uses a centralized greedy heuristic with utility-gain ordering coupled with a 'migration cost-benefit analysis', wherein the migration cost is the same as shown in Eqn. 1. The URB in the UOPP scheme pools resources across all data centers and identifies positive VD pairs for migration such that the new utility achieved is greater than the cost involved in migration. The core principle used in our UOPP scheme is minimizing the loss of utility due to sub-optimal placements. The UOPP scheme assumes we know, through the utility functions of a VD user group (utility function values for different user groups are shown later in Table I), that there exists an optimal upper-bound utility for each VD if the resources are not fragmented across data centers and ideal resources are allocated to each VD. For further details regarding the UOPP scheme, the reader is referred to [5].

Fig. 3 shows an illustrative example of the algorithmic implementation of the UOPP scheme. For the sake of illustration, we assume three arriving VD requests and three data centers that can provision $R^j_{max}$ for $\forall j \in J$ resources to achieve maximum composite quality ($Q_{max}$) for every VD request. This can be extended to implementations with multiple data centers and multiple VD requests, as shown later in Section V. Fig. 3 shows the placement performed by the 'online' Least Joint placement scheme, which produces a Net Utility of 382 units until the system reaches a saturation state. The 'offline' (centrally-driven, greedy-heuristic based) UOPP defragmentation scheme performs optimized VD reallocations that result in an increased Net Utility of 415 units. It is self-evident from the example that the UOPP algorithm achieves a better placement with cost-aware positive pair migration, as compared to the migration-free 'online' Least Joint scheme.

Fig. 4. MDPP example defragmentation case

Fig. 4 shows an illustrative example to demonstrate the defragmentation performance of our 'offline' MDPP scheme in comparison with the 'online' Least Joint scheme. In this example, we assume a scenario of three data centers and three incoming VD requests, similar to our previous UOPP scheme illustration. Also, the resources available at each data center are sufficient to provision $R^j_{max}$ for $\forall j \in J$, which results in maximum composite quality ($Q_{max}$) for every VD request. As seen from the figure, the Least Joint scheme achieves a Net Utility of 382 units until the system reaches a saturation state. Further, our proposed offline MDPP defragmentation scheme captures the cost matrix across the data centers for providing the resource bid values of the bidding agents, or VD requests, across the data centers. Using a much smaller number of iterations, it creates the optimized cost matrix along with bid values for every VD request. The decision matrix is based on the lowest bid values obtained for every VD request at every data center, and each VD request is placed at the data center that gives rise to the least bid value. The MDPP scheme thus achieves a significant increase in Net Utility, summing to 430 units, compared to the previous 382 units obtained through the Least Joint scheme.

Fig. 5 shows another illustrative example to further demonstrate the defragmentation advantages of distributed optimization using the MDPP scheme over the UOPP scheme. In this example, we assume a scenario of four data centers and four incoming VD requests for study purposes. Also, the resources available at each data center are sufficient to provision $R^j_{max}$ for $\forall j \in J$, which results in maximum composite quality ($Q_{max}$) for every VD request. As seen from the figure, the placement of VD requests on data centers by the Least Joint scheme produces a Net Utility of 570 units. The UOPP scheme provides reallocations that result in a Net Utility of 580 units, while the MDPP scheme produces a Net Utility of 600 units. It is self-evident from the example that our proposed MDPP scheme achieves the highest Net Utility, supporting our defragmentation necessity claim based on the global optimization of a virtual market of bidding agents.

Fig. 5. Illustrative example case showing MDPP defragmentation advantages over UOPP defragmentation

We are thus able to hypothesize improved performance gain with market-driven optimization, as compared to greedy heuristics, because the MDPP scheme achieves a system balance by capturing the consumers' interest in providing fair user QoE, while increasing system scalability for CSPs.

In addition to UOPP, we can envision other greedy heuristic schemes, such as 'Round Robin' and 'Random Walk' defragmentation, that are relatively naive. In the Round Robin case, the reallocations can be performed by placing the VD requests sequentially across data centers, with just the constraint that one VD request per data center is allocated in every round. In the Random Walk case, the reallocations can be performed by placing the VD requests randomly across data centers with no additional constraints. Further, we can consider a variant of the MDPP scheme, viz., 'Security-Driven Provisioning and Placement' (SDPP), where data centers can be labeled with security levels (e.g., least, medium and maximum security) based on, e.g., the number of public-facing network interfaces or other such security considerations. In the SDPP case, VD resource location constraints are imposed on the MDPP scheme, and thus there will be sub-optimal placements that affect the Net Utility after defragmentation. In the following section, we compare the improved Net Utility obtained with MDPP defragmentation against these other defragmentation schemes using extensive simulations.

V. PERFORMANCE RESULTS

In this section, we first describe the simulation setup along with the parameters considered for the defragmentation performance comparison of our MDPP scheme with the other schemes.


Following this, we describe the metrics against which we evaluate the different schemes. Finally, we discuss the results obtained from the simulation experiments.

A. Simulation Environment

Simulation Setup: Our simulation setup consists of a number of data centers, where each data center has several servers of fixed capacity. The number of data centers, the number of servers per data center, and the capacity of each server are specified using a configuration file. An initialization script generates a 'VD-request-sequence', which is a sequence of allocation and de-allocation requests, belonging to different user groups, generated randomly based on an allocation-to-deallocation ratio parameter. Each data center and VD-Client (from user site) request also has a physical location, which can be specified with X and Y co-ordinates in a 2D space. The location of a data center is generated either randomly or specified manually. The VD-Clients are allocated physical locations randomly in the 2D space. The physical locations are used to calculate the distances between data centers and VD-Clients, and these distances are used for estimating the network latency in the simulations.

Configuration Considerations: We considered different configurations of the VDC system in our simulation experiments. As a common configuration, we used 3 data centers, based on our discussions with VDC infrastructure experts at IBM, Dell and VMware. Given the performance of thin-client protocols over long distances, and the challenges in synchronizing user profiles across many data centers for desktop delivery, it is common to have 3 data centers within the geographical size of the USA. In some simulation configurations, we increased the number of data centers to show our scheme's effectiveness at larger scale.

We consider different CPU, memory and network bandwidth capacities in the data centers, such that they can support different VD request loads of different user groups based on their respective utility functions. We simulate VDs belonging to three different user groups, with different resource profiles that satisfy diverse user QoE needs. Table I shows the $R_{min}$ and $R_{max}$ values of the CPU, memory and network bandwidth resources considered in our simulation experiments, which correspond to the VD application profiles of the user groups obtained from a real VDC testbed [6]. We used an allocation-to-deallocation ratio of 3:1, such that for every 3 VD allocations we assume there will be 1 VD deallocation, in order to capture a dynamic VDC system with user login and logout trends.

TABLE I
USER GROUP UTILITY FUNCTIONS

User Group                CPU (GHz)      Memory (GB)    Bandwidth (Mbps)
                          Rmin   Rmax    Rmin   Rmax    Rmin   Rmax
Engineering Site          0.7    1.5     0.35   1.2     0.2    1.5
Distance Learning Site    0.3    1.5     0.2    1.1     2      4
Campus Computer Lab       0.5    1.4     0.2    0.8     0.1    1.2

Utility versus Latency: We also consider the network latency while calculating the utility of individual VDs. Network latency is an important factor that impacts interactive response time performance with thin-client protocols. We model the network latency as a scaling factor that degrades the ideal user QoE, i.e., the user QoE that would be generated by the resources allocated to them under ideal network conditions.

Although the quality for all user groups decreases as latency increases, the rate of decrease is not necessarily the same for each user group. From our empirical studies in [6], we have observed that the Distance Learning group has the sharpest decline, since users in this group mainly use video streaming and web-conferencing applications that are more interactive in nature, and thus the video quality decreases sharply as latency increases. In contrast, the Engineering Site group has a relatively slower decay, since users in this group mostly use computationally heavy applications for design tasks, without much network resource demand.
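One way to sketch this latency scaling is shown below. The exponential form and the per-group decay rates are illustrative assumptions; the text only states that QoE degrades with latency at group-specific rates.

```python
import math

# Illustrative per-group decay rates (per ms); Distance Learning degrades fastest.
DECAY = {"distance_learning": 0.012, "campus_lab": 0.006, "engineering": 0.003}

def scaled_utility(ideal_utility, latency_ms, group):
    """Degrade the ideal (zero-latency) utility by a group-specific factor."""
    return ideal_utility * math.exp(-DECAY[group] * latency_ms)

u_dl = scaled_utility(10.0, 100, "distance_learning")   # sharp decline
u_eng = scaled_utility(10.0, 100, "engineering")        # gentler decline
```

At the same latency, the Distance Learning VD retains much less of its ideal utility than the Engineering Site VD, matching the observed group-specific decay.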

B. Metrics

We evaluate the different defragmentation schemes, viz., MDPP, UOPP, SDPP, Round Robin and Random Walk, using the following performance metrics:

1) Quality Gain: This metric reflects the improvement in the quality of service after defragmentation. It is useful to differentiate improvements achieved from the dual-variable based provisioning in our proposed MDPP scheme with respect to the Least Joint scheme. The metric reflects percentage gain rather than absolute gain.

Quality Gain $(QG) = \dfrac{Q_{new} - Q_{old}}{Q_{old}} \times 100$

2) Utility Gain: This metric compares the Net Utility of the system after defragmentation. It is used to capture the gain achieved via a global, offline optimization process by the schemes being compared, over the initial online placement that caused a saturation state. This metric also reflects percentage gain rather than absolute gain.

Utility Gain $(UG) = \dfrac{\sum_{i=1}^{n} U_i^{new} - \sum_{i=1}^{n} U_i^{old}}{\sum_{i=1}^{n} U_i^{old}} \times 100$

3) Migration Count: The migration count reflects the number of migrations suggested for the defragmentation. Since this metric computes the number of essential migrations dictated by a given scheme, it is an indirect representation of a scheme's performance.

4) Migration Cost: For this metric, we use the formulation given in Section IV-D to model migration cost as a composite function of snapshot cost, deployment cost and network cost.

5) Benefit: This metric is the determining attribute that indicates whether a set of VD migrations will actually result in increased performance and scalability for the CSPs, and increased utility (user QoE level) for the consumer. The metric is calculated by subtracting the cost factor associated with the VD migrations from the utility rise across all data centers.
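The gain metrics above can be sketched as follows; the sample utility values are illustrative.

```python
def quality_gain(q_old, q_new):
    """Percentage quality improvement after defragmentation."""
    return (q_new - q_old) / q_old * 100

def utility_gain(u_old, u_new):
    """Percentage Net Utility improvement, summed over all VDs."""
    return (sum(u_new) - sum(u_old)) / sum(u_old) * 100

def benefit(u_old, u_new, migration_cost):
    """Utility rise across data centers minus the migration cost factor."""
    return (sum(u_new) - sum(u_old)) - migration_cost

# ~21.9%, consistent with the UOPP Campus Computer Lab row of Table II.
ug = utility_gain([8.39], [10.23])
```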

C. Results

For our analysis, we generated different system saturation scenarios using different placement schemes. We simulated variation in demand for VDs in an ascending fashion, which indirectly forced provisioning to vary from $R_{max}$ to $R_{min}$


Fig. 6. Comparison of Net Utility results for individual user groups with increasing VD request loads: (a) Campus Computer Lab, (b) Engineering Site, (c) Distance Learning Site

TABLE II
SUMMARY OF USER GROUP MIGRATION ANALYSIS

                                    Campus Computer Lab     Engineering Site        Distance Learning Site
Parameters                          UOPP   SDPP   MDPP      UOPP   SDPP   MDPP      UOPP   SDPP   MDPP
Total Allocated                     38     38     38        30     30     30        28     28     28
Total Deallocated                   2      2      2         6      6      6         6      6      6
Total Migrated                      7      26     29        8      22     15        1      21     29
Net Utility Before Defragmentation  8.39   8.39   8.39      8.20   8.20   8.20      5.82   5.82   5.82
Net Utility After Defragmentation   10.23  10.90  12.10     8.77   12.10  15.24     7.10   9.45   10.45
Migration Cost                      2.7    9      8.3       2.8    7.6    6.9       0.2    7.9    9.3
Net Utility Benefit                 1.83   2.51   3.71      0.57   3.90   7.04      1.28   3.62   4.63
Net Utility Percent Increase        21.92  29.93  44.27     6.96   47.57  85.84     21.97  62.29  79.49

Fig. 7. Results showing benefits of defragmentation on Net Utility for the main schemes comparison

in the observed saturation states. Figs. 6(a), 6(b) and 6(c) show the Net Utility results of the various offline and online placement schemes for all three user groups.

As shown in Fig. 6(a) for the Campus Computer Lab user group, MDPP achieved the maximum rise in Net Utility when compared to the other offline schemes. This is attributed to the fact that MDPP ensures equal utility values for all users until the bids of all VDs are satisfied. Also, as the number of VD requests increases, the Net Utility of all VDs increased under the MDPP scheme, in contrast to the other centralized heuristic schemes, which decreased the system Net Utility as the number of VDs increased. In this scenario, MDPP still maximized the quality for every VD user and satisfied every VD's bid as the load increased, by taking advantage of the fact that the CPU, RAM and network requirements of Campus Computer Lab users are comparatively low.

Fig. 8. System Net Utility results for increasing number of data centers


Fig. 6(b) shows the Net Utility results for Engineering Site users, whose VD quality requirements are more demanding and whose user QoE expectations are higher. As shown in the figure, even though the MDPP Net Utility value is comparatively high when compared to the other offline schemes, the Net Utility values of all schemes decreased as the VD load increased. However, we note that the decrease in the Net Utility value for MDPP was more graceful compared to the other schemes. The decreasing Net Utility is due to the fact that the combined CPU, RAM and network requirements of Engineering Site users are relatively high, and providing the desired VD quality for every user as the load increases is a challenging task, since the system exhausts resources quickly compared to the other two user groups.

Fig. 6(c) shows the Net Utility results for Distance Learning Site users for the MDPP scheme evaluated against the other centralized heuristic schemes. While the rest of the schemes' Net Utility consistently dropped with increasing VD load, our MDPP scheme maximized the Net Utility until the VD requests reached a load of 270. However, the Net Utility slightly dropped for further increases in VD load. This is due to the fact that the Distance Learning Site users' networking requirements are relatively more demanding, which in turn impacts the Net Utility as the VD load increases.

Fig. 7 captures the overall VDC system Net Utility combined for all three user groups. We can observe that the MDPP scheme has the least decrease in Net Utility as compared to the UOPP and SDPP schemes. Additionally, Fig. 8 shows the Net Utility for an increasing number of data centers, plotted for varying VD request loads. For lower VD load settings, such as 80, there is a slight increase in Net Utility (shown by Slope-1) as the number of data centers increases. This is because the optimal set of resources is provided for the VD load as the number of data centers is increased to 9. However, the highest VD load of 340 achieved a notable increase in Net Utility (shown by Slope-2) as the number of data centers increased. This is because the optimal set of VD resources is provisioned for all VDs only when more data centers are available to place the VD requests. Lastly, Table II summarizes the user group migration results of the UOPP, SDPP and MDPP schemes for a VD request load of 110. We can observe that MDPP outperforms UOPP and SDPP, obtaining as much as 44%, 85% and 79% Net Utility rise for the three user groups, respectively, as compared to 21%, 6% and 21% obtained using UOPP, and 29%, 47% and 62% obtained using SDPP.

VI. CONCLUSION AND FUTURE WORK

In this paper, we identified an important problem of resource fragmentation that occurs in virtual desktop cloud (VDC) data centers, similar to memory or disk fragmentation in traditional operating systems, that can impact performance and scalability. We proposed a defragmentation solution that uses distributed optimization principles, called the Market-Driven Provisioning and Placement (MDPP) scheme. At its core, MDPP performs resource allocation based on VD requests' bidding values in a virtual VDC market, and helps achieve fair allocation and stable prices.

We showed that MDPP outperforms existing centralized heuristic schemes in achieving global optimization in a VDC system in terms of a Net Utility metric. The MDPP scheme can also achieve high Net Utility even when VD resource location constraints are imposed to meet orthogonal security objectives. Our evaluation has analytical rigor through application of the Lagrangian relaxation method. Moreover, our simulations build upon utility functions and user profiles obtained from a real-world testbed, and feature realistic application behavior based on everyday experience in a VDC. Similar to our UOPP implementation and optimization results in a realistic VDI testbed [7] built using VMware virtual desktop infrastructure technologies, MDPP can be implemented within the URBs of actual VDC environments. Consequently, our evaluation and the implementation readiness of MDPP provide insights to VDC service providers on how to improve the scalability and performance of their Desktop-as-a-Service offerings.

Our future work involves extending our defragmentation approach to other cloud computing environments that differ from the VDC environment in user workload characteristics, user QoE needs, and service providers' cost factors.

REFERENCES

[1] Amazon Web Services WorkSpaces - https://aws.amazon.com/workspaces
[2] A. Hillier, "Server Capacity Defrag: Maximizing Infrastructure Efficiency through Strategic Workload Placements", CiRBA Whitepaper, 2010.
[3] S. Clearwater, "Market Based Control: A Paradigm for Distributed Resource Allocation", World Scientific Publishing, 1996.
[4] B. HomChaudhuri, M. Kumar, V. Devabhaktuni, "A Market Based Distributed Optimization for Power Allocation in Smart Grid", Proc. of ASME Dynamic Systems and Control Conference, 2011.
[5] M. Sridharan, P. Calyam, A. Venkataraman, A. Berryman, "Defragmentation of Resources in Virtual Desktop Clouds for Cost-Aware Utility-Optimal Allocation", Proc. of IEEE International Conference on Utility and Cloud Computing, 2011.
[6] P. Calyam, R. Patali, A. Berryman, A. Lai, R. Ramnath, "Utility-directed Resource Allocation in Virtual Desktop Clouds", Elsevier Computer Networks Journal, 2011.
[7] P. Calyam, S. Rajagopalan, S. Seetharam, A. Selvadhurai, K. Salah, R. Ramnath, "VDC-Analyst: Design and Verification of Virtual Desktop Cloud Resource Allocations", Elsevier Computer Networks Journal, 2014.
[8] A. Ghodsi, M. Zaharia, B. Hindman, A. Konwinski, S. Shenker, I. Stoica, "Dominant Resource Fairness: Fair Allocation of Heterogeneous Resources in Datacenters", Proc. of USENIX NSDI, 2011.
[9] J. Rao, X. Bu, C. Xu, L. Wang, G. Yin, "VCONF: A Reinforcement Learning Approach to Virtual Machines Auto-configuration", Proc. of ACM Conference on Autonomic Computing, 2009.
[10] H. Mi, H. Wang, G. Yin, Y. Zhou, D. Shi, L. Yuan, "Online Self-reconfiguration with Performance Guarantee for Energy-efficient Large-scale Cloud Computing Data Centers", Proc. of IEEE Conference on Services Computing, 2009.
[11] G. Lee, N. Tolia, P. Ranganathan, R. Katz, "Topology-aware Resource Allocation for Data-intensive Workloads", ACM SIGCOMM Computer Communications Review, 2010.
[12] S. Dutta, A. Verma, "Service Deactivation Aware Placement and Defragmentation in Enterprise Clouds", Proc. of IEEE/IFIP CNSM, 2011.
[13] G. Shanmuganathan, A. Gulati, P. Varman, "Defragmenting the Cloud Using Demand-based Resource Allocation", Proc. of ACM SIGMETRICS, 2013.
[14] G. Aceto, W. de Donato, A. Botta, A. Pescapè, "Cloud Monitoring: A Survey", Elsevier Computer Networks, Vol. 57, No. 9, pp. 2093-2115, 2013.
[15] G. Silverman, "Primal Decomposition of Mathematical Programs by Resource Allocation", Operations Research, 1972.
[16] L. Xiao, M. Johansson, S. Boyd, "Simultaneous Routing and Resource Allocation via Dual Decomposition", IEEE Trans. on Communications, 2004.
[17] D. Bertsekas, "Nonlinear Programming", Athena Scientific Publishing, ISBN: 1886529000, 1999.
[18] A. Gulati, G. Shanmuganathan, et al., "VMware Distributed Resource Management: Design, Implementation and Lessons Learned", VMware Technical Journal, 2012.
[19] D. Shmoys, E. Tardos, "An Approximation Algorithm for the Generalized Assignment Problem", Mathematical Programming, 1993.
[20] S. Boyd, L. Xiao, A. Mutapcic, "Subgradient Methods", Lecture Notes of EE392o, Stanford University, Autumn Quarter, 2004.
