

Available online at www.sciencedirect.com

Computer Networks 52 (2008) 563–580

www.elsevier.com/locate/comnet

Hybrid optimization for QoS control in IP Virtual Private Networks

Raffaele Bolla, Roberto Bruschi, Franco Davoli, Matteo Repetto

Department of Communications, Computer and Systems Science (DIST), University of Genoa, Via Opera Pia 13, 16145 Genoa, Italy

Received 14 March 2007; received in revised form 22 August 2007; accepted 17 October 2007. Available online 26 October 2007.

Responsible Editor: J.C. de Oliveira

Abstract

A multiservice IP Virtual Private Network is considered, where Quality of Service is maintained by a DiffServ paradigm, in a domain that is supervised by a Bandwidth Broker (BB). The traffic in the network belongs to three basic categories: Expedited Forwarding (EF), Assured Forwarding (AF) and Best Effort (BE). Consistently with the DiffServ environment, the Service Provider's Core Routers (CRs) only treat aggregate flows; on the other hand, the user's Edge Routers (ERs) keep per-flow information and convey it to the BB. The latter knows at each time instant the number (and the bandwidth requirements) of flows in progress within the domain for both EF and AF traffic categories. A global strategy for admission control, bandwidth allocation and routing within the domain is introduced and discussed in the paper. The aim is to minimize blocking of the "guaranteed" flows (EF and AF), while at the same time providing some resources also to BE traffic. In order to apply such control actions on-line, a computational structure is sought, which allows a relatively fast implementation of the overall strategy. In particular, a mix of analytical and simulation tools is applied jointly, by alternating "local" decisions, based on analytical models, with a flow-level simulation that determines the effect of the decisions on the whole system and provides feedback information. The convergence of the scheme (under a fixed traffic pattern) is investigated and the results of its application under different traffic loads are studied by simulation on three test networks.
© 2007 Elsevier B.V. All rights reserved.

Keywords: VPN; IP DiffServ; QoS-IP; Resource allocation

1. Introduction

The current number of hosts in the Internet is approaching half a billion [1]. Such a huge number is spread across many administrative domains, organized in a hierarchical structure, to cope with complexity and scale. Within this scenario, an additional interesting possibility is that of creating Virtual Private Networks (VPNs) for larger users, thereby further delegating control actions to more limited domains (see, among others, [2–6]). A wide range of network architectures and technologies has been proposed (and used) to support this kind of service, based on both "layer 2" (Frame Relay or, in a broad interpretation of layer 2, ATM and MPLS) and "layer 3" technologies (IP over IP, GRE, IPSEC) [7]. More recently, the use of "layer 1" technologies, such as SDH or WDM, has come into effect [8], offering the possibility to guarantee bandwidth allocations on the network links with a minimum acceptable level of granularity and efficiency. As an example, the SDH Generic Framing Procedure (GFP) [9] can provide leased links with a bandwidth allocation up to 10 Gbps and a minimum granularity of about 1 Mbps (with this extension, one may indeed consider SDH as another layer 2 technology). All these technological solutions can be used to realize VPNs with guaranteed bandwidth allocations and support for QoS algorithms [10,11].

We have already addressed the issue of planning such networks in [12], in the presence of user traffic requiring differentiated QoS support. This paper takes a different perspective, by addressing the on-line control of such networks, in order to ensure that QoS differentiation is maintained. This is achieved by devising adaptive strategies for call admission control (CAC), bandwidth allocation and routing. The paper is a re-organized and extended version of the work originally presented in [13,14]. In particular, the following new contributions and improvements can be pointed out: (i) the definition of a new cost function in the optimization procedure that leads to the CAC and bandwidth allocation; (ii) the adoption of a greatly simplified, though still sufficiently accurate, model for BE traffic, which significantly reduces the computational complexity of the overall optimization procedure; (iii) the performance evaluation over different and increasingly complex network topologies; (iv) the discussion of the sources of computational complexity and of the scalability of the method.

A central controller, denoted as Bandwidth Broker (BB), may be dedicated to perform resource reservation in the VPN, as well as to collect aggregate information for the upper levels. The BB can run on a dedicated machine or on a generic router and exchanges messages with the network nodes by some signaling protocols. We will analyze in detail in the following what kind of information the BB needs to exchange with the nodes in our proposal. It is worth noting that similar control architectures might be adopted on a larger scale, as well. Actually, control actions might be applied independently (perhaps on a longer time scale) within the domain of the ISP, to share the available resources among the VPNs of different customers, or even across multiple ISP domains.

Within a DiffServ [15] VPN domain, we consider three QoS classes: Expedited Forwarding (EF), Assured Forwarding (AF) and Best Effort (BE). EF is the maximum priority traffic, and it should experience very low loss, latency and jitter while traversing the network [16]; such a service appears to the endpoints like a point-to-point connection, or a leased line. AF is an intermediate class, with less pressing requirements on bandwidth, delay and jitter [17]. BE is the current Internet traffic with no guarantees.

A global strategy for admission control, bandwidth allocation and routing within such a VPN domain is introduced and discussed in the paper. The aim is to minimize blocking of the "guaranteed" flows (EF and AF), while at the same time providing some resources also to BE traffic. As the main objective is to apply the abovementioned control actions on-line, a computational structure is sought, which allows a relatively fast implementation of the overall strategy. In particular, a mix of analytical and simulation tools is applied jointly: "locally optimal" bandwidth partitions for the traffic classes are determined over each link (which set the parameters of the link schedulers in the Core Routers, the routing metrics and the admission control thresholds); a "high level" simulation (involving no packet-level operation) is performed, to determine the per-link traffic flows that would be induced by the given allocation; then, a new analytical computation is carried out, followed by a simulation run, until the allocations do not change significantly (under a stationary traffic pattern). Obviously, in the case of slow variations in the traffic load and proportions, the procedure can be carried on continuously in an adaptive fashion.

The paper is organized as follows. The following section outlines the main resource sharing tasks, and how the BB addresses them. Section 3 details the BB architecture, the models used and the control strategies. In Section 4, we report the results of a large number of numerical simulations, in order to analyze the performance of the control architecture. Section 5 contains the conclusions. The results of tests carried out to validate the analytical and simulative tools used in the proposed control scheme, and the configuration parameters adopted in all our performance evaluations, are reported in Appendices 1 and 2, respectively.

2. Resource allocation issues

A suitable queuing discipline is needed to achieve the required performance for every class. For this purpose, we assume to have different queues on each link, one for each kind of traffic. The EF queue is given priority over all others; if we limit the amount of EF traffic in the network, this results in very low delay and jitter experienced by these packets. To meet this requirement, EF traffic is assigned a priority queue, which is served by the scheduler as long as it is non-empty. Users requiring this kind of service are asked to declare their peak rate, which (given that this service is likely to be more expensive) we assume to coincide with their mean rate. Thus, EF users are classified into a limited number of subclasses, according to their peak rates. Classifiers and traffic conditioners (meters, markers, shapers and droppers) are required at the network edge to perform this task, as well as to verify the compliance of the users' traffic with the agreement, but this issue is beyond the scope of our treatment.

Once the EF queue is empty, high priority AF traffic is served, while low priority packets are served only if there are still unused AF reserved resources. Finally, when all these queues are empty, BE traffic is served. An AF user declares its committed and peak transmission rates. The committed transmission rate is always guaranteed to users; better performance can be achieved if the network load is low. Thus, at the edge nodes, packets belonging to the same micro-flow will be marked as high priority AF, as long as they do not exceed the user's committed rate; surplus traffic will be marked as low priority AF or discarded [18].
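As an illustration of this service order, the following sketch shows one way the per-link discipline could be expressed. It is not part of the proposed framework; the queue names and the dequeue function are ours.

```python
from collections import deque

# Illustrative queue set for one output link: EF has strict priority,
# then in-profile (high priority) AF, then out-of-profile AF, then BE.
queues = {
    "EF": deque(),       # peak-rate-limited premium traffic
    "AF_high": deque(),  # packets within the committed rate
    "AF_low": deque(),   # surplus AF packets, served only if AF bandwidth is left
    "BE": deque(),       # best effort, uses whatever remains
}

SERVICE_ORDER = ["EF", "AF_high", "AF_low", "BE"]

def dequeue_next():
    """Return the next packet under the strict-priority discipline
    described in the text: a queue is served only when all queues
    of higher priority are empty."""
    for name in SERVICE_ORDER:
        if queues[name]:
            return queues[name].popleft()
    return None  # link idle
```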

In this setting, the performance experienced by the packets depends on the amount of traffic flows of each category admitted into the network: CAC is then necessary to achieve the QoS promised by the DiffServ architecture. With such a discipline, each queue is served at the maximum speed allowed by the link; our aim is to control the bandwidth reserved for each class on each link. Given the peak rate for EF traffic, we know in advance how many connections can be accepted without exceeding the reserved bandwidth.

A similar consideration may hold for AF, by considering the committed rates or, in a more detailed fashion, the effective bandwidths, computed according to some criterion (see, e.g., [19–21]), instead of the peak rates. In practice, starting from the traffic descriptors (e.g., committed rate and burst size) declared in the Service Level Agreement (SLA) of each AF flow, we can obtain a capacity threshold, which ensures satisfaction of all the packet-level QoS requirements of that flow. In other words, we implicitly suppose that all QoS problems at the packet level are already accounted for by the link schedulers at the network routers; thus, we can concentrate on flow-level operations, in the same fashion as for EF flows. Indeed, as already noted in [12], with respect to the same model that we adopted for network planning, at the flow level this is equivalent to assuming the presence of Continuous Bit Rate (CBR) traffic, peak (or committed/equivalent) rate admission control, and throughput requirements only.

Briefly, we fix two bandwidth thresholds on each link: one for EF and the other for AF; the total rates of flows traversing the link for each category can never exceed these thresholds. BE traffic can exploit all the unused link bandwidth at any time instant. We constrain the sum of the thresholds to be less than or equal to the capacity of the link. Therefore, we adopt a Partitioning CAC (PCAC) policy [22], where we define two partitions, for EF and AF, but none for BE, whose upper bound is not fixed, but depends dynamically on the bandwidth left free by the other traffic flows. BE is thus guaranteed the link bandwidth exceeding the two thresholds, but the queuing disciplines do not assure any performance on delay and jitter. According to the partitioning policy, different traffic classes (distinguished by their requirements in terms of bandwidth and offered load) within EF and AF compete among themselves independently, in Complete Sharing (CS) fashion. Other parametric CAC policies might be adopted. As noted in [23], the Upper Limit (UL), Complete Partitioning (CP) and Trunk Reservation (TR) policies (among others) have been shown to outperform CS when significant differences exist between traffic classes' requirements; a form of dynamic TR has been analyzed in [24]. The partitioning policy is a hybrid between CP and CS, which lends itself almost naturally to a situation like the one we are considering, where traffic classes can be grouped into wider categories (EF and AF, in our case); moreover, as regards on-line analytical computations, it has the advantage of belonging to the class of Coordinate Convex policies, which admit a Product Form Solution [22]. It is worth noting that the degree of inflexibility caused by the use of thresholds is mitigated by the fact that we make them adaptive, and can change them on-line, according to an optimization procedure.
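A minimal sketch of this partitioning admission rule, with illustrative names (in the actual scheme the BB performs this check on every link of the pre-computed route):

```python
def pcac_admit(flow_rate, category, used, thresholds):
    """Partitioning CAC: an EF (or AF) flow is admitted only if the
    total rate of flows of its category on the link stays within the
    category threshold; BE has no threshold and is never blocked here.
    `used` and `thresholds` map 'EF'/'AF' to Mbps values, with
    thresholds['EF'] + thresholds['AF'] <= link capacity."""
    if category == "BE":
        return True
    return used[category] + flow_rate <= thresholds[category]
```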

Regarding routing, as EF and AF flows require QoS, we think that this kind of traffic needs a fixed route from source to destination, which appears to them as a virtual leased line [16]. Routing of both AF and EF connections is a priori fixed by the BB, so that it can perform the CAC on the bandwidth available on each link traversed and, possibly, deny access to the network. Thus, we have already identified a first task for our BB.

Since we wish to maximize the network throughput, we use simple min-hop metrics for both AF and EF, to reduce the number of links traversed. The whole architecture is, however, suitable for more advanced routing metrics [22], especially those based on aggregation, and multi-parameter ones [25]. For BE traffic routing, instead, we choose a cost equal to the reciprocal of the residual bandwidth left available by the connections with a guaranteed level of QoS. In this case, paths vary dynamically as the network load changes over time.
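For illustration, the BE metric can be plugged into a standard shortest-path computation; the following sketch assumes a simple adjacency-list graph and a table of per-link residual bandwidths, both hypothetical.

```python
import heapq

def be_route(graph, src, dst, residual):
    """Dijkstra shortest path where each link cost is the reciprocal of
    the bandwidth left free by the guaranteed (EF/AF) flows, as in the
    text. `graph[u]` lists the neighbors of node u; `residual[(u, v)]`
    is the free bandwidth on link (u, v) in Mbps."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in graph[u]:
            r = residual[(u, v)]
            if r <= 0:          # saturated link: unusable for BE
                continue
            nd = d + 1.0 / r    # reciprocal-residual-bandwidth metric
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]       # raises KeyError if dst is unreachable
    path.append(src)
    return path[::-1]
```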

EF, AF and BE traffic flows are generated at the nodes of the network as aggregates: this means that the same node can generate several EF or AF connections or BE flows. In such a way, we can easily view the node either as a host or as a border router connected to a hierarchically lower level network (as mentioned in Section 1).

Now, the issue is how to share link bandwidth in an adaptive fashion among the three competing classes, by also choosing the AF and EF thresholds according to some optimality criterion. Obviously, QoS traffic is critical, because it is thought of as a high revenue one. But we would like to give some form of "protection" to BE, as well, whose overall volume can indeed produce relevant revenue. To do this, we assign a cost to each class (as explained later), and try to minimize some given cost function. Nodes collect data on the incoming traffic load; the BB gathers all these data and computes a new allocation for all links. Finally, it sends the EF and AF thresholds to each node for its output links. This is the second task to be performed by the BB.

To complete the description of our architecture, let us note that a third task of the BB is to collect information from the nodes about traffic entering or exiting the domain, and to act as a node for the BB of an upper level within the same ISP (e.g., one controlling bandwidth assignments among the VPNs of different organizations).

3. Bandwidth broker architecture and allocation algorithm

Allocating resources over the whole network is a very complex task, especially if many links are present. Furthermore, the resources to be shared reside on each link. As previously mentioned, the BB knows the input traffic matrix for each flow from each single node; knowing also the different allocations of resources for every link, the BB would be able to simulate the behavior of the whole VPN in terms of routing, mean link utilizations and blocking probabilities. So, the BB is certainly the best network control point to perform a centralized task for the optimization of the whole system. However, when the input traffic matrix changes enough to make new allocations necessary, in order to cope with the variations and maintain a high level of utilization, the task performing dynamic resource allocation has to be launched on the BB and should complete its calculations as soon as possible.

In order to reduce the computational complexity of this allocation task, we have decided to calculate the new CAC thresholds independently for each link (in a "locally optimal" way), and to use a very fast network simulator to obtain all the data necessary to perform this operation. More specifically, the independent link allocations, calculated in this way, have to be tested with a new simulation process, to understand whether the new network settings have produced routing changes, undesired link utilizations or high blocking probabilities. In particular, it has to be noted that our BE routing metric depends on the links' EF and AF allocations and on the ensuing traffic distribution; but the allocation itself, in its turn, depends on the paths selected by the routing algorithm. A congested link at a certain time instant can become an empty one after allocation, owing, for example, to the discovery of a new optimal route for BE traffic. This mechanism should then be applied iteratively, until a fixed point is hopefully reached, if the traffic patterns remain stationary for a sufficiently long time interval.

Therefore, the mechanism is composed of the iteration of two different modules: a fast network simulator (a numeric fluid model that works at the aggregate flow level) and an allocation algorithm, based on an analytical model (a stochastic knapsack [22]) and on the output of the simulator, which calculates the resource reservations independently for each link.
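The alternation of the two modules can be summarized as the following fixed-point iteration; this is a sketch under our own naming, where the caller supplies `simulate` (the fluid simulator) and `allocate_link` (the knapsack-based per-link optimizer) as callables.

```python
def optimize_network(simulate, allocate_link, traffic_matrix, links,
                     max_iter=10, tol=1):
    """Alternate flow-level simulation and independent per-link
    allocation until the thresholds stop changing (a fixed point),
    as described in the text. Thresholds are (T_EF, T_AF) in Mbps."""
    thresholds = {l: (0, 0) for l in links}
    for _ in range(max_iter):
        # Flow-level simulation: offered loads and BE usage per link,
        # under the routing induced by the current thresholds.
        per_link_load = simulate(traffic_matrix, thresholds)
        new = {l: allocate_link(per_link_load[l]) for l in links}
        if all(abs(new[l][0] - thresholds[l][0]) <= tol and
               abs(new[l][1] - thresholds[l][1]) <= tol for l in links):
            return new                 # converged to a fixed point
        thresholds = new
    return thresholds                  # best effort after max_iter rounds
```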

Summarizing the resource allocation architecture, the BB alternates simulation processes and allocation procedures. Through the model we generate network traffic and evaluate the offered load on every link; successive allocations try to share the available link bandwidth among the different traffic classes. If the allocation converges, network resources are optimized, and the BB can set the new thresholds on all nodes. When variations in the incoming traffic are measured at the nodes, the BB is notified and starts a new allocation procedure.

3.1. Traffic simulation models

In a DiffServ network, we can reasonably consider that the traffic behavior depends on the QoS level: in fact, it is natural to think that AF and EF will be used to carry real-time flows, like multimedia streams, whereas BE will be used primarily for data transport (i.e., web, mail, file sharing or other). Moreover, in a DiffServ environment, both admission control and traffic shaping have to be applied to QoS flows. Therefore, in our model we had to take into account the significant differences between services with an assured QoS level and the common BE ones.

As noted before, in this kind of environment we need a traffic model that can describe the average behavior of aggregate flows with a very low computational effort. Moreover, the aim of the traffic model is not to describe the network dynamics at the IP packet level, but to extract some performance parameters, like link bandwidth occupations or blocking probabilities, with a good approximation. It is obvious that for this particular purpose a packet-level simulator, like ns-2 [26], is not the most suitable choice.

Therefore, as regards BE traffic (which we suppose to be composed primarily of TCP connections), characterized by an elastic behavior, we have chosen to implement a traffic simulation model based on a fluid approximation, which can efficiently describe the average behavior of aggregate flows, while ignoring details at the packet level or about the state of each TCP connection. With a similar viewpoint, EF and AF traffic is described exclusively at the call level (where the peak and the committed/equivalent bandwidths are assigned, respectively), on the basis of the aforementioned analytical models, whose underlying hypotheses on the call-level behavior of the sources are used also for traffic generation. Recalling the bandwidth assignment that is operated on EF and AF (their connections use a shortest path route from source to destination, which appears to them as a virtual leased line), in order to ensure performance guarantees, we will refer to them collectively as Guaranteed Performance (GP) flows. This hypothesis is quite realistic if we consider that, actually, this kind of IP traffic is frequently transported by means of switching technologies that simply assure the desired QoS level (e.g., MPLS [27]). This simple GP traffic model gives us the possibility to reduce the overall complexity level of the proposed allocation scheme, and it can be regarded as a "worst case" model, since all the EF and AF flows occupy their maximum reserved bandwidth, leaving in this way fewer resources to BE traffic.

For what concerns the BE traffic, the rationale of the very simple model adopted relies on the fact that our goal is not to represent the dynamics of the individual TCP connections, but rather to capture the aggregate and average behavior of TCP flows between an ingress–egress router or source–destination pair for control purposes. In greater detail, our BE traffic model can be regarded as a "fast" procedure – namely, the "Stationary Model" previously introduced in [12] – which operates on traffic aggregates, by using a fluid approximation and the max–min fair rule [28], to estimate their average behavior in terms of link utilizations and mean throughput. This model is indeed an evolution of the "fluid simulator" used in [14,29]. It needs only the offered loads of all BE flows as input parameters, and it provides an accuracy level very similar to that of fluid simulators (as shown in Appendix 1). Its computational times are low enough to allow its effective usage inside complex optimization procedures (see Section 4.4): it is worth recalling that this model is used to estimate the BE throughput every time the available bandwidth on a network link changes (i.e., at each AF or EF flow's birth or death).
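As a sketch of the max–min fair rule on which the Stationary Model relies, the following is the classic progressive-filling computation [28] in our own rendering, not the authors' code; paths are assumed non-empty.

```python
def max_min_fair(flows, capacity):
    """Max-min fair rates for BE aggregates. `flows` maps a flow id to
    the list of links on its path; `capacity` maps a link to the
    bandwidth left over for BE on that link (same units throughout)."""
    rate = {}
    cap = dict(capacity)
    active = dict(flows)                      # flows not yet bottlenecked
    while active:
        # Equal share each link could still grant to its active flows.
        count = {}
        for path in active.values():
            for link in path:
                count[link] = count.get(link, 0) + 1
        share = {l: cap[l] / count[l] for l in count}
        bottleneck = min(share, key=share.get)
        fair = share[bottleneck]
        # Freeze every flow crossing the bottleneck at the fair share.
        for fid in [f for f, p in active.items() if bottleneck in p]:
            rate[fid] = fair
            for link in active[fid]:
                cap[link] -= fair
            del active[fid]
    return rate

# Example: two flows sharing link L2 (8 Mbps) get 4 Mbps each.
# max_min_fair({"f1": ["L1", "L2"], "f2": ["L2"]}, {"L1": 10, "L2": 8})
```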

Another element in our simulation environment is the GP traffic source. In fact, we also approximate the arrival process of the GP flows with a sort of aggregate source. This source is thought to generate an aggregate of GP flows, which is composed of all the AF (or EF) connections that traverse the same path between two network nodes (like FECs in MPLS). These connections appear directly available at the ingress Edge Router, which can indifferently represent the access node for the hosts or a gateway to another VPN. GP flows are generated at each source with exponentially distributed and independent inter-arrival and service times. To reduce the computational effort of the traffic model, we decided to describe both AF and EF flows as CBR traffic (as discussed in Section 2, AF flows can be treated this way, if their effective bandwidth is used). For EF flows, the constant rate is the maximum assured bandwidth, whereas, for AF ones, it is thought of as the minimum nominal bandwidth, which must be guaranteed in every condition (i.e., committed or effective bandwidth). If there are sufficient resources, the AF flows can increase their rates up to a fixed percentage.
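Such an aggregate GP source could be sketched as follows; the function and parameter names are ours. With the settings used later in the paper, `mean_holding_s` would be 200 s for EF aggregates and 800 s for AF ones.

```python
import random

def gp_source(mean_interarrival_s, mean_holding_s, horizon_s, seed=0):
    """Generate (arrival, departure) time pairs for one aggregate GP
    source, with exponentially distributed, independent inter-arrival
    and holding times, as assumed in the text."""
    rng = random.Random(seed)
    t, flows = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interarrival_s)  # next arrival
        if t > horizon_s:
            return flows
        flows.append((t, t + rng.expovariate(1.0 / mean_holding_s)))
```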

To obtain steady and meaningful average values, like BE link utilizations or GP blocking probabilities, from the flow-level simulation model, the simulation lengths are determined by a stabilization criterion, which ensures that the measured mean value of the parameters under study differs from the statistical mean by no more than 5%, with a 95% confidence level.
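A stopping rule of this kind might look as follows; the use of the normal quantile 1.96 in place of the exact Student's t value (mentioned later, in Section 4.4) is our simplification, adequate for the large sample counts involved.

```python
import math
from statistics import mean, stdev

T_095 = 1.96  # ~95% two-sided quantile for large samples (assumption)

def stable(samples, rel_width=0.05):
    """Stop the simulation when the half-width of the 95% confidence
    interval of the measured mean is within 5% of the mean itself,
    mirroring the stabilization criterion in the text."""
    if len(samples) < 30:
        return False
    m = mean(samples)
    half = T_095 * stdev(samples) / math.sqrt(len(samples))
    return m != 0 and half / abs(m) <= rel_width
```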

3.2. Model-based resource allocation

As the BB knows the input traffic matrix for each flow from each single node, by executing a simplified and fast simulation at the flow level it can collect data about the offered load, in terms of the number of flows trying to cross a given link (GP) and the average usage of BE bandwidth on the link. From this point on, it begins an independent allocation for each link.

We choose a bandwidth step bw_step, equal to the minimum admissible rate for GP connections. Then, the algorithm starts by calculating the costs for all possible bandwidth assignments, which are generated as follows.

The GP-assigned bandwidth is initially set to zero, then it is increased by bw_step, until the link bandwidth is reached. For each of these values, the GP bandwidth is shared between EF and AF in a similar way: the EF threshold is set initially to zero and the AF threshold to the total GP assigned bandwidth. All possible assignments are generated by incrementing the EF and decrementing the AF threshold by bw_step.
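In code, the candidate generation just described amounts to a double loop over multiples of bw_step (a direct transcription, with names of our choosing):

```python
def threshold_candidates(link_bw, bw_step):
    """Enumerate all (T_EF, T_AF) pairs: the total GP bandwidth grows
    from 0 to the link bandwidth in bw_step increments, and each total
    is split between EF and AF in bw_step increments as well."""
    for gp_total in range(0, link_bw + 1, bw_step):
        for t_ef in range(0, gp_total + 1, bw_step):
            yield t_ef, gp_total - t_ef   # (EF threshold, AF threshold)
```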

3.2.1. Cost calculation

In this particular environment, GP traffic looks like traditional circuit-switched traffic: there are incoming connections of a given size (peak rate for EF and committed or effective rate for AF), competing for a limited resource viewed at each link. This is the so-called Stochastic Knapsack problem; the blocking probabilities can be easily calculated with an extended Erlang loss formula [22].

Let K denote the number of possible rates (of EF, for example) and C the resource units. Connections of the kth class are distinguished by their bit rate, $b_k$, their arrival rate, $\lambda_k$, and their mean duration, $1/\mu_k$. Let $\rho_k = \lambda_k/\mu_k$, and let us denote by $n_k$ the number of active connections of class k and by S the space of admissible states:

$$S = \{\mathbf{n} \in \mathbb{I}^K : \mathbf{b} \cdot \mathbf{n} \le C\}, \quad (1)$$

where $\mathbb{I}^K$ denotes the K-dimensional set of non-negative integers, $\mathbf{b} = (b_1, b_2, \ldots, b_K)$, $\mathbf{n} = (n_1, n_2, \ldots, n_K)$ and $\cdot$ is the scalar product.

By defining $S_k$ as the subset of states in which the knapsack admits an arriving class-k object, that is

$$S_k = \{\mathbf{n} \in S : \mathbf{b} \cdot \mathbf{n} \le C - b_k\}, \quad (2)$$

the blocking probability for a class-k connection is

$$B_k = 1 - \frac{\sum_{\mathbf{n} \in S_k} \prod_{j=1}^{K} \rho_j^{n_j}/n_j!}{\sum_{\mathbf{n} \in S} \prod_{j=1}^{K} \rho_j^{n_j}/n_j!}. \quad (3)$$

The blocking probability is computed separately for EF and AF, since each category has its own reserved bandwidth and incoming traffic, independently of the other. Efficient algorithms for this computation can be found in [22].
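One such efficient algorithm is the Kaufman–Roberts recursion, which computes the occupancy distribution of the stochastic knapsack in O(C·K) time; the following sketch is our rendering, not code from the paper.

```python
def kaufman_roberts(C, b, rho):
    """Occupancy distribution q(c) and per-class blocking of the
    stochastic knapsack. C is the capacity in bandwidth units, b[k]
    the (integer) class-k rate and rho[k] = lambda_k / mu_k the
    class-k offered load in Erlangs."""
    g = [0.0] * (C + 1)
    g[0] = 1.0
    for c in range(1, C + 1):
        g[c] = sum(b[k] * rho[k] * g[c - b[k]]
                   for k in range(len(b)) if b[k] <= c) / c
    total = sum(g)
    q = [x / total for x in g]
    # Class k is blocked when fewer than b[k] units are free.
    B = [sum(q[C - b[k] + 1:]) for k in range(len(b))]
    return q, B

# Example: q, B = kaufman_roberts(C=50, b=[1, 2], rho=[10.0, 4.0])
```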

The link cost is derived from the blocking probabilities of the different rate classes:

$$J_{GP} = \max_k \{B_k\}, \quad (4)$$

and it holds for both EF and AF.

For BE traffic, we have a single aggregate: it is difficult to say whether it has enough space or not, because TCP aims at occupying all the available bandwidth. A possibility consists of deciding that the bandwidth available to BE on a link can be considered sufficient if its average utilization by the BE traffic is within a given threshold. Fixing this kind of threshold analytically is a very complex task; instead, we have chosen to run several ns-2 simulations to extract information about the average behavior of an individual connection, with respect to different values of the aggregate utilization ratio. In Figs. 1 and 2, we report two among several results obtained about the individual TCP connection's performance; in both cases, we observe a quick deterioration of the average rate per connection as the aggregate rate approaches 80% of the bandwidth. Thus, we can choose to impose 80% as the maximum threshold for the utilization ratio. This threshold is indeed fairly conservative; as noted in [30] (see also [31,32]), the performance of a fairly shared bottleneck is excellent, as long as demand is somewhat less than capacity.

Fig. 1. Average rate per connection, with respect to different values of aggregate utilization of a single link, whose bandwidth is 10 Mbps.

Fig. 2. Average rate per connection, with respect to different values of aggregate utilization of a single link, whose bandwidth is 100 Mbps.


To compute the utilization, let q(c) be the probability of having c resources occupied in the system by EF and AF connections (which can be calculated from the knapsack model, given the EF and AF thresholds); the average quantity UTIL of occupied resources is then

$$\mathrm{UTIL} = \sum_{c=0}^{C} c \, q(c). \quad (5)$$

Thus, for a total link bandwidth LINKBW, the mean available bandwidth for BE is

$$\mathrm{AVLB}_{BE} = \mathrm{LINKBW} - \mathrm{UTIL}_{EF} - \mathrm{UTIL}_{AF}. \quad (6)$$

The BB keeps track of the BE mean occupied bandwidth on the link, OCCBW, during the simulation, so we are now able to define the utilization ratio UT as

$$\mathrm{UT} = \frac{\mathrm{OCCBW}}{\mathrm{AVLB}_{BE}}. \quad (7)$$

Starting from this, we expect that connections on the link are given enough bandwidth if this ratio is less than a given value d = 0.8.

3.2.2. Optimization function

Several cost functions might be selected to optimize the bandwidth sharing, depending on the goals one is pursuing. Our aim is to equalize the costs of the traffic classes, so that we can vary the system's fairness simply by changing some weight parameters.

However, we want to give some kind of priority to the AF and EF traffic classes, with respect to the BE one. Thus, we have chosen a particular equalization cost function J, which gives us the possibility to smoothly impose two different constraints: the first one on the maximum blocking probability JGP, and the second one on the BE aggregate utilization ratio.

To take these constraints into account, we have defined an ad hoc BE cost function, namely

$$J_{BE} = \frac{\arctan\{a[\mathrm{UT} - d]\} + \frac{\pi}{2}}{\pi - \operatorname{arctanh}\{\mathrm{UT} - b\}}, \quad (8)$$

where d > 0 is the desired utilization ratio value, and a > 0 and b > 0 are two shape parameters; moreover, we have introduced a penalty function for the GP traffic, that is

$$\widetilde{J}_{GP} = \frac{\arctan\{e \cdot [J_{GP} - u]\} - \arctan[-e u]}{\arctan\{e \cdot [1 - u]\} - \arctan[-e u]}, \quad (9)$$

where e > 0 is a shape parameter and u is the maximum acceptable blocking probability.

With these new cost components we are now ready to write the final equalization function as

$$J_{TOT} = J_{BE} + J_{GP} + \widetilde{J}_{GP} - J_{BE} \widetilde{J}_{GP}, \quad (10)$$

so that the equalization of costs results in the minimization

$$\min_{T_{EF}, T_{AF}} \{J_{TOT}\} \quad (11)$$

with respect to $T_{EF}$ and $T_{AF}$, which are the thresholds for EF and AF traffic, respectively.
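Assuming the reconstruction of Eqs. (8)–(10) above, the minimization of Eq. (11) reduces to an exhaustive search over the discrete threshold grid. The sketch below is ours: the defaults anticipate the parameter values used in Section 4, and `evaluate` is a placeholder returning (UT, J_GP) for a candidate pair, e.g., via the knapsack model and the measured OCCBW.

```python
import math

def j_be(ut, d=0.77, a=50.0, b=0.7):
    """BE cost of Eq. (8): low while UT is below the target d, rising
    steeply above it (valid for UT < 1 + b, the arctanh domain)."""
    return (math.atan(a * (ut - d)) + math.pi / 2) / \
           (math.pi - math.atanh(ut - b))

def j_gp_penalty(j_gp, e=500.0, u=0.05):
    """Normalized GP penalty of Eq. (9): ~0 while the worst blocking
    probability J_GP is below the bound u, ~1 as it approaches 1."""
    base = math.atan(-e * u)
    return (math.atan(e * (j_gp - u)) - base) / \
           (math.atan(e * (1 - u)) - base)

def j_tot(ut, j_gp):
    """Equalization cost of Eq. (10), combining the three components."""
    p = j_gp_penalty(j_gp)
    return j_be(ut) + j_gp + p - j_be(ut) * p

def best_thresholds(link_bw, bw_step, evaluate):
    """Eq. (11): pick the (T_EF, T_AF) pair of minimum total cost over
    the bw_step grid enumerated as in Section 3.2."""
    candidates = ((t_ef, gp - t_ef)
                  for gp in range(0, link_bw + 1, bw_step)
                  for t_ef in range(0, gp + 1, bw_step))
    return min(candidates, key=lambda t: j_tot(*evaluate(*t)))
```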

Fig. 3. Graph of the cost equalization function used in link optimization.

A graph of the chosen cost function is reported in Fig. 3, where d = 0.77, a = 50, b = 0.7, e = 500 and u = 0.05. As can be seen, this function can be roughly divided into three different regions, which are smoothly interconnected:

• the first region corresponds to values of UT and JGP that are both below the upper bounds; namely, both constraints are satisfied, and the associated global cost is very low, as the link is not congested;

• the second region corresponds to an excessive value of UT by BE traffic, while the blocking probabilities remain below their upper bound: in this case, the cost is quite high, as there are insufficient resources on the link to carry both BE and GP traffic efficiently, and the allocator decides to penalize BE, in order to respect the constraint on the GP blocking probabilities;

• finally, the third region corresponds to the case where the GP constraint is not respected: the cost equalization function takes on higher values because, in heavy congestion conditions, the network optimization attempts to respect the GP constraint first, and then, if possible, to protect the BE traffic.

Obviously, the optimization problem is not defined for all values of UT and JGP, but only in a limited domain, which depends on both the BE and GP traffic loads on the considered link, with fixed bandwidth. Since the reserved bandwidths are calculated with a minimum granularity equal to the lowest admissible rate for GP flows, the domain of this problem is also discrete. As an example, Fig. 4 reports one of the computed domains, which was obtained by varying all the possible reserved bandwidth thresholds for AF and EF flows, under fixed link capacity and traffic load, and by calculating the resulting UT and JGP values.

Fig. 4. Example of a domain for the link resource allocation problem.

4. Numerical results

With the aim of analyzing the performance of the proposed bandwidth allocation scheme, in the following we report and discuss the results of three test sessions, carried out with the network topologies in Figs. 5, 10 and 18, respectively, each characterized by an increasing complexity level.

For each network considered, we studied how the allocator's efficiency changes according to the traffic load. Moreover, we have decided to compare the performance results of the proposed framework with those of a CS-based network: with this kind of policy we expect to obtain (with respect to a PCAC policy with the same allocation mechanism) no protection for BE traffic, but better (overall) GP performance, especially at high network loads, because in this case the only thresholds for the CAC are the link bandwidths.

This Section is organized as follows. Sections 4.1, 4.2 and 4.3 report the performance test results obtained with the three network topologies considered; namely, "simple", "intermediate" and "complex", respectively. A large set of input parameters to the simulator, like traffic matrixes and link thresholds obtained from the optimization, can be found in Appendix 2. Section 4.4 includes a detailed analysis of the computational complexity of the allocator. Some validation results regarding the accuracy of both the analytical and the simulative modules can be found in Appendix 1.

Fig. 5. The simple network topology: the bandwidth is fixed to 100 Mbps for all links, except L3 and L6 that are fixed to 50 Mbps.

Fig. 7. Simple Network environment: maximum EF blocking probabilities.

Some basic parameter values are kept constant throughout all tests in the paper: the mean lifetime of EF connections is fixed to 200 s, that of AF connections to 800 s, and the mean burst size of BE traffic is 0.122 MBytes. The propagation delay of all links is 5 ms. The values of the cost function parameters are fixed to d = 0.77, a = 50, b = 0.7, e = 500 and u = 0.05, respectively, which means that we try to maintain the GP blocking probability under 5% and the utilization ratio for BE under 77%.

Fig. 8. Simple Network environment: maximum AF blocking probabilities.

4.1. Simple network environment

The first test network used for the allocator's performance evaluation is represented in Fig. 5. This network topology is not too complex, and allows easy understanding of how the mechanism works: in fact, it includes only three bottleneck links (with reference to Fig. 5, L2, L3 and L6, respectively) where the resource sharing among the traffic classes might be critical.

The performance test has been performed by increasing the GP traffic offered load (i.e., by raising the number of incoming AF and EF flows), while keeping the BE traffic constant. Details about the traffic matrix parameters and the optimal link thresholds calculated by the allocator are reported in Tables 2 and 4 in Appendix 2.

Fig. 6. Simple Network environment: overall throughput of BE flows.

Fig. 6 depicts the total average throughput of all BE aggregate flows, while Figs. 7 and 8 show the maximum blocking probabilities for EF and AF flows, respectively. From all these results we can remark how the BE aggregates are "protected", as far as possible, by the GP allocated thresholds, as the network congestion level rises. On the contrary, we can stress how a CS policy (without BE protection) can obviously improve the GP performance in terms of higher average throughputs and lower blocking probabilities.

To better understand how the allocation decisions are adaptively extracted from the congestion level of the links, Fig. 9 shows, for each bottleneck link, how the cost of the "locally optimal" link resource configuration changes according to the increase in network offered load. In this respect, we have reported all the optimal allocations calculated during the test on the cost function of the link that best reflects the specific behavior we want to highlight.

Fig. 9. Mapping on the cost function profile of the change in resource allocation on links L2, L3 and L6, while the traffic load increases.

In Fig. 9a, we can note how the allocator tries to satisfy both the BE and GP constraints on the bottleneck link L2, as long as the network congestion is low; when the traffic load increases and the link resources are no longer sufficient, the allocator decides to maintain the GP blocking probability constraint and to sacrifice BE performance, accepting a value of the utilization ratio higher than the desired one. On the other hand, Fig. 9b highlights the shift in the optimal configuration during the increase of the traffic load for link L3: in this case, the resources remain sufficient to satisfy both the GP constraint and the BE one, and so all the allocations appear mapped onto the lower part of the cost function. Finally, as we can observe for link L6 in Fig. 9c, at first, when the traffic load rises slightly, the allocator does not have enough resources to assure the BE ratio constraint; afterwards, as the traffic load continues to increase, there are no more resources to satisfy the GP constraint, as well. Note that in this last case the BE ratio values of the calculated allocation do not diverge, because BE traffic continues to use all the reserved bandwidth that the AF and EF flows do not temporarily utilize.

4.2. Intermediate network environment

The second performance test session is based on the network topology in Fig. 10. The bandwidth of all links is fixed to 80 Mbps. In this new test session we used six EF aggregate flows, five AF and six BE ones; the tests shown were performed by increasing only the GP traffic loads, up to exceeding the overall network capacity (see Table 3 in Appendix 2 for more details on the traffic matrix used).

Fig. 10. The Intermediate network topology used for evaluating the allocator's performance.

Figs. 11 and 12 report the bandwidth utilizations and blocking probabilities, respectively, of the flows crossing link 8: these results show that, in the presence of network congestion, the proposed mechanism tends to equalize the resources reserved to the QoS traffic classes and the BE one, with respect to the CS policy. More in detail, the proposed mechanism allows achieving a bandwidth utilization about 8 Mbps larger than the CS policy (on a total BE offered load equal to 22 Mbps). Regarding the blocking probabilities, Fig. 12 shows that the allocator respects the imposed constraints (EF and AF maximum blocking probabilities equal to 4% and 3%, respectively), until the network becomes heavily congested (offered load values equal to 294, 305 and 316 Mbps are larger than the whole network capacity). Moreover, as can be noted from Fig. 12, while, with the CS policy, we obtain blocking probabilities near to zero when there is no network congestion, the proposed mechanism results in blocking probabilities close to the constraint values. This effect is substantially due to a design choice: we have chosen to limit, even when there is no congestion, the bandwidth utilization of QoS traffic, to avoid possible transients with no resources for BE traffic.

Fig. 11. Throughput of all the flows crossing link 8 with allocation and with CS policy.

Fig. 12. Maximum blocking probability value for all the EF flows crossing link 8 with allocation and with CS policy.

Fig. 14. Throughput of BE flow 4 in both allocator and CS environments.

Figs. 13 and 14 report the maximum blocking probability values of AF flow 2 (see again Table 3) and the throughput of BE flow 4, respectively. Also in this case, when the network becomes congested, the allocation policy tends, somehow, to trade off QoS blocking probabilities with BE throughput.

Figs. 15–17 show the overall performance results obtained in this test session. More in particular, Fig. 17 shows the total BE offered load and the sum of the throughputs of all BE aggregate flows, for both the allocation with our mechanism and the CS policy. Figs. 15 and 16 show the maximum blocking probability values for EF and AF flows, respectively, again for both our mechanism and the CS policy. As we can observe from these results, under considerable congestion levels the allocation mechanism, by limiting the resources reserved to GP, assures better performance to the BE traffic class than the CS policy.

Fig. 13. Blocking probability of AF flow with id = 2 in both allocator and CS environments.

4.3. Complex network environment

We finally performed a test session to evaluate the performance of the proposed allocation framework in a complex network environment. In greater detail, we chose the network topology in Fig. 18, which includes 13 nodes and 40 links with bandwidth equal to 1 Gbps, except the ones between node 0 and node 2, and node 1 and node 0, whose bandwidth is fixed to 350 Mbps; these two links can be considered as the only two bottlenecks in the network. For what concerns the traffic matrix, we used the aggregate flows shown in Table 5 (Appendix 2). The traffic offered loads are kept constant for all the aggregated flows, while the performance tests were carried out with a variable number of AF aggregates: we started with only AF0 and AF1; then, we added a new aggregate for each following test, up to including the AF10 flow.

Fig. 15. Maximum blocking probability for all the EF flows with allocation and with CS policy.

Fig. 16. Maximum blocking probability for all the AF flows with allocation and with CS policy.

Fig. 17. Total offered load and sum of the throughputs of all the BE aggregate flows with allocation and with CS policy.

Fig. 19. Maximum blocking probability for both the EF and the AF flows with the allocation policy.

Fig. 20. Overall BE throughput with allocation and with CS policy.

Fig. 19 shows the blocking probabilities obtained with the allocator; the CS ones are not reported, since they are very close to zero (about 0.001%).

Fig. 18. The complex network topology.

As shown in Fig. 20, this performance gap in terms of GP blocking probability with respect to the CS policy is clearly balanced by a higher BE throughput. Thus, also in this case, the proposed mechanism tends to balance the resources reserved to the QoS traffic classes and the BE one.

4.4. Complexity analysis

As regards the analysis of computational complexity, we must consider the specific features of both the analytical and simulative models and their iterations. In particular, the number of iterations between the simulative and the analytical modules needed to achieve convergence strictly depends on the dynamics of the link metrics of the routing algorithm in use: more specifically, as the dynamic level of the link metrics rises, we need a larger number of iterations. In all tests performed, we used link weights depending on the available bandwidth for BE traffic, and we never exceeded four iterations.

With respect to the simulative module, we may note that the latter is substantially composed of an event-driven simulator, which computes the average bandwidth utilization of the network links at each GP flow's birth and death. The simulation is stopped only when the output parameters (i.e., the link utilizations and the GP blocking probabilities) are stabilized.

The computational complexity needed to perform a single simulation event is very low, and mostly depends, in a nearly linear way, on the numbers of both GP and BE aggregated traffic flows and of network links. The largest contribution to computational complexity is basically due to the number of events that we need to simulate to achieve a stable estimation of all the measured parameters. In greater detail, in order to evaluate the stability degree of the output parameters, we used a method based on the Student's t probability distribution, which substantially fixes the number of simulated events and, thus, the time horizon of the simulations. In this sense, we can reasonably conclude that the time horizon and the number of simulated events (and then the computational complexity of the simulative module) generally increase according to the second-order statistics of the measured network parameters.

For what concerns the analytical module, we have to underline that it is independently applied on each link; thus, its complexity rises linearly with the number of network links. Moreover, to further reduce the computation times, we can parallelize multiple instances of this module (i.e., computing the bandwidth allocations of different links simultaneously). With this assumption in mind, the computational complexity of the analytical module can be considered equal to the one needed to compute the allocation of a generic network link. Moreover, the latter linearly depends on the ratio between the link bandwidth and the minimum GP class rate.

Fig. 21 reports the computation times of the simulative module and of the analytical one (for a single link allocation only), obtained with the complex topology used in Section 4.3. All these results were obtained on an AMD Athlon MP processor with an internal clock equal to 1.5 GHz and a cache size equal to 256 kByte.

Fig. 21. Average computation times of the simulative and the analytical modules, according to different numbers of AF flows.

5. Conclusions

A multiservice IP VPN based on the DiffServ paradigm has been considered, composed of Edge Routers (ERs) and Core Routers (CRs), forming a domain that is supervised by a Bandwidth Broker (BB). The traffic in the network belongs to three basic categories: Expedited Forwarding (EF), Assured Forwarding (AF) and Best Effort (BE). A global strategy for admission control, bandwidth allocation and routing within the domain has been introduced and discussed. The aim is to minimize blocking of the "guaranteed" flows (EF and AF) within a fixed constraint on the maximum blocking value, while at the same time providing some resources also to BE traffic in terms of "utilization ratio". As the main objective is to apply the abovementioned control actions on-line, a computational structure has been devised, which allows a relatively fast implementation of the overall strategy. In particular, a mix of analytical and simulation tools is applied jointly: "locally optimal" bandwidth partitions for the traffic classes are determined over each link (which set the parameters of the link schedulers in the CRs, the routing metrics and the admission control thresholds); a "high level" simulation (involving no packet-level operation) is performed, to determine the per-link traffic flows that would be induced by the given allocation; then, a new analytical computation is carried out, followed by a simulation run, until the allocations do not change significantly. The possible convergence of the scheme (under a fixed traffic pattern) has been investigated by ns-2 simulation, and the results of its application under different traffic loads have been studied on the test networks. The results show the capability of the proposed method of controlling the performance of the network and of maintaining a good level of global efficiency.

Acknowledgement

This work was supported in part by the Italian Ministry of Education, University and Research (MIUR) under the TANGO and FAMOUS projects.

Fig. 22. Traffic model error with respect to ns-2 in estimating the throughput of EF, AF and BE aggregated traffic flows.

Appendix 1. Validation tests

In this Appendix, we report some numerical results obtained to validate both the simulative and the analytical components of the proposed framework. All tests shown were carried out by using the simple network topology in Fig. 5 and the traffic matrix in Table 2. The link thresholds are those calculated during Simulation # 8 of the allocation test in Section 4.1 (shown in Table 4 in Appendix 2). In greater detail, our first goal is to understand and estimate the accuracy level of the traffic model used within the allocation mechanism. To this purpose, we decided to compare our flow-level simulation model with the packet-level Network Simulator 2 (ns-2). Moreover, to obtain steady and meaningful average values from both the simulative model and ns-2, the simulation lengths are determined by a stabilization criterion, which ensures that the measured mean value of the parameters under study differs from the statistical mean by no more than 5%, with a 95% confidence level.

Fig. 22 reports the error of the adopted traffic model with respect to ns-2 in estimating the average throughput of EF, AF and BE aggregated flows, respectively. Observing this figure, we can note how low the model error is for all traffic classes. In fact, it is generally much lower than 0.05%, except for BE traffic, whose error increases up to 0.65% when the network reaches total congestion. For what concerns the blocking probabilities, we observed identical values between ns-2 and the traffic model in all tests performed.

In the second validation session, we verified the invariance of the allocation results with respect to different initial network configurations. With this aim, using the network topology in Fig. 5 and the traffic matrix of Simulation # 7 in Table 2 (Appendix 2), we have executed some allocations, starting from 21 different configurations of reserved bandwidth on all the network links.

Table 1
Invariance test

                Link 1      Link 2      Link 6      Link 7
                EF   AF     EF   AF     EF   AF     EF   AF
Sim 1            0    0      0    0      0    0      0    0
Sim 2            0   20      0   20      0   10      0   20
Sim 3            0   40      0   40      0   20      0   40
Sim 4            0   60      0   60      0   30      0   60
Sim 5            0   80      0   80      0   20      0   20
Sim 6            0  100      0  100      0   50      0  100
Sim 7           20    0     20    0     10    0     20    0
Sim 8           20   20     20   20     10   10     20   20
Sim 9           20   40     20   40     10   20     20   40
Sim 10          20   60     20   60     10   30     20   60
Sim 11          20   80     20   80     10   40     20   80
Sim 12          40    0     40    0     20    0     40    0
Sim 13          40   20     40   20     20   10     40   20
Sim 14          40   40     40   40     20   20     40   40
Sim 15          40   60     40   60     20   30     40   60
Sim 16          60    0     60    0     30    0     60    0
Sim 17          60   20     60   20     30   10     60   20
Sim 18          60   40     60   40     30   20     60   40
Sim 19          80    0     80    0     40    0     80    0
Sim 20          80   20     80   20     40   10     80   20
Sim 21         100    0    100    0     50    0    100    0
Min allocation  48   43     46   41     26   42     28   26
Max allocation  50   44     46   42     26   44     28   26

Initial network configurations and optimal thresholds. All the threshold values are in Mbps.

Table 1 shows both the 21 initial conditions and the corresponding optimal link bandwidth thresholds, calculated with the proposed scheme. Observing these results, we can note how similar the resulting bandwidth allocations are (they differ within a unit of bandwidth granularity); thus, we can clearly conclude that the new optimal network allocations do not depend on the starting network configurations, but correctly depend only on the traffic matrix.



Appendix 2. Test configuration parameters

This Appendix includes the configuration parameters used in all the test sessions shown in Section 4. In particular, Tables 2 and 4 regard the simple network environment (Section 4.1), and report the traffic matrix parameters and the resulting optimal thresholds for GP traffic, respectively. Tables 3 and 5 contain all the traffic parameters used in the Intermediate and the Complex network environments (Sections 4.2 and 4.3).

Table 2
Simple network environment

          EF 0  EF 1  AF 0  AF 1  BE 0  BE 1  BE 2
Tx Node     0     1     0     1     2     4     1
Rx Node     5     6     6     6     5     6     6
Sim 1      28    14    28    15    24    11.5  11.5
Sim 2      29    15    29    15    24    11.5  11.5
Sim 3      30    16    30    15    24    11.5  11.5
Sim 4      31    17    31    15    24    11.5  11.5
Sim 5      32    18    32    15    24    11.5  11.5
Sim 6      33    19    33    15    24    11.5  11.5
Sim 7      34    20    34    15    24    11.5  11.5
Sim 8      35    21    35    15    24    11.5  11.5
Sim 9      36    22    36    15    24    11.5  11.5
Sim 10     37    23    37    15    24    11.5  11.5
Sim 11     38    24    38    15    24    11.5  11.5
Sim 12     39    25    39    15    24    11.5  11.5
Sim 13     40    26    40    15    24    11.5  11.5

Average offered loads (in Mbps) for the aggregate flows.
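Note that the loads in Table 2 follow a simple arithmetic progression: the EF 0, EF 1 and AF 0 loads grow by 1 Mbps per simulation, while the remaining aggregates are constant. A small generator (a sketch; the flow names follow the table header) reproduces the matrix:

```python
def simple_network_loads(sim):
    """Offered loads (Mbps) of the Simple network aggregates for Sim 1..13."""
    return {
        "EF 0": 27 + sim,  # 28 ... 40 Mbps
        "EF 1": 13 + sim,  # 14 ... 26 Mbps
        "AF 0": 27 + sim,  # 28 ... 40 Mbps
        "AF 1": 15,        # constant
        "BE 0": 24,        # constant
        "BE 1": 11.5,      # constant
        "BE 2": 11.5,      # constant
    }

assert simple_network_loads(8)["EF 0"] == 35  # matches the Sim 8 row
```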

Table 3
Intermediate network environment

          EF 0  EF 1  EF 2  EF 3  EF 4  EF 5  AF 0  AF 1  AF 2  AF 3  AF 4  BE 0  BE 1  BE 2  BE 3  BE 4  BE 5
Tx Node     0     6     7     1     5     0     7     0     6     0     5     6     6     6     7     7     7
Rx Node     3     4     4     3     3     4     3     3     3     4     4     2     4     0     2     4     0
Sim 0       7    10     8     7     5     8     4     4     5     8    10     8    14    20    10    14    20
Sim 1       8    11     9     8     6     9     5     5     6     9    11     8    14    20    10    14    20
Sim 2       9    12    10     9     7    10     6     6     7    10    12     8    14    20    10    14    20
Sim 3      10    13    11    10     8    11     7     7     8    11    13     8    14    20    10    14    20
Sim 4      11    14    12    11     9    12     8     8     9    12    14     8    14    20    10    14    20
Sim 5      12    15    13    12    10    13     9     9    10    13    15     8    14    20    10    14    20
Sim 6      13    16    14    13    11    14    10    10    11    14    16     8    14    20    10    14    20
Sim 7      14    17    15    14    12    15    11    11    12    15    17     8    14    20    10    14    20
Sim 8      15    18    16    15    13    16    12    12    13    16    18     8    14    20    10    14    20
Sim 9      16    19    17    16    14    17    13    13    14    17    19     8    14    20    10    14    20
Sim 10     17    20    18    17    15    18    14    14    15    18    20     8    14    20    10    14    20
Sim 11     18    21    19    18    16    19    15    15    16    19    21     8    14    20    10    14    20
Sim 12     19    22    20    19    17    20    16    16    17    20    22     8    14    20    10    14    20
Sim 13     20    23    21    20    18    21    17    17    18    21    23     8    14    20    10    14    20
Sim 14     21    24    22    21    19    22    18    18    19    22    24     8    14    20    10    14    20

Offered load (in Mbps) of the aggregate flows.

Table 5
Complex network environment

Flow ID   Tx Node   Rx Node   Offered load (Mbps)
EF 0         5         4         65
EF 1         0         3         65
EF 2         6        11         65
EF 3         9        10         65
EF 4         7         3         65
BE 0         0         4         65
BE 1         9         3         65
BE 2         0         4         65
BE 3         9         3         65
BE 4         0         4         65
BE 5         9         3         65
BE 6         0         4         65
BE 7         9         3         65
AF 0         5         3         65
AF 1         0         3         65
AF 2         8        11         65
AF 3         7         3         65
AF 4         6        12         65
AF 5         5         3         65
AF 6         0         3         65
AF 7         8        11         65
AF 8         7         3         65
AF 9         7         3         65
AF 10        5        11         65

Traffic matrix and offered load values (in Mbps).

Table 4
Simple network environment

          Link 1     Link 2     Link 6     Link 7
          EF   AF    EF   AF    EF   AF    EF   AF
Sim 1     42   36    42   36    21   36    21   26
Sim 2     44   38    44   38    21   38    22   28
Sim 3     44   39    44   39    22   39    23   26
Sim 4     46   39    46   39    23   39    24   28
Sim 5     46   42    46   42    24   41    26   26
Sim 6     48   42    46   42    25   42    27   26
Sim 7     48   44    46   41    26   43    28   26
Sim 8     50   45    46   41    26   45    29   26
Sim 9     50   46    50   44    27   45    31   26
Sim 10    52   47    52   47    28   46    32   26
Sim 11    54   46    54   46    28   47    32   28
Sim 12    52   47    52   47    28   48    34   26
Sim 13    52   47    52   47    28   49    35   26

Resource thresholds (in Mbps) allocated for EF and AF traffic.



References

[1] Network Wizards, Internet Domain Survey. <http://www.isc.org/ds>.

[2] Y. Maeda, Standards for Virtual Private Networks (Guest Editorial), IEEE Commun. Mag. 42 (6) (2004) 114–115.

[3] M. Carugi, J. De Clercq, Virtual Private Network services: scenario, requirements and architectural construct from a standardization perspective, IEEE Commun. Mag. 42 (6) (2004) 116–122.

[4] C. Metz, The latest in Virtual Private Networks: part I, IEEE Internet Comp. 7 (1) (2003) 87–91.

[5] C. Metz, The latest in VPNs: part II, IEEE Internet Comp. 8 (3) (2004) 60–65.

[6] W. Cui, M.A. Bassiouni, Virtual Private Network bandwidth management with traffic prediction, Comp. Networks 42 (6) (2003) 765–778.

[7] P. Knight, C. Lewis, Layer 2 and 3 Virtual Private Networks: taxonomy, technology, and standardization efforts, IEEE Commun. Mag. 42 (6) (2004) 124–131.

[8] T. Takeda, I. Inoue, R. Aubin, M. Carugi, Layer 1 Virtual Private Networks: service concepts, architecture requirements, and related advances in standardization, IEEE Commun. Mag. 42 (6) (2004) 132–138.

[9] S.S. Gorshe, T. Wilson, Transparent Generic Framing Procedure (GFP): a protocol for efficient transport of block-coded data through SONET/SDH networks, IEEE Commun. Mag. 40 (5) (2002) 88–95.

[10] J. Zeng, N. Ansari, Toward IP Virtual Private Network quality of service: a service provider perspective, IEEE Commun. Mag. 41 (4) (2003) 113–119.

[11] I. Khalil, T. Braun, Edge provisioning and fairness in VPN-DiffServ networks, J. Network Syst. Manage. 10 (1) (2002) 11–37.

[12] R. Bolla, R. Bruschi, F. Davoli, Capacity planning in IP Virtual Private Networks under mixed traffic, Comp. Networks 50 (8) (2006) 1069–1085.

[13] R. Bolla, R. Bruschi, F. Davoli, M. Repetto, Analytical/simulation optimization system for access control and bandwidth allocation in IP networks with QoS, in: Proceedings of the 2005 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS'05), Cherry Hill, NJ, July 2005, pp. 339–348.

[14] R. Bolla, F. Davoli, M. Repetto, A control architecture for quality of service and resource allocation in multiservice IP networks, in: Proceedings of the International Workshop on Architectures for Quality of Service in the Internet (Art-QoS 2003), Warsaw, Poland, March 2003, Lecture Notes in Computer Science, vol. 2698, Springer-Verlag, Berlin, 2003, pp. 49–63.

[15] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, W. Weiss, An Architecture for Differentiated Services, RFC 2475, URL <http://www.ietf.org/rfc/rfc2475.txt>, December 1998.

[16] V. Jacobson, K. Nichols, K. Poduri, An Expedited Forwarding PHB, RFC 2598, URL <http://www.ietf.org/rfc/rfc2598.txt>, June 1999.

[17] J. Heinanen, F. Baker, W. Weiss, J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, URL <http://www.ietf.org/rfc/rfc2597.txt>, June 1999.

[18] J. Heinanen, R. Guerin, A Two Rate Three Color Marker, RFC 2698, URL <http://www.ietf.org/rfc/rfc2698.txt>, September 1999.

[19] F. Kelly, Notes on effective bandwidths, in: Stochastic Networks: Theory and Applications, Oxford University Press, 1996.

[20] H. Li, C. Huang, M. Devetsikiotis, G. Damm, Extending the concept of effective bandwidths to DiffServ networks, in: Proceedings of the Electrical and Computer Engineering Conference, Ontario, Canada, May 2004, vol. 3, pp. 1669–1672.

[21] L. Aspirot, P. Belzarena, P. Bermolen, A. Ferragut, G. Perera, M. Simon, Quality of service parameters and link operating point estimation based on effective bandwidths, Perform. Evaluation 59 (2005) 103–120.

[22] K.W. Ross, Multiservice Loss Models for Broadband Telecommunication Networks, Springer, London, 1995.

[23] C.C. Beard, V.S. Frost, Prioritized resource allocation for stressed networks, IEEE/ACM Trans. Network. 9 (5) (2001) 618–633.

[24] S.C. Borst, D. Mitra, Virtual partitioning for robust resource sharing: computational techniques for heterogeneous traffic, IEEE J. Select. Areas Commun. 16 (5) (1998) 668–678.

[25] S. Chen, K. Nahrstedt, An overview of quality-of-service routing for the next generation high-speed networks: problems and solutions, IEEE Network 12 (6) (1998) 64–79.

[26] The Network Simulator – ns-2. Documentation and source code from the home page: <http://www.isi.edu/nsnam/ns/>.

[27] D. Awduche, J. Malcolm, J. Agogbua, M. O'Dell, J. McManus, Requirements for Traffic Engineering Over MPLS, RFC 2702, URL <http://www.ietf.org/rfc/rfc2702.txt>, September 1999.

[28] D. Bertsekas, R. Gallager, Data Networks, second ed., Prentice-Hall, 1992.

[29] R. Bolla, R. Bruschi, M. Repetto, A fluid simulator for aggregate TCP connections, in: Proceedings of the 2004 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS'2004), San Diego, CA, July 2004, pp. 5–10.

[30] S. Ben Fredj, T. Bonald, A. Proutiere, G. Regnie, J.W. Roberts, Statistical bandwidth sharing: a study of congestion control at flow level, in: Proceedings of ACM SIGCOMM, San Diego, CA, August 2001, pp. 111–122.

[31] J.W. Roberts, A survey on statistical bandwidth sharing, Comp. Networks 45 (3) (2004) 319–332.

[32] J.W. Roberts, Internet traffic, QoS and pricing, Proc. IEEE 92 (9) (2004) 1389–1399.

Raffaele Bolla received the ''laurea'' degree in Electronic Engineering in 1989 and the Ph.D. degree in Telecommunications in 1994 from the University of Genoa. From 1994 to 1996 he held a post-doc grant at the Department of Communications, Computer and Systems Science (DIST), University of Genoa. Since November 1996 he has been ''Ricercatore'' with DIST at the University of Genoa, and since September 2004 he has been ''Professore associato'' at the same Department. Since 1999 he has taught Telematics 1 and 2, Telecommunication Networks and Telemedicine, and Protocols and Architectures for Wireless, for a total of six classes in the Master Degrees of Telecommunications Engineering, Computer Engineering, Biomedical Engineering and Environmental Engineering. He has been, and is, responsible for DIST in many important research projects and contracts with both public institutions (the Italian Ministry, the European Union, regional entities) and private companies (Selex Communication, Alcatel-Lucent, Ericsson, Intel, ...) in the telecommunication field. He acts as a reviewer for many international journals (IEEE/ACM Transactions on Networking, IEEE Transactions on Vehicular Technology, Computer Networks, ...) and conferences, and he participates in the technical committees of international conferences (SPECTS, QoS-IP, Globecom, ...). He has co-authored over 130 scientific publications in international journals and international conference proceedings. His current research interests are in: (i) resource allocation, routing and control of IP QoS (Quality of Service) networks; (ii) modeling and design of TCP/IP networks; (iii) P2P overlay network traffic measurement and modeling; (iv) advanced platforms for software routers; (v) performance and traffic monitoring in IP networks; and (vi) advanced mobility management in heterogeneous wireless networks.

Roberto Bruschi received the ''laurea'' degree in Telecommunication Engineering in 2002, and the Ph.D. degree in 2006, from the University of Genoa. Currently, he works with the Department of Communications, Computer and Systems Science of the University of Genoa. He is also a member of CNIT, the Italian inter-university consortium for telecommunications. He has been an active member of several Italian research project groups, such as TANGO, FAMOUS, BORA-BORA and EURO. He has co-authored over 15 scientific publications in international journals and international conference proceedings.

His main interests include Open Routers, network processors, TCP modelling, VPN design, P2P and QoS network control.


Franco Davoli received the ''laurea'' degree in Electronic Engineering in 1975 from the University of Genoa, Italy. Since 1990 he has been Full Professor of Telecommunication Networks at the University of Genoa, where he is with the Department of Communications, Computer and Systems Science (DIST). From 1989 to 1991 and from 1993 to 1996, he was also with the University of Parma, Italy. His current research interests are in bandwidth allocation, admission control and routing in multiservice networks, wireless mobile and satellite networks, and multimedia communications and services.

He has co-authored over 250 scientific publications in international journals, book chapters and conference proceedings. He is Associate Editor of the International Journal of Communication Systems (Wiley), a member of the Editorial Board of the international journal Studies in Informatics and Control, and Area Editor of Simulation – Transactions of the SCS.

In 2004, he was the recipient of an Erskine Fellowship from the University of Canterbury, Christchurch, New Zealand, as Visiting Professor. He has been a Principal Investigator in a large number of projects and has served in several positions in the Italian National Consortium for Telecommunications (CNIT). He was the Head of the CNIT National Laboratory for Multimedia Communications in Naples for the term 2003–2004, and he is currently Vice-President of the CNIT Management Board.

Matteo Repetto was born in Genova, Italy, in 1976. He received the ''laurea'' degree in Telecommunication Engineering in 2000 and the Ph.D. degree in Electronics and Informatics in 2004 from the University of Genoa. He is also a member of CNIT, the Italian inter-university consortium for telecommunications.

He has co-authored over 10 scientific publications in international journals and conference proceedings, and he cooperates in many national and European research projects in the networking area.

His current research interests are in the modeling of wireless networks, multiple access techniques for ad hoc networks, estimation of freeway vehicular traffic, and resource allocation for wireless and wired IP networks.