
Algorithms for Higher-Order Derivatives of Erlang C Function

JORGE SA ESTEVES
University of Aveiro, Department of Mathematics
Campus de Santiago, 3810-193 Aveiro
PORTUGAL
saesteves@ua.pt

Abstract: In this paper we analyze the partial derivatives of any order of the continued Erlang C function in the number of servers. For the numerical computation of those derivatives, several algorithms are proposed and compared in terms of stability, efficiency and precision. This study concludes that a recursive matrix relation presented in previous work [4, 5] may be used to establish a simple and reliable algorithm with the best performance considering the trade-off between the different criteria. Some computational results are presented and discussed. A conjecture about the convexity of the first derivative is supported by these computational results.

Key-Words: Performance Evaluation, Queueing Systems, Erlang C Formula, Stable Recursions.

1 Introduction

The derivatives of the Erlang C function are useful in the optimum design of stochastic service systems (queues or networks of queues). Actually, one of the most common models used to support performance evaluation of those systems is the M/M/x/∞ queue (Erlang C model). Moreover, the Erlang C formula plays an important role in approximations for more general systems. Examples of such systems are telephone switching networks, computer systems (jobs submitted to parallel processors, dynamic shared memory) and satellite communication systems (allocation of transmission bandwidth). Recently, the Erlang C model has been subjected to intensive study in the context of the dimensioning and workforce management of call centers [6].

Non-linear programming problems encountered in the optimization of those systems can be solved by using the performance functions and their derivatives, namely in Newton-Raphson and gradient-type methods. Moreover, modelers in areas such as design optimization and economics are often interested in performing post-optimal sensitivity analysis. This analysis consists in determining the sensitivity of the optimum to small perturbations in the parameter or constraint values. Additionally, an efficient and accurate method for calculating higher-order derivatives of the Erlang C function may be used for local approximations of the function and its derivatives by means of a convenient Taylor or Hermite polynomial. Those approximations are especially important for iterative algorithms in small neighborhoods of the solution.

A method for calculating the derivatives of order n of the Erlang B function in the number of servers was proposed by the author in [4]. In the sequel, a second paper [5] showed that, for high values of the arguments, it is possible to obtain a significant improvement in the method's efficiency, without jeopardizing the required precision, by defining a reduced recursion starting from a point closer to the desired value of the number of servers.

Following the above-mentioned works on the Erlang B function derivatives, a method for directly calculating higher-order derivatives of the Erlang C function is developed in the present work.

2 The Erlang B and C Formulas

The Erlang B and C formulas are true probability classics. Indeed, much of the theory was developed by A. K. Erlang and his colleagues prior to 1925 [2]. The subject has been extensively studied and applied by telecommunications engineers and mathematicians ever since. A nice introductory account, including some of the telecommunications subtleties, is provided by [3]. The Erlang B (or loss) formula gives the (steady-state) blocking probability in the Erlang loss model, i.e., in the M/M/x/0 model (see for example [3, pp. 5 and 79]):

$$B(a, x) \doteq \frac{a^x / x!}{\sum_{j=0}^{x} a^j / j!}, \qquad x \in \mathbb{N}_0, \; a \in \mathbb{R}^+. \quad (1)$$

Proceedings of the 13th WSEAS International Conference on COMMUNICATIONS, ISSN: 1790-5117, ISBN: 978-960-474-098-7

The numerical studies regarding this formula are usually based on its analytical continuation, ascribed to R. Fortet [7]:

$$[B(a, x)]^{-1} = I(a, x) = a \int_0^{+\infty} e^{-az} (1 + z)^x \, dz, \quad (2)$$

which is valid for offered traffic a ∈ R+ and x ∈ R+0 servers. The function I(a, x) tends to be easier to analyze than B(a, x) and may be named the reciprocal, or the inverse probability of blocking.

A classical result is the following recursion, obtained by partial integration of (2):

$$I(a, x + 1) = \frac{x + 1}{a}\, I(a, x) + 1, \qquad x \in \mathbb{R}^+_0. \quad (3)$$

Since B(a, 0) = I(a, 0) = 1 for all a ∈ R+, I(a, x) may be calculated by recursion (3) for any positive integer x. Actually, in [4] it is shown that (3) defines a very stable numerical recursion.
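As a concrete illustration of recursion (3), a minimal Python sketch (hypothetical code, not from the paper) evaluates B(a, x) for integer x by propagating I(a, x) = 1/B(a, x) from the starting value I(a, 0) = 1:

```python
def erlang_b(a: float, x: int) -> float:
    """Blocking probability B(a, x) via the stable recursion (3).

    Starts from I(a, 0) = 1 and applies I(a, j) = (j / a) * I(a, j - 1) + 1,
    then returns B = 1 / I.
    """
    I = 1.0  # I(a, 0) = B(a, 0) = 1
    for j in range(1, x + 1):
        I = (j / a) * I + 1.0  # recursion (3)
    return 1.0 / I
```

Working with I rather than B avoids the overflow-prone factorials and powers of the direct formula (1).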

The Erlang C (or delay) formula gives the (steady-state) probability of delay (i.e., that an arrival must wait before beginning service) in the Erlang delay model, i.e., in the M/M/x/∞ model (e.g., see pp. 7 and 91 of [3]):

$$C(a, x) \doteq \frac{\dfrac{a^x}{x!}\,\dfrac{x}{x - a}}{\displaystyle\sum_{j=0}^{x-1} \frac{a^j}{j!} + \frac{a^x}{x!}\,\frac{x}{x - a}}, \qquad \text{for } 0 < a < x. \quad (4)$$

The Erlang C formula is intimately related to the Erlang B formula, provided that 0 < a < x. Indeed, it is easy to show that:

$$C(a, x) = \frac{x\, B(a, x)}{x - a + a\, B(a, x)}, \qquad 0 < a < x. \quad (5)$$
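Relation (5) suggests a numerically safe route from B(a, x) to C(a, x). A minimal Python sketch (hypothetical function names; `erlang_b` implements recursion (3)):

```python
def erlang_b(a: float, x: int) -> float:
    """B(a, x) via the stable recursion (3): I(a, j) = (j/a) I(a, j-1) + 1."""
    I = 1.0  # I(a, 0) = 1
    for j in range(1, x + 1):
        I = (j / a) * I + 1.0
    return 1.0 / I


def erlang_c(a: float, x: int) -> float:
    """Delay probability C(a, x) from B(a, x) via relation (5).

    Requires 0 < a < x (stability condition of the M/M/x queue).
    """
    if not 0.0 < a < x:
        raise ValueError("Erlang C requires 0 < a < x")
    b = erlang_b(a, x)
    return x * b / (x - a + a * b)
```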

The reciprocal of C(a, x), or the inverse probability of waiting, may be defined as J(a, x) ≐ 1/C(a, x). Using (5) and (3), it may be shown that

$$J(a, x) = I(a, x) - I(a, x - 1). \quad (6)$$

An integral representation for the continued Erlang C function may be obtained by substituting relation (2) into (6):

$$[C(a, x)]^{-1} = J(a, x) = a \int_0^{+\infty} e^{-az} (1 + z)^{x-1} z \, dz. \quad (7)$$

In Section 4, methods for computing the derivatives C^(k)_x(a, x), k = 0(1)n, will be given. These methods are generalizations of those for the derivatives B^(k)_x(a, x), k = 0(1)n, proposed in [4] and [5].

3 Erlang B Function Derivatives

The main features of the method proposed in [4] are the starting point of the present work. In the sequel, we summarize the most important results from [4] and the corresponding notation used in this paper.

Successive differentiation of (2) leads to:

$$I_k(a, x) = \frac{\partial^k I}{\partial x^k} = a \int_0^{+\infty} e^{-az} (1 + z)^x \ln^k(1 + z) \, dz, \quad (8)$$

and successive differentiation of (7) yields

$$J_k(a, x) = \frac{\partial^k J}{\partial x^k} = a \int_0^{+\infty} e^{-az} (1 + z)^{x-1} z \ln^k(1 + z) \, dz. \quad (9)$$

Successive differentiation of (3) with respect to x leads to the general recursive relation (valid for k = 1, 2, ...):

$$I_k(a, x + 1) = \frac{x + 1}{a}\, I_k(a, x) + \frac{k}{a}\, I_{k-1}(a, x). \quad (10)$$

Note that, if a convenient algorithm for calculating the values I_k(a, x), k = 0(1)n, is available, then it is possible to calculate all the derivatives B^(k)_x(a, x), k = 0(1)n, using Algorithm 2 (see Appendix A). The general recursive matrix relation for calculating derivatives of B(a, x) in the number of servers x proposed in [4] is defined from (3) and (10). Also in [4] it is shown that this matrix scheme defines a very stable numerical recursion. For high values of the arguments (say x > a > 100), a significant improvement in efficiency may be obtained by considering a reduced recursion starting from a point closer to the desired value of x (see [5]). This can be attained without jeopardizing the required precision. For short, this method is referred to as the RR method (Reduced Recursion method), in opposition to the method proposed in [4], which is referred to as the CR method (Complete Recursion method). The RR method has been established for derivatives up to order two [5]. However, this method may easily be generalized for calculating higher-order derivatives. Indeed, computational experiments show that the initial point used for calculating derivatives up to order two may also be used for calculating derivatives up to order five with excellent precision (all 15 figures are exact, except for some arguments greater than 10^5).
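To illustrate the joint propagation of (3) and (10) in x for all orders k = 0, ..., n at once, here is a hypothetical Python sketch (not the algorithm of [4]). The starting values I_k(a, 0) for k ≥ 1 have no elementary closed form and are obtained here by a crude composite Simpson quadrature of the integral (8); only I_0(a, 0) = 1 is exact.

```python
import math


def simpson(f, lo, hi, n=8000):
    """Composite Simpson's rule (n even); crude quadrature for this sketch."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(lo + i * h)
    return s * h / 3.0


def I_integral(a, x, k=0, zmax=80.0):
    """I_k(a, x) by direct quadrature of the integral representation (8)."""
    return a * simpson(
        lambda z: math.exp(-a * z) * (1.0 + z) ** x * math.log(1.0 + z) ** k,
        0.0, zmax)


def I_derivatives(a, x, n):
    """I_k(a, x) for k = 0..n at integer x, via recursions (3) and (10).

    Starting values I_k(a, 0), k >= 1, come from quadrature of (8);
    I_0(a, 0) = 1 is set exactly.
    """
    I = [I_integral(a, 0.0, k) for k in range(n + 1)]
    I[0] = 1.0
    for j in range(1, x + 1):
        prev = I[:]
        I[0] = (j / a) * prev[0] + 1.0            # recursion (3)
        for k in range(1, n + 1):                  # recursion (10)
            I[k] = (j / a) * prev[k] + (k / a) * prev[k - 1]
    return I
```

The recursion itself is exact in exact arithmetic; in this sketch the overall accuracy is limited only by the quadrature of the starting values.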

4 Erlang C Function Derivatives

Successive differentiation of (6) with respect to x leads to

$$J_k(a, x) = I_k(a, x) - I_k(a, x - 1), \qquad k = 0, 1, 2, \ldots \quad (11)$$


Since we have a convenient algorithm to calculate the values I_k(a, x), k = 0(1)n, it is plausible to assume that (11) might be used directly in order to compute the Erlang C function derivatives of any order in the variable x. Unfortunately, for x >> a we have I_k(a, x) ≈ I_k(a, x - 1), and (11) is numerically unstable, since the value calculated for J_k(a, x) is affected by subtractive cancellation.

In order to avoid this difficulty, our initial goal is to reformulate the problem of calculating J_k(a, x), k = 0(1)n, in terms of the quantities I_k(a, x), k = 0(1)n. The following proposition gives the first method for accomplishing that reformulation.

Proposition 1 For k = 0, 1, 2, ... we have:

$$J_k(a, x) = \sum_{j=1}^{+\infty} \frac{I_{k+j}(a, x - 1)}{j!} = \sum_{j=1}^{+\infty} \frac{(-1)^{j+1} I_{k+j}(a, x)}{j!}.$$

Proof: These two series representations of J_k(a, x) are obtained by taking the Taylor expansions of I_k(a, x) at the point x - 1 and of I_k(a, x - 1) at the point x, respectively.

Proposition 1 may be used to compute J_k(a, x) with arbitrary precision, since the truncation error of the partial summation of the series may easily be bounded. Indeed, since J_k(a, x) is the sum of an alternating series, it follows that:

$$J_k(a, x) \approx \sum_{j=1}^{r} \frac{(-1)^{j+1} I_{k+j}(a, x)}{j!}, \quad (12)$$

with absolute error less than I_{k+r+1}(a, x)/(r + 1)!. However, summation (12) converges slowly (see Lemma 4 of Appendix A of [4]). Furthermore, the evaluation of I_{k+r}(a, x) for large r by recursion has an important computational cost. The conclusion is that the algorithm inspired by relation (12) is numerically stable but rather inefficient.
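The truncated series (12), together with its alternating-series error bound, can be sketched as follows (hypothetical Python; for simplicity the values I_{k+j}(a, x) are obtained here by direct quadrature of (8) rather than by the recursive scheme):

```python
import math


def simpson(f, lo, hi, n=8000):
    """Composite Simpson's rule (n even); crude quadrature for this sketch."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(lo + i * h)
    return s * h / 3.0


def I_k(a, x, k):
    """I_k(a, x) by quadrature of (8) -- a stand-in for the recursive scheme."""
    return a * simpson(
        lambda z: math.exp(-a * z) * (1.0 + z) ** x * math.log(1.0 + z) ** k,
        0.0, 80.0)


def J_series(a, x, k, r):
    """J_k(a, x) by the truncated alternating series (12) with r terms.

    Returns (value, bound) with |error| < I_{k+r+1}(a, x) / (r + 1)!.
    """
    total, fact = 0.0, 1.0
    for j in range(1, r + 1):
        fact *= j  # fact == j!
        total += (-1.0) ** (j + 1) * I_k(a, x, k + j) / fact
    bound = I_k(a, x, k + r + 1) / (fact * (r + 1))  # I_{k+r+1} / (r+1)!
    return total, bound
```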

Proposition 2 For a ∈ R+, x ∈ [1, +∞[ and k ∈ N:

$$J_0(a, x) = \frac{x - a}{a}\, I_0(a, x - 1) + 1$$

$$J_k(a, x) = \frac{k}{a}\, I_{k-1}(a, x - 1) + \frac{x - a}{a}\, I_k(a, x - 1).$$

Proof: Since I_0(a, x) = (x/a) I_0(a, x - 1) + 1, from (6) th
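A sketch of Proposition 2 for the order k = 0 (hypothetical Python): J_0(a, x) = 1/C(a, x) is obtained from I_0(a, x - 1) with no subtraction of nearly equal quantities; higher orders would add the term (k/a) I_{k-1}(a, x - 1) in the same way.

```python
def I_recursion(a: float, x: int) -> float:
    """I_0(a, x) = 1/B(a, x) via the stable recursion (3), from I_0(a, 0) = 1."""
    I = 1.0
    for j in range(1, x + 1):
        I = (j / a) * I + 1.0
    return I


def J0_stable(a: float, x: int) -> float:
    """J_0(a, x) = 1/C(a, x) by Proposition 2: no subtractive cancellation."""
    return (x - a) / a * I_recursion(a, x - 1) + 1.0
```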