
Algorithms for Higher-Order Derivatives of Erlang C Function

JORGE SA ESTEVES
University of Aveiro, Department of Mathematics
Campus de Santiago, 3810-193 Aveiro

[email protected]

Abstract: In this paper we analyze the partial derivatives of any order of the continued Erlang C function in the number of servers. For the numerical computation of those derivatives, several algorithms are proposed and compared in terms of stability, efficiency and precision. This study concludes that a recursive matrix relation presented in previous works [4, 5] may be used to establish a simple and reliable algorithm having the best performance considering the trade-off among the different criteria. Some computational results are presented and discussed. A conjecture about the convexity of the first derivative is supported by these computational results.

Key-Words: Performance Evaluation, Queueing Systems, Erlang C Formula, Stable Recursions.

1 Introduction

The derivatives of the Erlang C function are useful in the optimum design of stochastic service systems (queues or networks of queues). Indeed, one of the most common models supporting performance evaluation of those systems is the M/M/x/∞ queue (Erlang C model). Moreover, the Erlang C formula plays an important role in approximations for more general systems. Examples of such systems are telephone switching networks, computer systems (jobs submitted to parallel processors, dynamic shared memory) and satellite communication systems (allocation of transmission bandwidth). Recently, the Erlang C model has been subjected to intensive study in the context of the dimensioning and workforce management of call centers [6].

Non-linear programming problems encountered in the optimization of those systems can be solved by using the performance functions and their derivatives, namely for Newton-Raphson and gradient type methods. Moreover, modelers in areas such as design optimization and economics are often interested in performing post-optimal sensitivity analysis. This analysis consists in determining the sensitivity of the optimum to small perturbations in the parameter or constraint values. Additionally, an efficient and accurate method for calculating higher-order derivatives of the Erlang C function may be used for local approximations of the function and its derivatives by a convenient Taylor or Hermite polynomial. Those approximations are especially important for iterative algorithms in small neighborhoods of the solution.

A method for calculating the derivatives of order n of the Erlang B function in the number of servers was proposed by the author in [4]. In the sequel, a second paper [5] showed that, for high values of the arguments, a significant improvement in the efficiency of the method can be obtained, without jeopardizing the required precision, by defining a reduced recursion starting from a point closer to the desired value of the number of servers.

Following the above-mentioned works on the Erlang B function derivatives, a method for calculating directly the higher-order derivatives of the Erlang C function is developed in the present work.

2 The Erlang B and C Formulas

The Erlang B and C formulas are true probability classics. Indeed, much of the theory was developed by A. K. Erlang and his colleagues prior to 1925 [2]. The subject has been extensively studied and applied by telecommunications engineers and mathematicians ever since. A nice introductory account, including some of the telecommunications subtleties, is provided by [3]. The Erlang B (or loss) formula gives the (steady-state) blocking probability in the Erlang loss model, i.e., in the M/M/x/0 model (see for example [3, pp. 5 and 79]):

$$B(a,x) \doteq \frac{a^x/x!}{\sum_{j=0}^{x} a^j/j!}, \qquad x \in \mathbb{N}_0,\ a \in \mathbb{R}^+. \eqno(1)$$

The numerical studies regarding this formula are usually based on its analytical continuation, ascribed to

Proceedings of the 13th WSEAS International Conference on COMMUNICATIONS

ISSN: 1790-5117 72 ISBN: 978-960-474-098-7


R. Fortet [7]:

$$[B(a,x)]^{-1} = I(a,x) = a \int_{0}^{+\infty} e^{-az} (1+z)^x \, dz, \eqno(2)$$

which is valid for offered traffic $a \in \mathbb{R}^+$ and $x \in \mathbb{R}_0^+$ servers. The function $I(a,x)$ tends to be easier to analyze than $B(a,x)$ and may be named the reciprocal, or the inverse probability of blocking.

A classical result is the following recursion, obtained by partial integration of (2):

$$I(a, x+1) = \frac{x+1}{a}\, I(a,x) + 1, \qquad x \in \mathbb{R}_0^+. \eqno(3)$$

Since $B(a,0) = I(a,0) = 1$ for all $a \in \mathbb{R}^+$, $I(a,x)$ may be calculated by recursion (3) for any positive integer $x$. Actually, in [4] it is shown that (3) defines a very stable numerical recursion.
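As a concrete check of this stability claim, recursion (3) can be compared with a direct evaluation of the defining sum in (1). The following minimal sketch (Python; the function names are illustrative, not taken from the paper's Turbo C implementation) assumes an integer number of servers:

```python
import math

def inv_erlang_b_direct(a, x):
    """I(a, x) = 1/B(a, x) evaluated directly from the sum in (1)."""
    s = sum(a**j / math.factorial(j) for j in range(x + 1))
    return s / (a**x / math.factorial(x))

def inv_erlang_b_recursive(a, x):
    """I(a, x) by the stable recursion (3), starting from I(a, 0) = 1."""
    I = 1.0
    for j in range(1, x + 1):
        I = j / a * I + 1.0   # I(a, j) = (j/a) I(a, j-1) + 1
    return I
```

For moderate arguments (e.g. a = 10, x = 12) the two evaluations agree to about machine precision, consistent with the stability result quoted from [4].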

The Erlang C (or delay) formula gives the (steady-state) probability of delay (that an arrival must wait before beginning service) in the Erlang delay model, i.e., in the M/M/x/∞ model (e.g., see pp. 7 and 91 of [3]):

$$C(a,x) \doteq \frac{\dfrac{a^x}{x!}\,\dfrac{x}{x-a}}{\displaystyle\sum_{j=0}^{x-1} \dfrac{a^j}{j!} + \dfrac{a^x}{x!}\,\dfrac{x}{x-a}}, \qquad \text{for } 0 < a < x. \eqno(4)$$

The Erlang C formula is intimately related to the Erlang B formula provided that $0 < a < x$. Indeed, it is easy to show that:

$$C(a,x) = \frac{x\,B(a,x)}{x - a + a\,B(a,x)}, \qquad 0 < a < x. \eqno(5)$$
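Relation (5) gives a direct computational route from B to C. A minimal sketch (Python; illustrative names, not the paper's code) combining the stable recursion (3) for 1/B with (5):

```python
def erlang_b(a, x):
    """Blocking probability B(a, x) via the stable recursion (3) for its reciprocal."""
    inv_b = 1.0
    for j in range(1, x + 1):
        inv_b = j / a * inv_b + 1.0
    return 1.0 / inv_b

def erlang_c(a, x):
    """Waiting probability C(a, x) from B(a, x) via relation (5); requires 0 < a < x."""
    b = erlang_b(a, x)
    return x * b / (x - a + a * b)
```

This reproduces the k = 0 entries of Tables 1 and 2: erlang_c(1, 2) = 1/3 and erlang_c(10, 12) ≈ 4.493882E−1.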

The reciprocal of $C(a,x)$, or the inverse probability of waiting, may be defined as $J(a,x) \doteq 1/C(a,x)$. Using (5) and (3) it may be shown that

$$J(a,x) = I(a,x) - I(a,x-1). \eqno(6)$$

An integral representation for the continued Erlang C function may be obtained by substituting relation (2) in (6):

$$[C(a,x)]^{-1} = J(a,x) = a \int_{0}^{+\infty} e^{-az} (1+z)^{x-1}\, z \, dz. \eqno(7)$$

In Section 4, methods for computing the derivatives $C_x^{(k)}(a,x)$, $k = 0(1)n$, will be given. These methods are generalizations of those for the derivatives $B_x^{(k)}(a,x)$, $k = 0(1)n$, proposed in [4] and [5].

3 Erlang B Function Derivatives

The main features of the method proposed in [4] are the starting point of the present work. In the sequel, we summarize the most important results from [4] and the corresponding notations used in this paper.

Successive differentiation of (2) leads to:

$$I_k(a,x) = \frac{\partial^k I}{\partial x^k} = a \int_{0}^{+\infty} e^{-az} (1+z)^x \ln^k(1+z) \, dz, \eqno(8)$$

and successive differentiation of (7) yields

$$J_k(a,x) = \frac{\partial^k J}{\partial x^k} = a \int_{0}^{+\infty} e^{-az} (1+z)^{x-1}\, z \ln^k(1+z) \, dz. \eqno(9)$$

Successive differentiation of (3) with respect to $x$ leads to the general recursive relation (valid for $k = 1, 2, \dots$):

$$I_k(a, x+1) = \frac{x+1}{a}\, I_k(a,x) + \frac{k}{a}\, I_{k-1}(a,x). \eqno(10)$$
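Recursion (10) can be verified numerically against the integral representation (8). In the sketch below (Python; a plain composite Simpson rule stands in for any quadrature, and the truncation point zmax = 80 is an assumption adequate only for moderate arguments), both sides of (10) are evaluated independently:

```python
import math

def I_k(a, x, k, zmax=80.0, n=40000):
    """I_k(a, x), defined by the integral (8), via composite Simpson quadrature."""
    h = zmax / n
    total = 0.0
    for i in range(n + 1):
        z = i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        total += w * math.exp(-a * z) * (1.0 + z)**x * math.log(1.0 + z)**k
    return a * h / 3.0 * total

# Both sides of recursion (10) for a sample argument set:
a, x, k = 2.0, 3.0, 2
lhs = I_k(a, x + 1.0, k)
rhs = (x + 1.0) / a * I_k(a, x, k) + k / a * I_k(a, x, k - 1)
```

The two values agree to well within the quadrature error, which is the behavior the matrix recursion of [4] exploits.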

Note that, if a convenient algorithm for calculating the values $I_k(a,x)$, $k = 0(1)n$, is obtained, then it is possible to calculate all the derivatives $B_x^{(k)}(a,x)$, $k = 0(1)n$, using Algorithm 2 (see Appendix A). The general recursive matrix relation for calculating derivatives of $B(a,x)$ in the number of servers $x$ proposed in [4] is defined from (3) and (10). Also in [4] it is shown that this matrix scheme defines a very stable numerical recursion. For high values of the arguments (say $x > a > 100$), a significant improvement in efficiency may be obtained by considering a reduced recursion starting from a point closer to the desired value of $x$ (see [5]). This can be attained without jeopardizing the required precision. For short, this method is referred to as the RR method (Reduced Recursion method), in opposition to the method proposed in [4], which is referred to as the CR method (Complete Recursion method). The RR method has been established for derivatives up to order two [5]. However, this method may be easily generalized for calculating higher-order derivatives. Indeed, computational experiments show that the initial point used for calculating derivatives up to order two may also be used for calculating derivatives up to order five with excellent precision (all 15 figures are exact, except for some arguments greater than $10^5$).

4 Erlang C Function Derivatives

Successive differentiation of (6) with respect to $x$ leads to

$$J_k(a,x) = I_k(a,x) - I_k(a,x-1), \qquad k = 0, 1, 2, \dots \eqno(11)$$


Since we have a convenient algorithm to calculate the values $I_k(a,x)$, $k = 0(1)n$, it is plausible to assume that (11) might be used directly in order to compute the Erlang C function derivatives of any order in the variable $x$. Unfortunately, for $x \gg a$ we have $I_k(a,x) \approx I_k(a,x-1)$, and (11) is numerically unstable, since the value calculated for $J_k(a,x)$ is affected by subtractive cancellation.

In order to avoid this difficulty, our initial goal is to reformulate the problem of calculating $J_k(a,x)$, $k = 0(1)n$, in terms of the quantities $I_k(a,x)$, $k = 0(1)n$. The following proposition gives the first method for accomplishing that reformulation.

Proposition 1 For $k = 0, 1, 2, \dots$ we have:

$$J_k(a,x) = \sum_{j=1}^{+\infty} \frac{I_{k+j}(a,x-1)}{j!} = \sum_{j=1}^{+\infty} (-1)^{j+1}\, \frac{I_{k+j}(a,x)}{j!}.$$

Proof: These two series representations of $J_k(a,x)$ are obtained by taking the Taylor expansions of $I_k(a,x)$ at the point $x-1$ and of $I_k(a,x-1)$ at the point $x$, respectively. □

Proposition 1 may be used to compute $J_k(a,x)$ with arbitrary precision, since the truncation error of the partial summation of the series may be easily bounded. Indeed, since $J_k(a,x)$ is the sum of an alternating series, it follows that:

$$J_k(a,x) \approx \sum_{j=1}^{r} (-1)^{j+1}\, \frac{I_{k+j}(a,x)}{j!}, \eqno(12)$$

with absolute error less than $I_{k+r+1}(a,x)/(r+1)!$. However, summation (12) converges slowly (see Lemma 4 of Appendix A of [4]). Furthermore, the evaluation of $I_{k+r}(a,x)$ for large $r$ by recursion has an important computational cost. The conclusion is that the algorithm inspired by relation (12) is numerically stable but rather inefficient.

Proposition 2 For $a \in \mathbb{R}^+$, $x \in [1, +\infty[$ and $k \in \mathbb{N}$:

$$J_0(a,x) = \frac{x-a}{a}\, I_0(a,x-1) + 1,$$

$$J_k(a,x) = \frac{k}{a}\, I_{k-1}(a,x-1) + \frac{x-a}{a}\, I_k(a,x-1).$$

Proof: Since $I_0(a,x) = x\, I_0(a,x-1)/a + 1$, the first equality follows from (6). Successive differentiation with respect to the variable $x$ leads to the second equality. □

Finally, Proposition 2 establishes a convenient computational method. Indeed, we only need to compute $I_k(a,x-1)$, $k = 0(1)m$, using the matrix recursion defined in [4], and then apply Proposition 2 in order to compute $J_k(a,x)$, $k = 0(1)m$. If $a$ and $x$ are greater than 100, the RR algorithm may be used to improve the computational efficiency. Algorithm 1 defines the computational procedure and calls Algorithm 2 (see Appendix A).

Algorithm 1 CR Method for $C_x^{(n)}(a,x)$, $n = 0(1)m$

Require: $a \in \mathbb{R}^+$, $x \in \mathbb{N}$ and $m \in \mathbb{N}_0$;
Ensure: $C_n = C_x^{(n)}(a,x)$ for $n = 0, 1, \dots, m$;

  Obtain initial values $I_n(a,0)$, $n = 0(1)m$;
  for $j = 1$ to $x-1$ do
    for $k = m$ downto $1$ do
      $I_k \leftarrow (k \cdot I_{k-1} + j \cdot I_k)/a$;
    end for
    $I_0 \leftarrow 1 + j \cdot I_0/a$;
  end for
  $J_0 \leftarrow 1 + (x-a) \cdot I_0/a$;
  for $k = 1$ to $m$ do
    $J_k \leftarrow (k \cdot I_{k-1} + (x-a) \cdot I_k)/a$;
  end for
  Compute $C_x^{(n)}$, $n = 0(1)m$, by Algorithm 2;
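The whole pipeline of Algorithm 1 together with Algorithm 2 can be sketched end to end. In the Python sketch below (illustrative, not the paper's Turbo C code), a plain Simpson quadrature of the integral (8) at x = 0 replaces the degree-15 Gauss-Laguerre rule of [4] for the initial values; this substitution is an assumption made purely for self-containment. The result is checked against the first rows of Table 1:

```python
import math

def initial_values(a, m, zmax=80.0, n=40000):
    """I_k(a, 0) = a * int_0^inf e^{-az} ln^k(1+z) dz, k = 0..m, by Simpson quadrature
    (stand-in for the Gauss-Laguerre rule used in [4])."""
    h = zmax / n
    vals = [0.0] * (m + 1)
    for i in range(n + 1):
        z = i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        e = w * math.exp(-a * z)
        for k in range(m + 1):
            vals[k] += e * math.log(1.0 + z)**k
    return [a * h / 3.0 * v for v in vals]

def erlang_c_derivatives(a, x, m):
    """C_x^{(n)}(a, x), n = 0..m, following Algorithm 1 and Algorithm 2."""
    I = initial_values(a, m)
    # Matrix recursion (3)/(10), run up to x - 1 as required by Proposition 2.
    for j in range(1, x):
        for k in range(m, 0, -1):
            I[k] = (k * I[k - 1] + j * I[k]) / a
        I[0] = 1.0 + j * I[0] / a
    # Proposition 2: J_k(a, x) from I_k(a, x - 1).
    J = [0.0] * (m + 1)
    J[0] = 1.0 + (x - a) * I[0] / a
    for k in range(1, m + 1):
        J[k] = (k * I[k - 1] + (x - a) * I[k]) / a
    # Algorithm 2: derivatives of the reciprocal C = 1/J (Proposition 3).
    G = [0.0] * (m + 1)
    beta = [0.0] * (m + 1)
    G[0] = 1.0 / J[0]
    for nn in range(m):
        Q = 1.0  # running binomial coefficient C(nn, k)
        for k in range(nn + 1):
            beta[nn] -= Q * G[k] * G[nn - k]
            G[nn + 1] += Q * beta[k] * J[nn + 1 - k]
            Q = (nn - k) * Q / (k + 1)
    return G
```

For a = 1, x = 2 this yields C ≈ 0.3333333 and C' ≈ −0.3995941, matching the k = 0 and k = 1 rows of Table 1.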

[Figure: log-scale plot of Log(C^(0)), Log(−C_x^(1)), Log(C_x^(2)), Log(−C_x^(3)) and Log(|C_x^(4)|) versus the number of servers x, for a = 15 erlang; vertical scale from 10^{−5} to 10^0.]

Figure 1: Graphics of $|C_x^{(k)}(15,x)|$ for $x \in [15, 30]$, computed by the CR method (Algorithm 1).

5 Computational Results

First note that $C(a,x)$ is a strictly decreasing function of $x$, therefore $C'_x(a,x) < 0$. Since $C(a,x)$ is a convex function in the variable $x$ (see [8]), $C''_x(a,x) \geq 0$. Numerical evidence, obtained by our computational experiments, leads to the conjecture $C'''_x(a,x) < 0$, implying that $C'_x(a,x)$ is also a convex function of $x$.


[Figure: log-scale plot of Log(C^(0)), Log(−C_x^(1)), Log(C_x^(2)), Log(−C_x^(3)) and Log(|C_x^(4)|) versus the number of servers x, for a = 100 erlang; vertical scale from 10^{−14} to 10^0.]

Figure 2: Graphics of $|C_x^{(k)}(100,x)|$ for $x \in [100, 180]$, computed by the CR method (Algorithm 1).

However, the derivatives of higher order do not seem to preserve sign. Figures 1 and 2 show samples of graphical representations which illustrate these properties.

Algorithm 1 defines the CR method for the computation of high-order derivatives of the Erlang C function. The initial values are computed using Gauss-Laguerre quadrature (degree 15), as explained in [4]. The implementation of the corresponding RR method is straightforward, since the only modification is the starting point of the recurrence calculations, easily computed as indicated in [5]. The implementation of all the algorithms was performed using the Turbo C compiler, version 2.1. The measuring of processing times (durations of evaluations) was performed on a PC (Intel Core 2 Duo E6400 processor running at 2.13 GHz with 2 GB RAM) under MS Windows XP, with time-critical process priority. All the calculations were made in double precision, and the tables were produced by automatically writing LaTeX code.

Extensive computational results have been obtained, enabling a comparison of the precision and efficiency of the RR method, the CR method and the Jagerman algorithm based on cardinal series quadrature (see Appendix B of [4]).

5.1 Small values of the arguments

For small values of the arguments, say $a < x < 100$, the RR method is not applicable. So, we only compare the CR method with the Jagerman method. Since, in this range, the CR method is much more efficient than the Jagerman method, it remains to compare the precision of the calculated values.

Tables 1 and 2 show some of those results.

Table 1: Derivatives $C_x^{(k)}$ for $a = 1$ and $x = 2$

  k    CR Alg.          Jagerman Alg.    Error
  0     3.333333E−1      3.333333E−1    0.000000%
  1    −3.995941E−1     −3.995942E−1    0.000010%
  2     4.116842E−1      4.116840E−1    0.000054%
  3    −3.323324E−1     −3.323319E−1    0.00014%
  4     1.553650E−1      1.553671E−1    0.0013%
  5     5.263250E−2      5.260938E−2    0.044%
  6    −1.445280E−1     −1.444383E−1    0.062%
  7    −4.022122E−3     −4.013443E−3    0.22%
  8     2.599829E−1      2.572807E−1    1.1%
  9    −1.249069E−1     −1.029022E−1    21%
 10    −6.055802E−1     −7.186006E−1    16%

Table 2: Derivatives $C_x^{(k)}$ for $a = 10$ and $x = 12$

  k    CR Alg.                  Jagerman Alg.
  0     4.493882242982E−1        4.493882242982E−1
  1    −1.957138108265E−1       −1.957138108265E−1
  2     6.853579227757E−2        6.853579227757E−2
  3    −1.630475310872E−2       −1.630475310872E−2
  4     9.195949758946E−4        9.195949758954E−4
  5     8.915293473794E−4        8.915293473783E−4
  6    −7.780633149666E−5       −7.780633149575E−5
  7    −1.801545889623E−4       −1.801545889630E−4
  8     1.197684653079E−6        1.197684653852E−6
  9     6.757700191225E−5        6.757700191184E−5
 10     1.163815001485E−5        1.163815001145E−5

The approximations calculated are presented together with the percentage of (relative) error, assuming that the Jagerman algorithm is exact. In fact, it may be noticed that the Jagerman algorithm is very precise in this range of the arguments. Indeed, comparisons with methods based on adaptive quadrature show results of the Jagerman method in that range with more than 10 exact figures. On the other hand, the precision of the CR method proposed is lower than that obtained with the Jagerman method. This fact is easily explained: Gauss-Laguerre quadrature calculates a crude approximation of the initial values and, for small values of $a$ and $x$, the relative error of the calculated values does not decay sufficiently, since very few steps of the recursion are used. Moreover, and as expected, when the order of the derivative increases the accuracy decreases. Table 2 shows that the degradation of the precision of the values calculated by the CR method practically disappears for $x > a > 10$; in that table, only the trailing digits differ between the values calculated by the two methods. Therefore, for $x > a > 10$ it may be concluded that the CR method is much more efficient and allows high-precision calculations.


[Figure: two log-log plots of CPU time (milliseconds) versus offered traffic a ∈ [10^2, 10^7], one for computing C_x^{(k)}(a,x), k = 0, 1, and one for k = 0(1)4; curves for the Reduced Recursion, the Jagerman Algorithm and the Complete Recursion; the values of x were pre-computed for the constant ratio C(a,x)/(x−a) = 0.5.]

Figure 3: Processing times for the three algorithms.

5.2 High values of the arguments

For $x > a > 100$ the RR method may be used in order to improve the efficiency of the CR method. The precision of the three methods is very good, so the critical criterion of comparison is the efficiency. In order to accomplish that comparison, Figure 3 presents two log-log graphics of processing times (in milliseconds) versus the magnitude of the argument $a$. The value of $x$ was pre-calculated such that $W = C(a,x)/(x-a) = 0.5$, meaning that we have a typical Erlang C system in which the mean waiting time is half the mean service time. In order to obtain better precision for the processing times, averages over 1000 runs (CR and Jagerman methods) and over 5000 runs (RR method) were computed. Some conclusions may be drawn from the computational results.

The CR algorithm is consistently more efficient than the Jagerman method, except for sufficiently high values of $a$, where the Jagerman approach is more efficient. The critical value of $a$ beyond which that happens seems to be in the range 1000-5000. However, the RR algorithm is much more efficient than the CR algorithm and the obtained approximations are exactly the same. For calculating derivatives up to order four, the most efficient method is the RR algorithm, provided that $a \leq 3 \times 10^4$.

Nevertheless, in the range $x > a > 10^5$, some methods based on asymptotic approximations may be considered, even for high-precision computations. Those methods are well known for $C(a,x)$ and for the first derivative $C'_x(a,x)$ (see [1, 7]).

6 Conclusion

Extensive computation has shown that the proposed CR and RR methods are very accurate in a wide range of values of $a$ and $x$ and compare favorably, in terms of efficiency, with the method based on cardinal series quadrature, except for very high values of $x$. For very high values of the arguments (say $x > a > 5 \times 10^4$), the Jagerman algorithm shows better efficiency. Nevertheless, it is important to notice that in this case the precision of the Jagerman algorithm degrades. However, in that range of arguments of very high magnitude there are few applications. In these cases, we suggest asymptotic expansion methods.

Considering those features, as well as the numerical robustness of the methods, we believe that the CR and RR methods proposed in this paper may be attractive and reliable for all ranges of applications (namely in teletraffic engineering and the call centers industry), even if the number of servers is very large.

Finally, it is conjectured that $C'_x(a,x)$ is a convex function of $x$. This conjecture is supported by extensive computation covering a wide range of the parameters, showing in all cases that $C'''_x(a,x) < 0$. The theoretical analysis of that conjecture will be carried out elsewhere and may lead to an important property of the Erlang C function.

A Reciprocal Function Derivatives

Here we develop an algorithm to compute the successive derivatives of a reciprocal function, given by the next proposition.

Proposition 3 Let $H : D_H \subset \mathbb{R} \longrightarrow \mathbb{R}$ be an $n+1$ times continuously differentiable function in a neighborhood of a point $y^* \in \mathrm{int}(D_H)$ with $H(y^*) \neq 0$. The successive derivatives of the function $G(y) = 1/H(y)$ calculated for $y = y^*$ are given for $n = 0, 1, 2, \dots$ by

$$G^{(n+1)} = -\sum_{k=0}^{n} \left[ \binom{n}{k} \sum_{i=0}^{k} \binom{k}{i}\, G^{(i)} G^{(k-i)} \right] H^{(n+1-k)}.$$

Proof: The first step in order to get an expression for $G^{(n)}(y^*)$ requires the definition of an appropriate notation capable of avoiding unwanted formal complications. Hence, $G_k$ and $H_k$ may replace $G^{(k)}(y^*)$ and $H^{(k)}(y^*)$ respectively:

$$G_k = G^{(k)}(y^*), \qquad k = 0, 1, 2, \dots$$

By differentiation of $G$ it is immediate that $G_1 = -G^2 H_1$, or, more simply,

$$G_1 = \beta_0 H_1, \eqno(13)$$

by introducing the notation $\beta = \beta_0 = -G^2$, and:

$$\beta_k = \frac{\partial^k \beta}{\partial y^k}, \qquad k = 1, 2, \dots$$

To obtain a general expression for $G_n$, equation (13) will be successively differentiated, which requires the calculation of the $\beta_k$. These may be expressed as a polynomial in $G_0, \dots, G_k$ by using the Leibniz formula. Hence, for $k = 0, 1, 2, \dots$:

$$\beta_k = -[\,G \cdot G\,]^{(k)} = -\sum_{i=0}^{k} \binom{k}{i}\, G_i\, G_{k-i}. \eqno(14)$$

Then, from (13), $G_{n+1} = [\beta_0 H_1]^{(n)}$, or

$$G_{n+1} = \sum_{k=0}^{n} \binom{n}{k}\, \beta_k\, H_{n+1-k}, \qquad n = 0, 1, 2, \dots \eqno(15)$$

The result follows by introducing (14) in (15). □

Algorithm 2 shows the details of the computational scheme used to compute the expression of Proposition 3. The binomial coefficients present in (14) and (15) may be efficiently computed by recursion.

Acknowledgements: The research was supported by the Center for Research on Optimization and Control (CEOC), University of Aveiro, Portugal, from the "Fundação para a Ciência e a Tecnologia" (FCT), co-financed by the European Community Fund FEDER/POCI 2010.

Algorithm 2 — Computes the expression of Prop. 3.

Require: $H_k = H^{(k)}(y^*)$ for $k = 0(1)m$;
Ensure: $G_k = G^{(k)}(y^*)$ for $k = 0(1)m$;

  $G_0 \leftarrow 1/H_0$;
  for $n = 0$ to $m-1$ do
    $\beta_n \leftarrow 0$; $G_{n+1} \leftarrow 0$; $Q \leftarrow 1$;
    for $k = 0$ to $n$ do
      $\beta_n \leftarrow \beta_n - Q \cdot G_k \cdot G_{n-k}$;
      $G_{n+1} \leftarrow G_{n+1} + Q \cdot \beta_k \cdot H_{n+1-k}$;
      $Q \leftarrow (n-k) \cdot Q/(k+1)$;  {binomial coefficient}
    end for
  end for

References:

[1] H. Akimaru and H. Takahashi. Asymptotic expansion for Erlang loss function and its derivative. IEEE Transactions on Communications, 29(9), 1981, pp. 1257-1260.

[2] E. Brockmeyer, H. L. Halstrom, and A. Jensen. The Life and Works of A. K. Erlang. Danish Academy of Technical Sciences, Copenhagen, 1948.

[3] R. Cooper. Introduction to Queueing Theory. North Holland, 1981.

[4] Jorge Sa Esteves, J. Craveirinha, and D. Cardoso. Computing Erlang-B function derivatives in the number of servers — a generalized recursion. ORSA Communications in Statistics, Stochastic Models, 11(2), 1995, pp. 311-331.

[5] Jorge Sa Esteves, J. Craveirinha, and D. Cardoso. A reduced recursion for computing Erlang-B function derivatives. In Proceedings of the 15th Int. Teletraffic Congress, Elsevier Science B.V., 1997, pp. 1315-1326.

[6] N. Gans, G. Koole, and A. Mandelbaum. Telephone call centers: Tutorial, review, and research prospects. Manufacturing and Service Operations Management, 5(2), 2003, pp. 79-141.

[7] D. L. Jagerman. Some properties of the Erlang loss function. The Bell System Technical Journal, 53(3), 1974, pp. 525-551.

[8] A. A. Jagers and E. A. van Doorn. Convexity of functions which are generalizations of the Erlang loss function and the Erlang delay function. SIAM Review, 33(2), June 1991, pp. 281-283.
