
Algorithms for Higher-Order Derivatives of Erlang C Function

JORGE SÁ ESTEVES
University of Aveiro, Department of Mathematics
Campus de Santiago, 3810-193 Aveiro, PORTUGAL
saesteves@ua.pt

Abstract: In this paper we analyze the partial derivatives of any order of the continued Erlang C function with respect to the number of servers. For the numerical computation of those derivatives, several algorithms are proposed and compared in terms of stability, efficiency and precision. This study concludes that a recursive matrix relation presented in previous work [4, 5] may be used to establish a simple and reliable algorithm having the best performance considering the trade-off between the different criteria. Some computational results are presented and discussed. A conjecture about the convexity of the first derivative is supported by these computational results.

Key-Words: Performance Evaluation, Queueing Systems, Erlang C Formula, Stable Recursions.

1 Introduction

The derivatives of the Erlang C function are useful in the optimal design of stochastic service systems (queues or networks of queues). Indeed, one of the most common models supporting performance evaluation of those systems is the M/M/x/∞ (Erlang C) model. Moreover, the Erlang C formula plays an important role in approximations for more general systems. Examples of such systems are telephone switching networks, computer systems (jobs submitted to parallel processors, dynamic shared memory) and satellite communication systems (allocation of transmission bandwidth). Recently, the Erlang C model has been subjected to intensive study in the context of the dimensioning and workforce management of call centers [6].

Non-linear programming problems encountered in the optimization of those systems can be solved by using the performance functions and their derivatives, namely in Newton-Raphson and gradient-type methods. Moreover, modelers in areas such as design optimization and economics are often interested in performing post-optimal sensitivity analysis.
This analysis consists in determining the sensitivity of the optimum to small perturbations in the parameter or constraint values. Additionally, an efficient and accurate method for calculating higher-order derivatives of the Erlang C function may be used for local approximations of the function and its derivatives by means of a convenient Taylor or Hermite polynomial. Those approximations are especially important for iterative algorithms in small neighborhoods of the solution.

A method for calculating the derivatives of order n of the Erlang B function with respect to the number of servers was proposed by the author in [4]. A second paper [5] then showed that, for high values of the arguments, a significant improvement in efficiency can be obtained, without jeopardizing the required precision, by defining a reduced recursion starting from a point closer to the desired value of the number of servers.

Following the above-mentioned works on the Erlang B function derivatives, a method for calculating directly the higher-order derivatives of the Erlang C function is developed in the present work.

2 The Erlang B and C Formulas

The Erlang B and C formulas are true probability classics. Indeed, much of the theory was developed by A. K. Erlang and his colleagues prior to 1925 [2]. The subject has been extensively studied and applied by telecommunications engineers and mathematicians ever since. A nice introductory account, including some of the telecommunications subtleties, is provided by [3]. The Erlang B (or loss) formula gives the (steady-state) blocking probability in the Erlang loss model, i.e., in the M/M/x/0 model (see for example [3, pp. 5 and 79]):

B(a, x) \doteq \frac{a^x/x!}{\sum_{j=0}^{x} a^j/j!}, \qquad x \in \mathbb{N}_0, \; a \in \mathbb{R}^+.   (1)

[Proceedings of the 13th WSEAS International Conference on COMMUNICATIONS, ISSN: 1790-5117, ISBN: 978-960-474-098-7]

The numerical studies regarding this formula are usually based on its analytical continuation, ascribed to R.
Fortet [7]:

[B(a, x)]^{-1} = I(a, x) = a \int_0^{+\infty} e^{-az} (1 + z)^x \, dz,   (2)

which is valid for offered traffic a ∈ R^+ and x ∈ R_0^+ servers. The function I(a, x) tends to be easier to analyze than B(a, x) and may be named the reciprocal, or the inverse probability of blocking.

A classical result is the following recursion, obtained by partial integration of (2):

I(a, x + 1) = \frac{x + 1}{a} I(a, x) + 1, \qquad x \in \mathbb{R}_0^+.   (3)

Since B(a, 0) = I(a, 0) = 1 for all a ∈ R^+, I(a, x) may be calculated by recursion (3) for any positive integer x. Actually, in [4] it is shown that (3) defines a very stable numerical recursion.

The Erlang C (or delay) formula gives the (steady-state) probability of delay (that an arrival must wait before beginning service) in the Erlang delay model, i.e., in the M/M/x/∞ model (e.g., see pp. 7 and 91 of [3]):

C(a, x) \doteq \frac{\frac{a^x}{x!} \frac{x}{x-a}}{\sum_{j=0}^{x-1} \frac{a^j}{j!} + \frac{a^x}{x!} \frac{x}{x-a}}, \qquad \text{for } 0 < a < x.   (4)

The Erlang C formula is intimately related to the Erlang B formula, provided that 0 < a < x. Indeed, it is easy to show that:

C(a, x) = \frac{x B(a, x)}{x - a + a B(a, x)}, \qquad 0 < a < x.   (5)

The reciprocal of C(a, x), or the inverse probability of waiting, may be defined as J(a, x) \doteq 1/C(a, x). Using (5) and (3) it may be shown that

J(a, x) = I(a, x) - I(a, x - 1).   (6)

An integral representation for the continued Erlang C function may be obtained by substituting relation (2) in (6):

[C(a, x)]^{-1} = J(a, x) = a \int_0^{+\infty} e^{-az} (1 + z)^{x-1} z \, dz.   (7)

In Section 4, methods for computing the derivatives C_x^{(k)}(a, x), k = 0(1)n, will be given. These methods are generalizations of those for the derivatives B_x^{(k)}(a, x), k = 0(1)n, proposed in [4] and [5].

3 Erlang B Function Derivatives

The main features of the method proposed in [4] are the starting point of the present work.
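As a concrete illustration of Section 2, recursion (3) and relation (5) translate into a few lines of code. This is a minimal Python sketch (ours, not part of the original paper; the function names are our own):

```python
def erlang_b(a: float, x: int) -> float:
    """Erlang B via the stable recursion (3) on the reciprocal I = 1/B:
    I(a, 0) = 1 and I(a, j) = (j/a) I(a, j-1) + 1."""
    inv_b = 1.0
    for j in range(1, x + 1):
        inv_b = j * inv_b / a + 1.0
    return 1.0 / inv_b

def erlang_c(a: float, x: int) -> float:
    """Erlang C from Erlang B through relation (5), valid for 0 < a < x."""
    b = erlang_b(a, x)
    return x * b / (x - a + a * b)
```

For example, erlang_c(1.0, 2) returns 1/3, in agreement with the value 3.333 333 E-1 reported for k = 0 in Table 1 below.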
In the sequel, we summarize the most important results from [4] and the corresponding notation used in this paper. Successive differentiation of (2) leads to:

I_k(a, x) = \frac{\partial^k I}{\partial x^k} = a \int_0^{+\infty} e^{-az} (1 + z)^x \ln^k(1 + z) \, dz,   (8)

and successive differentiation of (7) yields

J_k(a, x) = \frac{\partial^k J}{\partial x^k} = a \int_0^{+\infty} e^{-az} (1 + z)^{x-1} z \ln^k(1 + z) \, dz.   (9)

Successive differentiation of (3) with respect to x leads to the general recursive relation (valid for k = 1, 2, ...):

I_k(a, x + 1) = \frac{x + 1}{a} I_k(a, x) + \frac{k}{a} I_{k-1}(a, x).   (10)

Note that, if a convenient algorithm for calculating the values I_k(a, x), k = 0(1)n, is obtained, then it is possible to calculate all the derivatives B_x^{(k)}(a, x), k = 0(1)n, using Algorithm 2 (see Appendix A). The general recursive matrix relation for calculating derivatives of B(a, x) with respect to the number of servers x proposed in [4] is defined from (3) and (10). Also in [4] it is shown that this matrix scheme defines a very stable numerical recursion. For high values of the arguments (say x > a > 100), a significant improvement in efficiency may be obtained by considering a reduced recursion starting from a point closer to the desired value of x (see [5]). This can be attained without jeopardizing the required precision. For short, this method is referred to as the RR method (Reduced Recursion method), in opposition to the method proposed in [4], which is referred to as the CR method (Complete Recursion method). The RR method has been established for derivatives up to order two [5]. However, this method may be easily generalized for calculating higher-order derivatives. Indeed, computational experiments show that the initial point used for calculating derivatives up to order two may also be used for calculating derivatives up to order five with excellent precision (all 15 figures are exact, except for some arguments greater than 10^5).

4 Erlang C Function Derivatives

Successive differentiation of (6) with respect to x leads to

J_k(a, x) = I_k(a, x) - I_k(a, x - 1), \qquad k = 0, 1, 2, \ldots   (11)

Since we have a convenient algorithm to calculate the values I_k(a, x), k = 0(1)n, it is plausible to assume that (11) might be used directly in order to compute the Erlang C function derivatives of any order in the variable x. Unfortunately, for x >> a we have I_k(a, x) ≈ I_k(a, x - 1), and (11) is numerically unstable, since the value calculated for J_k(a, x) is affected by subtractive cancellation.

In order to avoid this difficulty, our initial goal is to reformulate the problem of calculating J_k(a, x), k = 0(1)n, in terms of the quantities I_k(a, x), k = 0(1)n. The following proposition gives the first method for accomplishing that reformulation.

Proposition 1. For k = 0, 1, 2, ... we have:

J_k(a, x) = \sum_{j=1}^{+\infty} \frac{I_{k+j}(a, x - 1)}{j!} = \sum_{j=1}^{+\infty} (-1)^{j+1} \frac{I_{k+j}(a, x)}{j!}.

Proof: These two series representations of J_k(a, x) are obtained by taking the Taylor expansions of I_k(a, x) at the point x - 1 and of I_k(a, x - 1) at the point x, respectively. □

Proposition 1 may be used to compute J_k(a, x) with arbitrary precision, since the truncation error of the partial sum of the series is easily bounded. Indeed, since J_k(a, x) is the sum of an alternating series, it follows that:

J_k(a, x) \approx \sum_{j=1}^{r} (-1)^{j+1} \frac{I_{k+j}(a, x)}{j!},   (12)

with absolute error less than I_{k+r+1}(a, x)/(r + 1)!. However, the summation (12) converges slowly (see Lemma 4 of Appendix A of [4]). Furthermore, the evaluation of I_{k+r}(a, x) for large r by recursion has an important computational cost. The conclusion is that the algorithm inspired by relation (12) is numerically stable but rather inefficient.

Proposition 2. For a ∈ R^+, x ∈ [1, +∞[ and k ∈ N:

J_0(a, x) = \frac{x - a}{a} I_0(a, x - 1) + 1,
J_k(a, x) = \frac{k}{a} I_{k-1}(a, x - 1) + \frac{x - a}{a} I_k(a, x - 1).

Proof: Since I_0(a, x) = x I_0(a, x - 1)/a + 1, the first equality follows from (6). Successive differentiation with respect to the variable x leads to the second equality. □

Finally, Proposition 2 establishes a convenient computational method.
Indeed, we only need to compute I_k(a, x - 1), k = 0(1)m, using the matrix recursion defined in [4], and then apply Proposition 2 in order to compute J_k(a, x), k = 0(1)m. If a and x are greater than 100, the RR algorithm may be used to improve the computational efficiency. Algorithm 1 defines the computational procedure and calls Algorithm 2 (see Appendix A).

Algorithm 1 — CR Method for C_x^{(n)}(a, x), n = 0(1)m
Require: a ∈ R^+, x ∈ N and m ∈ N_0;
Ensure: C_n = C_x^{(n)}(a, x) for n = 0, 1, ..., m;
  Obtain initial values I_n(a, 0), n = 0(1)m;
  for j = 1 to x - 1 do
    for k = m downto 1 do
      I_k ← (k I_{k-1} + j I_k)/a;
    end for
    I_0 ← 1 + j I_0/a;
  end for
  J_0 ← 1 + (x - a) I_0/a;
  for k = 1 to m do
    J_k ← (k I_{k-1} + (x - a) I_k)/a;
  end for
  Compute C_x^{(n)}, n = 0(1)m, by Algorithm 2;

Figure 1: Graphs of |C_x^{(k)}(15, x)|, k = 0(1)4, for x ∈ [15, 30], computed by the CR method (Algorithm 1). [Plot: log of the derivatives versus the number of servers x, for a = 15 Erlang.]

5 Computational Results

First note that C(a, x) is a strictly decreasing function of x, therefore C'_x(a, x) < 0. Since C(a, x) is a convex function in the variable x (see [8]), C''_x(a, x) ≥ 0. Numerical evidence obtained in our computational experiments leads to the conjecture C'''_x(a, x) < 0, implying that −C'_x(a, x) is also a convex function of x.

Figure 2: Graphs of |C_x^{(k)}(100, x)|, k = 0(1)4, for x ∈ [100, 180], computed by the CR method (Algorithm 1). [Plot: log of the derivatives versus the number of servers x, for a = 100 Erlang.]

However, the derivatives of higher order do not seem to preserve their sign. Figures 1 and 2 show sample graphical representations which illustrate these properties.

Algorithm 1 defines the CR method for the computation of high-order derivatives of the Erlang C function.
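For concreteness, the whole pipeline of Algorithm 1 can be sketched in a few lines of Python (our sketch, not the paper's Turbo C implementation). Two substitutions are assumptions of ours: the initial values I_k(a, 0) are obtained by a plain composite Simpson rule instead of the 15-point Gauss-Laguerre rule of [4], and the final step uses the generic reciprocal (Leibniz) rule for C = 1/J in place of the paper's Algorithm 2, whose details are in Appendix A:

```python
import math

def erlang_c_derivatives(a: float, x: int, m: int):
    """CR method (Algorithm 1): returns [C, C', ..., C^(m)] w.r.t. x at (a, x)."""
    # Initial values I_k(a, 0) = integral_0^inf e^{-t} ln^k(1 + t/a) dt,
    # by composite Simpson rule (our stand-in for the Gauss-Laguerre rule of [4]).
    n, tmax = 20000, 60.0
    h = tmax / n
    I = []
    for k in range(m + 1):
        s = sum((1 if i in (0, n) else 4 if i % 2 else 2)
                * math.exp(-i * h) * math.log1p(i * h / a) ** k
                for i in range(n + 1))
        I.append(s * h / 3)
    I[0] = 1.0  # I_0(a, 0) = 1 exactly, since B(a, 0) = 1
    # Recursion (10): advance I_k(a, .) from 0 up to x - 1.
    for j in range(1, x):
        for k in range(m, 0, -1):
            I[k] = (k * I[k - 1] + j * I[k]) / a
        I[0] = 1.0 + j * I[0] / a
    # Proposition 2: J_k(a, x) from I_k(a, x - 1), avoiding cancellation.
    J = [1.0 + (x - a) * I[0] / a]
    for k in range(1, m + 1):
        J.append((k * I[k - 1] + (x - a) * I[k]) / a)
    # C = 1/J: reciprocal rule, sum_i binom(k,i) C^(i) J^(k-i) = 0 for k >= 1
    # (our stand-in for Algorithm 2 of Appendix A).
    C = [1.0 / J[0]]
    for k in range(1, m + 1):
        C.append(-sum(math.comb(k, i) * C[i] * J[k - i] for i in range(k)) / J[0])
    return C
```

With a = 10, x = 12 and m = 2, this sketch reproduces the magnitudes of the first rows of Table 2 below (C ≈ 4.4939 E-1, |C'_x| ≈ 1.9571 E-1), within the accuracy of the quadrature used for the initial values.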
The initial values are computed using Gauss-Laguerre quadrature (of degree 15), as explained in [4]. The implementation of the corresponding RR method is straightforward, since the only modification is the starting point of the recurrence calculations, which is easily computed as indicated in [5]. All the algorithms were implemented using the Turbo C compiler, version 2.1. The measurement of processing times (durations of evaluations) was performed on a PC (Intel Core2 Duo E6400 processor running at 2.13 GHz, with 2 GB of RAM) under MS Windows XP, with the process set to time-critical priority. All calculations were made in double precision, and the tables were produced by automatically writing LaTeX code.

Extensive computational results have been obtained, enabling a comparison of the precision and efficiency of the RR method, the CR method, and the Jagerman algorithm based on cardinal series quadrature (see Appendix B of [4]).

5.1 Small values of the arguments

For small values of the arguments, say a < x < 100, the RR method is not applicable. So we only compare the CR method with the Jagerman method. Since, in this range, the CR method is much more efficient than the Jagerman method, it remains to compare the precision of the calculated values. Tables 1 and 2 show some of those results.

Table 1: Derivatives C_x^{(k)} for a = 1 and x = 2

 k   CR Alg.          Jagerman Alg.    Error
 0   3.333 333 E-1    3.333 333 E-1    0.000000%
 1   3.995 941 E-1    3.995 942 E-1    0.000010%
 2   4.116 842 E-1    4.116 840 E-1    0.000054%
 3   3.323 324 E-1    3.323 319 E-1    0.00014%
 4   1.553 650 E-1    1.553 671 E-1    0.0013%
 5   5.263 250 E-2    5.260 938 E-2    0.044%
 6   1.445 280 E-1    1.444 383 E-1    0.062%
 7   4.022 122 E-3    4.013 443 E-3    0.22%
 8   2.599 829 E-1    2.572 807 E-1    1.1%
 9   1.249 069 E-1    1.029 022 E-1    21%
10   6.055 802 E-1    7.186 006 E-1    16%

Table 2: Derivatives C_x^{(k)} for a = 10 and x = 12

 k   CR Alg.                  Jagerman Alg.
 0   4.493 882 242 982 E-1    4.493 882 242 982 E-1
 1   1.957 138 108 265 E-1    1.957 138 108 265 E-1
 2   6.853 579 227 757 E-2    6.853 579 227 757 E-2
 3   1.630 475 310 872 E-2    1.630 475 310 872 E-2
 4   9.195 949 758 946 E-4    9.195 949 758 954 E-4
 5   8.915 293 473 794 E-4    8.915 293 473 783 E-4
 6   7.780 633 149 666 E-5    7.780 633 149 575 E-5
 7   1.801 545 889 623 E-4    1.801 545 889 630 E-4
 8   1.197 684 653 079 E-6    1.197 684 653 852 E-6
 9   6.757 700 191 225 E-5    6.757 700 191 184 E-5
10   1.163 815 001 485 E-5    1.163 815 001 145 E-5

The calculated approximations are presented together with the percentage of (relative) error, assuming that the Jagerman algorithm is exact. In fact, it may be noticed that the Jagerman algorithm is very precise in this range of the arguments. Indeed, comparisons with methods based on adaptive quadrature show that the results of the Jagerman method in this range have more than 10 exact figures. On the other hand, the precision of the proposed CR method is lower than that obtained with the Jagerman method. This fact is easily explained: Gauss-Laguerre quadrature calculates a crude approximation of the initial values, and for small values of a and x the relative error of the calculated values does not decay sufficiently, since very few steps of the recursion are used. Moreover, and as expected, the accuracy decreases as the order of the derivative increases. Table 2 shows that the degradation of the precision of the values calculated by the CR method practically disappears for x > a > 10.
In that table, the digits that do not agree with the values calculated by the Jagerman method are emphasized in bold. Therefore, for x > a > 10 it may be concluded that the CR method is much more efficient and allows high-precision calculations.

Figure 3: Processing times for the three algorithms. [Two log-log plots: CPU time (in milliseconds) versus offered traffic a, for computing C_x^{(k)}(a, x) with k = 0, 1 and with k = 0(1)4; curves for the Reduced Recursion, the Jagerman algorithm and the Complete Recursion; the values of x are precomputed for the constant ratio C(a, x)/(x - a) = 0.5.]

5.2 High values of the arguments

For x > a > 100 the RR method may be used in order to improve the efficiency of the CR method. The precision of the three methods is very good, so the critical criterion of comparison is efficiency. In order to carry out that comparison, Figure 3 presents two log-log plots of processing times (in milliseconds) versus the magnitude of the argument a. The value of x was pre-calculated such that W = C(a, x)/(x - a) = 0.5, meaning that we have a typical Erlang C system in which the mean waiting time is half the mean service time. To obtain better precision for the processing times, averages over 1000 runs (CR and Jagerman methods) and over 5000 runs (RR method) were computed. Some conclusions may be drawn from these computational results.

The CR algorithm is consistently more efficient than the Jagerman method, except for sufficiently high values of a, where the Jagerman approach is more efficient. The critical value of a beyond which this happens seems to be in the range 1000-5000.
However, the RR algorithm is much more efficient than the CR algorithm, and the approximations obtained are exactly the same. For calculating derivatives up to order four, the most efficient method is the RR algorithm, provided that a ≤ 3 × 10^4.

Nevertheless, in the range x > a > 10^5, some methods based on asymptotic approximations may be considered, even for high-precision computations. Those methods are well known for C(a, x) and for the first derivative C'_x(a, x) (see [1, 7]).

6 Conclusion

Extensive computation has shown that the proposed CR and RR methods are very accurate over a wide range of values of a and x, and compare favorably, in terms of efficiency, with the method based on cardinal series quadrature, except for very high values of x. For very high values of the arguments (say x > a > 5 × 10^4) the Jagerman algorithm shows better efficiency. Nevertheless, it is important to notice that in this case the precision of the Jagerman algorithm falls off. However, in that range of arguments of very high magnitude there are few applications. In those cases, we suggest asymptotic expansion methods.

Considering those features, as well as the numerical robustness of the methods, we believe that the CR and RR methods proposed in this paper may be attractive and reliable for all ranges of applications (namely in teletraffic engineering and the call center industry), even if the number of servers is very large.

Finally, it is conjectured that −C'_x(a, x) is a convex function of x. This conjecture is supported by extensive computation covering a wide range of the parameters, showing in all cases that C'''_x(a, x) < 0.
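An elementary plausibility check of this conjecture (ours, not from the paper) is possible without computing derivatives at all: since the continued C(a, x) is smooth in x, the third forward difference with unit step satisfies Δ³C(a, x) = C'''_x(a, ξ) for some ξ ∈ (x, x + 3), so negative third differences over a grid are consistent with C'''_x < 0. A small sketch, with C evaluated at integer x via the stable recursion (3) and relation (5):

```python
def erlang_c(a: float, x: int) -> float:
    """Erlang C at integer x, via the stable Erlang B recursion and relation (5)."""
    inv_b = 1.0
    for j in range(1, x + 1):
        inv_b = j * inv_b / a + 1.0
    b = 1.0 / inv_b
    return x * b / (x - a + a * b)

def third_differences(a: float, x_lo: int, x_hi: int):
    """Third forward differences of x -> C(a, x) for x = x_lo, ..., x_hi."""
    c = [erlang_c(a, x) for x in range(x_lo, x_hi + 4)]
    return [c[i + 3] - 3 * c[i + 2] + 3 * c[i + 1] - c[i]
            for i in range(x_hi - x_lo + 1)]
```

For a = 1 and x = 2, ..., 5, for instance, all third differences come out negative, in line with the conjecture.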