Variance Reduction by Antithetic Variates in GI/G/1 Queuing Simulations
Author(s): Bill Mitchell
Source: Operations Research, Vol. 21, No. 4 (Jul.-Aug., 1973), pp. 988-997
Published by: INFORMS
Stable URL: http://www.jstor.org/stable/169149


Variance Reduction by Antithetic Variates in GI/G/1 Queuing Simulations

Bill Mitchell

California State University, Hayward, California

(Received September 17, 1971)

This paper considers the use of antithetic variates to reduce the variance of estimates obtained in the simulation of a GI/G/1 queue. Two experimental configurations are considered: in the first, 2n observations are taken in a single run; in the second, n observations are taken in each of two runs. If the sequences of uniform random variables that generate the realizations of the queuing system in the two runs are antithetic, we show that the variance of estimates of the mean and distribution of stationary waiting time and number in the queue is less in the second configuration than in the first. We also obtain sufficient conditions for the covariance of functions of a vector of uniform random variables to be nonnegative. Experimental results are given for M/M/1 queuing simulations to illustrate the magnitude of the variance reduction.

THE TECHNIQUE of antithetic variates has been widely used to reduce the sample size in simulation experiments (see HAMMERSLEY AND HANDSCOMB[2] or NAYLOR ET AL.[4] for a discussion of antithetic variates). In this paper we investigate the use of antithetic variates in GI/G/1 queuing simulations and extend some results of PAGE.[5]

In a typical simulation of a GI/G/1 queue, we are interested in estimates of the probability distributions of W, the stationary waiting time of a customer, or N, the stationary number in the queue. In particular, we are interested in the expected values of W and N, namely, EW and EN. More generally, suppose X_n is a random variable that describes some property P of the queue relative to the nth customer. Suppose X is the stationary random variable that describes this property P in steady state. For example, X_n might be the waiting time of the nth customer and X the stationary waiting time of a customer.

Suppose we are interested in an estimate of the mean of X. If 2n successive customer random variables X_1^{(1)}, X_2^{(1)}, ..., X_{2n}^{(1)} are observed in a simulation run, then the usual unbiased estimate of the mean of X is X̄_1 = (1/2n)(X_1^{(1)} + X_2^{(1)} + ... + X_{2n}^{(1)}). In what follows we use a superscript in parentheses to denote the simulation-run number. If, instead, two distinct simulation runs are made for the same queuing system, yielding respective customer random variables X_1^{(2)}, ..., X_n^{(2)} and X_1^{(3)}, ..., X_n^{(3)}, then another estimate of the mean of X is X̄_2 = (1/2n)(X_1^{(2)} + ... + X_n^{(2)} + X_1^{(3)} + ... + X_n^{(3)}). If we assume that all observations are taken during steady-state conditions, then the two estimates have the same expected value, i.e., E X̄_1 = E X̄_2.

Suppose {U_i^{(k)}} and {V_i^{(k)}} are sequences of independent uniform random variables that generate the sequences of interarrival times {A_i^{(k)}} and service times {B_i^{(k)}}, respectively, in simulation run k. We are interested in choices of U_i^{(3)} and V_i^{(3)} that will yield a smaller variance for the second estimate of the mean of X, i.e., var X̄_2 ≤ var X̄_1, for various choices of X.

We will show that, if either U_i^{(3)} = 1 - U_i^{(2)} and V_i^{(3)} = 1 - V_i^{(2)}, or U_i^{(3)} = V_i^{(2)} and V_i^{(3)} = U_i^{(2)}, then var X̄_2 ≤ var X̄_1 when EX, the expected value of our estimate, is equal to EW (the average waiting time of a customer), P{W ≥ y}, or P{N > l}. That is, for the same number of observations, we can reduce the variance of our estimate of X by suitable choice of U_i^{(3)} and V_i^{(3)}.

If either U_i^{(3)} = 1 - U_i^{(2)} and V_i^{(3)} = 1 - V_i^{(2)}, or U_i^{(3)} = V_i^{(2)} and V_i^{(3)} = U_i^{(2)}, we will refer to the pairs (U_i^{(2)}, V_i^{(2)}) and (U_i^{(3)}, V_i^{(3)}) as antithetic variates, because, with either choice, if U_i^{(2)} and V_i^{(2)} tend to increase congestion in the queue, then U_i^{(3)} and V_i^{(3)} tend to decrease congestion, and vice versa.

In Section 1 we collect the definitions and assumptions for our model. In Section 2 we prove a general result, Theorem 1, which states that, if f and g are real-valued functions of a vector x ∈ R^k such that, for each argument x_i, f and g are monotonic in x_i in the opposite (same) direction, and if U = (U_1, ..., U_k) is a vector of independent random variables, each uniform on [0, 1], then cov[f(U), g(U)] ≤ (≥) 0. This theorem extends a result of Page,[5] who proved a similar theorem for the case k = 1. The theorem will be used in Section 3 to prove Theorem 2, var X̄_2 ≤ var X̄_1.

In Section 3 we prove that, if {X_n} and X satisfy certain properties, then cov(X_i^{(k)}, X_j^{(k)}), the covariance between random variables in the same simulation run, is nonnegative. In addition, if either U_i^{(3)} = 1 - U_i^{(2)} and V_i^{(3)} = 1 - V_i^{(2)}, or U_i^{(3)} = V_i^{(2)} and V_i^{(3)} = U_i^{(2)}, we show that cov(X_i^{(2)}, X_j^{(3)}), the covariance between random variables in runs 2 and 3, is nonpositive. These two results imply Theorem 2, var X̄_2 ≤ var X̄_1. Page[5] showed that, for the above choices of U_i^{(3)} and V_i^{(3)}, cov(B_i^{(2)} - A_i^{(2)}, B_i^{(3)} - A_i^{(3)}) ≤ 0.

In Section 4 we show how Theorem 2 can be applied to simulation estimates of some useful properties of queues, namely, the average waiting time of a customer, the probability distribution of the stationary waiting time, and the probability distribution of the stationary number in the queue. In each case we show that we can reduce the variance of the estimates by choosing U_i^{(3)} and V_i^{(3)} to be antithetic to U_i^{(2)} and V_i^{(2)}.

Finally, in Section 5 we give some experimental results for M/M/1 queuing simulations to illustrate the magnitude of the variance reduction for estimates of the average waiting time of a customer.
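To make the two antithetic pairings concrete before the formal development, here is a brief numerical sketch (ours, not the paper's; the exponential distributions, parameter values, and variable names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Run 2: independent uniforms driving the queue realization.
u2 = rng.random(5)        # U_i^(2), generate interarrival times A_i^(2)
v2 = rng.random(5)        # V_i^(2), generate service times B_i^(2)

# Pairing 1: complement both streams.
u3, v3 = 1.0 - u2, 1.0 - v2
# Pairing 2: swap the interarrival and service streams.
u3_swap, v3_swap = v2.copy(), u2.copy()

# Inverse-CDF sampling with exponential F and G (illustrative choice):
# F^{-1}(u) = -EA*log(1-u), G^{-1}(v) = -EB*log(1-v).
EA, EB = 1.0, 0.7
a2 = -EA * np.log(1.0 - u2)   # interarrival times, run 2
a3 = -EA * np.log(1.0 - u3)   # interarrival times, run 3 (pairing 1)

# A small u2 (short interarrival gap, more congestion) becomes a large
# u3 (long gap, less congestion): the two runs push in opposite directions.
```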

1. THE MODEL: DEFINITIONS AND ASSUMPTIONS

THE SIMULATION EXPERIMENT consists of three distinct runs for the same GI/G/1 queuing system with FIFO order of service. We index the runs by k and attach a superscript to variables to indicate a reference to a particular run. A variable without a superscript refers to any run. Define:

A_i = time between the arrivals of the ith and (i+1)st customers.
B_i = service time of the ith customer.
F = distribution function of A_i.
G = distribution function of B_i.
{X_i(d): d ∈ D} = a family of random variables, indexed through a set D, describing some property P of the queue relative to the ith customer.
{X(d): d ∈ D} = a family of random variables describing the stationary property P of the queue.
W_i = waiting time of the ith customer.
W = stationary waiting time of a customer.
N = stationary number of customers in the queue.
M(t) = number of arrivals in (0, t), given an arrival at time zero.

Let {U_i^{(k)}} be a sequence of independent random variables, each uniform on [0, 1], used to generate the sequence of interarrival times {A_i^{(k)}} in run k. Let {V_i^{(k)}} be a sequence of independent random variables, each uniform on [0, 1] and independent of the {U_i^{(k)}}, used to generate the sequence of service times {B_i^{(k)}} in run k. We will assume that EB^{(k)} < EA^{(k)} < ∞. LINDLEY[3] has shown that this is a necessary and sufficient condition for the existence of a stationary waiting-time distribution.

We will assume throughout this paper that observations in all runs are taken during the steady state, say, beginning with the (m+1)st customer in each run. To simplify the notation, we will write X_i(d) as X_i and X(d) as X where no confusion will arise.

Define two unbiased estimates, X̄_1 and X̄_2, of EX, the mean of X, by

X̄_1 = (1/2n) Σ_{i=m+1}^{m+2n} X_i^{(1)},  (1a)

X̄_2 = (1/2n) Σ_{k=2}^{3} Σ_{i=m+1}^{m+n} X_i^{(k)}.  (1b)

The variances of these estimates are

var X̄_1 = (1/2n)^2 [Σ_{i=m+1}^{m+2n} var X_i^{(1)} + 2 Σ_{i=m+1}^{m+n-1} Σ_{j=i+1}^{m+n} cov(X_i^{(1)}, X_j^{(1)}) + 2 Σ_{i=m+1}^{m+n} Σ_{j=m+n+1}^{m+2n} cov(X_i^{(1)}, X_j^{(1)}) + 2 Σ_{i=m+n+1}^{m+2n-1} Σ_{j=i+1}^{m+2n} cov(X_i^{(1)}, X_j^{(1)})],  (2a)

var X̄_2 = (1/2n)^2 [Σ_{k=2}^{3} Σ_{i=m+1}^{m+n} var X_i^{(k)} + 2 Σ_{i=m+1}^{m+n-1} Σ_{j=i+1}^{m+n} cov(X_i^{(2)}, X_j^{(2)}) + 2 Σ_{i=m+1}^{m+n} Σ_{j=m+1}^{m+n} cov(X_i^{(2)}, X_j^{(3)}) + 2 Σ_{i=m+1}^{m+n-1} Σ_{j=i+1}^{m+n} cov(X_i^{(3)}, X_j^{(3)})].  (2b)

Define

σ^2 = var X_j;

σ(i) = cov(X_j^{(k)}, X_{j+i}^{(k)})  (m+1 ≤ j ≤ m+2n-1, 1 ≤ i ≤ m+2n-j; k = 1)
  or (m+1 ≤ j ≤ m+n-1, 1 ≤ i ≤ m+n-j; k = 2, 3);

φ(i) = cov(X_j^{(2)}, X_{j+i}^{(3)})  (m+1 ≤ j ≤ m+n, m+1-j ≤ i ≤ m+n-j).

We will assume that σ(i) < ∞ and φ(i) < ∞ for i in the above ranges. We can rewrite (2a) and (2b) as

var X̄_1 = (1/2n)^2 [2n σ^2 + 4 Σ_{j=1}^{n-1} (n-j) σ(j) + 2 Σ_{j=1}^{2n-1} (n - |n-j|) σ(j)],  (3a)

var X̄_2 = (1/2n)^2 [2n σ^2 + 4 Σ_{j=1}^{n-1} (n-j) σ(j) + 2 Σ_{j=1}^{2n-1} (n - |n-j|) φ(j-n)].  (3b)

Therefore,

var X̄_1 - var X̄_2 = (1/2n^2) Σ_{j=1}^{2n-1} (n - |n-j|) [σ(j) - φ(j-n)].  (4)

As in (1a) and (1b) we define two unbiased estimates, W̄_1 and W̄_2, of EW, the mean waiting time of a customer, by

W̄_1 = (1/2n) Σ_{i=m+1}^{m+2n} W_i^{(1)},  (5a)

W̄_2 = (1/2n) Σ_{k=2}^{3} Σ_{i=m+1}^{m+n} W_i^{(k)}.  (5b)

To obtain estimates of the distributions of W and N, we define, for n ≥ 1, y ≥ 0, and l = 0, 1, 2, ...,

I(y) = 1 if W ≥ y, 0 otherwise;  (6a)

I_n(y) = 1 if W_n ≥ y, 0 otherwise;  (6b)

J(l) = 1 if M(W) > l, 0 otherwise;  (7a)

J_n(l) = 1 if M(W_n) > l, 0 otherwise.  (7b)

We note that M(W) and M(W_n) are well defined from the definitions of M(t), W, and W_n if we set M(0) = 0 with probability one. Since EI(y) = P{W ≥ y}, we define two unbiased estimates, Ī_1(y) and Ī_2(y), of P{W ≥ y} by

Ī_1(y) = (1/2n) Σ_{i=m+1}^{m+2n} I_i^{(1)}(y),  (8a)

Ī_2(y) = (1/2n) Σ_{k=2}^{3} Σ_{i=m+1}^{m+n} I_i^{(k)}(y).  (8b)

HAJI AND NEWELL[1] have shown that, under fairly general conditions, N has the same distribution as M(W). Therefore, since EJ(l) = P{M(W) > l}, we define two unbiased estimates, J̄_1(l) and J̄_2(l), of P{N > l} by

J̄_1(l) = (1/2n) Σ_{i=m+1}^{m+2n} J_i^{(1)}(l),  (9a)

J̄_2(l) = (1/2n) Σ_{k=2}^{3} Σ_{i=m+1}^{m+n} J_i^{(k)}(l).  (9b)

Finally, we denote the set of nonnegative real numbers by R_+ and the product space of the interval [0, 1] with itself n times by [0, 1]^n.

2. THE COVARIANCE OF NONNEGATIVE MONOTONIC FUNCTIONS

THE AIM OF this section is to prove the following general theorem; Page[5] first proved it for the case k = 1.

THEOREM 1. Let f(x) and g(x) be real-valued, Lebesgue-integrable functions of x ∈ R^k. Suppose for each argument x_j, 1 ≤ j ≤ k, either f is nondecreasing and g nonincreasing (nondecreasing), or f is nonincreasing and g nondecreasing (nonincreasing) in x_j. Let U = (U_1, ..., U_k) be a vector of independent random variables, each uniform on [0, 1]. Then cov[g(U), f(U)] ≤ (≥) 0.

Proof. Ef(U) = ∫_0^1 ··· ∫_0^1 f(u_1, ..., u_k) du_1 ··· du_k. Define r_0(u_1, ..., u_k) = f(u_1, ..., u_k) for u_i ∈ [0, 1], 1 ≤ i ≤ k. Define r_j(u_{j+1}, ..., u_k) = ∫_0^1 r_{j-1}(u_j, u_{j+1}, ..., u_k) du_j for k ≥ 2, 1 ≤ j < k, and u_i ∈ [0, 1]. Suppose first that f is nonincreasing and g nondecreasing in u_1. Given u_i ∈ [0, 1], 2 ≤ i ≤ k, let t_1 = sup{x: x ∈ [0, 1], f(x, u_2, ..., u_k) ≥ Ef(U)}, or set t_1 = 0 if f(0, u_2, ..., u_k) < Ef(U). Now

cov[g(U), f(U)] = E[g(U) - Eg(U)][f(U) - Ef(U)] = E g(U)[f(U) - Ef(U)]

= ∫_0^1 ··· ∫_0^1 g(u_1, ..., u_k)[f(u_1, ..., u_k) - Ef(U)] du_1 ··· du_k

= ∫_0^1 ··· ∫_0^1 {∫_0^{t_1} + ∫_{t_1}^1} g(u_1, ..., u_k)[f(u_1, ..., u_k) - Ef(U)] du_1 du_2 ··· du_k

≤ ∫_0^1 ··· ∫_0^1 g(t_1, u_2, ..., u_k) {∫_0^1 [f(u_1, ..., u_k) - Ef(U)] du_1} du_2 ··· du_k.

If k = 1, the term in braces is clearly equal to zero. If k ≥ 2,

cov[g(U), f(U)] ≤ ∫_0^1 ··· ∫_0^1 g(t_1, u_2, ..., u_k)[r_1(u_2, ..., u_k) - Ef(U)] du_2 ··· du_k.

Clearly, if f is nondecreasing (nonincreasing) in some u_j, 2 ≤ j ≤ k, then so is r_1. Now, if f is nondecreasing and g nonincreasing in u_1, define

t_1 = sup{x: x ∈ [0, 1], f(x, u_2, ..., u_k) ≤ Ef(U)},

or set t_1 = 0 if f(0, u_2, ..., u_k) > Ef(U). With this modification, it is easy to see that the above arguments hold in this case.

It is also easy to see that, if k ≥ 2, we can repeat the above steps (k - 2) times, yielding

cov[g(U), f(U)] ≤ ∫_0^1 g(t_1, t_2, ..., t_{k-1}, u_k)[r_{k-1}(u_k) - Ef(U)] du_k.

But

Ef(U) = ∫_0^1 ··· ∫_0^1 f(u_1, ..., u_k) du_1 ··· du_k = ∫_0^1 ··· ∫_0^1 r_1(u_2, ..., u_k) du_2 ··· du_k = ∫_0^1 r_{k-1}(u_k) du_k.

Hence cov[g(U), f(U)] ≤ 0. From the above arguments, it is easy to see that, if, for each x_j, 1 ≤ j ≤ k, f and g are both nondecreasing or both nonincreasing in x_j, then cov[g(U), f(U)] ≥ 0.
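Theorem 1 is easy to check by Monte Carlo; the sketch below (our illustration, with arbitrarily chosen monotone functions and k = 3) estimates the covariance for both the opposite-direction and same-direction cases:

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.random((200_000, 3))   # 200,000 draws of U = (U_1, U_2, U_3)

# f is nondecreasing in every coordinate; g is nonincreasing in every
# coordinate, so f and g are oppositely monotone in each argument.
f = U[:, 0] + U[:, 1] ** 2 + np.sqrt(U[:, 2])
g = np.exp(-U[:, 0]) + (1.0 - U[:, 1]) + 1.0 / (1.0 + U[:, 2])

cov_opposite = np.cov(f, g)[0, 1]   # Theorem 1 predicts <= 0

# h is nondecreasing in every coordinate, like f, so the parenthetical
# case of Theorem 1 predicts a nonnegative covariance.
h = 2.0 * U[:, 0] + U[:, 1] + U[:, 2]
cov_same = np.cov(f, h)[0, 1]       # Theorem 1 predicts >= 0
```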

3. THE VARIANCE OF THE ESTIMATES X̄_j(d)

WE WILL ASSUME that the family of random variables {X_n(d): n ≥ 1, d ∈ D} satisfies the following properties:

(P1) X_1(d) = 0 with probability one for d ∈ D.
(P2) There exists a sequence of functions {f_n} such that f_n: [0, 1]^{2n} × D → R_+ and X_{n+1}(d) = f_n(U_1, ..., U_n, V_1, ..., V_n, d) for n ≥ 1 and d ∈ D.
(P3) X_j(d) is independent of U_i and V_i for i ≥ j and d ∈ D.
(P4) f_n is nonincreasing in U_i and nondecreasing in V_i for 1 ≤ i ≤ n.
(P5) The stationary distribution X(d) exists for each d ∈ D.

To simplify the notation, we will write f_n(U_1, ..., U_n, V_1, ..., V_n, d) as f_n(U_1, ..., U_n, V_1, ..., V_n) and X_j(d) as X_j where no confusion will arise. It will also be convenient to define the following functions for use in the proofs in this and later sections. For n ≥ 1 and 1 ≤ k ≤ n, define (a) the function g_{nk}: [0, 1]^{2n-2k} × D → R_+ by

g_{nk}(u_1, ..., u_{n-k}, v_1, ..., v_{n-k}, d)
= E f_n(u_1, ..., u_{n-k}, U_{n-k+1}, ..., U_n, v_1, ..., v_{n-k}, V_{n-k+1}, ..., V_n, d);  (10)

and (b) the functions s_j, t_j: [0, 1]^{2j} × D → R_+ by

s_j(u_1, ..., u_j, v_1, ..., v_j, d) = f_j(1 - u_1, ..., 1 - u_j, 1 - v_1, ..., 1 - v_j, d),  (11a)

t_j(u_1, ..., u_j, v_1, ..., v_j, d) = f_j(v_1, ..., v_j, u_1, ..., u_j, d).  (11b)

We first prove that the covariance between the random variables X_{j+n} and X_j in the same run is nonnegative.

LEMMA 1. If the family {X_n(d): n ≥ 1, d ∈ D} satisfies (P1) through (P4), then cov[X_{j+n}(d), X_j(d)] ≥ 0 for j ≥ 1, n ≥ 1, and d ∈ D.

Proof. Since X_1 = 0 with probability 1, cov(X_{n+1}, X_1) = 0 for n ≥ 1. Therefore, assume j ≥ 2. Then,

cov(X_{j+n}, X_j) = E(X_{j+n} - EX_{j+n})(X_j - EX_j) = E X_{j+n}(X_j - EX_j)

= E f_{j+n-1}(U_1, ..., U_{j+n-1}, V_1, ..., V_{j+n-1}) [f_{j-1}(U_1, ..., U_{j-1}, V_1, ..., V_{j-1}) - EX_j]

= E g_{j+n-1,n}(U_1, ..., U_{j-1}, V_1, ..., V_{j-1}) [f_{j-1}(U_1, ..., U_{j-1}, V_1, ..., V_{j-1}) - EX_j]

from (P3) and (10). But E f_{j-1}(U_1, ..., U_{j-1}, V_1, ..., V_{j-1}) = EX_j. From (P2), (P4), and (10) it is easy to see that the functions g_{j+n-1,n} and f_{j-1} satisfy the hypothesis of Theorem 1. Hence cov(X_{j+n}, X_j) ≥ 0.

In the next two results we prove that, for suitable choices of the {U_i} and {V_i}, the covariance between the random variables X_k and X_j in runs 2 and 3 is nonpositive.

LEMMA 2. If U_i^{(3)} = 1 - U_i^{(2)} and V_i^{(3)} = 1 - V_i^{(2)} for i ≥ 1, and the family {X_n(d): n ≥ 1, d ∈ D} satisfies (P1) through (P4), then cov[X_k^{(2)}(d), X_j^{(3)}(d)] ≤ 0 for k, j ≥ 1 and d ∈ D.

Proof. It is sufficient to prove the result for k ≥ j. Let n = k - j. Since X_1 = 0 with probability 1, cov(X_{1+n}^{(2)}, X_1^{(3)}) = 0 for n ≥ 0. Therefore, assume j ≥ 2. Then,

cov(X_{j+n}^{(2)}, X_j^{(3)}) = E(X_{j+n}^{(2)} - EX_{j+n}^{(2)})(X_j^{(3)} - EX_j^{(3)}) = E X_{j+n}^{(2)}(X_j^{(3)} - EX_j^{(3)})

= E f_{j+n-1}(U_1^{(2)}, ..., U_{j+n-1}^{(2)}, V_1^{(2)}, ..., V_{j+n-1}^{(2)})
  × [f_{j-1}(U_1^{(3)}, ..., U_{j-1}^{(3)}, V_1^{(3)}, ..., V_{j-1}^{(3)}) - EX_j^{(3)}]  [from (P2)]

= E f_{j+n-1}(U_1^{(2)}, ..., U_{j+n-1}^{(2)}, V_1^{(2)}, ..., V_{j+n-1}^{(2)})
  × [f_{j-1}(1 - U_1^{(2)}, ..., 1 - U_{j-1}^{(2)}, 1 - V_1^{(2)}, ..., 1 - V_{j-1}^{(2)}) - EX_j^{(3)}]  (12)

= E g_{j+n-1,n}(U_1^{(2)}, ..., U_{j-1}^{(2)}, V_1^{(2)}, ..., V_{j-1}^{(2)})
  × [f_{j-1}(1 - U_1^{(2)}, ..., 1 - V_{j-1}^{(2)}) - EX_j^{(3)}]  [from (P3) and (10)]

= E g_{j+n-1,n}(U_1^{(2)}, ..., U_{j-1}^{(2)}, V_1^{(2)}, ..., V_{j-1}^{(2)})
  × [s_{j-1}(U_1^{(2)}, ..., U_{j-1}^{(2)}, V_1^{(2)}, ..., V_{j-1}^{(2)}) - EX_j^{(3)}]  [from (11a)]

Now 1 - U_i^{(2)} and U_i^{(2)} have the same distribution; so do 1 - V_i^{(2)} and V_i^{(2)}. Therefore, it is easy to see from (P2) and (11a) that E s_{j-1}(U_1^{(2)}, ..., U_{j-1}^{(2)}, V_1^{(2)}, ..., V_{j-1}^{(2)}) = EX_j^{(3)}. From (P2), (P4), (10), and (11a) it is clear that the functions g_{j+n-1,n} and s_{j-1} satisfy the hypothesis of Theorem 1. Hence cov(X_{j+n}^{(2)}, X_j^{(3)}) ≤ 0.

COROLLARY 1. If U_i^{(3)} = V_i^{(2)} and V_i^{(3)} = U_i^{(2)} for i ≥ 1 and the family {X_n(d): n ≥ 1, d ∈ D} satisfies (P1) through (P4), then cov[X_k^{(2)}(d), X_j^{(3)}(d)] ≤ 0 for k, j ≥ 1 and d ∈ D.

Proof. From (12),

cov(X_{j+n}^{(2)}, X_j^{(3)}) = E f_{j+n-1}(U_1^{(2)}, ..., U_{j+n-1}^{(2)}, V_1^{(2)}, ..., V_{j+n-1}^{(2)})
  × [f_{j-1}(U_1^{(3)}, ..., U_{j-1}^{(3)}, V_1^{(3)}, ..., V_{j-1}^{(3)}) - EX_j^{(3)}]

= E f_{j+n-1}(U_1^{(2)}, ..., U_{j+n-1}^{(2)}, V_1^{(2)}, ..., V_{j+n-1}^{(2)})
  × [t_{j-1}(U_1^{(2)}, ..., U_{j-1}^{(2)}, V_1^{(2)}, ..., V_{j-1}^{(2)}) - EX_j^{(3)}]

= E g_{j+n-1,n}(U_1^{(2)}, ..., U_{j-1}^{(2)}, V_1^{(2)}, ..., V_{j-1}^{(2)})
  × [t_{j-1}(U_1^{(2)}, ..., U_{j-1}^{(2)}, V_1^{(2)}, ..., V_{j-1}^{(2)}) - EX_j^{(3)}],

from (P3), (10), and (11b). But E t_{j-1}(U_1^{(2)}, ..., U_{j-1}^{(2)}, V_1^{(2)}, ..., V_{j-1}^{(2)}) = EX_j^{(3)}, and g_{j+n-1,n} and t_{j-1} satisfy the hypothesis of Theorem 1. Hence

cov(X_{j+n}^{(2)}, X_j^{(3)}) ≤ 0.

We are now in a position to prove the major result of this paper.

THEOREM 2. If U_i^{(3)} = 1 - U_i^{(2)} and V_i^{(3)} = 1 - V_i^{(2)}, or U_i^{(3)} = V_i^{(2)} and V_i^{(3)} = U_i^{(2)}, and the families {X_n(d): n ≥ 1, d ∈ D} and {X(d): d ∈ D} satisfy (P1) through (P5), then var X̄_2(d) ≤ var X̄_1(d) for d ∈ D.

Proof. From (4),

var X̄_1 - var X̄_2 = (1/2n^2) Σ_{j=1}^{2n-1} (n - |n-j|) [σ(j) - φ(j-n)].

But σ(j) = cov(X_i^{(k)}, X_{i+j}^{(k)}) ≥ 0 by Lemma 1, and φ(j) = cov(X_i^{(2)}, X_{i+j}^{(3)}) ≤ 0 by Lemma 2 and Corollary 1. Hence var X̄_2 ≤ var X̄_1.

4. THE VARIANCE OF THE ESTIMATES W̄_j, Ī_j(y), AND J̄_j(l)

WE APPLY THE results of the last section to estimates of the mean and distribution of the stationary waiting time W and number in the queue N. We will show that properties (P1) through (P5) hold for the sequences of random variables {W_n}, {I_n(y)}, and {J_n(l)} and the stationary random variables W, I(y), and J(l).

It is well known that W_n satisfies, for n ≥ 2,

W_n = max(0, W_{n-1} + B_{n-1} - A_{n-1}).  (13)

Define F^{-1}(y) = sup{x: F(x) ≤ y} and G^{-1}(y) = sup{x: G(x) ≤ y} for y ∈ [0, 1]. Define a function g by

g(x, u, v) = max[0, x + G^{-1}(v) - F^{-1}(u)].  (x ∈ R_+; u, v ∈ [0, 1])  (14)

Define a sequence of functions {f_n} recursively by

f_1(u_1, v_1) = g(0, u_1, v_1),  (15a)

f_n(u_1, ..., u_n, v_1, ..., v_n) = g[f_{n-1}(u_1, ..., u_{n-1}, v_1, ..., v_{n-1}), u_n, v_n]  (15b)


for n ≥ 2 and u_i, v_i ∈ [0, 1]. From (13) through (15) it is easy to see that

W_n = f_{n-1}(U_1, ..., U_{n-1}, V_1, ..., V_{n-1}).  (n ≥ 2)  (16)

If we assume W_1 = 0 with probability 1 and let the index set D consist of a single element, it is easy to see that properties (P1) through (P4) are satisfied by the sequence of random variables {W_n} and functions {f_n}. As remarked earlier, EB_i < EA_i < ∞ implies that the stationary distribution of W exists. Hence properties (P1) through (P5) are satisfied by {W_n}, W, and {f_n}.
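Equations (13) through (16) transcribe directly into code. The sketch below is ours, under the assumption of exponential interarrival and service distributions so that F^{-1} and G^{-1} have closed forms:

```python
import math

def waiting_times(u, v, EA=1.0, EB=0.7):
    """Return [W_1, W_2, ...] from uniforms u_1..u_n, v_1..v_n using the
    Lindley recursion (13) through the function g of (14):
    g(x, u, v) = max(0, x + G^{-1}(v) - F^{-1}(u))."""
    Finv = lambda y: -EA * math.log(1.0 - y)  # inverse CDF of exponential F
    Ginv = lambda y: -EB * math.log(1.0 - y)  # inverse CDF of exponential G
    w = [0.0]  # W_1 = 0 with probability one, as in (P1)
    for ui, vi in zip(u, v):
        w.append(max(0.0, w[-1] + Ginv(vi) - Finv(ui)))
    return w
```

Property (P4) is visible in (14): each W_{n+1} is nonincreasing in each u_i (a larger u_i means a longer interarrival time) and nondecreasing in each v_i (a larger v_i means a longer service time).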

Equations (6b) and (16) define a sequence of functions {h_n} implicitly such that

I_n(y) = h_{n-1}(U_1, ..., U_{n-1}, V_1, ..., V_{n-1}, y).  (n ≥ 2, y ≥ 0)  (17)

Similarly, (7b) and (16) define a sequence of functions {q_n} implicitly such that

J_n(l) = q_{n-1}(U_1, ..., U_{n-1}, V_1, ..., V_{n-1}, l).  (n ≥ 2; l = 0, 1, 2, ...)  (18)

It is easy to see that the families of random variables {I_n(y): n ≥ 1, y ≥ 0} and {I(y): y ≥ 0} and the sequence of functions {h_n} satisfy properties (P1) through (P5). Similarly for {J_n(l): n ≥ 1; l = 0, 1, 2, ...}, {J(l): l = 0, 1, 2, ...}, and {q_n}.

Finally, Theorem 2 and the above results imply the following theorem.

THEOREM 3. If U_i^{(3)} = 1 - U_i^{(2)} and V_i^{(3)} = 1 - V_i^{(2)}, or U_i^{(3)} = V_i^{(2)} and V_i^{(3)} = U_i^{(2)}, then var W̄_2 ≤ var W̄_1, var Ī_2(y) ≤ var Ī_1(y) for y ≥ 0, and var J̄_2(l) ≤ var J̄_1(l) for l ≥ 0.

5. EXPERIMENTAL RESULTS

IN THIS SECTION we give some experimental results for M/M/1 queuing simulations to illustrate the magnitude of the variance reduction for estimates of the average waiting time. From (5a) and (5b) we have

var W̄_1 = (1/2n)^2 Σ_{i=m+1}^{m+2n} Σ_{j=m+1}^{m+2n} cov(W_i^{(1)}, W_j^{(1)}),  (19a)

var W̄_2 = (1/2n)^2 Σ_{k=2}^{3} Σ_{l=2}^{3} Σ_{i=m+1}^{m+n} Σ_{j=m+1}^{m+n} cov(W_i^{(k)}, W_j^{(l)}).  (19b)

Define unbiased estimates α̂(j), β̂(j), and γ̂(j) of cov(W_i^{(1)}, W_{i+j}^{(1)}), cov(W_i^{(k)}, W_{i+j}^{(k)}) (k = 2, 3), and cov(W_i^{(2)}, W_{i+j}^{(3)}), respectively, by

α̂(j) = [1/(2n-j)] Σ_{i=m+1}^{m+2n-j} (W_i^{(1)} - EW)(W_{i+j}^{(1)} - EW),  (j = 0, 1, ..., 2n-1)  (20)

β̂(j) = [1/2(n-j)] Σ_{k=2}^{3} Σ_{i=m+1}^{m+n-j} (W_i^{(k)} - EW)(W_{i+j}^{(k)} - EW),  (j = 0, 1, ..., n-1)  (21)

γ̂(0) = (1/n) Σ_{i=m+1}^{m+n} (W_i^{(2)} - EW)(W_i^{(3)} - EW),  (22a)

γ̂(j) = [1/2(n-j)] Σ_{i=m+1}^{m+n-j} [(W_i^{(2)} - EW)(W_{i+j}^{(3)} - EW) + (W_i^{(3)} - EW)(W_{i+j}^{(2)} - EW)].  (j = 1, 2, ..., n-1)  (22b)

Define unbiased estimators Var W̄_1 and Var W̄_2 of var W̄_1 and var W̄_2, respectively, by substituting the estimators α̂(j), β̂(j), and γ̂(j) in (19a) and (19b). This yields


Var W̄_1 = (1/2n)^2 [2n α̂(0) + 2 Σ_{j=1}^{2n-1} (2n-j) α̂(j)],  (23a)

Var W̄_2 = (1/2n)^2 {2n [β̂(0) + γ̂(0)] + 4 Σ_{j=1}^{n-1} (n-j) [β̂(j) + γ̂(j)]}.  (23b)

TABLE I
COMPUTATIONAL RESULTS FOR THE CASE U_i^{(3)} = 1 - U_i^{(2)} AND V_i^{(3)} = 1 - V_i^{(2)}

 m    n    ρ     EW     W̄_1    W̄_2    Var W̄_1     Var W̄_2     100(Var W̄_2)^{1/2}/EW   100[(Var W̄_1)^{1/2} - (Var W̄_2)^{1/2}]/(Var W̄_1)^{1/2}
 20   150  0.7   1.633  1.665  1.573  0.00467692  0.00295270  3.33%                   20.55%
 80   200  0.9   8.100  7.863  8.361  0.263139    0.152922    4.83%                   23.8%
100   225  0.95  18.05  18.82  17.88  1.2906      0.475059    3.82%                   39.4%

Definitions of symbols:
m = number of customers to arrive in each run before observations begin.
n = one-half the number of observations in run number one = the number of observations in runs two and three.
ρ = traffic intensity = (expected service time)/(expected interarrival time).
EW = theoretical average stationary waiting time of a customer.
W̄_i = estimate of EW [see (5a) and (5b)].
Var W̄_i = estimate of the variance of W̄_i [see (23a) and (23b)].

Var W̄_i is an estimate of the variance of the estimate W̄_i of the average waiting time. In Tables I and II, Var W̄_1 and Var W̄_2 are given for simulations of an M/M/1 queue. Results are listed in each table for three different values of the traffic intensity ρ, which is defined as

ρ = EB/EA.  (24)

EA is set equal to one and EB is changed to vary the traffic intensity. Each row in the tables represents the average of 100 independent simulations. As indicated in Section 1, observations begin in each run with the (m+1)st customer. In run number one in each simulation, 2n observations are taken, and, in runs two and three, n observations. For an M/M/1 queue it is well known that EW is given by

EW = (EB)^2/(EA - EB).  (25)

Table I gives the results for the case U_i^{(3)} = 1 - U_i^{(2)} and V_i^{(3)} = 1 - V_i^{(2)}. Table II gives the results for the case U_i^{(3)} = V_i^{(2)} and V_i^{(3)} = U_i^{(2)}. The last two columns in the tables give the coefficient of variation of W̄_2 and the relative reduction in the standard deviation of W̄_1, respectively.

TABLE II
COMPUTATIONAL RESULTS FOR THE CASE U_i^{(3)} = V_i^{(2)} AND V_i^{(3)} = U_i^{(2)}

 m    n    ρ     EW     W̄_1    W̄_2    Var W̄_1     Var W̄_2     100(Var W̄_2)^{1/2}/EW   100[(Var W̄_1)^{1/2} - (Var W̄_2)^{1/2}]/(Var W̄_1)^{1/2}
 20   150  0.7   1.633  1.665  1.633  0.00467692  0.00301863  3.36%                   19.7%
 80   200  0.9   8.100  7.863  7.974  0.263139    0.143764    4.68%                   26.1%
100   225  0.95  18.05  18.82  17.59  1.2906      0.451187    3.72%                   40.99%

For definitions of the symbols, see Table I.

The results indicate a variance reduction in M/M/1 simulations of at least 20 per cent for cases in which the coefficient of variation of W̄_2 is less than 5 per cent.
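The experiment of this section can be reproduced in outline as follows. This sketch is ours: the replication count and seed are arbitrary, and the variances of W̄_1 and W̄_2 are estimated by their sample variances over independent replications rather than by (23a) and (23b):

```python
import numpy as np

def mm1_waits(u, v, EA=1.0, EB=0.7):
    # Lindley recursion driven by uniforms through exponential inverse CDFs.
    a = -EA * np.log(1.0 - u)            # interarrival times
    b = -EB * np.log(1.0 - v)            # service times
    w = np.zeros(len(u) + 1)             # w[0] is W_1 = 0
    for i in range(len(u)):
        w[i + 1] = max(0.0, w[i] + b[i] - a[i])
    return w

def one_replication(rng, m=20, n=150):
    # Configuration 1: a single run of 2n observations after warm-up m.
    u1, v1 = rng.random(m + 2 * n), rng.random(m + 2 * n)
    w1bar = mm1_waits(u1, v1)[m : m + 2 * n].mean()   # W_{m+1}..W_{m+2n}
    # Configuration 2: runs 2 and 3 of n observations each, with the
    # antithetic choice U^(3) = 1 - U^(2), V^(3) = 1 - V^(2).
    u2, v2 = rng.random(m + n), rng.random(m + n)
    w2 = mm1_waits(u2, v2)[m : m + n]
    w3 = mm1_waits(1.0 - u2, 1.0 - v2)[m : m + n]
    w2bar = 0.5 * (w2.mean() + w3.mean())
    return w1bar, w2bar

rng = np.random.default_rng(7)
reps = np.array([one_replication(rng) for _ in range(200)])
var1, var2 = reps[:, 0].var(ddof=1), reps[:, 1].var(ddof=1)
# For rho = 0.7 the paper reports roughly a 20 per cent reduction in the
# standard deviation; qualitatively, var2 should come out below var1.
```

Both estimators target EW = (EB)^2/(EA - EB) = 0.49/0.3 ≈ 1.633 at ρ = 0.7, so comparing var1 and var2 over the replications isolates the effect of the antithetic pairing.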

ACKNOWLEDGMENTS

THE AUTHOR IS indebted to BJÖRN LEONARDZ, Department of Business Administration, University of Stockholm, and to WILLIAM D. WHISLER, Department of Management Sciences, California State University, Hayward, for their helpful comments and discussion of this paper.

REFERENCES

1. R. HAJI AND G. NEWELL, "A Relation Between Stationary Queue and Waiting Time Distributions," J. Appl. Prob. 8, 617-620 (1971).
2. J. M. HAMMERSLEY AND D. C. HANDSCOMB, Monte Carlo Methods, Methuen, London, 1964.
3. D. V. LINDLEY, "The Theory of Queues with a Single Server," Proc. Camb. Phil. Soc. 48, 277-289 (1952).
4. T. H. NAYLOR ET AL., Computer Simulation Techniques, Wiley, New York, 1966.
5. E. PAGE, "Simulation of Queuing Systems," Opns. Res. 13, 300-305 (1965).
