Cumulative Distribution Function Technique


8/3/2019 Cumulative Distribution Function Technique

Week 5: Distributions of Functions of Random Variables

    1. Introduction

Suppose $X_1, X_2, \ldots, X_n$ are $n$ random variables. In this chapter, we develop techniques that may be used to find the distribution of functions of these random variables, say $Y = u(X_1, \ldots, X_n)$. Some of the techniques we consider are:

1. The Cumulative Distribution Function (CDF) Technique

2. The Jacobian Transformation Technique

3. The Moment Generating Function (MGF) Technique

This week, we also discuss:

Distributions of Order Statistics

Special Sampling Distributions

    2. The CDF Technique

Let $X$ be a continuous random variable with cumulative distribution function $F_X(\cdot)$ and density function $f_X(\cdot)$.

Now suppose that $Y = g(X)$ is a function of $X$, where $g$ is differentiable and strictly increasing. Thus, its inverse $g^{-1}$ uniquely exists. The CDF of $Y$ can be derived using

$$F_Y(y) = \mathrm{Prob}(Y \le y) = \mathrm{Prob}\left(X \le g^{-1}(y)\right) = F_X\left(g^{-1}(y)\right)$$

and its density is given by

$$f_Y(y) = \frac{d}{dy} F_Y(y) = \frac{d}{dy} F_X\left(g^{-1}(y)\right) = f_X\left(g^{-1}(y)\right) \frac{d}{dy} g^{-1}(y).$$

If $g$ were strictly decreasing, then we would have

$$f_Y(y) = -f_X\left(g^{-1}(y)\right) \frac{d}{dy} g^{-1}(y).$$

In summary, if $g$ is a strictly monotonic function, then

$$f_Y(y) = f_X\left(g^{-1}(y)\right) \left|\frac{d}{dy} g^{-1}(y)\right|.$$
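As a concrete sanity check, the sketch below applies the monotone-transformation formula to an assumed example not taken from these notes: $X \sim$ Exponential(1) and $Y = g(X) = X^2$, so $g^{-1}(y) = \sqrt{y}$ and $f_Y(y) = e^{-\sqrt{y}}/(2\sqrt{y})$. It checks the derived density and CDF against simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustration (not from the notes): X ~ Exponential(1) and
# Y = g(X) = X**2, strictly increasing on x > 0, so g^{-1}(y) = sqrt(y),
# d/dy g^{-1}(y) = 1/(2*sqrt(y)), and the CDF technique gives
#   f_Y(y) = f_X(sqrt(y)) * 1/(2*sqrt(y)) = exp(-sqrt(y)) / (2*sqrt(y)).

def f_Y(y):
    return np.exp(-np.sqrt(y)) / (2.0 * np.sqrt(y))

# Check 1: the derived density integrates to the exact CDF
# F_Y(q) = F_X(sqrt(q)) = 1 - exp(-sqrt(q))  (midpoint rule).
edges = np.linspace(0.0, 1.0, 20_001)
mids = (edges[:-1] + edges[1:]) / 2.0
integral = float(np.sum(f_Y(mids)) * (1.0 / 20_000))
assert abs(integral - (1.0 - np.exp(-1.0))) < 0.005

# Check 2: the same CDF matches a Monte Carlo simulation of Y.
y = rng.exponential(scale=1.0, size=200_000) ** 2
for q in [0.5, 1.0, 4.0]:
    assert abs(np.mean(y <= q) - (1.0 - np.exp(-np.sqrt(q)))) < 0.01
```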


    3. Example - CDF Technique

Let $X$ be a random variable with p.d.f.

$$f(x) = \frac{e^x}{(1 + e^x)^2} \quad \text{for } -\infty < x < \infty.$$

We wish to find the distribution of

$$Y = e^{-X}.$$

Here we have $g(X) = e^{-X}$, which is a strictly decreasing function. Thus,

$$g^{-1}(y) = -\ln y$$

so that $\frac{d}{dy} g^{-1}(y) = -\frac{1}{y}$, and applying the formula above, we have

$$f_Y(y) = f_X\left(g^{-1}(y)\right) \left|\frac{d}{dy} g^{-1}(y)\right| = \frac{y}{(1 + y)^2} \cdot \frac{1}{y} = \frac{1}{(1 + y)^2},$$

where the range of $y$ is obviously $0 < y < \infty$.
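The derived density can be checked by simulation; the sketch below samples the standard logistic density above, applies $Y = e^{-X}$, and compares the empirical CDF with $F_Y(y) = \int_0^y (1+u)^{-2}\,du = y/(1+y)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# X follows a standard logistic distribution (density e^x / (1+e^x)^2)
# and Y = exp(-X). The derived density f_Y(y) = 1/(1+y)^2 integrates
# to F_Y(y) = y/(1+y), which we compare against simulation.
x = rng.logistic(loc=0.0, scale=1.0, size=200_000)
y = np.exp(-x)

for q in [0.25, 1.0, 3.0]:
    empirical = np.mean(y <= q)
    assert abs(empirical - q / (1.0 + q)) < 0.01
```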

4. The Jacobian Transformation Technique

To explain this technique, we consider only the case of two continuous random variables $X_1$ and $X_2$ and assume that they are mapped onto $U_1$ and $U_2$ by the transformation

$$u_1 = g_1(x_1, x_2) \quad \text{and} \quad u_2 = g_2(x_1, x_2).$$

Suppose this transformation is one-to-one so that we can invert it to get

$$x_1 = h_1(u_1, u_2) \quad \text{and} \quad x_2 = h_2(u_1, u_2).$$

The Jacobian of this transformation is the determinant

$$J(x_1, x_2) = \det\begin{pmatrix} \dfrac{\partial g_1}{\partial x_1} & \dfrac{\partial g_1}{\partial x_2} \\[6pt] \dfrac{\partial g_2}{\partial x_1} & \dfrac{\partial g_2}{\partial x_2} \end{pmatrix} = \frac{\partial g_1}{\partial x_1}\frac{\partial g_2}{\partial x_2} - \frac{\partial g_2}{\partial x_1}\frac{\partial g_1}{\partial x_2},$$

provided this is not zero. Suppose the joint density of $X_1$ and $X_2$ is denoted by $f_{X_1 X_2}$. Then, the joint density of $U_1$ and $U_2$ is given by

$$f_{U_1 U_2}(u_1, u_2) = \frac{1}{\left|J\left(h_1(u_1, u_2), h_2(u_1, u_2)\right)\right|}\, f_{X_1 X_2}\left(h_1(u_1, u_2), h_2(u_1, u_2)\right).$$

The above technique can be easily extended to several variables. See Hogg & Craig (1995).
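To make the formula concrete, here is a small numerical sketch on an assumed linear map (not from the notes): $X_1, X_2 \sim N(0,1)$ independent, with $u_1 = x_1 + x_2$ and $u_2 = x_1 - x_2$, whose forward Jacobian is $-2$. The transformed joint density should factor into two independent $N(0,2)$ densities.

```python
import numpy as np

# Assumed illustration: X1, X2 ~ N(0,1) independent, and
#   u1 = g1(x1, x2) = x1 + x2,   u2 = g2(x1, x2) = x1 - x2.
# Inverse: x1 = (u1+u2)/2, x2 = (u1-u2)/2. Forward Jacobian:
#   J = (dg1/dx1)(dg2/dx2) - (dg2/dx1)(dg1/dx2) = (1)(-1) - (1)(1) = -2.

def f_X(x1, x2):
    # joint density of two independent standard normals
    return np.exp(-0.5 * (x1**2 + x2**2)) / (2.0 * np.pi)

def f_U(u1, u2):
    # Jacobian transformation formula: f_U = f_X(h1, h2) / |J|
    x1, x2 = (u1 + u2) / 2.0, (u1 - u2) / 2.0
    return f_X(x1, x2) / abs(-2.0)

# U1 and U2 are independent N(0, 2), so f_U should factor accordingly.
def n02(u):
    return np.exp(-u**2 / 4.0) / np.sqrt(4.0 * np.pi)

for u1, u2 in [(0.0, 0.0), (1.0, -0.5), (2.0, 1.5)]:
    assert abs(f_U(u1, u2) - n02(u1) * n02(u2)) < 1e-12
```

The check is exact (up to floating-point error) because both sides are closed-form densities evaluated at the same points.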


    5. Example - Jacobian Technique

As an illustration of the Jacobian transformation technique, let us consider deriving the t-distribution. Suppose $Z \sim N(0,1)$ and $V \sim \chi^2(r)$, and they are independent. Then, the random variable

$$T = \frac{Z}{\sqrt{V/r}}$$

has a t-distribution with $r$ degrees of freedom.

Define the variables

$$s = v \quad \text{and} \quad t = \frac{z}{\sqrt{v/r}}$$

so that this forms a one-to-one transformation with the inversion

$$z = t\sqrt{s/r} \quad \text{and} \quad v = s.$$

Its Jacobian is

$$J(z, v) = \det\begin{pmatrix} \dfrac{\partial s}{\partial z} & \dfrac{\partial s}{\partial v} \\[6pt] \dfrac{\partial t}{\partial z} & \dfrac{\partial t}{\partial v} \end{pmatrix} = \det\begin{pmatrix} 0 & 1 \\ \dfrac{1}{\sqrt{v/r}} & -\dfrac{1}{2} z \sqrt{r}\, v^{-3/2} \end{pmatrix} = -\frac{1}{\sqrt{v/r}} = -\frac{1}{\sqrt{s/r}}.$$

Since $Z$ and $V$ are independent, their joint density can be written as

$$f_{ZV}(z, v) = f_Z(z) f_V(v) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}z^2} \cdot \frac{1}{\Gamma(r/2)\, 2^{r/2}}\, v^{r/2 - 1} e^{-v/2}.$$

Thus, using the Jacobian transformation formula above (with $1/|J| = \sqrt{s/r}$), the joint density of $(S, T)$ is given by

$$f_{ST}(s, t) = \sqrt{s/r}\, \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(t\sqrt{s/r}\right)^2} \frac{1}{\Gamma(r/2)\, 2^{r/2}}\, s^{r/2 - 1} e^{-s/2} = \frac{1}{\sqrt{2\pi}\,\Gamma(r/2)\, 2^{r/2}}\, s^{r/2 - 1} \sqrt{s/r}\, \exp\left[-\frac{s}{2}\left(1 + \frac{t^2}{r}\right)\right],$$

where we note that since $0 < v < \infty$ and $-\infty < z < \infty$, then $0 < s < \infty$ and $-\infty < t < \infty$. Therefore, the marginal density of $T$ is given by

$$f_T(t) = \int_0^\infty f_{ST}(s, t)\, ds = \int_0^\infty \frac{1}{\sqrt{2\pi}\,\Gamma(r/2)\, 2^{r/2}}\, s^{r/2 - 1} \sqrt{s/r}\, \exp\left[-\frac{s}{2}\left(1 + \frac{t^2}{r}\right)\right] ds.$$

Making the transformation

$$w = \frac{s}{2}\left(1 + \frac{t^2}{r}\right)$$

so that

$$dw = \frac{1}{2}\left(1 + \frac{t^2}{r}\right) ds,$$

we therefore obtain

$$f_T(t) = \int_0^\infty \frac{1}{\sqrt{2\pi r}\,\Gamma(r/2)\, 2^{r/2}} \left(\frac{2w}{1 + t^2/r}\right)^{(r+1)/2 - 1} e^{-w}\, \frac{2}{1 + t^2/r}\, dw = \frac{\Gamma\left[(r+1)/2\right]}{\sqrt{\pi r}\,\Gamma(r/2)}\, \frac{1}{\left(1 + t^2/r\right)^{(r+1)/2}},$$

for $-\infty < t < \infty$.
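A quick way to validate the derivation is to simulate $T = Z/\sqrt{V/r}$ directly and compare its empirical CDF with the derived density integrated numerically; the sketch below does this for the illustrative choice $r = 5$:

```python
import numpy as np
from math import gamma, sqrt, pi

rng = np.random.default_rng(2)
r = 5  # illustrative degrees of freedom

# Simulate T = Z / sqrt(V/r) from its definition.
z = rng.standard_normal(200_000)
v = rng.chisquare(df=r, size=200_000)
t_samples = z / np.sqrt(v / r)

def f_T(t):
    # the derived t density with r degrees of freedom
    c = gamma((r + 1) / 2) / (sqrt(pi * r) * gamma(r / 2))
    return c * (1.0 + t**2 / r) ** (-(r + 1) / 2)

def F_T(q, lo=-8.0, n=4000):
    # midpoint-rule integral of f_T from lo to q (tail below lo is tiny)
    edges = np.linspace(lo, q, n + 1)
    mids = (edges[:-1] + edges[1:]) / 2.0
    return float(np.sum(f_T(mids)) * (q - lo) / n)

for q in [-1.0, 0.0, 1.5]:
    assert abs(np.mean(t_samples <= q) - F_T(q)) < 0.01
```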

    6. The MGF Technique

This method can be effective in instances where we can derive a recognizable m.g.f., because when the m.g.f. exists, it is unique and it uniquely determines the distribution.

Suppose we are interested in the distribution of

$$U = g(X_1, \ldots, X_n)$$

where $X_1, \ldots, X_n$ have a joint density $f(x_1, \ldots, x_n)$. Then, we find the m.g.f. of $U$ using

$$M_U(t) = E\left[e^{Ut}\right] = \int \cdots \int e^{g(x_1, \ldots, x_n) t} f(x_1, \ldots, x_n)\, dx_1 \cdots dx_n.$$

In the special case where $U$ is the sum of the random variables,

$$U = X_1 + \cdots + X_n,$$

and $X_1, \ldots, X_n$ are independent, we have

$$M_U(t) = E\left[e^{(X_1 + \cdots + X_n)t}\right] = E\left[e^{X_1 t}\right] \cdots E\left[e^{X_n t}\right] = M_{X_1}(t) \cdots M_{X_n}(t).$$

The m.g.f. of $U$ is the product of the m.g.f.s of $X_1, \ldots, X_n$.
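The product rule can be checked numerically; the sketch below uses an assumed pair of independent variables (Exponential(1) and Uniform(0,1), with their standard closed-form m.g.f.s $1/(1-t)$ and $(e^t - 1)/t$) and compares a Monte Carlo estimate of $E[e^{Ut}]$ with the product:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed illustration: independent X1 ~ Exponential(1) and X2 ~ Uniform(0,1).
# For U = X1 + X2, the m.g.f. should satisfy M_U(t) = M_X1(t) * M_X2(t),
# with M_X1(t) = 1/(1-t) for t < 1 and M_X2(t) = (e^t - 1)/t.
x1 = rng.exponential(1.0, size=500_000)
x2 = rng.uniform(0.0, 1.0, size=500_000)
u = x1 + x2

for t in [-0.5, 0.2, 0.4]:   # keep t < 1/2 so the MC estimator has finite variance
    mc = np.mean(np.exp(t * u))                          # E[e^{Ut}] by simulation
    closed = (1.0 / (1.0 - t)) * ((np.exp(t) - 1.0) / t)  # M_X1(t) * M_X2(t)
    assert abs(mc - closed) < 0.01
```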


    7. Examples - The MGF Technique

Example (Poisson): Let $X_1 \sim \text{Poisson}(\lambda_1)$ and $X_2 \sim \text{Poisson}(\lambda_2)$, where $X_1, X_2$ are independent. Then the m.g.f. of $U = X_1 + X_2$ is given by

$$M_U(t) = M_{X_1}(t)\, M_{X_2}(t) = e^{\lambda_1(e^t - 1)}\, e^{\lambda_2(e^t - 1)} = e^{(\lambda_1 + \lambda_2)(e^t - 1)},$$

which is the m.g.f. of another Poisson with parameter $\lambda_1 + \lambda_2$, i.e. $U \sim \text{Poisson}(\lambda_1 + \lambda_2)$.

Example (Normal): Let $X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2, \sigma_2^2)$, where $X_1, X_2$ again are independent. Then the m.g.f. of $U = X_1 + X_2$ is given by

$$M_U(t) = M_{X_1}(t)\, M_{X_2}(t) = e^{\mu_1 t + \frac{1}{2}\sigma_1^2 t^2}\, e^{\mu_2 t + \frac{1}{2}\sigma_2^2 t^2} = e^{(\mu_1 + \mu_2)t + \frac{1}{2}(\sigma_1^2 + \sigma_2^2)t^2},$$

which is the m.g.f. of another Normal with mean $\mu_1 + \mu_2$ and variance $\sigma_1^2 + \sigma_2^2$. That is,

$$U \sim N\left(\mu_1 + \mu_2,\ \sigma_1^2 + \sigma_2^2\right).$$
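A simulation sketch of the Poisson result, with illustrative rates $\lambda_1 = 2$ and $\lambda_2 = 3$: the sum should have mean and variance both equal to 5, and $\mathrm{Prob}(U = 0) = e^{-5}$.

```python
import numpy as np

rng = np.random.default_rng(4)

# X1 + X2 with rates 2 and 3 should behave like a single Poisson(5)
# (mean = variance = 5, and P(U = 0) = e^{-5}).
u = rng.poisson(2.0, size=400_000) + rng.poisson(3.0, size=400_000)

assert abs(u.mean() - 5.0) < 0.05
assert abs(u.var() - 5.0) < 0.1
assert abs(np.mean(u == 0) - np.exp(-5.0)) < 0.005
```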

    8. Distributions of Order Statistics

Assume $X_1, X_2, \ldots, X_n$ are $n$ independent identically distributed (i.i.d.) random variables and let their common distribution function be $F_X$ and density $f_X$. Suppose we sort these variables and denote by

$$X_{(1)} < X_{(2)} < \cdots < X_{(n)}$$

the order statistics. In particular, $X_{(1)} = \min(X_1, \ldots, X_n)$ is the minimum and $X_{(n)} = \max(X_1, \ldots, X_n)$ is the maximum. For simplicity, denote $U = X_{(n)}$ and $V = X_{(1)}$.

Distribution of the Maximum

Deriving the distribution of the maximum, we note that $U \le u$ if and only if every $X_k \le u$, so by independence

$$F_U(u) = \mathrm{Prob}(U \le u) = \mathrm{Prob}(X_1 \le u)\,\mathrm{Prob}(X_2 \le u) \cdots \mathrm{Prob}(X_n \le u) = [F(u)]^n$$

and the density function is

$$f_U(u) = n f(u) [F(u)]^{n-1}.$$

Distribution of the Minimum

We have

$$F_V(v) = \mathrm{Prob}(V \le v) = 1 - \mathrm{Prob}(V > v) = 1 - \mathrm{Prob}(X_1 > v) \cdots \mathrm{Prob}(X_n > v) = 1 - [1 - F(v)]^n$$

and the corresponding density function is

$$f_V(v) = n f(v) [1 - F(v)]^{n-1}.$$

In general, we can show that the probability density of the $k$-th order statistic is given by

$$f_k(x) = \frac{n!}{(k-1)!\,(n-k)!}\, f(x)\, [F(x)]^{k-1}\, [1 - F(x)]^{n-k}.$$

The joint probability density of the order statistics is given by:

$$f_{12 \ldots n}(y_1, y_2, \ldots, y_n) = n!\, f(y_1)\, f(y_2) \cdots f(y_n), \qquad y_1 < y_2 < \cdots < y_n.$$
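These formulas are easy to exercise for Uniform(0,1) samples, where $F(x) = x$: the maximum has CDF $u^n$, and the $k$-th order statistic is Beta$(k, n-k+1)$ with mean $k/(n+1)$ (a standard consequence of $f_k$ above). A simulation sketch with illustrative $n = 5$, $k = 2$:

```python
import numpy as np

rng = np.random.default_rng(5)

# Uniform(0,1) samples: F(x) = x, so F_U(u) = u^n for the maximum and
# the k-th order statistic has mean k/(n+1).
n, k = 5, 2
samples = rng.uniform(size=(200_000, n))
sorted_samples = np.sort(samples, axis=1)

u_max = sorted_samples[:, -1]
assert abs(np.mean(u_max <= 0.8) - 0.8**n) < 0.01

kth = sorted_samples[:, k - 1]
assert abs(kth.mean() - k / (n + 1)) < 0.005
```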

    9. Example - Order Statistics

Consider a system with $n$ components. Assume that the lifetimes of the components are $T_1, T_2, \ldots, T_n$, which are i.i.d. with exponential distribution with parameter $\lambda$.

Suppose that the system is connected in series, that is, the system will fail if any one of the components fails. The lifetime $V$ of the system is therefore the minimum of the $T_k$, i.e.

$$V = \min(T_1, \ldots, T_n).$$

Therefore the density of $V$ is given by

$$f_V(v) = n f(v) [1 - F(v)]^{n-1} = n \lambda e^{-\lambda v}\left(e^{-\lambda v}\right)^{n-1} = (n\lambda)\, e^{-(n\lambda)v},$$

which is exponential with parameter $n\lambda$.

Suppose instead that the system is connected in parallel, that is, the system will fail only if all of the components fail. The lifetime $U$ of the system is therefore the maximum of the $T_k$, i.e.

$$U = \max(T_1, \ldots, T_n).$$

Therefore the density of $U$ is given by

$$f_U(u) = n f(u) [F(u)]^{n-1} = n \lambda e^{-\lambda u}\left(1 - e^{-\lambda u}\right)^{n-1}.$$
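The series and parallel results can be confirmed by simulation; the sketch below uses illustrative values $n = 4$ and $\lambda = 0.5$, together with the facts that an Exponential$(n\lambda)$ lifetime has mean $1/(n\lambda)$ and the maximum of $n$ i.i.d. exponentials has mean $(1/\lambda)\sum_{k=1}^n 1/k$:

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative values: n = 4 components, failure rate lam = 0.5.
# Series lifetime = min of the T_k, which should be Exponential(n*lam)
# with mean 1/(n*lam); parallel lifetime = max, whose mean for i.i.d.
# exponentials is (1/lam) * (1 + 1/2 + ... + 1/n).
n, lam = 4, 0.5
t = rng.exponential(scale=1.0 / lam, size=(300_000, n))

series = t.min(axis=1)
assert abs(series.mean() - 1.0 / (n * lam)) < 0.01

parallel = t.max(axis=1)
harmonic = sum(1.0 / k for k in range(1, n + 1))
assert abs(parallel.mean() - harmonic / lam) < 0.02
```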


10. Some Special Sampling Distributions

We now consider some results about distributions that arise when sampling from a normal distribution.

A Single Normal and Chi-Square. Suppose $Z \sim N(0,1)$; then

$$Y = Z^2 \sim \chi^2(1)$$

has a chi-square distribution with 1 degree of freedom. It is interesting to prove this, and it uses the CDF technique. Consider

$$F_Y(y) = \mathrm{Prob}\left(Z^2 \le y\right) = \mathrm{Prob}\left(-\sqrt{y} \le Z \le \sqrt{y}\right) = \int_{-\sqrt{y}}^{\sqrt{y}} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}z^2}\, dz = 2\int_0^{\sqrt{y}} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}z^2}\, dz,$$

and now apply a change of variable, say $z = \sqrt{w}$, so that $dz = \frac{1}{2} w^{-1/2}\, dw$. Therefore, we have

$$F_Y(y) = 2\int_0^y \frac{1}{\sqrt{2\pi}}\, \frac{1}{2}\, w^{-1/2} e^{-\frac{1}{2}w}\, dw.$$

Differentiating to get the p.d.f., we get

$$f_Y(y) = \frac{1}{\sqrt{2\pi}}\, y^{-1/2} e^{-\frac{1}{2}y} = \frac{1}{2^{1/2}\,\Gamma(1/2)}\, y^{1/2 - 1} e^{-y/2}$$

(using $\Gamma(1/2) = \sqrt{\pi}$), which is the density of a $\chi^2(1)$ distributed random variable.

Normal and Chi-Square. Suppose $Z_1, Z_2, \ldots, Z_r$ are independent standard normal random variables. Then, the random variable

$$V = Z_1^2 + Z_2^2 + \cdots + Z_r^2 = \sum_{k=1}^{r} Z_k^2$$

has a chi-square distribution with $r$ degrees of freedom.

t-distribution. Suppose $Z \sim N(0,1)$ and $V \sim \chi^2(r)$, and they are independent. Then, the random variable

$$T = \frac{Z}{\sqrt{V/r}}$$

has a t-distribution with $r$ degrees of freedom.

F-distribution. Suppose $U \sim \chi^2(r_1)$ and $V \sim \chi^2(r_2)$ are two independent chi-square distributed random variables. Then, the random variable

$$F = \frac{U/r_1}{V/r_2}$$

has an F-distribution with $r_1$ and $r_2$ degrees of freedom.

Sample Mean and Sample Variance. Suppose $X_1, X_2, \ldots, X_n$ are $n$ independent random variables with identical distribution $N(\mu, \sigma^2)$. Define the


sample mean by

$$\bar{X} = \frac{1}{n} \sum_{k=1}^{n} X_k$$

and the sample variance by

$$S^2 = \frac{1}{n-1} \sum_{k=1}^{n} \left(X_k - \bar{X}\right)^2.$$

Then the following important properties can be verified:

$\bar{X} \sim N\left(\mu, \frac{1}{n}\sigma^2\right)$

$\dfrac{(n-1) S^2}{\sigma^2} \sim \chi^2(n-1)$

$\bar{X}$ and $S^2$ are independent.

Using these results, it can further be shown that

$$T = \frac{\bar{X} - \mu}{S/\sqrt{n}}$$

has a t-distribution with $n-1$ degrees of freedom.
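These properties can be probed by simulation; the sketch below uses illustrative values $\mu = 2$, $\sigma = 3$, $n = 10$ and checks the mean and variance of $\bar{X}$, the first two moments of $(n-1)S^2/\sigma^2$ against $\chi^2(n-1)$, and the (near-zero) correlation between $\bar{X}$ and $S^2$:

```python
import numpy as np

rng = np.random.default_rng(8)

# Illustrative values: mu = 2, sigma = 3, n = 10. Xbar should have mean mu
# and variance sigma^2 / n; (n-1) S^2 / sigma^2 should have mean n-1 and
# variance 2(n-1), matching a chi-square(n-1) variable.
mu, sigma, n = 2.0, 3.0, 10
x = rng.normal(mu, sigma, size=(200_000, n))

xbar = x.mean(axis=1)
assert abs(xbar.mean() - mu) < 0.01
assert abs(xbar.var() - sigma**2 / n) < 0.02

s2 = x.var(axis=1, ddof=1)
q = (n - 1) * s2 / sigma**2
assert abs(q.mean() - (n - 1)) < 0.05
assert abs(q.var() - 2 * (n - 1)) < 0.2

# Independence implies zero correlation; the sample correlation is tiny.
corr = np.corrcoef(xbar, s2)[0, 1]
assert abs(corr) < 0.01
```

Note that zero correlation is only a necessary consequence of independence; the full independence claim is special to the normal distribution.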