
STATS 200 (Stanford University, Summer 2015)

Solutions to Homework 1

“DeGroot & Schervish X.Y.Z” means Exercise Z at the end of Section X.Y in our text, Probability and Statistics (Fourth Edition) by Morris H. DeGroot and Mark J. Schervish. Please be aware that problem numbers in older editions may not match.

1. DeGroot & Schervish 3.2.9.

⊳ Solution: Clearly we must have c ≥ 0 since a pdf must be nonnegative. Note that c = 0 yields f(x) = 0 for all x ∈ R, and ∫_R 0 dx = 0 ≠ 1. Thus, we must have c > 0. However, for any c > 0,

∫_{−∞}^{∞} f(x) dx = ∫_0^∞ c/(1 + x) dx = c log(1 + ∞) − c log(1 + 0) = ∞ ≠ 1.

Thus, there does not exist any c ∈ R such that f(x) is a pdf. ⊲
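The divergence can also be seen numerically: the truncated integrals ∫_0^T dx/(1 + x) = log(1 + T) grow without bound in T, so no c > 0 can rescale the density to integrate to 1. A minimal sketch (the values of T are arbitrary illustrative choices):

```python
import math

# Truncated integrals of 1/(1 + x) from 0 to T: the closed form is log(1 + T).
# These grow without bound as T increases, so c/(1 + x) can never integrate to 1.
for T in [10, 10**3, 10**6, 10**9]:
    print(T, math.log(1 + T))
```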

2. DeGroot & Schervish 3.5.4.

⊳ Solution to (a): The marginal pdf of X is

f^(X)(x) = ∫_R f(x, y) dy = ∫_0^{1−x²} (15/4)x² dy = (15/4)x²(1 − x²)

if −1 ≤ x ≤ 1, with f^(X)(x) = 0 otherwise. To find the marginal pdf of Y, note that {(x, y) ∈ R² ∶ 0 ≤ y ≤ 1 − x²} can be written as {(x, y) ∈ R² ∶ ∣x∣ ≤ √(1 − y), 0 ≤ y ≤ 1}. Then

f^(Y)(y) = ∫_R f(x, y) dx = ∫_{−√(1−y)}^{√(1−y)} (15/4)x² dx = (5/2)(1 − y)^{3/2},

if 0 ≤ y ≤ 1, with f^(Y)(y) = 0 otherwise. ⊲

⊳ Solution to (b): Since f^(X)(x) f^(Y)(y) ≠ f(x, y), X and Y are not independent. ⊲
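As a sanity check on part (a), both marginal pdfs should integrate to 1. A small sketch using a midpoint Riemann sum (the helper names f_X, f_Y, and midpoint are ours, not from the text):

```python
# Midpoint-rule check (illustrative only) that each marginal pdf integrates to 1.
def f_X(x):
    # Marginal pdf of X: (15/4) x^2 (1 - x^2) on [-1, 1], zero elsewhere.
    return (15 / 4) * x**2 * (1 - x**2) if -1 <= x <= 1 else 0.0

def f_Y(y):
    # Marginal pdf of Y: (5/2) (1 - y)^(3/2) on [0, 1], zero elsewhere.
    return (5 / 2) * (1 - y) ** 1.5 if 0 <= y <= 1 else 0.0

def midpoint(f, a, b, n=100_000):
    # Midpoint Riemann sum of f over [a, b] with n subintervals.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(midpoint(f_X, -1, 1))  # ≈ 1
print(midpoint(f_Y, 0, 1))   # ≈ 1
```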

3. DeGroot & Schervish 4.1.6. Also, find Var(1/X).

⊳ Solution: First, E(1/X) = ∫_R (1/x) f(x) dx = ∫_0^1 (1/x) 2x dx = 2. To find Var(1/X), we first find

E[(1/X)²] = ∫_R (1/x)² f(x) dx = ∫_0^1 (1/x)² 2x dx = 2 ∫_0^1 (1/x) dx = 2 ⋅ ∞ = ∞.

Thus, Var(1/X) = E[(1/X)²] − [E(1/X)]² = ∞ − 2² = ∞. ⊲
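The divergence of E[(1/X)²] can be illustrated by truncating both integrals at ε > 0: the first stays near 2 while the second grows like 2 log(1/ε). A sketch with f(x) = 2x on (0, 1), as used in the integrals above (midpoint is an illustrative helper of ours):

```python
import math

# Truncate the moment integrals at eps > 0 to see which one diverges as eps -> 0.
def midpoint(f, a, b, n=200_000):
    # Midpoint Riemann sum of f over [a, b] with n subintervals.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

for eps in [1e-2, 1e-3]:
    m1 = midpoint(lambda x: (1 / x) * 2 * x, eps, 1)       # -> 2(1 - eps): bounded
    m2 = midpoint(lambda x: (1 / x) ** 2 * 2 * x, eps, 1)  # -> 2 log(1/eps): unbounded
    print(eps, m1, m2)
```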

4. DeGroot & Schervish 4.2.4. Also, find the variance of the area of the rectangle. (The continuous uniform distribution on [a, b] has mean (a + b)/2 and variance (b − a)²/12. You may use these facts without proof.)

⊳ Solution: The area of the rectangle is the random variable XY. Since X and Y are independent, E(XY) = E(X)E(Y) = (1/2)(7) = 7/2. To find Var(XY), we first find

E[(XY)²] = E(X²Y²) = E(X²)E(Y²) = {[E(X)]² + Var(X)}{[E(Y)]² + Var(Y)}
= [(1/2)² + 1/12][7² + 4/3]
= (1/3)(151/3) = 151/9.

Then Var(XY) = E[(XY)²] − [E(XY)]² = 151/9 − (7/2)² = 163/36. ⊲
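The arithmetic can be double-checked exactly with Python's fractions module, using only the moments quoted in the solution:

```python
from fractions import Fraction as F

# Exact check of the variance computation, using the moments from the solution:
# E(X) = 1/2, Var(X) = 1/12, E(Y) = 7, Var(Y) = 4/3.
EX, VX = F(1, 2), F(1, 12)
EY, VY = F(7), F(4, 3)

EXY = EX * EY                       # 7/2, by independence
EXY2 = (EX**2 + VX) * (EY**2 + VY)  # E[(XY)^2] = E(X^2) E(Y^2)
var_XY = EXY2 - EXY**2

print(EXY2, var_XY)  # 151/9 163/36
```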


5. DeGroot & Schervish 4.6.18.

⊳ Solution: First, we compute the required expectations:

E(X) = ∬_{R²} x f(x, y) dx dy = ∫_0^1 ∫_0^1 (x² + xy) dx dy = ∫_0^1 (1/3 + y/2) dy = 1/3 + 1/4 = 7/12,

E(Y) = ∬_{R²} y f(x, y) dx dy = ∫_0^1 ∫_0^1 (y² + xy) dy dx = ∫_0^1 (1/3 + x/2) dx = 1/3 + 1/4 = 7/12,

E(XY) = ∬_{R²} xy f(x, y) dx dy = ∫_0^1 ∫_0^1 (x²y + xy²) dx dy = ∫_0^1 (y/3 + y²/2) dy = 1/6 + 1/6 = 1/3.

Then Cov(X,Y ) = E(XY ) −E(X)E(Y ) = (1/3) − (7/12)2 = −1/144. ⊲
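The covariance can likewise be verified exactly, assuming (as the integrals above indicate) the joint pdf f(x, y) = x + y on the unit square; the helper mono is ours:

```python
from fractions import Fraction as F

# Exact check of the covariance for f(x, y) = x + y on [0, 1]^2.
# For monomials, the double integral of x^a y^b over [0, 1]^2 is 1/((a+1)(b+1)).
def mono(a, b):
    return F(1, (a + 1) * (b + 1))

# E(X) = integral of x(x + y) = mono(2, 0) + mono(1, 1), and similarly below.
EX = mono(2, 0) + mono(1, 1)
EY = mono(0, 2) + mono(1, 1)
EXY = mono(2, 1) + mono(1, 2)
cov = EXY - EX * EY

print(EX, EY, EXY, cov)  # 7/12 7/12 1/3 -1/144
```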

6. Suppose we construct a random variable X as follows. Let Y ∼ Bin(1, θ), where 0 < θ < 1. If Y = 1, then X = 0. If instead Y = 0, then X has a Poisson(λ) distribution. Then the marginal distribution of X (not conditional on Y) is called a zero-inflated Poisson distribution. (The mean and variance of the Poisson distribution are listed in Section 5.4 of DeGroot & Schervish. You may use these facts without proof.)

(a) Calculate E(X) and Var(X), the marginal mean and variance of X.

⊳ Solution: By the law of total expectation,

E(X) = E[E(X ∣ Y)] = ∑_{y=0}^{1} E(X ∣ Y = y) P(Y = y)
= E(X ∣ Y = 0) P(Y = 0) + E(X ∣ Y = 1) P(Y = 1)
= λ(1 − θ) + 0 ⋅ θ = λ(1 − θ).

By the law of total variance,

Var(X) = E[Var(X ∣ Y)] + Var[E(X ∣ Y)]
= E[Var(X ∣ Y)] + E{[E(X ∣ Y)]²} − {E[E(X ∣ Y)]}²
= ∑_{y=0}^{1} Var(X ∣ Y = y) P(Y = y) + ∑_{y=0}^{1} [E(X ∣ Y = y)]² P(Y = y) − [E(X)]²
= Var(X ∣ Y = 0) P(Y = 0) + Var(X ∣ Y = 1) P(Y = 1)
  + [E(X ∣ Y = 0)]² P(Y = 0) + [E(X ∣ Y = 1)]² P(Y = 1) − [E(X)]²
= λ(1 − θ) + 0 ⋅ θ + λ²(1 − θ) + 0 ⋅ θ − [λ(1 − θ)]² = λ(1 − θ)(1 + λθ).

An alternative approach is to calculate the marginal pmf of X and use it to find the expectation and variance directly. ⊲
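A Monte Carlo sketch can corroborate these formulas. The values λ = 3 and θ = 0.4 are arbitrary illustrative choices, and poisson() implements Knuth's multiplication method so the example stays stdlib-only:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplication method: multiply uniforms until the product
    # drops below exp(-lam); the count of multiplications (minus one) is Poisson.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_zip(lam, theta, rng):
    # Zero-inflated Poisson: X = 0 when Y = 1 (prob theta), else Poisson(lam).
    return 0 if rng.random() < theta else poisson(lam, rng)

rng = random.Random(0)
lam, theta, n = 3.0, 0.4, 200_000
xs = [sample_zip(lam, theta, rng) for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(mean, var)  # close to lam*(1-theta) = 1.8 and lam*(1-theta)*(1+lam*theta) = 3.96
```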


(b) Find the conditional pmf of Y given X.

⊳ Solution: First,

f^(Y∣X)(0 ∣ 0) = P(Y = 0 ∣ X = 0)
= P(X = 0, Y = 0) / P(X = 0)
= P(X = 0, Y = 0) / [P(X = 0, Y = 0) + P(X = 0, Y = 1)]
= [P(X = 0 ∣ Y = 0) P(Y = 0)] / [P(X = 0 ∣ Y = 0) P(Y = 0) + P(X = 0 ∣ Y = 1) P(Y = 1)]
= [exp(−λ)(1 − θ)] / [exp(−λ)(1 − θ) + 1 ⋅ θ] = [(1 − θ) exp(−λ)] / [(1 − θ) exp(−λ) + θ].

It follows that

f^(Y∣X)(1 ∣ 0) = 1 − [(1 − θ) exp(−λ)] / [(1 − θ) exp(−λ) + θ] = θ / [(1 − θ) exp(−λ) + θ].

Next, note that f^(X,Y)(x, 1) = P(X = x, Y = 1) = 0 for any x > 0. It follows immediately that f^(Y∣X)(1 ∣ x) = P(Y = 1 ∣ X = x) = 0 for every x > 0, and hence f^(Y∣X)(0 ∣ x) = 1 for every x > 0. ⊲
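The conditional probability P(Y = 1 ∣ X = 0) can be checked the same way: simulate (X, Y) pairs and compare the conditional frequency with the closed form. Again λ = 3 and θ = 0.4 are illustrative, and poisson() is Knuth's stdlib-only sampler:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplication method for Poisson(lam) variates.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(1)
lam, theta, n = 3.0, 0.4, 200_000
hits = total = 0
for _ in range(n):
    y = 1 if rng.random() < theta else 0        # Y ~ Bin(1, theta)
    x = 0 if y == 1 else poisson(lam, rng)      # X = 0 when Y = 1, else Poisson
    if x == 0:                                  # condition on the event X = 0
        total += 1
        hits += y

predicted = theta / ((1 - theta) * math.exp(-lam) + theta)
print(hits / total, predicted)  # conditional frequency vs. closed form
```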

7. Let {Xn ∶ n ≥ 1} be a sequence of random variables such that E(Xn) exists and is finite for every n ≥ 1. Let X be a random variable such that E(X) exists and is finite.

(a) Suppose that Xn →P X. Does it follow that E(Xn) → E(X)? If so, prove it. If not, give a counterexample.

⊳ Solution: No, it does not follow that E(Xn) → E(X). Many counterexamples are possible, though perhaps the simplest is as follows. Suppose that

Xn = 0 with probability 1 − 1/n,
Xn = n with probability 1/n,

for every n ≥ 1, and let X = 0 (i.e., a constant random variable). Then Xn →P X since for every ε > 0,

P(∣Xn − X∣ > ε) = P(∣Xn∣ > ε) ≤ P(Xn ≠ 0) = 1/n → 0.

However,

E(Xn) = 0 (1 − 1/n) + n (1/n) = 1

for every n ≥ 1, whereas E(X) = 0. Thus, E(Xn) ↛ E(X). ⊲
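The two claims in this counterexample (constant expectation, vanishing tail probability) can be tabulated exactly with fractions:

```python
from fractions import Fraction as F

# For X_n = 0 w.p. 1 - 1/n and X_n = n w.p. 1/n (with X = 0):
# the expectation is 1 for every n, while P(X_n != 0) = 1/n -> 0
# bounds P(|X_n - X| > eps) for any fixed 0 < eps < 1.
for n in [10, 10**3, 10**6]:
    EXn = 0 * (1 - F(1, n)) + n * F(1, n)  # expectation of X_n: always 1
    tail = F(1, n)                         # P(X_n != 0), shrinking to 0
    print(n, EXn, tail)
```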

(b) Suppose that E(Xn) → E(X). Does it follow that Xn →P X? If so, prove it. If not, give a counterexample.

⊳ Solution: No, it does not follow that Xn →P X. Many extremely simple counterexamples are possible; e.g., take X = 0 and Xn ∼ N(0, 1) for every n ≥ 1. Then E(Xn) = 0 = E(X) for every n, but for any ε > 0, P(∣Xn − X∣ > ε) = P(∣Xn∣ > ε) is a positive constant that does not depend on n, so it cannot converge to 0. ⊲
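As a numeric illustration of this counterexample: for any fixed ε (here ε = 0.5 is an arbitrary choice), the tail probability P(∣Xn − X∣ > ε) = 2(1 − Φ(ε)) is the same positive constant for every n; Φ can be computed via math.erfc:

```python
import math

# For X_n ~ N(0, 1) and X = 0, the tail probability P(|X_n - X| > eps)
# equals 2(1 - Phi(eps)) = erfc(eps / sqrt(2)), a constant independent of n.
def std_normal_tail_two_sided(eps):
    return math.erfc(eps / math.sqrt(2))

print(std_normal_tail_two_sided(0.5))  # ≈ 0.617 for every n, so no convergence to 0
```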