
INTRODUCTORY MATHEMATICAL STATISTICS: MUST-KNOW EXERCISES WITH SOLUTIONS

by Dr. Olga Korosteleva

Department of Mathematics and Statistics, California State University, Long Beach

Contents

1 Conditional Expectation and Variance
2 Distribution of Order Statistics
3 Asymptotic Distribution (Convergence in Distribution)
4 Maximum Likelihood Estimator
5 Method of Moments Estimator
6 Unbiased Estimator
7 Fisher Information, Cramér-Rao Lower Bound, Efficient Estimator
8 Consistent Estimator
9 Sufficient Statistic, Factorization Theorem
10 Uniform Minimum Variance Unbiased Estimator (UMVUE), Lehmann-Scheffé Theorem
11 Ancillary Statistic
12 Bayesian Estimator
13 Likelihood Ratio Test
14 Power Function of a Test

1 Conditional Expectation and Variance

Definition A conditional expectation of a random variable X, given that a random variable Y = y, is defined by the formula:

$$E[X \mid Y = y] = \begin{cases} \sum_{x=-\infty}^{\infty} x\, P(X = x \mid Y = y), & \text{in the discrete case},\\[4pt] \int_{-\infty}^{\infty} x\, f_{X|Y}(x \mid Y = y)\, dx, & \text{in the continuous case}. \end{cases}$$

Remark Note that the conditional expectation $E[X \mid Y = y]$ is a function of y only.

Proposition The expected value of a random variable X can be computed via double expectation as $E(X) = E\big[E[X \mid Y]\big]$.

Proof of the Proposition: We consider only the case when both X and Y are discrete; all the other cases can be shown analogously. We write

$$E\big[E[X \mid Y]\big] = \sum_{y=-\infty}^{\infty} E[X \mid Y = y]\, P(Y = y) = \sum_{y=-\infty}^{\infty} \Big[\sum_{x=-\infty}^{\infty} x\, P(X = x \mid Y = y)\Big] P(Y = y)$$
$$= \sum_{x=-\infty}^{\infty} x \Big[\sum_{y=-\infty}^{\infty} P(X = x \mid Y = y)\, P(Y = y)\Big] = \sum_{x=-\infty}^{\infty} x \Big[\sum_{y=-\infty}^{\infty} P(X = x,\, Y = y)\Big] = \sum_{x=-\infty}^{\infty} x\, P(X = x) = E(X).$$

Definition A conditional variance of a random variable X, given a random variable Y, is defined as $\mathrm{Var}[X \mid Y] = E[X^2 \mid Y] - \big(E[X \mid Y]\big)^2$.

Exercise 1 Consider two discrete random variables X and Y. Show that the variance of X may be computed by conditioning on Y as follows:

$$\mathrm{Var}(X) = \mathrm{Var}\big(E[X \mid Y]\big) + E\big(\mathrm{Var}[X \mid Y]\big).$$

Exercise 2 Suppose a random variable N is Poisson with mean Λ, which is itself a random variable. Prove that $E(N) = E(\Lambda)$ and $\mathrm{Var}(N) = \mathrm{Var}(\Lambda) + E(\Lambda)$. Compute the mean and variance of N if $P(\Lambda = 2) = 0.2$, $P(\Lambda = 3) = 0.5$, and $P(\Lambda = 4) = 0.3$.

Exercise 3 Let X be a random variable that is uniformly distributed between 0 and Y, where Y is itself a random variable. Show that $E(X) = \frac{1}{2}E(Y)$ and $\mathrm{Var}(X) = \frac{1}{3}\mathrm{Var}(Y) + \frac{1}{12}\big(E(Y)\big)^2$. Use these formulas to calculate the mean and variance of X if $Y \sim \mathrm{Uniform}(4, 8)$.

Exercise 4 Consider a random sum of random variables defined as $S = \sum_{i=1}^{Y} X_i$, where the $X_i$'s are iid random variables with mean $E(X_1)$ and variance $\mathrm{Var}(X_1)$ that are also independent of a random variable Y. Derive that $E(S) = E(X_1)E(Y)$, and $\mathrm{Var}(S) = \big(E(X_1)\big)^2 \mathrm{Var}(Y) + \mathrm{Var}(X_1)E(Y)$.

Exercise 5 Let $S = \sum_{i=1}^{N} X_i$, where the $X_i$ are iid random variables independent of $N \sim \mathrm{Poisson}(\lambda)$.
(a) Use the result of Exercise 4 to show that $E(S) = \lambda E(X_1)$, and $\mathrm{Var}(S) = \lambda E(X_1^2)$.
(b) Suppose the number of customers who enter a department store during a given hour is a Poisson random variable with mean 20. Each customer spends between \$10 and \$110, uniformly distributed and independently of others. Find the average amount spent in the store by all the customers. Find the variance of this amount.
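
The identities in Exercises 4 and 5 are easy to corroborate by simulation. Below is a minimal Monte Carlo sketch (assuming the numpy library is available; the parameter values are those of part (b)) comparing the empirical mean and variance of S with $\lambda E(X_1) = 1{,}200$ and $\lambda E(X_1^2) \approx 88{,}666.67$.

    import numpy as np

    # Simulate S = sum of N iid Uniform(10, 110) spends, with N ~ Poisson(20).
    rng = np.random.default_rng(0)
    totals = np.array([rng.uniform(10, 110, size=rng.poisson(20)).sum()
                       for _ in range(100_000)])

    print(totals.mean())  # should be close to 1,200
    print(totals.var())   # should be close to 88,666.67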


2 Distribution of Order Statistics

Definition Suppose we have n observations $X_1, \ldots, X_n$. Denote by $X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)}$ the ordered set. For any $i = 1, \ldots, n$, $X_{(i)}$ is called the i-th order statistic. Note that $X_{(1)}$ is the minimum, whereas $X_{(n)}$ is the maximum.

Proposition Suppose $X_1, \ldots, X_n$ are iid random variables with a common pdf $f(x)$ and cdf $F(x)$. The pdf of the i-th order statistic has the form

$$f_{X_{(i)}}(x) = \frac{n!}{(i-1)!\,(n-i)!}\,\big[F(x)\big]^{i-1} f(x) \big[1 - F(x)\big]^{n-i}.$$

Proof: If the i-th order statistic is "equal" to x (contributing $f(x)$), then $i-1$ observations necessarily lie below x (contributing $[F(x)]^{i-1}$), and the other $n-i$ lie above x (contributing $[1-F(x)]^{n-i}$). Finally, the multiplicative factor is the number of ways to choose which $i-1$ observations lie below x and which $n-i$ exceed x.

Example If we let $i = n$ in the above proposition, we obtain the pdf of the maximum of n iid observations,

$$f_{X_{(n)}}(x) = \frac{n!}{(n-1)!\,(n-n)!}\,\big[F(x)\big]^{n-1} f(x) \big[1-F(x)\big]^{n-n} = n\, f(x)\big[F(x)\big]^{n-1}.$$

This is intuitive, since the pdf of $X_{(n)}$ can also be obtained by the following reasoning:

$$F_{X_{(n)}}(x) = P(X_{(n)} \le x) = P(X_1 \le x,\, X_2 \le x,\, \ldots,\, X_n \le x) = \big[F(x)\big]^n,$$

and, thus, the pdf is $f_{X_{(n)}}(x) = F'_{X_{(n)}}(x) = n\, f(x)\big[F(x)\big]^{n-1}$.

Exercise 6 Consider n iid observations with common pdf $f(x)$ and cdf $F(x)$. Use the formula for the pdf of the i-th order statistic to show that the pdf of the minimum is $f_{X_{(1)}}(x) = n\, f(x)\big[1 - F(x)\big]^{n-1}$. Also, find the pdf by first deriving the expression for the cdf, arguing from first principles.

Exercise 7 Let $X_1, \ldots, X_n$ be iid realizations of a standard uniform random variable. Find the pdf's of: (a) the i-th order statistic, $i = 1, \ldots, n$, (b) the minimum, and (c) the maximum. Specify the name of the distribution and the respective parameters.

Exercise 8 Let $X_1, \ldots, X_n$ be independent exponential random variables with mean $1/\beta$. Find the densities of: (a) $X_{(i)}$, $i = 1, \ldots, n$, (b) the minimum (give the distribution name and specify parameters), and (c) the maximum.

3 Asymptotic Distribution (Convergence in Distribution)

Definition Suppose we observe an infinite sequence $X_1, X_2, \ldots, X_n, \ldots$. Consider a statistic $u(X_1, \ldots, X_n)$ that depends on the first n observations, and let $F_n(x)$ denote its cdf. If there exists a cdf $F(x)$ such that, for every x, $F_n(x) \to F(x)$ as $n \to \infty$, then $u(X_1, \ldots, X_n)$ is said to converge in distribution, and $F(x)$ is called the asymptotic (or limiting) cdf.

Exercise 9 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(0, \theta)$. Show that (a) $X_{(n)}$ converges in distribution to a degenerate random variable with a point mass at θ, and (b) $Z_n = n(\theta - X_{(n)})$ has asymptotically an exponential distribution with mean θ.
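
Convergence in distribution can be visualized numerically. A small sketch (assuming numpy) for part (b): the empirical cdf of $Z_n = n(\theta - X_{(n)})$ is compared with the exponential cdf $1 - e^{-z/\theta}$ at a few points.

    import numpy as np

    rng = np.random.default_rng(2)
    theta, n, reps = 3.0, 200, 50_000
    z = n * (theta - rng.uniform(0, theta, size=(reps, n)).max(axis=1))

    for z0 in (1.0, 3.0, 6.0):
        print(np.mean(z <= z0), 1 - np.exp(-z0 / theta))  # nearly equal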

Exercise 10 Let $X_1, \ldots, X_n$ be iid random variables with cdf $F(x)$. Prove that $Y_n = n\big(1 - F(X_{(n)})\big)$ converges in distribution to an exponential random variable with unit mean.

Exercise 11 Suppose $X_1, \ldots, X_n$ constitute a random sample from a distribution with cdf $F(x)$.
(a) Use integration by parts to show that the cdf of $X_{(2)}$ is $F_{X_{(2)}}(x) = 1 - \big(1 - F(x)\big)^n - nF(x)\big(1 - F(x)\big)^{n-1}$.
(b) Prove that the limiting distribution of $Y_n = nF(X_{(2)})$ is $\mathrm{Gamma}(2, 1)$.

Exercise 12 Consider independent observations $X_1, \ldots, X_n$ that come from an exponential distribution with mean $1/\beta$. Show that $Z_n = \beta X_{(n)} - \ln n$ has the asymptotic distribution with cdf $F(x) = \exp\{-\exp\{-x\}\}$, $-\infty < x < \infty$. This distribution is called the standard Gumbel (or extreme value) distribution.

4 Maximum Likelihood Estimator

Definition Suppose $X_1, \ldots, X_n$ are iid random variables with a common pmf (discrete case) or pdf (continuous case) $f(x; \theta)$. The likelihood function is a function of the unknown parameter θ that is given by

$$L(\theta) = L(\theta; X_1, \ldots, X_n) = \prod_{i=1}^{n} f(X_i; \theta).$$

Definition An estimator $\hat\theta = \hat\theta(X_1, \ldots, X_n)$ is called the maximum likelihood estimator (MLE) of θ if it maximizes the likelihood function $L(\theta)$.

Example 1 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Bernoulli}(p)$. The likelihood function is

$$L(p; X_1, \ldots, X_n) = \prod_{i=1}^{n} p^{X_i}(1-p)^{1-X_i} = p^{\sum_{i=1}^n X_i}(1-p)^{n - \sum_{i=1}^n X_i}.$$

It is easier to work with the log-likelihood function, the natural logarithm of the likelihood function,

$$\ln L(p; X_1, \ldots, X_n) = \sum_{i=1}^{n} X_i \ln p + \Big(n - \sum_{i=1}^{n} X_i\Big)\ln(1-p).$$

To maximize the log-likelihood function, we equate to zero the first partial derivative of $\ln L(p; X_1, \ldots, X_n)$ with respect to p, and solve for p. We obtain

$$0 = \frac{\partial \ln L(p; X_1, \ldots, X_n)}{\partial p} = \frac{\sum_{i=1}^n X_i}{p} - \frac{n - \sum_{i=1}^n X_i}{1 - p}.$$

Thus $\hat p$, the maximum likelihood estimator of p, satisfies the equation

$$\frac{\sum_{i=1}^n X_i}{\hat p} = \frac{n - \sum_{i=1}^n X_i}{1 - \hat p},$$

from where $\hat p = \sum_{i=1}^n X_i / n = \bar X$. The MLE $\hat p = \bar X$ represents the proportion of successes among n observations, and is an intuitive estimator of p, the probability of success.
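
The closed-form answer $\hat p = \bar X$ can also be corroborated by maximizing the log-likelihood numerically. A sketch, assuming numpy and scipy are available:

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(3)
    x = rng.binomial(1, 0.3, size=200)   # a Bernoulli(0.3) sample

    def neg_log_lik(p):
        s = x.sum()
        return -(s * np.log(p) + (len(x) - s) * np.log(1 - p))

    res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
    print(res.x, x.mean())               # numerical maximizer vs closed form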

Exercise 13 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Binomial}(N, p)$ where N is known. Show that the MLE of p is $\hat p = \bar X / N$.

Exercise 14 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Geometric}(p)$ with pmf $p(x) = p(1-p)^{x-1}$, $x = 1, 2, \ldots$. Prove that the MLE of p is $\hat p = 1/\bar X$.

Exercise 15 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Poisson}(\lambda)$. Check that the MLE of λ is $\hat\lambda = \bar X$.


Exercise 16 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(0, \theta)$. Show that the MLE of θ is $\hat\theta = X_{(n)}$, the n-th order statistic (or, simply, the maximum).

Exercise 17 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(a, b)$. Verify that the MLE of a is $\hat a = X_{(1)}$, the first order statistic (i.e., the minimum), and that the MLE of b is $\hat b = X_{(n)}$, the n-th order statistic (i.e., the maximum).

Exercise 18 Let $X_1, \ldots, X_n \overset{iid}{\sim}$ Exponential with mean β. Show that the MLE of β is $\hat\beta = \bar X$.

Exercise 19 Let $X_1, \ldots, X_n \overset{iid}{\sim}$ Exponential with mean $1/\beta$. Show that the MLE of β is $\hat\beta = 1/\bar X$.

Exercise 20 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Normal}(\mu, \sigma^2)$. Prove that the MLE of µ is $\hat\mu = \bar X$, and the MLE of $\sigma^2$ is $\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2$.

Exercise 21 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Weibull}(\alpha)$ where the pdf is defined as $f(x; \alpha) = \alpha x^{\alpha-1}\exp\{-x^\alpha\}$, $x > 0$, $\alpha > 0$. Show that $\hat\alpha$, the MLE of α, is the solution of the equation

$$\frac{n}{\hat\alpha} + \sum_{i=1}^{n} \ln X_i - \sum_{i=1}^{n} X_i^{\hat\alpha} \ln X_i = 0.$$

This equation has no closed-form solution and has to be solved numerically. Check that if $X_1 = 0.4$, $X_2 = 0.3$, and $X_3 = 0.6$, the MLE is $\hat\alpha \approx 1.52$.
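
Since the score equation has no closed form, a root-finder is the natural tool. Below is a sketch assuming numpy and scipy; brentq only requires an interval on which the score changes sign.

    import numpy as np
    from scipy.optimize import brentq

    x = np.array([0.4, 0.3, 0.6])

    def score(alpha):
        # n/alpha + sum(ln x_i) - sum(x_i^alpha * ln x_i)
        return len(x) / alpha + np.log(x).sum() - (x**alpha * np.log(x)).sum()

    print(brentq(score, 0.1, 10.0))   # the MLE of alpha for this tiny sample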

Exercise 22 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Bernoulli}(p)$, $0 \le p \le 1/5$. Verify that the MLE of p is $\hat p = \bar X$ if $0 \le \bar X \le 1/5$, and $1/5$ if $\bar X > 1/5$.

Exercise 23 Let $X_1, \ldots, X_n \overset{iid}{\sim} f(x; \beta) = \frac{1}{\beta} e^{-x/\beta}$, $x > 0$, $\beta > 4$. Prove that the MLE of β is $\bar X$ if $\bar X \ge 4$, and 4 if $0 \le \bar X < 4$.

Exercise 24 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Normal}(\mu, 1)$ where $\mu \ge 0$. Show that the MLE of µ is $\hat\mu = \bar X$ if $\bar X \ge 0$, and 0 if $\bar X < 0$.

Exercise 25 Let $X_1, \ldots, X_n \overset{iid}{\sim} f(x; \theta)$ where the pmf $f(x; \theta)$ is given by the table:

              x = 1    x = 2    x = 4
    θ = 0      1/4      1/2      1/4
    θ = 1/3    1/2       0       1/2
    θ = 1/4    3/5      1/5      1/5

Check that if the observations are $X_1 = 1$, $X_2 = 4$, and $X_3 = 2$, then the MLE of θ is equal to 0.

Theorem 1 (Functional Invariance of the MLE) Suppose $X_1, \ldots, X_n \overset{iid}{\sim}$ pmf or pdf $f(x; \theta)$. Let g be some continuous function, and let $\delta = g(\theta)$. Denote by $\hat\theta$ the MLE of θ. Then the MLE of δ can be computed as $\hat\delta = g(\hat\theta)$.

Exercise 26 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Bernoulli}(p)$. Show that the MLE of $\mathrm{Var}(X_1) = p(1-p)$ is $\bar X(1 - \bar X)$.

Exercise 27 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Geometric}(p)$. Verify that the MLE of $E(X_1) = 1/p$ is $\bar X$.

Exercise 28 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Poisson}(\lambda)$. Prove that the MLE of $P(X_1 = 1) = \lambda\exp\{-\lambda\}$ is $\bar X \exp\{-\bar X\}$.

Exercise 29 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(0, \theta)$. Check that the MLE of $\mathrm{Var}(X_1) = \theta^2/12$ is $X_{(n)}^2/12$.

Exercise 30 Let $X_1, \ldots, X_n \overset{iid}{\sim} p(x, \theta)$ where $p(0, \theta) = \exp\{-\theta\}$ and $p(1, \theta) = 1 - \exp\{-\theta\}$. Prove that the MLE of θ is $\hat\theta = -\ln(1 - \bar X)$.

Exercise 31 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Normal}(\mu, \sigma^2)$. Prove that the MLE of σ is $\hat\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2}$.


5 Method of Moments Estimator

Definition Suppose $X_1, \ldots, X_n$ are iid random variables with a common distribution that depends on k parameters $\theta_1, \ldots, \theta_k$. The method of moments (MM) estimators of the parameters solve the system of k equations

$$E(X_1) = \frac{\sum_{i=1}^n X_i}{n} = \bar X, \quad E(X_1^2) = \frac{\sum_{i=1}^n X_i^2}{n}, \quad E(X_1^3) = \frac{\sum_{i=1}^n X_i^3}{n}, \quad \ldots, \quad E(X_1^k) = \frac{\sum_{i=1}^n X_i^k}{n}.$$

That is, in each equation the theoretical moment is equated to the corresponding empirical moment.

Example 2 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Normal}(\mu, \sigma^2)$. To find the MM estimators of µ and $\sigma^2$, we equate the first and second theoretical and empirical moments, respectively:

$$E(X_1) = \mu = \frac{\sum_{i=1}^n X_i}{n} = \bar X, \qquad E(X_1^2) = \sigma^2 + \mu^2 = \frac{\sum_{i=1}^n X_i^2}{n}.$$

The solution of this system is $\hat\mu = \bar X$, and

$$\hat\sigma^2 = \frac{\sum_{i=1}^n X_i^2}{n} - \bar X^2 = \frac{\sum_{i=1}^n (X_i - \bar X)^2}{n}.$$

Note that the MM estimators of µ and $\sigma^2$ coincide with the corresponding MLEs.
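
A short numerical illustration of the two moment equations (a sketch, assuming numpy):

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.normal(loc=2.0, scale=3.0, size=100_000)

    mu_mm = x.mean()                      # first moment equation
    sigma2_mm = (x**2).mean() - mu_mm**2  # second moment equation
    print(mu_mm, sigma2_mm)               # close to mu = 2 and sigma^2 = 9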

Exercise 32 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Bernoulli}(p)$. Show that the MM estimator of p is $\hat p = \bar X$, the same as the MLE.

Exercise 33 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Binomial}(N, p)$ where N is known. Verify that the MM estimator of p is $\bar X / N$, the same as the MLE.

Exercise 34 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Geometric}(p)$. Show that the MM estimator of p is $\hat p = 1/\bar X$ and coincides with the MLE.


Exercise 35 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Poisson}(\lambda)$. Prove that the MM estimator of λ is $\hat\lambda = \bar X$, the same as the MLE.

Exercise 36 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(0, \theta)$. Prove that the MM estimator of θ is $\hat\theta = 2\bar X$. This estimator is different from the MLE. Check by giving a numeric example that the MM estimate may be smaller than the MLE $X_{(n)}$, and thus the MM estimator doesn't always make sense.

Exercise 37 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(a, b)$. Show that the MM estimators of a and b have the form

$$\hat a = \bar X - \sqrt{3\Big(\frac{\sum_{i=1}^n X_i^2}{n} - \bar X^2\Big)}, \qquad \hat b = \bar X + \sqrt{3\Big(\frac{\sum_{i=1}^n X_i^2}{n} - \bar X^2\Big)}.$$

These estimators are different from the MLEs and don't always make sense.

Exercise 38 Let $X_1, \ldots, X_n \overset{iid}{\sim}$ Exponential with mean β. Prove that the MM estimator of β is $\bar X$, the same as the MLE.

Exercise 39 Let $X_1, \ldots, X_n \overset{iid}{\sim}$ Exponential with mean $1/\beta$. Prove that the MM estimator of β is $1/\bar X$, the same as the MLE.

6 Unbiased Estimator

Definition Let $X_1, \ldots, X_n \overset{iid}{\sim}$ pmf or pdf $f(x; \theta)$. Denote by $\hat\theta = \hat\theta(X_1, \ldots, X_n)$ an estimator of θ. The estimator $\hat\theta$ is called unbiased if $E(\hat\theta) = \theta$. An estimator that is not unbiased is called biased.

Example 3 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Bernoulli}(p)$, and consider $\hat p = \bar X$, the MLE and MM estimator of p. This estimator is unbiased because $E(\hat p) = E(\bar X) = E(X_1) = p$. In fact, for any distribution, $\bar X$ is an unbiased estimator of the mean, since $E(\bar X) = E(X_1)$.

Exercise 40 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Binomial}(N, p)$ where N is known. Verify that the MLE and MM estimator $\hat p = \bar X / N$ is an unbiased estimator of p.


Exercise 41 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Geometric}(p)$. Show that the MLE and MM estimator $\hat p = 1/\bar X$ is a biased estimator of p. Show also that among all estimators of p that are based on $X_1$ alone, the only unbiased estimator is

$$\hat p(X_1) = \begin{cases} 1, & \text{if } X_1 = 1,\\ 0, & \text{if } X_1 = 2, 3, \ldots. \end{cases}$$

Exercise 42 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Geometric}(p)$. Show that $\bar X$ is an unbiased estimator of the mean $E(X_1) = 1/p$.

Exercise 43 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Poisson}(\lambda)$. Check that the MLE and MM estimator $\hat\lambda = \bar X$ is an unbiased estimator of λ.

Exercise 44 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(0, \theta)$. Prove that $X_{(n)}$, the MLE of θ, is biased, whereas $2\bar X$, the MM estimator, is unbiased. Show that $\frac{n+1}{n} X_{(n)}$ is an unbiased estimator of θ.
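
The bias of $X_{(n)}$ and the effect of the correction factor $(n+1)/n$ show up clearly in simulation; recall that $E(X_{(n)}) = \frac{n}{n+1}\theta$. A sketch, assuming numpy:

    import numpy as np

    rng = np.random.default_rng(5)
    theta, n, reps = 5.0, 10, 200_000
    mx = rng.uniform(0, theta, size=(reps, n)).max(axis=1)

    print(mx.mean())                  # about n/(n+1)*theta = 4.545: biased low
    print(((n + 1) / n * mx).mean())  # about 5.0: the unbiased correction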

Exercise 45 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(a, b)$. Show that $X_{(1)}$, the MLE of a, is biased, and so is $X_{(n)}$, the MLE of b. Derive that $\frac{1}{n-1}\big(n X_{(1)} - X_{(n)}\big)$ is an unbiased estimator of a, and $\frac{1}{n-1}\big(n X_{(n)} - X_{(1)}\big)$ is an unbiased estimator of b.

Exercise 46 Let $X_1, \ldots, X_n \overset{iid}{\sim}$ Exponential with mean β. Verify that $\bar X$, the MLE and MM estimator of β, is an unbiased estimator of β.

Exercise 47 Let $X_1, \ldots, X_n \overset{iid}{\sim}$ Exponential with mean $1/\beta$. Verify that $1/\bar X$, the MLE and MM estimator of β, is a biased estimator of β. Show also that $\frac{n-1}{n\bar X}$ is an unbiased estimator of β. Hint: Use the fact that $\sum_{i=1}^n X_i \sim \mathrm{Gamma}(n, \beta)$.

Exercise 48 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Normal}(\mu, \sigma^2)$. Verify that $\bar X$, the MLE and MM estimator of µ, is unbiased, whereas $\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2$, the MLE and MM estimator of $\sigma^2$, is biased. Prove that

$$\frac{n}{n-1}\,\hat\sigma^2 = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar X)^2$$

is an unbiased estimator of $\sigma^2$.

7 Fisher Information, Cramér-Rao Lower Bound, Efficient Estimator

Definition Let $X \sim$ pmf or pdf $f(x; \theta)$ such that the support in x does not depend on θ. The Fisher information is defined as

$$I(\theta) = E\Big(\frac{\partial \ln f(X; \theta)}{\partial \theta}\Big)^2.$$

Equivalently, a more computationally convenient formula holds:

$$I(\theta) = -E\Big(\frac{\partial^2 \ln f(X; \theta)}{\partial \theta^2}\Big) = \begin{cases} -\sum_{x=-\infty}^{\infty} \Big(\dfrac{\partial^2 \ln f(x; \theta)}{\partial \theta^2}\Big) f(x; \theta), & \text{in the discrete case},\\[8pt] -\int_{-\infty}^{\infty} \Big(\dfrac{\partial^2 \ln f(x; \theta)}{\partial \theta^2}\Big) f(x; \theta)\, dx, & \text{in the continuous case}. \end{cases}$$

For distributions for which the support of the pmf or pdf in x depends on θ, the Fisher information is defined to be infinite, that is, $I(\theta) = \infty$.

Theorem 2 (Cramér-Rao Lower Bound) Let $X_1, \ldots, X_n \overset{iid}{\sim}$ pmf or pdf $f(x; \theta)$. Let $\hat\theta = \hat\theta(X_1, \ldots, X_n)$ be an unbiased estimator of θ. Then the following lower bound for the variance of $\hat\theta$ holds:

$$\mathrm{Var}(\hat\theta) \ge \frac{1}{n\, I(\theta)}.$$

This bound is called the Cramér-Rao lower bound (often abbreviated as CRLB). For distributions for which the support in x depends on θ, the CRLB = 0.

Definition Let $X_1, \ldots, X_n \overset{iid}{\sim}$ pmf or pdf $f(x; \theta)$. Let $\hat\theta = \hat\theta(X_1, \ldots, X_n)$ be an unbiased estimator of θ. The estimator $\hat\theta$ is called an efficient estimator of θ if its variance attains the CRLB, that is, if $\mathrm{Var}(\hat\theta) = \frac{1}{n I(\theta)}$.


Example 4 Let $X \sim \mathrm{Bernoulli}(p)$. The pmf of X is of the form $f(x; p) = p^x(1-p)^{1-x}$, $x = 0$ or 1. Thus, $\ln f(x; p) = x\ln p + (1-x)\ln(1-p)$. Now we differentiate:

$$\frac{\partial \ln f(x; p)}{\partial p} = \frac{x}{p} - \frac{1-x}{1-p}, \qquad \frac{\partial^2 \ln f(x; p)}{\partial p^2} = -\frac{x}{p^2} - \frac{1-x}{(1-p)^2}.$$

The Fisher information is computed as

$$I(p) = -E\Big(\frac{\partial^2 \ln f(X; p)}{\partial p^2}\Big) = -E\Big(-\frac{X}{p^2} - \frac{1-X}{(1-p)^2}\Big) = \frac{E(X)}{p^2} + \frac{1 - E(X)}{(1-p)^2} = \frac{p}{p^2} + \frac{1-p}{(1-p)^2} = \frac{1}{p(1-p)}.$$

The CRLB is

$$\frac{1}{n\, I(p)} = \frac{1}{n\frac{1}{p(1-p)}} = \frac{p(1-p)}{n}.$$

The estimator $\hat p = \bar X$, which is the MLE and MM estimator of p, is efficient because it is unbiased and

$$\mathrm{Var}(\bar X) = \frac{p(1-p)}{n} = \text{CRLB}.$$

Another estimator, $\hat p = X_1$ for instance, is an unbiased estimator of p (since $E(X_1) = p$), but it is not efficient because its variance doesn't attain the CRLB. To see this, we write

$$\mathrm{Var}(X_1) = p(1-p) > \frac{p(1-p)}{n} = \text{CRLB}.$$
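
The efficiency claim can be checked empirically: the Monte Carlo variance of $\bar X$ should match $p(1-p)/n$. A minimal sketch, assuming numpy:

    import numpy as np

    rng = np.random.default_rng(6)
    p, n, reps = 0.3, 50, 200_000
    xbar = rng.binomial(1, p, size=(reps, n)).mean(axis=1)

    print(xbar.var())        # Monte Carlo variance of the MLE
    print(p * (1 - p) / n)   # CRLB = 0.0042; the two should agree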

Exercise 49 Let $X \sim \mathrm{Binomial}(N, p)$ where N is known. Show that the Fisher information is $I(p) = \frac{N}{p(1-p)}$, and the CRLB is $\frac{p(1-p)}{nN}$. Verify that the MLE and MM unbiased estimator $\hat p = \bar X / N$ is efficient. Show also that $X_1/N$ and $(X_1 + X_2)/(2N)$ are unbiased but not efficient estimators of p.

Exercise 50 Let $X \sim \mathrm{Poisson}(\lambda)$. Show that $I(\lambda) = 1/\lambda$, the CRLB is $\lambda/n$, and the MLE and MM unbiased estimator $\hat\lambda = \bar X$ is an efficient estimator of λ. Check also that $(X_1 + X_3 + X_5)/3$ is an unbiased but not efficient estimator of λ.

Exercise 51 Let $X \sim \mathrm{Uniform}(0, \theta)$. Show that $I(\theta) = \infty$, and the CRLB = 0.


Exercise 52 Let $X \sim$ Exponential with mean β. Derive that $I(\beta) = 1/\beta^2$, the CRLB = $\beta^2/n$, and the MLE and MM unbiased estimator $\hat\beta = \bar X$ is an efficient estimator of β.

Exercise 53 Let $X \sim$ Exponential with mean $1/\beta$. Prove that $I(\beta) = 1/\beta^2$, and, thus, the CRLB = $\beta^2/n$. Show also that the unbiased estimator $\frac{n-1}{n\bar X}$ has variance $\frac{\beta^2}{n-2}$, and hence it is not an efficient estimator of β.

Exercise 54 Let $X \sim \mathrm{Normal}(\mu, \sigma^2)$ where σ is known. Verify that $I(\mu) = 1/\sigma^2$, the CRLB is equal to $\sigma^2/n$, and the MLE and MM unbiased estimator $\hat\mu = \bar X$ is efficient.

Exercise 55 Let $X \sim \mathrm{Normal}(\mu, \sigma^2)$ where µ is known. Show that the Fisher information is $I(\sigma^2) = 1/(2\sigma^4)$ and, thus, the CRLB is $2\sigma^4/n$. Prove that the estimator $\hat\sigma_1^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \mu)^2$ is an efficient estimator of $\sigma^2$, whereas the unbiased estimator $\hat\sigma_2^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$ is not efficient. Hint: Check that the variance of $\hat\sigma_2^2$ is $\frac{2\sigma^4}{n-1}$, which exceeds the CRLB.

8 Consistent Estimator

Definition Let $X_1, \ldots, X_n$ be independent with a common density $f(x; \theta)$. An estimator $\hat\theta_n = \hat\theta_n(X_1, \ldots, X_n)$ is called a consistent estimator of θ if, for any $\varepsilon > 0$, $P\big(|\hat\theta_n - \theta| \ge \varepsilon\big) \to 0$ as $n \to \infty$.

Proposition The Chebyshev inequality states that for any $\varepsilon > 0$,

$$P\big(|\hat\theta_n - \theta| \ge \varepsilon\big) \le \frac{E\big[(\hat\theta_n - \theta)^2\big]}{\varepsilon^2}.$$

From here, if $E\big[(\hat\theta_n - \theta)^2\big] \to 0$ as $n \to \infty$, then $\hat\theta_n$ is a consistent estimator of θ.


Definition The quantity $E\big[(\hat\theta_n - \theta)^2\big]$ is called the mean square error and is denoted by MSE. The mean square error can be expressed as the sum of two terms:

$$\mathrm{MSE} = E\big[(\hat\theta_n - \theta)^2\big] = E\big[(\hat\theta_n - E(\hat\theta_n) + E(\hat\theta_n) - \theta)^2\big]$$
$$= E\big[(\hat\theta_n - E(\hat\theta_n))^2\big] + 2\underbrace{E\big[\hat\theta_n - E(\hat\theta_n)\big]}_{=\,0}\big[E(\hat\theta_n) - \theta\big] + \big[E(\hat\theta_n) - \theta\big]^2 = \mathrm{Var}(\hat\theta_n) + \big[E(\hat\theta_n) - \theta\big]^2.$$

The quantity $E(\hat\theta_n) - \theta$ represents the bias of the estimator $\hat\theta_n$. Thus, the formula for the MSE has the form:

$$\mathrm{MSE} = \mathrm{Var}(\hat\theta_n) + \big[\mathrm{bias}(\hat\theta_n, \theta)\big]^2.$$

If $\hat\theta_n$ is an unbiased estimator of θ, then $\mathrm{MSE} = \mathrm{Var}(\hat\theta_n)$, and if this variance tends to zero as n increases, then the estimator is consistent.

A biased estimator for which the bias goes to zero as n goes to infinity is called asymptotically unbiased. Thus, an estimator may be biased yet still consistent, provided it is asymptotically unbiased and its variance tends to zero as the sample size increases.

Exercise 56 Show that if an estimator is efficient, it is consistent. Use this to argue that the MLEs are consistent for the parameters of the following distributions: $\mathrm{Bernoulli}(p)$, $\mathrm{Binomial}(N, p)$ with fixed N, $\mathrm{Poisson}(\lambda)$, Exponential with mean β, and $\mathrm{Normal}(\mu, \sigma^2)$ for given σ.

Exercise 57 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(0, \theta)$. Verify that:
(a) The unbiased estimator $\frac{n+1}{n} X_{(n)}$ is a consistent estimator of θ. Hint: Show that $\mathrm{MSE} = \frac{\theta^2}{n(n+2)}$.
(b) The MLE $X_{(n)}$ is asymptotically unbiased and is a consistent estimator of θ. Hint: Prove that $\mathrm{bias} = -\frac{\theta}{n+1}$ and $\mathrm{MSE} = \frac{2\theta^2}{(n+1)(n+2)}$.
(c) The estimator $\frac{n+2}{n+1} X_{(n)}$, which has the smallest MSE among all scalar multiples of $X_{(n)}$ (prove this!), is a consistent estimator of θ. Hint: Show that its $\mathrm{MSE} = \frac{\theta^2}{(n+1)^2}$.
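
The three MSE formulas invite a direct Monte Carlo comparison. A sketch, assuming numpy:

    import numpy as np

    rng = np.random.default_rng(7)
    theta, n, reps = 1.0, 20, 300_000
    mx = rng.uniform(0, theta, size=(reps, n)).max(axis=1)

    for label, est, mse_formula in [
        ("(n+1)/n max",     (n + 1) / n * mx,       theta**2 / (n * (n + 2))),
        ("max",             mx,                     2 * theta**2 / ((n + 1) * (n + 2))),
        ("(n+2)/(n+1) max", (n + 2) / (n + 1) * mx, theta**2 / (n + 1)**2),
    ]:
        print(label, ((est - theta)**2).mean(), mse_formula)  # empirical vs exact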


Exercise 58 Consider $X_1, \ldots, X_n$ that come from a $\mathrm{Uniform}(0, \theta)$ distribution. Prove that:
(a) The MM estimator $2\bar X_n$ is an unbiased, consistent estimator of θ. Hint: Show that $\mathrm{MSE} = \frac{\theta^2}{3n}$.
(b) The bias of the estimator $\bar X_n$ is independent of n, and thus this estimator is not a consistent estimator of θ.

Exercise 59 Let $X_1, \ldots, X_n$ be iid realizations of an exponential random variable with mean $1/\beta$. Check that:
(a) The MLE $1/\bar X_n$ is an asymptotically unbiased and consistent estimator of β. Hint: Prove that its $\mathrm{bias} = \frac{\beta}{n-1}$ and $\mathrm{MSE} = \frac{(n+2)\beta^2}{(n-1)(n-2)}$.
(b) The unbiased estimator $\frac{n-1}{n\bar X_n}$ is consistent. Hint: Use the fact proved in Exercise 53 that the variance of this estimator is $\frac{\beta^2}{n-2}$.

Exercise 60 Suppose $X_1, \ldots, X_n$ is a random sample from a $\mathrm{Normal}(\mu, \sigma^2)$ distribution. Show that:
(a) The unbiased estimator $\frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X_n)^2$ is a consistent estimator of $\sigma^2$. Hint: As checked in Exercise 55, its variance is equal to $\frac{2\sigma^4}{n-1}$.
(b) The MLE $\frac{1}{n}\sum_{i=1}^n (X_i - \bar X_n)^2$ is an asymptotically unbiased, consistent estimator of $\sigma^2$. Hint: Show that its $\mathrm{MSE} = \frac{2n-1}{n^2}\sigma^4$.

9 Sufficient Statistic, Factorization Theorem

Definition Let $X_1, \ldots, X_n \overset{iid}{\sim}$ pmf or pdf $f(x; \theta)$. A statistic $\hat\theta = \hat\theta(X_1, \ldots, X_n)$ is called a sufficient statistic for θ if the conditional distribution of $X_1, \ldots, X_n$, given $\hat\theta$, does not depend on θ.

In practice it is easier to find sufficient statistics not from the definition but with the factorization theorem.


Theorem 3 (Factorization Theorem) Let $X_1, \ldots, X_n \overset{iid}{\sim}$ pmf or pdf $f(x; \theta)$. Then $\hat\theta = \hat\theta(X_1, \ldots, X_n)$ is a sufficient statistic for θ if and only if there exist two nonnegative functions g and h such that

$$\prod_{i=1}^{n} f(X_i; \theta) = g(X_1, \ldots, X_n)\, h(\hat\theta; \theta).$$

This expression says that the likelihood function of the observations $X_1, \ldots, X_n$ can be written as a product of two functions, one of which depends only on the observations, while the other depends on a statistic that cannot be separated from the parameter. Both functions are multiplicative factors, hence the name factorization theorem.

Remark The factorization theorem can be formulated for distributions that depend on several parameters. The vector of estimators $(\hat\theta_1, \ldots, \hat\theta_k)$ is a sufficient statistic for the vector of parameters $(\theta_1, \ldots, \theta_k)$ if and only if there exist two nonnegative functions g and h such that

$$\prod_{i=1}^{n} f(X_i; \theta_1, \ldots, \theta_k) = g(X_1, \ldots, X_n)\, h(\hat\theta_1, \ldots, \hat\theta_k;\, \theta_1, \ldots, \theta_k).$$

Proposition Any invertible function of a sufficient statistic is itself a sufficient statistic.

Example 5 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Bernoulli}(p)$. The likelihood function has the form

$$\prod_{i=1}^{n} f(X_i; p) = \prod_{i=1}^{n} p^{X_i}(1-p)^{1-X_i} = p^{\sum_{i=1}^n X_i}(1-p)^{n - \sum_{i=1}^n X_i}.$$

Now let

$$g(X_1, \ldots, X_n) = 1, \qquad \hat p = \sum_{i=1}^{n} X_i, \qquad h(\hat p; p) = p^{\hat p}(1-p)^{n - \hat p}.$$

We see that

$$\prod_{i=1}^{n} f(X_i; p) = p^{\hat p}(1-p)^{n - \hat p} = g(X_1, \ldots, X_n)\, h(\hat p; p).$$

By the factorization theorem, $\sum_{i=1}^n X_i$ is a sufficient statistic for p. Since any invertible function of a sufficient statistic is also sufficient, we conclude that $\bar X = \sum_{i=1}^n X_i / n$ is also a sufficient statistic for p.


Exercise 61 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Binomial}(N, p)$ where N is known. Show that $\bar X / N$ is a sufficient statistic for p.

Exercise 62 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Geometric}(p)$. Verify that $\bar X$ is a sufficient statistic for p.

Exercise 63 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Poisson}(\lambda)$. Prove that $\bar X$ is a sufficient statistic for λ.

Exercise 64 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(0, \theta)$. Check that $X_{(n)}$ is a sufficient statistic for θ.

Exercise 65 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(a, b)$. Prove that the vector of estimators $(X_{(1)}, X_{(n)})$ is a sufficient statistic for the vector of parameters $(a, b)$.

Exercise 66 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Exponential}(\beta)$. Check that $\bar X$ is a sufficient statistic for β.

Exercise 67 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Normal}(\mu, \sigma^2)$. Verify that the vector of estimators $\Big(\bar X,\ \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2\Big)$ is sufficient for the vector of parameters $(\mu, \sigma^2)$. Hint: Show first that $\big(\sum_{i=1}^n X_i,\ \sum_{i=1}^n X_i^2\big)$ is sufficient.


10 Uniform Minimum Variance Unbiased Estimator (UMVUE), Lehmann-Scheffé Theorem

Definition Let $X_1, \ldots, X_n \overset{iid}{\sim}$ pmf or pdf $f(x; \theta)$. An estimator $\hat\theta = \hat\theta(X_1, \ldots, X_n)$ of θ is called a uniformly minimum variance unbiased estimator (UMVUE) if it is unbiased and its variance is minimal, that is, if $E(\hat\theta) = \theta$ and $\mathrm{Var}(\hat\theta) \le \mathrm{Var}(\tilde\theta)$ for any unbiased estimator $\tilde\theta$.

Remark If an estimator is efficient, that is, its variance attains the CRLB,then it is the UMVUE.

Example 6 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Bernoulli}(p)$. We would like to find the UMVUE for p. We recall that $\bar X$ is an efficient estimator of p. Hence, it is the UMVUE.

Exercise 68 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Binomial}(N, p)$ where N is fixed. Show that $\bar X / N$ is the UMVUE for p.

Exercise 69 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Poisson}(\lambda)$. Verify that $\bar X$ is the UMVUE for λ.

Exercise 70 Let $X_1, \ldots, X_n \overset{iid}{\sim}$ Exponential with mean β. Check that $\bar X$ is the UMVUE for β.

Exercise 71 Let $X \sim \mathrm{Normal}(\mu, \sigma^2)$ where σ is known. Verify that $\hat\mu = \bar X$ is the UMVUE for µ.

Remark For some distributions the CRLB is unattainable, so we cannot produce an efficient estimator that is the UMVUE of the parameter. In other instances, we might want to find the UMVUE of a function of the parameter, and finding an efficient estimator of a function of the parameter may be a daunting task even if an efficient estimator of the parameter exists. To find the UMVUE in those cases, we resort to the Lehmann-Scheffé theorem introduced below.

Definition Suppose $X \sim$ pmf or pdf $f(x; \theta)$. The distribution $f(x; \theta)$ is called complete if and only if the only unbiased estimator of zero is zero itself, that is, the condition $E(u(X)) = 0$ implies that $u(X) = 0$ everywhere except on a set of probability zero.

Definition A statistic for the parameter of a complete distribution is called complete.

Theorem 4 (Lehmann-Scheffé) Let $X_1, \ldots, X_n \overset{iid}{\sim}$ pmf or pdf $f(x; \theta)$, and let $\hat\theta = \hat\theta(X_1, \ldots, X_n)$ be an unbiased estimator of θ based on a complete sufficient statistic for θ. Then $\hat\theta$ is the unique UMVUE of θ.

Example 7 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(0, \theta)$. Recall that the Fisher information $I(\theta) = \infty$ and the CRLB = 0. Thus, an efficient estimator of θ does not exist. However, we can find the UMVUE for θ if we use the Lehmann-Scheffé theorem.

We have seen earlier that $X_{(n)}$ is a sufficient statistic for θ, and that $\frac{n+1}{n} X_{(n)}$ is an unbiased estimator of θ. Therefore, if we can show that $\mathrm{Uniform}(0, \theta)$ is a complete distribution, it follows from the Lehmann-Scheffé theorem that $\frac{n+1}{n} X_{(n)}$ is the UMVUE for θ.

To show that $\mathrm{Uniform}(0, \theta)$ is a complete distribution, we take $X \sim \mathrm{Uniform}(0, \theta)$ and consider an unbiased estimator $u(X)$ of zero. We have

$$0 = E(u(X)) = \int_0^\theta \frac{u(x)}{\theta}\, dx.$$

Differentiating this identity with respect to θ, we get

$$0 = \Big[\int_0^\theta \frac{u(x)}{\theta}\, dx\Big]' = \frac{u(\theta)}{\theta} - \frac{1}{\theta^2}\int_0^\theta u(x)\, dx.$$

The second term is equal to zero, and, therefore, $u(\theta)/\theta = 0$, or, equivalently, $u(\theta) = 0$ for any θ. Thus, $\mathrm{Uniform}(0, \theta)$ is a complete distribution, and $\frac{n+1}{n} X_{(n)}$ is the UMVUE for θ.

Example 8 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Bernoulli}(p)$. We would like to find the UMVUE for the variance $\mathrm{Var}(X_1) = p(1-p)$. First we show that $\mathrm{Bernoulli}(p)$ is a complete distribution. We take $X \sim \mathrm{Bernoulli}(p)$ and let $u(X)$ be an unbiased estimator of zero. We obtain

$$0 = E(u(X)) = u(0)(1-p) + u(1)p = u(0) + \big(u(1) - u(0)\big)p.$$

Since this is a linear function of p, both coefficients must be equal to zero, that is, $u(0) = 0$ and $u(1) - u(0) = 0$. From here, $u(X) = 0$ for $X = 0$ or 1, and, thus, the distribution is complete.


Further, we know that $\bar X$ is a sufficient statistic for p. It remains to find an unbiased estimator of $p(1-p)$ based on $\bar X$. Consider $\bar X(1 - \bar X)$. Its mean is computed as

$$E\big[\bar X(1 - \bar X)\big] = E(\bar X) - E(\bar X^2) = E(\bar X) - \Big[\mathrm{Var}(\bar X) + \big(E(\bar X)\big)^2\Big] = p - \Big[\frac{p(1-p)}{n} + p^2\Big] = \frac{n-1}{n}\, p(1-p).$$

Hence, $\frac{n}{n-1}\,\bar X(1 - \bar X)$ is an unbiased estimator of $p(1-p)$. By the Lehmann-Scheffé theorem, it is the UMVUE for $p(1-p)$.
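
A simulation makes the bias-correction factor $n/(n-1)$ visible. A sketch, assuming numpy:

    import numpy as np

    rng = np.random.default_rng(8)
    p, n, reps = 0.3, 15, 300_000
    xbar = rng.binomial(1, p, size=(reps, n)).mean(axis=1)

    naive = xbar * (1 - xbar)        # biased: its mean is (n-1)/n * p(1-p)
    umvue = n / (n - 1) * naive      # the UMVUE of p(1-p)
    print(naive.mean(), umvue.mean(), p * (1 - p))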

Exercise 72 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(a, b)$. Show that $\frac{1}{n-1}\big(n X_{(1)} - X_{(n)}\big)$ is the UMVUE for a, and $\frac{1}{n-1}\big(n X_{(n)} - X_{(1)}\big)$ is the UMVUE for b.

Exercise 73 Let $X_1, \ldots, X_n \overset{iid}{\sim}$ Exponential with mean $1/\beta$. Show that $\frac{n-1}{n\bar X}$ is the UMVUE for β.

Exercise 74 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Normal}(\mu, \sigma^2)$. Prove that $\bar X$ is the UMVUE for µ, and $\frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$ is the UMVUE for $\sigma^2$.

Exercise 75 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Binomial}(N, p)$ where N is fixed. Show that $\frac{n}{nN - 1}\,\bar X (N - \bar X)$ is the UMVUE for the variance $Np(1-p)$.

Exercise 76 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Poisson}(\lambda)$. Derive that $\big(1 - 1/n\big)\bar X + \bar X^2$ is the UMVUE for the second moment $E(X_1^2) = \lambda(1 + \lambda)$.

Exercise 77 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Normal}(\mu, \sigma^2)$. Denote $\hat\sigma^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$. Verify that $\bar X^2 - \hat\sigma^2/n$ is the UMVUE for $\mu^2$.


11 Ancillary Statistic

Definition Suppose $X_1, \ldots, X_n$ are independent random variables with some common pmf or pdf $f(x; \theta)$. A statistic $u(X_1, \ldots, X_n)$ is called an ancillary statistic if its distribution doesn't depend on the parameter θ. An ancillary statistic contains no information about θ.

Proposition Let $X_1, \ldots, X_n \overset{iid}{\sim}$ pmf or pdf $f(x; \theta)$ where $f(x; \theta) = f_0(x - \theta)$, that is, θ is a location parameter. Then any statistic that is a function of the differences $X_i - X_j$, $i > j$, $i, j = 1, \ldots, n$, is an ancillary statistic.
Proof: Since θ is a location parameter, for any $i = 1, \ldots, n$, $X_i = Z_i + \theta$ for some $Z_i$ whose distribution doesn't depend on θ. Thus, a function of $X_i - X_j$ is a function of $Z_i - Z_j$, whose distribution is θ-free.

Proposition Let $X_1, \ldots, X_n \overset{iid}{\sim}$ pmf or pdf $f(x; \sigma)$ where $f(x; \sigma) = (1/\sigma) f_0(x/\sigma)$, that is, σ is a scale parameter. Then any statistic that is a function of the ratios $X_i / X_j$, $i > j$, $i, j = 1, \ldots, n$, is an ancillary statistic.
Proof: Since σ is a scale parameter, for any $i = 1, \ldots, n$, $X_i = \sigma Z_i$ for some $Z_i$ whose distribution doesn't depend on σ. Thus, a function of $X_i / X_j = \sigma Z_i / (\sigma Z_j)$ is a function of $Z_i / Z_j$, whose distribution is independent of σ.

Basu’s Theorem A complete sufficient statistic is independent of any an-cillary statistic.

Exercise 78 A random sample $X_1, \ldots, X_n$ comes from a $\mathrm{Uniform}(\theta, \theta+1)$ distribution. Show that:
(a) θ is a location parameter.
(b) The difference $X_1 - X_4$ is an ancillary statistic for θ.
(c) The range $R = X_{(n)} - X_{(1)}$ is an ancillary statistic for θ.
(d) The two-dimensional statistic $(X_{(1)}, X_{(n)})$ is sufficient for θ but not complete. Show that it is not independent of the ancillary statistic R. Explain why this doesn't contradict Basu's theorem.

Exercise 79 Suppose $X_1, \ldots, X_n$ are iid random variables that have a $\mathrm{Uniform}(0, \theta)$ distribution. Show that:
(a) θ is a scale parameter.
(b) The ratio $X_2/X_5$ is an ancillary statistic for θ.
(c) The statistic $\frac{1}{n}\sum_{i=1}^n \ln X_i - \ln X_1$ is ancillary for θ.
(d) The MLE $X_{(n)}$ is independent of $\frac{X_2 X_3}{X_4 X_5}$.


Exercise 80 Let $X_1, \ldots, X_n \overset{iid}{\sim} f(x; \theta) = \exp\{-(x - \theta)\}$, $x > \theta > 0$. Check that:
(a) θ is a location parameter.
(b) The sample variance $S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$ is ancillary for θ.
(c) The MLE $X_{(1)}$ is a complete sufficient statistic for θ.
(d) $X_{(1)}$ is independent of $S^2$.

Exercise 81 Independent observations $X_1, \ldots, X_n$ come from a $\mathrm{Normal}(\mu, 1)$ distribution. Verify that:
(a) µ is a location parameter.
(b) The sample variance $S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$ is ancillary for µ.
(c) The sample mean $\bar X$ is complete and sufficient for µ.
(d) The statistics $\bar X$ and $S^2$ are independent.

12 Bayesian Estimator

Definition Let $X_1, \ldots, X_n \overset{iid}{\sim}$ pmf or pdf $f(x; \theta)$. In the Bayesian approach to parameter estimation, θ is modeled as a random variable Θ with the prior pmf or pdf $\pi(\theta)$. Thus, the observable random variables $X_1, \ldots, X_n$ have pmf or pdf $f(x \mid \Theta)$ where $\Theta \sim \pi(\theta)$. The joint pmf or pdf of $X_1, \ldots, X_n$ and Θ is the function

$$\prod_{i=1}^{n} f(X_i \mid \theta)\, \pi(\theta).$$

The marginal pmf or pdf of $X_1, \ldots, X_n$ is

$$\begin{cases} \sum_{\theta=-\infty}^{\infty} \prod_{i=1}^{n} f(X_i \mid \theta)\, \pi(\theta), & \text{if } \Theta \text{ is discrete},\\[6pt] \int_{-\infty}^{\infty} \prod_{i=1}^{n} f(X_i \mid \theta)\, \pi(\theta)\, d\theta, & \text{if } \Theta \text{ is continuous}. \end{cases}$$

The conditional pmf or pdf of Θ, given $X_1, \ldots, X_n$, is called the posterior pmf or pdf of Θ and is found as

$$\begin{cases} \dfrac{\prod_{i=1}^n f(X_i \mid \theta)\, \pi(\theta)}{\sum_{\theta=-\infty}^{\infty} \prod_{i=1}^n f(X_i \mid \theta)\, \pi(\theta)}, & \text{if } \Theta \text{ is discrete},\\[10pt] \dfrac{\prod_{i=1}^n f(X_i \mid \theta)\, \pi(\theta)}{\int_{-\infty}^{\infty} \prod_{i=1}^n f(X_i \mid \theta)\, \pi(\theta)\, d\theta}, & \text{if } \Theta \text{ is continuous}. \end{cases}$$


Remark The denominator $\sum_{\theta=-\infty}^{\infty} \prod_{i=1}^n f(X_i \mid \theta)\, \pi(\theta)$ (if Θ is discrete) or $\int_{-\infty}^{\infty} \prod_{i=1}^n f(X_i \mid \theta)\, \pi(\theta)\, d\theta$ (if Θ is continuous) depends only on the values $X_1, \ldots, X_n$ and is a normalizing constant for the posterior pmf or pdf.

Definition The Bayesian estimator of the parameter θ is the mean of the posterior distribution of Θ, that is,

$$\hat\theta(X_1, \ldots, X_n) = E(\Theta \mid X_1, \ldots, X_n) = \begin{cases} \sum_{\theta=-\infty}^{\infty} \theta\, \dfrac{\prod_{i=1}^n f(X_i \mid \theta)\, \pi(\theta)}{\sum_{\theta=-\infty}^{\infty} \prod_{i=1}^n f(X_i \mid \theta)\, \pi(\theta)}, & \text{if } \Theta \text{ is discrete},\\[10pt] \int_{-\infty}^{\infty} \theta\, \dfrac{\prod_{i=1}^n f(X_i \mid \theta)\, \pi(\theta)}{\int_{-\infty}^{\infty} \prod_{i=1}^n f(X_i \mid \theta)\, \pi(\theta)\, d\theta}\, d\theta, & \text{if } \Theta \text{ is continuous}. \end{cases}$$

Definition Let $X_1, \ldots, X_n \overset{iid}{\sim}$ pmf or pdf $f(x \mid \Theta)$ where $\Theta \sim \pi(\theta)$. The prior distribution $\pi(\theta)$ is called the conjugate prior to the pmf or pdf $f(x \mid \Theta)$ if the posterior distribution of Θ has the same algebraic form as the prior distribution.

Remark If a conjugate prior is used in the Bayesian estimation, then theposterior distribution has a known form, and so the need for computing thenormalizing constant is eliminated.

Example 9 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Bernoulli}(p)$. The likelihood function has the form

$$\prod_{i=1}^{n} p^{X_i}(1-p)^{1-X_i} = p^{\sum_{i=1}^n X_i}(1-p)^{n - \sum_{i=1}^n X_i}.$$

Viewed as a function of the unknown parameter p, this has the form of a beta density. Thus, using the Bayesian approach, we model p as a random variable and take a $\mathrm{Beta}(\alpha, \beta)$ distribution, for some known α and β, as the conjugate prior distribution. The posterior distribution of the random variable p is equal (up to a multiplicative constant) to

$$p^{\sum_{i=1}^n X_i}(1-p)^{n - \sum_{i=1}^n X_i}\, p^{\alpha-1}(1-p)^{\beta-1} = p^{\sum_{i=1}^n X_i + \alpha - 1}(1-p)^{n - \sum_{i=1}^n X_i + \beta - 1}.$$

This means that the posterior distribution of p is also beta, with parameters $\sum_{i=1}^n X_i + \alpha$ and $n - \sum_{i=1}^n X_i + \beta$. Therefore, the Bayesian estimator of p is the posterior mean

$$\hat p = \frac{\sum_{i=1}^n X_i + \alpha}{\sum_{i=1}^n X_i + \alpha + n - \sum_{i=1}^n X_i + \beta} = \frac{\sum_{i=1}^n X_i + \alpha}{n + \alpha + \beta}.$$
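
The conjugate update is a two-line computation in code. A sketch, assuming numpy; the Beta(2, 2) prior and the data-generating value 0.7 are arbitrary choices for illustration:

    import numpy as np

    rng = np.random.default_rng(9)
    alpha, beta_ = 2.0, 2.0                 # Beta(2, 2) prior
    x = rng.binomial(1, 0.7, size=50)       # Bernoulli data

    alpha_post = alpha + x.sum()            # posterior Beta parameters
    beta_post = beta_ + len(x) - x.sum()
    print(alpha_post / (alpha_post + beta_post), x.mean())  # Bayes estimate vs MLE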

Exercise 82 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Binomial}(N, p)$. Show that $\mathrm{Beta}(\alpha, \beta)$ with some known α and β is a conjugate prior distribution for the random variable p. Verify that the posterior distribution of p is $\mathrm{Beta}\big(\sum_{i=1}^n X_i + \alpha,\ nN - \sum_{i=1}^n X_i + \beta\big)$, and that $\hat p = \frac{\sum_{i=1}^n X_i + \alpha}{nN + \alpha + \beta}$ is the Bayesian estimator of p.

Exercise 83 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Geometric}(p)$. Show that $\mathrm{Beta}(\alpha, \beta)$ with some known α and β is a conjugate prior distribution for the random variable p. Verify that the posterior distribution of p is $\mathrm{Beta}\big(n + \alpha,\ \sum_{i=1}^n X_i - n + \beta\big)$, and that $\hat p = \frac{n + \alpha}{\sum_{i=1}^n X_i + \alpha + \beta}$ is the Bayesian estimator of p.

Exercise 84 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Poisson}(\lambda)$. Prove that $\mathrm{Gamma}(\alpha, \beta)$ with some known α and β is a conjugate prior for the random variable λ. Check that $\mathrm{Gamma}\big(\sum_{i=1}^n X_i + \alpha,\ (n + 1/\beta)^{-1}\big)$ is the posterior distribution of λ, and that the Bayesian estimator of λ is $\hat\lambda = \frac{\sum_{i=1}^n X_i + \alpha}{n + 1/\beta}$.

Exercise 85 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Normal}(\mu, \sigma^2)$ where $\sigma^2$ is known. Show that $\mathrm{Normal}(\mu_0, \sigma_0^2)$ with some known $\mu_0$ and $\sigma_0^2$ is a conjugate prior for the random variable µ. Check that the posterior distribution of µ is normal with mean $\Big(\frac{\sum_{i=1}^n X_i}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}\Big) \Big/ \Big(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\Big)$ and variance $\Big(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\Big)^{-1}$. Verify also that the Bayesian estimator of µ is $\hat\mu = \Big(\frac{\sum_{i=1}^n X_i}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}\Big) \Big/ \Big(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\Big)$.

Exercise 86 Let $X \sim \mathrm{Poisson}(\lambda)$. Suppose λ is considered as a random variable with the prior distribution $\pi(1) = 1/2$, $\pi(2) = 1/4$, and $\pi(3) = 1/4$. Assume that $X = 2$ is observed. Show that the Bayesian estimator of λ is $\hat\lambda = 1.8332$.
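
With a discrete prior, the posterior mean is a finite weighted average and is easy to compute directly. A sketch, assuming numpy and scipy:

    import numpy as np
    from scipy.stats import poisson

    lam = np.array([1.0, 2.0, 3.0])
    prior = np.array([0.5, 0.25, 0.25])

    post = poisson.pmf(2, lam) * prior   # unnormalized posterior, given X = 2
    post /= post.sum()
    print((lam * post).sum())            # posterior mean, about 1.833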

13 Likelihood Ratio Test

Definition Let $X_1, \ldots, X_n$ be iid with pdf $f(x; \theta)$. Suppose we want to test $H_0: \theta = \theta_0$ against $H_1: \theta \ne \theta_0$. Define the likelihood ratio test as follows. The test statistic is the ratio of the two likelihood functions in which the parameter θ assumes the value $\theta_0$ and the MLE $\hat\theta$, respectively, that is,

$$\Lambda = \frac{L(\theta_0)}{L(\hat\theta)} = \frac{\prod_{i=1}^n f(X_i; \theta_0)}{\prod_{i=1}^n f(X_i; \hat\theta)}.$$

If $\theta_0$ is the true value of θ, then $L(\theta_0)$ is asymptotically the maximum value of $L(\theta)$ (intuitively, if we sample the entire population, the most likely value of θ is $\theta_0$). Thus, under $H_0$, Λ should be close to 1, and the decision rule for the test is to reject $H_0$ if $\Lambda \le c$, where the constant c is such that $\alpha = P(\Lambda \le c \mid H_0 \text{ is true})$ for a significance level α. The region $\{X_1, \ldots, X_n : \Lambda \le c\}$ is called the rejection region.

As a rule, Λ is a very complicated function, and its distribution is very hardto figure out. However, an asymptotic distribution can be used.

Proposition Under $H_0$, for large n, $-2\ln\Lambda$ has approximately a chi-squared distribution with one degree of freedom.

Definition An asymptotic likelihood ratio test with significance level α has the test statistic $\chi^2 = -2\ln\Lambda$, and rejects $H_0$ if $\chi^2 \ge \chi^2_\alpha(1)$, where $\chi^2_\alpha(1)$ is the $(1-\alpha)$-percentile of a chi-squared distribution with one degree of freedom.

Example Suppose $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Bernoulli}(p)$, and suppose we are interested in testing $H_0: p = p_0$ versus $H_1: p \ne p_0$. The likelihood ratio is

$$\Lambda = \frac{\prod_{i=1}^n p_0^{X_i}(1-p_0)^{1-X_i}}{\prod_{i=1}^n \bar X^{X_i}(1-\bar X)^{1-X_i}} = \frac{p_0^{n\bar X}(1-p_0)^{n - n\bar X}}{\bar X^{n\bar X}(1-\bar X)^{n - n\bar X}}.$$

For large n, we reject $H_0$ if

$$\chi^2 = -2\ln\Lambda = -2n\bar X\,\ln\Big(\frac{p_0}{\bar X}\Big) - 2n(1-\bar X)\,\ln\Big(\frac{1-p_0}{1-\bar X}\Big)$$

exceeds the critical value $\chi^2_\alpha(1)$.
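
In code, the test reduces to a few lines. A sketch assuming numpy and scipy; the sample and the value of $p_0$ below are illustrative:

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(10)
    x = rng.binomial(1, 0.5, size=100)
    p0, n, xbar = 0.4, len(x), x.mean()

    stat = (-2 * n * xbar * np.log(p0 / xbar)
            - 2 * n * (1 - xbar) * np.log((1 - p0) / (1 - xbar)))
    print(stat, chi2.ppf(0.95, df=1))   # reject H0 if stat > 3.841 (alpha = 0.05)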

Exercise 87 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Binomial}(N, p)$ with known N. Suppose we are testing $H_0: p = p_0$ against $H_1: p \ne p_0$. Find the expression for the asymptotic likelihood ratio test statistic. State the decision rule.

Exercise 88 Suppose $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Geometric}(p)$. Compute the likelihood ratio test statistic for testing $H_0: p = p_0$ against $H_1: p \ne p_0$. Assume n is large. Specify the decision rule.

Exercise 89 Assume $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Poisson}(\lambda)$. We are conducting the likelihood ratio test with $H_0: \lambda = \lambda_0$ and $H_1: \lambda \ne \lambda_0$. Find the test statistic for large n. Find the rejection region.

Exercise 90 Consider $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(0, \theta)$. Produce the test statistic for the asymptotic likelihood ratio test with $H_0: \theta = \theta_0$ and $H_1: \theta \ne \theta_0$. Specify the decision rule.


Exercise 91 Let $X_1, \ldots, X_n \overset{iid}{\sim}$ Exponential with mean β. Write down the asymptotic likelihood ratio test statistic for testing $H_0: \beta = \beta_0$ versus $H_1: \beta \ne \beta_0$. Specify the rejection region.

Exercise 92 Consider $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Normal}(\mu, \sigma^2)$ where σ is given. Find the expression for the asymptotic likelihood ratio test statistic $-2\ln\Lambda$ and show that it has an exact $\chi^2$-distribution with one degree of freedom. Assume $H_0: \mu = \mu_0$ and $H_1: \mu \ne \mu_0$. State the decision rule.

14 Power Function of a Test

Definition The probability of a Type II error is $\beta = P(\text{accept } H_0 \mid H_1 \text{ is true})$. Note that β is a function of θ, whose range is determined by $H_1$. Typically, β is computed for a specific value of θ in that range.

Definition The power of a statistical test is $\text{power} = 1 - \beta = P(\text{reject } H_0 \mid H_1 \text{ is true})$.

Example Suppose we have a single observation X from a $\mathrm{Binomial}(5, p)$ distribution, which we use to test $H_0: p < 1/2$ against $H_1: p \ge 1/2$. For the rejection region $\{X = 5\}$, the power of the test is $\text{power} = 1 - \beta = P(X = 5 \mid p \ge 1/2) = p^5$, $1/2 \le p \le 1$.

Exercise 93 Take $X \sim \mathrm{Binomial}(6, p)$. Suppose we are interested in testing $H_0: p < 1/3$ against $H_1: p \ge 1/3$. Compute the power of the test if we define the rejection region as: (a) $\{X = 6\}$, (b) $\{X = 5, 6\}$, and (c) $\{X = 4, 5, 6\}$.

Exercise 94 Let $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(0, \theta)$. Consider the asymptotic likelihood ratio test for testing $H_0: \theta = \theta_0$ against $H_1: \theta \ne \theta_0$ with significance level α. The test statistic for this test has been derived in Exercise 90. Present the power of this test as a function of θ.

Exercise 95 Let $X_1, \ldots, X_n$ be a random sample taken from a $\mathrm{Normal}(\mu, \sigma^2)$ distribution with known σ. The testing is done between $H_0: \mu = \mu_0$ and $H_1: \mu = \mu_1$ where $\mu_1 > \mu_0$. The rejection region of the test is defined as $\{\bar X > k\}$ for some constant k. Suppose that the significance level α is specified. Prove that the power of this test can be written as

$$\text{power} = 1 - \Phi\Big(\Phi^{-1}(1-\alpha) - \frac{\mu_1 - \mu_0}{\sigma/\sqrt{n}}\Big),$$

where Φ denotes the cdf of the standard normal distribution.
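
The power formula translates directly into code. A sketch, assuming numpy and scipy; the parameter values are illustrative:

    import numpy as np
    from scipy.stats import norm

    def power(mu0, mu1, sigma, n, alpha=0.05):
        # Power of the one-sided z-test with rejection region {Xbar > k}.
        return 1 - norm.cdf(norm.ppf(1 - alpha)
                            - (mu1 - mu0) / (sigma / np.sqrt(n)))

    print(power(mu0=0.0, mu1=0.5, sigma=1.0, n=25))   # about 0.80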


SOLUTIONS TO EXERCISES

Exercise 1 By the definitions of conditional expectation and conditional variance, we write

$$\mathrm{Var}\big(E[X|Y]\big) + E\big(\mathrm{Var}[X|Y]\big) = E\Big[\big(E[X|Y]\big)^2\Big] - \Big(E\,E[X|Y]\Big)^2 + E\,E[X^2|Y] - E\Big[\big(E[X|Y]\big)^2\Big]$$
$$= E(X^2) - \big(EX\big)^2 = \mathrm{Var}(X).$$

Exercise 2 We know that $E[N|\Lambda] = \mathrm{Var}[N|\Lambda] = \Lambda$. Conditioning on Λ, we obtain that the mean of N is $E(N) = E\,E[N|\Lambda] = E(\Lambda)$. The variance of N is computed as $\mathrm{Var}(N) = \mathrm{Var}\big(E[N|\Lambda]\big) + E\big(\mathrm{Var}[N|\Lambda]\big) = \mathrm{Var}(\Lambda) + E(\Lambda)$.

Suppose $P(\Lambda = 2) = 0.2$, $P(\Lambda = 3) = 0.5$, and $P(\Lambda = 4) = 0.3$. Then $E(N) = E(\Lambda) = (2)(0.2) + (3)(0.5) + (4)(0.3) = 3.1$, and $\mathrm{Var}(N) = \mathrm{Var}(\Lambda) + E(\Lambda) = (4)(0.2) + (9)(0.5) + (16)(0.3) - 3.1^2 + 3.1 = 3.59$.
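
The arithmetic can be double-checked in a few lines (a sketch, assuming numpy):

    import numpy as np

    lam = np.array([2.0, 3.0, 4.0])
    prob = np.array([0.2, 0.5, 0.3])

    e_lam = (lam * prob).sum()                    # E(Lambda) = 3.1
    var_lam = (lam**2 * prob).sum() - e_lam**2    # Var(Lambda) = 0.49
    print(e_lam, var_lam + e_lam)                 # E(N) = 3.1, Var(N) = 3.59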

Exercise 3 We have $E[X|Y] = Y/2$ and $\mathrm{Var}[X|Y] = Y^2/12$. Now we condition on Y to get $E(X) = E\,E[X|Y] = E(Y/2) = \frac{1}{2}E(Y)$, and

$$\mathrm{Var}(X) = \mathrm{Var}\big(E[X|Y]\big) + E\big(\mathrm{Var}[X|Y]\big) = \mathrm{Var}(Y/2) + E(Y^2/12) = \tfrac{1}{4}\mathrm{Var}(Y) + \tfrac{1}{12}\big[\mathrm{Var}(Y) + (E(Y))^2\big] = \tfrac{1}{3}\mathrm{Var}(Y) + \tfrac{1}{12}(E(Y))^2.$$

If $Y \sim \mathrm{Uniform}(4, 8)$, then $E(X) = \frac{1}{2}E(Y) = \frac{1}{2}\cdot 6 = 3$, and $\mathrm{Var}(X) = \frac{1}{3}\mathrm{Var}(Y) + \frac{1}{12}(E(Y))^2 = \frac{1}{3}\cdot\frac{16}{12} + \frac{1}{12}\cdot 36 = \frac{31}{9} \approx 3.44$.

Exercise 4 We condition on values of Y and use the double-expectation approach to obtain

$$E(S) = E\Big[\sum_{i=1}^{Y} X_i\Big] = E\,E\Big[\sum_{i=1}^{Y} X_i \,\Big|\, Y\Big] = E\big[Y E(X_1)\big] = E(X_1)E(Y),$$

and

$$\mathrm{Var}(S) = \mathrm{Var}\Big[\sum_{i=1}^{Y} X_i\Big] = \mathrm{Var}\Big(E\Big[\sum_{i=1}^{Y} X_i \,\Big|\, Y\Big]\Big) + E\Big(\mathrm{Var}\Big[\sum_{i=1}^{Y} X_i \,\Big|\, Y\Big]\Big)$$
$$= \mathrm{Var}\big[Y E(X_1)\big] + E\big[Y \mathrm{Var}(X_1)\big] = \big(E(X_1)\big)^2 \mathrm{Var}(Y) + \mathrm{Var}(X_1)E(Y).$$

Exercise 5 (a) By the result of Exercise 4, $E(S) = E(N)E(X_1) = \lambda E(X_1)$, and $\mathrm{Var}(S) = \big(E(X_1)\big)^2\mathrm{Var}(N) + \mathrm{Var}(X_1)E(N) = \big(E(X_1)\big)^2\lambda + \mathrm{Var}(X_1)\lambda = \lambda E(X_1^2)$.
(b) It is given that $\lambda = 20$ and $X_1 \sim \mathrm{Uniform}(10, 110)$. Thus, $E(S) = \lambda E(X_1) = 20 \cdot 60 = 1{,}200$, and

$$\mathrm{Var}(S) = \lambda E(X_1^2) = 20\int_{10}^{110}\frac{x^2}{110 - 10}\, dx = 20 \cdot \frac{(110)^3 - (10)^3}{3 \cdot 100} = 88{,}666.67.$$

Exercise 6 In the formula for the pdf of the i-th order statistic we let $i = 1$ to obtain

$$f_{X_{(1)}}(x) = \frac{n!}{(1-1)!\,(n-1)!}\big[F(x)\big]^{1-1} f(x)\big[1 - F(x)\big]^{n-1} = n\, f(x)\big[1 - F(x)\big]^{n-1}.$$

We can also find the pdf of the minimum as follows:

$$1 - F_{X_{(1)}}(x) = P(X_{(1)} \ge x) = P(X_1 \ge x,\, X_2 \ge x,\, \ldots,\, X_n \ge x) = \big[1 - F(x)\big]^n,$$

therefore $F_{X_{(1)}}(x) = 1 - \big[1 - F(x)\big]^n$, and $f_{X_{(1)}}(x) = F'_{X_{(1)}}(x) = n\, f(x)\big[1 - F(x)\big]^{n-1}$.

Exercise 7 We are given that $f(x) = 1$ and $F(x) = x$, $0 \le x \le 1$. Hence,
(a) the i-th order statistic has the pdf $f_{X_{(i)}}(x) = \frac{n!}{(i-1)!\,(n-i)!}\, x^{i-1}(1-x)^{n-i}$, that is, $X_{(i)} \sim \mathrm{Beta}(i,\, n-i+1)$.
(b) If we let $i = 1$, we get the pdf of the minimum, $f_{X_{(1)}}(x) = n(1-x)^{n-1}$, that is, $X_{(1)} \sim \mathrm{Beta}(1, n)$.
(c) Letting $i = n$, we obtain the pdf of the maximum, $f_{X_{(n)}}(x) = n x^{n-1}$, that is, $X_{(n)} \sim \mathrm{Beta}(n, 1)$.

Exercise 8 The pdf of the X's is $f(x) = \beta\exp\{-\beta x\}$, and the cdf is $F(x) = 1 - \exp\{-\beta x\}$, $x > 0$, $\beta > 0$. Therefore,
(a) the i-th order statistic has the pdf

$$f_{X_{(i)}}(x) = \frac{n!}{(i-1)!\,(n-i)!}\big[1 - \exp\{-\beta x\}\big]^{i-1}\beta\exp\{-\beta x\}\big[\exp\{-\beta x\}\big]^{n-i} = \frac{n!}{(i-1)!\,(n-i)!}\,\beta\exp\{-\beta x(n-i+1)\}\big[1 - \exp\{-\beta x\}\big]^{i-1}.$$

(b) In particular, for $i = 1$, the pdf of the minimum is $f_{X_{(1)}}(x) = n\beta\exp\{-\beta n x\}$, that is, $X_{(1)}$ has an exponential distribution with mean $\frac{1}{n\beta}$.
(c) The pdf of the maximum is derived by letting $i = n$. We have $f_{X_{(n)}}(x) = n\beta\exp\{-\beta x\}\big[1 - \exp\{-\beta x\}\big]^{n-1}$. We can also notice that the cdf of the maximum is $\big(1 - \exp\{-\beta x\}\big)^n$, which can be obtained either by integrating the density or by arguing that all n observations must not exceed x if the maximum doesn't exceed x.

Exercise 9 (a) Since $X_1, \ldots, X_n \overset{iid}{\sim} \mathrm{Uniform}(0, \theta)$, $f_{X_{(n)}}(x) = \frac{n x^{n-1}}{\theta^n}$, $0 < x < \theta$. Thus,

$$F_{X_{(n)}}(x) = \begin{cases} 0, & x \le 0,\\ \big(\frac{x}{\theta}\big)^n, & 0 < x < \theta,\\ 1, & x \ge \theta, \end{cases} \qquad \underset{n\to\infty}{\longrightarrow} \qquad \begin{cases} 0, & x < \theta,\\ 1, & x \ge \theta. \end{cases}$$

The limiting random variable assumes the value θ with probability one.

(b) The cdf of $Z_n = n(\theta - X_{(n)})$ is derived as

$$F_{Z_n}(z) = P(Z_n \le z) = P\big(n(\theta - X_{(n)}) \le z\big) = P\big(X_{(n)} \ge \theta - z/n\big) = 1 - F_{X_{(n)}}(\theta - z/n) = 1 - \Big(\frac{\theta - z/n}{\theta}\Big)^n = 1 - \Big(1 - \frac{z}{n\theta}\Big)^n.$$

Thus, the asymptotic cdf of $Z_n$ is $\lim_{n\to\infty}\Big[1 - \big(1 - \frac{z}{n\theta}\big)^n\Big] = 1 - \exp\{-z/\theta\}$, which corresponds to an exponential distribution with mean θ.

Exercise 10 We write

$$F_{Y_n}(y) = P(Y_n \le y) = P\Big(n\big(1 - F(X_{(n)})\big) \le y\Big) = P\Big(F(X_{(n)}) \ge 1 - \frac{y}{n}\Big) = 1 - P\Big(X_{(n)} \le F^{-1}\big(1 - y/n\big)\Big)$$
$$= 1 - \Big[F\big(F^{-1}(1 - y/n)\big)\Big]^n = 1 - \Big(1 - \frac{y}{n}\Big)^n \underset{n\to\infty}{\longrightarrow} 1 - \exp\{-y\}.$$

This proves that $Y_n$ converges in distribution to an exponential random variable with unit mean.

Exercise 11 (a) The pdf of $X_{(2)}$ is $f_{X_{(2)}}(x) = n(n-1)F(x)f(x)\big(1 - F(x)\big)^{n-2}$. Consequently, the cdf can be computed, integrating by parts, as:

$$F_{X_{(2)}}(x) = \int_{-\infty}^{x} f_{X_{(2)}}(y)\, dy = \int_{-\infty}^{x} n(n-1)F(y)f(y)\big(1 - F(y)\big)^{n-2}\, dy$$
$$= n\Big[-F(y)\big(1 - F(y)\big)^{n-1}\Big|_{-\infty}^{x} + \int_{-\infty}^{x}\big(1 - F(y)\big)^{n-1} f(y)\, dy\Big] = n\Big[-F(x)\big(1 - F(x)\big)^{n-1} - \frac{1}{n}\big(1 - F(y)\big)^n\Big|_{-\infty}^{x}\Big]$$
$$= 1 - \big(1 - F(x)\big)^n - nF(x)\big(1 - F(x)\big)^{n-1}.$$

(b) The limiting distribution of $Y_n = nF(X_{(2)})$ is derived as:

$$F_{Y_n}(y) = P(Y_n \le y) = P\big(nF(X_{(2)}) \le y\big) = P\big(X_{(2)} \le F^{-1}(y/n)\big) = F_{X_{(2)}}\big(F^{-1}(y/n)\big)$$
$$= 1 - \big(1 - F(F^{-1}(y/n))\big)^n - nF(F^{-1}(y/n))\big(1 - F(F^{-1}(y/n))\big)^{n-1} = 1 - \Big(1 - \frac{y}{n}\Big)^n - n\,\frac{y}{n}\Big(1 - \frac{y}{n}\Big)^{n-1}$$
$$\underset{n\to\infty}{\longrightarrow} 1 - \exp\{-y\} - y\exp\{-y\}.$$

We get the limiting density by differentiation: $\big[1 - \exp\{-y\} - y\exp\{-y\}\big]' = \exp\{-y\} - \exp\{-y\} + y\exp\{-y\} = y\exp\{-y\}$, which is the density of the $\mathrm{Gamma}(2, 1)$ distribution.

Exercise 12 The cdf of the observations is $F(x) = 1 - \exp\{-\beta x\}$, $x \ge 0$. Therefore, $F_{X_{(n)}}(x) = \big[F(x)\big]^n = \big[1 - \exp\{-\beta x\}\big]^n$, $x \ge 0$. The asymptotic cdf of $Z_n = \beta X_{(n)} - \ln n$ is obtained as follows:

$$F_{Z_n}(z) = P(Z_n \le z) = P\big(\beta X_{(n)} - \ln n \le z\big) = P\Big(X_{(n)} \le \frac{z + \ln n}{\beta}\Big) = F_{X_{(n)}}\Big(\frac{z + \ln n}{\beta}\Big)$$
$$= \Big[1 - \exp\Big\{-\beta\,\frac{z + \ln n}{\beta}\Big\}\Big]^n = \Big[1 - \frac{\exp\{-z\}}{n}\Big]^n \underset{n\to\infty}{\longrightarrow} \exp\{-\exp\{-z\}\}.$$

Exercise 13 The likelihood function has the form

$$L(p; X_1, \ldots, X_n) = \prod_{i=1}^{n}\binom{N}{X_i} p^{X_i}(1-p)^{N - X_i} = \Big[\prod_{i=1}^{n}\binom{N}{X_i}\Big] p^{\sum_{i=1}^n X_i}(1-p)^{nN - \sum_{i=1}^n X_i}.$$

The log-likelihood function is

$$\ln L(p; X_1, \ldots, X_n) = \ln\Big[\prod_{i=1}^{n}\binom{N}{X_i}\Big] + \sum_{i=1}^{n} X_i \ln p + \Big(nN - \sum_{i=1}^{n} X_i\Big)\ln(1-p).$$

The MLE $\hat p$ solves the equation

$$0 = \frac{\partial \ln L(p; X_1, \ldots, X_n)}{\partial p}\Big|_{p = \hat p} = \frac{\sum_{i=1}^n X_i}{\hat p} - \frac{nN - \sum_{i=1}^n X_i}{1 - \hat p}.$$

Hence, the MLE of p is

$$\hat p = \frac{\sum_{i=1}^n X_i}{nN} = \frac{\bar X}{N}.$$

To understand the structure of this estimator, we can rewrite it as $\hat p = \frac{\sum_{i=1}^n (X_i/N)}{n}$, which is the average of the proportions of successes among N trials.

Exercise 14 The likelihood function has the form

$$L(p; X_1, \ldots, X_n) = \prod_{i=1}^{n} p(1-p)^{X_i - 1} = p^n(1-p)^{\sum_{i=1}^n X_i - n}.$$

The log-likelihood function is

$$\ln L(p; X_1, \ldots, X_n) = n\ln p + \Big(\sum_{i=1}^{n} X_i - n\Big)\ln(1-p).$$

The MLE $\hat p$ solves the equation

$$0 = \frac{\partial \ln L(p; X_1, \ldots, X_n)}{\partial p}\Big|_{p = \hat p} = \frac{n}{\hat p} - \frac{\sum_{i=1}^n X_i - n}{1 - \hat p},$$

and so

$$\hat p = \frac{n}{\sum_{i=1}^n X_i} = \frac{1}{\bar X}.$$

Since the mean of the $X_i$'s is equal to $1/p$, the MLE is an estimator of p derived from estimating the mean by the sample mean $\bar X$.

Exercise 15 The likelihood function is

$$L(\lambda; X_1, \ldots, X_n) = \prod_{i=1}^{n}\frac{\lambda^{X_i}\exp\{-\lambda\}}{X_i!} = \Big[\prod_{i=1}^{n}\frac{1}{X_i!}\Big]\lambda^{\sum_{i=1}^n X_i}\exp\{-n\lambda\},$$

and the log-likelihood function takes the form

$$\ln L(\lambda; X_1, \ldots, X_n) = \ln\Big[\prod_{i=1}^{n}\frac{1}{X_i!}\Big] + \sum_{i=1}^{n} X_i \ln\lambda - n\lambda.$$

The MLE $\hat\lambda$ is the solution of the equation

$$0 = \frac{\partial \ln L(\lambda; X_1, \ldots, X_n)}{\partial \lambda}\Big|_{\lambda = \hat\lambda} = \frac{\sum_{i=1}^n X_i}{\hat\lambda} - n.$$

Hence,

$$\hat\lambda = \frac{\sum_{i=1}^n X_i}{n} = \bar X.$$

Indeed, it is intuitive to estimate the mean λ by the sample mean $\bar X$.


Exercise 16 The likelihood function is derived as

$$L(\theta; X_1, \ldots, X_n) = \prod_{i=1}^{n}\frac{1}{\theta}\, I\{0 \le X_i \le \theta\} = \frac{1}{\theta^n}\, I\{0 \le X_{(n)} \le \theta\}.$$

Here $I\{A\}$ denotes the indicator function of an event A, that is, it is equal to 1 if A occurs, and 0 otherwise. The last equality is justified by noticing that the events $\{0 \le X_i \le \theta\}$ occur simultaneously for all $i = 1, \ldots, n$ if and only if the event $\{0 \le X_{(n)} \le \theta\}$ occurs.

Next, we plot the likelihood function $L(\theta) = L(\theta; X_1, \ldots, X_n) = 1/\theta^n$, $\theta \ge X_{(n)}$, against θ to see where it attains its maximum value.

[Figure: $L(\theta)$ is zero for $\theta < X_{(n)}$ and strictly decreasing for $\theta \ge X_{(n)}$, so it peaks at $\theta = X_{(n)}$.]

As seen on the graph, the maximum is attained at $X_{(n)}$; thus $\hat\theta = X_{(n)}$ is the MLE of θ. On an intuitive level, if $X_1, \ldots, X_n$ are observed, and we know that each of them doesn't exceed θ, then our best guess about the value of θ is the maximum of all the observations.

Exercise 17 The likelihood function is
$$L(a, b) = L(a, b; X_1, \ldots, X_n) = \prod_{i=1}^{n}\frac{1}{b-a}\,I\{a \le X_i \le b\} = \frac{1}{(b-a)^n}\,I\{a \le X_{(1)} \le X_{(n)} \le b\}.$$
To maximize this likelihood function, we have to minimize the denominator $(b-a)^n$, or, equivalently, minimize the distance between $a$ and $b$. Since it must be true that $a \le X_{(1)} \le X_{(n)} \le b$, the distance is minimal when $a$ is equal to $X_{(1)}$ and $b$ is equal to $X_{(n)}$. This leads to the conclusion that the MLE of $a$ is $\hat a = X_{(1)}$ and the MLE of $b$ is $\hat b = X_{(n)}$.

Exercise 18 The likelihood function is written as
$$L(\beta; X_1, \ldots, X_n) = \prod_{i=1}^{n}\frac{1}{\beta}\exp\{-X_i/\beta\} = \frac{1}{\beta^n}\exp\Big\{-\sum_{i=1}^{n} X_i/\beta\Big\},$$
and the log-likelihood function takes the form
$$\ln L(\beta; X_1, \ldots, X_n) = -n\ln\beta - \frac{\sum_{i=1}^n X_i}{\beta}.$$
The maximum likelihood estimator of $\beta$ satisfies the equation
$$0 = \frac{\partial\ln L(\beta; X_1, \ldots, X_n)}{\partial\beta}\Big|_{\beta=\hat\beta} = -\frac{n}{\hat\beta} + \frac{\sum_{i=1}^n X_i}{\hat\beta^2}.$$
From here,
$$\hat\beta = \frac{\sum_{i=1}^n X_i}{n} = \bar X.$$
It is only natural to estimate the mean $\beta$ by the sample mean $\bar X$.

Exercise 19 The likelihood function has the form
$$L(\beta; X_1, \ldots, X_n) = \prod_{i=1}^{n}\beta\exp\{-\beta X_i\} = \beta^n\exp\Big\{-\beta\sum_{i=1}^{n} X_i\Big\},$$
and the log-likelihood function is
$$\ln L(\beta; X_1, \ldots, X_n) = n\ln\beta - \beta\sum_{i=1}^{n} X_i.$$
Differentiating the log-likelihood function, we get an equation for the MLE $\hat\beta$:
$$0 = \frac{\partial\ln L(\beta; X_1, \ldots, X_n)}{\partial\beta}\Big|_{\beta=\hat\beta} = \frac{n}{\hat\beta} - \sum_{i=1}^{n} X_i.$$
Thus,
$$\hat\beta = \frac{n}{\sum_{i=1}^n X_i} = \frac{1}{\bar X}.$$

Exercise 20 First, we obtain the likelihood function. We write
$$L(\mu, \sigma^2; X_1, \ldots, X_n) = \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big\{-\frac{(X_i - \mu)^2}{2\sigma^2}\Big\} = \frac{1}{(2\pi\sigma^2)^{n/2}}\exp\Big\{-\frac{\sum_{i=1}^n (X_i - \mu)^2}{2\sigma^2}\Big\}.$$
Next, we find the log-likelihood function as
$$\ln L(\mu, \sigma^2; X_1, \ldots, X_n) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{\sum_{i=1}^n (X_i - \mu)^2}{2\sigma^2}.$$
The maximum likelihood estimators $\hat\mu$ and $\hat\sigma^2$ are solutions of the system of two equations
$$\begin{cases}0 = \dfrac{\partial\ln L(\mu, \sigma^2; X_1, \ldots, X_n)}{\partial\mu}\Big|_{\mu=\hat\mu,\,\sigma^2=\hat\sigma^2} = \dfrac{\sum_{i=1}^n (X_i - \hat\mu)}{\hat\sigma^2},\\[2ex] 0 = \dfrac{\partial\ln L(\mu, \sigma^2; X_1, \ldots, X_n)}{\partial\sigma^2}\Big|_{\mu=\hat\mu,\,\sigma^2=\hat\sigma^2} = -\dfrac{n}{2\hat\sigma^2} + \dfrac{\sum_{i=1}^n (X_i - \hat\mu)^2}{2\hat\sigma^4},\end{cases}$$
so
$$\hat\mu = \frac{\sum_{i=1}^n X_i}{n} = \bar X, \quad\text{and}\quad \hat\sigma^2 = \frac{\sum_{i=1}^n (X_i - \bar X)^2}{n}.$$
Since $\mu$ is the mean of the normal distribution, the estimator is indeed intuitive. The variance is estimated by the average squared distance between each observation and the sample mean, which is a natural measure of spread.
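A short numerical check of the two formulas (a sketch on simulated data; note that the MLE of $\sigma^2$ uses divisor $n$, which matches NumPy's default `ddof=0`):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=10000)

mu_hat = x.mean()                          # MLE of mu
sigma2_hat = ((x - mu_hat) ** 2).mean()    # MLE of sigma^2 (divisor n)
print(mu_hat, sigma2_hat)                  # approx. 5 and 4
print(np.isclose(sigma2_hat, x.var()))     # np.var uses ddof=0 by default
```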

Exercise 21 We derive the likelihood function as follows:
$$L(\alpha; X_1, \ldots, X_n) = \prod_{i=1}^{n}\alpha X_i^{\alpha-1}\exp\{-X_i^{\alpha}\} = \alpha^n\Big(\prod_{i=1}^{n} X_i\Big)^{\alpha-1}\exp\Big\{-\sum_{i=1}^{n} X_i^{\alpha}\Big\}.$$
The log-likelihood function is given by
$$\ln L(\alpha; X_1, \ldots, X_n) = n\ln\alpha + (\alpha - 1)\sum_{i=1}^{n}\ln X_i - \sum_{i=1}^{n} X_i^{\alpha}.$$
Differentiating the log-likelihood function with respect to $\alpha$ and setting the derivative equal to zero, we obtain the equation that the MLE of $\alpha$ solves:
$$0 = \frac{\partial\ln L(\alpha; X_1, \ldots, X_n)}{\partial\alpha}\Big|_{\alpha=\hat\alpha} = \frac{n}{\hat\alpha} + \sum_{i=1}^{n}\ln X_i - \sum_{i=1}^{n} X_i^{\hat\alpha}\ln X_i.$$
There is no explicit solution to this equation, thus it has to be solved numerically. For the observations $X_1 = 0.4$, $X_2 = 0.3$, and $X_3 = 0.6$, the MLE of $\alpha$ solves
$$\frac{3}{\hat\alpha} + \ln 0.4 + \ln 0.3 + \ln 0.6 - \big(0.4^{\hat\alpha}\ln 0.4 + 0.3^{\hat\alpha}\ln 0.3 + 0.6^{\hat\alpha}\ln 0.6\big) = 0.$$
Solving numerically (in Excel or with any scalar root finder, for example), it is easy to verify that $\hat\alpha \approx 1.519$.
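The root is straightforward to find with a standard root solver. Here is a minimal sketch using SciPy's `brentq`; the bracket $[0.1, 10]$ is our own choice, made by inspecting the sign of the score function at its endpoints.

```python
import numpy as np
from scipy.optimize import brentq

x = np.array([0.4, 0.3, 0.6])
n = len(x)

def score(a):
    # n/alpha + sum(ln X_i) - sum(X_i^alpha * ln X_i)
    return n / a + np.log(x).sum() - (x**a * np.log(x)).sum()

alpha_hat = brentq(score, 0.1, 10.0)   # score changes sign on [0.1, 10]
print(alpha_hat)                       # approx. 1.519
```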

Exercise 22 In Example 1 we have shown that the maximum of the likelihood function
$$L(p) = L(p; X_1, \ldots, X_n) = p^{\sum_{i=1}^n X_i}(1-p)^{n - \sum_{i=1}^n X_i}$$
is attained when $p = \bar X$.

We plot this likelihood function against values of $p$ when $\bar X$ is on either side of $1/5$ to see where the maxima of this function are attained on $[0, 1/5]$.

[Figure: two plots of $L(p)$ against $p$; in the first, the peak at $\bar X$ lies inside $[0, 1/5]$; in the second, $\bar X > 1/5$ and $L(p)$ is still increasing at $p = 1/5$.]

From the graphs, if $0 \le \bar X \le 1/5$, then the maximum of $L(p)$ on the interval $0 \le p \le 1/5$ is attained at $\bar X$, whereas when $\bar X > 1/5$, the maximum of the likelihood function on this interval is attained at $1/5$. Thus, the MLE of $p$ is
$$\hat p = \begin{cases}\bar X, & \text{if } 0 \le \bar X \le 1/5,\\ 1/5, & \text{if } \bar X > 1/5.\end{cases}$$

Exercise 23 We know from Exercise 18 that in the general case of $\beta > 0$, the likelihood function $L(\beta) = \frac{1}{\beta^n}\exp\big\{-\sum_{i=1}^{n} X_i/\beta\big\}$ attains its maximum at $\hat\beta = \bar X$. In this exercise, the values of $\beta$ are bounded from below by 4. The two graphs below present the two possible scenarios: when $0 \le \bar X < 4$ and when $\bar X \ge 4$.

[Figure: two plots of $L(\beta)$ against $\beta$; in the first, the peak at $\bar X$ lies to the left of 4, so $L(\beta)$ is decreasing on $[4, \infty)$; in the second, the peak at $\bar X$ lies inside $[4, \infty)$.]

As seen on the graphs, the maximum of the likelihood function on $[4, \infty)$ is attained at $\hat\beta = 4$ if $0 \le \bar X < 4$, and at $\hat\beta = \bar X$ if $\bar X \ge 4$.

Exercise 24 From Exercise 20, we know that if there are no restrictions on the value of $\mu$, the maximum of the likelihood function
$$L(\mu) = \frac{1}{(2\pi)^{n/2}}\exp\Big\{-\sum_{i=1}^{n}(X_i - \mu)^2/2\Big\}$$
is attained at $\hat\mu = \bar X$. In the present exercise, it is assumed that $\mu \ge 0$.

To see what the plot of $L(\mu)$ looks like, we rewrite the likelihood function as
$$L(\mu) = \frac{1}{(2\pi)^{n/2}}\exp\Big\{-\frac{1}{2}\Big(\sum_{i=1}^{n} X_i^2 - 2\mu n\bar X + n\mu^2\Big)\Big\} = \frac{1}{(2\pi)^{n/2}}\exp\Big\{-\frac{1}{2}\sum_{i=1}^{n} X_i^2 + \frac{n}{2}\bar X^2\Big\}\exp\Big\{-\frac{n}{2}(\mu - \bar X)^2\Big\}.$$
From here we can see that $L(\mu)$ is bell-shaped and is centered around $\bar X$. Now we plot $L(\mu)$ in the cases $\bar X \ge 0$ and $\bar X < 0$, respectively, to determine at what points the maxima are attained on $[0, \infty)$.

[Figure: two plots of the bell-shaped $L(\mu)$; in the first, the center $\bar X \ge 0$ lies in the admissible region; in the second, $\bar X < 0$ and $L(\mu)$ is decreasing on $[0, \infty)$.]

As depicted on the graphs, in the case when $\bar X \ge 0$, the MLE of $\mu$ is $\hat\mu = \bar X$, while if $\bar X < 0$, then $\hat\mu = 0$.

Exercise 25 The likelihood function is calculated as
$$L(\theta; X_1, X_2, X_3) = f(1;\theta)f(4;\theta)f(2;\theta) = \begin{cases}(1/4)(1/4)(1/2) = 0.03125, & \text{if } \theta = 0,\\ (1/2)(1/2)(0) = 0, & \text{if } \theta = 1/3,\\ (3/5)(1/5)(1/5) = 0.024, & \text{if } \theta = 1/4.\end{cases}$$
The largest value of the likelihood function is 0.03125 and corresponds to the MLE $\hat\theta = 0$.

Exercise 26 We know from Example 1 that the MLE of $p$ is $\hat p = \bar X$, and by Theorem 1, the MLE of $\mathrm{Var}(X_1) = p(1-p)$ is $\bar X(1 - \bar X)$.

Exercise 27 By Exercise 14, the MLE of $p$ is $\hat p = 1/\bar X$. Hence, using Theorem 1, we find the MLE of $E(X_1) = 1/p$ to be $1/(1/\bar X) = \bar X$.

Exercise 28 By Exercise 15, the MLE of $\lambda$ is $\hat\lambda = \bar X$. Thus, using Theorem 1, we conclude that the MLE of $P(X_1 = 1) = \lambda\exp\{-\lambda\}$ is $\bar X\exp\{-\bar X\}$.

Exercise 29 As shown in Exercise 16, the MLE of $\theta$ is $\hat\theta = X_{(n)}$. Applying Theorem 1, we get that the MLE of $\mathrm{Var}(X_1) = \theta^2/12$ is $X_{(n)}^2/12$.

Exercise 30 The likelihood function has the form
$$L(\theta; X_1, \ldots, X_n) = \prod_{i=1}^{n}\big(1 - \exp\{-\theta\}\big)^{X_i}\big(\exp\{-\theta\}\big)^{1 - X_i} = \big(1 - \exp\{-\theta\}\big)^{\sum_{i=1}^n X_i}\big(\exp\{-\theta\}\big)^{n - \sum_{i=1}^n X_i}.$$
This is a Poisson distribution truncated at $x = 1$; alternatively, it can be viewed as a Bernoulli distribution with $p = 1 - \exp\{-\theta\}$. The quickest way to find the MLE of $\theta$ is to recall from Example 1 that the MLE of $p$ is $\hat p = \bar X$, and then use Theorem 1 to conclude that the MLE of $\theta$ solves $\hat p = \bar X = 1 - \exp\{-\hat\theta\}$. Thus, $\hat\theta = -\ln(1 - \bar X)$.

Exercise 31 In Exercise 20 we have shown that the MLE of $\sigma^2$ is $\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar X)^2$. We use this result and Theorem 1 to conclude that the MLE of $\sigma$ is
$$\hat\sigma = \sqrt{\hat\sigma^2} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar X)^2}.$$

Exercise 32 To find the MM estimator of $p$, we equate the theoretical and empirical first moments. We have
$$E(X_1) = p = \frac{\sum_{i=1}^n X_i}{n} = \bar X.$$
The solution is $\hat p = \bar X$, and, thus, the MM estimator coincides with the MLE for $p$.

Exercise 33 The MM estimator for $p$ solves the equation $E(X_1) = Np = \bar X$, or $\hat p = \bar X/N$. It is the same as the MLE.

Exercise 34 The MM estimator for $p$ satisfies
$$E(X_1) = \frac{1}{p} = \bar X.$$
Hence, $\hat p = 1/\bar X$, and it coincides with the MLE for $p$.

Exercise 35 The MM estimator for $\lambda$ is the solution of the equation
$$E(X_1) = \lambda = \bar X,$$
and so, $\hat\lambda = \bar X$. It is the same as the MLE.

Exercise 36 To find the MM estimator for $\theta$ we write
$$E(X_1) = \frac{\theta}{2} = \bar X,$$
thus, $\hat\theta = 2\bar X$. This estimator is not the same as $X_{(n)}$, the MLE of $\theta$. Moreover, for some observations, $2\bar X$ is smaller than $X_{(n)}$, and hence the MM estimator doesn't always make sense. For example, if $X_1 = 1$, $X_2 = 1$, $X_3 = 2$, and $X_4 = 8$, then $2\bar X = 6$, whereas $X_{(4)} = 8$, so we have an observation that exceeds our MM estimate of $\theta$.
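The numerical example above can be checked directly (a minimal sketch):

```python
import numpy as np

x = np.array([1.0, 1.0, 2.0, 8.0])
theta_mm = 2 * x.mean()      # MM estimate: 2 * X-bar = 6
print(theta_mm, x.max())     # 6.0 < 8.0: an observation exceeds theta-hat
```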

Exercise 37 We find the MM estimators for $a$ and $b$ by solving the system of equations:
$$\begin{cases}E(X_1) = \dfrac{a+b}{2} = \bar X,\\[1.5ex] E(X_1^2) = \displaystyle\int_a^b\frac{x^2}{b-a}\,dx = \frac{b^3 - a^3}{3(b-a)} = \frac{b^2 + ab + a^2}{3} = \frac{\sum_{i=1}^n X_i^2}{n}.\end{cases}$$
Hence, $\hat a$ and $\hat b$ satisfy the equations
$$\begin{cases}\hat a + \hat b = 2\bar X,\\ \hat a^2 + \hat a\hat b + \hat b^2 = 3\,\dfrac{\sum_{i=1}^n X_i^2}{n}.\end{cases}$$
Squaring the first equation and subtracting the second, we get
$$\begin{cases}\hat a + \hat b = 2\bar X,\\ \hat a\hat b = 4\bar X^2 - 3\,\dfrac{\sum_{i=1}^n X_i^2}{n}.\end{cases}$$
Letting $\hat b = 2\bar X - \hat a$ and plugging it into the second equation, we arrive at a quadratic equation. The system becomes
$$\begin{cases}\hat a + \hat b = 2\bar X,\\ \hat a^2 - 2\bar X\hat a + 4\bar X^2 - 3\,\dfrac{\sum_{i=1}^n X_i^2}{n} = 0.\end{cases}$$
The solution of this system is
$$\hat a = \bar X - \sqrt{3\Big(\frac{\sum_{i=1}^n X_i^2}{n} - \bar X^2\Big)}, \quad\text{and}\quad \hat b = \bar X + \sqrt{3\Big(\frac{\sum_{i=1}^n X_i^2}{n} - \bar X^2\Big)}.$$
Note that these estimators are not the same as the MLEs $X_{(1)}$ and $X_{(n)}$. In addition, they may not make sense for some data sets, where the minimum is below $\hat a$ and/or the maximum is above $\hat b$.
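A small simulation sketch (the true endpoints 2 and 7 are arbitrary) computes the MM estimates and checks whether they cover the data:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(2.0, 7.0, size=200)

m1, m2 = x.mean(), (x ** 2).mean()
half = np.sqrt(3 * (m2 - m1 ** 2))       # sqrt(3(sum X_i^2/n - X-bar^2))
a_mm, b_mm = m1 - half, m1 + half
print(a_mm, b_mm)                        # approx. 2 and 7
print(a_mm <= x.min(), b_mm >= x.max())  # either can be False on a given sample
```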

Exercise 38 The MM estimator for $\beta$ is the solution of the equation $E(X_1) = \beta = \bar X$; thus, $\hat\beta = \bar X$, which is equal to the MLE.

Exercise 39 The MM estimator $\hat\beta$ satisfies $\bar X = E(X_1) = 1/\hat\beta$. Thus, $\hat\beta = 1/\bar X$, the same as the MLE.

Exercise 40 We write $E(\hat p) = E(\bar X/N) = E(X_1)/N = Np/N = p$; thus the estimator is unbiased.

Exercise 41 The sum $\sum_{i=1}^n X_i$ of $n$ independent Geometric($p$) random variables has a NegativeBinomial($n, p$) distribution with the pmf
$$P\Big(\sum_{i=1}^{n} X_i = x\Big) = \binom{x-1}{n-1}p^n(1-p)^{x-n}, \quad x = n, n+1, \ldots.$$
So, we write
$$E(\hat p) = E\Big(\frac{1}{\bar X}\Big) = E\Big(\frac{n}{\sum_{i=1}^n X_i}\Big) = \sum_{x=n}^{\infty}\frac{n}{x}\binom{x-1}{n-1}p^n(1-p)^{x-n} \ne p.$$
Thus, the estimator is biased.

For an estimator $\hat p = \hat p(X_1)$ to be an unbiased estimator of $p$, it must satisfy the identity $E(\hat p) = E(\hat p(X_1)) = \sum_{x=1}^{\infty}\hat p(x)\,p(1-p)^{x-1} = p$. Since the left-hand side is a power series in $p$, the only solution is
$$\hat p(X_1) = \begin{cases}1, & \text{if } X_1 = 1,\\ 0, & \text{if } X_1 = 2, 3, \ldots.\end{cases}$$

Exercise 42 We have that $E(\bar X) = E(X_1) = 1/p$; thus, $\bar X$ is an unbiased estimator of $1/p$.

Exercise 43 We write $E(\hat\lambda) = E(\bar X) = \lambda$; hence, $\hat\lambda$ is an unbiased estimator of $\lambda$.

Exercise 44 We start by finding the cdf of the largest order statistic:
$$F_{X_{(n)}}(x;\theta) = P(X_{(n)} \le x) = P(X_1 \le x, \ldots, X_n \le x) = P(X_1 \le x)\cdots P(X_n \le x), \text{ by independence,} = \frac{x^n}{\theta^n}, \quad 0 \le x \le \theta.$$
From here, the density of $X_{(n)}$ is $f_{X_{(n)}}(x;\theta) = F'_{X_{(n)}}(x;\theta) = nx^{n-1}/\theta^n$, $0 \le x \le \theta$. And thus the expected value is derived as
$$E(X_{(n)}) = \int_0^{\theta} x\,\frac{nx^{n-1}}{\theta^n}\,dx = \frac{n}{n+1}\theta = \Big(1 - \frac{1}{n+1}\Big)\theta < \theta.$$
We can see that $X_{(n)}$ is a biased estimator of $\theta$, and, in fact, it underestimates $\theta$ by $1/(n+1)$th of $\theta$, on average. An unbiased estimator of $\theta$ based on the maximum value is $\frac{n+1}{n}X_{(n)}$.

In the case of the MM estimator of $\theta$ we write $E(2\bar X) = 2E(X_1) = 2(\theta/2) = \theta$. Thus, it is unbiased.
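A Monte Carlo check of these three facts (a sketch; $\theta$, $n$, and the number of replications are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 3.0, 10, 100000
x = rng.uniform(0, theta, size=(reps, n))
xmax = x.max(axis=1)

print(xmax.mean())                   # approx. n*theta/(n+1) = 2.727 (biased)
print(((n + 1) / n * xmax).mean())   # approx. theta = 3 (bias-corrected)
print((2 * x.mean(axis=1)).mean())   # approx. theta = 3 (MM, unbiased)
```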

Exercise 45 We derive the cdf of the smallest order statistic. We have
$$P(X_{(1)} \ge x) = P(X_1 \ge x, \ldots, X_n \ge x) = P(X_1 \ge x)\cdots P(X_n \ge x), \text{ by independence,} = \frac{(b-x)^n}{(b-a)^n}, \quad a \le x \le b.$$
Hence, the cdf of $X_{(1)}$ is
$$F_{X_{(1)}}(x; a, b) = P(X_{(1)} \le x) = 1 - P(X_{(1)} \ge x) = 1 - \frac{(b-x)^n}{(b-a)^n}, \quad a \le x \le b.$$
The pdf is equal to $f_{X_{(1)}}(x; a, b) = F'_{X_{(1)}}(x; a, b) = n(b-x)^{n-1}/(b-a)^n$, $a \le x \le b$. The expectation is found as
$$E(X_{(1)}) = \int_a^b x\,\frac{n(b-x)^{n-1}}{(b-a)^n}\,dx = -\int_a^b\frac{(b-x-b)\,n(b-x)^{n-1}}{(b-a)^n}\,dx$$
$$= -\int_a^b\frac{n(b-x)^n}{(b-a)^n}\,dx + b\int_a^b\frac{n(b-x)^{n-1}}{(b-a)^n}\,dx = -\frac{n}{n+1}(b-a) + b = a + \frac{1}{n+1}(b-a) > a.$$
Thus, $X_{(1)}$ is biased, and overestimates the lower endpoint $a$ by $1/(n+1)$th of the length $b-a$ of the interval, on average.

Further, the cdf of the $n$th order statistic is
$$F_{X_{(n)}}(x; a, b) = P(X_{(n)} \le x) = P(X_1 \le x, \ldots, X_n \le x) = P(X_1 \le x)\cdots P(X_n \le x), \text{ by independence,} = \frac{(x-a)^n}{(b-a)^n}, \quad a \le x \le b.$$
The pdf of $X_{(n)}$ is $f_{X_{(n)}}(x; a, b) = F'_{X_{(n)}}(x; a, b) = n(x-a)^{n-1}/(b-a)^n$, $a \le x \le b$. The mean is computed as
$$E(X_{(n)}) = \int_a^b x\,\frac{n(x-a)^{n-1}}{(b-a)^n}\,dx = \int_a^b\frac{(x-a+a)\,n(x-a)^{n-1}}{(b-a)^n}\,dx$$
$$= \int_a^b\frac{n(x-a)^n}{(b-a)^n}\,dx + a\int_a^b\frac{n(x-a)^{n-1}}{(b-a)^n}\,dx = \frac{n}{n+1}(b-a) + a = b - \frac{1}{n+1}(b-a) < b.$$
This indicates that $X_{(n)}$ is biased, and, on average, it underestimates the upper endpoint $b$ by $1/(n+1)$th of the length $b-a$ of the interval.

To see what estimators based on $X_{(1)}$ and $X_{(n)}$ are unbiased estimators of $a$ and $b$, we solve the following system with respect to $a$ and $b$:
$$\begin{cases}E(X_{(1)}) = a + \dfrac{1}{n+1}(b-a),\\[1.5ex] E(X_{(n)}) = b - \dfrac{1}{n+1}(b-a).\end{cases}$$
Adding and subtracting the equations yield
$$\begin{cases}E\big(X_{(1)} + X_{(n)}\big) = a + b,\\[1ex] E\big(X_{(n)} - X_{(1)}\big) = \dfrac{n-1}{n+1}(b-a),\end{cases} \quad\text{or}\quad \begin{cases}E\big(X_{(1)} + X_{(n)}\big) = a + b,\\[1ex] E\Big[\dfrac{n+1}{n-1}\big(X_{(n)} - X_{(1)}\big)\Big] = b - a.\end{cases}$$
Again adding and subtracting the equations yield
$$\begin{cases}a = E\Big[\dfrac{1}{2}\Big(X_{(1)} + X_{(n)} - \dfrac{n+1}{n-1}\big(X_{(n)} - X_{(1)}\big)\Big)\Big] = E\Big[\dfrac{1}{n-1}\big(nX_{(1)} - X_{(n)}\big)\Big],\\[2ex] b = E\Big[\dfrac{1}{2}\Big(X_{(1)} + X_{(n)} + \dfrac{n+1}{n-1}\big(X_{(n)} - X_{(1)}\big)\Big)\Big] = E\Big[\dfrac{1}{n-1}\big(nX_{(n)} - X_{(1)}\big)\Big].\end{cases}$$
Thus, $\frac{1}{n-1}\big(nX_{(1)} - X_{(n)}\big)$ is an unbiased estimator of $a$, and $\frac{1}{n-1}\big(nX_{(n)} - X_{(1)}\big)$ is an unbiased estimator of $b$.

Exercise 46 We know that $E(\bar X) = E(X_1) = \beta$. Thus, $\bar X$ is an unbiased estimator of $\beta$.

Exercise 47 As the sum of $n$ independent exponential random variables, $\sum_{i=1}^n X_i$ has a Gamma distribution with the pdf $f(x) = \frac{\beta^n x^{n-1}\exp\{-\beta x\}}{(n-1)!}$, $x > 0$. Hence, the expected value of $1/\bar X$ can be computed explicitly as follows:
$$E\big(1/\bar X\big) = E\Big(n\Big/\sum_{i=1}^{n} X_i\Big) = \int_0^{\infty}\frac{n}{x}\,\frac{\beta^n x^{n-1}\exp\{-\beta x\}}{(n-1)!}\,dx = \frac{n(n-2)!}{(n-1)!}\,\beta\int_0^{\infty}\frac{\beta^{n-1}x^{n-2}\exp\{-\beta x\}}{(n-2)!}\,dx = \frac{n}{n-1}\,\beta.$$
Thus, $1/\bar X$ is a biased estimator of $\beta$, but $\dfrac{n-1}{n\bar X}$ is unbiased.

Exercise 48 Since $E(\bar X) = E(X_1) = \mu$, $\bar X$ is an unbiased estimator of $\mu$. Next, we compute the expected value of $\hat\sigma^2$. We write
$$E(\hat\sigma^2) = E\Big[\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar X)^2\Big] = \frac{1}{n}E\Big(\sum_{i=1}^{n} X_i^2 - n\bar X^2\Big) = E(X_1^2) - E(\bar X^2)$$
$$= \mathrm{Var}(X_1) + \big(E(X_1)\big)^2 - \big[\mathrm{Var}(\bar X) + \big(E(\bar X)\big)^2\big] = \sigma^2 + \mu^2 - \Big(\frac{\sigma^2}{n} + \mu^2\Big) = \frac{n-1}{n}\,\sigma^2.$$
Hence, $\dfrac{n}{n-1}\hat\sigma^2 = \dfrac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X)^2$ is an unbiased estimator of $\sigma^2$.

Exercise 49 Let $X \sim \mathrm{Binomial}(N, p)$. The pmf has the form $f(x; p) = \binom{N}{x}p^x(1-p)^{N-x}$, $x = 0, \ldots, N$. We compute $\ln f(x; p) = \ln\binom{N}{x} + x\ln p + (N - x)\ln(1-p)$, and, thus,
$$\frac{\partial\ln f(x; p)}{\partial p} = \frac{x}{p} - \frac{N-x}{1-p}, \quad\text{and}\quad \frac{\partial^2\ln f(x; p)}{\partial p^2} = -\frac{x}{p^2} - \frac{N-x}{(1-p)^2}.$$
Hence, the Fisher information is equal to
$$I(p) = -E\Big(\frac{\partial^2\ln f(X; p)}{\partial p^2}\Big) = -E\Big(-\frac{X}{p^2} - \frac{N-X}{(1-p)^2}\Big) = \frac{E(X)}{p^2} + \frac{N - E(X)}{(1-p)^2} = \frac{Np}{p^2} + \frac{N - Np}{(1-p)^2} = \frac{N}{p(1-p)}.$$
The CRLB is found as
$$\frac{1}{n\,I(p)} = \frac{1}{n\,\frac{N}{p(1-p)}} = \frac{p(1-p)}{nN}.$$
The estimator $\hat p = \bar X/N$ is efficient because it is unbiased and
$$\mathrm{Var}(\hat p) = \mathrm{Var}\Big(\frac{\bar X}{N}\Big) = \frac{\mathrm{Var}(X_1)}{nN^2} = \frac{Np(1-p)}{nN^2} = \frac{p(1-p)}{nN} = \mathrm{CRLB}.$$
The estimator $X_1/N$ is an unbiased estimator of $p$ since $E(X_1/N) = Np/N = p$, but it is not efficient, for
$$\mathrm{Var}\Big(\frac{X_1}{N}\Big) = \frac{Np(1-p)}{N^2} = \frac{p(1-p)}{N} > \frac{p(1-p)}{nN} = \mathrm{CRLB} \quad (n > 1).$$
Likewise, $(X_1 + X_2)/(2N)$ is an unbiased estimator of $p$ because
$$E\Big[\frac{X_1 + X_2}{2N}\Big] = \frac{E(X_1) + E(X_2)}{2N} = \frac{2Np}{2N} = p,$$
however, it is not efficient due to the fact that
$$\mathrm{Var}\Big[\frac{X_1 + X_2}{2N}\Big] = \frac{\mathrm{Var}(X_1) + \mathrm{Var}(X_2)}{4N^2} = \frac{2Np(1-p)}{4N^2} = \frac{p(1-p)}{2N} > \frac{p(1-p)}{nN} = \mathrm{CRLB} \quad (n > 2).$$
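The variance comparison is easy to verify by simulation (a sketch; all constants below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
N, p, n, reps = 8, 0.4, 25, 200000
x = rng.binomial(N, p, size=(reps, n))

p_hat = x.mean(axis=1) / N                   # efficient estimator X-bar/N
print(p_hat.var())                           # approx. the CRLB below
print(p * (1 - p) / (n * N))                 # CRLB = 0.0012
print((x[:, 0] / N).var(), p * (1 - p) / N)  # X_1/N: variance p(1-p)/N > CRLB
```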

Exercise 50 Let $X \sim \mathrm{Poisson}(\lambda)$. The pmf can be written as $f(x;\lambda) = \frac{\lambda^x}{x!}\exp\{-\lambda\}$, $x = 0, 1, \ldots$. We obtain that $\ln f(x;\lambda) = x\ln\lambda - \ln(x!) - \lambda$, and, thus,
$$\frac{\partial\ln f(x;\lambda)}{\partial\lambda} = \frac{x}{\lambda} - 1, \quad\text{and}\quad \frac{\partial^2\ln f(x;\lambda)}{\partial\lambda^2} = -\frac{x}{\lambda^2}.$$
Therefore, the Fisher information is equal to
$$I(\lambda) = -E\Big(\frac{\partial^2\ln f(X;\lambda)}{\partial\lambda^2}\Big) = -E\Big(-\frac{X}{\lambda^2}\Big) = \frac{E(X)}{\lambda^2} = \frac{\lambda}{\lambda^2} = \frac{1}{\lambda}.$$
The CRLB is found as $\dfrac{1}{n\,I(\lambda)} = \dfrac{1}{n/\lambda} = \dfrac{\lambda}{n}$.

The estimator $\hat\lambda = \bar X$ is unbiased, and its variance is equal to the CRLB. Indeed,
$$\mathrm{Var}(\hat\lambda) = \mathrm{Var}(\bar X) = \frac{\mathrm{Var}(X_1)}{n} = \frac{\lambda}{n} = \mathrm{CRLB}.$$
Thus, $\bar X$ is an efficient estimator of $\lambda$.

The estimator $(X_1 + X_3 + X_5)/3$ is unbiased, that is,
$$E\Big[\frac{X_1 + X_3 + X_5}{3}\Big] = \frac{E(X_1) + E(X_3) + E(X_5)}{3} = \frac{3\lambda}{3} = \lambda,$$
but it is not an efficient estimator of $\lambda$ since
$$\mathrm{Var}\Big[\frac{X_1 + X_3 + X_5}{3}\Big] = \frac{\mathrm{Var}(X_1) + \mathrm{Var}(X_3) + \mathrm{Var}(X_5)}{9} = \frac{3\lambda}{9} = \frac{\lambda}{3} > \frac{\lambda}{n} = \mathrm{CRLB} \quad (n > 3).$$

Exercise 51 The density of a Uniform(0, $\theta$) distribution has the form
$$f(x;\theta) = \frac{1}{\theta}\,I\{0 \le x \le \theta\}.$$
Therefore, the support in $x$ depends on $\theta$, and, by definition, the Fisher information is equal to infinity, that is, $I(\theta) = \infty$. As a result, $\mathrm{CRLB} = 1/(n\,I(\theta)) = 0$. This is not an informative lower bound, since it doesn't help to find an estimator of $\theta$ that is "the best" in some way (for example, efficient in the Cramér–Rao sense).

It is very instructive to see why we define the Fisher information to be infinite in this case. We will formally find the expression for the Fisher information, disregarding the fact that the support of the density in $x$ depends on $\theta$. We write
$$\ln f(x;\theta) = -\ln\theta, \quad \frac{\partial\ln f(x;\theta)}{\partial\theta} = -\frac{1}{\theta},$$
hence,
$$I(\theta) \stackrel{\text{def}}{=} E\Big(\frac{\partial\ln f(X;\theta)}{\partial\theta}\Big)^2 = \frac{1}{\theta^2}, \quad\text{and}\quad \mathrm{CRLB} = \frac{1}{n/\theta^2} = \frac{\theta^2}{n}.$$
Now we consider, for example, the MM unbiased estimator $\hat\theta = 2\bar X$. Its variance is computed as
$$\mathrm{Var}(2\bar X) = \frac{4\,\mathrm{Var}(X_1)}{n} = \frac{4\theta^2}{12n} = \frac{\theta^2}{3n},$$
which is smaller than the formal bound:
$$\frac{\theta^2}{3n} < \frac{\theta^2}{n} = \mathrm{CRLB}.$$
This brings us to a contradiction. It follows that the Cramér–Rao theorem does not apply to the uniform distribution, which is why in this case the Fisher information is defined to be infinite.

Exercise 52 The density is of the form
$$f(x;\beta) = \frac{1}{\beta}\exp\{-x/\beta\}, \quad x > 0.$$
Thus,
$$\ln f(x;\beta) = -\ln\beta - \frac{x}{\beta}, \quad \frac{\partial\ln f(x;\beta)}{\partial\beta} = -\frac{1}{\beta} + \frac{x}{\beta^2}, \quad \frac{\partial^2\ln f(x;\beta)}{\partial\beta^2} = \frac{1}{\beta^2} - \frac{2x}{\beta^3},$$
and, thus,
$$I(\beta) = -E\Big(\frac{\partial^2\ln f(X;\beta)}{\partial\beta^2}\Big) = -E\Big(\frac{1}{\beta^2} - \frac{2X}{\beta^3}\Big) = -\frac{1}{\beta^2} + \frac{2E(X)}{\beta^3} = -\frac{1}{\beta^2} + \frac{2\beta}{\beta^3} = \frac{1}{\beta^2}.$$
The CRLB is found as
$$\mathrm{CRLB} = \frac{1}{n/\beta^2} = \frac{\beta^2}{n}.$$
To verify that the unbiased estimator $\hat\beta = \bar X$ is efficient, we write
$$\mathrm{Var}(\bar X) = \frac{\mathrm{Var}(X_1)}{n} = \frac{\beta^2}{n} = \mathrm{CRLB}.$$

Exercise 53 The density is $f(x;\beta) = \beta\exp\{-\beta x\}$, $x > 0$. Hence, the logarithm of the density has the form $\ln f(x;\beta) = \ln\beta - \beta x$, whose partial derivatives are
$$\frac{\partial\ln f(x;\beta)}{\partial\beta} = \frac{1}{\beta} - x, \quad\text{and}\quad \frac{\partial^2\ln f(x;\beta)}{\partial\beta^2} = -\frac{1}{\beta^2}.$$
Thus, the Fisher information is equal to
$$I(\beta) = -E\Big(\frac{\partial^2\ln f(X;\beta)}{\partial\beta^2}\Big) = -E\Big(-\frac{1}{\beta^2}\Big) = \frac{1}{\beta^2}.$$
The $\mathrm{CRLB} = \dfrac{1}{n\,I(\beta)} = \dfrac{\beta^2}{n}$.

Next, we compute the variance of the unbiased estimator $\dfrac{n-1}{n\bar X}$. We get
$$\mathrm{Var}\Big(\frac{n-1}{n\bar X}\Big) = (n-1)^2\Big[\int_0^{\infty}\frac{1}{x^2}\,\frac{\beta^n x^{n-1}\exp\{-\beta x\}}{(n-1)!}\,dx - \Big(\int_0^{\infty}\frac{1}{x}\,\frac{\beta^n x^{n-1}\exp\{-\beta x\}}{(n-1)!}\,dx\Big)^2\Big]$$
$$= (n-1)^2\Big[\frac{\beta^2(n-3)!}{(n-1)!}\int_0^{\infty}\frac{\beta^{n-2}x^{n-3}\exp\{-\beta x\}}{(n-3)!}\,dx - \Big(\frac{\beta(n-2)!}{(n-1)!}\int_0^{\infty}\frac{\beta^{n-1}x^{n-2}\exp\{-\beta x\}}{(n-2)!}\,dx\Big)^2\Big]$$
$$= (n-1)^2\beta^2\Big[\frac{1}{(n-1)(n-2)} - \frac{1}{(n-1)^2}\Big] = \beta^2\Big[\frac{n-1}{n-2} - 1\Big] = \frac{\beta^2}{n-2} > \frac{\beta^2}{n} = \mathrm{CRLB}.$$

Exercise 54 We obtain
$$f(x;\mu) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big\{-\frac{(x-\mu)^2}{2\sigma^2}\Big\}, \quad \ln f(x;\mu) = -\frac{1}{2}\ln(2\pi\sigma^2) - \frac{(x-\mu)^2}{2\sigma^2},$$
$$\frac{\partial\ln f(x;\mu)}{\partial\mu} = \frac{x-\mu}{\sigma^2}, \quad \frac{\partial^2\ln f(x;\mu)}{\partial\mu^2} = -\frac{1}{\sigma^2}.$$
From here,
$$I(\mu) = -E\Big(\frac{\partial^2\ln f(X;\mu)}{\partial\mu^2}\Big) = -E\Big(-\frac{1}{\sigma^2}\Big) = \frac{1}{\sigma^2}, \quad\text{and}\quad \mathrm{CRLB} = \frac{1}{n/\sigma^2} = \frac{\sigma^2}{n}.$$
The unbiased estimator $\hat\mu = \bar X$ has variance
$$\mathrm{Var}(\hat\mu) = \mathrm{Var}(\bar X) = \frac{\mathrm{Var}(X_1)}{n} = \frac{\sigma^2}{n} = \mathrm{CRLB}.$$

Exercise 55 We have
$$f(x;\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big\{-\frac{(x-\mu)^2}{2\sigma^2}\Big\}, \quad \ln f(x;\sigma^2) = -\frac{1}{2}\ln(2\pi) - \frac{1}{2}\ln(\sigma^2) - \frac{(x-\mu)^2}{2\sigma^2},$$
$$\frac{\partial\ln f(x;\sigma^2)}{\partial\sigma^2} = -\frac{1}{2\sigma^2} + \frac{(x-\mu)^2}{2(\sigma^2)^2}, \quad \frac{\partial^2\ln f(x;\sigma^2)}{\partial(\sigma^2)^2} = \frac{1}{2\sigma^4} - \frac{(x-\mu)^2}{\sigma^6}.$$
Whence,
$$I(\sigma^2) = -E\Big(\frac{\partial^2\ln f(X;\sigma^2)}{\partial(\sigma^2)^2}\Big) = -E\Big(\frac{1}{2\sigma^4} - \frac{(X-\mu)^2}{\sigma^6}\Big) = -\frac{1}{2\sigma^4} + \frac{E(X-\mu)^2}{\sigma^6} = -\frac{1}{2\sigma^4} + \frac{\sigma^2}{\sigma^6} = \frac{1}{2\sigma^4}.$$
The CRLB is as follows:
$$\mathrm{CRLB} = \frac{1}{n/(2\sigma^4)} = \frac{2\sigma^4}{n}.$$
The estimator $\hat\sigma_1^2 = \frac{1}{n}\sum_{i=1}^n(X_i - \mu)^2$ is unbiased since
$$E\Big[\frac{1}{n}\sum_{i=1}^{n}(X_i - \mu)^2\Big] = E(X_1 - \mu)^2 = \sigma^2.$$
To see what the variance of $\hat\sigma_1^2$ is, we notice that the $(X_i - \mu)/\sigma$ are iid Normal(0, 1) random variables, $i = 1, \ldots, n$. Thus, their sum of squares $\sum_{i=1}^{n}(X_i - \mu)^2/\sigma^2$ has a chi-squared distribution with $n$ degrees of freedom. It is known that its variance is equal to $2n$. Therefore,
$$\mathrm{Var}(\hat\sigma_1^2) = \mathrm{Var}\Big[\frac{1}{n}\sum_{i=1}^{n}(X_i - \mu)^2\Big] = \frac{\sigma^4}{n^2}\,\mathrm{Var}\Big[\sum_{i=1}^{n}\frac{(X_i - \mu)^2}{\sigma^2}\Big] = \frac{\sigma^4}{n^2}\,2n = \frac{2\sigma^4}{n} = \mathrm{CRLB}.$$
Further, the unbiased estimator $\hat\sigma_2^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X)^2$ is not an efficient estimator of $\sigma^2$ because its variance is larger than the CRLB. The easiest way to find the variance is to notice that the $(X_i - \bar X)/\sigma$, $i = 1, \ldots, n$, are jointly normal random variables. They are not independent because there is one linear relation that connects all of them:
$$\sum_{i=1}^{n}\frac{X_i - \bar X}{\sigma} = \frac{1}{\sigma}\Big(\sum_{i=1}^{n} X_i - n\bar X\Big) = 0.$$
Therefore, the sum of squares of these variables, $\sum_{i=1}^{n}(X_i - \bar X)^2/\sigma^2$, is a chi-squared random variable with $n-1$ degrees of freedom. Thus, its variance is equal to $2(n-1)$. We write
$$\mathrm{Var}(\hat\sigma_2^2) = \mathrm{Var}\Big[\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X)^2\Big] = \frac{\sigma^4}{(n-1)^2}\,\mathrm{Var}\Big(\sum_{i=1}^{n}\frac{(X_i - \bar X)^2}{\sigma^2}\Big) = \frac{\sigma^4}{(n-1)^2}\,2(n-1) = \frac{2\sigma^4}{n-1} > \frac{2\sigma^4}{n} = \mathrm{CRLB}.$$

Exercise 56 Efficient estimators are unbiased, and their variances are equal to $\mathrm{CRLB} = \dfrac{1}{n\,I(\theta)} \to 0$ as $n \to \infty$. Hence, efficient estimators are consistent. Further, the MLEs for the parameters of the listed distributions are efficient, and, therefore, consistent.

Exercise 57 (a) The density of $X_{(n)}$ is $f_{X_{(n)}}(x) = \dfrac{nx^{n-1}}{\theta^n}$, $0 < x < \theta$; therefore, the mean squared error of the unbiased estimator $\frac{n+1}{n}X_{(n)}$ is computed as follows:
$$\mathrm{MSE} = \mathrm{Var}\Big[\frac{n+1}{n}X_{(n)}\Big] = \Big(\frac{n+1}{n}\Big)^2\mathrm{Var}\big(X_{(n)}\big) = \Big(\frac{n+1}{n}\Big)^2\Big[\int_0^{\theta}\frac{nx^{n+1}}{\theta^n}\,dx - \Big(\frac{n\theta}{n+1}\Big)^2\Big]$$
$$= \Big(\frac{n+1}{n}\Big)^2\Big[\frac{n\theta^2}{n+2} - \frac{n^2\theta^2}{(n+1)^2}\Big] = \Big(\frac{n+1}{n}\Big)^2\frac{n\theta^2}{(n+2)(n+1)^2} = \frac{\theta^2}{n(n+2)} \to 0 \ \text{as } n \to \infty.$$
Thus, it is a consistent estimator of $\theta$.

(b) The bias of $X_{(n)}$ is equal to $E(X_{(n)}) - \theta = \frac{n}{n+1}\theta - \theta = -\frac{\theta}{n+1}$. Since the bias goes to zero as $n$ increases, this estimator is asymptotically unbiased. Its mean square error is
$$\mathrm{MSE} = \mathrm{Var}(X_{(n)}) + \big[\mathrm{bias}(X_{(n)}, \theta)\big]^2 = \frac{n\theta^2}{(n+2)(n+1)^2} + \Big[\frac{-\theta}{n+1}\Big]^2 = \frac{\theta^2}{(n+1)^2}\Big[\frac{n}{n+2} + 1\Big] = \frac{2\theta^2}{(n+1)(n+2)}.$$
Since $\mathrm{MSE} \to 0$ as $n \to \infty$, the estimator is consistent.

(c) First we will show that $\frac{n+2}{n+1}X_{(n)}$ has the smallest MSE among all estimators of the form $cX_{(n)}$, where $c = c(n)$ is a function of $n$. We write
$$\mathrm{MSE} = \mathrm{Var}(cX_{(n)}) + \big[\mathrm{bias}(cX_{(n)}, \theta)\big]^2 = \frac{c^2n\theta^2}{(n+2)(n+1)^2} + \Big(\frac{cn\theta}{n+1} - \theta\Big)^2.$$
We would like to minimize with respect to $c$ the following function:
$$\frac{c^2n}{(n+2)(n+1)^2} + \Big(\frac{cn}{n+1} - 1\Big)^2.$$
Taking the derivative with respect to $c$ and setting it equal to zero, we arrive at the identity
$$\frac{2cn}{(n+2)(n+1)^2} + \frac{2n}{n+1}\Big(\frac{cn}{n+1} - 1\Big) = 0,$$
from where $c = \dfrac{n+2}{n+1}$. The MSE of this estimator is
$$\mathrm{MSE} = \Big(\frac{n+2}{n+1}\Big)^2\frac{n\theta^2}{(n+2)(n+1)^2} + \theta^2\Big(\frac{(n+2)n}{(n+1)^2} - 1\Big)^2 = \frac{(n+2)n\theta^2}{(n+1)^4} + \frac{\theta^2}{(n+1)^4} = \frac{\theta^2}{(n+1)^2}.$$
The MSE goes to zero as $n$ increases, which proves the consistency of the estimator.
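The three mean square errors can be compared by simulation (a sketch with $\theta = 1$ and $n = 5$, both arbitrary; the closed-form values come from parts (a)–(c)):

```python
import numpy as np

rng = np.random.default_rng(6)
theta, n, reps = 1.0, 5, 400000
xmax = rng.uniform(0, theta, size=(reps, n)).max(axis=1)

for c, exact in [((n + 1) / n, theta**2 / (n * (n + 2))),        # unbiased
                 (1.0, 2 * theta**2 / ((n + 1) * (n + 2))),      # MLE
                 ((n + 2) / (n + 1), theta**2 / (n + 1) ** 2)]:  # min-MSE
    print(((c * xmax - theta) ** 2).mean(), exact)               # empirical vs exact
```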

Exercise 58 (a) The MM estimator $2\bar X_n$ is unbiased, and its mean square error is obtained as
$$\mathrm{MSE} = \mathrm{Var}(2\bar X_n) = 4\,\frac{\theta^2}{12n} = \frac{\theta^2}{3n}.$$
The MSE tends to zero when $n$ goes to infinity, implying consistency of the estimator.

(b) The bias of the estimator $\bar X_n$ is $\mathrm{bias}(\bar X_n, \theta) = E(\bar X_n) - \theta = \frac{\theta}{2} - \theta = -\frac{\theta}{2} \not\to 0$ as $n \to \infty$. It means that this estimator is not asymptotically unbiased, and, consequently, not consistent.

Exercise 59 (a) The bias of $1/\bar X_n$ is
$$\mathrm{bias}(1/\bar X_n, \beta) = E\big(1/\bar X_n\big) - \beta = \int_0^{\infty}\frac{n}{x}\,\frac{\beta^n x^{n-1}\exp\{-\beta x\}}{(n-1)!}\,dx - \beta = \frac{n\beta}{n-1} - \beta = \frac{\beta}{n-1} \to 0 \ \text{as } n \to \infty.$$
Hence, the estimator is asymptotically unbiased. The mean square error is
$$\mathrm{MSE} = \mathrm{Var}\Big[\frac{1}{\bar X_n}\Big] + \big[\mathrm{bias}(1/\bar X_n, \beta)\big]^2 = \int_0^{\infty}\frac{n^2}{x^2}\,\frac{\beta^n x^{n-1}\exp\{-\beta x\}}{(n-1)!}\,dx - \Big(\frac{n\beta}{n-1}\Big)^2 + \Big(\frac{\beta}{n-1}\Big)^2$$
$$= \frac{n^2\beta^2}{(n-1)(n-2)} - \frac{n^2\beta^2}{(n-1)^2} + \frac{\beta^2}{(n-1)^2} = \frac{(n+2)\beta^2}{(n-1)(n-2)} \to 0,$$
as $n$ increases. Therefore, the estimator is consistent.

(b) The mean square error of the unbiased estimator $\dfrac{n-1}{n\bar X_n}$ is equal to its variance. The expression for the variance was derived in Exercise 53. Thus, the mean square error is
$$\mathrm{MSE} = \mathrm{Var}\Big[\frac{n-1}{n\bar X_n}\Big] = \frac{\beta^2}{n-2} \to 0, \ \text{as } n \to \infty.$$
This proves consistency.

Exercise 60 (a) The MSE of the unbiased estimator $\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X_n)^2$ is equal to its variance, which is $\frac{2\sigma^4}{n-1}$. This quantity tends to zero as $n$ increases, implying the consistency of the estimator.

(b) The bias of the MLE $\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar X_n)^2$ is computed as
$$\mathrm{bias}\Big(\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar X_n)^2, \sigma^2\Big) = E\Big[\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar X_n)^2\Big] - \sigma^2 = \frac{n-1}{n}E\Big[\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X_n)^2\Big] - \sigma^2 = \frac{n-1}{n}\sigma^2 - \sigma^2 = -\frac{\sigma^2}{n} \to 0,$$
as $n$ tends to infinity. Thus, this estimator is asymptotically unbiased. Its mean square error is found as
$$\mathrm{MSE} = \mathrm{Var}\Big[\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar X_n)^2\Big] + \Big[\mathrm{bias}\Big(\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar X_n)^2, \sigma^2\Big)\Big]^2$$
$$= \Big(\frac{n-1}{n}\Big)^2\mathrm{Var}\Big[\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X_n)^2\Big] + \Big(\frac{-\sigma^2}{n}\Big)^2 = \Big(\frac{n-1}{n}\Big)^2\frac{2\sigma^4}{n-1} + \frac{\sigma^4}{n^2} = \frac{2n-1}{n^2}\,\sigma^4 \to 0 \ \text{as } n \to \infty,$$
whence the estimator is consistent.

Exercise 61 The likelihood function is of the form
$$\prod_{i=1}^{n} f(X_i; p) = \prod_{i=1}^{n}\binom{N}{X_i}p^{X_i}(1-p)^{N-X_i} = \Big[\prod_{i=1}^{n}\binom{N}{X_i}\Big]p^{\sum_{i=1}^n X_i}(1-p)^{nN - \sum_{i=1}^n X_i}.$$
Now we take
$$g(X_1, \ldots, X_n) = \prod_{i=1}^{n}\binom{N}{X_i}, \quad T = \sum_{i=1}^{n} X_i, \quad\text{and}\quad h(T; p) = p^{T}(1-p)^{nN - T}.$$
The likelihood function can be written as the product of $g$ and $h$, and, therefore, by the factorization theorem, $\sum_{i=1}^n X_i$ is sufficient. Since any invertible function of a sufficient statistic is sufficient, $\sum_{i=1}^n X_i/(nN) = \bar X/N$ is also a sufficient statistic for $p$.

Exercise 62 We write the likelihood function as
$$\prod_{i=1}^{n} f(X_i; p) = \prod_{i=1}^{n} p(1-p)^{X_i - 1} = p^n(1-p)^{\sum_{i=1}^n X_i - n}.$$
If we suppose that
$$g(X_1, \ldots, X_n) = 1, \quad T = \sum_{i=1}^{n} X_i, \quad\text{and}\quad h(T; p) = p^n(1-p)^{T - n},$$
then the likelihood function becomes
$$\prod_{i=1}^{n} f(X_i; p) = p^n(1-p)^{T - n} = g(X_1, \ldots, X_n)\,h(T; p).$$
From here, by the factorization theorem, we conclude that $\sum_{i=1}^n X_i$ is a sufficient statistic for $p$, and thus $\sum_{i=1}^n X_i/n = \bar X$ is sufficient.

Exercise 63 The likelihood function is of the form
$$\prod_{i=1}^{n} f(X_i;\lambda) = \prod_{i=1}^{n}\frac{\lambda^{X_i}}{X_i!}\exp\{-\lambda\} = \Big[\prod_{i=1}^{n} X_i!\Big]^{-1}\lambda^{\sum_{i=1}^n X_i}\exp\{-n\lambda\}.$$
We take
$$g(X_1, \ldots, X_n) = \Big[\prod_{i=1}^{n} X_i!\Big]^{-1}, \quad T = \sum_{i=1}^{n} X_i, \quad\text{and}\quad h(T;\lambda) = \lambda^{T}\exp\{-n\lambda\}.$$
Hence,
$$\prod_{i=1}^{n} f(X_i;\lambda) = g(X_1, \ldots, X_n)\,h(T;\lambda),$$
which means that $\sum_{i=1}^n X_i$ is sufficient for $\lambda$; therefore $\sum_{i=1}^n X_i/n = \bar X$ is sufficient as well.

Exercise 64 The likelihood function is
$$\prod_{i=1}^{n} f(X_i;\theta) = \prod_{i=1}^{n}\frac{1}{\theta}\,I\{0 \le X_i \le \theta\} = \frac{1}{\theta^n}\,I\{0 \le X_{(n)} \le \theta\}.$$
We let
$$g(X_1, \ldots, X_n) = 1, \quad T = X_{(n)}, \quad\text{and}\quad h(T, \theta) = \frac{1}{\theta^n}\,I\{0 \le T \le \theta\}.$$
The likelihood function can be factored into $g$ and $h$. Hence, by the factorization theorem, $X_{(n)}$ is a sufficient statistic for $\theta$.

Exercise 65 We write the likelihood function
$$\prod_{i=1}^{n} f(X_i; a, b) = \prod_{i=1}^{n}\frac{1}{b-a}\,I\{a \le X_i \le b\} = \frac{1}{(b-a)^n}\,I\{a \le X_{(1)} \le X_{(n)} \le b\}.$$
If we define $g(X_1, \ldots, X_n) = 1$, $\hat a = X_{(1)}$, $\hat b = X_{(n)}$, and
$$h(\hat a, \hat b; a, b) = \frac{1}{(b-a)^n}\,I\{a \le \hat a \le \hat b \le b\},$$
then the likelihood function is factored into $g$ and $h$, and, thus, by the factorization theorem, $(X_{(1)}, X_{(n)})$ is sufficient for $(a, b)$.

Exercise 66 We have that the likelihood function is
$$\prod_{i=1}^{n} f(X_i;\beta) = \prod_{i=1}^{n}\frac{1}{\beta}\exp\{-X_i/\beta\} = \frac{1}{\beta^n}\exp\Big\{-\sum_{i=1}^{n} X_i/\beta\Big\}.$$
Hence the likelihood function is a product of $g(X_1, \ldots, X_n) = 1$ and $h(T;\beta) = \frac{1}{\beta^n}\exp\{-T/\beta\}$, where $T = \sum_{i=1}^n X_i$. By the factorization theorem, $\sum_{i=1}^n X_i$ is sufficient for $\beta$, and therefore $\sum_{i=1}^n X_i/n = \bar X$ is sufficient.

Exercise 67 The likelihood function has the form
$$\prod_{i=1}^{n} f(X_i;\mu,\sigma^2) = \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big\{-\frac{(X_i - \mu)^2}{2\sigma^2}\Big\} = \frac{1}{(2\pi\sigma^2)^{n/2}}\exp\Big\{-\frac{1}{2\sigma^2}\Big(\sum_{i=1}^{n} X_i^2 - 2\mu\sum_{i=1}^{n} X_i + n\mu^2\Big)\Big\}.$$
Now we let $u = \sum_{i=1}^n X_i^2$ and $v = \sum_{i=1}^n X_i$, and define $g(X_1, \ldots, X_n) = 1$ and
$$h(u, v;\mu,\sigma^2) = \frac{1}{(2\pi\sigma^2)^{n/2}}\exp\Big\{-\frac{1}{2\sigma^2}\big(u - 2\mu v + n\mu^2\big)\Big\}.$$
We see that the likelihood function factors into $g$ and $h$, and, thus, by the factorization theorem, $(u, v)$ is sufficient for $(\mu, \sigma^2)$. We now define $\hat\mu = v/n = \bar X$ and $\hat\sigma^2 = \frac{1}{n-1}(u - v^2/n) = \frac{1}{n-1}\sum_{i=1}^n(X_i - \bar X)^2$. Since $\hat\mu$ and $\hat\sigma^2$ are invertible functions of $u$ and $v$, we conclude that the vector $(\hat\mu, \hat\sigma^2)$ is also a sufficient statistic for $(\mu, \sigma^2)$.

Exercise 68 The estimator $\bar X/N$ is an efficient estimator for $p$, and, thus, it is the UMVUE for $p$.

Exercise 69 The estimator $\bar X$ is an efficient estimator of $\lambda$. Therefore, it is the UMVUE for $\lambda$.

Exercise 70 The estimator $\bar X$ is efficient. Hence, it is the UMVUE for $\beta$.

Exercise 71 The estimator $\bar X$ is an efficient estimator for $\mu$. Thus, it is the UMVUE for $\mu$.

Exercise 72 We know that $\frac{1}{n-1}\big(nX_{(1)} - X_{(n)}\big)$ is an unbiased estimator of $a$ that is based on the sufficient statistic $(X_{(1)}, X_{(n)})$. Likewise, $\frac{1}{n-1}\big(nX_{(n)} - X_{(1)}\big)$ is an unbiased estimator of $b$ that is based on a sufficient statistic. We only need to show that Uniform($a, b$) is a complete distribution. Let $X \sim \mathrm{Uniform}(a, b)$ and let $u(X)$ be an unbiased estimator of zero. We have that $E\big(u(X)\big) = 0 = \int_a^b\frac{u(x)}{b-a}\,dx$, which implies that $\int_a^b u(x)\,dx = 0$, or $\int_0^b u(x)\,dx = \int_0^a u(x)\,dx$ for any $a \le b$. Consequently, $P(u(X) = 0) = 1$.

Thus, by the Lehmann–Scheffé theorem, $\frac{1}{n-1}\big(nX_{(1)} - X_{(n)}\big)$ is the UMVUE for $a$, and $\frac{1}{n-1}\big(nX_{(n)} - X_{(1)}\big)$ is the UMVUE for $b$.

Exercise 73 The estimator $\dfrac{n-1}{n\bar X}$ is an unbiased but not an efficient estimator of $\beta$. However, it is based on the sufficient statistic $\sum_{i=1}^n X_i$, and, therefore, we can apply the Lehmann–Scheffé theorem to conclude that it is the UMVUE for $\beta$. We only need to show that the exponential distribution with mean $1/\beta$ is complete. Suppose $u(X)$ is an unbiased estimator of zero. We write $0 = E\big(u(X)\big) = \int_0^{\infty} u(x)\,\beta\exp\{-\beta x\}\,dx$. Differentiating with respect to $\beta$, we obtain
$$\int_0^{\infty} u(x)\,x\,\beta\exp\{-\beta x\}\,dx = E\big(Xu(X)\big) = 0.$$
Differentiating $k$ times, where $k = 1, 2, \ldots$, we see that $E\big(X^ku(X)\big) = 0$. This means that if we expand $u(x)$ in the standard basis $1, x, x^2, \ldots$, all the coefficients would be equal to zero, and, thus, $u(X) = 0$ everywhere.

Exercise 74 Note that we have shown earlier that $\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X)^2$ is not an efficient estimator of $\sigma^2$: its variance does not attain the CRLB. We will use the Lehmann–Scheffé theorem to prove that it is the UMVUE.

The estimator $\bar X$ is sufficient and unbiased for $\mu$. Also, $\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X)^2$ is sufficient and unbiased for $\sigma^2$. It remains to show that Normal($\mu, \sigma^2$) is a complete distribution. We take $X \sim \mathrm{Normal}(\mu, \sigma^2)$ and denote by $u(X)$ an unbiased estimator of zero. We have
$$0 = E\big(u(X)\big) = \int_{-\infty}^{\infty}\frac{u(x)}{\sqrt{2\pi\sigma^2}}\exp\Big\{-\frac{(x-\mu)^2}{2\sigma^2}\Big\}\,dx.$$
Thus, $\int_{-\infty}^{\infty} u(x)\exp\big\{-\frac{(x-\mu)^2}{2\sigma^2}\big\}\,dx = 0$. Now we differentiate with respect to $\mu$ to obtain
$$\int_{-\infty}^{\infty} u(x)\,(x - \mu)\exp\Big\{-\frac{(x-\mu)^2}{2\sigma^2}\Big\}\,dx = 0,$$
which implies that
$$E\big(Xu(X)\big) = \int_{-\infty}^{\infty} x\,u(x)\exp\Big\{-\frac{(x-\mu)^2}{2\sigma^2}\Big\}\,dx = \mu\int_{-\infty}^{\infty} u(x)\exp\Big\{-\frac{(x-\mu)^2}{2\sigma^2}\Big\}\,dx = 0.$$
Taking the second derivative, we can show that $E\big(X^2u(X)\big) = 0$, and, continuing in this manner, we can derive that for any $k = 1, 2, \ldots$, $E\big(X^ku(X)\big) = 0$. This means that if we expand $u(x)$ in the standard basis $1, x, x^2$, etc., all the coefficients will be equal to zero, and, thus, $u(X) = 0$ everywhere.

Hence, we have shown that the conditions of the Lehmann–Scheffé theorem hold, and $\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X)^2$ is the UMVUE for $\sigma^2$.

Exercise 75 We know that $\bar X/N$ is a sufficient statistic for $p$. We have to show that Binomial($N, p$) with fixed $N$ is complete, and that the estimator $\frac{n}{nN-1}\bar X(N - \bar X)$ is unbiased for the variance $Np(1-p)$.

Let $X \sim \mathrm{Binomial}(N, p)$ and consider $u(X)$, an unbiased estimator of zero. We have
$$E\big(u(X)\big) = 0 = \sum_{x=0}^{N} u(x)\binom{N}{x}p^x(1-p)^{N-x}.$$
Since this sum is a polynomial in $p$, we must have $u(x) = 0$ for all $x = 0, \ldots, N$, and, thus, it is a complete distribution.

Further, we compute
$$E\Big[\frac{\bar X}{N}\Big(1 - \frac{\bar X}{N}\Big)\Big] = E\Big(\frac{\bar X}{N}\Big) - E\Big[\Big(\frac{\bar X}{N}\Big)^2\Big] = E\Big(\frac{\bar X}{N}\Big) - \Big[\mathrm{Var}\Big(\frac{\bar X}{N}\Big) + \Big(E\Big(\frac{\bar X}{N}\Big)\Big)^2\Big]$$
$$= \frac{E(X_1)}{N} - \Big[\frac{\mathrm{Var}(X_1)}{nN^2} + \Big(\frac{E(X_1)}{N}\Big)^2\Big] = \frac{Np}{N} - \Big[\frac{Np(1-p)}{nN^2} + \Big(\frac{Np}{N}\Big)^2\Big] = p - \Big[\frac{p(1-p)}{nN} + p^2\Big] = p(1-p)\Big(1 - \frac{1}{nN}\Big).$$
Therefore, $\frac{nN^2}{nN-1}\,\frac{\bar X}{N}\Big(1 - \frac{\bar X}{N}\Big) = \frac{n}{nN-1}\bar X(N - \bar X)$ is the UMVUE for $Np(1-p)$, by the Lehmann–Scheffé theorem.

Exercise 76 We start with proving that a Poisson($\lambda$) distribution is complete. We take $X \sim \mathrm{Poisson}(\lambda)$ and denote by $u(X)$ an unbiased estimator of zero. It satisfies
$$E\big(u(X)\big) = 0 = \sum_{x=0}^{\infty} u(x)\frac{\lambda^x}{x!}\exp\{-\lambda\} = \exp\{-\lambda\}\Big[u(0) + u(1)\lambda + u(2)\frac{\lambda^2}{2} + \cdots\Big].$$
Since this sum is a power series in $\lambda$, its coefficients must be equal to zero; thus, $u(x) = 0$ for all values of $x = 0, 1, \ldots$. It means that the distribution is complete.

Next, we recall that $\bar X$ is a sufficient statistic for $\lambda$. We need to find an unbiased estimator of $\lambda(1 + \lambda)$ based on $\bar X$. We write
$$E\big[\bar X(1 + \bar X)\big] = E(\bar X) + E(\bar X^2) = E(\bar X) + \mathrm{Var}(\bar X) + \big[E(\bar X)\big]^2 = E(X_1) + \frac{\mathrm{Var}(X_1)}{n} + \big[E(X_1)\big]^2 = \lambda + \frac{\lambda}{n} + \lambda^2.$$
It follows that if we subtract $\bar X/n$ from $\bar X(1 + \bar X)$, we will get an unbiased estimator of $\lambda(1 + \lambda)$. Indeed,
$$E\Big[\bar X(1 + \bar X) - \frac{\bar X}{n}\Big] = \lambda + \frac{\lambda}{n} + \lambda^2 - \frac{\lambda}{n} = \lambda(1 + \lambda).$$
Thus, by the Lehmann–Scheffé theorem, $\bar X(1 + \bar X) - \frac{\bar X}{n} = \big(1 - \frac{1}{n}\big)\bar X + \bar X^2$ is the UMVUE for the second moment $E(X_1^2) = \lambda(1 + \lambda)$.
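A simulation sketch confirming the unbiasedness of this UMVUE ($\lambda$, $n$, and the replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
lam, n, reps = 2.5, 20, 200000
xbar = rng.poisson(lam, size=(reps, n)).mean(axis=1)

estimate = (1 - 1 / n) * xbar + xbar ** 2   # UMVUE of E(X^2) = lambda(1+lambda)
print(estimate.mean(), lam * (1 + lam))     # both approx. 8.75
```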

Exercise 77 We know that Normal($\mu, \sigma^2$) is a complete distribution, and $\big(\bar X,\ \hat\sigma^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X)^2\big)$ is sufficient for $(\mu, \sigma^2)$. Also, $E(\bar X) = \mu$ and $E(\hat\sigma^2) = \sigma^2$. It remains to find an unbiased estimator of $\mu^2$. We take $\bar X^2$ and compute $E(\bar X^2) = \mathrm{Var}(\bar X) + \big[E(\bar X)\big]^2 = \frac{\sigma^2}{n} + \mu^2$. Thus, $\bar X^2 - \frac{\hat\sigma^2}{n}$ is an unbiased estimator of $\mu^2$. Indeed,
$$E\Big[\bar X^2 - \frac{\hat\sigma^2}{n}\Big] = \frac{\sigma^2}{n} + \mu^2 - \frac{\sigma^2}{n} = \mu^2.$$
By the Lehmann–Scheffé theorem, it is the UMVUE of $\mu^2$.

Exercise 78 (a) The density may be written as $f(x;\theta) = 1$, $0 < x - \theta < 1$. Thus, by definition, $\theta$ is a location parameter of this distribution.
(b) Each random variable $X_i$, $i = 1, \ldots, n$, can be expressed as $X_i = U_i + \theta$, where $U_i \sim \mathrm{Uniform}(0, 1)$. Hence, the difference $X_1 - X_4 = U_1 + \theta - (U_4 + \theta) = U_1 - U_4$, whose distribution doesn't change as $\theta$ changes, implying that $X_1 - X_4$ is an ancillary statistic for $\theta$.
(c) The range $R = X_{(n)} - X_{(1)} = \max\big(X_i - X_j,\ i \ne j,\ i, j = 1, \ldots, n\big) = \max\big(U_i - U_j,\ i \ne j,\ i, j = 1, \ldots, n\big)$ is an ancillary statistic for $\theta$.
(d) Since $L(\theta; X_1, \ldots, X_n) = I\{\theta < X_{(1)} \le X_{(n)} < \theta + 1\}$, by the factorization theorem, $(X_{(1)}, X_{(n)})$ is sufficient for $\theta$. It is not complete, as seen from the relation
$$E\big(X_{(n)} - X_{(1)}\big) = \int_{\theta}^{\theta+1} nx(x - \theta)^{n-1}\,dx - \int_{\theta}^{\theta+1} nx(\theta + 1 - x)^{n-1}\,dx = \theta + \frac{n}{n+1} - \Big(\theta + \frac{1}{n+1}\Big) = \frac{n-1}{n+1}.$$
It means that $g\big((X_{(1)}, X_{(n)})\big) = X_{(n)} - X_{(1)} - \frac{n-1}{n+1}$ is an unbiased estimator of zero.
Now, if $(X_{(1)}, X_{(n)})$ is fixed, then the range is fixed; thus these two statistics are not independent. This fact doesn't contradict Basu's theorem, because $(X_{(1)}, X_{(n)})$ is not a complete statistic for $\theta$.

Exercise 79 (a) The density has the form $f(x;\theta) = 1/\theta$, $0 < x/\theta < 1$, which confirms that $\theta$ is a scale parameter.
(b) The ratio $\dfrac{X_2}{X_5} = \dfrac{\theta U_2}{\theta U_5} = \dfrac{U_2}{U_5}$, where $U_2$ and $U_5$ are iid Uniform(0, 1) random variables. Thus, the distribution of $X_2/X_5$ does not depend on $\theta$, and, hence, it is an ancillary statistic for $\theta$.
(c) The statistic
$$\frac{1}{n}\sum_{i=1}^{n}\ln X_i - \ln X_1 = \frac{1}{n}\ln\Big(\frac{X_2}{X_1}\cdots\frac{X_n}{X_1}\Big) = \frac{1}{n}\ln\Big(\frac{U_2}{U_1}\cdots\frac{U_n}{U_1}\Big),$$
where $U_1, \ldots, U_n \stackrel{\text{iid}}{\sim} \mathrm{Uniform}(0, 1)$. Since its distribution doesn't depend on the parameter $\theta$, it is an ancillary statistic.
(d) The MLE $X_{(n)}$ is a complete sufficient statistic (as shown in Exercise 64 and Example 7), whereas $\dfrac{X_2X_3}{X_4X_5} = \dfrac{U_2U_3}{U_4U_5}$, where $U_2, \ldots, U_5 \stackrel{\text{iid}}{\sim} \mathrm{Uniform}(0, 1)$, is an ancillary statistic. Therefore, by Basu's theorem, they are independent.

Exercise 80 (a) The pdf is a function of $x - \theta$: $f(x;\theta) = \exp\{-(x - \theta)\}$, $x - \theta > 0$; thus $\theta$ is a location parameter.
(b) The sample variance is a function of the differences $X_i - X_j$, $i, j = 1, \ldots, n$:
$$S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X)^2 = \frac{1}{n-1}\sum_{i=1}^{n}\frac{1}{n^2}\Big((X_i - X_1) + (X_i - X_2) + \cdots + (X_i - X_n)\Big)^2,$$
and, hence, is an ancillary statistic for $\theta$.
(c) The likelihood function has the form $L(\theta; X_1, \ldots, X_n) = \exp\big\{-\sum_{i=1}^{n} X_i + n\theta\big\}\,I\{\theta < X_{(1)}\}$, which is maximized at $\hat\theta = X_{(1)}$. Further, applying the factorization theorem with $g(X_1, \ldots, X_n) = \exp\big\{-\sum_{i=1}^{n} X_i\big\}$ and $h(T;\theta) = \exp\{n\theta\}\,I\{\theta < T\}$, $T = X_{(1)}$, we see that $X_{(1)}$ is a sufficient statistic for $\theta$. Finally, the shifted exponential distribution is complete, as seen from the following argument. For any unbiased estimator $u(X)$ of zero, $0 = E[u(X)] = \int_{\theta}^{\infty} u(x)\exp\{-(x - \theta)\}\,dx$. Taking the derivative of both sides with respect to $\theta$, we arrive at the following identity that holds for any $\theta$: $0 = -u(\theta) + \int_{\theta}^{\infty} u(x)\exp\{-(x - \theta)\}\,dx = -u(\theta)$. This implies that $P(u(X) = 0) = 1$.
(d) The statistics $X_{(1)}$ and $S^2$ are independent by a straightforward application of Basu's theorem.

Exercise 81 (a) The density of a Normal($\mu$, 1) distribution is a function of the difference $x - \mu$: $f(x;\mu) = \frac{1}{\sqrt{2\pi}}\exp\big\{-\frac{(x-\mu)^2}{2}\big\}$. Thus, $\mu$ is a location parameter.
(b) The sample variance is a function of the differences $X_i - X_j$, $i, j = 1, \ldots, n$:
$$S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar X)^2 = \frac{1}{n-1}\sum_{i=1}^{n}\frac{1}{n^2}\Big((X_i - X_1) + (X_i - X_2) + \cdots + (X_i - X_n)\Big)^2,$$
and, hence, is an ancillary statistic for $\mu$.
(c) By Exercises 67 and 74, the sample mean $\bar X$ is a complete sufficient statistic for $\mu$.
(d) The statistics $\bar X$ and $S^2$ are independent by Basu's theorem.

Exercise 82 The likelihood function has the form
$$\prod_{i=1}^{n}\binom{N}{X_i}p^{X_i}(1-p)^{N-X_i} = \Big[\prod_{i=1}^{n}\binom{N}{X_i}\Big]p^{\sum_{i=1}^n X_i}(1-p)^{nN - \sum_{i=1}^n X_i}.$$
As a function of $p$, this likelihood function has the same algebraic form as the density of a beta distribution. Therefore, the prior distribution Beta($\alpha, \beta$) with some known $\alpha$ and $\beta$ is the conjugate prior distribution, and the posterior distribution is also beta. To find its parameters we note that the posterior density is proportional to
$$p^{\sum_{i=1}^n X_i}(1-p)^{nN - \sum_{i=1}^n X_i}\,p^{\alpha-1}(1-p)^{\beta-1} = p^{\sum_{i=1}^n X_i + \alpha - 1}(1-p)^{nN - \sum_{i=1}^n X_i + \beta - 1}.$$
We arrive at the conclusion that the posterior distribution is Beta($\sum_{i=1}^n X_i + \alpha,\ nN - \sum_{i=1}^n X_i + \beta$), and the Bayesian estimator of $p$ is the mean
$$\hat p = \frac{\sum_{i=1}^n X_i + \alpha}{\sum_{i=1}^n X_i + \alpha + nN - \sum_{i=1}^n X_i + \beta} = \frac{\sum_{i=1}^n X_i + \alpha}{nN + \alpha + \beta}.$$

Exercise 83 The likelihood function can be written as
$$\prod_{i=1}^{n} p(1-p)^{X_i - 1} = p^n(1-p)^{\sum_{i=1}^n X_i - n}.$$
As a function of $p$, it resembles the density of a beta distribution. Therefore, taking the conjugate prior distribution Beta($\alpha, \beta$) with known $\alpha$ and $\beta$, we arrive at the posterior density proportional to
$$p^n(1-p)^{\sum_{i=1}^n X_i - n}\,p^{\alpha-1}(1-p)^{\beta-1} = p^{n+\alpha-1}(1-p)^{\sum_{i=1}^n X_i - n + \beta - 1}.$$
Hence, the posterior distribution is Beta($n + \alpha,\ \sum_{i=1}^n X_i - n + \beta$), and the Bayesian estimator of $p$ is the mean
$$\hat p = \frac{n + \alpha}{n + \alpha + \sum_{i=1}^n X_i - n + \beta} = \frac{n + \alpha}{\sum_{i=1}^n X_i + \alpha + \beta}.$$

Exercise 84 The likelihood function is of the form
$$\prod_{i=1}^{n}\frac{\lambda^{X_i}}{X_i!}\exp\{-\lambda\} = \Big[\prod_{i=1}^{n} X_i!\Big]^{-1}\lambda^{\sum_{i=1}^n X_i}\exp\{-n\lambda\}.$$
As a function of $\lambda$, this likelihood function has the same algebraic form as the density of a gamma distribution. If we take Gamma($\alpha, \beta$) with known $\alpha$ and $\beta$ as a prior for $\lambda$, then the posterior distribution is also gamma, and its density is proportional to
$$\lambda^{\sum_{i=1}^n X_i}\exp\{-n\lambda\}\,\lambda^{\alpha-1}\exp\{-\lambda/\beta\} = \lambda^{\sum_{i=1}^n X_i + \alpha - 1}\exp\{-n\lambda - \lambda/\beta\}.$$
Therefore, Gamma($\sum_{i=1}^n X_i + \alpha,\ (n + 1/\beta)^{-1}$) is the posterior distribution of $\lambda$, and the Bayesian estimator of $\lambda$ is the mean
$$\hat\lambda = \Big(\sum_{i=1}^{n} X_i + \alpha\Big)\big(n + 1/\beta\big)^{-1} = \frac{\sum_{i=1}^n X_i + \alpha}{n + 1/\beta}.$$

Exercise 85 The likelihood function is found as
$$\prod_{i=1}^{n}\frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big\{-\frac{(X_i - \mu)^2}{2\sigma^2}\Big\} = \frac{1}{(\sqrt{2\pi\sigma^2})^n}\exp\Big\{-\frac{\sum_{i=1}^n (X_i - \mu)^2}{2\sigma^2}\Big\}.$$
As a function of $\mu$, the likelihood function resembles the density of a normal distribution. If we take a conjugate prior Normal($\mu_0, \sigma_0^2$), then the posterior density is computed as (up to the normalizing constant):
$$\exp\Big\{-\frac{\sum_{i=1}^n (X_i - \mu)^2}{2\sigma^2}\Big\}\exp\Big\{-\frac{(\mu - \mu_0)^2}{2\sigma_0^2}\Big\} = \exp\Big\{-\frac{\sum_{i=1}^n X_i^2 - 2\mu\sum_{i=1}^n X_i + n\mu^2}{2\sigma^2} - \frac{\mu^2 - 2\mu\mu_0 + \mu_0^2}{2\sigma_0^2}\Big\}$$
$$= \exp\Big\{-\frac{1}{2}\Big[\Big(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\Big)\mu^2 - 2\Big(\frac{\sum_{i=1}^n X_i}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}\Big)\mu\Big]\Big\}\times\big[\text{some constant}\big]$$
$$= \exp\Big\{-\frac{1}{2}\Big(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\Big)\Big[\mu - \Big(\frac{\sum_{i=1}^n X_i}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}\Big)\Big/\Big(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\Big)\Big]^2\Big\}\times\big[\text{some constant}\big].$$
Thus, the posterior distribution of $\mu$ is also normal with mean $\Big(\frac{\sum_{i=1}^n X_i}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}\Big)\Big/\Big(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\Big)$ and variance $\Big(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\Big)^{-1}$. The Bayesian estimator of $\mu$ is the mean
$$\hat\mu = \Big(\frac{\sum_{i=1}^n X_i}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}\Big)\Big/\Big(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\Big).$$

Exercise 86 The joint distribution of $X = 2$ and $\lambda$ is
$$\frac{\lambda^2}{2}\exp\{-\lambda\}\pi(\lambda) = \begin{cases}(1/4)\exp\{-1\}, & \text{if } \lambda = 1,\\ (1/2)\exp\{-2\}, & \text{if } \lambda = 2,\\ (9/8)\exp\{-3\}, & \text{if } \lambda = 3.\end{cases}$$
Thus the posterior distribution of $\lambda$ is
$$P(\lambda = 1 \mid X = 2) = \frac{\frac{1}{4}\exp\{-1\}}{\frac{1}{4}\exp\{-1\} + \frac{1}{2}\exp\{-2\} + \frac{9}{8}\exp\{-3\}} = 0.4265,$$
$$P(\lambda = 2 \mid X = 2) = \frac{\frac{1}{2}\exp\{-2\}}{\frac{1}{4}\exp\{-1\} + \frac{1}{2}\exp\{-2\} + \frac{9}{8}\exp\{-3\}} = 0.3138,$$
and
$$P(\lambda = 3 \mid X = 2) = \frac{\frac{9}{8}\exp\{-3\}}{\frac{1}{4}\exp\{-1\} + \frac{1}{2}\exp\{-2\} + \frac{9}{8}\exp\{-3\}} = 0.2597.$$
The Bayesian estimator of $\lambda$ is computed as the posterior mean of $\lambda$, given $X = 2$, that is,
$$\hat\lambda = E(\lambda \mid X = 2) = (1)(0.4265) + (2)(0.3138) + (3)(0.2597) = 1.8332.$$

Exercise 87 The MLE of $p$ is $\hat p = \bar X/N$. The likelihood ratio for testing $H_0: p = p_0$ against $H_1: p \ne p_0$ is computed as follows:
$$\Lambda = \frac{\prod_{i=1}^n\binom{N}{X_i}p_0^{X_i}(1 - p_0)^{N - X_i}}{\prod_{i=1}^n\binom{N}{X_i}\big(\frac{\bar X}{N}\big)^{X_i}\big(1 - \frac{\bar X}{N}\big)^{N - X_i}} = \frac{p_0^{n\bar X}(1 - p_0)^{nN - n\bar X}}{\big(\frac{\bar X}{N}\big)^{n\bar X}\big(1 - \frac{\bar X}{N}\big)^{nN - n\bar X}}.$$
The asymptotic likelihood ratio test statistic is
$$\chi^2 = -2\ln\Lambda = 2n\bar X\ln\Big[\frac{\bar X(1 - p_0)}{(N - \bar X)p_0}\Big] + 2nN\ln\Big[\frac{N - \bar X}{N(1 - p_0)}\Big].$$
The decision is to reject the null hypothesis if $\chi^2 \ge \chi^2_{\alpha}(1)$.
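A sketch of this test in code (the data, $p_0$, and $\alpha$ below are arbitrary choices); it computes the statistic above and compares it with the chi-squared critical value:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(8)
N, p_true, n, p0, alpha = 10, 0.35, 40, 0.30, 0.05
xbar = rng.binomial(N, p_true, size=n).mean()   # assumes 0 < xbar < N

stat = (2 * n * xbar * np.log(xbar * (1 - p0) / ((N - xbar) * p0))
        + 2 * n * N * np.log((N - xbar) / (N * (1 - p0))))
print(stat, chi2.ppf(1 - alpha, df=1))  # reject H0 if stat >= critical value
```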

Exercise 88 The MLE of $p$ is $\hat p = 1/\bar X$. Therefore, the likelihood ratio has the form
$$\Lambda = \frac{\prod_{i=1}^n p_0(1 - p_0)^{X_i - 1}}{\prod_{i=1}^n\big(\frac{1}{\bar X}\big)\big(1 - \frac{1}{\bar X}\big)^{X_i - 1}} = \frac{p_0^n(1 - p_0)^{n\bar X - n}}{\big(\frac{1}{\bar X}\big)^n\big(1 - \frac{1}{\bar X}\big)^{n\bar X - n}}.$$
The asymptotic likelihood ratio test statistic is
$$\chi^2 = -2\ln\Lambda = 2n\ln\Big[\frac{1 - p_0}{(\bar X - 1)p_0}\Big] + 2n\bar X\ln\Big[\frac{\bar X - 1}{\bar X(1 - p_0)}\Big].$$
If this statistic is in excess of the critical value $\chi^2_{\alpha}(1)$, then $H_0$ is rejected.

Exercise 89 The MLE of $\lambda$ is $\hat\lambda = \bar X$; thus the likelihood ratio is
$$\Lambda = \frac{\prod_{i=1}^n\frac{\lambda_0^{X_i}}{X_i!}\exp\{-\lambda_0\}}{\prod_{i=1}^n\frac{\bar X^{X_i}}{X_i!}\exp\{-\bar X\}} = \Big(\frac{\lambda_0}{\bar X}\Big)^{n\bar X}\exp\{-n(\lambda_0 - \bar X)\},$$
and the asymptotic test statistic is
$$\chi^2 = -2\ln\Lambda = 2n\bar X\ln\Big(\frac{\bar X}{\lambda_0}\Big) + 2n(\lambda_0 - \bar X).$$
The rejection region is of the form $\{X_1, \ldots, X_n : \chi^2 \ge \chi^2_{\alpha}(1)\}$.

Exercise 90 The MLE of $\theta$ is $\hat\theta = X_{(n)}$. The likelihood ratio is written as
$$\Lambda = \frac{(1/\theta_0)^n\,I\{X_{(n)} \le \theta_0\}}{(1/X_{(n)})^n} = \Big(\frac{X_{(n)}}{\theta_0}\Big)^n I\{X_{(n)} \le \theta_0\}.$$
The asymptotic likelihood ratio test statistic is equal to $\chi^2 = -2\ln\Lambda = 2n\ln\theta_0 - 2n\ln X_{(n)}$, and the decision rule is to reject $H_0$ if either $X_{(n)} > \theta_0$ or $\chi^2 \ge \chi^2_{\alpha}(1)$.

Exercise 91 The MLE of $\beta$ is $\hat\beta = \bar X$. Consequently, the likelihood ratio has the expression
$$\Lambda = \frac{\prod_{i=1}^n(1/\beta_0)\exp\{-X_i/\beta_0\}}{\prod_{i=1}^n(1/\bar X)\exp\{-X_i/\bar X\}} = \Big(\frac{\bar X}{\beta_0}\Big)^n\exp\Big\{n\Big(1 - \frac{\bar X}{\beta_0}\Big)\Big\}.$$
The asymptotic likelihood ratio test statistic is $\chi^2 = -2\ln\Lambda = 2n\ln\beta_0 - 2n\ln\bar X + 2n(\bar X/\beta_0 - 1)$, and the rejection region is $\{\chi^2 \ge \chi^2_{\alpha}(1)\}$.

Exercise 92 The MLE of $\mu$ is $\hat\mu = \bar X$, and hence the likelihood ratio is of the form
$$\Lambda = \frac{\prod_{i=1}^n(2\pi\sigma^2)^{-1/2}\exp\big\{-\frac{1}{2\sigma^2}(X_i - \mu_0)^2\big\}}{\prod_{i=1}^n(2\pi\sigma^2)^{-1/2}\exp\big\{-\frac{1}{2\sigma^2}(X_i - \bar X)^2\big\}} = \exp\Big\{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\big[(X_i - \mu_0)^2 - (X_i - \bar X)^2\big]\Big\} = \exp\Big\{-\frac{n}{2\sigma^2}(\bar X - \mu_0)^2\Big\}.$$
The asymptotic likelihood ratio test statistic is
$$\chi^2 = -2\ln\Lambda = \frac{n}{\sigma^2}(\bar X - \mu_0)^2 = \Big(\frac{\bar X - \mu_0}{\sigma/\sqrt{n}}\Big)^2 \sim \chi^2(1).$$
The decision rule is to reject the null if $\chi^2 \ge \chi^2_{\alpha}(1)$.

Exercise 93 (a) For the rejection region $\{X = 6\}$, $\text{power} = 1 - \beta = p^6$, $1/3 \le p \le 1$.
(b) For the rejection region $\{X = 5, 6\}$, $\text{power} = 1 - \beta = \binom{6}{5}p^5(1 - p) + p^6 = 6p^5(1 - p) + p^6 = 6p^5 - 5p^6$, $1/3 \le p \le 1$.
(c) For the rejection region $\{X = 4, 5, 6\}$, $\text{power} = 1 - \beta = \binom{6}{4}p^4(1 - p)^2 + \binom{6}{5}p^5(1 - p) + p^6 = 15p^4(1 - p)^2 + 6p^5(1 - p) + p^6 = 15p^4 - 24p^5 + 10p^6$, $1/3 \le p \le 1$.
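The three power functions are simple polynomials in $p$ and can be tabulated directly (a sketch; the grid of $p$ values is arbitrary):

```python
import numpy as np

p = np.linspace(1/3, 1.0, 5)
power_a = p ** 6                             # rejection region {X = 6}
power_b = 6 * p**5 - 5 * p**6                # region {X = 5, 6}
power_c = 15 * p**4 - 24 * p**5 + 10 * p**6  # region {X = 4, 5, 6}
for row in zip(p, power_a, power_b, power_c):
    print(*(f"{v:.4f}" for v in row))        # power grows with the region
```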

Exercise 94 As found in Exercise 90, the test statistic is equal to $\chi^2 = 2n\ln\theta_0 - 2n\ln X_{(n)}$, and the rejection region is $\{X_{(n)} > \theta_0 \text{ or } \chi^2 \ge \chi^2_{\alpha}(1)\}$. Therefore, the power is computed as:
$$\text{power} = 1 - \beta = P\big(X_{(n)} > \theta_0 \text{ or } \chi^2 \ge \chi^2_{\alpha}(1) \mid \theta \ne \theta_0\big) = P\big(X_{(n)} > \theta_0 \text{ or } 2n\ln\theta_0 - 2n\ln X_{(n)} \ge \chi^2_{\alpha}(1) \mid \theta \ne \theta_0\big)$$
$$= P\Big(X_{(n)} > \theta_0 \text{ or } X_{(n)} \le \theta_0\exp\Big\{-\frac{\chi^2_{\alpha}(1)}{2n}\Big\} \,\Big|\, \theta \ne \theta_0\Big) = 1 - F_{X_{(n)}}(\theta_0) + F_{X_{(n)}}\Big(\theta_0\exp\Big\{-\frac{\chi^2_{\alpha}(1)}{2n}\Big\}\Big)$$
$$= 1 - \frac{\theta_0^n}{\theta^n} + \frac{\theta_0^n\exp\big\{-\frac{\chi^2_{\alpha}(1)}{2}\big\}}{\theta^n} = 1 - \Big(1 - \exp\Big\{-\frac{\chi^2_{\alpha}(1)}{2}\Big\}\Big)\frac{\theta_0^n}{\theta^n}, \quad \theta \ne \theta_0.$$

Exercise 95 The constant $k$ is determined by the significance level $\alpha$; that is, it solves
$$\alpha = P(\bar X \le k \mid \mu = \mu_0) = P\Big(Z \le \frac{k - \mu_0}{\sigma/\sqrt{n}}\Big) = \Phi\Big(\frac{k - \mu_0}{\sigma/\sqrt{n}}\Big).$$
From here, $k = \frac{\sigma}{\sqrt{n}}\Phi^{-1}(\alpha) + \mu_0$. The power of the test with rejection region $\{\bar X \le k\}$ is then computed as
$$\text{power} = 1 - \beta = P(\bar X \le k \mid \mu = \mu_1) = \Phi\Big(\frac{k - \mu_1}{\sigma/\sqrt{n}}\Big) = \Phi\Big(\frac{\frac{\sigma}{\sqrt{n}}\Phi^{-1}(\alpha) + \mu_0 - \mu_1}{\sigma/\sqrt{n}}\Big) = \Phi\Big(\Phi^{-1}(\alpha) + \frac{\mu_0 - \mu_1}{\sigma/\sqrt{n}}\Big).$$
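A numerical sketch of this power calculation (the values of $\mu_0$, $\mu_1$, $\sigma$, $n$, and $\alpha$ are arbitrary, with $\mu_1 < \mu_0$ so that the lower-tailed rejection region makes sense):

```python
import numpy as np
from scipy.stats import norm

mu0, mu1, sigma, n, alpha = 10.0, 9.0, 2.0, 25, 0.05
se = sigma / np.sqrt(n)

k = mu0 + se * norm.ppf(alpha)     # solves P(Xbar <= k | mu0) = alpha
power = norm.cdf((k - mu1) / se)   # P(Xbar <= k | mu1)
print(k, power)                    # same as Phi(Phi^{-1}(alpha) + (mu0-mu1)/se)
print(norm.cdf(norm.ppf(alpha) + (mu0 - mu1) / se))
```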
