
1

STATISTICAL INFERENCE PART I

POINT ESTIMATION

2

STATISTICAL INFERENCE

• Determining certain unknown properties of a probability distribution on the basis of a sample (usually a random sample, r.s.) obtained from that distribution.

Point Estimation: \(\hat{\mu} = 5\)

Interval Estimation: \(3 < \mu < 8\)

Hypothesis Testing: \(H_0: \mu = 5\) vs. \(H_1: \mu \neq 5\)

3

STATISTICAL INFERENCE

• Parameter Space (\(\Omega\)): The set of all possible values of an unknown parameter \(\theta\); \(\theta \in \Omega\).

• A pdf with an unknown parameter: \(f(x;\theta)\), \(\theta \in \Omega\).

• Estimation: Where in \(\Omega\) is \(\theta\) likely to be?

• \(\{\, f(x;\theta),\ \theta \in \Omega \,\}\): the family of pdfs.

4

STATISTICAL INFERENCE

• Statistic: A function of rvs (usually the sample rvs in an estimation) which does not contain any unknown parameters: \(\bar{X}\), \(S^2\), etc.

• Estimator of an unknown parameter \(\theta\): A statistic used for estimating \(\theta\):

\[ \hat{\theta} = U(X_1, X_2, \ldots, X_n). \]

• Estimate: A particular value of an estimator, computed from observed data.

\(\bar{X}\): Estimator \(\qquad\) \(\bar{x}\): Estimate (an observed value)

5

METHODS OF ESTIMATION

• Method of Moments Estimation (MME)
• Maximum Likelihood Estimation (MLE)

6

METHOD OF MOMENTS ESTIMATION (MME)

• Let X₁, X₂, …, Xₙ be a r.s. from a population with pmf or pdf \(f(x;\theta_1,\theta_2,\ldots,\theta_k)\). The MMEs are found by equating the first k population moments to the corresponding sample moments and solving the resulting system of equations.

Population moments: \(\mu_k' = E(X^k)\) \(\qquad\) Sample moments: \(M_k = \dfrac{1}{n}\sum_{i=1}^{n} X_i^k\)

7

METHOD OF MOMENTS ESTIMATION (MME)

\[ \mu_1' = E(X) = M_1 = \frac{1}{n}\sum_{i=1}^{n} X_i \]

\[ \mu_2' = E(X^2) = M_2 = \frac{1}{n}\sum_{i=1}^{n} X_i^2 \]

\[ \mu_3' = E(X^3) = M_3 = \frac{1}{n}\sum_{i=1}^{n} X_i^3 \]

and so on…

Continue this until there are enough equations to solve for the unknown parameters.
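As an illustration beyond the slides' examples, take a two-parameter family: if X ~ Gamma(α, β) with \(E(X) = \alpha\beta\) and \(\operatorname{Var}(X) = \alpha\beta^2\) (shape–scale parameterization, an assumption here), equating the first two moments to \(M_1 = \bar{X}\) and \(M_2\) gives \(\hat{\alpha} = \bar{X}^2/(M_2 - \bar{X}^2)\) and \(\hat{\beta} = (M_2 - \bar{X}^2)/\bar{X}\). A minimal Python sketch:

```python
import numpy as np

def gamma_mme(x):
    """MME for Gamma(alpha, beta) assuming E[X] = alpha*beta.

    Equate mu_1' = alpha*beta and mu_2' - mu_1'^2 = alpha*beta^2
    (the variance) to the corresponding sample moments and solve.
    """
    x = np.asarray(x, dtype=float)
    m1 = x.mean()                 # first sample moment
    m2 = np.mean(x ** 2)          # second sample moment
    var_hat = m2 - m1 ** 2        # moment estimate of the variance
    return m1 ** 2 / var_hat, var_hat / m1   # (alpha_hat, beta_hat)

# quick check on simulated data
rng = np.random.default_rng(0)
sample = rng.gamma(shape=2.0, scale=3.0, size=10_000)
print(gamma_mme(sample))  # should be near (2.0, 3.0)
```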

8

EXAMPLES

• Let X ~ Exp(θ).
• For a r.s. of size n, find the MME of θ.
• For the following sample (assuming it is from Exp(θ)), find the estimate of θ: 11.37, 3, 0.15, 4.27, 2.56, 0.59.
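A worked sketch, assuming the mean parameterization \(E(X) = \theta\) (the slides do not fix the convention; under the rate parameterization the MME would be \(1/\bar{X}\)): equating the first population moment to the first sample moment gives \(\hat{\theta} = \bar{X}\). For the data above:

```python
import numpy as np

# MME of theta for X ~ Exp(theta), assuming E[X] = theta:
# equating mu_1' = theta to M_1 = x_bar gives theta_hat = x_bar.
sample = np.array([11.37, 3.0, 0.15, 4.27, 2.56, 0.59])
theta_hat = sample.mean()
print(theta_hat)  # approximately 3.66
```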

9

EXAMPLES

• Let X ~ N(μ, σ²). For a r.s. of size n, find the MMEs of μ and σ².

• For the following sample (assuming it is from N(μ, σ²)), find the estimates of μ and σ²: 4.93, 6.82, 3.12, 7.57, 3.04, 4.98, 4.62, 4.84, 2.95, 4.22
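A worked sketch: equating \(\mu_1' = \mu\) and \(\mu_2' = \sigma^2 + \mu^2\) to \(M_1\) and \(M_2\) gives \(\hat{\mu} = \bar{X}\) and \(\hat{\sigma}^2 = M_2 - \bar{X}^2 = \frac{1}{n}\sum_i (X_i - \bar{X})^2\). For the data above:

```python
import numpy as np

# MMEs for N(mu, sigma^2): the sample mean and the
# (biased, divide-by-n) sample variance.
sample = np.array([4.93, 6.82, 3.12, 7.57, 3.04, 4.98,
                   4.62, 4.84, 2.95, 4.22])
mu_hat = sample.mean()
sigma2_hat = np.mean(sample ** 2) - mu_hat ** 2
print(mu_hat, sigma2_hat)  # approximately 4.71 and 2.14
```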

10

DRAWBACKS OF MMES

• Although parameters are sometimes positive valued, MMEs can be negative.

• If the moments do not exist, we cannot find the MMEs.

11

MAXIMUM LIKELIHOOD ESTIMATION (MLE)

• Let X₁, X₂, …, Xₙ be a r.s. from a population with pmf or pdf \(f(x;\theta_1,\theta_2,\ldots,\theta_k)\). The likelihood function is defined by

\[
L(\theta_1,\ldots,\theta_k \mid x_1,\ldots,x_n)
= f(x_1,\ldots,x_n;\theta_1,\ldots,\theta_k)
= \prod_{i=1}^{n} f(x_i;\theta_1,\ldots,\theta_k),
\]

where the product form holds because the observations in a r.s. are independent and identically distributed.

12

MAXIMUM LIKELIHOOD ESTIMATION (MLE)

• For each sample point (x₁,…,xₙ), let \(\big(\hat{\theta}_1(x_1,\ldots,x_n),\ldots,\hat{\theta}_k(x_1,\ldots,x_n)\big)\) be a parameter value at which \(L(\theta_1,\ldots,\theta_k \mid x_1,\ldots,x_n)\) attains its maximum as a function of \((\theta_1,\ldots,\theta_k)\), with \((x_1,\ldots,x_n)\) held fixed. A maximum likelihood estimator (MLE) of the parameters \((\theta_1,\ldots,\theta_k)\) based on a sample \((X_1,\ldots,X_n)\) is

\[ \big(\hat{\theta}_1(X_1,\ldots,X_n),\ \ldots,\ \hat{\theta}_k(X_1,\ldots,X_n)\big). \]

• The MLE is the parameter point for which the observed sample is most likely.

13

EXAMPLES

• Let X ~ Bin(n, p). One observation on X is available, and it is known that n is either 2 or 3 and p = 1/2 or 1/3. Our objective is to estimate the pair (n, p).

x | (2,1/2) | (2,1/3) | (3,1/2) | (3,1/3) | Max. Prob.
0 |  1/4    |  4/9    |  1/8    |  8/27   |  4/9
1 |  1/2    |  4/9    |  3/8    |  12/27  |  1/2
2 |  1/4    |  1/9    |  3/8    |  6/27   |  3/8
3 |  0      |  0      |  1/8    |  1/27   |  1/8

\[
(\hat{n},\hat{p}) =
\begin{cases}
(2,\,1/3) & \text{if } x = 0,\\
(2,\,1/2) & \text{if } x = 1,\\
(3,\,1/2) & \text{if } x = 2,\\
(3,\,1/2) & \text{if } x = 3.
\end{cases}
\]
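The table can be reproduced by brute force: for each possible observation x, evaluate the binomial pmf at every candidate (n, p) and keep the maximizer. A minimal sketch using scipy.stats (the library choice is ours, not the slides'):

```python
from scipy.stats import binom

# Candidate parameter pairs from the example.
candidates = [(2, 1/2), (2, 1/3), (3, 1/2), (3, 1/3)]

# For each possible observation x, pick the (n, p) maximizing the pmf.
for x in range(4):
    probs = {(n, p): binom.pmf(x, n, p) for n, p in candidates}
    mle = max(probs, key=probs.get)
    print(x, mle, round(probs[mle], 4))
```

The printed pairs match the piecewise MLE above.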

14

MAXIMUM LIKELIHOOD ESTIMATION (MLE)

• It is usually convenient to work with the logarithm of the likelihood function.

• Suppose that \(f(x;\theta_1,\theta_2,\ldots,\theta_k)\) is a positive, differentiable function of \(\theta_1,\theta_2,\ldots,\theta_k\). If a supremum \((\hat{\theta}_1,\hat{\theta}_2,\ldots,\hat{\theta}_k)\) exists, it must satisfy the likelihood equations

\[
\frac{\partial \ln L(\theta_1,\ldots,\theta_k;\,x_1,\ldots,x_n)}{\partial \theta_j} = 0, \qquad j = 1,2,\ldots,k.
\]

• An MLE occurring at the boundary of \(\Omega\) cannot be obtained by differentiation; use inspection instead.

• Moreover, you need to check that you are in fact maximizing the log-likelihood (or likelihood) by checking that the second derivative is negative.
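When the likelihood equations have no closed-form solution, the maximum is found numerically (one reason numerical and search methods appear among the disadvantages listed later). A sketch under assumptions of our own choosing, not from the slides: fit a Gamma(α, β) model (shape–scale parameterization) by minimizing the negative log-likelihood, using the MMEs as starting values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

def neg_log_lik(params, x):
    """Negative log-likelihood for Gamma(alpha, beta), E[X] = alpha*beta."""
    alpha, beta = params
    if alpha <= 0 or beta <= 0:   # stay inside the parameter space
        return np.inf
    return -np.sum(gamma.logpdf(x, a=alpha, scale=beta))

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=3.0, size=500)

# The MMEs make natural starting values for the numerical search.
m1, m2 = x.mean(), np.mean(x ** 2)
start = (m1 ** 2 / (m2 - m1 ** 2), (m2 - m1 ** 2) / m1)

res = minimize(neg_log_lik, start, args=(x,), method="Nelder-Mead")
print(res.x)  # MLEs of (alpha, beta), near (2.0, 3.0)
```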

15

16

EXAMPLES

1. X ~ Exp(θ), θ > 0. For a r.s. of size n, find the MLE of θ.
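A worked sketch, assuming the mean parameterization \(f(x;\theta) = \frac{1}{\theta}e^{-x/\theta}\), x > 0 (under the rate parameterization the MLE is \(1/\bar{X}\) instead):

\[
\ln L(\theta) = -n\ln\theta - \frac{1}{\theta}\sum_{i=1}^{n} x_i,
\qquad
\frac{d\,\ln L}{d\theta} = -\frac{n}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^{n} x_i = 0
\;\Rightarrow\; \hat{\theta} = \bar{X}.
\]

The second derivative at \(\hat{\theta}\) equals \(-n/\bar{x}^2 < 0\), confirming a maximum.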

17

EXAMPLES

2. X ~ N(μ, σ²). For a r.s. of size n, find the MLEs of μ and σ².
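A worked sketch: with \(\ln L(\mu,\sigma^2) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2\), setting the partial derivatives with respect to μ and σ² to zero gives

\[
\hat{\mu} = \bar{X}, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})^2,
\]

the same as the MMEs in this model. Note the divide-by-n form: \(\hat{\sigma}^2\) is not the unbiased sample variance \(S^2\).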

18

EXAMPLES

3. X ~ Uniform(0, θ), θ > 0. For a r.s. of size n, find the MLE of θ.
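A worked sketch illustrating the boundary case mentioned with the likelihood equations: here

\[
L(\theta) = \theta^{-n} \quad \text{for } \theta \ge \max_i x_i, \qquad L(\theta) = 0 \text{ otherwise},
\]

which is strictly decreasing in θ on its support, so differentiation yields no solution and inspection gives \(\hat{\theta} = X_{(n)} = \max(X_1,\ldots,X_n)\).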

19

INVARIANCE PROPERTY OF THE MLE

• If \(\hat{\theta}\) is the MLE of \(\theta\), then for any function \(\tau(\theta)\), the MLE of \(\tau(\theta)\) is \(\tau(\hat{\theta})\).

Example: X ~ N(μ, σ²). For a r.s. of size n, the MLE of μ is \(\bar{X}\). By the invariance property of the MLE, the MLE of μ² is \(\bar{X}^2\).

20

ADVANTAGES OF MLE

• Often yields good estimates, especially for large sample sizes.
• MLEs are usually consistent estimators.
• Invariance property of MLEs.
• The asymptotic distribution of the MLE is normal.
• It is the most widely used estimation technique.

21

DISADVANTAGES OF MLE

• Requires that the pdf or pmf be known except for the values of the parameters.
• The MLE may not exist or may not be unique.
• The MLE may not be obtainable explicitly (numerical or search methods may be required), and it is sensitive to the choice of starting values in a numerical search.
• MLEs can be heavily biased for small samples.
• The optimality properties may not apply for small samples.

22

SOME PROPERTIES OF ESTIMATORS

• UNBIASED ESTIMATOR (UE): An estimator \(\hat{\theta}\) is a UE of the unknown parameter \(\theta\) if

\[ E(\hat{\theta}) = \theta \quad \text{for all } \theta \in \Omega. \]

Otherwise, it is a biased estimator of \(\theta\), with

\[ \operatorname{Bias}(\hat{\theta}) = E(\hat{\theta}) - \theta \]

the bias of \(\hat{\theta}\) for estimating \(\theta\). If \(\hat{\theta}\) is a UE of \(\theta\), then \(\operatorname{Bias}(\hat{\theta}) = 0\).

23

SOME PROPERTIES OF ESTIMATORS

• ASYMPTOTICALLY UNBIASED ESTIMATOR (AUE): An estimator \(\hat{\theta}\) is an AUE of the unknown parameter \(\theta\) if

\[ \operatorname{Bias}(\hat{\theta}) \neq 0 \quad \text{but} \quad \lim_{n\to\infty} \operatorname{Bias}(\hat{\theta}) = 0. \]

24

SOME PROPERTIES OF ESTIMATORS

• CONSISTENT ESTIMATOR (CE): An estimator \(\hat{\theta}\) which converges in probability to the unknown parameter \(\theta\) for all \(\theta \in \Omega\) is called a CE of \(\theta\):

\[ \hat{\theta} \xrightarrow{p} \theta. \]

• MLEs are generally CEs.

For large n, a CE tends to be closer to the unknown population parameter.

25

EXAMPLE

For a r.s. of size n, \(E(\bar{X}) = \mu\), so \(\bar{X}\) is a UE of \(\mu\). By the WLLN,

\[ \bar{X} \xrightarrow{p} \mu, \]

so \(\bar{X}\) is also a CE of \(\mu\).

26

MEAN SQUARED ERROR (MSE)

• The mean squared error (MSE) of an estimator \(\hat{\theta}\) for estimating \(\theta\) is

\[ \operatorname{MSE}(\hat{\theta}) = E(\hat{\theta} - \theta)^2 = \operatorname{Var}(\hat{\theta}) + \big[\operatorname{Bias}(\hat{\theta})\big]^2. \]

If \(\operatorname{MSE}(\hat{\theta})\) is smaller, \(\hat{\theta}\) is the better estimator of \(\theta\). For two estimators \(\hat{\theta}_1\) and \(\hat{\theta}_2\) of \(\theta\), if

\[ \operatorname{MSE}(\hat{\theta}_1) < \operatorname{MSE}(\hat{\theta}_2), \]

then \(\hat{\theta}_1\) is the better estimator of \(\theta\).
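The variance–bias decomposition follows by adding and subtracting \(E(\hat{\theta})\) inside the square:

\[
E(\hat{\theta}-\theta)^2
= E\big[(\hat{\theta}-E\hat{\theta}) + (E\hat{\theta}-\theta)\big]^2
= \operatorname{Var}(\hat{\theta}) + \big[\operatorname{Bias}(\hat{\theta})\big]^2,
\]

since the cross term \(2(E\hat{\theta}-\theta)\,E(\hat{\theta}-E\hat{\theta})\) is zero.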

27

MEAN SQUARED ERROR CONSISTENCY

• \(T_n\) is called mean squared error consistent (or consistent in quadratic mean) if \(E(T_n - \theta)^2 \to 0\) as \(n \to \infty\).

Theorem: \(T_n\) is consistent in MSE iff
i) \(\operatorname{Var}(T_n) \to 0\) as \(n \to \infty\), and
ii) \(\lim_{n\to\infty} E(T_n) = \theta\).

• If \(E(T_n - \theta)^2 \to 0\) as \(n \to \infty\), then \(T_n\) is also a CE of \(\theta\).
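For example, for a r.s. with mean μ and finite variance σ², \(E(\bar{X}) = \mu\) and

\[ E(\bar{X}-\mu)^2 = \operatorname{Var}(\bar{X}) = \frac{\sigma^2}{n} \to 0 \quad \text{as } n \to \infty, \]

so by the theorem \(\bar{X}\) is MSE-consistent, and hence also a CE, for μ.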

28

EXAMPLES

X ~ Exp(θ), θ > 0. For a r.s. of size n, consider the following estimators of θ, and discuss their bias and consistency:

\[
T_1 = \frac{\sum_{i=1}^{n} X_i}{n}, \qquad T_2 = \frac{\sum_{i=1}^{n} X_i}{n+1}.
\]

Which estimator is better?
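A simulation sketch (not from the slides), assuming the mean parameterization \(E(X) = \theta\): then \(T_1 = \bar{X}\) is unbiased with \(\operatorname{MSE}(T_1) = \theta^2/n\), while \(T_2\) is biased but asymptotically unbiased, with \(\operatorname{MSE}(T_2) = \theta^2/(n+1)\). So \(T_2\) has the smaller MSE, and both estimators are consistent since their variances and biases vanish as \(n \to \infty\). The code below estimates the two MSEs by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(42)
theta, n, reps = 2.0, 10, 100_000

# Draw `reps` samples of size n from Exp(theta), mean parameterization.
samples = rng.exponential(scale=theta, size=(reps, n))
sums = samples.sum(axis=1)

t1 = sums / n        # unbiased:  MSE = theta^2 / n     = 0.4
t2 = sums / (n + 1)  # biased:    MSE = theta^2 / (n+1) ~ 0.364

for name, t in [("T1", t1), ("T2", t2)]:
    print(name, "bias ~", round(t.mean() - theta, 4),
          "MSE ~", round(np.mean((t - theta) ** 2), 4))
```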