Confidence Interval & Unbiased Estimator
Review and Foreword
Central limit theorem vs. the weak law of large numbers
Weak law vs. strong law
Personal research: search the web or the library, compare these results, and explain why they differ (standard statements are given below for reference).
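For reference, a minimal LaTeX sketch of the three results being compared, for i.i.d. X1, X2, … with mean μ; the notation follows common probability texts rather than the slides themselves:

% Weak law of large numbers: the sample mean converges in probability to mu.
\lim_{n \to \infty} P\!\left( \left| \bar{X}_n - \mu \right| > \varepsilon \right) = 0
  \quad \text{for every } \varepsilon > 0

% Strong law of large numbers: the sample mean converges almost surely to mu.
P\!\left( \lim_{n \to \infty} \bar{X}_n = \mu \right) = 1

% Central limit theorem: the standardized sample mean converges in
% distribution to a standard normal (requires finite variance sigma^2).
\sqrt{n} \, \frac{\bar{X}_n - \mu}{\sigma} \xrightarrow{d} N(0, 1)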
Cont.
Maximum likelihood estimator
Suppose the i.i.d. random variables X1, X2, …, Xn, whose joint distribution is assumed known except for an unknown parameter θ, are to be observed and constitute a random sample.
By independence, the joint density factors as f(x1, x2, …, xn | θ) = f(x1 | θ) f(x2 | θ) ⋯ f(xn | θ). Once the sample (x1, x2, …, xn) is observed, the likelihood f(x1, x2, …, xn | θ) is a function of θ alone.
The maximum likelihood estimator of θ, denoted by θ̂, is the value of θ that maximizes the likelihood of the observed values.
Differentiate the (log-)likelihood with respect to θ, set the first-order condition equal to zero, and solve in terms of X1, X2, …, Xn to obtain θ̂.
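A standard worked example of this recipe (Bernoulli trials; an illustration, not taken from the slides). Maximizing the log-likelihood is equivalent because the logarithm is monotone, and it turns the product into a sum:

\ell(p) = \log \prod_{i=1}^{n} p^{x_i} (1 - p)^{1 - x_i}
        = \left( \sum_{i} x_i \right) \log p
          + \left( n - \sum_{i} x_i \right) \log(1 - p)

\frac{d\ell}{dp} = \frac{\sum_i x_i}{p} - \frac{n - \sum_i x_i}{1 - p} = 0
  \quad \Longrightarrow \quad
  \hat{p} = \frac{1}{n} \sum_{i=1}^{n} x_i = \bar{x}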
Confidence interval
Confidence vs. Probability
Probability describes the distribution of a random variable; here, the random interval computed from the sample.
Confidence (trust) describes how far a specific sampling outcome can be taken to reflect the underlying population.
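A minimal Python sketch of this distinction, under illustrative assumed values (mu, sigma, n, and the known-sigma z-interval are all chosen for the demo): any one computed interval either covers μ or it does not, but across repeated samples about 95% of the intervals do.

import numpy as np

# The interval is random; the population mean mu is fixed.
# Repeat the experiment many times and count how often the
# 95% z-interval (known sigma) covers mu.
rng = np.random.default_rng(0)
mu, sigma, n, trials = 10.0, 2.0, 30, 10_000
z = 1.96  # z_{alpha/2} for alpha = 0.05

covered = 0
for _ in range(trials):
    x = rng.normal(mu, sigma, n)
    half_width = z * sigma / np.sqrt(n)
    covered += (x.mean() - half_width <= mu <= x.mean() + half_width)

print(f"empirical coverage: {covered / trials:.3f}")  # close to 0.95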
100(1-α)% confidence intervals
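For the population mean with known σ (normal population or large sample), the standard textbook form is the one below; when σ is unknown, the sample standard deviation S and t_{α/2, n-1} replace σ and z_{α/2}:

\bar{X} \pm z_{\alpha/2} \, \frac{\sigma}{\sqrt{n}}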
100(1-α)% confidence intervals for (μ1 -μ2)
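For two independent samples with known variances σ1² and σ2², the standard form is:

\left( \bar{X}_1 - \bar{X}_2 \right)
  \pm z_{\alpha/2} \sqrt{ \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2} }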
Approximate 100(1-α)% confidence intervals for p
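The interval for a proportion is only approximate because it rests on the normal approximation to the binomial; with p̂ the sample proportion:

\hat{p} \pm z_{\alpha/2} \sqrt{ \frac{\hat{p}(1 - \hat{p})}{n} }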
Unbiased estimators
Linear combination of several unbiased estimators
If d1, d2, …, dn are independent unbiased estimators of θ, then any estimator of the form d = λ1d1 + λ2d2 + … + λndn with λ1 + λ2 + … + λn = 1 is also unbiased.
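The check is one line by linearity of expectation (independence is not needed for this step, only for variance calculations):

E[d] = \sum_{i=1}^{n} \lambda_i E[d_i]
     = \theta \sum_{i=1}^{n} \lambda_i
     = \theta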
The mean squared error of any estimator equals its variance plus the square of its bias: r(d, θ) = E[(d(X) − θ)²] = E[(d − E[d])²] + (E[d] − θ)².
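The decomposition follows by adding and subtracting E[d] inside the square; the cross term vanishes because E[d − E[d]] = 0:

E\!\left[ (d - \theta)^2 \right]
  = E\!\left[ \left( (d - E[d]) + (E[d] - \theta) \right)^2 \right]
  = E\!\left[ (d - E[d])^2 \right]
    + 2 (E[d] - \theta) \, E\!\left[ d - E[d] \right]
    + (E[d] - \theta)^2
  = \mathrm{Var}(d) + \left( E[d] - \theta \right)^2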
The Bayes estimator
The value of additional information
The Bayes estimator
The observed sample revises the prior distribution of θ into the posterior.
The posterior distribution of θ has smaller variance than the prior.
Ref. pp. 274-275
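A minimal Python sketch of this updating, using the conjugate Beta-Bernoulli pair (the Beta(2, 2) prior, the true θ = 0.7, and the sample sizes are illustrative choices, not from the slides): as observations accumulate, the posterior mean settles near the truth and the posterior variance shrinks.

import numpy as np

# Conjugate updating: a Beta(a, b) prior plus s successes in n Bernoulli
# trials gives a Beta(a + s, b + n - s) posterior.
a, b = 2.0, 2.0          # illustrative prior: Beta(2, 2), mean 0.5
theta_true = 0.7         # illustrative "reality" generating the data
rng = np.random.default_rng(1)

for n in (0, 10, 100, 1000):
    s = rng.binomial(n, theta_true)    # observed number of successes
    a_post, b_post = a + s, b + n - s
    mean = a_post / (a_post + b_post)  # Bayes estimator under squared-error loss
    var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    print(f"n={n:5d}  posterior mean={mean:.3f}  posterior variance={var:.6f}")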