2a.1
All rights reserved by Dr.Bill Wan Sing Hung - HKBU
3-variable Regression
Derive OLS estimators of 3-variable regression
Properties of 3-variable OLS estimators
2a.2
Derive OLS estimators of multiple regression

The fitted model is $Y = \hat\beta_0 + \hat\beta_1 X_1 + \hat\beta_2 X_2 + \hat\varepsilon$, so the residual is $\hat\varepsilon = Y - \hat\beta_0 - \hat\beta_1 X_1 - \hat\beta_2 X_2$.

OLS minimizes the residual sum of squares ($\sum \hat\varepsilon^2$):

$$\min \text{RSS} = \min \sum \hat\varepsilon^2 = \min \sum \left(Y - \hat\beta_0 - \hat\beta_1 X_1 - \hat\beta_2 X_2\right)^2$$

First-order conditions:

$$\frac{\partial\,\text{RSS}}{\partial \hat\beta_0} = 2\sum \left(Y - \hat\beta_0 - \hat\beta_1 X_1 - \hat\beta_2 X_2\right)(-1) = 0$$

$$\frac{\partial\,\text{RSS}}{\partial \hat\beta_1} = 2\sum \left(Y - \hat\beta_0 - \hat\beta_1 X_1 - \hat\beta_2 X_2\right)(-X_1) = 0$$

$$\frac{\partial\,\text{RSS}}{\partial \hat\beta_2} = 2\sum \left(Y - \hat\beta_0 - \hat\beta_1 X_1 - \hat\beta_2 X_2\right)(-X_2) = 0$$
2a.3
Rearranging the three first-order conditions gives the normal equations:

$$n\hat\beta_0 + \hat\beta_1 \sum X_1 + \hat\beta_2 \sum X_2 = \sum Y$$

$$\hat\beta_0 \sum X_1 + \hat\beta_1 \sum X_1^2 + \hat\beta_2 \sum X_1 X_2 = \sum X_1 Y$$

$$\hat\beta_0 \sum X_2 + \hat\beta_1 \sum X_1 X_2 + \hat\beta_2 \sum X_2^2 = \sum X_2 Y$$

Rewrite in matrix form:

$$\begin{pmatrix} n & \sum X_1 & \sum X_2 \\ \sum X_1 & \sum X_1^2 & \sum X_1 X_2 \\ \sum X_2 & \sum X_1 X_2 & \sum X_2^2 \end{pmatrix} \begin{pmatrix} \hat\beta_0 \\ \hat\beta_1 \\ \hat\beta_2 \end{pmatrix} = \begin{pmatrix} \sum Y \\ \sum X_1 Y \\ \sum X_2 Y \end{pmatrix}$$

In matrix notation this is $(X'X)\hat\beta = X'Y$ — the same form as in the 2-variable case, here applied to the 3-variable case.
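The normal equations can be solved numerically by building $X'X$ and $X'Y$ and calling a linear solver. A minimal sketch with synthetic data (all variable names and numbers below are hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical sample for illustration.
rng = np.random.default_rng(0)
n = 50
X1 = rng.normal(5.0, 2.0, n)
X2 = rng.normal(3.0, 1.5, n)
Y = 2.0 - 1.5 * X1 + 0.8 * X2 + rng.normal(0.0, 1.0, n)

# Design matrix X = [1, X1, X2]; (X'X) b = X'Y are the normal equations.
X = np.column_stack([np.ones(n), X1, X2])
XtX = X.T @ X              # the 3x3 matrix of sums shown above
XtY = X.T @ Y              # the 3x1 vector of sums
b_hat = np.linalg.solve(XtX, XtY)   # [b0_hat, b1_hat, b2_hat]
print(b_hat)
```

Solving the linear system directly (rather than forming an explicit inverse) is the standard numerically stable approach.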
2a.4
By Cramer's rule (lowercase letters denote deviations from sample means, e.g. $x_1 = X_1 - \bar X_1$):

$$\hat\beta_0 = \bar Y - \hat\beta_1 \bar X_1 - \hat\beta_2 \bar X_2$$

$$\hat\beta_1 = \frac{(\sum y x_1)(\sum x_2^2) - (\sum y x_2)(\sum x_1 x_2)}{(\sum x_1^2)(\sum x_2^2) - (\sum x_1 x_2)^2}$$

$$\hat\beta_2 = \frac{(\sum y x_2)(\sum x_1^2) - (\sum y x_1)(\sum x_1 x_2)}{(\sum x_1^2)(\sum x_2^2) - (\sum x_1 x_2)^2}$$

Each slope is a ratio of determinants: the numerator replaces the corresponding column of the cross-product matrix with the $Y$ cross-products, and the denominator is the determinant of the cross-product matrix itself.
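The deviation-form formulas above can be checked against the matrix solution; they agree exactly. A minimal sketch with synthetic data (all numbers hypothetical):

```python
import numpy as np

# Hypothetical sample; lowercase variables are deviations from sample means.
rng = np.random.default_rng(1)
n = 40
X1 = rng.normal(10.0, 3.0, n)
X2 = rng.normal(4.0, 2.0, n)
Y = 1.0 + 0.5 * X1 - 2.0 * X2 + rng.normal(0.0, 1.0, n)

y, x1, x2 = Y - Y.mean(), X1 - X1.mean(), X2 - X2.mean()

# Cramer's-rule slopes in deviation form.
den = (x1 @ x1) * (x2 @ x2) - (x1 @ x2) ** 2
b1_hat = ((y @ x1) * (x2 @ x2) - (y @ x2) * (x1 @ x2)) / den
b2_hat = ((y @ x2) * (x1 @ x1) - (y @ x1) * (x1 @ x2)) / den
b0_hat = Y.mean() - b1_hat * X1.mean() - b2_hat * X2.mean()

# Cross-check against the matrix solution (X'X)^{-1} X'Y.
X = np.column_stack([np.ones(n), X1, X2])
b_matrix = np.linalg.solve(X.T @ X, X.T @ Y)
print(np.allclose([b0_hat, b1_hat, b2_hat], b_matrix))  # True
```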
2a.5
Variance-covariance matrix:

$$\text{Var-cov}(\hat\beta) = \begin{pmatrix} \text{Var}(\hat\beta_0) & \text{Cov}(\hat\beta_0,\hat\beta_1) & \text{Cov}(\hat\beta_0,\hat\beta_2) \\ \text{Cov}(\hat\beta_1,\hat\beta_0) & \text{Var}(\hat\beta_1) & \text{Cov}(\hat\beta_1,\hat\beta_2) \\ \text{Cov}(\hat\beta_2,\hat\beta_0) & \text{Cov}(\hat\beta_2,\hat\beta_1) & \text{Var}(\hat\beta_2) \end{pmatrix} = \sigma^2 (X'X)^{-1}$$

Or in matrix form: $(X'X)\hat\beta = X'Y$ (dimensions $3{\times}3$, $3{\times}1$, $3{\times}1$)

$$\Longrightarrow \hat\beta = (X'X)^{-1}(X'Y)$$

and

$$\text{Var-cov}(\hat\beta) = \sigma^2 (X'X)^{-1} \quad\text{with}\quad \hat\sigma^2 = \frac{\sum \hat\varepsilon^2}{n-3}$$
2a.6
$$\text{Var-cov}(\hat\beta) = \sigma^2 \begin{pmatrix} n & \sum X_1 & \sum X_2 \\ \sum X_1 & \sum X_1^2 & \sum X_1 X_2 \\ \sum X_2 & \sum X_2 X_1 & \sum X_2^2 \end{pmatrix}^{-1}$$

where

$$\hat\sigma^2 = \frac{\sum \hat u^2}{n-3} \qquad\text{and in general}\qquad \hat\sigma^2 = \frac{\sum \hat u^2}{n-k-1}$$

with $k = 2$ the number of independent variables (not including the constant term).
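These estimates are straightforward to compute once the residuals are in hand. A minimal sketch with synthetic data (names and numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
X1 = rng.normal(0.0, 1.0, n)
X2 = rng.normal(0.0, 1.0, n)
Y = 1.0 + 2.0 * X1 - 1.0 * X2 + rng.normal(0.0, 0.5, n)

X = np.column_stack([np.ones(n), X1, X2])
b_hat = np.linalg.solve(X.T @ X, X.T @ Y)
resid = Y - X @ b_hat

k = 2                                   # regressors, excluding the constant
sigma2_hat = resid @ resid / (n - k - 1)        # estimate of sigma^2
varcov = sigma2_hat * np.linalg.inv(X.T @ X)    # 3x3 var-cov matrix

se = np.sqrt(np.diag(varcov))           # standard errors of b0, b1, b2
print(se)
```

The diagonal of the matrix gives the coefficient variances; off-diagonal entries are the covariances.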
2a.7
Properties of multiple OLS estimators

(Under the usual assumptions: linear in parameters, random sample, constant $\text{Var}(\varepsilon) = \sigma^2$.)

1. Regression through the mean: the regression line (surface) passes through the means of $Y$, $X_1$, $X_2$, i.e. $\bar Y = \hat\beta_0 + \hat\beta_1 \bar X_1 + \hat\beta_2 \bar X_2$, which gives $\hat\beta_0 = \bar Y - \hat\beta_1 \bar X_1 - \hat\beta_2 \bar X_2$.
2. $\hat Y = \bar Y + \hat\beta_1 x_1 + \hat\beta_2 x_2$, or in deviation form $\hat y = \hat\beta_1 x_1 + \hat\beta_2 x_2$.
3. $\sum \hat\varepsilon = 0$: zero mean of the residuals.
4. $\sum \hat\varepsilon X_1 = \sum \hat\varepsilon X_2 = 0$ (in general $\sum \hat\varepsilon X_k = 0$): residuals are uncorrelated with each regressor.
5. $\sum \hat\varepsilon \hat Y = 0$: residuals are uncorrelated with the fitted values.

Unbiasedness: $E(\hat\beta_i) = \beta_i$.
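Properties 1 and 3-5 hold exactly in any OLS fit with an intercept, and can be verified numerically. A minimal sketch with synthetic data (names and numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
X1 = rng.normal(2.0, 1.0, n)
X2 = rng.normal(-1.0, 2.0, n)
Y = 0.5 + X1 + 3.0 * X2 + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), X1, X2])
b_hat = np.linalg.solve(X.T @ X, X.T @ Y)
Y_fit = X @ b_hat
resid = Y - Y_fit

print(abs(resid.sum()))                  # property 3: ~0
print(abs(resid @ X1), abs(resid @ X2))  # property 4: ~0
print(abs(resid @ Y_fit))                # property 5: ~0
# Property 1: the surface passes through the sample means.
print(np.isclose(Y.mean(), b_hat @ np.array([1.0, X1.mean(), X2.mean()])))  # True
```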
2a.8
Properties of multiple OLS estimators (continued)

6. As $X_1$ and $X_2$ become closely related, $\text{var}(\hat\beta_1)$ and $\text{var}(\hat\beta_2)$ become large (tending to infinity as the relationship becomes exact), so the true values of $\beta_1$ and $\beta_2$ are difficult to pin down.
7. The greater the variation in the sample values of $X_1$ or $X_2$, the smaller the variances of $\hat\beta_1$ and $\hat\beta_2$, and the more precise the estimates.
8. The OLS estimators are BLUE (Gauss-Markov theorem).

All the normality assumptions from the two-variable regression also apply to multiple regression, with one additional assumption: no exact linear relationship among the independent variables (no perfect collinearity, i.e. $X_k \neq \lambda X_j$).
2a.9
The adjusted $R^2$ ($\bar R^2$) as an indicator of overall fit

$$R^2 = \frac{\text{ESS}}{\text{TSS}} = 1 - \frac{\text{RSS}}{\text{TSS}} = 1 - \frac{\sum \hat\varepsilon^2}{\sum y^2}$$

$$\bar R^2 = 1 - \frac{\hat\sigma^2}{S_Y^2} = 1 - \frac{\sum \hat\varepsilon^2 / (n-k-1)}{\sum y^2 / (n-1)} = 1 - \frac{\sum \hat\varepsilon^2}{\sum y^2} \cdot \frac{n-1}{n-k-1}$$

where $k$ is the number of independent variables (not including the constant term) and $n$ the number of observations. Equivalently:

$$\bar R^2 = 1 - (1 - R^2)\,\frac{n-1}{n-k-1}$$

So $\bar R^2 \le R^2$. The adjusted $R^2$ can be negative ($\bar R^2 \gtrless 0$), whereas $0 \le R^2 \le 1$.

Note: don't misuse the adjusted $R^2$; read Studenmund (2001), pp. 53-55.
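Both fit measures follow directly from the residuals. A minimal sketch with synthetic data (names and numbers hypothetical; $X_2$ is deliberately irrelevant so the adjustment bites):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 25
X1 = rng.normal(0.0, 1.0, n)
X2 = rng.normal(0.0, 1.0, n)
Y = 1.0 + 0.3 * X1 + rng.normal(0.0, 1.0, n)   # X2 is irrelevant here

X = np.column_stack([np.ones(n), X1, X2])
b_hat = np.linalg.solve(X.T @ X, X.T @ Y)
resid = Y - X @ b_hat

k = 2                                  # independent variables, excluding constant
rss = resid @ resid
tss = ((Y - Y.mean()) ** 2).sum()
r2 = 1.0 - rss / tss
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

print(r2, r2_adj)                      # adjusted R2 never exceeds R2
```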
2a.10
The meaning of partial regression coefficients

Suppose the true model is $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon$.

$$\frac{\partial Y}{\partial X_1} = \beta_1$$

$\beta_1$ measures the change in the mean value of $Y$ per unit change in $X_1$, holding $X_2$ constant — the 'direct' effect of a unit change in $X_1$ on the mean value of $Y$, net of $X_2$.

$$\frac{\partial Y}{\partial X_2} = \beta_2$$

Holding $X_1$ constant, $\beta_2$ is the direct effect of a unit change in $X_2$ on the mean value of $Y$.

Holding constant: to assess the true contribution of $X_1$ to the change in $Y$, we control for the influence of $X_2$.
2a.12
Suppose X3 is not an explanatory variable but is included in the regression.
2a.13
Example: $Y$ = actual inflation rate (%), $X_1$ = unemployment rate (%), $X_2$ = expected inflation rate (%).

Partial effect: holding the other variables constant.

Auxiliary regressions between the regressors:

$$X_2 = b_{20} + b_{21} X_1 + u_2, \qquad \frac{\partial X_2}{\partial X_1} = b_{21} = 1.1138 \quad\text{(indirect channel from } X_1 \text{ through } X_2\text{)}$$

$$X_1 = b_{10} + b_{12} X_2 + u_1$$

Direct (partial) effects on $Y$, holding the other regressor constant:

$$\frac{\partial Y}{\partial X_1} = \hat\beta_1 = -1.3925 \quad\text{(direct effect from } X_1\text{)}$$

$$\frac{\partial Y}{\partial X_2} = \hat\beta_2 = 1.4700 \quad\text{(direct effect from } X_2\text{)}$$
2a.14
Total effect from $X_1$ ('direct' + 'indirect'):

$$\hat\beta_1 + \hat\beta_2 \cdot b_{21} = -1.392472 + (1.470032)(1.11385) = -1.392472 + 1.637395 = 0.244923$$

Running the simple regression of $Y$ on $X_1$ alone, $Y = \hat\beta_0' + \hat\beta_1' X_1 + \hat\varepsilon$, gives

$$\frac{\partial Y}{\partial X_1} = \hat\beta_1' = 0.2449$$

The simple regression of $Y$ on $X_1$ implicitly reflects the hidden true model that includes $X_2$.
2a.17
Total effect from $X_2$ ('direct' + 'indirect'):

$$\hat\beta_2 + \hat\beta_1 \cdot b_{12} = 1.470032 + (-1.392472)(0.369953) = 1.470032 - 0.515149 = 0.954883$$

Running the simple regression of $Y$ on $X_2$ alone, $Y = \hat\beta_0' + \hat\beta_2' X_2 + \hat\varepsilon$, gives

$$\frac{\partial Y}{\partial X_2} = \hat\beta_2' = 0.954883$$

The simple regression of $Y$ on $X_2$ implicitly reflects the hidden true model that includes $X_1$.
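The decomposition total = direct + indirect is an exact OLS identity (the omitted-variable formula), not an approximation. A minimal numeric check with synthetic data standing in for the slides' inflation/unemployment example (all names and numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
X1 = rng.normal(6.0, 1.5, n)                    # e.g. unemployment rate (%)
X2 = 1.0 + 1.1 * X1 + rng.normal(0.0, 0.5, n)   # e.g. expected inflation (%)
Y = 2.0 - 1.4 * X1 + 1.5 * X2 + rng.normal(0.0, 0.8, n)

# Multiple regression: direct effects b1, b2.
X = np.column_stack([np.ones(n), X1, X2])
b0, b1, b2 = np.linalg.solve(X.T @ X, X.T @ Y)

# Auxiliary regression of X2 on X1: slope b21 (the indirect channel).
x1, x2 = X1 - X1.mean(), X2 - X2.mean()
b21 = (x2 @ x1) / (x1 @ x1)

# Simple regression of Y on X1 alone: its slope equals direct + indirect exactly.
y = Y - Y.mean()
slope_simple = (y @ x1) / (x1 @ x1)
print(np.isclose(slope_simple, b1 + b2 * b21))  # True
```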