Po906 Week 8-10
Conditions of OLS
The full ideal conditions consist of a collection of assumptions about the true
regression model and the data-generating process, and can be thought of
as a description of an ideal data set. The ideal conditions have to be met
for OLS to be a good estimator: BLUE, i.e. unbiased and efficient.
Most real data do not satisfy these conditions, since they are not generated by
an ideal experiment. However, the linear regression model under full ideal
conditions can be thought of as being the benchmark case with which
other models assuming a more realistic DGP should be compared.
One has to be aware of ideal conditions and their violation to be able to control
for deviations from these conditions and render results unbiased or at least
consistent:
1. Linearity in parameters alpha and beta: the DV is a linear function of a set
of IVs and a random error component.
→ Problems: non-linearity, wrong determinants, wrong estimates; a relationship
that is actually there cannot be detected with a linear model.
2. The expected value of the error term is zero for all observations:
$E(\varepsilon_i) = 0$
→ Problem: the intercept is biased.
3. Homoskedasticity: the conditional variance of the error term is
constant in all x and over time: $Var(\varepsilon_i \mid x_i) = \sigma^2$.
The error variance is a measure of model uncertainty. Homoskedasticity
implies that the model uncertainty is identical across observations.
4. The error term is uncorrelated with the explanatory variables:
$Cov(X_i, \varepsilon_i) = E(X_i \varepsilon_i) = X_i\,E(\varepsilon_i) = 0$, since $X_i$ is deterministic.
If all Gauss-Markov assumptions are met, then the OLS estimators alpha
and beta are BLUE, best linear unbiased estimators:
best: the variance of the OLS estimator is minimal, smaller than the
variance of any other linear unbiased estimator.
linear: if the relationship is not linear, OLS is not applicable.
unbiased: the expected values of the estimated beta and alpha
equal the true values describing the relationship between x and y.
Inference
Is it possible to generalize the regression results for the sample under
observation to the universe of cases (the population)?
Can you draw conclusions for individuals, countries, time-points beyond
those observations in your data-set?
• Significance tests are designed to answer exactly these questions.
• If a coefficient is significant (p-value<0.10, 0.05, 0.01) then you can
draw conclusions for observations beyond the sample under
observation.
But…
• Only if the sample matches the characteristics of the
population.
• This is normally the case if all (Gauss-Markov) assumptions of OLS
regression are met by the data under observation.
• If this is not the case, the standard errors of the coefficients might be
biased, and therefore the result of the significance test might be
wrong as well, leading to false conclusions.
Significance test: the t-test
Under the ideal conditions the OLS slope estimator is distributed as
$\hat\beta \sim N\!\left(\beta,\; \frac{\sigma^2}{N \cdot Var(X)}\right)$
The t-test:
• t-test for significance: testing the H0 (null hypothesis) that beta
equals zero: H0: beta=0; HA: beta≠0
• The test statistic follows a Student t distribution under the null:
$t_{n-2} = \frac{\hat\beta - r}{SE(\hat\beta)}, \qquad SE(\hat\beta) = \frac{\hat\sigma}{\sqrt{N \cdot Var(X)}} = \sqrt{\frac{SSR/(n-2)}{N \cdot Var(X)}}$
For the significance test, r = 0, so that
$t_{n-2} = \frac{\hat\beta}{SE(\hat\beta)}$
The null (no significant relationship) will not be rejected if $-1.96 \le t \le 1.96$.
This condition can be expressed in terms of beta by substituting for t:
$-1.96 \le \frac{\hat\beta - 0}{SE(\hat\beta)} \le 1.96 \quad\Longleftrightarrow\quad 0 - 1.96 \cdot SE(\hat\beta) \;\le\; \hat\beta \;\le\; 0 + 1.96 \cdot SE(\hat\beta)$
Substituting $\hat\beta = 1$ and $SE(\hat\beta) = 0.8$:
$-1.568 \le 1 \le 1.568$
Since this inequality holds true, the null hypothesis is not rejected. Thus we
accept that there is rather no relationship between x and y and that beta equals
zero with a high probability (95%).
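A minimal sketch of this computation in Stata, using the bundled auto data as a stand-in (variable names are illustrative only): the t statistic is just the stored coefficient over its standard error, compared against the critical value of the t distribution.

* Recompute the t statistic for one coefficient by hand
sysuse auto, clear
regress price weight
display "t = " _b[weight]/_se[weight]
* two-sided 5% critical value of t with n-2 degrees of freedom
display "critical value = " invttail(e(df_r), 0.025)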
[Figure: probability density function of beta under the null, with the central 95% acceptance region and 2.5% rejection regions in each tail.]
In significance tests:
H0: beta is insignificant, i.e. beta = 0:
Type I error: wrongly rejecting the null hypothesis.
Type II error: wrongly accepting the null that the coefficient is zero.
The choice of significance level increases or decreases the probability of
type I and type II errors:
the smaller the significance level (5%, 1%), the lower the probability of a
type I error and the higher the probability of a type II error.
Confidence Intervals
Significance tests assume that hypotheses come before
the test: beta≠0. However, the significance test leaves us
with some vacuum: we know that beta is different
from zero, but since we have a probabilistic theory we are
not sure what the exact value should be.
A 95% confidence interval collects all values that would not be rejected:
$\hat\beta \pm 1.96 \cdot SE(\hat\beta)$
Thus, all values between 0.216 and 1.784 are theoretically possible
and would not be rejected. They are compatible with the estimated b =
1, which is the central value of the confidence interval.
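As a sketch in Stata (auto data as a stand-in), the interval regress already reports can be reconstructed by hand from the stored results:

sysuse auto, clear
regress price weight
* reconstruct the 95% confidence interval by hand
scalar lb = _b[weight] - invttail(e(df_r), 0.025)*_se[weight]
scalar ub = _b[weight] + invttail(e(df_r), 0.025)*_se[weight]
display "95% CI: " lb " to " ub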
Interpretation of regression results:
reg y x
----------------------------------------------------------------------------------------------------
y| Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+-------------------------------------------------------------------------------------
x | 1.941914 .2049419 9.48 0.000 1.535213 2.348614
_cons | .8609647 .4127188 2.09 0.040 .0419377 1.679992
----------------------------------------------------------------------------------------------------
$SSR = \sum_{i=1}^{n} \hat\varepsilon_i^{\,2} = \sum_{i=1}^{n} \left( y_i - \hat\alpha - \hat\beta x_i \right)^2$
Goodness of Fit
How well does the explanatory variable explain the dependent
variable?
How well does the regression line fit the data?
$R^2 = \frac{\sum_{i=1}^{n} (\hat Y_i - \bar Y)^2}{\sum_{i=1}^{n} (Y_i - \bar Y)^2} = \frac{SSE}{SST} = 1 - \frac{SSR}{SST}$
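To make the decomposition concrete, a small Stata sketch (auto data as a stand-in) recovers R² from the sums of squares stored after regress:

sysuse auto, clear
regress price weight
* R² = SSE/SST, i.e. model sum of squares over total sum of squares
display "R2 by hand   = " e(mss)/(e(mss) + e(rss))
display "reported R2  = " e(r2)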
Unbiasedness of the OLS estimator: $E(\hat\beta) = \beta$
If an estimator is unbiased, then its probability distribution has an expected
value equal to the parameter it is supposed to be estimating. Unbiasedness
does not mean that the estimate we get with any particular sample is equal
to the true parameter or even close. Rather the mean of all estimates from
infinitely drawn random samples equals the true parameter.
[Figure: density of the estimated coefficient b1 across repeated samples.]
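The idea of "the mean of all estimates from infinitely drawn random samples" can be illustrated with a small simulation; this is a sketch under assumed values (true intercept 1, true slope 2, n = 100, 500 replications):

* Sampling distribution of the OLS slope across repeated random samples
capture program drop onedraw
program define onedraw, rclass
    drop _all
    set obs 100
    gen x = rnormal()
    gen y = 1 + 2*x + rnormal()   // true slope is 2
    regress y x
    return scalar b = _b[x]
end
simulate b = r(b), reps(500) seed(11): onedraw
summarize b    // the mean of b should be close to the true value 2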
Sampling Variance of an estimator
Efficiency is a relative measure between two estimators; it refers to
the sampling variance of an estimator, $V(\hat\beta)$.
Let $\hat\beta$ and $\tilde\beta$ be two unbiased estimators of the true parameter $\beta$, with
variances $V(\hat\beta)$ and $V(\tilde\beta)$. Then $\hat\beta$ is said to be relatively more
efficient than $\tilde\beta$ if $V(\hat\beta)$ is smaller than $V(\tilde\beta)$.
The property of relative efficiency only helps us to rank two unbiased
estimators.
[Figure: sampling densities of two estimators, b1 (spread roughly from -20 to 20) and b2 (spread roughly from -4 to 4); b2 has the smaller sampling variance and is therefore relatively more efficient.]
Trade-off between Bias and Efficiency
With real-world data and the related problems, we sometimes have only
the choice between a biased but efficient and an unbiased but
inefficient estimator. Then another criterion can be used to choose
between the two estimators: the root mean squared error (RMSE).
The RMSE is a combination of bias and efficiency and gives us a
measure of the overall performance of an estimator.
RMSE:
$RMSE(\hat\beta) = \sqrt{\frac{1}{K} \sum_{k=1}^{K} \left( \hat\beta_k - \beta_{true} \right)^2}$
$MSE(\hat\beta) = E\!\left[ (\hat\beta - \beta_{true})^2 \right] = Var(\hat\beta) + Bias(\hat\beta, \beta_{true})^2$
[Figure: densities of estimated coefficients from competing estimators, one labeled xtfevd, illustrating the bias-efficiency trade-off.]
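Building on the simulation sketch above (where the true slope was set to 2), the RMSE of the simulated slope estimates can be computed directly from its definition:

* RMSE of the simulated estimates around the assumed true value of 2
gen sqerr = (b - 2)^2
quietly summarize sqerr
display "RMSE = " sqrt(r(mean))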
In a multiple regression with two explanatory variables, the OLS estimator of $\beta_1$ is:
$\hat\beta_1 = \frac{\sum_{i=1}^{n}(x_{i2}-\bar x_2)^2 \sum_{i=1}^{n}(x_{i1}-\bar x_1)(y_i-\bar y) \;-\; \sum_{i=1}^{n}(x_{i1}-\bar x_1)(x_{i2}-\bar x_2) \sum_{i=1}^{n}(x_{i2}-\bar x_2)(y_i-\bar y)}{\sum_{i=1}^{n}(x_{i1}-\bar x_1)^2 \sum_{i=1}^{n}(x_{i2}-\bar x_2)^2 \;-\; \left( \sum_{i=1}^{n}(x_{i1}-\bar x_1)(x_{i2}-\bar x_2) \right)^2}$
In matrix notation: $\hat\beta = (X'X)^{-1}X'y$
With $SST_1 = \sum_{i=1}^{n}(x_{i1}-\bar x_1)^2$ and $R_1^2$ the R² for the regression of $x_{i1}$ on $x_{i2}$:
$R_1^2 = \frac{\sum_{i=1}^{n}(\hat x_{i1}-\bar x_1)^2}{\sum_{i=1}^{n}(x_{i1}-\bar x_1)^2} = \frac{SSE}{SST}$
the standard deviation and standard error of $\hat\beta_1$ are
$SD(\hat\beta_1) = \frac{\sigma}{\sqrt{SST_1\,(1-R_1^2)}}, \qquad SE(\hat\beta_1) = \frac{\hat\sigma}{\sqrt{SST_1\,(1-R_1^2)}}$
F-Test: Testing Multiple Linear Restrictions
• A t-test (as significance test) is associated with each OLS coefficient.
• We also want to test multiple hypotheses about the underlying
parameters beta_0…beta_k.
• The F-test tests multiple restrictions, e.g. that all coefficients are jointly
equal to zero:
H0: beta_0=beta_1=…=beta_k=0
Ha: H0 is not true, thus at least one beta differs from zero
• The F-statistic (or F-ratio) is defined as:
$F = \frac{(SSR_r - SSR_{ur})/q}{SSR_{ur}/(n-k-1)}$
• The F-statistic is F-distributed under the null hypothesis:
$F \sim F_{q,\; n-k-1}$
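In Stata, such joint restrictions can be tested with the test command after regress; a sketch with the auto data (the overall F statistic for all slopes is part of the standard regress output):

sysuse auto, clear
regress price weight length mpg
* H0: the coefficients on length and mpg are jointly zero
test length mpg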
As with simple bivariate regressions we can define SST, SSE and SSR,
and we can calculate the R² in the same way.
BUT: R² never decreases but tends to increase with the number of
explanatory variables.
THUS, R² is a poor tool for deciding whether one variable or several
variables should be added to a model.
We want to know whether a variable has a nonzero partial effect on y in
the population.
Adjusted R² takes the number of explanatory variables into account,
since the R² increases with the number of regressors:
$R^2_{adj} = 1 - \frac{n-1}{n-k-1}\,(1-R^2)$
k is the number of explanatory variables and n the number of
observations
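A quick check of this formula against Stata's own adjusted R² (auto data as a stand-in; e(df_m) stores the number of explanatory variables k):

sysuse auto, clear
regress price weight length mpg
display "reported adj. R2 = " e(r2_a)
display "by hand          = " 1 - (1 - e(r2))*(e(N) - 1)/(e(N) - e(df_m) - 1)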
Comparing Coefficients
The size of the slope parameters depends on the scaling of the
variables (the scale on which a variable is measured), e.g. population
in thousands or in millions, etc.
To be able to compare the effect sizes of different explanatory
variables in a multiple regression we can use standardized
coefficients:
$\hat b_j = \frac{\hat\sigma_{x_j}}{\hat\sigma_y}\,\hat\beta_j \qquad \text{for } j = 1,\dots,k$
Standardized coefficients take the standard deviations of the dependent
and explanatory variables into account. They describe how much
y changes if x changes by one standard deviation instead of one
unit: if x changes by 1 SD, y changes by b_hat SD. This makes the
scale of the regressors irrelevant, and we can compare the
magnitudes of the effects of different explanatory variables (the
variable with the largest standardized coefficient is the most important
in explaining changes in the dependent variable).
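Stata reports standardized coefficients through the beta option of regress; a sketch with the auto data as a stand-in:

sysuse auto, clear
* beta coefficients: effect of a one-SD change in each regressor
regress price weight length, beta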
Problems in Multiple Regressions:
1. Multicollinearity
• Perfect multicollinearity leads to one of the variables being dropped: if x1
and x2 are perfectly correlated (correlation of 1), the statistical
program at hand does the job.
• The higher the correlation, the larger the sampling variance of the
coefficients, the less efficient the estimation, and the higher the
probability of erratic point estimates. Multicollinearity can result in
numerically unstable estimates of the regression coefficients (small
changes in X can result in large changes to the estimated regression
coefficients).
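The variance-inflation factor, VIF_j = 1/(1 - R_j²), quantifies exactly this blow-up of the coefficient variances; in Stata it can be inspected after regress. A sketch with the auto data, where weight and length are strongly correlated:

sysuse auto, clear
regress price weight length
* VIFs well above 1 indicate inflated coefficient variances
estat vif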
2. Heteroskedasticity
Solutions:
• Robust Huber-White sandwich estimator (GLS)
• White heteroskedasticity-consistent VC estimate: manipulates the
variance-covariance matrix of the error term.
• More substantially: include omitted variables
• Dummies for groups of individuals or countries that are assumed to
behave more similarly than others
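A minimal sketch of the Huber-White correction in Stata: the vce(robust) option replaces the conventional variance-covariance matrix with the sandwich estimator; the coefficients are unchanged, only the standard errors differ (auto data as a stand-in):

sysuse auto, clear
* heteroskedasticity-robust (Huber-White) standard errors
regress price weight, vce(robust)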
Tests for Heteroskedasticity:
a. Breusch-Pagan LM test for a known form of heteroskedasticity
(here: groupwise heteroskedasticity):
$LM = \frac{T}{2} \sum_{i=1}^{n} \left( \frac{s_i^2}{s^2} - 1 \right)^2$
$s_i^2$ = group-specific variance of the squared residuals
$s^2$ = variance of the OLS residuals
H0: homoskedasticity; the statistic is distributed Chi² with n-1 degrees of freedom.
The LM test assumes normality of the residuals and is not appropriate if this
assumption is not met.
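Stata's estat hettest runs a Breusch-Pagan/Cook-Weisberg test after regress; note that this is the regression-based variant rather than the groupwise panel version given above:

sysuse auto, clear
regress price weight
* H0: constant variance (homoskedasticity)
estat hettest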
b. Likelihood-ratio test: $-2\ln\lambda = NT \ln\hat\sigma^2 - T \sum_{i=1}^{n} \ln\hat\sigma_i^2 \;\sim\; \chi^2, \; df = n-1$
c. White test if the form of heteroskedasticity is unknown:
• H0: $V(\varepsilon_i \mid x_i) = \sigma^2$
• Ha: $V(\varepsilon_i \mid x_i) = \sigma_i^2$
Robust White VC matrix:
$\hat V(\hat\beta) = (X'X)^{-1}\, X'\hat D X\, (X'X)^{-1}, \qquad \hat D = diag(e_i^2)$
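White's test for heteroskedasticity of unknown form is available in Stata as part of the information-matrix test:

sysuse auto, clear
regress price weight
* White test: H0 homoskedasticity against an unrestricted alternative
estat imtest, white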
The FGLS estimator:
$\hat\beta_{FGLS} = \left( \sum_{i=1}^{N} X_i' \hat\Omega^{-1} X_i \right)^{-1} \sum_{i=1}^{N} X_i' \hat\Omega^{-1} y_i$
Estimated covariance matrix: $\left( X' \hat\Omega^{-1} X \right)^{-1}$
Omega matrix with heteroskedastic error structure and
contemporaneously correlated errors; in principle
FGLS can handle all different correlation structures:
$\Omega = \begin{pmatrix} \sigma_1^2 & \sigma_{21} & \sigma_{31} & \cdots & \sigma_{n1} \\ \sigma_{12} & \sigma_2^2 & \sigma_{32} & \cdots & \sigma_{n2} \\ \sigma_{13} & \sigma_{23} & \sigma_3^2 & \cdots & \sigma_{n3} \\ \vdots & & & \ddots & \vdots \\ \sigma_{1n} & \sigma_{2n} & \sigma_{3n} & \cdots & \sigma_n^2 \end{pmatrix}$
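One Stata implementation of FGLS with panel-level heteroskedasticity and contemporaneous correlation is xtgls with panels(correlated); a sketch using a dataset from the Stata manuals (the data must be xtset):

webuse invest2, clear
xtset company time
* FGLS allowing heteroskedastic and contemporaneously correlated panels
xtgls invest market stock, panels(correlated)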
4. Autocorrelation
The residual in t1 depends on the residual in t0: not
controlling for autocorrelation violates one of the basic
assumptions of OLS and may bias the estimation of the
beta coefficients:
$\varepsilon_{it} = \rho\,\varepsilon_{i,t-1} + \nu_{it}$
Options:
• lagged dependent variable
• differencing the dependent variable
• differencing all variables
• Prais-Winsten transformation of the data
• HAC-consistent VC matrix
Tests:
• Durbin-Watson, Durbin’s m, Breusch-Godfrey test
• Regress e on lag(e)
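A runnable sketch of these tests in Stata, on artificial data with built-in AR(1) errors (rho set to 0.7 and the seed chosen arbitrarily for illustration):

clear
set obs 100
set seed 42
gen t = _n
tsset t
gen x = rnormal()
gen e = rnormal()
replace e = 0.7*e[_n-1] + rnormal() in 2/l   // build in AR(1) errors
gen y = 1 + 2*x + e
regress y x
estat dwatson     // Durbin-Watson d statistic
estat bgodfrey    // Breusch-Godfrey LM test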
Autocorrelation
The error term in t1 is dependent on the error term in t0: not controlling
for autocorrelation violates one of the basic assumptions of OLS and
may bias the estimation of the beta coefficients:
$\varepsilon_{it} = \rho\,\varepsilon_{i,t-1} + \nu_{it}$
The residual of a regression model picks up the influences of those
variables affecting the DV that have not been included in the
regression equation. Thus, persistence in excluded variables is the
most frequent cause of autocorrelation.
Autocorrelation makes no predictions about a trend, though a trend
in the DV is often a sign of serial correlation.
Positive autocorrelation (rho is positive): it is more likely that a positive
value of the error term is followed by a positive one, and a negative value
by a negative one.
Negative autocorrelation (rho is negative): it is more likely that a positive
value of the error term is followed by a negative one, and vice versa.
DW test for first-order autocorrelation:
$d = \frac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}$
HAC (Newey-West) estimator of the middle matrix of the sandwich:
$S^* = \frac{1}{T} \sum_{t=1}^{T} e_t^2\, x_t x_t' \;+\; \frac{1}{T} \sum_{l=1}^{p} \sum_{t=l+1}^{T} w_l\, e_t e_{t-l} \left( x_t x_{t-l}' + x_{t-l} x_t' \right), \qquad w_l = 1 - \frac{l}{p+1}$
OR a simpler test: regress the residual on its lag (and the regressors):
$\hat e_{it} = \beta_0 + \beta_1 x_{it} + \rho\,\hat e_{i,t-1} + \nu_{it}$
A significant estimate of rho indicates first-order autocorrelation.
First Difference models
• Differencing only the dependent variable – only if theory
predicts effects of levels on changes
• FD estimator assumes that the coefficient of the LDV is
exactly 1 – this is often not true
• Theory predicts effects of changes on changes
• Suggested remedy if time series is non-stationary (has a
single unit root), asymptotic analysis for T→ ∞.
• Consistent
$y_{it} - y_{i,t-1} = \sum_{k=1}^{K} \beta_k \left( x_{kit} - x_{ki,t-1} \right) + \left( \varepsilon_{it} - \varepsilon_{i,t-1} \right)$
$\Delta y_{it} = \sum_{k=1}^{K} \beta_k\, \Delta x_{kit} + \Delta\varepsilon_{it}$
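A sketch of the FD estimator in Stata using the D. time-series operator, on artificial data with highly persistent errors (rho set to 0.9 for illustration; with xtset panel data the same syntax applies within panels):

clear
set obs 100
set seed 7
gen t = _n
tsset t
gen x = rnormal()
gen e = rnormal()
replace e = 0.9*e[_n-1] + rnormal() in 2/l
gen y = 0.5 + x + e
* estimate the model in first differences of all variables
regress D.y D.x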
Prais-Winsten Transformation
• Models the serial correlation in the error term; the regression
results for the X variables are more straightforwardly interpretable:
$y_{it} = \alpha + \beta x_{it} + \varepsilon_{it} \quad \text{with} \quad \varepsilon_{it} = \rho\,\varepsilon_{i,t-1} + \nu_{it}$
The $\nu_{it}$ are iid with $N(0, \sigma_\nu^2)$.
• The VC matrix of the error term is
$E(\varepsilon\varepsilon') = \frac{\sigma_\nu^2}{1-\rho^2} \begin{pmatrix} 1 & \rho & \rho^2 & \cdots & \rho^{T-1} \\ \rho & 1 & \rho & \cdots & \rho^{T-2} \\ \rho^2 & \rho & 1 & \cdots & \rho^{T-3} \\ \vdots & & & \ddots & \vdots \\ \rho^{T-1} & \rho^{T-2} & \rho^{T-3} & \cdots & 1 \end{pmatrix}$
The estimation proceeds in steps:
1. OLS estimation of the original model: $y_{it} = \alpha + \beta x_{it} + \varepsilon_{it}$
2. An estimate of the correlation in the residuals is then obtained by the
following auxiliary regression:
$\hat\varepsilon_{it} = \rho\,\hat\varepsilon_{i,t-1} + \nu_{it}$
3. The data are quasi-differenced with the estimated rho; the first observation
is transformed by
$\sqrt{1-\rho^2}\, y_1 = \alpha\sqrt{1-\rho^2} + \beta\sqrt{1-\rho^2}\, x_1 + \sqrt{1-\rho^2}\,\varepsilon_1$
4. OLS is applied to the transformed data.
5. With iterating to convergence, the whole process is repeated until
the change in the estimate of rho is within a specified tolerance: the
new estimates are used to produce fitted values for y, and rho is re-
estimated by:
$y_{it} - \hat y_{it} = \rho \left( y_{i,t-1} - \hat y_{i,t-1} \right) + \nu_{it}$
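Stata implements this procedure in the prais command; a sketch on artificial AR(1) data (rho set to 0.6 for illustration; the corc option would give the Cochrane-Orcutt variant, which drops the first observation instead of transforming it):

clear
set obs 100
set seed 3
gen t = _n
tsset t
gen x = rnormal()
gen e = rnormal()
replace e = 0.6*e[_n-1] + rnormal() in 2/l
gen y = 1 + 2*x + e
* Prais-Winsten FGLS, iterating the estimate of rho to convergence
prais y x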
Distributed Lag Models
• The simplest form is Cochrane-Orcutt: the dynamic structure of
all independent variables is captured by one parameter,
either in the error term or as an LDV.
• If the dynamics are that simple, an LDV or Prais-Winsten is fine
and saves degrees of freedom.
• Problem: if theory predicts different lags for different right-
hand-side variables, then a misspecified model necessarily
leads to bias.
• Test down: start with a relatively large number of lags for
potential candidates (see the sketch below):
$y_{it} = \beta x_{it} + \beta_1 x_{i,t-1} + \beta_2 x_{i,t-2} + \beta_3 x_{i,t-3} + \dots + \beta_n x_{i,t-n} + \varepsilon_{it},$
$n = 1, \dots, t-1$
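Stata's lag-operator lists make such test-down specifications compact; a sketch on artificial data where y in truth depends on x and its first lag (coefficients 0.5 and 0.3 chosen arbitrarily):

clear
set obs 100
set seed 5
gen t = _n
tsset t
gen x = rnormal()
gen y = 1 + 0.5*x + 0.3*L.x + rnormal()
* start with the current value and three lags of x, then test down
regress y L(0/3).x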
Specification Issues in Multiple Regressions:
1. Non-Linearity
One or more explanatory variables have a non-linear effect on the
dependent variable: estimating a linear model would lead to wrong
and/or insignificant results. Thus, even though in the population
there exists a relationship between an explanatory variable and the
dependent variable, this relationship cannot be detected due to
the strict linearity assumption of OLS.
Test:
• The Ramsey RESET F-test gives a first indication for the whole model
• In general, we can use acprplot to verify the linearity assumption
against an explanatory variable, though this is just "eye-balling"
(see the sketch below)
• Theoretical expectations should guide the inclusion of squared
terms.
[Figure: augmented component-plus-residual plot against institutional openness to trade (standardized).]
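Both checks are one-liners after regress in Stata (auto data as a stand-in; variable names illustrative):

sysuse auto, clear
regress price weight mpg
* Ramsey RESET test using powers of the fitted values
estat ovtest
* augmented component-plus-residual plot for one regressor
acprplot weight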
– Different functional forms give parameter estimates that have different substantive
interpretations. The parameters of the linear model have an interpretation as
marginal effects; the elasticities will vary depending on the data. In contrast, the
parameters of the log-log model have an interpretation as elasticities, so the log-
log model assumes a constant elasticity over all values of the data set. Therefore
the coefficients of a log-log model can be interpreted as percentage changes:
if the explanatory variable changes by one percent, the dependent variable
changes by beta percent.
– The log transformation is only applicable when all the observations in the data set
are positive. This can be guaranteed by using a transformation like log(X+k), where
k is a positive scalar chosen to ensure positive values. However, careful thought
has to be given to the interpretation of the parameter estimates.
– For a given data set there may be no particular reason to assume that one
functional form is better than the other. A model selection approach is to estimate
competing models by OLS and choose the model with the highest R-squared.
• Include an additional squared term of the IV to test for U-shaped and inverse
U-shaped relationships. Careful with the interpretation! The sizes of the two
coefficients (linear and squared) determine whether there is indeed a U-
shaped or inverse U-shaped relationship:
$y_i = \beta_1 x_i + \beta_2 x_i^2 + \varepsilon_i$
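A sketch of the squared-term specification in Stata (auto data as a stand-in); since the turning point of a quadratic lies at -β₁/(2β₂), it can be recovered from the estimates:

sysuse auto, clear
gen mpg2 = mpg^2
regress price mpg mpg2
* turning point of the estimated quadratic
display -_b[mpg]/(2*_b[mpg2])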
Hausken, Martin, Plümper 2004: Government Spending and Taxation in
Democracies and Autocracies, Constitutional Political Economy 15, 239-59.
polity_sqr polity govcon
0 0 20.049292
0.25 0.5 18.987946
1 1 18.024796
2.25 1.5 17.159842
4 2 16.393084
6.25 2.5 15.724521
9 3 15.154153
12.25 3.5 14.681982
16 4 14.308005
20.25 4.5 14.032225
25 5 13.85464
30.25 5.5 13.775251
36 6 13.794057
42.25 6.5 13.911059
49 7 14.126257
56.25 7.5 14.43965
64 8 14.851239
72.25 8.5 15.361024
81 9 15.969004
90.25 9.5 16.67518
100 10 17.479551
The "u"-shaped relationship between democracy and government spending:
[Figure: fitted government consumption in % of GDP (y-axis, 13-21) against degree of democracy (x-axis, 0-10), tracing the U-shaped relationship from the table above.]
2. Interaction Effects
[Figure: government spending (spend) against unemployment (unem) for low vs. high Christian democratic portfolio and for low vs. high unemployment.]
[Figure: marginal effect of unemployment on spending, conditional on international trade exposure (trade, 0-150).]
Thick dashed lines give the 95% confidence interval.
The thin dashed line is a kernel density estimate of trade.
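A sketch of an interaction between two continuous regressors in Stata's factor notation, with margins used to trace the conditional marginal effect (auto data as a stand-in; the evaluation points for mpg are chosen arbitrarily for illustration):

sysuse auto, clear
regress price c.weight##c.mpg
* marginal effect of weight at different levels of the conditioning variable
margins, dydx(weight) at(mpg=(15 25 35))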
3. Dummy variables
Problem:
The OLS principle implies the minimization of squared
residuals. From this it follows that extreme cases can have
a strong impact on the regression line.
The inclusion/exclusion of extreme cases might change the
results significantly.
[Figure: leverage plot of the pooled country-year observations, labeled by country; Belgium, Japan, Ireland, the Netherlands, and Switzerland appear among the high-leverage cases.]
jackknife, bootstrap:
• Are both tests and solutions at the same time: they show whether
single observations have an impact on the results. If so, one can use
the jackknifed and bootstrapped coefficients and standard errors,
which are more robust to outliers than normal OLS results.
• Jackknife: takes the original dataset and runs the same regression N
times, each time leaving out one observation.
Example command in Stata: jackknife _b _se: reg spend unem
growthpc depratio left cdem trade lowwage fdi skand
• Bootstrapping is a re-sampling technique: for the specified number
of repetitions, the same regression is run on a different sample
randomly drawn (with replacement) from the original dataset.
Example command: bootstrap _b _se, reps(1000): reg spend unem
growthpc depratio left cdem trade lowwage fdi skand