Econometric Theory/Ordinary Least Squares (OLS)

Revision as of 21:35, 25 October 2019 by imported>Cannotfindausernamepleasehelpme (The Model: Added a subscript)

Ordinary Least Squares, or OLS, is one of the simplest methods of linear regression. The goal of OLS is to "fit" a function to the data as closely as possible. It does so by minimizing the sum of squared errors between the fitted line and the data.

Why we Square Errors before Summing

We are not trying to minimize the sum of errors, but rather the sum of squared errors. Let's take a brief look at our sweater story again.

(Figures: fitted lines for Model A and Model B)

model | data point | error from line
------+------------+----------------
  A   |     1      |        5
  A   |     2      |       10
  A   |     3      |       -5
  A   |     4      |      -10
  B   |     1      |        3
  B   |     2      |       -3
  B   |     3      |        3
  B   |     4      |       -3

Notice that the sum of Model A's errors is 5 + 10 − 5 − 10 = 0 and that the sum of Model B's errors is 3 − 3 + 3 − 3 = 0.

The errors of both models sum to 0. Does this mean they are both great fits? No!

So to account for the signs, whenever we sum errors, we square each term first. Thus positive and negative deviations are penalized equally while we try to minimize the errors of the fitted line.
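The point is easy to verify numerically. The sketch below uses the error values from the table above; the variable names are our own, not from the text.

```python
# Errors taken from the Model A / Model B table above.
errors_a = [5, 10, -5, -10]   # Model A
errors_b = [3, -3, 3, -3]     # Model B

# Plain sums: both are zero, so they cannot distinguish the fits.
sum_a = sum(errors_a)                  # 0
sum_b = sum(errors_b)                  # 0

# Sums of squared errors: Model B is clearly the tighter fit.
sse_a = sum(e ** 2 for e in errors_a)  # 25 + 100 + 25 + 100 = 250
sse_b = sum(e ** 2 for e in errors_b)  # 9 + 9 + 9 + 9 = 36
print(sum_a, sum_b, sse_a, sse_b)
```

Both raw sums vanish, but the squared sums (250 vs. 36) immediately rank Model B as the better fit.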

The Model

These two models each have an intercept term α and a slope term β (some textbooks use β0 instead of α and β1 instead of β, a notation that works much better once we move to multivariate formulas). We can represent an arbitrary single-variable model with the formula: yi = α + βxi + ui. The y-values are related to the x-values by this formula. y is called the dependent variable and x is called the independent variable, since the value of yi is determined by the value of xi. We use the subscript i to denote an observation, so y1 is paired with x1, y2 with x2, etc. The ui term is the error term, which is the difference between the observed value of yi and the effect of xi.

Unfortunately, we don't know the values of α, β, or ui. We have to estimate them. We can do this by using the ordinary least squares method. The term "least squares" means that we are trying to minimize the sum of squares, or more specifically the sum of squared error terms. Since there are two parameters that we need to minimize with respect to (α and β), we have two equations:
f = Σui² = Σ(yi − α − βxi)²
∂f/∂α = −2Σ(yi − α − βxi) = 0
∂f/∂β = −2Σ(yi − α − βxi)xi = 0
Call the solutions to these equations α^ and β^. Solving we get:
α^ = y¯ − β^x¯
β^ = Σ(xi − x¯)yi / Σ(xi − x¯)²
where y¯ = Σyi/n and x¯ = Σxi/n. The derivation of these results is left as an exercise.
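The estimator formulas above can be sketched directly in code. The data here are made up for illustration (chosen to lie exactly on the line y = 1 + 2x, so the estimates come out exact); the variable names are our own.

```python
# Illustrative data lying exactly on y = 1 + 2x (no noise).
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]

n = len(x)
x_bar = sum(x) / n   # sample mean of x
y_bar = sum(y) / n   # sample mean of y

# beta_hat = sum((x_i - x_bar) * y_i) / sum((x_i - x_bar)^2)
beta_hat = (sum((xi - x_bar) * yi for xi, yi in zip(x, y))
            / sum((xi - x_bar) ** 2 for xi in x))

# alpha_hat = y_bar - beta_hat * x_bar
alpha_hat = y_bar - beta_hat * x_bar

print(alpha_hat, beta_hat)   # 1.0 2.0
```

Because the data were generated without noise, the estimates recover the true intercept (1) and slope (2) exactly; with real data they would only approximate them.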

It is important to know that α^ and β^ are not the same as α and β because they are based on a single sample rather than the entire population. If you took a different sample, you would get different values for α^ and β^. Let's call α^ and β^ the OLS estimators of α and β. One of the main goals of econometrics is to analyze the quality of these estimators and see under what conditions these are good estimators and under which conditions they are not.

Once we have α^ and β^, we can construct two more variables. The first is the fitted values, or estimates of y:
y^i=α^+β^xi
The second is the estimates of the error terms, which we will call the residuals:
u^i = yi − y^i
These two variables will be important later on.
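A minimal sketch of both constructions, on noisy made-up data (the numbers and variable names are ours, not from the text). It also demonstrates a standard property: the OLS residuals sum to zero, which follows directly from the first-order condition with respect to α.

```python
# Noisy illustrative data, roughly following y = 2x.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# OLS estimates from the formulas in the previous section.
beta_hat = (sum((xi - x_bar) * yi for xi, yi in zip(x, y))
            / sum((xi - x_bar) ** 2 for xi in x))
alpha_hat = y_bar - beta_hat * x_bar

# Fitted values: y^_i = alpha_hat + beta_hat * x_i
y_hat = [alpha_hat + beta_hat * xi for xi in x]

# Residuals: u^_i = y_i - y^_i
resid = [yi - yh for yi, yh in zip(y, y_hat)]

# The residuals sum to zero (up to floating-point rounding).
print(sum(resid))
```

That the residuals always sum to zero (when the model includes an intercept) is one reason the residuals are useful later for diagnostics and variance estimation.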
