Linear regression (OLS, assumptions)
The core idea
Linear regression models the relationship between one or more input features and a continuous target variable as a straight line (or hyperplane in multiple dimensions). Given features x₁, …, xₙ, the model predicts:

ŷ = β₀ + β₁x₁ + β₂x₂ + … + βₙxₙ

The β values are the coefficients — the model learns them from data. β₀ is the intercept (the prediction when all features are zero); each βⱼ is the slope for feature xⱼ.
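The prediction formula above can be sketched directly in NumPy; the coefficient values here are made up purely for illustration:

```python
import numpy as np

# Hypothetical learned coefficients for a 2-feature model:
# intercept b0 and slopes b = (b1, b2).
b0 = 1.0
b = np.array([2.0, -0.5])

def predict(x):
    """y_hat = b0 + b1*x1 + b2*x2 — the linear model's point prediction."""
    return b0 + b @ x

print(predict(np.array([3.0, 4.0])))  # 1.0 + 2.0*3.0 - 0.5*4.0 = 5.0
```

With all features at zero, `predict` returns the intercept b0, matching its interpretation above.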
Ordinary Least Squares (OLS)
The standard method for fitting a linear regression is Ordinary Least Squares (OLS): find the coefficients that minimize the sum of squared differences between the predicted and actual values.
OLS has a closed-form solution — no iteration required. In matrix notation, with X as the feature matrix (including a leading column of ones for the intercept) and y as the target vector:

β̂ = (XᵀX)⁻¹Xᵀy
This gives the exact optimal coefficients in one computation. For large numbers of features, gradient descent is used instead, because inverting XᵀX in the Normal Equation becomes expensive.
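A minimal sketch of the Normal Equation in NumPy, on synthetic data with known true coefficients (the data-generating values here are assumptions for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))  # feature matrix: 100 samples, 2 features
# True relationship: y = 1.0 + 2.0*x1 - 0.5*x2 + small noise
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=100)

# Prepend a column of ones so the intercept is estimated as a coefficient.
Xb = np.column_stack([np.ones(len(X)), X])

# Normal Equation: beta_hat = (X^T X)^(-1) X^T y.
# Solving the linear system is more numerically stable than an explicit inverse.
beta_hat = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
print(beta_hat)  # close to [1.0, 2.0, -0.5]
```

In production code you would typically call `np.linalg.lstsq` or a library like scikit-learn rather than forming XᵀX yourself, but the one-shot nature of the solution is the same.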
The OLS assumptions
OLS has good statistical properties — unbiased, minimum variance — when its assumptions hold. These are worth knowing because violations lead to unreliable estimates or incorrect inferences.
1. Linearity. The true relationship between features and target is linear. If the actual relationship is curved, the model is systematically wrong.
2. Independence of errors. The residuals (errors) are not correlated with each other. Violated in time-series data, where observations close in time tend to be related.
3. Homoscedasticity. The variance of the residuals is constant across all values of the features. If errors are larger for higher predicted values (a "fan" shape in residual plots), inference is unreliable.
4. Normality of errors. The residuals are approximately normally distributed. This matters primarily for hypothesis tests and confidence intervals on coefficients, not for point predictions.
5. No perfect multicollinearity. No feature is an exact linear combination of others. If it is, XᵀX is not invertible and OLS has no unique solution. (Imperfect multicollinearity is a separate issue — covered in the Multicollinearity lesson.)
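Perfect multicollinearity is easy to demonstrate: duplicate a feature (up to scaling) and XᵀX loses full rank. A small sketch:

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
# Third column is exactly 2 * the second: perfect multicollinearity.
X = np.column_stack([np.ones(4), x1, 2 * x1])

# X^T X is rank-deficient, so it cannot be inverted and OLS
# has no unique coefficient vector.
print(np.linalg.matrix_rank(X.T @ X))  # 2, not 3
```

This is why a solver will either raise a singular-matrix error or silently return one of infinitely many solutions when a feature duplicates another.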
Checking assumptions
The standard diagnostic is a residual plot: plot the residuals against the fitted values. You want to see random scatter with no pattern.
- A curved pattern suggests non-linearity.
- A fan shape suggests heteroscedasticity.
- A Q-Q plot of residuals checks normality.
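Before plotting, the residuals themselves are cheap to compute and sanity-check numerically. A sketch on synthetic data (the data-generating process here is an assumption for the demo; in practice you would scatter-plot `resid` against `fitted` with matplotlib):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 3.0 + 2.0 * x + rng.normal(scale=1.0, size=200)  # linear, constant-variance noise

# Fit by OLS and compute fitted values and residuals.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
resid = y - fitted

# With an intercept, OLS residuals average exactly zero and are
# uncorrelated with the fitted values — any remaining pattern in a
# residual plot is structure the linear model failed to capture.
print(round(resid.mean(), 6))                      # ~0
print(round(np.corrcoef(fitted, resid)[0, 1], 6))  # ~0
```

These two identities hold by construction, which is exactly why a residual plot is informative: curvature or a fan shape cannot be explained away as a fitting artifact.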
Evaluating fit
Common metrics for regression:
- R²: fraction of variance in y explained by the model. R² = 1 is perfect; R² = 0 is no better than predicting the mean.
- RMSE: root mean squared error — in the same units as y, easy to interpret.
- MAE: mean absolute error — less sensitive to outliers than RMSE.
R² on the training set is optimistic; always evaluate on held-out data.
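All three metrics are one-liners from their definitions; a sketch with a small made-up example (in practice you would use `sklearn.metrics`):

```python
import numpy as np

def r2(y, y_hat):
    """1 - (residual sum of squares / total sum of squares)."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

def rmse(y, y_hat):
    """Root mean squared error, in the units of y."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mae(y, y_hat):
    """Mean absolute error — less sensitive to outliers than RMSE."""
    return np.mean(np.abs(y - y_hat))

y = np.array([3.0, 5.0, 7.0, 9.0])
y_hat = np.array([2.5, 5.0, 7.5, 9.0])
print(r2(y, y_hat), rmse(y, y_hat), mae(y, y_hat))
```

Note that squaring inside RMSE weights a single large error more heavily than MAE does, which is the outlier-sensitivity difference mentioned above.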