# Introduction to Multivariate Linear Regression

• Errors follow the normal distribution with mean 0 and variance σ²I, i.e. ε ~ N(0, σ²I)
• The errors are homoscedastic (they have the same variance σ² for every observation)
• Distinct error terms are uncorrelated
• Y is the n×1 matrix of observations of the dependent variable
• X is the n×p matrix of independent variables (the regressors)
• β is the p×1 matrix of regression coefficients
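In this notation, the model and its ordinary least-squares (OLS) estimator, which the proof below refers to, are:

```latex
Y = X\beta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I),
\qquad \hat{\beta} = (X^\top X)^{-1} X^\top Y
```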

# Proof that the estimator we have found is BLUE (Best Linear Unbiased Estimator)

• The OLS estimator β̂ = (XᵀX)⁻¹XᵀY is a linear combination of Y
• Substituting Y = Xβ + ε in β̂ = (XᵀX)⁻¹XᵀY gives β̂ = (XᵀX)⁻¹XᵀXβ + (XᵀX)⁻¹Xᵀε
• Since (XᵀX)⁻¹XᵀX = I, the equation becomes β̂ = β + (XᵀX)⁻¹Xᵀε
• Since E(ε) = 0, we get E(β̂) = β, so the estimator is unbiased
Let β̃ = CY be another linear estimator of β, where C = (XᵀX)⁻¹Xᵀ + D for some non-zero matrix D
• For β̃ to be an unbiased estimator, E(β̃) = β must hold for every β
• E(β̃) = CXβ = ((XᵀX)⁻¹XᵀX + DX)β; since (XᵀX)⁻¹XᵀX = I, the equation becomes E(β̃) = (I + DX)β
• For the estimator to be unbiased, DX = 0
• Now Var(β̃) = σ²CCᵀ = σ²((XᵀX)⁻¹ + DDᵀ) = Var(β̂) + σ²DDᵀ, using DX = 0
• Since DDᵀ is a non-negative definite matrix, Var(β̃) ≥ Var(β̂)
• Since the variance of any other linear unbiased estimator is at least as large as the variance of our estimator, we conclude that β̂ = (XᵀX)⁻¹XᵀY is the Best Linear Unbiased Estimator
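The closed-form estimator above is easy to check numerically. A minimal sketch with synthetic data (the dimensions and true coefficients here are made up for illustration), comparing the explicit formula against NumPy's least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))              # design matrix of regressors
beta = np.array([2.0, -1.0, 0.5])        # true coefficients (illustrative)
y = X @ beta + rng.normal(scale=0.1, size=n)  # Y = Xβ + ε

# OLS estimator: beta_hat = (X^T X)^{-1} X^T Y
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y

# Cross-check against NumPy's least-squares solver
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))
```

With low noise, both solutions also land close to the true β, as unbiasedness suggests.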
# R-squared (R²)
• R-squared is a statistical measure of how close the data are to the fitted regression line. It is also known as the coefficient of determination, or the coefficient of multiple determination for multiple regression.
It is the percentage of the response variable variation that is explained by a linear model.
Or:
R-squared = Explained variation / Total variation
The formula for R² is:

R² = 1 − SS_res / SS_tot

where SS_res = Σᵢ(yᵢ − ŷᵢ)² is the residual (unexplained) sum of squares and SS_tot = Σᵢ(yᵢ − ȳ)² is the total sum of squares.

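As a quick sketch (with made-up observed and fitted values), R² can be computed directly from these sums of squares:

```python
import numpy as np

y = np.array([3.0, 5.0, 7.0, 9.0])       # observed values (illustrative)
y_hat = np.array([2.8, 5.1, 7.2, 8.9])   # fitted values from some regression

ss_res = np.sum((y - y_hat) ** 2)        # residual (unexplained) variation
ss_tot = np.sum((y - np.mean(y)) ** 2)   # total variation around the mean
r_squared = 1 - ss_res / ss_tot
print(r_squared)  # → 0.995
```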
# Python code for Multivariate Linear Regression

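A minimal sketch of fitting a multivariate linear regression in Python, using synthetic data and scikit-learn's `LinearRegression` (the data, coefficients, and choice of library here are illustrative assumptions, not the article's original code):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))            # three independent variables
true_beta = np.array([1.5, -2.0, 0.7])   # illustrative true coefficients
y = 4.0 + X @ true_beta + rng.normal(scale=0.5, size=200)

model = LinearRegression()
model.fit(X, y)

print("Intercept:", model.intercept_)
print("Coefficients:", model.coef_)
print("R-squared:", model.score(X, y))   # coefficient of determination
```

`score` returns exactly the R² defined above, so the fitted model can be judged by how much of the response variation it explains.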
