When Features Collide: Understanding and Mitigating Collinearity

Shubham Sangole · Published in CodeX · May 22, 2024

Feature collinearity is a critical concept in the world of statistical modelling and machine learning. It refers to the situation where two or more predictor variables in a model are highly linearly related. Understanding and addressing collinearity is essential because it can significantly affect the performance and interpretability of a model.

In this blog, we will delve deep into the concept of feature collinearity, understand its mathematical underpinnings, and explore practical methods to detect and handle it using Python. By the end, you’ll have a comprehensive understanding of collinearity and how to deal with it in your data science projects.

Table of Contents

  1. What is Feature Collinearity?
  2. Mathematical Formulation
  3. Detecting Collinearity
  4. Practical Implementation in Python
  5. Handling Collinearity
  6. Conclusion

1. What is Feature Collinearity?

Feature collinearity occurs when two or more features (independent variables) in a dataset are highly correlated. This means that one feature can be linearly predicted from the others with a significant degree of accuracy. Collinearity can lead to several issues, including:

  • Inflated standard errors of the coefficient estimates.
  • Reduced statistical power of hypothesis tests.
  • Difficulties in determining the individual effect of each predictor.
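To make the first of these issues concrete, here is a minimal simulation on assumed toy data, showing how coefficient estimates become unstable when two predictors are nearly collinear:

import numpy as np

rng = np.random.default_rng(0)
n = 200

# x2 is x1 plus a little noise, so the two predictors are nearly collinear
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)
y = x1 + x2 + rng.normal(size=n)

# Refit OLS on resampled data and watch the coefficient estimates swing
coefs = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)
    A = np.column_stack([np.ones(n), x1[idx], x2[idx]])
    beta, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    coefs.append(beta[1:])
print(np.std(coefs, axis=0))  # large spread = high-variance, unstable estimates

Both true coefficients equal 1, yet across resamples the individual estimates swing widely (only their sum is well determined), which is exactly the inflated-variance problem described above.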

Types of Collinearity

  • Perfect Collinearity: One predictor variable is an exact linear function of another, for example a temperature recorded in both Celsius and Fahrenheit, or all levels of a dummy-encoded category included alongside an intercept (the "dummy variable trap").
  • Near Collinearity: One predictor variable is approximately a linear function of another, as often happens with related measurements such as a house's floor area and its number of rooms.

2. Mathematical Formulation

Consider a linear regression model:

Y = β₀ + β₁X₁ + β₂X₂ + … + βₚXₚ + ϵ

Where Y is the dependent variable, X₁, X₂, …, Xₚ are the predictor variables, β₀, β₁, …, βₚ are the coefficients, and ϵ is the error term.

Variance Inflation Factor (VIF)

One way to quantify collinearity is through the Variance Inflation Factor (VIF). For each predictor Xᵢ, VIF is defined as:

VIFᵢ = 1 / (1 − Rᵢ²)

Where Rᵢ² is the coefficient of determination from regressing Xᵢ on all the other predictors. For example, Rᵢ² = 0.9 gives VIFᵢ = 1 / (1 − 0.9) = 10, and a VIF above 10 is often taken as a sign of high collinearity.

3. Detecting Collinearity

Correlation Matrix

A simple way to detect collinearity is to look at the correlation matrix of the predictor variables. High correlation values (close to 1 or -1) suggest collinearity.

Condition Number

The condition number of the feature matrix X is another measure. It is the ratio of the largest singular value of X to the smallest. Because singular values depend on the units of the features, standardize the features before computing it; a high condition number (greater than about 30) then indicates potential collinearity.

Eigenvalues

Examining the eigenvalues of the correlation matrix (or of XᵀX) can also provide insights. Eigenvalues near zero indicate that the matrix is close to singular, which suggests collinearity.

4. Practical Implementation in Python

Let’s implement the detection of collinearity using Python.

Loading the Dataset

We will use the classic Boston housing dataset for this example. The load_boston helper was removed from scikit-learn in version 1.2, so we load the raw data directly from its original source instead.

import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

# load_boston was removed from scikit-learn, so fetch the raw data directly
data_url = "http://lib.stat.cmu.edu/datasets/boston"
raw_df = pd.read_csv(data_url, sep=r"\s+", skiprows=22, header=None)

# Each observation spans two physical lines in the raw file
data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
feature_names = ["CRIM", "ZN", "INDUS", "CHAS", "NOX", "RM", "AGE",
                 "DIS", "RAD", "TAX", "PTRATIO", "B", "LSTAT"]
X = pd.DataFrame(data, columns=feature_names)

# The target variable (median home value, MEDV)
y = raw_df.values[1::2, 2]

Correlation Matrix

import seaborn as sns
import matplotlib.pyplot as plt

# Calculate the correlation matrix
corr_matrix = X.corr().round(2)

# Display the correlation matrix as a heatmap
plt.figure(figsize=(12, 8))
sns.heatmap(corr_matrix, annot=True, cmap='coolwarm')
plt.title('Correlation Matrix')
plt.show()

Variance Inflation Factor (VIF)

from statsmodels.tools.tools import add_constant

# statsmodels regresses each column on the others without adding an
# intercept, so include a constant column and skip it in the results
X_const = add_constant(X)
vif_data = pd.DataFrame()
vif_data["Feature"] = X.columns
vif_data["VIF"] = [variance_inflation_factor(X_const.values, i + 1)
                   for i in range(X.shape[1])]

print(vif_data)

Condition Number

from numpy.linalg import cond
from sklearn.preprocessing import StandardScaler

# The condition number is scale-sensitive, so standardize the features first
X_scaled = StandardScaler().fit_transform(X)
condition_number = cond(X_scaled)
print(f"Condition Number: {condition_number:.2f}")

5. Handling Collinearity

If collinearity is detected, there are several ways to address it:

Removing Variables

The simplest remedy is to remove one of the collinear variables, guided either by domain knowledge or by the VIF values.

# Example: drop the feature with the highest VIF from the table above
X_reduced = X.drop(columns=['TAX'])
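When several features have high VIFs, it is better to drop them one at a time, because removing one collinear feature changes every other VIF. Below is a minimal sketch of that loop using a hypothetical helper, drop_high_vif, which assumes the add_constant and variance_inflation_factor imports from the VIF section above:

def drop_high_vif(df, threshold=10.0):
    """Iteratively drop the feature with the largest VIF above threshold."""
    df = df.copy()
    while True:
        X_c = add_constant(df)
        vifs = pd.Series(
            [variance_inflation_factor(X_c.values, i + 1) for i in range(df.shape[1])],
            index=df.columns,
        )
        if vifs.max() <= threshold:
            return df
        # Remove the single worst offender and recompute on the next pass
        df = df.drop(columns=[vifs.idxmax()])

X_auto = drop_high_vif(X)
print(X_auto.columns.tolist())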

Combining Variables

Combining collinear variables into a single predictor can reduce collinearity while preserving their shared information. For example, RAD and TAX are strongly correlated in this dataset, so we can replace the pair with one averaged feature:

# Replace two strongly correlated features with the average of their
# standardized values, so a single predictor carries the shared signal
X_std = (X - X.mean()) / X.std()
X_combined = X.drop(columns=['RAD', 'TAX'])
X_combined['RAD_TAX'] = (X_std['RAD'] + X_std['TAX']) / 2

Principal Component Analysis (PCA)

PCA can transform correlated features into a set of uncorrelated components.

from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# PCA is sensitive to feature scales, so standardize before applying it
pca = PCA(n_components=5)  # reduce to 5 components
X_pca = pca.fit_transform(StandardScaler().fit_transform(X))
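It is worth checking how much of the original variance the retained components keep, which explained_variance_ratio_ makes a one-liner:

# Cumulative share of total variance captured by the 5 components
print(pca.explained_variance_ratio_.cumsum())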

Ridge Regression

Ridge regression adds a penalty to the regression model that shrinks the coefficients of collinear variables.

from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# The ridge penalty is scale-sensitive, so standardize inside a pipeline
ridge = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
ridge.fit(X_train, y_train)

# Predict and evaluate
y_pred = ridge.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error: {mse:.2f}")

6. Conclusion

Feature collinearity is a crucial issue that can impact the performance and interpretability of statistical models. Understanding its mathematical basis and detecting it through various techniques is essential for building robust models. By removing or combining variables, using PCA, or applying ridge regression, we can effectively handle collinearity and improve our models.


By following these guidelines and understanding the underlying principles, you can effectively manage collinearity in your data science projects, leading to more accurate and interpretable models.
