3 Top Feature Selection Techniques in Machine Learning (Part 1)

Sivasai Yadav Mudugandla · Published in The Startup · 5 min read · Oct 11, 2020

Improve your model performance with features that contribute more to predictions.

Image by Arek Socha from Pixabay


When do you say a model is good? A model is good when it performs well on unseen data. As Data Scientists, we perform various operations to build a good Machine Learning model. These include data pre-processing (dealing with NAs and outliers, column type conversions, dimensionality reduction, normalization, etc.), exploratory data analysis (EDA), hyperparameter tuning/optimization (finding the set of hyperparameters of the ML algorithm that delivers the best performance), feature selection, and so on.
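As an illustration (not code from the article), here is a minimal pandas sketch of the pre-processing steps listed above — NA handling, outlier treatment, and normalization — on a made-up toy frame; the column names and thresholds are assumptions chosen for the example:

```python
import numpy as np
import pandas as pd

# Hypothetical toy data: one NaN per column and an obvious outlier (age 120).
df = pd.DataFrame({
    "age": [25, np.nan, 40, 120, 31],
    "income": [30_000, 52_000, 61_000, 58_000, np.nan],
})

# 1. Deal with NAs: impute each column with its median.
df = df.fillna(df.median(numeric_only=True))

# 2. Deal with outliers: clip each column to its 1st-99th percentile range.
df = df.clip(df.quantile(0.01), df.quantile(0.99), axis=1)

# 3. Normalize: min-max scale every column into [0, 1].
df = (df - df.min()) / (df.max() - df.min())
```

In practice these steps are usually wrapped in a reusable pipeline so the exact same transformations are applied to training and unseen data.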

“Garbage in, Garbage out.”
If the data fed into an ML model is of poor quality, the model itself will be of poor quality.


Feature Selection

Feature Selection is the process of selecting, from the existing set of features, the subset that contributes most to predicting the output. This improves model performance by removing redundant and/or irrelevant features that carry noise and decrease the accuracy of the model. One of the major advantages of feature selection is that it rescues the model from the high…
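The idea can be sketched with a simple filter-style selector (one of many possible techniques, not necessarily the ones this series covers): score each feature by its correlation with the target and keep only those above a threshold. The synthetic data, column names, and the 0.5 cutoff below are all assumptions for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200

# Hypothetical features: "signal" drives the target, "noise" is irrelevant.
X = pd.DataFrame({
    "signal": rng.normal(size=n),
    "noise": rng.normal(size=n),
})
y = 2 * X["signal"] + rng.normal(scale=0.1, size=n)

# Filter method: keep features whose |correlation| with the target
# exceeds a chosen threshold; the irrelevant column is filtered out.
corr = X.apply(lambda col: col.corr(y))
selected = corr[corr.abs() > 0.5].index.tolist()
print(selected)
```

Correlation filtering is only one of the filter-method family; wrapper and embedded methods (covered later in this kind of series) select features by actually training models.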
