Feature Selection vs. Feature Extraction

Dedeepya Lekkala
4 min read · Sep 5, 2023


Introduction

After data preprocessing, the next crucial step in any data analysis or machine learning project is handling the issue of high dimensionality. Real-world datasets often come with a variety of features, and while more data can be beneficial, it can also introduce challenges. High dimensionality can lead to increased computational demands, overfitting, and a decrease in model interpretability.

In this blog post, we will explore dimensionality reduction techniques, specifically focusing on two fundamental approaches: Feature Extraction and Feature Selection. These techniques play an important role in simplifying complex datasets, helping us extract valuable insights and build more robust predictive models.

What is Dimensionality Reduction?

Dimensionality Reduction is the process of systematically reducing the number of features in a dataset while retaining the important information. It mitigates the challenges posed by high-dimensional datasets. When we say “high dimensionality,” we are referring to datasets with a large number of features or variables.

The Need for Dimensionality Reduction

Imagine working with a dataset that has hundreds or even thousands of features. The complexity of such data can lead to several issues:

  1. Increased Computational Demands: Processing and analyzing high-dimensional data can be computationally expensive and time-consuming. It may even exceed the capabilities of our hardware.
  2. Overfitting: In high-dimensional spaces, machine learning models may become more complex, fitting the noise rather than the underlying patterns in the data. This phenomenon, known as overfitting, can severely impact a model’s predictive power.
  3. Loss of Interpretability: As the number of features increases, models become less interpretable. It becomes harder to pinpoint which features are most influential in making predictions.

Techniques of Dimensionality Reduction:

Feature Extraction: Feature Extraction derives a new, reduced feature space from the information in the initial feature set. Its main objective is to compress the data while retaining the most important information.

Let’s think about working with pictures of people’s faces. Each picture has lots of tiny details, like how bright or dark each pixel is. Imagine having thousands of these details for each face — it’s like having too much information.

Now, if you want to teach a computer to recognize these faces, dealing with all those thousands of details for each face is really hard and takes a lot of computing power, and not all of those details are important for recognizing faces.

So, here’s where Feature Extraction helps:

  • Feature Extraction is like a smart tool that looks at all those tiny details in the pictures and picks out a few really important things.
  • Feature Extraction makes the pictures simpler but still keeps the important stuff that helps the computer recognize faces.
  • It’s like making a shortcut to understand faces better without all the complex details.

Some common techniques used for feature extraction are Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA).
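To make this concrete, here is a minimal sketch of PCA using scikit-learn’s bundled 8×8 digit images as a small stand-in for the face photos above; the dataset and the choice of 10 components are illustrative assumptions, not a recipe.

```python
# A minimal sketch of feature extraction with PCA.
# scikit-learn's bundled 8x8 digit images stand in for face photos.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, y = load_digits(return_X_y=True)  # each image = 64 pixel-intensity features
print(X.shape)                       # (1797, 64)

# Compress the 64 pixel intensities into 10 new combined features;
# each component is a weighted mix of all the original pixels.
pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)               # (1797, 10)

# Fraction of the original variation the 10 components retain.
print(pca.explained_variance_ratio_.sum())
```

Each row of `X_reduced` describes the same image with 10 numbers instead of 64, which is exactly the “shortcut” described above.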

Feature Selection: Instead of creating new features, Feature Selection focuses on choosing a subset of the existing features that contribute most significantly to the problem. This process eliminates irrelevant or redundant features while preserving the important ones.

Imagine you’re creating a health app for smartwatches. These watches can track many health things like heart rate, temperature, sleep, steps, and more.

Here’s how Feature Selection applies in this scenario:

  • The app collects lots of health info, like heart rate, temperature, sleep, and steps. Each piece of info is like a building block.
  • Instead of using all the building blocks, Feature Selection is like picking only the most important ones. It chooses things like heart rate and sleep because they tell us a lot about a person’s health.
  • Now, your app only looks at heart rate and sleep, so it’s simpler to use and understand, which streamlines the analysis process; a short code sketch of this idea follows below.
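Here is a minimal sketch of that idea with scikit-learn’s `SelectKBest`; the health readings are synthetic and the column names are hypothetical.

```python
# A minimal sketch of feature selection for the smartwatch example.
# All data is synthetic and the feature names are hypothetical.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
n = 500
heart_rate  = rng.normal(70, 10, n)     # beats per minute
temperature = rng.normal(36.6, 0.4, n)  # degrees Celsius
sleep_hours = rng.normal(7, 1.5, n)
steps       = rng.normal(8000, 2500, n)

# Build a toy label that depends only on heart rate and sleep,
# so those two columns carry all the signal.
healthy = ((heart_rate < 75) & (sleep_hours > 6.5)).astype(int)

X = np.column_stack([heart_rate, temperature, sleep_hours, steps])
names = ["heart_rate", "temperature", "sleep_hours", "steps"]

# Keep the 2 features with the strongest statistical link to the label.
selector = SelectKBest(score_func=f_classif, k=2)
X_selected = selector.fit_transform(X, healthy)
print([name for name, kept in zip(names, selector.get_support()) if kept])
# expected: ['heart_rate', 'sleep_hours']
```

Note that the selected columns are the unchanged originals; selection only decides which ones to keep.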

Types of Feature Selection Methods:

  1. Filter Methods
  2. Wrapper Methods
  3. Embedded Methods
  4. Greedy Feature Selection Algorithms
  5. Recursive Feature Elimination with Cross-Validation (RFECV) (see the sketch after this list)
  6. Sequential Feature Selection Methods
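As one example, here is a minimal sketch of method 5 above (RFECV) with scikit-learn; the synthetic dataset and logistic-regression estimator are illustrative choices.

```python
# A minimal sketch of Recursive Feature Elimination with
# Cross-Validation (RFECV) on a synthetic dataset where only
# some of the 20 features are informative.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, n_redundant=5,
                           random_state=0)

# Repeatedly drop the weakest feature, cross-validating at each step
# to find the subset size that scores best.
rfecv = RFECV(estimator=LogisticRegression(max_iter=1000), cv=5)
rfecv.fit(X, y)

print("Optimal number of features:", rfecv.n_features_)
print("Kept feature indices:",
      [i for i, kept in enumerate(rfecv.support_) if kept])
```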

Choosing between Feature Selection and Feature Extraction

The choice usually depends on your specific dataset, problem, and goals. The considerations below can help you decide.

Choose Feature Extraction when:

  1. You believe there are meaningful combinations of existing features that can capture essential patterns or relationships in your data. This is useful when the original features on their own do not adequately represent the underlying structure of the data.
  2. You’re dealing with extremely high-dimensional data and reducing dimensionality is a top priority. Techniques like Principal Component Analysis (PCA) or Linear Discriminant Analysis (LDA) can compress the data while retaining key information (see the sketch after this list).
  3. You’re comfortable transforming the data into a new feature space that may not have a direct interpretation in the original domain, and your focus is on improved model performance.
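For point 2, a handy detail worth knowing: scikit-learn’s PCA accepts a target fraction of variance instead of a fixed component count, so you can state your compression goal directly. A minimal sketch (the 95% threshold and the digits dataset are illustrative):

```python
# A minimal sketch: let PCA pick the number of components needed
# to retain 95% of the variance in the data.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)

# A float in (0, 1) asks for the smallest number of components whose
# cumulative explained variance reaches that fraction.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X.shape[1], "->", X_reduced.shape[1], "features")
```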

Choose Feature Selection when:

  1. It’s really important to keep your data’s original meaning and not change it. Feature Selection keeps the most relevant features exactly as they are and discards the rest.
  2. You’re short on time or computing power. Feature Selection is like finding the best parts of a book without rewriting the whole story: it’s fast because it doesn’t create new features the way Feature Extraction does.
  3. You suspect that some of your features don’t really matter or merely repeat what other features say. Feature Selection gets rid of them, making your models simpler and better because they focus on what’s genuinely useful.
  4. You need models that are easy to explain and interpret. This is important in fields like healthcare or finance, where people need to know why a decision was made (see the sketch after this list).
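To illustrate point 4, here is a minimal sketch contrasting the two approaches on scikit-learn’s bundled breast-cancer dataset (an illustrative choice): selected features keep their original names, while extracted components are anonymous mixtures.

```python
# A minimal sketch of the interpretability difference.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names

# Selection: the survivors are still real, named measurements.
selector = SelectKBest(f_classif, k=2).fit(X, y)
print("Selected:", list(names[selector.get_support()]))

# Extraction: each PCA component blends all 30 original features,
# so no component maps to a single real-world measurement.
pca = PCA(n_components=2).fit(X)
print("Each component mixes", pca.components_.shape[1], "original features")
```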

In summary, the right technique depends on your unique problem, dataset, and the type of model you’re using. Some methods are better suited to linear models, while others work well with tree-based models or high-dimensional data. It’s often best to try out different techniques and see which one works best for your specific task.
