Explaining Random Forest and Parameter Tuning in R

Manish Saraswat
Feb 7, 2017 · 2 min read

Introduction

Treat “forests” well, not just for the sake of nature, but for solving problems too!

Random Forest is one of the most versatile machine learning algorithms available today. With its built-in ensembling capacity, the task of building a decent generalized model (on any dataset) gets much easier.

However, I’ve seen people use random forest as a black-box model; that is, they don’t understand what’s happening beneath the code. They just code.

In fact, the easiest part of machine learning is coding. If you are new to machine learning, the random forest algorithm should be at your fingertips.

Its ability to solve both regression and classification problems, along with its robustness to correlated features and its variable importance plot, gives us enough of a head start to tackle a variety of problems.

Most often, I’ve seen people get confused between bagging and random forest. Do you know the difference?

In this article, I’ll explain the complete concept of random forest and bagging. For ease of understanding, I’ve kept the explanation simple yet enriching.

I’ve used the mlr and data.table packages to implement bagging and random forest with parameter tuning in R. You’ll also learn the techniques I used to improve model accuracy from ~82% to ~86%.
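
To give a flavor of that workflow, here is a minimal sketch of random forest parameter tuning with mlr. The data frame train and the column name "target" are hypothetical placeholders, not names from the original tutorial:

    # Minimal mlr tuning sketch. `train` and "target" are hypothetical
    # placeholders; adapt them to your own dataset.
    library(mlr)

    task <- makeClassifTask(data = as.data.frame(train), target = "target")
    learner <- makeLearner("classif.randomForest", predict.type = "response")

    # Search over mtry and nodesize with random search and 5-fold CV
    params <- makeParamSet(
      makeIntegerParam("mtry", lower = 2, upper = 10),
      makeIntegerParam("nodesize", lower = 10, upper = 50)
    )
    ctrl  <- makeTuneControlRandom(maxit = 50L)
    rdesc <- makeResampleDesc("CV", iters = 5L)

    tuned <- tuneParams(learner, task = task, resampling = rdesc,
                        par.set = params, control = ctrl,
                        measures = list(acc))
    tuned$x  # best hyperparameter combination found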

Table of Contents

  1. What is the Random Forest algorithm?
  2. How does it work? (Decision Tree, Random Forest)
  3. What is the difference between Bagging and Random Forest?
  4. Advantages and Disadvantages of Random Forest
  5. Solving a Problem
  6. Parameter Tuning in Random Forest

What is the Random Forest algorithm?

Random forest is a tree-based algorithm that builds several decision trees and then combines their outputs to improve the generalization ability of the model.

This method of combining trees is known as an ensemble method. Ensembling is nothing but combining weak learners (individual trees) to produce a strong learner.
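
As an illustration (not code from the tutorial itself), here is a minimal sketch with the randomForest package on the built-in iris data, showing how individual trees vote and how the forest aggregates those votes:

    # Each of the 500 trees is a weak learner; the forest is their ensemble
    library(randomForest)

    set.seed(42)
    fit <- randomForest(Species ~ ., data = iris, ntree = 500)

    # predict.all = TRUE exposes the per-tree votes alongside the
    # aggregated (majority-vote) prediction
    pred <- predict(fit, iris[1:5, ], predict.all = TRUE)
    pred$individual[, 1:3]  # votes of the first three trees
    pred$aggregate          # majority vote across all 500 trees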

Say you want to watch a movie, but you are uncertain of its reviews. You ask 10 people who have watched it, and 8 of them say, “The movie is fantastic.” Since the majority is in favor, you decide to watch it. This is how we use ensemble techniques in our daily life too.
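
The same idea takes only a couple of lines of base R (a toy illustration, not real review data):

    # Ten hypothetical reviewers; the majority vote decides
    votes <- c(rep("fantastic", 8), rep("boring", 2))
    table(votes)                    # boring: 2, fantastic: 8
    names(which.max(table(votes)))  # "fantastic" -> watch the movie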

Random Forest can be used to solve regression and classification problems. In regression problems, the dependent variable is continuous. In classification problems, the dependent variable is categorical.
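
With the randomForest package, for instance, the type of problem is inferred from the response variable: a factor target gives classification, a numeric target gives regression (shown here on built-in datasets purely as an illustration):

    library(randomForest)

    clf <- randomForest(Species ~ ., data = iris)  # factor target -> classification
    reg <- randomForest(mpg ~ ., data = mtcars)    # numeric target -> regression

    clf$type  # "classification"
    reg$type  # "regression"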

Trivia: The random forest algorithm was developed by Leo Breiman and Adele Cutler in 2001.

Complete Tutorial Here: Random Forest Algorithm
