Decision Tree: End-to-End Implementation

Vijay Choubey
Published in Analytics Vidhya · 7 min read · Oct 27, 2020

Well, I'm super excited to start with this concept in this blog. I will walk through an end-to-end understanding of the Decision Tree, so let's first understand it and then implement it using Python.

What is Decision Tree ?

  • A Decision Tree is a supervised learning technique that can be used for both classification and regression problems, but it is mostly preferred for solving classification problems. It is a tree-structured classifier in which internal nodes represent the features of a dataset, branches represent the decision rules, and each leaf node represents the outcome.
  • In a Decision Tree there are two types of nodes: the Decision Node and the Leaf Node. Decision nodes are used to make a decision and have multiple branches, whereas leaf nodes are the outputs of those decisions and do not contain any further branches.
  • The decisions or tests are performed on the basis of the features of the given dataset.

It is a graphical representation for getting all the possible solutions to a problem/decision based on given conditions.

  • It is called a decision tree because, similar to a tree, it starts with the root node, which expands on further branches and constructs a tree-like structure.
  • In order to build a tree, we use the CART algorithm, which stands for Classification and Regression Tree algorithm.
  • A decision tree simply asks a question and, based on the answer (Yes/No), splits further into sub-trees.
  • The general structure of a decision tree is illustrated by the simple sketch after the note below.

Note: A decision tree can contain categorical data (YES/NO) as well as numeric data.
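To make the idea of yes/no questions concrete, here is a minimal sketch of a tiny tree written out as nested if/else checks. The feature names ("age", "smoker") and the threshold are invented purely for illustration; they are not taken from any real dataset.

# Hypothetical tiny decision tree written as nested if/else checks.
# "age" is a numeric feature and "smoker" is a categorical (yes/no) feature;
# the names and the threshold are made up for illustration only.
def toy_tree_predict(age, smoker):
    if age <= 40:                  # root node: split on a numeric feature
        if smoker == "yes":        # decision node: split on a categorical feature
            return "high risk"     # leaf node
        return "low risk"          # leaf node
    return "high risk"             # leaf node

print(toy_tree_predict(age=35, smoker="no"))   # -> low risk

Each question corresponds to an internal node, each answer to a branch, and each return value to a leaf.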

Why use Decision Trees?

There are various algorithms in Machine learning, so choosing the best algorithm for the given dataset and problem is the main point to remember while creating a machine learning model. Below are the two reasons for using the Decision tree:

  • Decision Trees usually mimic the human thinking process while making a decision, so they are easy to understand.
  • The logic behind the decision tree can be easily understood because it shows a tree-like structure.

Decision Tree Terminologies

  • Root Node: The root node is where the decision tree starts. It represents the entire dataset, which is then divided into two or more homogeneous sets.
  • Leaf Node: Leaf nodes are the final output nodes; the tree cannot be split any further once a leaf node is reached.
  • Splitting: Splitting is the process of dividing a decision node/root node into sub-nodes according to the given conditions.
  • Branch/Sub-Tree: A sub-tree formed by splitting a node of the tree.
  • Pruning: Pruning is the process of removing unwanted branches from the tree.
  • Parent/Child node: A node that is split into sub-nodes is the parent node of those sub-nodes, and the sub-nodes are its child nodes.
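The terminology above maps directly onto a simple node structure. The sketch below is an illustrative data structure only (it is not how scikit-learn stores trees internally): a decision node holds a splitting rule and its child branches, while a leaf node holds only an outcome.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Node:
    """One node of a decision tree (illustrative structure only)."""
    feature: Optional[str] = None        # attribute tested at a decision node
    children: Dict[str, "Node"] = field(default_factory=dict)  # branches / sub-trees
    prediction: Optional[str] = None     # set only for leaf nodes

    def is_leaf(self) -> bool:
        return self.prediction is not None

# The root node is simply the topmost Node; every Node inside `children`
# is a child node, and the Node that contains it is its parent.
root = Node(feature="salary",
            children={"high": Node(prediction="consider the offer"),
                      "low": Node(prediction="decline the offer")})
print(root.is_leaf(), root.children["low"].prediction)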

How does the Decision Tree algorithm Work?

In a decision tree, to predict the class of a given record, the algorithm starts from the root node of the tree. It compares the value of the root attribute with the corresponding attribute of the record (from the real dataset) and, based on the comparison, follows a branch and jumps to the next node.

At the next node, the algorithm again compares the record's attribute value against the node's sub-nodes and moves further down. It continues this process until it reaches a leaf node of the tree. The complete process can be better understood with the algorithm below:

  • Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
  • Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).
  • Step-3: Divide S into subsets that contain the possible values of the best attribute.
  • Step-4: Generate the decision tree node that contains the best attribute.
  • Step-5: Recursively make new decision tree nodes using the subsets of the dataset created in Step-3. Continue this process until a stage is reached where the nodes cannot be classified any further; such a final node is called a leaf node. (A minimal code sketch of these steps follows this list.)
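A minimal sketch of these five steps in Python follows. It is simplified ID3-style code that splits on categorical attributes using information gain as the ASM; it is meant to mirror the steps above, not to be a production implementation.

import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def best_attribute(rows, labels, attributes):
    """Step-2: pick the attribute with the highest information gain (the ASM here)."""
    base = entropy(labels)
    def gain(attr):
        split = {}
        for row, label in zip(rows, labels):
            split.setdefault(row[attr], []).append(label)
        return base - sum(len(part) / len(labels) * entropy(part) for part in split.values())
    return max(attributes, key=gain)

def build_tree(rows, labels, attributes):
    """Steps 1-5: recursively grow the tree until nodes are pure or no attributes remain."""
    if len(set(labels)) == 1 or not attributes:              # leaf node
        return Counter(labels).most_common(1)[0][0]
    attr = best_attribute(rows, labels, attributes)          # Step-2
    tree = {attr: {}}
    for value in set(row[attr] for row in rows):             # Step-3: one subset per value
        subset = [(r, l) for r, l in zip(rows, labels) if r[attr] == value]
        sub_rows, sub_labels = zip(*subset)
        remaining = [a for a in attributes if a != attr]
        tree[attr][value] = build_tree(list(sub_rows), list(sub_labels), remaining)  # Step-5
    return tree

Calling build_tree(rows, labels, attributes=list(rows[0].keys())) on a list of feature dictionaries and their labels returns the tree as a nested dictionary of splits.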

Example: Suppose a candidate has a job offer and wants to decide whether to accept it or not. To solve this problem, the decision tree starts with the root node (the Salary attribute, chosen by the ASM). The root node splits further into the next decision node (distance from the office) and one leaf node, based on the corresponding labels. The next decision node in turn splits into another decision node (Cab facility) and one leaf node. Finally, that decision node splits into two leaf nodes (Accepted offer and Declined offer); the sketch below walks through this example.
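Written as code, the job-offer example is just a chain of such checks. The concrete salary and distance cut-offs below are invented so the sketch can run; the example only fixes the attributes (Salary, distance from the office, Cab facility) and the two outcomes.

# Hypothetical walk through the job-offer tree; the numeric cut-offs are made up.
def offer_decision(salary, distance_km, cab_facility):
    if salary < 50000:               # root node: Salary (chosen by the ASM)
        return "Declined offer"      # leaf node
    if distance_km > 30:             # decision node: distance from the office
        if not cab_facility:         # decision node: Cab facility
            return "Declined offer"  # leaf node
    return "Accepted offer"          # leaf node

print(offer_decision(salary=60000, distance_km=35, cab_facility=True))   # Accepted offer
print(offer_decision(salary=60000, distance_km=35, cab_facility=False))  # Declined offer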

Attribute Selection Measures

While implementing a decision tree, the main issue that arises is how to select the best attribute for the root node and for the sub-nodes. To solve this problem there is a technique called the Attribute Selection Measure, or ASM. Using this measure, we can easily select the best attribute for the nodes of the tree. Two popular ASM techniques are:

  • Information Gain
  • Gini Index

1. Information Gain:

  • Information gain is the measure of the change in entropy after a dataset is split on an attribute.
  • It calculates how much information a feature provides about the class.
  • According to the value of information gain, we split the node and build the decision tree.
  • A decision tree algorithm always tries to maximize the value of information gain, and the node/attribute with the highest information gain is split first. It can be calculated with the formula below:

Information Gain = Entropy(S) - ∑ (|Sv| / |S|) * Entropy(Sv)

where the sum runs over the values v of the attribute and Sv is the subset of S for which the attribute takes the value v.

Entropy: Entropy is a metric that measures the impurity of a given set of samples; it specifies the randomness in the data. For a binary (yes/no) target it can be calculated as:

Entropy(S) = -P(yes) log2 P(yes) - P(no) log2 P(no)

Where,

  • S = the set of samples
  • P(yes) = the probability of yes
  • P(no) = the probability of no
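As a quick worked example with made-up counts, suppose a node holds 14 samples, 9 labelled yes and 5 labelled no; its entropy is then about 0.94, close to the maximum of 1.0 reached by a 50/50 split.

import math

p_yes, p_no = 9 / 14, 5 / 14   # hypothetical class proportions at a node
entropy = -p_yes * math.log2(p_yes) - p_no * math.log2(p_no)
print(round(entropy, 3))        # 0.94 -> a fairly impure node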

2. Gini Index:

  • The Gini index is a measure of impurity or purity used while creating a decision tree in the CART (Classification and Regression Tree) algorithm.
  • An attribute with a low Gini index should be preferred over one with a high Gini index.
  • The CART algorithm uses the Gini index to create splits, and it only creates binary splits.
  • The Gini index can be calculated with the formula below:

Gini Index = 1 - ∑j (Pj)²

where Pj is the proportion of samples belonging to class j at the node.
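A small helper that evaluates this formula for the labels at a node might look like the sketch below (illustrative only, not scikit-learn's internal code):

def gini_index(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    total = len(labels)
    return 1 - sum((labels.count(c) / total) ** 2 for c in set(labels))

print(gini_index(["yes"] * 9 + ["no"] * 5))   # ~0.459: an impure node
print(gini_index(["yes"] * 10))               # 0.0: a pure node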

Pruning: Getting an Optimal Decision tree

Pruning is a process of deleting the unnecessary nodes from a tree in order to get the optimal decision tree.

A tree that is too large increases the risk of overfitting, while a small tree may not capture all the important features of the dataset. Pruning is therefore a technique that decreases the size of the learned tree without reducing accuracy. There are mainly two tree pruning techniques in use (a scikit-learn sketch follows this list):

  • Cost Complexity Pruning
  • Reduced Error Pruning.
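In scikit-learn, cost complexity pruning is exposed through the ccp_alpha parameter of DecisionTreeClassifier (larger values prune more aggressively), and cost_complexity_pruning_path can enumerate candidate alpha values. A minimal sketch using a built-in toy dataset rather than the article's zoo data:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unpruned tree versus a cost-complexity-pruned tree.
full_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
pruned_tree = DecisionTreeClassifier(ccp_alpha=0.02, random_state=0).fit(X_train, y_train)

# The pruned tree is typically shallower and often generalizes at least as well.
print("depths:", full_tree.get_depth(), pruned_tree.get_depth())
print("test accuracy:", full_tree.score(X_test, y_test), pruned_tree.score(X_test, y_test))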

Advantages of the Decision Tree

  • Clear visualization: The algorithm is simple to understand, interpret and visualize, since the idea mirrors how we make decisions in daily life. The output of a Decision Tree can be easily interpreted by humans.
  • Simple and easy to understand: A Decision Tree looks like a set of simple if-else statements, which are very easy to follow.
  • A Decision Tree can be used for both classification and regression problems.
  • A Decision Tree can handle both continuous and categorical variables.
  • No feature scaling required: No feature scaling (standardization or normalization) is needed for a Decision Tree, as it uses a rule-based approach instead of distance calculations.
  • Handles nonlinearity efficiently: Nonlinear relationships between the parameters do not affect the performance of a Decision Tree, unlike curve-based algorithms. So, if there is high nonlinearity among the independent variables, Decision Trees may outperform other curve-based algorithms.
  • Some Decision Tree implementations can handle missing values automatically.
  • Decision Trees are usually robust to outliers and can handle them automatically.
  • Shorter training period: Training takes less time than for a Random Forest, because only a single tree is built instead of a forest of trees.

Disadvantages of the Decision Tree

  • Overfitting: This is the main problem of the Decision Tree. It generally leads to overfitting of the data, which ultimately leads to wrong predictions. In order to fit the data (even noisy data), it keeps generating new nodes, and ultimately the tree becomes too complex to interpret. In this way it loses its generalization capability: it performs very well on the training data but starts making many mistakes on unseen data. (A sketch of limiting tree growth to curb this follows the list.)
  • High variance: As mentioned in the previous point, a Decision Tree generally overfits the data. Because of this overfitting, there is a very high chance of high variance in the output, which leads to many errors in the final estimate and high inaccuracy in the results. Driving the bias to zero (overfitting) comes at the cost of high variance.
  • Unstable: Adding a new data point can lead to regeneration of the whole tree, so all nodes need to be recalculated and recreated.
  • Affected by noise: Even a little noise can make the tree unstable and lead to wrong predictions.
  • Not suitable for large datasets: If the dataset is large, a single tree may grow complex and overfit. In that case we should use a Random Forest instead of a single Decision Tree.
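Besides pruning after the fact, the usual way to curb the overfitting and high-variance problems listed above is to limit the tree's growth while it is being trained, for example with the max_depth and min_samples_leaf parameters of scikit-learn's DecisionTreeClassifier. An illustrative sketch on a built-in toy dataset (the exact accuracies will vary):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A fully grown tree usually fits the training data (almost) perfectly: low bias, high variance.
deep = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Restricting depth and leaf size trades a little bias for lower variance.
shallow = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5,
                                 random_state=0).fit(X_train, y_train)

print("train accuracy:", deep.score(X_train, y_train), shallow.score(X_train, y_train))
print("test accuracy: ", deep.score(X_test, y_test), shallow.score(X_test, y_test))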

Python Implementation of Decision Tree

Using SKLearn

# Import pandas and the DecisionTreeClassifier
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Import the dataset
dataset = pd.read_csv('zoo.csv',
                      names=['animal_name', 'hair', 'feathers', 'eggs', 'milk',
                             'airbone', 'aquatic', 'predator', 'toothed', 'backbone',
                             'breathes', 'venomous', 'fins', 'legs', 'tail',
                             'domestic', 'catsize', 'class'])

# We drop the animal names since this is not a good feature to split the data on
dataset = dataset.drop('animal_name', axis=1)

# Split into training (first 80 rows) and test (remaining rows) features and targets
train_features = dataset.iloc[:80, :-1]
test_features = dataset.iloc[80:, :-1]
train_targets = dataset.iloc[:80, -1]
test_targets = dataset.iloc[80:, -1]

# Train a decision tree that uses information gain (entropy) as its splitting criterion
tree = DecisionTreeClassifier(criterion='entropy').fit(train_features, train_targets)

# Predict on the held-out rows and report the accuracy
prediction = tree.predict(test_features)
print("The prediction accuracy is: ", tree.score(test_features, test_targets) * 100, "%")

The output that I got is:

The prediction accuracy is: 80.95238095238095 %.
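Once the classifier is trained, scikit-learn can also print the learned rules, which is a handy way to inspect the splits. Assuming the tree and train_features objects from the snippet above are still in scope:

from sklearn.tree import export_text

# Print the learned tree as nested if/else rules, using the column names as feature names.
print(export_text(tree, feature_names=list(train_features.columns)))

sklearn.tree.plot_tree can draw the same structure as a figure if a graphical view is preferred.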

Conclusion:

Thanks for reading! I am going to be writing more Machine Learning articles in the future, so follow me on Medium to be informed about them. I am also a freelancer; if you have freelancing work on data-related projects, feel free to reach out over LinkedIn. Nothing beats working on real projects!

Clap if you liked the article!

Vijay Choubey
Data Scientist @ Accenture AI || Medium Blogger || NLP Enthusiast || Freelancer
LinkedIn: https://www.linkedin.com/in/vijay-choubey-3bb471148/