How Do Tree-Based Algorithms Work?
The four main steps of tree-based algorithms are discussed below:
- Feature Splitting: Tree-based algorithms begin by selecting the most informative feature on which to split the dataset, based on a specific criterion such as Gini impurity or information gain.
- Recursive Splitting: The selected feature is used to split the data in two, and the process is repeated for each resulting subset, forming a hierarchical binary tree structure. This recursive splitting continues until a predefined stopping criterion is met, such as a maximum depth or a minimum number of training samples per node (a minimal sketch of this process follows the list).
- Leaf Node Prediction: As the tree grows, each terminal node (leaf) is assigned a predicted outcome based on the majority class of its samples (for classification) or their average value (for regression). This enables the tree to capture complex decision boundaries and relationships in the data.
- Ensemble Learning: Ensemble methods such as Random Forests and Gradient Boosting Machines train multiple trees (independently in Random Forests, sequentially in boosting) and combine their predictions to obtain the final result. This group approach helps to reduce overfitting, increase generalization, and improve overall model performance by combining the strengths of individual trees and compensating for their weaknesses.
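To make these steps concrete, here is a minimal sketch of recursive splitting on Gini impurity in plain Python with NumPy. This is not a library implementation; the helper names gini, best_split, and build_tree are illustrative only.
Python3
import numpy as np

def gini(y):
    # Gini impurity of a label array: 1 - sum(p_k^2)
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    # Try every feature/threshold pair and keep the one with the
    # lowest weighted Gini impurity of the two child subsets.
    best, best_score = None, float("inf")
    n_samples, n_features = X.shape
    for j in range(n_features):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            right = ~left
            if left.sum() == 0 or right.sum() == 0:
                continue
            score = (left.sum() * gini(y[left]) +
                     right.sum() * gini(y[right])) / n_samples
            if score < best_score:
                best, best_score = (j, t), score
    return best

def build_tree(X, y, depth=0, max_depth=3):
    # Stop when the node is pure, the depth limit is reached, or no
    # valid split exists; the leaf then predicts the majority class.
    if depth == max_depth or len(np.unique(y)) == 1:
        values, counts = np.unique(y, return_counts=True)
        return {"leaf": values[np.argmax(counts)]}
    split = best_split(X, y)
    if split is None:
        values, counts = np.unique(y, return_counts=True)
        return {"leaf": values[np.argmax(counts)]}
    j, t = split
    mask = X[:, j] <= t
    return {
        "feature": j,
        "threshold": t,
        "left": build_tree(X[mask], y[mask], depth + 1, max_depth),
        "right": build_tree(X[~mask], y[~mask], depth + 1, max_depth),
    }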
Splitting Process
Gini Impurity
Gini impurity is a measure of the lack of homogeneity in a dataset: it is the probability of misclassifying an instance chosen uniformly at random if it were labeled at random according to the node's class distribution. The splitting process evaluates candidate splits for each feature and selects the split that minimizes the weighted sum of impurities in the resulting subsets, aiming to create nodes with predominantly homogeneous class distributions.
Python3
import graphviz
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_graphviz

# Load the Breast Cancer dataset
data = load_breast_cancer()
X, y = data.data, data.target

# Create a decision tree classifier that splits on Gini impurity
clf = DecisionTreeClassifier(criterion='gini', random_state=42)

# Fit the classifier on the dataset
clf.fit(X, y)

# Export the fitted tree in Graphviz DOT format
dot_data = export_graphviz(clf, out_file=None, feature_names=data.feature_names)

# Create a graph object and render it to PDF
graph = graphviz.Source(dot_data)
graph.render("decision_tree")
Output:
decision_tree.pdf
The rendered decision tree image will be saved as decision_tree.pdf.
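To connect the rendered tree back to the Gini formula, the fitted classifier can be inspected through scikit-learn's tree_ attribute. The sketch below assumes clf, data, and y from the example above:
Python3
import numpy as np

# Inspect the fitted tree: scikit-learn exposes per-node
# impurity values through the tree_ attribute.
tree = clf.tree_
print("Root Gini impurity:", tree.impurity[0])
print("Root split feature:", data.feature_names[tree.feature[0]])
print("Root split threshold:", tree.threshold[0])

# Verify the root impurity by hand: Gini = 1 - sum(p_k^2),
# using the class counts of the full training set.
counts = np.bincount(y)
p = counts / counts.sum()
print("Gini from class counts:", 1.0 - np.sum(p ** 2))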
Entropy
Entropy is a measure of information uncertainty in a dataset. In the context of decision trees, it quantifies the impurity or disorder within a node. The splitting process involves assessing candidate splits based on the reduction in entropy they induce. The algorithm selects the split that maximizes the information gain, representing the reduction in uncertainty achieved by the split. This results in nodes with more ordered and homogeneous class distributions, contributing to the overall predictive power of the tree.
Python3
import graphviz
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_graphviz

# Load the Breast Cancer dataset
data = load_breast_cancer()
X, y = data.data, data.target

# Create a decision tree classifier that splits on entropy
clf = DecisionTreeClassifier(criterion='entropy', random_state=42)

# Fit the classifier on the dataset
clf.fit(X, y)

# Export the fitted tree in Graphviz DOT format
dot_data = export_graphviz(clf, out_file=None, feature_names=data.feature_names)

# Create a graph object and render it to PDF
graph = graphviz.Source(dot_data)
graph.render("decision_tree2")
Output:
decision_tree2.pdf
The rendered decision tree image will be saved as decision_tree2.pdf.
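As a sanity check, the root entropy reported by the fitted tree above can be reproduced by hand. This sketch assumes clf and y from the example above; scikit-learn computes entropy in bits, i.e. with log base 2.
Python3
import numpy as np

# Reproduce the root entropy by hand: H = -sum(p_k * log2(p_k)),
# using the class counts of the full training set.
counts = np.bincount(y)
p = counts / counts.sum()
print("Entropy from class counts:", -np.sum(p * np.log2(p)))

# scikit-learn stores the same per-node quantity in
# tree_.impurity when criterion='entropy'.
print("Root entropy from the fitted tree:", clf.tree_.impurity[0])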
Information Gain
Information gain is a concept derived from entropy, measuring the reduction in uncertainty about the outcome variable achieved by splitting a dataset based on a particular feature. In tree-based algorithms, the splitting process involves selecting the feature and split point that maximize information gain. High information gain implies that the split effectively organizes and separates instances, resulting in more homogeneous subsets with respect to the target variable. The goal is to iteratively choose splits that collectively lead to a tree structure capable of making accurate predictions on unseen data.
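Since this definition is easy to compute directly, here is a small illustrative sketch of information gain for a candidate split, reusing X and y from the earlier examples. The helper names entropy and information_gain are assumptions for this sketch, not library functions.
Python3
import numpy as np

def entropy(labels):
    # Shannon entropy (in bits) of an array of class labels.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(x_col, y, threshold):
    # Information gain of splitting one feature column at a threshold:
    # parent entropy minus the weighted average entropy of the children.
    left = x_col <= threshold
    right = ~left
    if left.sum() == 0 or right.sum() == 0:
        return 0.0
    child = (left.sum() * entropy(y[left]) +
             right.sum() * entropy(y[right])) / len(y)
    return entropy(y) - child

# Example: information gain of splitting the first feature at its median.
col = X[:, 0]
print(information_gain(col, y, np.median(col)))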
Tree-Based Machine Learning Algorithms
Tree-based algorithms are a fundamental component of machine learning, offering intuitive decision-making processes akin to human reasoning. These algorithms construct decision trees, where each branch represents a decision based on features, ultimately leading to a prediction or classification. By recursively partitioning the feature space, tree-based algorithms provide transparent and interpretable models, making them widely used in various applications. In this article, we will learn the fundamentals of tree-based algorithms.