ADASYN: Adaptive Synthetic Sampling Approach
ADASYN, an extension of the SMOTE technique, is also used to handle imbalanced datasets. ADASYN adapts to the local distribution of the minority class: it identifies minority instances that are hard to learn, i.e., those surrounded mostly by majority-class neighbors, and generates more synthetic samples for them, while instances in dense minority regions receive fewer. This approach is particularly useful when the degree of class imbalance varies across the feature space.
Working Procedure of ADASYN
- Class imbalance ratio: The first step in ADASYN is to measure the degree of imbalance, computed as the number of minority class samples divided by the number of majority class samples. The gap between the two class counts determines the total number of synthetic samples to generate.
- Finding the density distribution: For every minority instance, ADASYN finds its k-nearest neighbors using a distance metric such as Euclidean or Manhattan distance, then computes the fraction of those neighbors that belong to the majority class. A high fraction means the instance lies in a majority-dominated region that is hard to learn; a low fraction means it lies in a dense minority region.
- Sample generation ratio: The majority-neighbor fractions are normalized into a distribution that determines how many synthetic samples to generate for each minority instance. Instances with more majority-class neighbors receive proportionally more synthetic samples.
- Generating synthetic samples: New samples are created by interpolating between each minority instance and one of its randomly chosen minority-class nearest neighbors.
- Balanced dataset creation: The synthetic samples are combined with the original data, increasing the frequency of the minority class. This balances the dataset and helps the model learn the minority class more accurately.
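The steps above can be sketched in code. The helper below is an illustrative sketch, not part of the imbalanced-learn API (its name and structure are my own): it computes, for each minority instance, the fraction of majority-class points among its k nearest neighbors, normalizes those fractions into a distribution, and converts them into per-instance synthetic-sample counts.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def adasyn_generation_counts(X, y, minority_label=1, k=5, beta=1.0):
    # Illustrative helper (name is an assumption, not a library function):
    # returns how many synthetic samples ADASYN would assign per minority point.
    X_min = X[y == minority_label]
    m_min = len(X_min)
    m_maj = len(X) - m_min
    G = int((m_maj - m_min) * beta)          # total synthetic samples needed
    # k + 1 neighbors: each minority point is its own nearest neighbor in X
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X_min)
    # r_i: fraction of majority-class points among the k neighbors of x_i
    r = np.array([(y[row[1:]] != minority_label).mean() for row in idx])
    if r.sum() == 0:                         # no hard instances: spread evenly
        r_hat = np.full(m_min, 1.0 / m_min)
    else:
        r_hat = r / r.sum()                  # normalized density distribution
    return np.round(r_hat * G).astype(int)   # per-instance generation counts
```

In a full ADASYN implementation, each count would then drive the interpolation step: for each minority point, that many new samples are drawn along the lines to its minority-class neighbors.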
Python Implementation For ADASYN
from imblearn.over_sampling import ADASYN

# x (features) and y (the 'Outcome' target) are the variables prepared earlier
# Applying ADASYN
adasyn = ADASYN(sampling_strategy='minority')
x_resampled, y_resampled = adasyn.fit_resample(x, y)

# Count outcome values after applying ADASYN
y_resampled.value_counts()
Output:
Outcome
1 500
0 500
Name: count, dtype: int64
SMOTE for Imbalanced Classification with Python
Imbalanced datasets impact the performance of machine learning models, and the Synthetic Minority Over-sampling Technique (SMOTE) addresses the class imbalance problem by generating synthetic samples for the minority class. This article explores how SMOTE works and the extensions that enhance its capability, with Python implementations for SMOTE and each extension, offering a practical guide to tackling imbalanced datasets in Python.
Table of Contents
- Data Imbalance in Classification Problem
- SMOTE: Synthetic Minority Over-Sampling Technique
- Extensions of SMOTE
- ADASYN: Adaptive Synthetic Sampling Approach
- Borderline SMOTE
- SMOTE-ENN (Edited Nearest Neighbors)
- SMOTE-Tomek Links
- SMOTE-NC (Nominal and Continuous)
- SMOTE for Imbalanced Classification: When to Use