Implementation using the Iris Dataset
Let’s consider an example that applies the steps explained above to the famous Iris dataset. Below, we build and train a neural network to classify iris flowers.
Importing Libraries
Python3

# Importing required libraries
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
This code imports the libraries needed for a neural network-based classifier: NumPy for numerical operations, plus scikit-learn functions for splitting data into train and test sets, scaling features, building an MLP (Multi-Layer Perceptron) classifier, loading the Iris dataset, and evaluating the model’s accuracy.
Loading Dataset
Python3

# Loading dataset
iris = load_iris()
X, y = iris.data, iris.target
Using scikit-learn’s load_iris() function, this code loads the Iris dataset, assigning the feature data to X and the target labels to y. The Iris dataset is a popular benchmark for classification problems in machine learning.
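To get a feel for the data, a quick sketch like the following can be used to inspect its shape and class labels:

```python
from sklearn.datasets import load_iris

iris = load_iris()

# 150 samples, each with 4 features (sepal/petal length and width)
print(iris.data.shape)

# The three iris species used as class labels
print(iris.target_names)
```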
Splitting Data into Train and Test Sets
Python3

# Splitting data set into train & test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
Using train_test_split() from scikit-learn, this code divides the loaded dataset (X and y) into training and testing sets. The test_size parameter determines the fraction of data allotted to the test set (20% in this case), while random_state fixes the random seed to ensure reproducibility.
Feature Scaling
Python3
# Creating Object scaler = StandardScaler() # Standardizing the features X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) |
This code creates a StandardScaler object to standardize the feature data. The fit_transform() method computes the mean and standard deviation of each feature on the training set (X_train) and standardizes it. The transform() method then applies the same transformation to the test set (X_test), so both sets are standardized using statistics from the training set only. This step matters because many machine learning algorithms perform best when features are on similar scales.
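To see what standardization does, here is a minimal sketch on a toy feature matrix (the values are arbitrary, chosen to show two features on very different scales):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy feature matrix: two features on very different scales
X_demo = np.array([[1.0, 100.0],
                   [2.0, 200.0],
                   [3.0, 300.0],
                   [4.0, 400.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_demo)

# After standardization, each column has mean ~0 and std ~1
print(X_scaled.mean(axis=0))
print(X_scaled.std(axis=0))
```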
Model Development
Python3

# Creating (MLP) classifier
clf = MLPClassifier(hidden_layer_sizes=(64, 32),
                    max_iter=1000,
                    random_state=42)
This code creates an instance of the Multi-Layer Perceptron (MLP) classifier using scikit-learn’s MLPClassifier class. The hidden_layer_sizes argument specifies the neural network’s architecture: the tuple (64, 32) means two hidden layers with 64 and 32 neurons respectively. The max_iter parameter, set to 1000, is the solver’s maximum number of iterations, and random_state is set to 42 for reproducibility.
Training the model and Prediction
Python3

# Training the model
clf.fit(X_train, y_train)

# Making prediction
y_pred = clf.predict(X_test)
This code uses the fit method to train an MLP classifier (clf) utilizing standardized training data (X_train) and labels (y_train). Then, using the trained model, predictions are made on the test data (X_test), and the predicted labels are saved in the variable y_pred.
Evaluation of the model
Python3

# Determining Accuracy
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")
Accuracy: 0.97
This code uses scikit-learn’s accuracy_score function to compute the accuracy of the MLP classifier’s predictions (y_pred) against the true test labels (y_test). The resulting accuracy, formatted to two decimal places for readability, is then printed to the console.
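For reference, the steps above can be combined into one self-contained script (the exact accuracy printed may vary slightly across scikit-learn versions):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Load data
iris = load_iris()
X, y = iris.data, iris.target

# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Standardize features using training-set statistics only
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Build and train the MLP classifier
clf = MLPClassifier(hidden_layer_sizes=(64, 32),
                    max_iter=1000, random_state=42)
clf.fit(X_train, y_train)

# Predict and evaluate
y_pred = clf.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")
```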
Multi-layer Perceptron: a Supervised Neural Network Model using Sklearn
An artificial neural network (ANN), often known simply as a neural network or neural net, is a machine learning model inspired by the structure and operation of the human brain. It is a key element of deep learning, a branch of machine learning. Neural networks are formed from interconnected nodes, also referred to as artificial neurons or perceptrons, arranged in layers: an input layer, one or more hidden layers, and an output layer. Each neuron computes a weighted sum of its inputs, applies an activation function to the sum, and produces an output. The architecture of the network, including the number of layers and the number of neurons in each layer, can vary significantly depending on the task at hand. Because of this versatility, neural networks can perform many machine learning tasks, such as classification, regression, image recognition, and natural language processing.
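As a concrete illustration of the weighted-sum-plus-activation computation described above, here is a minimal sketch of a single artificial neuron in NumPy (the input, weight, and bias values are arbitrary):

```python
import numpy as np

# Sigmoid activation function: squashes any real number into (0, 1)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # inputs (illustrative values)
w = np.array([0.4, 0.7, -0.2])   # connection weights
b = 0.1                          # bias term

z = np.dot(w, x) + b             # weighted sum of inputs plus bias
output = sigmoid(z)              # neuron's activation
print(output)                    # roughly 0.24
```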
Training a neural network means adjusting the weights of its connections to reduce the discrepancy between predicted and actual outputs, which is done with optimization techniques such as gradient descent. Neural networks, deep neural networks in particular, have demonstrated an exceptional ability to solve complicated problems and have driven significant advances in fields like computer vision, speech recognition, and autonomous driving. Their capacity to automatically learn and extract features from data makes them a key component of modern AI and machine learning.
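The gradient-descent idea can be sketched minimally for a single sigmoid neuron trained on two toy points; this is only an illustration of the weight-update rule, not scikit-learn’s actual solver:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0.0], [1.0]])   # one feature, two toy samples
y = np.array([0.0, 1.0])       # target labels

w, b, lr = 0.0, 0.0, 1.0       # weight, bias, learning rate
for _ in range(500):
    p = sigmoid(X[:, 0] * w + b)         # forward pass: predictions
    grad_w = np.mean((p - y) * X[:, 0])  # gradient of log-loss w.r.t. w
    grad_b = np.mean(p - y)              # gradient of log-loss w.r.t. b
    w -= lr * grad_w                     # step against the gradient
    b -= lr * grad_b

# After training, predictions move toward the targets 0 and 1
print(sigmoid(0.0 * w + b), sigmoid(1.0 * w + b))
```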