4. Plotting the results

Here, we plot the training and validation accuracy and loss curves over the training epochs.

Python3




# Plotting the accuracy and loss over time
 
# Training history
history_dict = history.history
 
# Separating validation and training accuracy
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
 
# Separating validation and training loss
loss = history_dict['loss']
val_loss = history_dict['val_loss']
 
# Plotting
plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.plot(acc)
plt.plot(val_acc)
plt.title('Training and Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(['Accuracy', 'Validation Accuracy'])
 
plt.subplot(1, 2, 2)
plt.plot(loss)
plt.plot(val_loss)
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(['Loss', 'Validation Loss'])
 
plt.show()


Output:

Plot of training and validation accuracy and loss

The code visualizes the training and validation accuracy, as well as the training and validation loss, over the epochs. It extracts the accuracy and loss values from the training history (history_dict) and then uses matplotlib to create a figure with two side-by-side subplots: the left subplot shows the accuracy trends and the right subplot shows the loss trends across epochs.
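If you also want a numeric summary of the same curves, the minimal sketch below (reusing the val_loss list extracted above, with numpy already imported as np from the earlier dependency imports) reports the epoch with the lowest validation loss, which is a quick way to see where the model starts to overfit.

Python3


# Find the epoch with the lowest validation loss
# (assumes the `val_loss` list extracted above is available)
best_epoch = int(np.argmin(val_loss)) + 1  # +1 because epochs are counted from 1
print(f'Lowest validation loss of {min(val_loss):.4f} at epoch {best_epoch}')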

5. Testing the trained model

Now, we will test the trained model on a sample review and check its output.

Python3




# Making predictions
sample_text = (
    '''The movie by w3wiki was so good and the animation are so dope.
    I would recommend my friends to watch it.'''
)
predictions = model.predict(np.array([sample_text]))
print(*predictions[0])
 
# Print the label based on the prediction
if predictions[0] > 0:
    print('The review is positive')
else:
    print('The review is negative')


Output:

1/1 [==============================] - 0s 33ms/step
5.414222
The review is positive

The code makes a prediction on the sample text using the trained model (model.predict(np.array([sample_text]))). The result (predictions[0]) is the model’s raw output score for the review, where a positive value indicates positive sentiment and a negative value indicates negative sentiment. The subsequent conditional statement interprets this score, printing either ‘The review is positive’ or ‘The review is negative’.
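Note that the printed value (5.414222) is a raw score, not a probability. If the model’s final Dense layer has no activation (which the raw value above suggests), the score can be mapped to a probability with a sigmoid. The following is a minimal sketch, assuming TensorFlow is imported as tf as in the earlier dependency imports.

Python3


# Convert the raw score (logit) into a probability between 0 and 1
# Assumes the final Dense layer outputs logits and TensorFlow is imported as tf
probability = tf.sigmoid(predictions[0]).numpy()[0]
print(f'Probability that the review is positive: {probability:.3f}')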
