Plotting the results
Plot the training and validation accuracy and loss curves.
Python3
# Plotting the accuracy and loss over time

# Training history
history_dict = history.history

# Separating validation and training accuracy
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']

# Separating validation and training loss
loss = history_dict['loss']
val_loss = history_dict['val_loss']

# Plotting
plt.figure(figsize=(8, 4))

plt.subplot(1, 2, 1)
plt.plot(acc)
plt.plot(val_acc)
plt.title('Training and Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(['Accuracy', 'Validation Accuracy'])

plt.subplot(1, 2, 2)
plt.plot(loss)
plt.plot(val_loss)
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(['Loss', 'Validation Loss'])

plt.show()
Output:
The code visualizes the training and validation accuracy, as well as the training and validation loss, over the training epochs. It extracts the per-epoch accuracy and loss values from the training history (history_dict), then uses matplotlib to create a side-by-side figure: the left subplot shows the accuracy trends and the right subplot shows the loss trends. A widening gap between the training and validation curves is a common sign of overfitting.
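As a small sanity check on these curves, you can programmatically find the epoch where validation loss is lowest; training much beyond that point tends to overfit. A minimal sketch, using made-up validation-loss values rather than a real training run:

import numpy as np

# Hypothetical per-epoch validation losses, for illustration only
val_loss = [0.60, 0.45, 0.40, 0.42, 0.48]

# Index of the epoch with the lowest validation loss
best_epoch = int(np.argmin(val_loss))
print(best_epoch)  # -> 2 (loss rises again after this epoch)

With a real history object you would pass history.history['val_loss'] instead of the hand-written list.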
5. Testing the trained model
Now, we will test the trained model with a random review and check its output.
Python3
# Making predictions
sample_text = ('''The movie by w3wiki was so good and the animation are so dope. I would recommend my friends to watch it.''')
predictions = model.predict(np.array([sample_text]))
print(*predictions[0])

# Print the label based on the prediction
if predictions[0][0] > 0:
    print('The review is positive')
else:
    print('The review is negative')
Output:
1/1 [==============================] - 0s 33ms/step
5.414222
The review is positive
The code makes a prediction on a sample review using the trained model (model.predict(np.array([sample_text]))). The result (predictions[0]) is the model's raw output score (a logit), where positive values indicate positive sentiment and negative values indicate negative sentiment. The subsequent conditional interprets this score, printing either 'The review is positive' or 'The review is negative'.
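Because the model outputs a raw logit rather than a probability, you can map the score into the range (0, 1) with the sigmoid function if you want a confidence-style value. A minimal sketch, using the logit 5.414222 from the output above:

import math

def sigmoid(x):
    # Maps a raw logit to a value in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

logit = 5.414222          # logit produced for the sample review above
prob = sigmoid(logit)
print(round(prob, 3))     # -> 0.996, i.e. strongly positive

Equivalently, thresholding the logit at 0 is the same as thresholding the sigmoid output at 0.5, which is why the code above compares the raw prediction against 0.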
RNN for Text Classifications in NLP
In this article, we will learn how to use recurrent neural networks (RNNs) for text classification tasks in natural language processing (NLP). We will perform sentiment analysis, one of the most common text classification techniques, on the IMDB movie review dataset, implementing the network from scratch and training it to identify whether a review is positive or negative.
Table of Contents
- RNN for Text Classifications in NLP
- Recurrent Neural Networks (RNNs)
- Implementation of RNN for Text Classifications