LSTM Model For Sentence Autocompletion

We will use a long short-term memory (LSTM) network for our model. An LSTM has an edge over a simple RNN because its gating mechanism (input, forget, and output gates) helps mitigate the vanishing-gradient problem during training. Let us look at the parameters and methods we use in this model one by one.

  • __init__(self, dataset): This is the initialization method of the class. It sets up the architecture and parameters of the LSTM model. The key components used within this method are:
    self.lstm_size: Specifies the size (number of units) of the LSTM hidden state.
    self.embedding_dim: Specifies the dimensionality of the word embeddings.
    self.num_layers: Specifies the number of stacked LSTM layers. Since we don't want our model to be computationally expensive, we use only 3 layers.
    self.embedding: Defines an embedding layer that converts input word indices into dense word embeddings.
    self.lstm: Defines the LSTM layer with the specified input size, hidden size, number of layers, and dropout rate.
    self.fc: Defines a fully connected layer at the end of the network that maps the LSTM output to the vocabulary size to generate logits.
  • forward(self, x, prev_state): This method performs the forward pass of the model. It takes an input tensor x and the previous LSTM state prev_state as input. The steps within this method are:
    self.embedding(x): Passes the input tensor x through the embedding layer to obtain the word embeddings.
    self.lstm(embed, prev_state): Passes the embeddings and the previous state through the LSTM layer to obtain the output and the updated state.
    self.fc(output): Passes the LSTM output through the fully connected layer to produce the logits (the model's predictions). The method returns both the logits and the updated LSTM state.
  • init_state(self, sequence_length): This method initializes the LSTM state with zeros. It takes the length of the input sequence as input and returns a tuple of two zero tensors, the hidden state and the cell state, each with shape (num_layers, sequence_length, lstm_size).

Python3




import torch
from torch import nn
class LSTMModel(nn.Module):
    def __init__(self, dataset):
        super(LSTMModel, self).__init__()
        self.lstm_size = 128
        self.embedding_dim = 128
        self.num_layers = 3
  
        n_vocab = len(dataset.unique_words)
        # embedding layer: maps word indices to dense vectors
        self.embedding = nn.Embedding(
            num_embeddings=n_vocab,
            embedding_dim=self.embedding_dim,
        )
        # stacked LSTM with dropout applied between layers
        self.lstm = nn.LSTM(
            input_size=self.embedding_dim,
            hidden_size=self.lstm_size,
            num_layers=self.num_layers,
            dropout=0.2,
        )
        # fully connected layer mapping the LSTM output to vocabulary logits
        self.fc = nn.Linear(self.lstm_size, n_vocab)
  
    def forward(self, x, prev_state):
        embed = self.embedding(x)
        output, state = self.lstm(embed, prev_state)
        logits = self.fc(output)
  
        return logits, state
  
    def init_state(self, sequence_length):
        # zero-initialised hidden state and cell state, each of shape
        # (num_layers, sequence_length, lstm_size)
        return (
            torch.zeros(self.num_layers,
                        sequence_length, self.lstm_size),
            torch.zeros(self.num_layers,
                        sequence_length, self.lstm_size)
        )
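
To see how these pieces fit together, here is a minimal usage sketch (not part of the original article). ToyDataset is a hypothetical stand-in for the dataset class discussed in this article; the model only needs its unique_words attribute, and the input is a random batch of word indices.

Python3


import torch

# hypothetical stand-in for the dataset class; only unique_words is needed here
class ToyDataset:
    unique_words = ["the", "quick", "brown", "fox", "jumps", "over"]

dataset = ToyDataset()
model = LSTMModel(dataset)

batch_size, sequence_length = 2, 4
# a batch of word-index sequences, shape (batch_size, sequence_length)
x = torch.randint(0, len(dataset.unique_words), (batch_size, sequence_length))

# zero-initialised hidden and cell states
state_h, state_c = model.init_state(sequence_length)

logits, (state_h, state_c) = model(x, (state_h, state_c))
print(logits.shape)  # torch.Size([2, 4, 6]) -> (batch, sequence, vocabulary size)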


Sentence Autocomplete Using PyTorch

Natural Language Processing (NLP) is one of the most flourishing areas of deep learning, and its applications are used constantly in daily life. In this article, we are going to see how we can use NLP to autocomplete half-written sentences using deep learning methods. We will also see how to generate clean data for training our NLP model. We will cover the following steps in this article:

  1. Cleaning the text data for training the NLP model
  2. Loading the dataset using PyTorch
  3. Creating the LSTM model
  4. Training an NLP model
  5. Making inferences from the trained model

We have seen applications like Google's keyboard, which recommends what to type next based on the words we have already written in the draft. However, to recommend the next word, an application like Google's is trained on billions of written sentences. For our model, we will use Wikipedia sentences, which are freely available to download on the internet and can be used to train our model.
