What are Stacked RNNs?
A single-layered RNN model has only one hidden layer that processes the sequential data. A Stacked RNN, by contrast, is a model with multiple RNN layers placed one on top of another, forming a "stack". Each layer of this stack processes the input sequence in turn.
- When an input is passed to Layer 1:
- The input $x_t$ passes through RNN layer 1. There, the hidden state gets updated as:

$$h_t = f(W_x x_t + W_h h_{t-1} + b_h)$$

where:
- $h_t$ = present hidden state
- $h_{t-1}$ = previous hidden state
- $x_t$ = input to the RNN layer
- $W_x$ = weights associated with the input
- $W_h$ = weights associated with the hidden state
- $b_h$ = bias associated with the RNN layer
- $f$ = activation function
- At each time step, the hidden state updates itself using the information it retained from the previous time step.
- The present hidden state is then used to compute the output of the layer, using an appropriate activation function:

$$y_t = g(W_y h_t + b_y)$$

where:
- $W_y$ = weights assigned to the output layer
- $h_t$ = present hidden state
- $b_y$ = bias associated with the output layer
- $g$ = activation function
- For the second layer, the output of the first RNN layer is fed in as input, and the same process repeats.
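The two-layer flow described above can be sketched end to end in plain NumPy. The weights here are random placeholders chosen only for illustration; a trained model would supply its own values:

```python
import numpy as np

rng = np.random.default_rng(42)
n_features, n_hidden = 1, 2  # illustrative sizes

# Random placeholder weights for the two layers
Wx1 = rng.normal(size=(n_features, n_hidden))  # layer 1 input weights
Wh1 = rng.normal(size=(n_hidden, n_hidden))    # layer 1 recurrent weights
b1 = np.zeros(n_hidden)                        # layer 1 bias
Wx2 = rng.normal(size=(n_hidden, n_hidden))    # layer 2 input weights
Wh2 = rng.normal(size=(n_hidden, n_hidden))    # layer 2 recurrent weights
b2 = np.zeros(n_hidden)                        # layer 2 bias

x_seq = np.array([[1.0], [2.0], [3.0]])  # 3 time steps, 1 feature

h1 = np.zeros(n_hidden)
h2 = np.zeros(n_hidden)
for x_t in x_seq:
    # layer 1: h_t = f(Wx x_t + Wh h_{t-1} + b)
    h1 = np.tanh(np.dot(x_t, Wx1) + np.dot(h1, Wh1) + b1)
    # layer 2 consumes layer 1's output as its input
    h2 = np.tanh(np.dot(h1, Wx2) + np.dot(h2, Wh2) + b2)

print(h2)  # final hidden state of the top layer
```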
This layered structure enables Stacked RNNs to capture both short-term and long-term patterns. They can learn and remember information over longer sequences, while at the same time combining the current state's information with what was just learned from the previous state. The more layers you add to your model, the more complex the patterns the stacked network can capture in the sequential data. If your data is nested and contains several kinds of complex patterns, a Stacked RNN is a good choice, because each of its layers can learn a different level of abstraction present in your data.
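To make the idea of varying depth concrete, here is a hypothetical helper (a sketch, not from the original article) that runs a sequence through an arbitrary number of tanh RNN layers in plain NumPy. The function name, layer sizes, and random weights are all assumptions for illustration:

```python
import numpy as np

def stacked_rnn_forward(x_seq, layer_weights):
    """Run a sequence through a stack of tanh RNN layers.

    layer_weights: list of (Wx, Wh, b) tuples, one per layer.
    Returns the final hidden state of the top layer.
    """
    seq = x_seq
    for Wx, Wh, b in layer_weights:
        h = np.zeros(Wh.shape[0])
        outputs = []
        for x_t in seq:
            h = np.tanh(np.dot(x_t, Wx) + np.dot(h, Wh) + b)
            outputs.append(h)
        seq = outputs  # this layer's outputs feed the next layer
    return seq[-1]

rng = np.random.default_rng(7)
sizes = [1, 4, 3, 2]  # input features, then hidden size per layer
weights = [(rng.normal(size=(sizes[i], sizes[i + 1])),   # Wx
            rng.normal(size=(sizes[i + 1], sizes[i + 1])),  # Wh
            np.zeros(sizes[i + 1]))                      # b
           for i in range(len(sizes) - 1)]

x_seq = np.array([[0.5], [1.0], [1.5]])
top_h = stacked_rnn_forward(x_seq, weights)
print(top_h)  # hidden state of the deepest layer
```

Adding another `(Wx, Wh, b)` tuple deepens the stack without changing the forward-pass logic.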
Understanding Stacked RNN with code implementation
First, we will implement the Stacked RNN in code and output the predicted value using the model's predict method.
Python3
```python
import numpy as np
from keras.models import Model
from keras.layers import Input, SimpleRNN, Dense

# Define a simple stacked RNN model
hidden_units = 2
input_shape = (3, 1)

# Define the input layer
input_layer = Input(shape=input_shape)

# First RNN layer (returns the full sequence so the next RNN layer can consume it)
rnn_layer1 = SimpleRNN(hidden_units, return_sequences=True)(input_layer)

# Second RNN layer
rnn_layer2 = SimpleRNN(hidden_units)(rnn_layer1)

# Dense output layer
output_layer = Dense(1, activation='sigmoid')(rnn_layer2)

# Create the model
model = Model(inputs=input_layer, outputs=output_layer)

# Compile the model (change optimizer and loss function as needed)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()

# Define the input data
x = np.array([1, 2, 3])

# Reshape the input to (sample_size, time_steps, features)
x_input = np.reshape(x, (1, 3, 1))

# Make a prediction using the model
y_pred_model = model.predict(x_input)
print("Prediction from the neural network:\n", y_pred_model)
```
Output:
1/1 [==============================] - 0s 344ms/step
Prediction from the neural network:
[[0.8465775]]
Now we will explore the mathematics behind the working of the stacked RNN.
After the initial model is trained, we will get the weights associated with each layer in the architecture.
Python3
```python
# Get the weights from the trained model
wx = model.get_weights()[0]   # layer 1: input weight matrix
wh = model.get_weights()[1]   # layer 1: recurrent (hidden state) weight matrix
bh = model.get_weights()[2]   # layer 1: bias
wx1 = model.get_weights()[3]  # layer 2: input weight matrix (input comes from layer 1)
wh1 = model.get_weights()[4]  # layer 2: recurrent (hidden state) weight matrix
bh1 = model.get_weights()[5]  # layer 2: bias
wy = model.get_weights()[6]   # output layer: weight matrix
by = model.get_weights()[7]   # output layer: bias
```
As per the equations above, the present hidden state is obtained by applying the activation function to the weighted input, plus the weighted previous hidden state, plus the bias term: $h_t = f(W_x x_t + W_h h_{t-1} + b_h)$.
Python3
```python
# Initialize the layer-1 hidden state
m = 2
h = np.zeros((m,))

# Compute the hidden states of layer 1 manually
rnn_layer1_outputs = []
for t in range(x_input.shape[1]):
    h = np.tanh(np.dot(x_input[0, t], wx) + np.dot(h, wh) + bh)
    rnn_layer1_outputs.append(h)
rnn_layer1_outputs = np.array(rnn_layer1_outputs)

# Compute the hidden states of layer 2
h30 = np.zeros(2)
h31 = np.tanh(np.dot(rnn_layer1_outputs[0], wx1) + np.dot(h30, wh1) + bh1)
h32 = np.tanh(np.dot(rnn_layer1_outputs[1], wx1) + np.dot(h31, wh1) + bh1)
h33 = np.tanh(np.dot(rnn_layer1_outputs[2], wx1) + np.dot(h32, wh1) + bh1)

# Apply the sigmoid output layer manually
outputs = 1 / (1 + np.exp(-(np.dot(h33, wy) + by)))
outputs = np.array(outputs)
print("Prediction from manual computation:\n", outputs)
```
Output:
Prediction from manual computation:
[0.84657752]
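As a sanity check of the unrolled computation above, the loop form and the explicitly unrolled form of the layer-2 recurrence can be compared. The weights below are random stand-ins rather than the trained model's values, which is enough to show that the two forms compute the same thing:

```python
import numpy as np

rng = np.random.default_rng(3)
wx1 = rng.normal(size=(2, 2))          # stand-in layer 2 input weights
wh1 = rng.normal(size=(2, 2))          # stand-in layer 2 recurrent weights
bh1 = np.zeros(2)                      # stand-in layer 2 bias
layer1_out = rng.normal(size=(3, 2))   # stand-in for rnn_layer1_outputs

# Loop form of the recurrence
h = np.zeros(2)
for t in range(3):
    h = np.tanh(np.dot(layer1_out[t], wx1) + np.dot(h, wh1) + bh1)

# Explicitly unrolled form, one line per time step
h30 = np.zeros(2)
h31 = np.tanh(np.dot(layer1_out[0], wx1) + np.dot(h30, wh1) + bh1)
h32 = np.tanh(np.dot(layer1_out[1], wx1) + np.dot(h31, wh1) + bh1)
h33 = np.tanh(np.dot(layer1_out[2], wx1) + np.dot(h32, wh1) + bh1)

print(np.allclose(h, h33))  # True: both forms agree
```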
Stacked RNNs in NLP
Stacked RNNs are a special kind of RNN with multiple recurrent layers stacked on top of one another; for this reason they are also called Deep RNNs. In this article, we will load the IMDB dataset and build multiple layers of SimpleRNN (a stacked SimpleRNN) as an example of a Stacked RNN.