Encoder Structure

This structure comprises a conventional feed-forward neural network that is structured to predict the latent view representation of the input data. It is given by:

$$h_1 = \sigma(W_1 x + b_1)$$

$$h_2 = \sigma(W_2 h_1 + b_2)$$

$$h = \sigma(W_3 h_2 + b_3)$$

where $h_1$ represents hidden layer 1, $h_2$ represents hidden layer 2, $x$ represents the input of the autoencoder, and $h$ represents the low-dimensional data space (the latent code) of the input.
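In PyTorch, an encoder with this shape can be sketched as a small `nn.Module`. The snippet below is illustrative rather than the article's exact code: the layer widths (784, 128, 64, 16) and the ReLU activation are assumptions, chosen as if the input were a flattened 28x28 image.

```python
import torch
from torch import nn

class Encoder(nn.Module):
    """Feed-forward encoder: x -> hidden layer 1 -> hidden layer 2 -> latent code h."""

    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128),  # hidden layer 1 (h1)
            nn.ReLU(),
            nn.Linear(128, 64),         # hidden layer 2 (h2)
            nn.ReLU(),
            nn.Linear(64, latent_dim),  # low-dimensional code h
        )

    def forward(self, x):
        return self.net(x)

# Quick check: a batch of 32 inputs is compressed to 32 codes of size 16.
codes = Encoder()(torch.rand(32, 784))
print(codes.shape)  # torch.Size([32, 16])
```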

Implementing an Autoencoder in PyTorch

Autoencoders are a type of neural network that generates an “n-layer” coding of the given input and attempts to reconstruct the input using the code generated. This neural network architecture is divided into the encoder structure, the decoder structure, and the latent space, also known as the “bottleneck”. To learn the data representations of the input, the network is trained on unlabeled data, i.e. in an unsupervised setting. These compressed data representations then go through a decoding process in which the input is reconstructed. Training an autoencoder is therefore a regression task that models the identity function.
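To make this concrete, here is a minimal sketch of an autoencoder trained to reproduce its own input. It is not the article's implementation: the layer sizes, the MSE criterion, and the Adam optimizer are assumptions chosen for illustration. The key point is that the loss compares the reconstruction with the input itself, so no labels are needed.

```python
import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        # Encoder: compresses the input down to the bottleneck code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: expands the code back up to the input dimension.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

x = torch.rand(32, 784)        # a batch of unlabeled inputs
optimizer.zero_grad()
x_hat = model(x)               # reconstruction
loss = criterion(x_hat, x)     # target is the input itself: identity regression
loss.backward()
optimizer.step()
```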


Decoder Structure

This structure also comprises a feed-forward neural network, but the dimensions of its layers increase in the reverse order of the encoder layers so that it can predict (reconstruct) the input. Mirroring the encoder above, it is given by:

$$h_2' = \sigma(W_4 h + b_4)$$

$$h_1' = \sigma(W_5 h_2' + b_5)$$

$$\hat{x} = \sigma(W_6 h_1' + b_6)$$

where $\hat{x}$ represents the reconstruction of the input $x$.
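A decoder matching the encoder sketch above could look like the following. Again this is an assumption-laden sketch, not the article's code: the widths (16, 64, 128, 784) simply mirror the encoder in reverse, and the final Sigmoid assumes inputs scaled to [0, 1].

```python
import torch
from torch import nn

class Decoder(nn.Module):
    """Feed-forward decoder: latent code h -> hidden layers of increasing width -> reconstruction."""

    def __init__(self, latent_dim=16, output_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64),   # widths mirror the encoder in reverse order
            nn.ReLU(),
            nn.Linear(64, 128),
            nn.ReLU(),
            nn.Linear(128, output_dim),
            nn.Sigmoid(),                # assumes inputs are scaled to [0, 1]
        )

    def forward(self, h):
        return self.net(h)
```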

Latent Space Structure

This is the data representation, i.e. the compressed, low-dimensional representation of the model’s input. The decoder structure uses this low-dimensional form of the data to reconstruct the input. It is represented by $h$ in the encoder equations above.
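A quick shape check makes the bottleneck’s role concrete. The sizes below (a 784-dimensional input squeezed into a 16-dimensional code) are the same illustrative assumptions used in the sketches above, not values from the article:

```python
import torch
from torch import nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 16))
decoder = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 784))

x = torch.rand(32, 784)   # batch of 32 flattened 28x28 inputs
h = encoder(x)            # latent space: compressed representation of x
x_hat = decoder(h)        # the decoder reconstructs the input from h alone

print(h.shape)            # torch.Size([32, 16])
print(x_hat.shape)        # torch.Size([32, 784])
```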

Modules Needed

torch: This Python package provides high-level tensor computation and deep neural networks built on the autograd system.
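As a toy illustration of the tensor and autograd facilities (not taken from the article), a gradient can be computed in a few lines:

```python
import torch

# Tensor computation with automatic differentiation.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y = x1^2 + x2^2 + x3^2
y.backward()         # autograd computes dy/dx = 2x
print(x.grad)        # tensor([2., 4., 6.])
```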

Implementation of an Autoencoder in PyTorch

Step 1: Importing Modules
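The article’s own import list is truncated here, but a typical set of imports for an MNIST-style autoencoder experiment might look like the following; torchvision and matplotlib are assumptions about what the later steps use:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
```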