Structure and Restriction of RBMs
An RBM consists of two layers of nodes:
- Visible Layer (V): This layer contains nodes that correspond to the input features of the data. Each node represents an input variable and can be binary or real-valued.
- Hidden Layer (H): This layer contains nodes that capture the underlying features or patterns in the data. The number of hidden nodes is typically smaller than the number of visible nodes.
The primary restriction that distinguishes RBMs from traditional neural networks is the absence of connections between neurons within the same layer. This means that neurons in the visible layer are only connected to neurons in the hidden layer, and there are no connections among neurons within the visible layer itself or within the hidden layer.
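The bipartite structure described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not a library API: the names (`W`, `b_h`, `p_hidden_given_visible`) are assumptions chosen for clarity. The key point is that a single inter-layer weight matrix is the only set of connections; there is no visible-to-visible or hidden-to-hidden matrix, so each hidden unit's activation depends only on the visible layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# A binary RBM's parameters: connections exist only BETWEEN layers,
# so one weight matrix of shape (n_visible, n_hidden) suffices.
n_visible, n_hidden = 6, 3
W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))  # inter-layer weights
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_hidden_given_visible(v):
    # Because hidden units are not connected to one another, their
    # activation probabilities are conditionally independent given v:
    # each is a simple sigmoid of its weighted input.
    return sigmoid(v @ W + b_h)

def p_visible_given_hidden(h):
    # Symmetrically, visible units are conditionally independent given h.
    return sigmoid(h @ W.T + b_v)

v = rng.integers(0, 2, size=n_visible).astype(float)  # example binary input
print(p_hidden_given_visible(v))  # n_hidden independent probabilities
```

This conditional independence is exactly what the restriction buys: the whole hidden layer can be sampled in one vectorized step rather than unit by unit.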
Why the Restriction?
This restriction simplifies the training process and forces the RBM to model the data through interactions between the visible and hidden layers, learning complex relationships and dependencies among the input features. By eliminating intra-layer connections, RBMs reduce computational complexity and make learning more efficient.
Restricted Boltzmann Machine: How It Works
The Restricted Boltzmann Machine (RBM) was introduced by Geoffrey Hinton and Terry Sejnowski in 1985 and has since become foundational in unsupervised machine learning, particularly in deep learning architectures. RBMs are widely used for dimensionality reduction, classification, regression, collaborative filtering, feature learning, and topic modelling.