Creating a Neural Network Model
The Keras module is built on top of TensorFlow and provides all the functionality needed to create a variety of neural network architectures. We'll use the Sequential class in Keras to build our model. First, you can try a linear model: since a neural network fundamentally follows the same 'math' as regression, you can create a linear model with a neural network as follows:
Create a linear Model
Python3
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=input_shape)
])

# After you create your model, it's always
# a good habit to print out its summary.
model.summary()
Output: a summary table listing each layer, its output shape, and its parameter count.
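To see why this single-unit model is equivalent to linear regression, here is a small NumPy sketch of the computation a Dense layer with one unit performs (the weight and bias values are purely illustrative, not from a trained model):

```python
import numpy as np

# A Dense layer with one unit computes y = x @ W + b,
# which is exactly the linear regression equation.
W = np.array([[2.0], [0.5], [-1.0]])  # one weight per input feature
b = np.array([3.0])                   # bias term

x = np.array([[1.0, 4.0, 2.0]])       # one sample with 3 features
y = x @ W + b                          # shape (1, 1)
print(y)  # [[5.]]  -> 1*2.0 + 4*0.5 + 2*(-1.0) + 3.0 = 5.0
```

Training the Keras model above amounts to finding the values of W and b that minimize the loss.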
But this is still just a linear model. What if your dataset is more complex, the relationships between features are more diverse, and you want a non-linear model? What do you need? The answer is activation functions. This is where neural networks truly start to shine. We can't cover activation functions in depth in this article, but in short, they introduce non-linearity into the model; the more of them you use, the more complex the patterns your model can find.
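As a quick illustration, ReLU (the activation we use below) simply zeroes out negative values; composing layers with this non-linear step is what lets the network model non-linear relationships. A minimal plain-Python sketch:

```python
def relu(x):
    # ReLU passes positive values through unchanged
    # and maps negative values to zero.
    return max(0.0, x)

print([relu(v) for v in [-2.0, -0.5, 0.0, 1.5, 3.0]])
# [0.0, 0.0, 0.0, 1.5, 3.0]
```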
Creating a Multilayered Neural Network
We'll create a network with two hidden layers of 64 units each and an output layer with a single unit, using the 'relu' activation function in the hidden layers. We'll use the Sequential class from the Keras module, which is commonly used to build multilayered neural networks. Keras offers many types of neural network and transformation layers for building different architectures, but here we only use Dense layers (from keras.layers); the hidden ones use the relu activation function, while the output layer has no activation since it predicts a raw numerical value.
Python3
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=64, activation='relu',
                          input_shape=input_shape),
    tf.keras.layers.Dense(units=64, activation='relu'),
    # No activation on the output layer:
    # the model predicts a raw numerical value.
    tf.keras.layers.Dense(units=1)
])
model.summary()
Output:
In Keras, after you create your model you need to 'compile' it, specifying parameters such as the optimizer and the loss function, as shown below. This step configures how the model will learn.
Python3
# The adam optimizer works pretty well for
# all kinds of problems and is a good starting point.
model.compile(optimizer='adam',
              # MAE loss is good for
              # numerical predictions.
              loss='mae')
So we used the adam optimizer and told the model to compute the MAE (mean absolute error) loss.
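To make the loss choice concrete, here is what loss='mae' computes under the hood, sketched in NumPy with made-up target and prediction values:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.5])   # actual target values
y_pred = np.array([2.5, 5.0, 4.0])   # model predictions (illustrative)

# Mean absolute error: the average of |y_true - y_pred|.
mae = np.mean(np.abs(y_true - y_pred))
print(mae)  # 0.666...  -> (0.5 + 0.0 + 1.5) / 3
```

During training, the optimizer adjusts the model's weights to push this average absolute error down.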
Implementing Neural Networks Using TensorFlow
Deep learning has been on the rise over the past decade, and its applications are so wide-ranging that it is remarkable how quickly the field has advanced. At the core of deep learning lies the basic "unit" that governs its architecture: the neural network.
A neural network architecture comprises a number of neurons, or activation units, and this circuit of units serves to find underlying relationships in data. It has been shown mathematically (the universal approximation theorem) that a sufficiently large, well-optimized neural network can approximate virtually any relation or function, regardless of its complexity; that is how much potential it has.
Now let's learn how to implement a neural network using TensorFlow.