Backpropagation is one of the most useful concepts in all of deep learning. Most neural networks are trained using the backpropagation algorithm. In this article, we will walk through the steps involved in training a model with backpropagation.
Steps in Backpropagation Algorithm:
We will be given a dataset to train the model. It will be in the form (Xi, Yi), where Xi is the input and Yi is the corresponding target (actual) value.
- First, we will initialize the weights using various methods such as random_uniform, random_normal, glorot_normal, glorot_uniform, he_normal, etc.
- Pass each data point Xi into the network (also called forward propagation).
- Calculate the loss from Yi and the predicted value Ypredicted.
- Compute all the derivatives using the chain rule, and use memoization (caching intermediate derivatives so they are not recomputed) to reduce training time.
- Update the weights using the available algorithms such as SGD, Adagrad, Adam, Adadelta, etc.
- Repeat steps 2 through 5 until convergence.
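The steps above can be sketched in code. This is a minimal illustration, not a production implementation: it assumes a one-hidden-layer network with sigmoid activations, a squared-error loss, plain SGD updates, and a tiny XOR-style dataset, all of which are illustrative choices not specified in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: initialize the weights (here: small random normal values)
W1 = rng.normal(0, 0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative (Xi, Yi) dataset: the XOR function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

lr = 0.5
for epoch in range(5000):
    # Step 2: forward propagation (activations are cached for reuse below)
    h = sigmoid(X @ W1 + b1)
    y_pred = sigmoid(h @ W2 + b2)

    # Step 3: loss from Yi and Ypredicted (mean squared error)
    loss = np.mean((Y - y_pred) ** 2)
    if epoch == 0:
        initial_loss = loss

    # Step 4: derivatives via the chain rule, reusing the cached
    # forward-pass activations instead of recomputing them
    d_ypred = 2 * (y_pred - Y) / len(X)
    d_z2 = d_ypred * y_pred * (1 - y_pred)  # sigmoid'(z) = s(z) * (1 - s(z))
    d_W2 = h.T @ d_z2
    d_b2 = d_z2.sum(axis=0)
    d_h = d_z2 @ W2.T
    d_z1 = d_h * h * (1 - h)
    d_W1 = X.T @ d_z1
    d_b1 = d_z1.sum(axis=0)

    # Step 5: update the weights (plain SGD)
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(f"initial loss: {initial_loss:.4f}, final loss: {loss:.4f}")
```

In a real project the update step would typically be delegated to an optimizer such as Adam or Adagrad rather than hand-coded SGD, but the loop structure stays the same.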
An important requirement of backpropagation is that the activation function must be differentiable. If the derivative is cheap to compute, we can train the model much faster.
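The sigmoid is a good example of a cheaply differentiable activation: its derivative can be computed directly from the forward-pass output, s(z)(1 - s(z)), with no extra exponentials. A quick numerical check (a sketch, not from the article):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad_from_output(s):
    # Derivative expressed in terms of the already-computed output s = sigmoid(z),
    # so backpropagation can reuse the forward-pass value for free
    return s * (1 - s)

z = 0.3
s = sigmoid(z)

# Compare against a central-difference numerical derivative
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
analytic = sigmoid_grad_from_output(s)
print(abs(analytic - numeric))  # should be very small
```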