If a neural network has more than one layer between the input and output layers, it is called a multilayer (deep) neural network. Each node in the network represents a neuron, and an activation function is applied to these neurons to produce the output. An MLP is trained with the help of the backpropagation algorithm.
By contrast, a classical neural network has only a single hidden layer, and in a classical neural network we generally don't use a non-linear activation function.
A multilayer neural network can learn very complex non-linear functions, which is not the case with a classical neural network.
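As a minimal sketch of the idea above, the forward pass of a small multilayer network applies an affine transform followed by an activation function at each layer. The layer sizes, ReLU activation, and random weights here are illustrative assumptions, not from the text:

```python
import numpy as np

def relu(x):
    # A common non-linear activation: max(0, x) element-wise
    return np.maximum(0, x)

def forward(x, layers):
    # Each layer is a (W, b) pair: affine transform, then activation
    for W, b in layers:
        x = relu(W @ x + b)
    return x

# Two hidden layers with made-up shapes and weights
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 3)), np.zeros(4)),  # 3 inputs -> 4 units
    (rng.standard_normal((2, 4)), np.zeros(2)),  # 4 units -> 2 outputs
]
x = np.array([1.0, -0.5, 2.0])
print(forward(x, layers))
```

Each layer's output becomes the next layer's input; the activation between layers is what lets the stack represent non-linear functions.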
The Need for an Activation Function
An activation function introduces non-linearity into the model. It is applied to the output of every node; without an activation function, the network cannot represent non-linear functions.
E.g. Y = W1*X + B (the output is always linear, no matter how many such layers we stack)
Y = f(W1*X + B) (the output is no longer linear, provided f is some function other than the identity)
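This collapse of stacked linear layers can be checked numerically. In the sketch below (the shapes and random weights are illustrative assumptions), two linear layers without an activation reduce exactly to one linear layer, while inserting a ReLU breaks that equivalence:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((3, 2)), rng.standard_normal(3)
W2, b2 = rng.standard_normal((1, 3)), rng.standard_normal(1)
x = rng.standard_normal(2)

# Without an activation, two layers collapse into a single linear layer:
# W2 @ (W1 @ x + b1) + b2 == (W2 @ W1) @ x + (W2 @ b1 + b2)
two_layer = W2 @ (W1 @ x + b1) + b2
W, b = W2 @ W1, W2 @ b1 + b2          # the equivalent single layer
one_layer = W @ x + b
print(np.allclose(two_layer, one_layer))  # True

# With a non-identity activation such as ReLU, no single linear
# layer reproduces the computation in general
relu = lambda z: np.maximum(0, z)
nonlinear = W2 @ relu(W1 @ x + b1) + b2
print(nonlinear, one_layer)
```

The algebraic identity in the first check holds for any weights, which is why depth alone adds no expressive power without a non-linear f between layers.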