Adaptive Moment Estimation (Adam) is an optimization algorithm inspired by the
Adagrad and RMSprop optimization algorithms.
Remember that Adagrad and RMSprop have their own limitations.
In the case of Adagrad, the learning rate diminishes over time and eventually becomes
too small because the algorithm takes into account all of the previous squared gradients,
which makes the model stop learning.
Even though RMSprop mitigates this by taking into account only an exponentially decaying
average of the previous squared gradients, it can still converge slowly in regions with
small gradients because, like Adagrad, it relies on gradient magnitudes alone.
To address the limitations of Adagrad and RMSprop,
Adam introduces the concept of momentum.
Let's see how Adam works.
Mathematics of Adam
The parameter update rule is expressed as
$$\theta_{t+1} = \theta_t - \frac{\alpha}{\sqrt{\hat{v}_t} + \epsilon}\,\hat{m}_t$$
where
$\theta_t$ is the parameter at time $t$
$\alpha$ is the learning rate
$\hat{m}_t$ is the corrected first moment estimate
$\hat{v}_t$ is the corrected second moment estimate
$\epsilon$ is a small value to prevent division by zero
The corrected first moment estimate is expressed as
$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t$$
where
$m_t$ is the first moment estimate
$\beta_1$ is the exponential decay rate for the first moment estimate, and $\beta_1^t$ is $\beta_1$ raised to the power $t$
$g_t$ is the gradient of the cost function at time $t$
The corrected second moment estimate is expressed as
$$\hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2$$
where
$v_t$ is the second moment estimate
$\beta_2$ is the exponential decay rate for the second moment estimate, and $\beta_2^t$ is $\beta_2$ raised to the power $t$
$g_t^2$ is the element-wise square of the gradient of the cost function at time $t$
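These two recursions translate almost line for line into code. Below is a minimal sketch of a single Adam step for one scalar parameter, using the default hyperparameters suggested in the Adam paper ($\alpha = 0.001$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$); the function and variable names are illustrative, not taken from any particular library.

```python
def adam_step(theta, grad, m, v, t,
              alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter.

    theta : current parameter value
    grad  : gradient of the cost function at time step t
    m, v  : first and second moment estimates from the previous step
    t     : current time step, starting at 1
    """
    # First and second moment estimates (exponential moving averages)
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2

    # Bias-corrected moment estimates
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    # Parameter update
    theta = theta - alpha * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v
```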
It's worth mentioning that the first moment estimate $m_t$ and the second moment estimate $v_t$
are both exponential moving averages of past gradients, so, much like Momentum, the update
maintains directionality across steps.
By accumulating previous gradients, Adam accelerates convergence, especially in regions
with small gradients.
The reason the first moment estimate $m_t$ has to be corrected is that it is initialized at zero,
so it is biased toward smaller values at the beginning of training.
This bias could lead to overly aggressive parameter updates, instability, or slow convergence.
The same applies to the second moment estimate $v_t$.
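To make the bias concrete, consider the very first step $t = 1$ with the common choice $\beta_1 = 0.9$ and $m_0 = 0$:

$$m_1 = \beta_1 m_0 + (1 - \beta_1)\, g_1 = 0.1\, g_1, \qquad \hat{m}_1 = \frac{m_1}{1 - \beta_1^1} = \frac{0.1\, g_1}{0.1} = g_1$$

The uncorrected estimate is only a tenth of the actual gradient, while the corrected estimate recovers it exactly.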
Since our simple linear regression model only has two parameters, we need $g_{t,0}$
to represent the gradient of the cost function with respect to the intercept,
and $g_{t,1}$ to represent the gradient of the cost function with respect to the coefficient.
These two can be expressed as follows:
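Assuming a squared-error cost of the form $\tfrac{1}{2}(\hat{y}_t - y_t)^2$ for the prediction $\hat{y}_t = \theta_0 + \theta_1 x_t$ (the exact form of the cost is an assumption here), the per-sample gradients would be

$$g_{t,0} = \hat{y}_t - y_t, \qquad g_{t,1} = (\hat{y}_t - y_t)\, x_t$$

which is consistent with the observation below that the intercept gradient is simply the prediction error; a batch version averages these over the samples.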
First, calculate the intercept gradient and the coefficient gradient.
Notice that the intercept gradient $g_{t,0}$ is the prediction error.
Second, calculate the first moment estimate $m_t$ and the second moment estimate $v_t$.
Third, correct the first moment estimate $m_t$ and the second moment estimate $v_t$.
Finally, update the intercept and the coefficient. A sketch of these four steps appears in the Code section below.
Conclusion
Figure: Pathways of Adadelta, RMSprop, and Adam along the 2D MSE contour.
From the figure above, we can see that Adam took a more direct path down the hill than the other two.
This illustrates that Adam can accelerate convergence, especially in regions with small gradients,
and avoid frequent changes of direction with the help of Momentum.
Code
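Below is a minimal NumPy sketch of the four steps above applied to a simple linear regression, assuming the squared-error gradients given earlier; the function name adam_linear_regression and the synthetic data are illustrative, not an authoritative implementation.

```python
import numpy as np

def adam_linear_regression(x, y, alpha=0.001, beta1=0.9, beta2=0.999,
                           eps=1e-8, epochs=1000):
    """Fit y ≈ theta[0] + theta[1] * x with Adam, following the four steps above."""
    theta = np.zeros(2)  # [intercept, coefficient]
    m = np.zeros(2)      # first moment estimates
    v = np.zeros(2)      # second moment estimates

    for t in range(1, epochs + 1):
        # First: intercept and coefficient gradients (averaged over the batch)
        error = (theta[0] + theta[1] * x) - y
        g = np.array([error.mean(), (error * x).mean()])

        # Second: first and second moment estimates
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2

        # Third: bias-corrected moment estimates
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)

        # Finally: update the intercept and the coefficient
        theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)

    return theta

# Illustrative usage on synthetic data generated as y = 2 + 3x + noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2 + 3 * x + rng.normal(0, 0.1, 200)
print(adam_linear_regression(x, y, alpha=0.05, epochs=2000))  # should be close to [2, 3]
```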
References
Sebastian Ruder. An overview of gradient descent optimization algorithms. arXiv:1609.04747 (2016).
Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 (2014).