
Posts tagged with Optimization

May 3, 2024

Adadelta

Reducing the aggressive learning rate decay in Adagrad

May 1, 2024

Adagrad

Parameter updates with a unique learning rate for each parameter

May 5, 2024

Adam

Combining momentum with per-parameter adaptive learning rates

February 16, 2022

Batch Gradient Descent

Minimizing cost functions using the entire dataset at each step

May 4, 2024

RMSprop

Reducing the aggressive learning rate decay in Adagrad with Adadelta's twin sibling

April 4, 2022

Stochastic Gradient Descent

Minimizing cost functions with one random data point at a time

© Bijon Setyawan Raya 2025