
Posts tagged with Optimization

May 3, 2024

Adadelta

Reducing the aggressive learning rate decay in Adagrad

May 1, 2024

Adagrad

Parameter updates with a unique learning rate for each parameter

May 5, 2024

Adam

RMSprop with momentum

May 5, 2024

Adamax

A variant of Adam based on the infinity norm

February 16, 2022

Batch Gradient Descent

Minimizing cost functions using the entire dataset at each step

May 5, 2024

Nadam

Adam with Nesterov momentum

May 4, 2024

RMSprop

Reducing the aggressive learning rate decay in Adagrad with the twin sibling of Adadelta

April 4, 2022

Stochastic Gradient Descent

Minimizing cost functions one random data point at a time
