BSR

Optimization

05 May 2024

Adam

Combining momentum and RMSprop into the most widely used adaptive learning rate algorithm

04 May 2024

RMSprop

Reducing the aggressive learning rate decay in Adagrad with Adadelta's twin sibling

03 May 2024

Adadelta

Reducing the aggressive learning rate decay in Adagrad

01 May 2024

Adagrad

Parameter updates with a unique learning rate for each parameter

30 April 2024

SGD with Nesterov

A more conscious version of Stochastic Gradient Descent with Momentum

27 April 2024

SGD with Momentum

Faster convergence using Stochastic Gradient Descent with Momentum

04 April 2022

Stochastic Gradient Descent

Minimizing cost functions with fewer data points

03 March 2022

Mini-Batch Gradient Descent

Updating the parameters after seeing a subset of the dataset

16 February 2022

Batch Gradient Descent

Linear Regression + Batch Gradient Descent in Python

13 February 2022

Mathematics of Gradient Descent

A mathematical adventure into Gradient Descent

11 January 2022

Introduction to Gradient Descent Algorithm

It's about to go down! 👇