In the Mathematics of Gradient Descent post, we discussed what Gradient Descent is, how it works,
and how to derive the equations needed to update the parameters of the model.
In this post, we are going to write Batch Gradient Descent from scratch in Python.
Setting Up The Dataset
Throughout this series, we are going to use the Iris dataset from the UCI Machine Learning Repository, imported via scikit-learn.
There are two features in the dataset that we are going to analyse, namely sepal_length and petal_width, selected in the snippet below.
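A minimal sketch of loading these two columns with scikit-learn; here I'm assuming sepal length is used as the single feature and petal width as the target, which matches the regression line reported in the next section:

```python
from sklearn.datasets import load_iris

# Load the Iris dataset bundled with scikit-learn
iris = load_iris()

# Column 0 is sepal length (cm), column 3 is petal width (cm).
# Assumption: sepal_length is the feature, petal_width is the target.
x = iris.data[:, 0]  # sepal_length
y = iris.data[:, 3]  # petal_width
```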
Setting Up A Baseline
Before we implement Batch Gradient Descent in Python, we need to set a baseline to compare against our own implementation.
So, we are going to fit scikit-learn's built-in Linear Regression model to our dataset.
First, let's fit our dataset to the LinearRegression() model that we imported from sklearn.linear_model.
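Something along these lines, reusing the x and y arrays from the loading snippet above:

```python
from sklearn.linear_model import LinearRegression

# scikit-learn expects a 2D feature matrix, so reshape the single feature
model = LinearRegression()
model.fit(x.reshape(-1, 1), y)

# Intercept and slope of the fitted line
print(model.intercept_, model.coef_[0])
```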
Once we have the intercept and the coefficient values, let's draw the regression line to see whether it lies close to most of the data points.
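One way to draw the fitted line over the scatter of data points (a sketch using matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

# Evaluate the fitted line over the observed range of sepal lengths
x_line = np.linspace(x.min(), x.max(), 100)
y_line = model.intercept_ + model.coef_[0] * x_line

plt.scatter(x, y, label="data")
plt.plot(x_line, y_line, color="red", label="regression line")
plt.xlabel("sepal_length")
plt.ylabel("petal_width")
plt.legend()
plt.show()
```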
Figure: the Iris dataset regression line fitted with scikit-learn.
Clearly, the line lies very close to most of the data points, so let's compute the MSE of this regression line.
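One way to compute it, assuming the fitted model from above:

```python
from sklearn.metrics import mean_squared_error

# MSE of the scikit-learn baseline on the training data
y_pred = model.predict(x.reshape(-1, 1))
print(mean_squared_error(y, y_pred))
```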
From the result we got from scikit-learn, the best regression line is
y=−3.200215+0.75291757⋅x
with an MSE of around 0.191. This equation is going to be our baseline for this experiment, against which we judge how good our own Gradient Descent implementation is.
Mathematics of Batch Gradient Descent
The parameter update rule is expressed as
θ=θ−α∇θJ(θ)
where
θ is the parameter vector
α is the learning rate
J(θ) is the cost function
∇θJ(θ) is the gradient of the cost function
The gradients of the cost function with respect to the intercept θ0 and the coefficient θ1 are expressed as follows. Taking the cost to be the MSE, J(θ)=(1/m)Σ(ŷᵢ−yᵢ)² with ŷᵢ=θ0+θ1⋅xᵢ, we get
∂J/∂θ0=(2/m)Σ(ŷᵢ−yᵢ)
∂J/∂θ1=(2/m)Σ(ŷᵢ−yᵢ)⋅xᵢ
First, initialise the intercept θ0 and the coefficient θ1 (for example, with zeros) and compute the predictions ŷᵢ=θ0+θ1⋅xᵢ. Second, determine the prediction error and the gradients of the cost function w.r.t. the intercept θ0 and the coefficient θ1.
Lastly, update the intercept θ0 and the coefficient θ1.
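A sketch of those three steps as a single training loop; the cost is the plain MSE defined above (hence the factor of 2/m in the gradients), and the learning rate here is only an assumed value:

```python
import numpy as np

def batch_gradient_descent(x, y, alpha=0.01, n_iters=10_000):
    """Fit y = theta0 + theta1 * x by minimising the MSE with batch updates.

    Note: alpha=0.01 is an assumed learning rate, not necessarily the one
    used for the results discussed in this post.
    """
    theta0, theta1 = 0.0, 0.0
    m = len(x)
    for _ in range(n_iters):
        # Step 1: predictions with the current parameters
        y_pred = theta0 + theta1 * x
        # Step 2: prediction error and gradients of the MSE cost
        error = y_pred - y
        grad_theta0 = (2 / m) * np.sum(error)
        grad_theta1 = (2 / m) * np.sum(error * x)
        # Step 3: update both parameters simultaneously
        theta0 -= alpha * grad_theta0
        theta1 -= alpha * grad_theta1
    return theta0, theta1
```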
Conclusion
Figure: the BGD loss function over iterations.
Figure: the regression line changing over time (animation).
From the animation above, we can see how the regression line changes over time.
After 10,000 iterations, the MSE of our own Gradient Descent implementation is 0.195, which is quite close to our baseline of 0.191.
Figure: the path taken by gradient descent over the 2D MSE contour.
Here are some key points about Batch Gradient Descent:
Batch Gradient Descent updates the parameters only once per pass over all the data points, so the algorithm takes longer to converge.
Not only does it take longer to converge, it also consumes a lot of computational resources, since every single update has to touch the entire dataset.
Batch Gradient Descent is not the best algorithm for large datasets.
Code
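A short end-to-end example that ties the snippets above together and compares the result against the scikit-learn baseline; the learning rate is again an assumed value:

```python
# Fit with our own implementation (alpha=0.01 is an assumed learning rate)
theta0, theta1 = batch_gradient_descent(x, y, alpha=0.01, n_iters=10_000)

# MSE of our own fit, to compare against the baseline from scikit-learn
mse = np.mean((theta0 + theta1 * x - y) ** 2)
print(f"y = {theta0:.6f} + {theta1:.6f} * x, MSE = {mse:.3f}")
```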