Benchmarking Gradient Based Optimizers' Sensitivity to Learning Rate

33 Pages. Posted: 8 Jan 2023

Date Written: January 5, 2023

Abstract

The initial choice of learning rate is a key component of gradient-based methods and has a strong effect on the performance of a deep learning model.

This paper studies the behavior of several gradient-based optimization algorithms commonly used in deep learning and compares their performance across a range of learning rates. We observe that popular optimization algorithms are highly sensitive to the choice of learning rate.

Our goal is to determine which optimizer has an edge over the others in a given setting. We benchmark on two datasets, MNIST and CIFAR10. The results are quite surprising and can help practitioners choose a learning rate more efficiently.
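
As a rough illustration of the kind of sweep the abstract describes, the sketch below trains a small model on MNIST under several optimizers and learning rates and reports the resulting training loss. This is a minimal sketch, not the paper's protocol: the optimizer list, the learning-rate grid, the MLP architecture, and the one-epoch budget are all assumptions made for illustration.

```python
# Hypothetical learning-rate sweep; the optimizer list, LR grid, model,
# and training budget are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_model():
    # Small MLP for MNIST; the paper's architecture may differ.
    return nn.Sequential(nn.Flatten(),
                         nn.Linear(28 * 28, 128), nn.ReLU(),
                         nn.Linear(128, 10))

def train_one_epoch(model, loader, optimizer):
    # One pass over the training set; returns mean training loss.
    model.train()
    total_loss = 0.0
    for x, y in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        total_loss += loss.item() * x.size(0)
    return total_loss / len(loader.dataset)

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=128, shuffle=True)

optimizers = {"SGD": torch.optim.SGD,
              "Adam": torch.optim.Adam,
              "RMSprop": torch.optim.RMSprop}
learning_rates = [1e-4, 1e-3, 1e-2, 1e-1]  # assumed grid

for name, opt_cls in optimizers.items():
    for lr in learning_rates:
        torch.manual_seed(0)  # identical initialization for a fair comparison
        model = make_model()
        opt = opt_cls(model.parameters(), lr=lr)
        loss = train_one_epoch(model, loader, opt)
        print(f"{name:8s} lr={lr:<7g} train loss={loss:.4f}")
```

Comparing the printed losses column-wise shows how steeply each optimizer's performance varies with the learning rate, which is the sensitivity the paper measures.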

Keywords: Deep Learning, Optimization, Benchmarking, Gradient-based optimizers

Suggested Citation

Guha, Rehan, Benchmarking Gradient Based Optimizers' Sensitivity to Learning Rate (January 5, 2023). Available at SSRN: https://ssrn.com/abstract=4318767 or http://dx.doi.org/10.2139/ssrn.4318767
