GLR: Gradient-Based Learning Rate Scheduler

  • Conference paper
  • First Online:
Image Analysis and Processing – ICIAP 2023 (ICIAP 2023)

Abstract

Training a neural network is a complex and time-consuming process because of the many combinations of hyperparameters that must be adjusted and tested. One of the most crucial hyperparameters is the learning rate, which controls the speed and direction of the weight updates during training. We propose an adaptive scheduler called the Gradient-based Learning Rate scheduler (GLR), which significantly reduces the tuning effort thanks to a single user-defined parameter. Across a wide set of experiments, GLR achieves results competitive with state-of-the-art schedulers and optimizers. The computational cost of our method is negligible, and it can be used to train different network topologies.
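To make the idea concrete, the following is a minimal, hypothetical sketch of a gradient-driven learning-rate rule in PyTorch: the learning rate is rescaled by the ratio of successive gradient norms, bounded by a single user-defined parameter alpha. This illustrates the general technique the abstract describes; it is not the authors' GLR algorithm, and the model, data, and rescaling rule are all assumptions.

    # Hypothetical sketch of a gradient-driven learning-rate rule.
    # NOT the authors' GLR algorithm: the rescaling rule and the
    # single knob `alpha` are illustrative assumptions.
    import torch
    import torch.nn as nn

    model = nn.Linear(20, 2)                       # toy model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    alpha = 0.05        # the single user-defined parameter (assumed)
    prev_norm = None

    for step in range(100):
        x = torch.randn(32, 20)                    # synthetic batch
        y = torch.randint(0, 2, (32,))
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()

        # Global gradient norm over all parameters.
        grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                                   for p in model.parameters()))

        if prev_norm is not None:
            # Grow the LR when gradients shrink and vice versa, but
            # never change it by more than a factor of 1 +/- alpha.
            ratio = (prev_norm / grad_norm).clamp(1 - alpha, 1 + alpha)
            for group in optimizer.param_groups:
                group["lr"] *= ratio.item()
        prev_norm = grad_norm

        optimizer.step()

The single bound alpha plays the role of the one user-defined parameter the abstract mentions; everything else (SGD, the norm ratio, the clamp) is one plausible instantiation of "gradient-based" scheduling, not the published method.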


Notes

  1. https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html.

  2. https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html.

  3. https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_wine.html.

  4. https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html.

  5. It consists of only one linear layer (see the sketch after these notes).

  6. It is generated from VGG11 by removing 4 convolutional layers (see the sketch after these notes).
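For concreteness, here is a hypothetical PyTorch sketch of the two toy architectures in notes 5 and 6. The input dimensions, class counts, and the choice of which four VGG11 convolutions to remove are not specified on this page, so all of them are illustrative assumptions.

    # Hypothetical sketches of the toy models in notes 5 and 6.
    # Input shapes, class counts, and which VGG11 convolutions are
    # dropped are assumptions, not taken from the paper.
    import torch.nn as nn
    from torchvision.models import vgg11

    # Note 5: a single linear layer, i.e. a linear classifier
    # (here: 20 features -> 2 classes, as in a toy tabular dataset).
    linear_model = nn.Linear(20, 2)

    # Note 6: VGG11 with 4 of its 8 convolutional layers removed.
    # vgg11().features[:11] keeps the first four conv layers
    # (64, 128, 256, 256 channels); a pooled linear head makes the
    # truncated network a usable classifier.
    reduced_vgg = nn.Sequential(
        *list(vgg11().features.children())[:11],
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(256, 10),  # assumed 10-class output (e.g. CIFAR-10)
    )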


Acknowledgments

The authors acknowledge financial support from the PNRR MUR project PE0000013-FAIR.

Author information


Corresponding author

Correspondence to Maria Ausilia Napoli Spatafora.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Napoli Spatafora, M.A., Ortis, A., Battiato, S. (2023). GLR: Gradient-Based Learning Rate Scheduler. In: Foresti, G.L., Fusiello, A., Hancock, E. (eds) Image Analysis and Processing – ICIAP 2023. ICIAP 2023. Lecture Notes in Computer Science, vol 14233. Springer, Cham. https://doi.org/10.1007/978-3-031-43148-7_23


  • DOI: https://doi.org/10.1007/978-3-031-43148-7_23

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43147-0

  • Online ISBN: 978-3-031-43148-7

  • eBook Packages: Computer Science, Computer Science (R0)
