AdaDerivative optimizer: Adapting step-sizes by the derivative term in past gradient information

Weidong Zou, Yuanqing Xia, Weipeng Cao*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

AdaBelief fully utilizes the “belief” in the gradient to iteratively update the parameters of deep neural networks. However, the reliability of this “belief” depends on the accuracy of the gradient prediction, and the key to this prediction accuracy is the choice of the smoothing parameter β1. AdaBelief also suffers from the overshoot problem, which occurs when the parameter values move past the target and cannot be corrected along the gradient direction. In this paper, we propose AdaDerivative to eliminate the overshoot problem of AdaBelief. The key idea of AdaDerivative is to replace the “belief” of AdaBelief with the exponential moving average (EMA) of the derivative term, $(1-\beta_2)\sum_{i=1}^{t}\beta_2^{t-i}(g_i - g_{i-1})^2$, constructed from the past and current gradients. We validate the performance of AdaDerivative on a variety of tasks, including image classification, language modeling, node classification, image generation, and object detection. Extensive experimental results demonstrate that AdaDerivative achieves state-of-the-art performance.
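
The snippet below is a minimal NumPy sketch of the derivative-term EMA described in the abstract, not the authors' reference implementation. The function name adaderivative_step, the hyperparameter defaults, and the Adam/AdaBelief-style bias correction are assumptions added for illustration; only the use of (g_t − g_{t−1})² in the second-moment EMA follows the abstract.

```python
# Minimal sketch of an AdaDerivative-style update (illustrative, not the paper's code).
import numpy as np

def adaderivative_step(param, grad, prev_grad, m, s, t,
                       lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One hypothetical AdaDerivative-style update step.

    m : EMA of gradients (first moment, as in Adam/AdaBelief).
    s : EMA of the squared derivative term (g_t - g_{t-1})^2, i.e.
        (1 - beta2) * sum_{i=1}^{t} beta2^(t-i) * (g_i - g_{i-1})^2.
    """
    m = beta1 * m + (1 - beta1) * grad
    diff = grad - prev_grad                    # derivative term g_t - g_{t-1}
    s = beta2 * s + (1 - beta2) * diff * diff  # EMA of (g_t - g_{t-1})^2

    # Bias correction, following the usual Adam/AdaBelief convention (assumed here).
    m_hat = m / (1 - beta1 ** t)
    s_hat = s / (1 - beta2 ** t)

    param = param - lr * m_hat / (np.sqrt(s_hat) + eps)
    return param, m, s

# Toy usage: minimize f(x) = x^2, whose gradient is 2x.
x, m, s, g_prev = np.array(5.0), 0.0, 0.0, 0.0
for t in range(1, 101):
    g = 2 * x
    x, m, s = adaderivative_step(x, g, g_prev, m, s, t, lr=0.1)
    g_prev = g
print(x)  # approaches 0
```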

Original language: English
Article number: 105755
Journal: Engineering Applications of Artificial Intelligence
Volume: 119
Publication status: Published - Mar 2023

Keywords

  • Adam
  • Deep neural networks
  • Optimization algorithms
  • Stochastic gradient descent
