Large-scale Riemannian meta-optimization via subspace adaptation

Peilin Yu, Yuwei Wu*, Zhi Gao, Xiaomeng Fan, Yunde Jia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Riemannian meta-optimization is a promising approach to solving non-linear constrained optimization problems: neural networks are trained as optimizers that perform optimization directly on Riemannian manifolds. However, existing Riemannian meta-optimization methods incur huge memory footprints in large-scale settings, because the learned optimizer can only adapt gradients of a fixed size and therefore cannot be shared across different Riemannian parameters. In this paper, we propose an efficient Riemannian meta-optimization method that significantly reduces this memory burden via a subspace adaptation scheme. Our method trains neural networks to individually adapt the row and column subspaces of Riemannian gradients, rather than directly adapting the full gradient matrices as existing methods do. As a result, the learned optimizer can be shared across Riemannian parameters of different sizes. Our method reduces the model memory consumption by six orders of magnitude when optimizing an orthogonality-constrained mainstream deep neural network (e.g., ResNet50). Experiments on multiple Riemannian tasks show that our method not only reduces memory consumption but also improves the performance of Riemannian meta-optimization.
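The key idea in the abstract, adapting the row and column subspaces of a Riemannian gradient with a small shared network instead of feeding the full gradient matrix to a fixed-size learned optimizer, can be illustrated with a minimal sketch. The snippet below is a hypothetical PyTorch illustration, not the paper's implementation: the names SubspaceAdapter, stiefel_project, and stiefel_retract, the use of row/column norms as subspace statistics, and the QR-based retraction are assumptions made purely for the example.

```python
# Hypothetical sketch: a size-agnostic learned optimizer via row/column subspace adaptation.
# All names and design choices here are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn


def stiefel_project(W, G):
    """Project a Euclidean gradient G onto the tangent space of the Stiefel
    manifold at W (one common convention: G - W * sym(W^T G))."""
    WtG = W.T @ G
    return G - W @ (WtG + WtG.T) / 2


def stiefel_retract(W, xi):
    """QR-based retraction: map the tangent step xi back onto the manifold."""
    Q, R = torch.linalg.qr(W + xi)
    return Q * torch.sign(torch.diagonal(R)).unsqueeze(0)  # fix column signs


class SubspaceAdapter(nn.Module):
    """Tiny coordinate-wise network shared across all Riemannian parameters.

    Instead of consuming a full m x n gradient (which fixes the input size),
    it maps per-row and per-column summary statistics of the Riemannian
    gradient to positive scaling factors, so the same weights can serve
    matrices of any shape.
    """

    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, stats):  # stats: (k,) vector of row or column norms
        return torch.exp(self.net(stats.unsqueeze(-1))).squeeze(-1)


def adapted_step(W, euc_grad, adapter, lr=1e-2):
    """One step: adapt the row and column subspaces of the Riemannian gradient,
    then retract the scaled update back onto the manifold."""
    rgrad = stiefel_project(W, euc_grad)
    row_scale = adapter(rgrad.norm(dim=1))  # one scale per row    -> (m,)
    col_scale = adapter(rgrad.norm(dim=0))  # one scale per column -> (n,)
    update = row_scale.unsqueeze(1) * rgrad * col_scale.unsqueeze(0)
    return stiefel_retract(W, -lr * update)


# Usage: the same adapter handles Riemannian parameters of different sizes.
adapter = SubspaceAdapter()
W1, _ = torch.linalg.qr(torch.randn(64, 8))    # 64 x 8 Stiefel point
W2, _ = torch.linalg.qr(torch.randn(256, 32))  # 256 x 32 Stiefel point
W1 = adapted_step(W1, torch.randn_like(W1), adapter)
W2 = adapted_step(W2, torch.randn_like(W2), adapter)
```

The sketch only shows the forward pass of such an optimizer; in a meta-optimization setting the adapter's weights would be trained by backpropagating a task loss through unrolled optimization steps, which is omitted here.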

Original language: English
Article number: 104306
Journal: Computer Vision and Image Understanding
Volume: 253
DOIs
Publication status: Published - Mar 2025

Keywords

  • Large-scale optimization
  • Riemannian manifolds
  • Riemannian meta-optimization
  • Subspace adaptation
