Linear Convergence of Gradient Methods for Estimating Structured Transition Matrices in High-dimensional Vector Autoregressive Models

Xiao Lv, Wei Cui, Yulong Liu*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

In this paper, we present non-asymptotic optimization guarantees for gradient descent methods for estimating structured transition matrices in high-dimensional vector autoregressive (VAR) models. We adopt projected gradient descent (PGD) for single-structured transition matrices and alternating projected gradient descent (AltPGD) for superposition-structured ones. Our analysis demonstrates that both gradient algorithms converge linearly to the statistical error even though the objective function lacks strong convexity in the high-dimensional setting. Moreover, our result is sharp (up to a constant factor) in the sense that it matches the phase-transition theory of the corresponding model with independent samples. To the best of our knowledge, this analysis constitutes the first non-asymptotic optimization guarantee of a linear rate for regularized estimation in high-dimensional VAR models. Numerical results are provided to support our theoretical analysis.
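The abstract describes the single-structure algorithm only at a high level. Below is a minimal sketch of that PGD iteration, assuming sparsity as the structure and hard thresholding as the projection; the names `pgd_var` and `hard_threshold`, the step size, and the iteration count are illustrative assumptions, not the tuned choices analyzed in the paper, and the AltPGD variant for superposition structures (alternating analogous projected steps over each component) is not sketched here.

```python
import numpy as np

def hard_threshold(A, s):
    """Projection onto the set of matrices with at most s nonzero
    entries: keep the s largest-magnitude entries, zero out the rest."""
    flat = np.abs(A).ravel()
    if s >= flat.size:
        return A.copy()
    cutoff = np.partition(flat, -s)[-s]       # s-th largest magnitude
    return np.where(np.abs(A) >= cutoff, A, 0.0)

def pgd_var(X, s, eta=0.1, n_iter=200):
    """PGD sketch for a sparse VAR(1) transition matrix.

    X : (T, p) array of consecutive observations x_1, ..., x_T.
    Minimizes the least-squares loss
        L(A) = 1/(2(T-1)) * sum_t ||x_{t+1} - A x_t||^2
    and projects onto the s-sparse set after each gradient step.
    """
    T, p = X.shape
    Y, Z = X[1:], X[:-1]                      # targets and regressors
    A = np.zeros((p, p))
    for _ in range(n_iter):
        grad = (A @ Z.T - Y.T) @ Z / (T - 1)  # gradient of L at A
        A = hard_threshold(A - eta * grad, s)
    return A

# Usage: recover a sparse transition matrix from one simulated trajectory.
rng = np.random.default_rng(0)
p, T, s = 20, 500, 30
A_true = hard_threshold(rng.normal(size=(p, p)), s)
A_true *= 0.9 / max(abs(np.linalg.eigvals(A_true)))   # keep the process stable
X = np.zeros((T, p))
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.normal(size=p)
A_hat = pgd_var(X, s)
print("relative error:", np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))
```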

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Editors: Marc'Aurelio Ranzato, Alina Beygelzimer, Yann Dauphin, Percy S. Liang, Jenn Wortman Vaughan
Publisher: Neural information processing systems foundation
Pages: 16751-16763
Number of pages: 13
ISBN (electronic): 9781713845393
Publication status: Published - 2021
Event: 35th Conference on Neural Information Processing Systems, NeurIPS 2021 - Virtual, Online
Duration: 6 Dec 2021 - 14 Dec 2021

Publication series

Name: Advances in Neural Information Processing Systems
Volume: 20
ISSN (Print): 1049-5258

Conference

Conference: 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Virtual, Online
Period: 6/12/21 - 14/12/21
