Optimal subsampling algorithms for big data regressions

Mingyao Ai, Jun Yu, Huiming Zhang, Hai Ying Wang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

72 Citations (Scopus)

Abstract

In order to quickly approximate maximum likelihood estimators from massive data, this study examines the optimal subsampling method under the A-optimality criterion (OSMAC) for generalized linear models. The consistency and asymptotic normality of the estimator from a general subsampling algorithm are established, and optimal subsampling probabilities under the A- and L-optimality criteria are derived. Furthermore, using Frobenius-norm matrix concentration inequalities, the finite-sample properties of the subsample estimator based on optimal subsampling probabilities are obtained. Because the optimal subsampling probabilities depend on the full-data estimate, an adaptive two-step algorithm is developed, and the asymptotic normality and optimality of its estimator are established. The proposed methods are illustrated and evaluated through numerical experiments on simulated and real data sets.
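The adaptive two-step algorithm described in the abstract can be sketched for a concrete case such as logistic regression. The following is a minimal illustration, not the paper's exact procedure: step one draws a uniform pilot subsample to get a rough estimate; step two computes approximate optimal subsampling probabilities from that pilot fit (here an L-optimality-style score, |y - p(x)| · ‖x‖), draws a second subsample, and solves an inverse-probability-weighted likelihood equation. All function names, subsample sizes, and the Newton solver are illustrative assumptions.

```python
import numpy as np

def fit_weighted_logistic(X, y, w, iters=50):
    # Weighted logistic-regression MLE via Newton-Raphson
    # (illustrative solver, not the paper's implementation).
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (w * (y - p))                      # weighted score
        H = -(X * (w * p * (1 - p))[:, None]).T @ X     # weighted Hessian
        step = np.linalg.solve(H, grad)
        beta = beta - step
        if np.max(np.abs(step)) < 1e-8:
            break
    return beta

def two_step_subsample(X, y, r0=500, r=5000, rng=None):
    # Sketch of an adaptive two-step subsampling scheme.
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    # Step 1: uniform pilot subsample -> pilot estimate.
    idx0 = rng.choice(n, size=r0, replace=True)
    beta0 = fit_weighted_logistic(X[idx0], y[idx0], np.ones(r0))
    # Step 2: L-optimality-style probabilities |y - p| * ||x||
    # evaluated at the pilot estimate (an assumed stand-in for
    # the paper's derived optimal probabilities).
    p_full = 1.0 / (1.0 + np.exp(-X @ beta0))
    score = np.abs(y - p_full) * np.linalg.norm(X, axis=1)
    probs = score / score.sum()
    idx1 = rng.choice(n, size=r, replace=True, p=probs)
    # Inverse-probability weights correct the sampling bias.
    w = 1.0 / (probs[idx1] * n)
    return fit_weighted_logistic(X[idx1], y[idx1], w)
```

With this structure, only r0 + r rows ever enter a model fit, while the weighting keeps the second-step estimator consistent for the full-data MLE.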

Original language: English
Pages (from-to): 749-772
Number of pages: 24
Journal: Statistica Sinica
Volume: 31
Issue number: 2
DOIs
Publication status: Published - Apr 2021

Keywords

  • Generalized linear models
  • Massive data
  • Matrix concentration inequality

