Optimal Distributed Subsampling for Maximum Quasi-Likelihood Estimators With Massive Data

Jun Yu, HaiYing Wang, Mingyao Ai*, Huiming Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

54 Citations (Scopus)

Abstract

Nonuniform subsampling methods are effective for reducing computational burden while maintaining estimation efficiency with massive data. Existing methods mostly focus on subsampling with replacement because of its high computational efficiency. However, if the data volume is so large that the nonuniform subsampling probabilities cannot be calculated all at once, subsampling with replacement becomes infeasible to implement. This article solves this problem using Poisson subsampling. We first derive optimal Poisson subsampling probabilities in the context of quasi-likelihood estimation under the A- and L-optimality criteria. For a practically implementable algorithm with approximated optimal subsampling probabilities, we establish the consistency and asymptotic normality of the resultant estimators. To deal with the situation in which the full data are stored in different blocks or at multiple locations, we develop a distributed subsampling framework in which statistics are computed simultaneously on smaller partitions of the full data. Asymptotic properties of the resultant aggregated estimator are investigated. We illustrate and evaluate the proposed strategies through numerical experiments on simulated and real datasets. Supplementary materials for this article are available online.

Original language: English
Pages (from-to): 265-276
Number of pages: 12
Journal: Journal of the American Statistical Association
Volume: 117
Issue number: 537
DOIs
Publication status: Published - 2022

Keywords

  • Big data
  • Distributed subsampling
  • Poisson sampling
  • Quasi-likelihood
