Maximum likelihood multiple imputation: Faster imputations and consistent standard errors without posterior draws

Paul von Hippel, Jonathan Bartlett

Research output: Contribution to journal › Article › peer-review


Abstract

Multiple imputation (MI) is a method for repairing and analyzing data with missing values. MI replaces missing values with a sample of random values drawn from an imputation model. The most popular form of MI, which we call posterior draw multiple imputation (PDMI), draws the parameters of the imputation model from a Bayesian posterior distribution. An alternative, which we call maximum likelihood multiple imputation (MLMI), estimates the parameters of the imputation model using maximum likelihood (or equivalent). Compared to PDMI, MLMI is faster and yields slightly more efficient point estimates. A past barrier to using MLMI was the difficulty of estimating the standard errors of MLMI point estimates. We derive, implement and evaluate three consistent standard error formulas: (1) one combines variances within and between the imputed datasets, (2) one uses the score function and (3) one uses the bootstrap with two imputations of each bootstrapped sample. Formula (1) modifies for MLMI a formula that has long been used under PDMI, while formulas (2) and (3) can be used without modification under either PDMI or MLMI. We have implemented MLMI and the standard error estimators in the mlmi and bootImpute packages for R.
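The within-and-between formula (1) adapts for MLMI the combining rule long used under PDMI. For reference, that familiar PDMI rule pools the sampling variance of a point estimate across M imputed datasets as

\hat{V}_{\mathrm{PDMI}} = \bar{W} + \left(1 + \tfrac{1}{M}\right) B,

where \bar{W} is the average within-imputation variance and B is the between-imputation variance of the point estimates; the paper derives the MLMI analogue of this rule.

The sketch below illustrates, in R, the bootstrap estimator (3): bootstrap the incomplete data, impute each bootstrapped sample twice, analyse every imputed dataset, and pool. It uses the bootImpute package named in the abstract; the exact argument names and function interfaces shown here are assumptions and should be checked against the package documentation.

library(bootImpute)  # bootstrap-then-impute inference (von Hippel & Bartlett)
library(mice)        # imputation engine used by the bootMice wrapper

set.seed(1)
n <- 200
x <- rnorm(n)
y <- 1 + 0.5 * x + rnorm(n)
y[rbinom(n, 1, 0.3) == 1] <- NA              # make roughly 30% of y missing
dat <- data.frame(x = x, y = y)

# Bootstrap the observed data and impute each bootstrapped sample twice
# (two imputations per bootstrap sample, as in formula (3) of the abstract).
imps <- bootMice(dat, nBoot = 200, nImp = 2)

# Fit the analysis model to each imputed dataset and pool point estimates
# and standard errors from the bootstrap variance decomposition.
analyseFun <- function(d) coef(lm(y ~ x, data = d))
bootImputeAnalyse(imps, analyseFun)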

Original language: English
Pages (from-to): 400-420
Number of pages: 21
Journal: Statistical Science
Volume: 36
Issue number: 3
Early online date: 28 Jul 2021
DOIs
Publication status: Published - 31 Aug 2021
