Generalized additive models for gigadata

modelling the UK black smoke network daily data

Simon N. Wood, Zheyuan Li, Gavin Shaddick, Nicole H. Augustin

Research output: Contribution to journal › Article

21 Citations (Scopus)

Abstract

We develop scalable methods for fitting penalized regression spline based generalized additive models with of the order of 10^4 coefficients to up to 10^8 data. Computational feasibility rests on: (i) a new iteration scheme for estimation of model coefficients and smoothing parameters, avoiding poorly scaling matrix operations; (ii) parallelization of the iteration's pivoted block Cholesky and basic matrix operations; (iii) the marginal discretization of model covariates to reduce memory footprint, with efficient scalable methods for computing required crossproducts directly from the discrete representation. Marginal discretization enables much finer discretization than joint discretization would permit. We were motivated by the need to model four decades' worth of daily particulate data from the UK Black Smoke and Sulphur Dioxide monitoring network. Although reduced in size recently, over 2000 stations have at some time been part of the network, resulting in some 10 million measurements. Modelling at a daily scale is desirable for accurate trend estimation and mapping, and to provide daily exposure estimates for epidemiological cohort studies. Because of the data set size, previous work has focussed on modelling time or space averaged pollution levels, but this is unsatisfactory from a health perspective, since it is often acute exposure locally and on the time scale of days that is of most importance in driving adverse health outcomes. If computed by conventional means our black smoke model would require a half terabyte of storage just for the model matrix, whereas we are able to compute with it on a desktop workstation. The best previously available reduced memory footprint method would have required three orders of magnitude more computing time than our new method.
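Point (iii) of the abstract is what keeps the memory footprint small: once each covariate is discretized onto a modest number of distinct values, the cross-products X^T W y and X^T W X needed by the fitting iteration can be accumulated from per-bin sums, without ever forming the n × p model matrix. Below is a minimal NumPy sketch of that idea for a single smooth of one discretized covariate; it is not the authors' implementation, and every name in it (Xb, k, n_bins, ...) is illustrative.

```python
# Illustrative sketch (not the authors' code) of the marginal-discretization
# cross-product trick described in point (iii) of the abstract. Shown for a
# single smooth of one discretized covariate; all names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, p, n_bins = 100_000, 20, 200       # observations, basis dimension, discrete bins

# After discretization, observation i only records which of the n_bins distinct
# covariate values it falls on (index k[i]); the basis need only be evaluated
# at those distinct values, giving the small n_bins x p matrix Xb.
k = rng.integers(0, n_bins, size=n)      # bin index for each observation
Xb = rng.standard_normal((n_bins, p))    # stand-in for a spline basis at the bin values

y = rng.standard_normal(n)               # working response in a penalized IRLS step
w = rng.random(n)                        # working weights

# Conventional computation forms the full n x p model matrix X = Xb[k].
X = Xb[k]
XtWy_full = X.T @ (w * y)
XtWX_full = (X * w[:, None]).T @ X

# Discrete computation: accumulate per-bin sums of w and w*y in O(n) time,
# then do all matrix work on the n_bins x p matrix only.
wbar = np.bincount(k, weights=w, minlength=n_bins)       # sum of w_i per bin
wybar = np.bincount(k, weights=w * y, minlength=n_bins)  # sum of w_i*y_i per bin
XtWy = Xb.T @ wybar                      # O(n + n_bins*p) instead of O(n*p)
XtWX = (Xb * wbar[:, None]).T @ Xb       # O(n + n_bins*p^2) instead of O(n*p^2)

assert np.allclose(XtWy, XtWy_full)
assert np.allclose(XtWX, XtWX_full)
```

Cross-products between two different smooths, and the pivoted block Cholesky and smoothing-parameter update of points (i) and (ii), need the further machinery developed in the paper; the sketch only shows why the single-smooth accumulations cost O(n) plus work on an n_bins × p matrix rather than O(n p^2) on the full model matrix.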
Original language: English
Pages (from-to): 1199-1210
Journal: Journal of the American Statistical Association
Volume: 112
Issue number: 519
Early online date: 24 Jun 2016
DOI: 10.1080/01621459.2016.1195744
Publication status: Published - 25 Apr 2017

Fingerprint

Generalized Additive Models, Discretization, Modeling, Health, Sulfur Dioxide, Penalized Regression, Regression Splines, Penalized Splines, Cholesky, Network Monitoring, Cohort Study, Iteration Scheme, Computing, Smoothing Parameter, Matrix Models, Coefficient, Pollution, Parallelization, Model, Acute

Cite this

Generalized additive models for gigadata : modelling the UK black smoke network daily data. / Wood, Simon N.; Li, Zheyuan; Shaddick, Gavin; Augustin, Nicole H.

In: Journal of the American Statistical Association, Vol. 112, No. 519, 25.04.2017, p. 1199-1210.

@article{9ec0de0419ee42bea45276d2b04156df,
title = "Generalized additive models for gigadata: modelling the UK black smoke network daily data",
abstract = "We develop scalable methods for fitting penalized regression spline based generalized additive models with of the order of 104 coefficients to up to 108 data. Computational feasibility rests on: (i) a new iteration scheme for estimation of model coefficients and smoothing parameters, avoiding poorly scaling matrix operations; (ii) parallelization of the iteration’s pivoted block Cholesky and basic matrix operations; (iii) the marginal discretization of model covariates to reduce memory footprint, with efficient scalable methods for computing required crossproducts directly from the discrete representation. Marginal discretization enables much finer discretization than joint discretization would permit. We were motivated by the need to model four decades worth of daily particulate data from the UK Black Smoke and Sulphur Dioxide monitoring network. Although reduced in size recently, over 2000 stations have at some time been part of the network, resulting in some 10 million measurements. Modelling at a daily scale is desirable for accurate trend estimation and mapping, and to provide daily exposure estimates for epidemiological cohort studies. Because of the data set size, previous work has focussed on modelling time or space averages pollution levels, but this is unsatisfactory from a health perspective, since it is often acute exposure locally and on the time scale of days that is of most importance in driving adverse health outcomes. If computed by conventional means our black smoke model would require a half terabyte of storage just for the model matrix, whereas we are able to compute with it on a desktop workstation. The best previously available reduced memory footprint method would have required three orders of magnitude more computing time than our new method.",
author = "Wood, {Simon N.} and Zheyuan Li and Gavin Shaddick and Augustin, {Nicole H.}",
year = "2017",
month = "4",
day = "25",
doi = "10.1080/01621459.2016.1195744",
language = "English",
volume = "112",
pages = "1199--1210",
journal = "Journal of the American Statistical Association",
issn = "0162-1459",
publisher = "Taylor and Francis",
number = "519",

}

TY - JOUR

T1 - Generalized additive models for gigadata

T2 - modelling the UK black smoke network daily data

AU - Wood, Simon N.

AU - Li, Zheyuan

AU - Shaddick, Gavin

AU - Augustin, Nicole H.

PY - 2017/4/25

Y1 - 2017/4/25

UR - http://dx.doi.org/10.1080/01621459.2016.1195744

U2 - 10.1080/01621459.2016.1195744

DO - 10.1080/01621459.2016.1195744

M3 - Article

VL - 112

SP - 1199

EP - 1210

JO - Journal of the American Statistical Association

JF - Journal of the American Statistical Association

SN - 0162-1459

IS - 519

ER -