Forecasting for big data: does suboptimality matter?

Konstantinos Nikolopoulos, Fotios Petropoulos

Research output: Contribution to journal › Article

  • 2 Citations

Abstract

Traditionally, forecasters focus on developing algorithms to identify optimal models and sets of parameters, optimal in the sense of within-sample fitting. However, this quest strongly assumes that optimally set parameters will also give the best extrapolations. The problem becomes even more pertinent when we consider the vast volumes of data to be forecast in the big data era. In this paper, we ask whether this obsession with optimality always bears fruit, or whether we spend too much time and effort in its pursuit. Could we be better off targeting faster and more robust systems that aim for suboptimal forecasting solutions which, in turn, would not jeopardise the efficiency of the systems in use? This study sheds light on this question by means of an empirical investigation. We demonstrate the trade-off between optimal and suboptimal solutions in terms of forecasting performance versus computational cost. Finally, we discuss the implications of suboptimality and attempt to quantify the monetary savings that result from suboptimal solutions.
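
To make the trade-off concrete, below is a minimal illustrative sketch in Python. It is not the authors' experiment: it contrasts an "optimal" smoothing parameter, chosen by exhaustive in-sample grid search, with an arbitrary "suboptimal" fixed value (alpha = 0.3) for simple exponential smoothing on synthetic data. The series, the grid, and the fixed value are all assumptions made purely for illustration.

import time
import numpy as np

def ses_forecast(y, alpha):
    # One-step-ahead simple exponential smoothing forecasts for series y.
    f = np.empty(len(y))
    f[0] = y[0]  # initialise the level with the first observation
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def in_sample_sse(y, alpha):
    # Within-sample sum of squared one-step-ahead errors: the "optimality" criterion.
    return float(np.sum((y - ses_forecast(y, alpha)) ** 2))

rng = np.random.default_rng(42)
n_series, n_obs, holdout = 1000, 60, 12
series = [np.cumsum(rng.normal(0.0, 1.0, n_obs)) + 100.0 for _ in range(n_series)]

grid = np.linspace(0.05, 0.95, 19)  # candidate alphas for the "optimal" strategy
strategies = {
    "optimal (grid search)":  lambda train: min(grid, key=lambda a: in_sample_sse(train, a)),
    "suboptimal (fixed 0.3)": lambda train: 0.3,  # assumed default, no optimisation
}

for name, pick_alpha in strategies.items():
    start, errors = time.perf_counter(), []
    for y in series:
        train, test = y[:-holdout], y[-holdout:]
        alpha = pick_alpha(train)
        level = ses_forecast(train, alpha)[-1]        # smoothed level before the last observation
        fc = alpha * train[-1] + (1 - alpha) * level  # flat multi-step SES forecast
        errors.append(np.mean(np.abs(test - fc)))
    print(f"{name:24s} time: {time.perf_counter() - start:6.2f}s  mean MAE: {np.mean(errors):.3f}")

By construction, the grid-search variant performs roughly as many smoothing passes per series as there are grid points, so its computational cost scales accordingly; whether the in-sample optimum buys a matching out-of-sample accuracy gain is the question the paper investigates empirically.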
Language: English
Pages: 322-329
Journal: Computers and Operations Research
Volume: 98
Early online date: 11 May 2017
DOI: 10.1016/j.cor.2017.05.007
Status: Published - 1 Oct 2018

Cite this

Forecasting for big data: does suboptimality matter? / Nikolopoulos, Konstantinos; Petropoulos, Fotios.

In: Computers and Operations Research, Vol. 98, 01.10.2018, p. 322-329.

@article{74bb53aca7484ab0aa0463b758bcce25,
title = "Forecasting for big data: does suboptimality matter?",
abstract = "Traditionally, forecasters focus on developing algorithms to identify optimal models and sets of parameters, optimal in the sense of within-sample fitting. However, this quest strongly assumes that optimally set parameters will also give the best extrapolations. The problem becomes even more pertinent when we consider the vast volumes of data to be forecast in the big data era. In this paper, we ask whether this obsession with optimality always bears fruit, or whether we spend too much time and effort in its pursuit. Could we be better off targeting faster and more robust systems that aim for suboptimal forecasting solutions which, in turn, would not jeopardise the efficiency of the systems in use? This study sheds light on this question by means of an empirical investigation. We demonstrate the trade-off between optimal and suboptimal solutions in terms of forecasting performance versus computational cost. Finally, we discuss the implications of suboptimality and attempt to quantify the monetary savings that result from suboptimal solutions.",
author = "Konstantinos Nikolopoulos and Fotios Petropoulos",
year = "2018",
month = "10",
day = "1",
doi = "10.1016/j.cor.2017.05.007",
language = "English",
volume = "98",
pages = "322--329",
journal = "Computers and Operations Research",
issn = "0305-0548",
publisher = "Elsevier",

}

TY - JOUR

T1 - Forecasting for big data: does suboptimality matter?

AU - Nikolopoulos, Konstantinos

AU - Petropoulos, Fotios

PY - 2018/10/1

Y1 - 2018/10/1

N2 - Traditionally, forecasters focus on developing algorithms to identify optimal models and sets of parameters, optimal in the sense of within-sample fitting. However, this quest strongly assumes that optimally set parameters will also give the best extrapolations. The problem becomes even more pertinent when we consider the vast volumes of data to be forecast in the big data era. In this paper, we ask whether this obsession with optimality always bears fruit, or whether we spend too much time and effort in its pursuit. Could we be better off targeting faster and more robust systems that aim for suboptimal forecasting solutions which, in turn, would not jeopardise the efficiency of the systems in use? This study sheds light on this question by means of an empirical investigation. We demonstrate the trade-off between optimal and suboptimal solutions in terms of forecasting performance versus computational cost. Finally, we discuss the implications of suboptimality and attempt to quantify the monetary savings that result from suboptimal solutions.

U2 - 10.1016/j.cor.2017.05.007

DO - 10.1016/j.cor.2017.05.007

M3 - Article

VL - 98

SP - 322

EP - 329

JO - Computers and Operations Research

T2 - Computers and Operations Research

JF - Computers and Operations Research

SN - 0305-0548

ER -