Forecasting for big data: does suboptimality matter?

Konstantinos Nikolopoulos, Fotios Petropoulos

Research output: Contribution to journal › Article

Abstract

Traditionally, forecasters focus on the development of algorithms to identify optimal models and sets of parameters, optimal in the sense of within-sample fit. However, this quest strongly assumes that optimally set parameters will also give the best extrapolations. The problem becomes even more pertinent when we consider the vast volumes of data to be forecast in the big data era. In this paper, we ask whether this obsession with optimality always bears fruit, or whether we spend too much time and effort in its pursuit. Could we be better off targeting faster, more robust systems that aim for suboptimal forecasting solutions which, in turn, would not jeopardise the efficiency of the systems in use? This study sheds light on that question by means of an empirical investigation. We show the trade-off between optimal and suboptimal solutions in terms of forecasting performance versus computational cost. Finally, we discuss the implications of suboptimality and attempt to quantify the monetary savings that result from suboptimal solutions.
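
To make the optimal-versus-suboptimal trade-off concrete, the sketch below is a purely illustrative example and not the authors' experimental setup: it fits simple exponential smoothing to a batch of simulated series twice, once with the smoothing parameter optimised by an in-sample grid search and once fixed at an assumed conventional value of 0.3, and reports the out-of-sample error and wall-clock time of each approach.

# Minimal sketch (not the paper's experiment): simple exponential smoothing
# with an optimised versus a fixed smoothing parameter, comparing the
# computational cost and out-of-sample accuracy of the two approaches.
import random
import time

def ses_fit_and_forecast(series, alpha):
    """One-step-ahead SES; returns in-sample errors and the next-period forecast."""
    level = series[0]
    errors = []
    for y in series[1:]:
        errors.append(y - level)
        level = alpha * y + (1 - alpha) * level
    return errors, level  # 'level' is the forecast for the next period

def optimise_alpha(series):
    """Grid-search the alpha that minimises in-sample mean squared error."""
    best_alpha, best_mse = None, float("inf")
    for a in (i / 100 for i in range(1, 100)):
        errors, _ = ses_fit_and_forecast(series, a)
        mse = sum(e * e for e in errors) / len(errors)
        if mse < best_mse:
            best_alpha, best_mse = a, mse
    return best_alpha

# Simulate many short demand-like series, as in a big-data forecasting setting.
random.seed(1)
series_list = [[100 + random.gauss(0, 10) for _ in range(48)] for _ in range(2000)]

for label, choose_alpha in [("optimal", optimise_alpha),
                            ("suboptimal (alpha=0.3)", lambda s: 0.3)]:
    start, abs_err = time.perf_counter(), 0.0
    for s in series_list:
        train, actual = s[:-1], s[-1]          # hold out the last observation
        alpha = choose_alpha(train)
        _, forecast = ses_fit_and_forecast(train, alpha)
        abs_err += abs(actual - forecast)
    elapsed = time.perf_counter() - start
    print(f"{label:>22}: MAE={abs_err / len(series_list):.2f}, time={elapsed:.2f}s")

Under these assumptions the suboptimal run avoids the grid search entirely, so its computational cost is roughly two orders of magnitude lower, while the out-of-sample error on such well-behaved series is often comparable; the paper's empirical investigation examines this trade-off on real data.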
Original language: English
Journal: Computers and Operations Research
Early online date: 11 May 2017
DOI: 10.1016/j.cor.2017.05.007
State: E-pub ahead of print - 11 May 2017

Cite this

Forecasting for big data: does suboptimality matter? / Nikolopoulos, Konstantinos; Petropoulos, Fotios.

In: Computers and Operations Research, 11.05.2017.

Research output: Contribution to journal › Article

Nikolopoulos, Konstantinos; Petropoulos, Fotios / Forecasting for big data: does suboptimality matter?

In: Computers and Operations Research, 11.05.2017.

Research output: Contribution to journal › Article

@article{74bb53aca7484ab0aa0463b758bcce25,
title = "Forecasting for big data: does suboptimality matter?",
abstract = "Traditionally, forecasters focus on the development of algorithms to identify optimal models and sets of parameters, optimal in the sense of within-sample fit. However, this quest strongly assumes that optimally set parameters will also give the best extrapolations. The problem becomes even more pertinent when we consider the vast volumes of data to be forecast in the big data era. In this paper, we ask whether this obsession with optimality always bears fruit, or whether we spend too much time and effort in its pursuit. Could we be better off targeting faster, more robust systems that aim for suboptimal forecasting solutions which, in turn, would not jeopardise the efficiency of the systems in use? This study sheds light on that question by means of an empirical investigation. We show the trade-off between optimal and suboptimal solutions in terms of forecasting performance versus computational cost. Finally, we discuss the implications of suboptimality and attempt to quantify the monetary savings that result from suboptimal solutions.",
author = "Konstantinos Nikolopoulos and Fotios Petropoulos",
year = "2017",
month = "5",
doi = "10.1016/j.cor.2017.05.007",
journal = "Computers and Operations Research",
issn = "0305-0548",
publisher = "Elsevier",

}

TY - JOUR

T1 - Forecasting for big data: does suboptimality matter?

AU - Nikolopoulos, Konstantinos

AU - Petropoulos, Fotios

PY - 2017/5/11

Y1 - 2017/5/11

N2 - Traditionally, forecasters focus on the development of algorithms to identify optimal models and sets of parameters, optimal in the sense of within-sample fit. However, this quest strongly assumes that optimally set parameters will also give the best extrapolations. The problem becomes even more pertinent when we consider the vast volumes of data to be forecast in the big data era. In this paper, we ask whether this obsession with optimality always bears fruit, or whether we spend too much time and effort in its pursuit. Could we be better off targeting faster, more robust systems that aim for suboptimal forecasting solutions which, in turn, would not jeopardise the efficiency of the systems in use? This study sheds light on that question by means of an empirical investigation. We show the trade-off between optimal and suboptimal solutions in terms of forecasting performance versus computational cost. Finally, we discuss the implications of suboptimality and attempt to quantify the monetary savings that result from suboptimal solutions.

AB - Traditionally, forecasters focus on the development of algorithms to identify optimal models and sets of parameters, optimal in the sense of within-sample fit. However, this quest strongly assumes that optimally set parameters will also give the best extrapolations. The problem becomes even more pertinent when we consider the vast volumes of data to be forecast in the big data era. In this paper, we ask whether this obsession with optimality always bears fruit, or whether we spend too much time and effort in its pursuit. Could we be better off targeting faster, more robust systems that aim for suboptimal forecasting solutions which, in turn, would not jeopardise the efficiency of the systems in use? This study sheds light on that question by means of an empirical investigation. We show the trade-off between optimal and suboptimal solutions in terms of forecasting performance versus computational cost. Finally, we discuss the implications of suboptimality and attempt to quantify the monetary savings that result from suboptimal solutions.

U2 - 10.1016/j.cor.2017.05.007

DO - 10.1016/j.cor.2017.05.007

M3 - Article

JO - Computers and Operations Research

T2 - Computers and Operations Research

JF - Computers and Operations Research

SN - 0305-0548

ER -